#ceph IRC Log


IRC Log for 2016-05-04

Timestamps are in GMT/BST.

[0:06] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[0:08] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[0:09] * puffy (~puffy@c-71-198-18-187.hsd1.ca.comcast.net) has joined #ceph
[0:11] * csoukup (~csoukup@2605:a601:9c8:6b00:82a:4075:413f:7786) Quit (Ping timeout: 480 seconds)
[0:15] * Xylios (~notmyname@128.153.145.125) has joined #ceph
[0:17] * Kyso_ (~EdGruberm@76GAAE1D6.tor-irc.dnsbl.oftc.net) Quit ()
[0:25] * squizzi (~squizzi@107.13.31.195) Quit (Remote host closed the connection)
[0:27] * wwdillingham (~LobsterRo@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[0:31] * bene2 (~bene@2601:18c:8501:41d0:ea2a:eaff:fe08:3c7a) Quit (Quit: Konversation terminated!)
[0:43] * fsimonce (~simon@87.13.130.124) Quit (Quit: Coyote finally caught me)
[0:44] * Xylios (~notmyname@7V7AAEAZ0.tor-irc.dnsbl.oftc.net) Quit ()
[0:45] * Dysgalt (~SinZ|offl@ppn254.stwserver.net) has joined #ceph
[0:52] * blizzow (~jburns@50.243.148.102) Quit (Ping timeout: 480 seconds)
[0:52] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) Quit (Ping timeout: 480 seconds)
[0:56] * analbeard (~shw@host109-157-136-215.range109-157.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[1:00] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[1:04] * vata (~vata@207.96.182.162) Quit (Quit: Leaving.)
[1:08] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[1:11] * derjohn_mobi (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[1:13] * LeaChim (~LeaChim@host86-150-161-6.range86-150.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[1:14] * Dysgalt (~SinZ|offl@4MJAAENKJ.tor-irc.dnsbl.oftc.net) Quit ()
[1:21] * Kwen (~Solvius@freedom.ip-eend.nl) has joined #ceph
[1:28] * mattbenjamin (~mbenjamin@aa2.linuxbox.com) Quit (Ping timeout: 480 seconds)
[1:45] * ghostnote (~Uniju@tor02.zencurity.dk) has joined #ceph
[1:46] * derjohn_mobi (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[1:51] * Kwen (~Solvius@06SAAB1O8.tor-irc.dnsbl.oftc.net) Quit ()
[1:51] * colde2 (~clarjon1@216.218.134.12) has joined #ceph
[1:51] * puffy (~puffy@c-71-198-18-187.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[1:54] * rendar (~I@host150-180-dynamic.246-95-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[2:00] * oms101 (~oms101@p20030057EA084400C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[2:06] * shohn (~shohn@dslb-088-073-137-088.088.073.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[2:08] * oms101 (~oms101@p20030057EA070900C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[2:15] * ghostnote (~Uniju@6AGAABMHQ.tor-irc.dnsbl.oftc.net) Quit ()
[2:15] * Vale (~Bj_o_rn@tor-exit-node.seas.upenn.edu) has joined #ceph
[2:19] * th0m_ (~smuxi@static-qvn-qvu-164067.business.bouyguestelecom.com) Quit (Remote host closed the connection)
[2:19] * th0m (~smuxi@static-qvn-qvu-164067.business.bouyguestelecom.com) has joined #ceph
[2:21] * colde2 (~clarjon1@6AGAABMHZ.tor-irc.dnsbl.oftc.net) Quit ()
[2:21] * Nephyrin (~biGGer@192.42.116.16) has joined #ceph
[2:23] * huangjun|2 (~kvirc@113.57.168.154) has joined #ceph
[2:27] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[2:31] * xarses (~xarses@64.124.158.100) Quit (Ping timeout: 480 seconds)
[2:35] * th0m (~smuxi@static-qvn-qvu-164067.business.bouyguestelecom.com) Quit (Ping timeout: 480 seconds)
[2:43] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[2:44] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[2:44] * Vale (~Bj_o_rn@4MJAAENL8.tor-irc.dnsbl.oftc.net) Quit ()
[2:45] * brianjjo (~Heliwr@185.100.85.132) has joined #ceph
[2:51] * Nephyrin (~biGGer@6AGAABMIW.tor-irc.dnsbl.oftc.net) Quit ()
[2:51] * zc00gii (~Jebula@anonymous.sec.nl) has joined #ceph
[2:59] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[3:00] * th0m (~smuxi@static-qvn-qvu-164067.business.bouyguestelecom.com) has joined #ceph
[3:03] * csoukup (~csoukup@2605:a601:9c8:6b00:9f0:9642:435e:dd2c) has joined #ceph
[3:09] * Mika_c (~quassel@122.146.93.152) has joined #ceph
[3:12] * csoukup (~csoukup@2605:a601:9c8:6b00:9f0:9642:435e:dd2c) Quit (Ping timeout: 480 seconds)
[3:14] * brianjjo (~Heliwr@6AGAABMJL.tor-irc.dnsbl.oftc.net) Quit ()
[3:18] * atheism (~atheism@182.48.117.114) has joined #ceph
[3:21] * zc00gii (~Jebula@4MJAAENM2.tor-irc.dnsbl.oftc.net) Quit ()
[3:21] * skney1 (~lobstar@tor.les.net) has joined #ceph
[3:23] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:c876:1fb3:dcf6:668a) Quit (Ping timeout: 480 seconds)
[3:23] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:26] * yanzheng (~zhyan@118.116.113.70) has joined #ceph
[3:27] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[3:27] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[3:27] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[3:28] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[3:29] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:29] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[3:29] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[3:30] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:30] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[3:30] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[3:30] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[3:30] * georgem (~Adium@104-222-119-175.cpe.teksavvy.com) has joined #ceph
[3:30] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[3:30] * georgem (~Adium@104-222-119-175.cpe.teksavvy.com) Quit ()
[3:30] * georgem (~Adium@206.108.127.16) has joined #ceph
[3:31] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:32] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[3:33] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: No route to host)
[3:33] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:34] * wgao (~wgao@106.120.101.38) Quit (Ping timeout: 480 seconds)
[3:39] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[3:39] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:40] * EinstCr__ (~EinstCraz@58.247.119.250) has joined #ceph
[3:40] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[3:40] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[3:41] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:42] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) has joined #ceph
[3:43] * wgao (~wgao@106.120.101.38) has joined #ceph
[3:47] * zhaochao (~zhaochao@125.39.9.159) has joined #ceph
[3:48] * EinstCr__ (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[3:49] * nupanick (~Architect@tor.metaether.net) has joined #ceph
[3:50] * IvanJobs (~hardes@103.50.11.146) has joined #ceph
[3:51] * skney1 (~lobstar@7V7AAEA2K.tor-irc.dnsbl.oftc.net) Quit ()
[3:51] * Kurimus1 (~rapedex@4MJAAENN7.tor-irc.dnsbl.oftc.net) has joined #ceph
[3:56] * MentalRay (~MentalRay@107.171.161.165) has joined #ceph
[3:59] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[4:00] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[4:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[4:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[4:04] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[4:04] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) Quit (Remote host closed the connection)
[4:05] * bliu (~liub@203.192.156.9) Quit (Ping timeout: 480 seconds)
[4:12] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[4:13] * cholcombe (~chris@12.39.178.119) has joined #ceph
[4:19] * MentalRay (~MentalRay@107.171.161.165) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[4:19] * vata (~vata@cable-21.246.173-197.electronicbox.net) has joined #ceph
[4:19] * nupanick (~Architect@6AGAABMLX.tor-irc.dnsbl.oftc.net) Quit ()
[4:19] * Corti^carte (~Zeis@tor02.zencurity.dk) has joined #ceph
[4:21] * Kurimus1 (~rapedex@4MJAAENN7.tor-irc.dnsbl.oftc.net) Quit ()
[4:21] * Kayla (~zapu@06SAAB1VP.tor-irc.dnsbl.oftc.net) has joined #ceph
[4:22] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[4:23] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) has joined #ceph
[4:24] * kefu (~kefu@183.193.162.205) has joined #ceph
[4:25] * MentalRay (~MentalRay@107.171.161.165) has joined #ceph
[4:26] * flisky (~Thunderbi@36.110.40.21) has joined #ceph
[4:28] * jclm (~jclm@marriott-hotel-ottawa-yowmc.sites.intello.com) Quit (Ping timeout: 480 seconds)
[4:34] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) has joined #ceph
[4:36] * wgao (~wgao@106.120.101.38) Quit (Ping timeout: 480 seconds)
[4:43] * kefu_ (~kefu@114.92.122.74) has joined #ceph
[4:44] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[4:45] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[4:45] * wgao (~wgao@106.120.101.38) has joined #ceph
[4:48] * kefu (~kefu@183.193.162.205) Quit (Ping timeout: 480 seconds)
[4:49] * richardus1 (~Bored@orion.enn.lu) has joined #ceph
[4:49] * Corti^carte (~Zeis@4MJAAENOL.tor-irc.dnsbl.oftc.net) Quit ()
[4:51] * Kayla (~zapu@06SAAB1VP.tor-irc.dnsbl.oftc.net) Quit ()
[4:51] * blip2 (~pakman__@relay1.cavefelem.com) has joined #ceph
[4:56] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Remote host closed the connection)
[4:57] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[4:57] * Mika_c_ (~quassel@122.146.93.152) has joined #ceph
[4:57] * rakeshgm (~rakesh@106.51.26.7) Quit (Quit: Leaving)
[4:57] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[4:58] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[5:00] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[5:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[5:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[5:01] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[5:03] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[5:03] * vata (~vata@cable-21.246.173-197.electronicbox.net) Quit (Quit: Leaving.)
[5:03] * Mika_c (~quassel@122.146.93.152) Quit (Ping timeout: 480 seconds)
[5:05] <ronrib> what's the deal with ceph and ROCE? https://github.com/ceph/ceph/blob/master/README.xio
[5:06] <ronrib> looks like there's a handful of guides that deal with compiling accelio, just wondering how maintainable it is at the moment
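For context, README.xio describes the experimental Accelio (XIO) messenger for RDMA/RoCE transports. A minimal sketch of trying it, assuming the autotools-era --enable-xio build flag and the experimental ms_type option from that README (treat README.xio itself as authoritative):

    # build against a locally installed Accelio (assumes accelio headers/libs are present)
    ./configure --enable-xio
    make
    # then, in ceph.conf, switch the messenger type (experimental; the type
    # generally has to match across all daemons and clients):
    #   [global]
    #   ms_type = xio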
[5:07] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[5:07] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) Quit (Remote host closed the connection)
[5:08] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[5:08] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[5:08] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[5:09] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[5:09] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[5:10] * mnathani2 (~mnathani_@192-0-149-228.cpe.teksavvy.com) Quit ()
[5:12] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[5:15] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[5:15] * bliu (~liub@203.192.156.9) has joined #ceph
[5:16] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[5:17] * EinstCr__ (~EinstCraz@58.247.119.250) has joined #ceph
[5:18] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: No route to host)
[5:18] * EinstCr__ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[5:18] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[5:19] * richardus1 (~Bored@6AGAABMN1.tor-irc.dnsbl.oftc.net) Quit ()
[5:19] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[5:19] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[5:19] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[5:20] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[5:20] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[5:20] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[5:20] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[5:21] * blip2 (~pakman__@7V7AAEA3S.tor-irc.dnsbl.oftc.net) Quit ()
[5:24] * cholcombe (~chris@12.39.178.119) Quit (Ping timeout: 480 seconds)
[5:24] * MentalRay (~MentalRay@107.171.161.165) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[5:27] * shaunm (~shaunm@74.83.215.100) Quit (Ping timeout: 480 seconds)
[5:28] * Vacuum__ (~Vacuum@i59F790E2.versanet.de) has joined #ceph
[5:30] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[5:31] * deepthi (~deepthi@115.118.215.242) has joined #ceph
[5:35] * Vacuum_ (~Vacuum@88.130.214.133) Quit (Ping timeout: 480 seconds)
[5:38] * cholcombe (~chris@12.39.178.119) has joined #ceph
[5:39] * geli (~geli@geli-2015.its.utas.edu.au) has joined #ceph
[5:41] * MentalRay (~MentalRay@107.171.161.165) has joined #ceph
[5:42] * MentalRay (~MentalRay@107.171.161.165) Quit ()
[5:54] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[5:56] * PcJamesy (~Peaced@06SAAB1YG.tor-irc.dnsbl.oftc.net) has joined #ceph
[5:56] * khyron (~khyron@fixed-190-159-187-190-159-75.iusacell.net) has joined #ceph
[5:58] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[6:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[6:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[6:03] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[6:04] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[6:05] * cholcombe (~chris@12.39.178.119) Quit (Ping timeout: 480 seconds)
[6:05] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[6:06] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[6:07] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[6:07] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:07] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:08] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[6:08] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:08] * shohn (~shohn@dslb-092-078-031-004.092.078.pools.vodafone-ip.de) has joined #ceph
[6:08] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[6:08] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) has joined #ceph
[6:09] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[6:09] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:10] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[6:10] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:10] * mnathani2 (~mnathani_@192-0-149-228.cpe.teksavvy.com) has joined #ceph
[6:10] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:11] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[6:11] * mnathani2 (~mnathani_@192-0-149-228.cpe.teksavvy.com) Quit ()
[6:11] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:11] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[6:12] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[6:12] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:12] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[6:12] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:13] * scuttlemonkey is now known as scuttle|afk
[6:13] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:13] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[6:14] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[6:14] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:14] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[6:14] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:16] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[6:16] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:16] * overclk (~quassel@121.244.87.117) has joined #ceph
[6:16] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:17] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[6:17] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[6:17] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:19] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[6:19] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Read error: Connection reset by peer)
[6:19] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:20] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[6:20] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[6:20] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:20] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:21] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[6:21] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[6:21] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:21] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[6:21] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:22] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[6:22] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:23] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[6:23] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:24] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[6:24] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:24] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:24] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[6:24] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[6:25] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:25] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[6:25] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:26] * PcJamesy (~Peaced@06SAAB1YG.tor-irc.dnsbl.oftc.net) Quit ()
[6:26] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:26] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[6:27] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[6:28] * EinstCr__ (~EinstCraz@58.247.119.250) has joined #ceph
[6:28] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:28] * Bwana (~Coestar@ns330209.ip-5-196-66.eu) has joined #ceph
[6:28] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) Quit (Remote host closed the connection)
[6:28] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[6:29] * karnan (~karnan@121.244.87.117) has joined #ceph
[6:30] * EinstC___ (~EinstCraz@58.247.119.250) has joined #ceph
[6:30] * Einst____ (~EinstCraz@58.247.119.250) has joined #ceph
[6:30] * EinstC___ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:31] * hr__ (~hardes@103.50.11.146) has joined #ceph
[6:31] * Einst____ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:31] * EinstC___ (~EinstCraz@58.247.119.250) has joined #ceph
[6:31] * cmorandin (~cmorandin@boc06-4-78-216-15-170.fbx.proxad.net) has joined #ceph
[6:32] * Einst____ (~EinstCraz@58.247.119.250) has joined #ceph
[6:32] * EinstC___ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:32] * EinstC___ (~EinstCraz@58.247.119.250) has joined #ceph
[6:32] * Einst____ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:32] * IvanJobs (~hardes@103.50.11.146) Quit (Read error: Connection reset by peer)
[6:33] * EinstC___ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:33] * EinstC___ (~EinstCraz@58.247.119.250) has joined #ceph
[6:35] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[6:35] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[6:35] * EinstC___ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:35] * EinstC___ (~EinstCraz@58.247.119.250) has joined #ceph
[6:36] * EinstCr__ (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[6:36] * EinstC___ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:36] * drnexus (~cmorandin@boc06-4-78-216-15-170.fbx.proxad.net) has joined #ceph
[6:37] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[6:37] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[6:37] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: No route to host)
[6:37] * EinstCr__ (~EinstCraz@58.247.119.250) has joined #ceph
[6:38] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[6:38] * EinstCr__ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:38] * EinstCr__ (~EinstCraz@58.247.119.250) has joined #ceph
[6:38] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:38] * EinstCr__ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:39] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[6:41] * EinstCr__ (~EinstCraz@58.247.119.250) has joined #ceph
[6:41] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:41] * erwan_taf (~erwan@37.161.38.139) Quit (Ping timeout: 480 seconds)
[6:41] * EinstCr__ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:41] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[6:42] * EinstCr__ (~EinstCraz@58.247.119.250) has joined #ceph
[6:42] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:43] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[6:43] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:43] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[6:43] * EinstCr__ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:43] * Skaag (~lunix@rrcs-67-52-140-5.west.biz.rr.com) Quit (Ping timeout: 480 seconds)
[6:43] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[6:44] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[6:44] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:45] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[6:45] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:45] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[6:45] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:46] * drnexus (~cmorandin@boc06-4-78-216-15-170.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[6:46] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:46] * cmorandin (~cmorandin@boc06-4-78-216-15-170.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[6:46] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[6:47] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[6:47] * EinstCr__ (~EinstCraz@58.247.119.250) has joined #ceph
[6:47] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:48] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[6:48] * EinstCr__ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:48] * EinstCr__ (~EinstCraz@58.247.119.250) has joined #ceph
[6:49] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:49] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[6:50] * EinstC___ (~EinstCraz@58.247.119.250) has joined #ceph
[6:50] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:51] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[6:51] * EinstC___ (~EinstCraz@58.247.119.250) Quit (Read error: No route to host)
[6:51] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:51] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[6:51] * EinstC___ (~EinstCraz@58.247.119.250) has joined #ceph
[6:52] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:52] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[6:52] * kefu_ is now known as kefu|afk
[6:52] * EinstC___ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:53] * EinstC___ (~EinstCraz@58.247.119.250) has joined #ceph
[6:53] * EinstC___ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:53] * EinstC___ (~EinstCraz@58.247.119.250) has joined #ceph
[6:54] * Einst____ (~EinstCraz@58.247.119.250) has joined #ceph
[6:54] * EinstC___ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:54] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[6:54] * Einst____ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:55] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[6:56] * dux0r (~anadrom@tor-exit.dhalgren.org) has joined #ceph
[6:56] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:56] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[6:56] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:57] * EinstCr__ (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[6:57] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[6:57] * EinstCr__ (~EinstCraz@58.247.119.250) has joined #ceph
[6:57] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:58] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[6:58] * EinstCr__ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:58] * Bwana (~Coestar@76GAAE1KX.tor-irc.dnsbl.oftc.net) Quit ()
[6:58] * Random (~CobraKhan@exit1.ipredator.se) has joined #ceph
[6:58] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[6:58] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[6:59] * EinstCr__ (~EinstCraz@58.247.119.250) has joined #ceph
[7:00] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[7:00] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[7:00] * EinstCr__ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[7:00] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[7:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[7:01] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[7:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[7:01] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[7:01] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[7:02] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[7:02] * codice (~toodles@75-128-34-237.static.mtpk.ca.charter.com) Quit (Quit: leaving)
[7:02] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[7:02] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[7:03] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[7:03] * kefu|afk (~kefu@114.92.122.74) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[7:03] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[7:03] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Ping timeout: 480 seconds)
[7:03] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[7:05] * kefu (~kefu@183.193.162.205) has joined #ceph
[7:05] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[7:05] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[7:07] * EinstCr__ (~EinstCraz@58.247.119.250) has joined #ceph
[7:07] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[7:07] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[7:07] * EinstCr__ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[7:07] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[7:10] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[7:13] * rdas (~rdas@121.244.87.116) has joined #ceph
[7:18] * kefu_ (~kefu@114.92.122.74) has joined #ceph
[7:20] * kefu (~kefu@183.193.162.205) Quit (Ping timeout: 480 seconds)
[7:21] * classicsnail (~David@2600:3c01::f03c:91ff:fe96:d3c0) has joined #ceph
[7:23] * kefu_ is now known as kefu
[7:24] * classicsnail (~David@2600:3c01::f03c:91ff:fe96:d3c0) Quit ()
[7:24] * classicsnail (~David@2600:3c01::f03c:91ff:fe96:d3c0) has joined #ceph
[7:26] * dux0r (~anadrom@6AGAABMR7.tor-irc.dnsbl.oftc.net) Quit ()
[7:26] * classicsnail (~David@2600:3c01::f03c:91ff:fe96:d3c0) has left #ceph
[7:26] * Sliker (~SweetGirl@76GAAE1L3.tor-irc.dnsbl.oftc.net) has joined #ceph
[7:26] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Read error: Connection reset by peer)
[7:28] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[7:28] * wwdillingham (~LobsterRo@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wwdillingham)
[7:28] * Random (~CobraKhan@06SAAB1Z4.tor-irc.dnsbl.oftc.net) Quit ()
[7:31] * rendar (~I@95.238.176.76) has joined #ceph
[7:33] * Bj_o_rn (~Uniju@chomsky.torservers.net) has joined #ceph
[7:34] * Kurt (~Adium@2001:628:1:5:a116:4b84:8ed0:cd45) Quit (Read error: Connection reset by peer)
[7:36] * Kurt (~Adium@2001:628:1:5:f15f:5930:6325:dee7) has joined #ceph
[7:46] * rraja (~rraja@121.244.87.117) has joined #ceph
[7:56] * Sliker (~SweetGirl@76GAAE1L3.tor-irc.dnsbl.oftc.net) Quit ()
[7:56] * Jamana (~Teddybare@ns316491.ip-37-187-129.eu) has joined #ceph
[7:56] * Mika_c (~quassel@122.146.93.152) has joined #ceph
[8:00] * kefu (~kefu@114.92.122.74) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * yanzheng (~zhyan@118.116.113.70) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * wushudoin (~wushudoin@38.99.12.237) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * batrick (~batrick@2600:3c00::f03c:91ff:fe96:477b) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * Tene (~tene@173.13.139.236) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * ccourtaut (~ccourtaut@178.62.125.124) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * sw3 (sweaung@2400:6180:0:d0::66:100f) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * _nick (~nick@zarquon.dischord.org) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * benner (~benner@188.166.111.206) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * gtrott (sid78444@id-78444.tooting.irccloud.com) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * sig_wall_ (adjkru@xn--hwgz2tba.lamo.su) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * essjayhch (sid79416@id-79416.highgate.irccloud.com) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * joelio (~joel@81.4.101.217) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * tacodog40k (~tacodog@dev.host) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * rossdylan (rossdylan@losna.helixoide.com) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * magicrobotmonkey (~magicrobo@8.29.8.68) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * Gugge-47527 (gugge@92.246.2.105) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * shaon (~shaon@shaon.me) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * Karcaw (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * smiley_ (~smiley@205.153.36.170) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * PoRNo-MoRoZ (~hp1ng@mail.ap-team.ru) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * lookcrabs (~lookcrabs@tail.seeee.us) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * bstillwell (~bryan@bokeoa.com) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * rektide (~rektide@eldergods.com) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * masterpe (~masterpe@2a01:670:400::43) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * carter (~carter@li98-136.members.linode.com) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * jnq (sid150909@0001b7cc.user.oftc.net) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * braderhart (sid124863@braderhart.user.oftc.net) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * andrewschoen (~andrewsch@50.56.86.195) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * dis (~dis@00018d20.user.oftc.net) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * Sirenia (~sirenia@454028b1.test.dnsbl.oftc.net) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * davidzlap (~Adium@2605:e000:1313:8003:90f5:10a4:d675:6c9d) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * joshd (~jdurgin@206.169.83.146) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * df (~defari@digital.el8.net) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * jluis (~joao@8.184.114.89.rev.vodafone.pt) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * funnel (~funnel@81.4.123.134) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * jmn (~jmn@nat-pool-bos-t.redhat.com) Quit (synthon.oftc.net beauty.oftc.net)
[8:00] * kefu (~kefu@114.92.122.74) has joined #ceph
[8:00] * yanzheng (~zhyan@118.116.113.70) has joined #ceph
[8:00] * wushudoin (~wushudoin@38.99.12.237) has joined #ceph
[8:00] * batrick (~batrick@2600:3c00::f03c:91ff:fe96:477b) has joined #ceph
[8:00] * Tene (~tene@173.13.139.236) has joined #ceph
[8:00] * bstillwell (~bryan@bokeoa.com) has joined #ceph
[8:00] * joshd (~jdurgin@206.169.83.146) has joined #ceph
[8:00] * jluis (~joao@8.184.114.89.rev.vodafone.pt) has joined #ceph
[8:00] * davidzlap (~Adium@2605:e000:1313:8003:90f5:10a4:d675:6c9d) has joined #ceph
[8:00] * lookcrabs (~lookcrabs@tail.seeee.us) has joined #ceph
[8:00] * PoRNo-MoRoZ (~hp1ng@mail.ap-team.ru) has joined #ceph
[8:00] * jmn (~jmn@nat-pool-bos-t.redhat.com) has joined #ceph
[8:00] * Sirenia (~sirenia@454028b1.test.dnsbl.oftc.net) has joined #ceph
[8:00] * funnel (~funnel@81.4.123.134) has joined #ceph
[8:00] * smiley_ (~smiley@205.153.36.170) has joined #ceph
[8:00] * rektide (~rektide@eldergods.com) has joined #ceph
[8:00] * Karcaw (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) has joined #ceph
[8:00] * shaon (~shaon@shaon.me) has joined #ceph
[8:00] * Gugge-47527 (gugge@92.246.2.105) has joined #ceph
[8:00] * masterpe (~masterpe@2a01:670:400::43) has joined #ceph
[8:00] * magicrobotmonkey (~magicrobo@8.29.8.68) has joined #ceph
[8:00] * dis (~dis@00018d20.user.oftc.net) has joined #ceph
[8:00] * jnq (sid150909@0001b7cc.user.oftc.net) has joined #ceph
[8:00] * rossdylan (rossdylan@losna.helixoide.com) has joined #ceph
[8:00] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[8:00] * tacodog40k (~tacodog@dev.host) has joined #ceph
[8:00] * joelio (~joel@81.4.101.217) has joined #ceph
[8:00] * essjayhch (sid79416@id-79416.highgate.irccloud.com) has joined #ceph
[8:00] * df (~defari@digital.el8.net) has joined #ceph
[8:00] * sig_wall_ (adjkru@xn--hwgz2tba.lamo.su) has joined #ceph
[8:00] * gtrott (sid78444@id-78444.tooting.irccloud.com) has joined #ceph
[8:00] * carter (~carter@li98-136.members.linode.com) has joined #ceph
[8:00] * benner (~benner@188.166.111.206) has joined #ceph
[8:00] * braderhart (sid124863@braderhart.user.oftc.net) has joined #ceph
[8:00] * _nick (~nick@zarquon.dischord.org) has joined #ceph
[8:00] * sw3 (sweaung@2400:6180:0:d0::66:100f) has joined #ceph
[8:00] * andrewschoen (~andrewsch@50.56.86.195) has joined #ceph
[8:00] * ccourtaut (~ccourtaut@178.62.125.124) has joined #ceph
[8:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[8:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[8:02] * Mika_c_ (~quassel@122.146.93.152) Quit (Ping timeout: 480 seconds)
[8:03] * Bj_o_rn (~Uniju@4MJAAENRK.tor-irc.dnsbl.oftc.net) Quit ()
[8:03] * Thononain (~WedTM@exit1.torproxy.org) has joined #ceph
[8:08] * Mika_c_ (~quassel@122.146.93.152) has joined #ceph
[8:08] * branto (~branto@nat-pool-brq-t.redhat.com) has joined #ceph
[8:12] * geli (~geli@geli-2015.its.utas.edu.au) has left #ceph
[8:12] * geli (~geli@geli-2015.its.utas.edu.au) has joined #ceph
[8:14] * Mika_c__ (~quassel@122.146.93.152) has joined #ceph
[8:15] * Mika_c (~quassel@122.146.93.152) Quit (Ping timeout: 480 seconds)
[8:17] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[8:17] * deepthi (~deepthi@115.118.215.242) Quit (Ping timeout: 480 seconds)
[8:20] * The1_ (~the_one@87.104.212.66) Quit (Ping timeout: 480 seconds)
[8:21] * Mika_c_ (~quassel@122.146.93.152) Quit (Ping timeout: 480 seconds)
[8:24] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[8:26] * Jamana (~Teddybare@6AGAABMT0.tor-irc.dnsbl.oftc.net) Quit ()
[8:26] * deepthi (~deepthi@106.206.140.137) has joined #ceph
[8:33] * Thononain (~WedTM@7V7AAEA5T.tor-irc.dnsbl.oftc.net) Quit ()
[8:33] * lobstar (~Shnaw@tor2r.ins.tor.net.eu.org) has joined #ceph
[8:33] * shylesh (~shylesh@121.244.87.118) has joined #ceph
[8:38] * cmorandin (~cmorandin@boc06-4-78-216-15-170.fbx.proxad.net) has joined #ceph
[8:44] * Hemanth (~hkumar_@121.244.87.117) has joined #ceph
[8:46] * cmorandin (~cmorandin@boc06-4-78-216-15-170.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[8:46] * dugravot6 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) has joined #ceph
[8:51] * garphy is now known as garphy`aw
[8:53] * garphy`aw is now known as garphy
[8:54] * owasserm (~owasserm@bzq-79-178-117-203.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[8:54] * dugravot6 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[8:56] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[8:56] * AotC (~utugi____@06SAAB13R.tor-irc.dnsbl.oftc.net) has joined #ceph
[8:57] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[9:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[9:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[9:01] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[9:02] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) has joined #ceph
[9:03] * lobstar (~Shnaw@4MJAAENSM.tor-irc.dnsbl.oftc.net) Quit ()
[9:03] * Kakeru (~straterra@tor-exit7-readme.dfri.se) has joined #ceph
[9:03] * erwan_taf (~erwan@46.231.131.178) has joined #ceph
[9:05] * bliu (~liub@203.192.156.9) Quit (Quit: Leaving)
[9:11] * analbeard (~shw@support.memset.com) has joined #ceph
[9:13] * flisky (~Thunderbi@36.110.40.21) Quit (Read error: Connection reset by peer)
[9:13] * allaok (~allaok@161.106.4.5) has joined #ceph
[9:16] * allaok (~allaok@161.106.4.5) Quit ()
[9:16] * allaok (~allaok@161.106.4.5) has joined #ceph
[9:17] * flisky (~Thunderbi@36.110.40.30) has joined #ceph
[9:21] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[9:22] * owasserm (~owasserm@bzq-82-81-161-51.red.bezeqint.net) has joined #ceph
[9:23] * pabluk__ is now known as pabluk_
[9:23] * allaok (~allaok@161.106.4.5) Quit (Quit: Leaving.)
[9:23] * allaok (~allaok@161.106.4.5) has joined #ceph
[9:24] * fsimonce (~simon@87.13.130.124) has joined #ceph
[9:26] * AotC (~utugi____@06SAAB13R.tor-irc.dnsbl.oftc.net) Quit ()
[9:26] * verbalins (~Drezil@torsrvu.snydernet.net) has joined #ceph
[9:26] * derjohn_mobi (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[9:29] * Lokta (~Lokta@carbon.coe.int) has joined #ceph
[9:32] * allaok (~allaok@161.106.4.5) Quit (Quit: Leaving.)
[9:32] * owasserm (~owasserm@bzq-82-81-161-51.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[9:32] * allaok (~allaok@161.106.4.5) has joined #ceph
[9:33] * Kakeru (~straterra@06SAAB130.tor-irc.dnsbl.oftc.net) Quit ()
[9:33] * Shadow386 (~richardus@lumumba.torservers.net) has joined #ceph
[9:36] * nass51 (~fred@dn-infra-12.lionnois.site.univ-lorraine.fr) has joined #ceph
[9:36] * nass5 (~fred@dn-infra-12.lionnois.site.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[9:36] * vincepii (~textual@77.245.22.78) has joined #ceph
[9:37] <Lokta> Hello everyone ! Ceph noob here :) I'm trying to build a small CephFS cluster for PoC purposes, and despite the healthy state of the cluster I keep getting a "mount error 5" on mount, and ceph-fuse doesn't answer.
[9:37] <Lokta> I have 4 servers
[9:37] <vincepii> Hi all! Question on ceph-docker: is anyone using it in production (or is it recommended), or is the project meant more for testing and development?
[9:38] <Lokta> 1 is admin+ client, 2 is mon + osd, 3 is mds + osd, 4 is just osd
[9:41] <Be-El> Lokta: does ceph -s list the mds server?
[9:42] <Lokta> fsmap e2: 0/0/1 up
[9:43] * PavelK (~oftc-webi@216.69.244.82) has joined #ceph
[9:43] <PavelK> Hello guys
[9:43] <Lokta> if I ps on the server i can see /usr/bin/ceph-mds running
[9:43] <PavelK> Can somebody help me with cephfs
[9:43] <Lokta> hi !
[9:43] <PavelK> ?
[9:44] <Lokta> note : i'm running debian jessie
[9:44] <PavelK> i'm running CoreOS :)
[9:44] * derjohn_mob (~aj@46.189.28.56) has joined #ceph
[9:45] <PavelK> getting strange behaviour: when I try to mount the newly created cephfs as the admin user I get a permission denied error. mount.ceph 10.1.3.11:/ /mnt/mycephfs/ -o name=admin,secretfile=/etc/ceph/admin.secret fails with: mount error 13 = Permission denied
[9:46] <PavelK> any help would be appreciated; where am I going wrong?
[9:46] <Be-El> Lokta: 0/0/1 means that the system knows about the mds, but the mds is not running or recognized by ceph
[9:47] <Be-El> Lokta: have a look at the mds log file (should be located in /var/log/ceph on the mds host)
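As a sketch, the monitors' view of the MDS can be checked with the jewel-era commands:

    ceph mds stat    # compact fsmap summary, e.g. "e2: 0/0/1 up"
    ceph mds dump    # full MDS map: which daemons are up, standby, or laggy
    ceph -s          # cluster status; its fsmap line mirrors "mds stat"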
[9:47] <Lokta> mmh there is no such file there
[9:47] * nass5 (~fred@dn-infra-12.lionnois.site.univ-lorraine.fr) has joined #ceph
[9:47] * nass51 (~fred@dn-infra-12.lionnois.site.univ-lorraine.fr) Quit (Read error: Connection reset by peer)
[9:48] <Lokta> PavelK: does ceph auth list show the same key you are using?
[9:48] <PavelK> yes, i got admin secret from there
[9:49] <Be-El> PavelK: does the secret file contain just the plain key or the complete keyring?
[9:49] <PavelK> as per documentation
[9:49] <PavelK> plain key
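For reference, a plain-key secretfile can be generated straight from the cluster; a sketch assuming the client.admin entity:

    # write only the base64 key, with no "key =" prefix and no [client.admin] header
    ceph auth get-key client.admin > /etc/ceph/admin.secret
    chmod 600 /etc/ceph/admin.secret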
[9:49] <Be-El> PavelK: any messages in the kernel log?
[9:49] * nass51 (~fred@dn-infra-12.lionnois.site.univ-lorraine.fr) has joined #ceph
[9:49] <PavelK> it's CoreOS; one message:
[9:49] * nass5 (~fred@dn-infra-12.lionnois.site.univ-lorraine.fr) Quit ()
[9:49] <Be-El> Lokta: in that case your mds is not running correctly. try to restart it
[9:50] <PavelK> [429927.163924] libceph: mon0 10.1.3.11:6789 session established
[9:50] <PavelK> and nothing happens
[9:50] <PavelK> just permission denied error
[9:50] * nass51 (~fred@dn-infra-12.lionnois.site.univ-lorraine.fr) Quit ()
[9:51] <Be-El> PavelK: can you try it with ceph-fuse (to single out problems with the mds/ceph side)
[9:51] <PavelK> Be-El: ceph-fuse[25904]: starting ceph client ceph-fuse[25904]: ceph mount failed with (1) Operation not permitted ceph-fuse[25902]: mount failed: (1) Operation not permitted
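For anyone following along, a typical ceph-fuse invocation that bypasses the kernel client; the monitor address and user are assumptions taken from the paste above:

    # mount via FUSE as client.admin against an explicit monitor
    ceph-fuse -m 10.1.3.11:6789 --id admin /mnt/mycephfs
    # clean up a stuck FUSE mount
    fusermount -u /mnt/mycephfs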
[9:53] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[9:54] * nass5 (~fred@dn-infra-12.lionnois.site.univ-lorraine.fr) has joined #ceph
[9:54] <Be-El> PavelK: any hints in the mds log?
[9:55] * nass5 (~fred@dn-infra-12.lionnois.site.univ-lorraine.fr) Quit ()
[9:56] * verbalins (~Drezil@6AGAABMV1.tor-irc.dnsbl.oftc.net) Quit ()
[9:56] * Epi (~tritonx@188.214.93.129) has joined #ceph
[9:56] * nass5 (~fred@dn-infra-12.lionnois.site.univ-lorraine.fr) has joined #ceph
[9:57] * allaok (~allaok@161.106.4.5) has left #ceph
[9:57] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[9:57] <PavelK> mds.0.server handle_client_session forbidden path claimed as mount root: / by
[9:59] * b0e (~aledermue@213.95.25.82) has joined #ceph
[10:00] <Be-El> PavelK: do you use the jewel release? and if that is the case, does the admin key contain the privileges for the cephfs root path?
[10:00] <PavelK> yes, i'm using ceph 10.2
[10:00] <Lokta> I restarted the entire cluster via /etc/init.d/ceph stop then start
[10:00] <PavelK> it contains allow *
[10:00] <PavelK> no paths specified there
[10:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[10:01] <Lokta> since -a didn't work, and the mds didn't start; no log at all
[10:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[10:02] <Lokta> ceph -s shows the same output as before but now status shows every node active instead of dead
[10:02] <Lokta> if i ps i can't see the mds anymore
[10:03] * Shadow386 (~richardus@06SAAB148.tor-irc.dnsbl.oftc.net) Quit ()
[10:03] * rcfighter (~Dinnerbon@76GAAE1O4.tor-irc.dnsbl.oftc.net) has joined #ceph
[10:04] <Be-El> PavelK: see http://docs.ceph.com/docs/master/cephfs/client-auth/#path-restriction I haven't used jewel yet, so I don't know whether the path restrictions are mandatory
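A sketch of inspecting and resetting caps on jewel, assuming the client.admin entity; the path-restricted form follows the example on the linked page:

    ceph auth get client.admin       # inspect the caps the key actually carries
    # full access (what a stock admin key normally has)
    ceph auth caps client.admin mon 'allow *' osd 'allow *' mds 'allow *'
    # or a path-restricted client, per the client-auth doc
    ceph auth get-or-create client.foo mon 'allow r' mds 'allow r, allow rw path=/bar' osd 'allow rw pool=data'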
[10:04] <Be-El> Lokta: and still no logfile for the mds daemon?
[10:04] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[10:04] <Lokta> nope
[10:05] <Lokta> i tried starting it via /etc/init.d/ceph start mds
[10:05] <PavelK> yeah, i looked at it, thank you, will try to figure it out
[10:06] * The1w (~jens@node3.survey-it.dk) has joined #ceph
[10:06] <Lokta> and without mds specified, which should start all services on the host if i understood correctly
[10:06] <PavelK> Lokta: please try to start it as ceph-mds -d -i <mds_ID>
[10:06] * DanFoster (~Daniel@office.34sp.com) has joined #ceph
[10:06] <PavelK> it will print debug messages to stdout
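If the default output is too quiet, debug levels can be raised on the same foreground run; a sketch reusing the <mds_ID> placeholder (levels are illustrative):

    # -d = run in the foreground and send logs to stderr
    ceph-mds -d -i <mds_ID> --debug-mds 10 --debug-ms 1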
[10:06] * nass5 (~fred@dn-infra-12.lionnois.site.univ-lorraine.fr) Quit (Quit: Leaving.)
[10:06] * Concubidated1 (~cube@c-50-173-245-118.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[10:07] * nass5 (~fred@dn-infra-12.lionnois.site.univ-lorraine.fr) has joined #ceph
[10:07] <Lokta> the mds id is the same as in auth list ?
[10:07] <Lokta> for example I have osd.[0-1-2] and mds.<hostname>
[10:08] <PavelK> yes, i have something like that
[10:08] <PavelK> ceph-mds -i ceph-mds
[10:08] <PavelK> where ceph-mds is the ID of my mds instance
[10:08] <PavelK> ps -ef | grep ceph-mds should show you the current ID of your instance
[10:10] <Lokta> so in my case it's the hostname
[10:10] <Lokta> ceph -s shows 1/1/1 up
[10:10] <Lokta> last line says active_start
[10:11] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[10:11] <Lokta> still no log file though
[10:12] <PavelK> it seems to be up
[10:13] <PavelK> would you please clarify what issue you are running into?
[10:13] * T1 (~the_one@87.104.212.66) has joined #ceph
[10:14] <Lokta> the mount command is timing out
[10:15] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[10:16] <PavelK> got the same problem, can you telnet between all nodes to actual ports ?
[10:16] <Lokta> i'm using the command "mount -t ceph <mon-address>:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/key" the secret file only contains the key (without the "key =")
[10:18] <Lokta> if i telnet the mon:6789 i get something like "ceph v027 " and some char my term can't print
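As a cross-check of the quoted mount command, the key can also be passed inline and the kernel client's complaints read from dmesg; a sketch reusing the <mon-address> placeholder (note an inline secret is visible in ps output):

    mount -t ceph <mon-address>:6789:/ /mnt/cephfs -o name=admin,secret=$(cat /etc/ceph/key)
    dmesg | tail    # libceph/ceph lines usually explain a mount error 5 or 13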
[10:18] <PavelK> can you paste error here
[10:18] <Lokta> before it was mount error 5
[10:18] <Lokta> right now i'm just waiting, no error atm but no mount neither
[10:19] <Lokta> *either ? my english sucks ><
[10:19] <PavelK> Lol, the same here
[10:19] <PavelK> with english i mean. but actually for ceph-fuse nothing happens either; it seems to be stuck
[10:19] * vincepii (~textual@77.245.22.78) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[10:20] <PavelK> and i can't kill it
[10:20] <Lokta> it's been a good 5 min and the mount command is still running, should i kill it and try fuse ?
[10:20] <Lokta> i had the same issue yesterday with fuse
[10:20] <Lokta> mount gave me an error and fuse just didn't answer anymore
[10:20] <PavelK> not sure it will help, i'm trying to deal with that
[10:21] <PavelK> probably the same issue that i'm facing
[10:21] <Lokta> mmmh
[10:21] <Lokta> my server isn't answering anymore
[10:21] <Lokta> brb gotta go in server room
[10:22] <Lokta> i'm using linux 3.16.0-4 maybe that's an issue
[10:24] * b0e (~aledermue@213.95.25.82) Quit (Ping timeout: 480 seconds)
[10:24] <PavelK> 4.3.6-coreos not sure, i'm using 4.X kernel
[10:25] <Lokta> server died
[10:26] * Epi (~tritonx@4MJAAENUD.tor-irc.dnsbl.oftc.net) Quit ()
[10:26] <Lokta> i have a huge stack trace
[10:26] * n0x1d (~Shesh@192.42.116.16) has joined #ceph
[10:26] <Lokta> i will use a backports kernel
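For the record, a sketch of pulling the jessie-backports kernel on Debian jessie (the mirror URL is an assumption, use your own):

    echo 'deb http://http.debian.net/debian jessie-backports main' > /etc/apt/sources.list.d/backports.list
    apt-get update
    apt-get -t jessie-backports install linux-image-amd64
    reboot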
[10:26] * ade (~abradshaw@85.158.226.30) has joined #ceph
[10:27] * Concubidated (~cube@c-50-173-245-118.hsd1.ca.comcast.net) has joined #ceph
[10:28] <Lokta> only the client died, the cluster is still working
[10:28] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[10:30] * b0e (~aledermue@213.95.25.82) has joined #ceph
[10:33] * rcfighter (~Dinnerbon@76GAAE1O4.tor-irc.dnsbl.oftc.net) Quit ()
[10:34] * _28_ria (~kvirc@opfr028.ru) has joined #ceph
[10:35] <PavelK> hm, but it seems i lost the node entirely
[10:35] <Lokta> rebooted the server and now fuse works
[10:35] <Lokta> i guess the kernel version is an issue for the module then ?
[10:36] <PavelK> cool, will try the same, are you running bare-metal ceph? or virtualized ?
[10:36] <Lokta> bare
[10:36] <Lokta> i have 4 HP proliant g7 for my tests
[10:38] * vincepii (~textual@77.245.22.78) has joined #ceph
[10:39] * cmorandin (~cmorandin@boc06-4-78-216-15-170.fbx.proxad.net) has joined #ceph
[10:39] <Lokta> running debian jessie 64 stable
[10:39] <Lokta> and beside ntp and ssh nothing runnin on it
[10:39] <Lokta> +g
[10:41] * hr__ (~hardes@103.50.11.146) Quit (Ping timeout: 480 seconds)
[10:44] <Lokta> Be-El : Seems that the issue i was facing before is that the mds didn't start, any idea what might cause that ?
[10:44] <Lokta> thank you ! :)
[10:45] * garphy is now known as garphy`aw
[10:46] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[10:47] <Be-El> Lokta: no clue if there's no log file
[10:47] * cmorandin (~cmorandin@boc06-4-78-216-15-170.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[10:47] <Be-El> Lokta: you can try to start it manually in non-daemon mode in a shell
[10:50] <Lokta> same command as before, ceph-mds -i xx ?
[10:50] * khyron (~khyron@fixed-190-159-187-190-159-75.iusacell.net) Quit (Read error: Connection reset by peer)
[10:50] * khyron (~khyron@fixed-190-159-187-190-159-75.iusacell.net) has joined #ceph
[10:50] * codice (~toodles@75-128-34-237.static.mtpk.ca.charter.com) has joined #ceph
[10:51] <Lokta> i have a log file now
[10:52] <Lokta> no errors in it
[10:52] <Lokta> wait nvm, not same file
[10:53] <Lokta> ceph-mds.admin.log shows help menu
[10:53] <Lokta> ceph-mds.gfs3.log shows usual log (gfs3 is the name of the server)
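The two file names are consistent with the default log path, which embeds the daemon name, so a run started with the wrong -i value logs to its own file; a sketch for verifying (the default shown is an assumption for jewel):

    # assumed default: log_file = /var/log/ceph/$cluster-$name.log
    #   name = mds.gfs3     -> /var/log/ceph/ceph-mds.gfs3.log
    #   "ceph-mds -i admin" -> /var/log/ceph/ceph-mds.admin.log
    ceph-mds --show-config | grep -i log_file   # print the effective value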
[10:56] * n0x1d (~Shesh@7V7AAEA7W.tor-irc.dnsbl.oftc.net) Quit ()
[10:56] * clarjon1 (~brannmar@tor4thepeople1.torexitnode.net) has joined #ceph
[10:57] * IvanJobs (~hardes@103.50.11.146) has joined #ceph
[10:58] * LeaChim (~LeaChim@host86-150-161-6.range86-150.btcentralplus.com) has joined #ceph
[10:58] * nass5 (~fred@dn-infra-12.lionnois.site.univ-lorraine.fr) Quit (Quit: Leaving.)
[10:58] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) has joined #ceph
[10:58] * nass5 (~fred@dn-infra-12.lionnois.site.univ-lorraine.fr) has joined #ceph
[10:59] * garphy`aw is now known as garphy
[11:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[11:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[11:06] <titzer> hey cephers
[11:06] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) has joined #ceph
[11:08] <titzer> Quick question, what's the difference between "journal queue max ops" and "filestore queue max ops"?
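Roughly, both are FileStore-path throttles: journal queue max ops caps how many ops may wait in the journal submission queue, while filestore queue max ops caps ops queued for the backing filesystem after journaling. A ceph.conf sketch (values are illustrative, not tuning advice):

    [osd]
    journal queue max ops = 300
    filestore queue max ops = 50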
[11:10] <Hemanth> How do I start rbd-mirror daemon with systemd ??
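On the rbd-mirror question: jewel ships a templated unit; a sketch assuming the package installed ceph-rbd-mirror@.service and that the daemon authenticates as client.admin (the instance name after the @ is the ceph user ID):

    systemctl enable ceph-rbd-mirror@admin
    systemctl start ceph-rbd-mirror@admin
    systemctl status ceph-rbd-mirror@admin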
[11:10] * ade (~abradshaw@85.158.226.30) Quit (Remote host closed the connection)
[11:11] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[11:13] * PavelK (~oftc-webi@216.69.244.82) Quit (Ping timeout: 480 seconds)
[11:15] * rakeshgm (~rakesh@121.244.87.117) Quit (Ping timeout: 480 seconds)
[11:16] * vincepii (~textual@77.245.22.78) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[11:18] * vincepii (~textual@77.245.22.78) has joined #ceph
[11:18] * kefu is now known as kefu|afk
[11:19] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:990a:ff3e:882:a247) has joined #ceph
[11:19] * nass5 (~fred@dn-infra-12.lionnois.site.univ-lorraine.fr) Quit (Quit: Leaving.)
[11:20] * nass5 (~fred@dn-infra-12.lionnois.site.univ-lorraine.fr) has joined #ceph
[11:24] * owasserm (~owasserm@bzq-82-81-161-51.red.bezeqint.net) has joined #ceph
[11:24] * owasserm (~owasserm@bzq-82-81-161-51.red.bezeqint.net) Quit ()
[11:24] * owasserm (~owasserm@bzq-82-81-161-51.red.bezeqint.net) has joined #ceph
[11:25] * ade (~abradshaw@85.158.226.30) has joined #ceph
[11:25] * jirib (~jirib@176.74.139.218) has joined #ceph
[11:26] <jirib> hi, i'd like to play with ceph and i can't decide if i should go with EL7/centos7 or fedora
[11:26] * clarjon1 (~brannmar@4MJAAENVM.tor-irc.dnsbl.oftc.net) Quit ()
[11:26] * nass5 (~fred@dn-infra-12.lionnois.site.univ-lorraine.fr) Quit (Quit: Leaving.)
[11:26] <darkfader> jirib: imo only reason for fedora is if you wanna do crazy stuff with iscsi/fb/ib re-export
[11:27] <darkfader> there it'll be easier to patch up some things
[11:27] <jirib> darkfader: i'd like to play with ha iscsi with ceph backend
[11:27] <darkfader> i'd use fedora then
[11:27] <jirib> ok thx
[11:28] * nass5 (~fred@dn-infra-12.lionnois.site.univ-lorraine.fr) has joined #ceph
[11:28] * jennna (~green@86.121.198.49) has joined #ceph
[11:29] * kefu|afk (~kefu@114.92.122.74) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[11:29] <jennna> https://www.change.org/p/high-court-of-justice-of-england-and-wales-save-vladimir-bukovsky
[11:30] * jirib (~jirib@176.74.139.218) has left #ceph
[11:31] * jowilkin (~jowilkin@c-98-207-136-41.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[11:32] * jowilkin (~jowilkin@c-98-207-136-41.hsd1.ca.comcast.net) has joined #ceph
[11:33] * Misacorp (~darkid@justus.impium.de) has joined #ceph
[11:34] * derjohn_mob (~aj@46.189.28.56) Quit (Ping timeout: 480 seconds)
[11:36] <Lokta> i lost another server
[11:36] <Lokta> and on restart ceph didn't start again
[11:37] <Lokta> i'll update all kernels to 4.5 and see if it's more stable
[11:38] <Lokta> and the log doesn't say why the osd didn't go up again
[11:38] * owasserm (~owasserm@bzq-82-81-161-51.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[11:38] <Lokta> has anyone had a similar issue?
[11:38] * zhaochao_ (~zhaochao@125.39.9.151) has joined #ceph
[11:41] * andrei__1 (~andrei@88.96.248.30) has joined #ceph
[11:42] <andrei__1> Hello guys
[11:42] * huangjun|2 (~kvirc@113.57.168.154) Quit (Ping timeout: 480 seconds)
[11:42] <andrei__1> I've got a question on OS refresh on the ceph osd servers. I've asked on the mailing list, but didn't get an answer. Was wondering if anyone here could advise
[11:43] <andrei__1> I am planning to have a clean OS reinstall on our ceph osd servers and mons
[11:43] * nass5 (~fred@dn-infra-12.lionnois.site.univ-lorraine.fr) Quit (Quit: Leaving.)
[11:43] <andrei__1> I was hoping to do this without any downtime
[11:43] <andrei__1> and with minimal service level disruption
[11:43] * nass5 (~fred@dn-infra-12.lionnois.site.univ-lorraine.fr) has joined #ceph
[11:43] <andrei__1> I've got 3 osd servers and 3 mons
[11:44] <andrei__1> each osd server has 10 osds and 2 ssds for journals
[11:44] * derjohn_mob (~aj@46.189.28.37) has joined #ceph
[11:44] <andrei__1> what is the best way to go about this task?
[11:44] <darkfader> andrei__1: any chance you can re-purpose 3 other servers to go up to 6 for the time?
[11:44] * hr__ (~hardes@103.50.11.146) has joined #ceph
[11:45] <andrei__1> darkfader, nope, sorry, they don't have any disks in them. they are blade servers
[11:45] * zhaochao (~zhaochao@125.39.9.159) Quit (Ping timeout: 480 seconds)
[11:45] <darkfader> i mean 3 addln servers
[11:45] <darkfader> not even more disks to the same :)
[11:45] * InIMoeK (~InIMoeK@95.170.93.16) has joined #ceph
[11:45] <andrei__1> darkfader, do you know if ceph would be okay taking existing osds after a clean OS reinstall?
[11:46] * IvanJobs (~hardes@103.50.11.146) Quit (Read error: Connection reset by peer)
[11:46] <andrei__1> darkfader, i don't think so, I don't have any spare servers at all
[11:46] <darkfader> andrei__1: well 10 times the risk
[11:46] <darkfader> :/
[11:46] <andrei__1> ouch!
[11:47] <darkfader> generally yeah you can reinstall the OS and reinstall ceph and mount everything back to where it was
[11:47] <darkfader> if you wipe it on reinstall you'll be very unhappy
[11:47] <darkfader> it's just not proper practice
[11:47] <andrei__1> what is the proper recommended practice?
[11:47] <darkfader> but it'll work most likely
[11:47] * linjan (~linjan@86.62.112.22) has joined #ceph
[11:47] <darkfader> have more servers :/
[11:48] <darkfader> anyway, i suppose you still got enough free space to take one osd server out and keep full redundancy still?
[11:48] <andrei__1> well, i don't have that option unfortunately ((
[11:48] <andrei__1> dark: i've got 3 osd servers with replica of 2
[11:48] <darkfader> andrei__1: are you sure? lease one for a month and it'll be super easy
[11:49] <andrei__1> so, if i take one out for maintenance, the cluster would still work
[11:49] * jennna (~green@86.121.198.49) Quit ()
[11:49] * pabluk_ is now known as pabluk__
[11:49] <darkfader> andrei__1: yeah
[11:50] <darkfader> are the mons separate or on the same
[11:50] * owasserm (~owasserm@bzq-82-81-161-50.red.bezeqint.net) has joined #ceph
[11:50] <darkfader> if they're on the same it would be more tricky
[11:50] <andrei__1> two mon servers are the same as osd servers
[11:50] <andrei__1> and one mon is on a totally different server
[11:51] <andrei__1> i've got spare blade servers which I can make ceph mons
[11:51] <andrei__1> so, i can migrate the mon function from the two osd servers
[11:51] <InIMoeK> hi guys, I'm trying to make an osd node design
[11:51] <darkfader> yeah probably will need to :>
[11:51] <InIMoeK> 4 x 2tb data and journal on Intel DC P3x00
[11:51] <InIMoeK> 10 osd nodes
[11:52] <andrei__1> that could be done, assuming ceph would be okay with doing 3 mons > 5 mons and then back to 3 mons
[11:52] <InIMoeK> but I'm having my doubts about the MTBF of the intel ssd
[11:52] <andrei__1> InIMoeK, they are pretty good and durable
[11:52] <andrei__1> InIMoeK, it depends on your cluster requirements
[11:53] <InIMoeK> but the journal can be pretty write intensive
[11:53] <andrei__1> InIMoeK, is it heavy in writes?
[11:53] <InIMoeK> hmm
[11:53] <InIMoeK> there will be some mysql DB machines on there ( running in proxmox )
[11:53] <andrei__1> InIMoeK, I think the intel ssds are pretty good for endurance. you should be able to do about 3-5 full drive writes per day for about 5 years if I am not mistaken. However, please check the figures yourself
[11:54] * garphy is now known as garphy`aw
[11:54] <InIMoeK> yes I'm checking the spec sheet now
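The endurance arithmetic behind that DWPD figure is simple enough to check by hand; the 400 GB capacity and 3 DWPD rating below are assumed example numbers, so substitute whatever the spec sheet actually says:

    # rated lifetime writes = capacity x drive-writes-per-day x days of service
    python -c "print(400e9 * 3 * 365 * 5 / 1e15)"   # ~2.19 PB written over 5 years at 3 DWPD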
[11:54] * The1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[11:55] <andrei__1> InIMoeK, please don't expect any miracles from ceph when it comes to database usage. I've not had much luck with ceph being a super performer for a bunch of small block size random reads/writes
[11:55] <andrei__1> darkfader, could you please point me to the documentation that describes the process that I need to follow? I was not able to find any instructions
[11:56] * Sun7zu1 (~hifi@2.tor.exit.babylon.network) has joined #ceph
[11:57] <darkfader> http://docs.ceph.com/docs/firefly/rados/operations/add-or-rm-mons/
[11:57] <InIMoeK> andrei__1, I'm not trying to make an super high performant cluster. It's more a second cluster to offload the less critical machines from our primary VMware cluster which has high performant storage etc
[11:57] <darkfader> start by safeguarding your mons
[11:58] <darkfader> so that you'll know for sure that if you make a mistake there you still have the 2 other mon available
[11:58] <andrei__1> darkfader, okay. Am I right in assuming that I should be okay with having 5 mons?
[11:58] <andrei__1> instead of the recommended 3 mons?
[11:59] <darkfader> i think that is fine
[12:00] <darkfader> 5 is just not adding much but if you drop 5->4 i think there'll be no quorum issue and that would help for your migration
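A minimal sketch of the 3 -> 5 -> 3 mon cycle being discussed, using ceph-deploy; the hostnames mon4 and mon5 are placeholders, and the manual procedure in the add-or-rm-mons doc linked above works equally well:

    ceph-deploy mon add mon4                    # grow the quorum one mon at a time
    ceph-deploy mon add mon5
    ceph quorum_status --format json-pretty     # confirm all five mons are in quorum
    # ... migrate/reinstall the OSD hosts ...
    ceph-deploy mon destroy mon4                # shrink back to three afterwards
    ceph-deploy mon destroy mon5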
[12:00] * vincepii (~textual@77.245.22.78) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[12:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[12:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[12:01] <andrei__1> InIMoeK, got it. Ceph is pretty good with concurrency. So, for example, my small cluster can easily deliver 3GB/s (gigabyte) aggregate reads when doing benchmarking from about 10 vms using about 8-10 threads each. However, a single thread would not be faster than about 50 MB/s
[12:01] <andrei__1> this is for uncached reads with 1M or 4M block size
[12:01] * vincepii (~textual@77.245.22.78) has joined #ceph
[12:02] <andrei__1> which sucks if you look at the single vm / single thread performance
[12:02] <andrei__1> but if you look at the cluster wide performance, it's pretty good
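One hedged way to reproduce the gap andrei__1 describes is rados bench; the pool name testpool is a placeholder and the 60-second runs are arbitrary:

    rados bench -p testpool 60 write --no-cleanup   # write benchmark objects and keep them
    rados bench -p testpool 60 seq -t 1             # a single reader: expect tens of MB/s
    rados bench -p testpool 60 seq -t 16            # 16 concurrent readers: much higher aggregate
    rados -p testpool cleanup                       # remove the benchmark objects afterwards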
[12:02] <andrei__1> darkfader, okay, got it.
[12:03] * Misacorp (~darkid@6AGAABMZS.tor-irc.dnsbl.oftc.net) Quit ()
[12:03] <andrei__1> darkfader, so, what do I do once i've increased the number of mons?
[12:03] <darkfader> re-install the first node's OS that doesn't have mons on it
[12:03] <andrei__1> okay
[12:03] <darkfader> i think you should set the "out" timer to really long for that one
[12:03] <darkfader> since thats where you'll run into issues and document them
[12:04] <darkfader> so you won't have a full refill
[12:04] <andrei__1> darkfader, should I just do ceph osd set noout?
[12:04] <darkfader> ok yeah
[12:04] <darkfader> i'd go as far as unplugging the osd/jnl disks during reinstall
[12:05] <andrei__1> darkfader, you mean physically unplugging? to make sure they are not wiped by mistake?
[12:05] <darkfader> yeah
[12:05] <andrei__1> got it!
[12:05] <andrei__1> thanks for the tip
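Sketched out, the pattern being endorsed here keeps the cluster from marking the stopped OSDs out and triggering a backfill storm:

    ceph osd set noout      # down OSDs stay "in" while the host is being reinstalled
    # ... stop the OSDs, reinstall the OS, bring the OSDs back up ...
    ceph osd unset noout    # restore normal down->out behaviour once the host rejoins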
[12:05] <InIMoeK> andrei__1, yes so overal performance should be fine I guess
[12:05] <andrei__1> darkfader, should I save the ceph working folders, like /var/lib/ceph?
[12:06] <darkfader> andrei__1: yeah after it's stopped
[12:06] <darkfader> and the udev rules
[12:06] <InIMoeK> will be testing proxmox first in a test cluster without any flash backing to see the raw sata disk performance
[12:06] <andrei__1> darkfader, I don't have any udev rules
[12:06] <andrei__1> as I am using /dev/disk/by-id paths
[12:06] <andrei__1> and not the /dev/sdX paths
[12:07] <andrei__1> coz my drive letters change with pretty much every reboot
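A minimal sketch of the save-state step discussed above, assuming default paths, a sysvinit-style host, and an assumed /backup destination off the machine:

    service ceph stop                                # stop the mon/osd daemons on this host
    umount /var/lib/ceph/osd/ceph-*                  # detach the data disks first, so the tarball
                                                     # holds metadata and keyrings, not OSD contents
    tar czf /backup/ceph-state.tgz /etc/ceph /var/lib/ceph
    ls -l /dev/disk/by-id > /backup/disk-by-id.txt   # record the stable device paths, since sdX letters move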
[12:07] <andrei__1> InIMoeK, expect the single thread performance to be about 2-3 times slower than the raw disk performance
[12:07] <andrei__1> for the reads at least
[12:08] <andrei__1> and if you have journal on the same disk, the performance would be even worse
[12:08] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[12:08] <InIMoeK> I have a test cluster with 60 x 1tb sata
[12:08] <InIMoeK> on 3 hosts
[12:08] <darkfader> andrei__1: ok :)
[12:08] <InIMoeK> for testing
[12:08] <InIMoeK> and indeed journal on same disk as OSD
[12:08] <andrei__1> darkfader, so, after saving the ceph folder, is it safe to reinstall the OS?
[12:09] <InIMoeK> my "prod" cluster would be 40 x 2tb with the intel flash backing for journal
[12:09] <darkfader> andrei__1: i can't think of anything else, maybe someone else spots something?
[12:10] * garphy`aw is now known as garphy
[12:13] <andrei__1> darkfader, so, once i've reinstalled the os, do I manually install the ceph packages, copy back the /var/lib/ceph folders and perhaps the /etc/ceph folder
[12:13] <andrei__1> and start up the ceph services?
[12:14] <andrei__1> is this the right way?
[12:14] <andrei__1> InIMoeK, that should work just fine. What is your network?
[12:15] * ade (~abradshaw@85.158.226.30) Quit (Quit: Too sexy for his shirt)
[12:15] * ade (~abradshaw@85.158.226.30) has joined #ceph
[12:16] <darkfader> andrei__1: yeah, plus mounting the dirs in between
[12:17] <andrei__1> darkfader, shouldn't ceph automatically mount folders when starting the ceph-osd-all service? It does that automatically at the moment
[12:17] <andrei__1> i've got nothing in the fstab to manually mount the osds
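Put together, the restore side might look like the following, under the same assumptions as the backup sketch above; ceph-disk activate-all also answers the auto-mount question, since it rediscovers and remounts the OSD data partitions:

    apt-get install ceph                    # or yum install ceph, matching the old release
    tar xzf /backup/ceph-state.tgz -C /     # bring back /etc/ceph and /var/lib/ceph
    ceph-disk activate-all                  # remount the data partitions and start the OSDs
    ceph -s                                 # confirm they rejoin before touching the next host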
[12:17] * nass5 (~fred@dn-infra-12.lionnois.site.univ-lorraine.fr) Quit (Remote host closed the connection)
[12:18] * nass5 (~fred@dn-infra-12.lionnois.site.univ-lorraine.fr) has joined #ceph
[12:18] <s3an2> Do we know when we should expect the next release of Jewel, I am hitting https://github.com/ceph/ceph/pull/8766 in test upgrades at the moment
[12:20] <darkfader> andrei__1: yes, i had that with udev originally that's why i got confused
[12:20] <darkfader> *blink*
[12:22] <andrei__1> s3an2, yeah, I would like to know the release dates as well. I am planning to upgrade to Jewel asap, but feel a bit scared installing the .0 release
[12:22] <andrei__1> i would prefer to wait for .1 or .2 releases at least
[12:23] <darkfader> andrei__1: i checked in ceph-disk it uses /etc and the statedir
[12:23] <andrei__1> it burned me once a few years back
[12:23] <darkfader> as long as you bring back those on time it's ok in fact
[12:23] <darkfader> the -all does ceph disk activate (!= prepare) so it'll fail at worst but not do evil things
[12:23] <andrei__1> darkfader, okay, then what do you mean by the statedir? the /var/lib/ceph stuff?
[12:23] <darkfader> yeah
[12:23] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[12:24] <andrei__1> got it
[12:25] <andrei__1> darkfader, so, this method should be the quickest way I assume, with minimal rebalancing of pgs
[12:25] <andrei__1> darkfader, but it is not the safest way of doing things, right?
[12:25] <darkfader> ack
[12:26] <darkfader> the totally safe/fast way is if you make a new OS on usb or sth like that
[12:26] <darkfader> but only fast in switchover time
[12:26] * Sun7zu1 (~hifi@7V7AAEA87.tor-irc.dnsbl.oftc.net) Quit ()
[12:26] <darkfader> slow in preparation :)
[12:29] <andrei__1> darkfader, what about removing the osds from the cluster altogether, reinstalling the server and adding them back in?
[12:30] <darkfader> then you'll have the big impact in any case
[12:30] <andrei__1> would it not be the safest way? assuming there is enough free space left after removing the osds?
[12:30] <darkfader> ah
[12:30] <andrei__1> darkfader, yeah, as I would probably need to rebalance everything twice for each osd removal/addition
[12:30] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[12:30] <andrei__1> it's going to take ages i think
[12:31] <darkfader> no, the worst case of what you plan to do equals the outcome of this
[12:31] <darkfader> like, by just reinstalling the os part you got a good chance of bringing them back in
[12:31] <andrei__1> darkfader, got it
[12:31] <darkfader> bbl, other things want me back :>
[12:32] <darkfader> (i.e. waiting in a telco alone?)
[12:32] <andrei__1> thanks for your help mate!
[12:33] * vincepii (~textual@77.245.22.78) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[12:34] * andrei__1 (~andrei@88.96.248.30) Quit (Quit: Ex-Chat)
[12:39] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) has joined #ceph
[12:42] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[12:43] * derjohn_mob (~aj@46.189.28.37) Quit (Ping timeout: 480 seconds)
[12:44] * Mika_c__ (~quassel@122.146.93.152) Quit (Remote host closed the connection)
[12:46] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: This computer has gone to sleep)
[12:47] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[12:48] <s3an2> andrei__1, its not just me then :)
[12:49] * ade (~abradshaw@85.158.226.30) Quit (Quit: Too sexy for his shirt)
[12:51] * ngoswami (~ngoswami@121.244.87.116) Quit ()
[12:52] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[12:53] * derjohn_mob (~aj@46.189.28.37) has joined #ceph
[12:56] * FNugget (~Catsceo@jupiter.m3l.io) has joined #ceph
[12:58] * wjw-freebsd (~wjw@176.74.240.1) has joined #ceph
[12:59] * raarts (~Adium@82-171-243-109.ip.telfort.nl) has joined #ceph
[13:00] <raarts> hi, just started my first test deploy on jewel, during ceph-deploy I get: Setting system user ceph properties..usermod: user ceph is currently used by process 10032
[13:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[13:01] * jclm (~jclm@marriott-hotel-ottawa-yowmc.sites.intello.com) has joined #ceph
[13:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[13:02] * Racpatel (~Racpatel@2601:87:3:3601::6f15) Quit (Quit: Leaving)
[13:03] * rapedex (~Kakeru@freedom.ip-eend.nl) has joined #ceph
[13:04] * penguinRaider (~KiKo@14.139.82.6) Quit (Quit: Leaving)
[13:04] * Hemanth (~hkumar_@121.244.87.117) Quit (Ping timeout: 480 seconds)
[13:05] * pmanny (~rodri@14.139.82.6) has joined #ceph
[13:06] * owasserm (~owasserm@bzq-82-81-161-50.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[13:06] * branto (~branto@nat-pool-brq-t.redhat.com) Quit (Quit: Leaving.)
[13:09] <pmanny> hi ceph noob here. I am trying to install ceph using ceph(manual) guide on the website. I am using the release infernalis(9.2.1). Whenever I try to activate an osd I get the following error http://pastebin.com/eeCrgkZE. Am I doing something wrong here?
[13:12] * Racpatel (~Racpatel@2601:87:3:3601::6f15) has joined #ceph
[13:13] * owasserm (~owasserm@bzq-82-81-161-50.red.bezeqint.net) has joined #ceph
[13:16] * jclm1 (~jclm@marriott-hotel-ottawa-yowmc.sites.intello.com) has joined #ceph
[13:16] * Hemanth (~hkumar_@121.244.87.118) has joined #ceph
[13:18] * jclm (~jclm@marriott-hotel-ottawa-yowmc.sites.intello.com) Quit (Ping timeout: 480 seconds)
[13:18] * atheism (~atheism@182.48.117.114) Quit (Ping timeout: 480 seconds)
[13:19] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[13:19] <InIMoeK> atheism left because he did not believe in Ceph
[13:19] <InIMoeK> badumtsjj
[13:22] * flisky (~Thunderbi@246e281e.test.dnsbl.oftc.net) Quit (Quit: flisky)
[13:23] * ana_ (~oftc-webi@static.ip-171-033-130-093.signet.nl) has joined #ceph
[13:23] * ana_ (~oftc-webi@static.ip-171-033-130-093.signet.nl) has left #ceph
[13:23] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[13:24] * simonada (~oftc-webi@static.ip-171-033-130-093.signet.nl) has joined #ceph
[13:24] * mattbenjamin (~mbenjamin@206.121.37.170) has joined #ceph
[13:24] * Hemanth (~hkumar_@121.244.87.118) Quit (Ping timeout: 480 seconds)
[13:24] * garphy is now known as garphy`aw
[13:24] * zhaochao_ (~zhaochao@125.39.9.151) Quit (Quit: ChatZilla 0.9.92 [Firefox 45.1.0/20160426232238])
[13:26] * FNugget (~Catsceo@76GAAE1RT.tor-irc.dnsbl.oftc.net) Quit ()
[13:26] * Kottizen (~Corti^car@46.166.138.168) has joined #ceph
[13:26] <IcePic> pmanny: permission denied smells like some part should be run as root and isnt.
[13:26] * hr__ (~hardes@103.50.11.146) Quit (Quit: Leaving)
[13:27] <IcePic> or the username "ubuntu" may not perform the activities like " creating empty object store in /var/local/osd0"
[13:27] * vincepii (~textual@77.245.22.78) has joined #ceph
[13:27] * libracious (~libraciou@catchpenny.cf) Quit (Remote host closed the connection)
[13:27] * libracious (~libraciou@catchpenny.cf) has joined #ceph
[13:28] <Be-El> pmanny: why do you use a directory instead of a block device?
[13:28] <simonada> Hello everyone. I had a cluster with 10 unfound objects. The cluster was in recovery state, but at some point no recovery was really happening because of these unfound objects. I marked them as lost with ceph pg pgid mark_unfound_lost delete, so recovery could continue. Now all pending pgs have been backfilled, remapped etc. However, ceph -s still shows the message of 7 unfound objects. Querying ceph health detail no longer shows which PGs
[13:29] <pmanny> Be-El, I was building the cluster to learn stuff mostly
[13:29] <simonada> I wonder why we still have those 7 unfound objects reported, even though it looks like we marked them as lost and removed them, and it worked
[13:30] <Be-El> pmanny: and as IcePic already mentionend, the unix user probably does not have sufficient permissions
[13:31] <simonada> Any hints on what could be happening?
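Hedged commands for chasing the leftover unfound count; 2.5 below is a placeholder pgid:

    ceph health detail | grep unfound       # see which PGs still claim unfound objects
    ceph pg 2.5 query                       # peering state plus missing/unfound detail for one PG
    ceph pg 2.5 list_unfound                # enumerate the objects that PG cannot locate
    ceph pg 2.5 mark_unfound_lost delete    # the last-resort step simonada already ran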
[13:31] <pmanny> Be-El, the username I mentioned in the command has access for password sudo .. I set that up already
[13:32] * vincepii (~textual@77.245.22.78) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[13:33] * rapedex (~Kakeru@76GAAE1RW.tor-irc.dnsbl.oftc.net) Quit ()
[13:33] <Be-El> pmanny: you can log into that host and run the ceph-osd --mkfs command manually using sudo
[13:33] * vincepii (~textual@77.245.22.78) has joined #ceph
[13:37] <pmanny> Be-El, just tried that by logging into the machine, strangely the command which gave error fails even with sudo
[13:37] * shyu (~Shanzhi@119.254.120.66) Quit (Remote host closed the connection)
[13:38] * jclm1 (~jclm@marriott-hotel-ottawa-yowmc.sites.intello.com) Quit (Quit: Leaving.)
[13:38] <Be-El> pmanny: ceph-deploy is just ssh + the right command + some shortcuts
[13:38] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) Quit (Remote host closed the connection)
[13:39] <Be-El> pmanny: so if any command fails you can also manually perform the same command to debug the problem
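A hedged example of replaying the failing step by hand on the OSD host; /var/local/osd0 is the directory from the manual-deployment guide pmanny is following, and the permissions theory comes from IcePic above:

    sudo ceph-osd -i 0 --mkfs --mkkey        # osd id and data path are taken from ceph.conf
    ls -ln /var/local/osd0                   # infernalis daemons run as user "ceph", not root
    sudo chown -R ceph:ceph /var/local/osd0  # the usual fix when activation dies on permissions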
[13:39] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) has joined #ceph
[13:39] <Be-El> pmanny: and back to the problem itself: i have no clue, i haven't used jewel or ceph-deploy yet
[13:40] <pmanny> Be-El, I am using Infernalis
[13:42] <Be-El> in that case i would propose to try jewel, since it is the latest release
[13:43] * garphy`aw is now known as garphy
[13:43] <pmanny> Be-El, sure thanks :-)
[13:44] * adun153 (~ljtirazon@124.105.23.189) has joined #ceph
[13:46] * adun153 (~ljtirazon@124.105.23.189) Quit ()
[13:47] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) Quit (Ping timeout: 480 seconds)
[13:50] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[13:52] * TMM (~hp@185.5.122.2) has joined #ceph
[13:56] * Kottizen (~Corti^car@46.166.138.168) Quit ()
[13:56] * ahmeni (~Corneliou@81-7-17-171.blue.kundencontroller.de) has joined #ceph
[13:56] * th0m (~smuxi@static-qvn-qvu-164067.business.bouyguestelecom.com) Quit (Remote host closed the connection)
[13:58] * i_m (~ivan.miro@deibp9eh1--blueice4n5.emea.ibm.com) has joined #ceph
[13:59] * jclm (~jclm@72.143.50.126) has joined #ceph
[14:00] * jclm (~jclm@72.143.50.126) Quit ()
[14:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[14:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[14:02] * Lokta (~Lokta@carbon.coe.int) Quit ()
[14:02] * shylesh (~shylesh@121.244.87.118) Quit (Remote host closed the connection)
[14:02] * pabluk__ is now known as pabluk_
[14:08] <vincepii> guys, is an MDS stateless? That is, if you have only one and the server where it runs crashes, it's just a matter of redeploying it to be back to a working state of CephFS?
[14:08] <vincepii> (assume that server runs only the MDS and no OSDs or MONs)
[14:10] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[14:10] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[14:13] * yankcrime (~yankcrime@185.43.216.241) has joined #ceph
[14:16] <smerz> hm not sure. i thought it kept and replicated state but now you ask i'm not sure tbh
[14:18] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Ping timeout: 480 seconds)
[14:20] <Be-El> vincepii: MDS does not have any local state on disk except its keyring
[14:21] * blynch_ (~blynch@vm-nat.msi.umn.edu) Quit (Ping timeout: 480 seconds)
[14:21] <vincepii> Be-El: ok, thanks! and multi-MDS (active/passive) is still discouraged in jewel?
[14:22] * ade (~abradshaw@85.158.226.30) has joined #ceph
[14:23] * bniver (~bniver@nat-pool-bos-u.redhat.com) has joined #ceph
[14:24] * blynch (~blynch@vm-nat.msi.umn.edu) has joined #ceph
[14:26] * ahmeni (~Corneliou@76GAAE1SW.tor-irc.dnsbl.oftc.net) Quit ()
[14:26] * kefu (~kefu@183.193.162.205) has joined #ceph
[14:26] * rcfighter (~Frymaster@185.100.86.128) has joined #ceph
[14:26] * mattbenjamin (~mbenjamin@206.121.37.170) Quit (Ping timeout: 480 seconds)
[14:27] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[14:27] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[14:30] * allaok (~allaok@161.106.4.5) has joined #ceph
[14:30] * allaok (~allaok@161.106.4.5) Quit ()
[14:30] * allaok (~allaok@161.106.4.5) has joined #ceph
[14:30] * allaok (~allaok@161.106.4.5) Quit ()
[14:31] * valeech (~valeech@50-205-143-162-static.hfc.comcastbusiness.net) has joined #ceph
[14:31] * allaok (~allaok@161.106.4.5) has joined #ceph
[14:31] * allaok (~allaok@161.106.4.5) has left #ceph
[14:32] * wjw-freebsd2 (~wjw@176.74.240.1) has joined #ceph
[14:34] * sankarshan (~sankarsha@121.244.87.117) has joined #ceph
[14:34] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: This computer has gone to sleep)
[14:35] * wjw-freebsd (~wjw@176.74.240.1) Quit (Ping timeout: 480 seconds)
[14:36] * kefu_ (~kefu@114.92.122.74) has joined #ceph
[14:38] <raarts> ceph newbie.
[14:38] <raarts> Having huge problems installing jewel on debian jessie. Both manual as well as ceph-deploy
[14:38] * vincepii (~textual@77.245.22.78) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[14:39] * kefu (~kefu@183.193.162.205) Quit (Ping timeout: 480 seconds)
[14:39] <raarts> Following documentation, but bumping into walls all the time.
[14:39] <raarts> Is this to be expected? Am I doing it wrong?
[14:40] * vincepii (~textual@77.245.22.78) has joined #ceph
[14:43] <raarts> I entered a bug for ceph-deploy: http://tracker.ceph.com/issues/15726
[14:44] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[14:44] <raarts> And now I bumped into the following problem: HEALTH_ERR 64 pgs are stuck inactive for more than 300 seconds; 64 pgs stuck inactive
[14:45] <raarts> After much digging I manually started ceph-osd and it reported: WARNING: max attr value size (1024) is smaller than osd_max_object_name_len (2048). Your backend filesystem appears to not support attrs large enough to handle the configured max rados name size. You may get unexpected ENAMETOOLONG errors on rados operations or buggy behavior
[14:45] * simonada (~oftc-webi@static.ip-171-033-130-093.signet.nl) Quit (Ping timeout: 480 seconds)
[14:46] <raarts> Apparently I need to have an xfs or btrfs filesystem, but that is nowhere in the startup docs.
[14:46] <raarts> I feel you guys make it pretty hard to point a beginner in the right direction
[14:47] <raarts> (not meant personally of course)
[14:47] <via> the docs talk about filesystems, recommend against ext4, but provide instructions on how to increase the max attr size
[14:47] <via> http://docs.ceph.com/docs/hammer/rados/configuration/filesystem-recommendations/
[14:47] <via> filestore xattr use omap = true
[14:48] <raarts> which I tried and it did not work
[14:48] <via> starting with jewel ext4 is explicitly recommended against because even with that option its possible youll hit those limits
[14:49] <via> http://docs.ceph.com/docs/jewel/rados/configuration/filesystem-recommendations/
[14:49] <via> provides options to allow it to keep working, but i wouldn't if its a new cluster
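The jewel page via linked boils down to two ceph.conf overrides for clusters that must stay on ext4; they shorten rados names to fit ext4's xattr limits and are explicitly not safe for radosgw workloads:

    [osd]
    osd max object name len = 256
    osd max object namespace len = 64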
[14:49] <raarts> Sorry, I am frustrated, because I am following the quick startup, but I bump into all sort of problems that are somewhere else in the docs
[14:50] * shaunm (~shaunm@74.83.215.100) has joined #ceph
[14:50] <raarts> Thing is, this is a *serious* storage system, so you kinda expect the quick-start docs and/or the ceph-deploy to just work
[14:51] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[14:51] <raarts> Can someone take a quick look at http://tracker.ceph.com/issues/15726 and see if this is a common problem?
[14:53] * erwan_taf (~erwan@46.231.131.178) Quit (Read error: Connection reset by peer)
[14:53] <raarts> but ok, I'll just stumble through. Thanks, I am writing my own ansible playbook to install ceph for my use case.
[14:53] * evelu (~erwan@46.231.131.178) has joined #ceph
[14:53] * georgem (~Adium@206.108.127.16) has joined #ceph
[14:54] <raarts> (also I would expect ceph-deploy to emit a warning if an ext4 filesystem is found)
[14:54] <raarts> I just think the ramp-up could be made more fluent.
[14:56] * rcfighter (~Frymaster@7V7AAEBA8.tor-irc.dnsbl.oftc.net) Quit ()
[14:58] * kefu_ (~kefu@114.92.122.74) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[15:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[15:01] * haomaiwang (~haomaiwan@106.187.51.170) has joined #ceph
[15:03] * mohmultihouse (~mohmultih@gw01.mhitp.dk) has joined #ceph
[15:06] <vincepii> raarts what about https://github.com/ceph/ceph-ansible?
[15:07] * delcake2 (~xul@4MJAAEN0M.tor-irc.dnsbl.oftc.net) has joined #ceph
[15:07] <vincepii> I'm setting that up at this very moment (going through vars), but I haven't tried it yet
[15:08] <raarts> @vincepii that one is just too big and complicated to adjust to my situation
[15:08] <raarts> also I always want to know how the parts of a system tie together before deploying it.
[15:09] <raarts> Don't want to wait with learning the system until the time it breaks down :-(
[15:09] * redf_ (~red@80-108-89-163.cable.dynamic.surfer.at) Quit (Ping timeout: 480 seconds)
[15:10] <raarts> I have everything running I think. What does this mean: 1 pgs are stuck inactive for more than 300 seconds
[15:11] <raarts> 64 pgs peering 1 pgs stuck inactive
[15:12] * Hemanth (~hkumar_@121.244.87.117) has joined #ceph
[15:12] * deepthi (~deepthi@106.206.140.137) Quit (Quit: Leaving)
[15:14] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[15:15] * atheism (~atheism@106.38.140.252) has joined #ceph
[15:19] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[15:22] * blynch (~blynch@vm-nat.msi.umn.edu) Quit (Ping timeout: 480 seconds)
[15:26] * bene2 (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[15:26] * Xa (~Altitudes@76GAAE1VU.tor-irc.dnsbl.oftc.net) has joined #ceph
[15:28] * cholcombe (~chris@12.39.178.125) has joined #ceph
[15:29] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:29] * swami2 (~swami@49.44.57.245) has joined #ceph
[15:29] <swami2> hello - I am seeing a near-full warning for 4 OSDs out of 252 OSDs... what is the best way to fix this issue? currently 60% raw space used
[15:32] * The_BallPI (~pi@20.92-221-43.customer.lyse.net) Quit (Remote host closed the connection)
[15:32] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[15:37] * delcake2 (~xul@4MJAAEN0M.tor-irc.dnsbl.oftc.net) Quit ()
[15:37] * Kaervan (~nupanick@broadband-77-37-218-145.nationalcablenetworks.ru) has joined #ceph
[15:38] * raarts (~Adium@82-171-243-109.ip.telfort.nl) Quit (Quit: Leaving.)
[15:42] * vincepii (~textual@77.245.22.78) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[15:43] * kefu (~kefu@183.193.162.205) has joined #ceph
[15:45] * ade (~abradshaw@85.158.226.30) Quit (Ping timeout: 480 seconds)
[15:45] * vincepii (~textual@77.245.22.78) has joined #ceph
[15:47] * csoukup (~csoukup@159.140.254.98) has joined #ceph
[15:48] * b0e (~aledermue@213.95.25.82) Quit (Ping timeout: 480 seconds)
[15:49] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[15:49] * Hemanth (~hkumar_@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:50] * dsl (~dsl@204.155.27.220) has joined #ceph
[15:52] * bene3 (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[15:53] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) Quit (Ping timeout: 480 seconds)
[15:54] * squizzi (~squizzi@107.13.31.195) has joined #ceph
[15:56] * Xa (~Altitudes@76GAAE1VU.tor-irc.dnsbl.oftc.net) Quit ()
[15:56] * Crisco (~ulterior@207.244.70.35) has joined #ceph
[15:56] * BlaXpirit (~irc@blaxpirit.com) Quit (Ping timeout: 480 seconds)
[15:56] * wwdillingham (~LobsterRo@65.112.8.139) has joined #ceph
[15:57] * ade (~abradshaw@089144222191.atnat0031.highway.bob.at) has joined #ceph
[15:58] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[15:59] * bene2 (~bene@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[16:00] * ivancich (~ivancich@aa2.linuxbox.com) Quit (Ping timeout: 480 seconds)
[16:00] * b0e (~aledermue@213.95.25.82) has joined #ceph
[16:01] * haomaiwang (~haomaiwan@106.187.51.170) Quit (Remote host closed the connection)
[16:01] * haomaiwang (~haomaiwan@106.187.51.170) has joined #ceph
[16:02] * BlaXpirit (~irc@blaxpirit.com) has joined #ceph
[16:03] * kefu_ (~kefu@114.92.122.74) has joined #ceph
[16:07] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[16:07] * kefu (~kefu@183.193.162.205) Quit (Ping timeout: 480 seconds)
[16:07] * Kaervan (~nupanick@06SAAB2IT.tor-irc.dnsbl.oftc.net) Quit ()
[16:08] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:09] * rntavares (bd798c55@107.161.19.109) has joined #ceph
[16:10] <hroussea> swami2: reweight-by-utilization
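Hedged usage of that command; the argument is a percentage of mean utilization and 120 is the usual default, so only OSDs more than 20% above the mean get reweighted:

    ceph osd reweight-by-utilization 120   # nudge down the weight of the overfull OSDs
    ceph osd df                            # check the spread; repeat cautiously if a few stay hot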
[16:10] <mistur> Hello
[16:11] <mistur> is it possible to have "ceph df" output in bytes or kilobytes instead of M or G ? (infernalis)
[16:11] * overclk (~quassel@121.244.87.117) Quit (Remote host closed the connection)
[16:11] <mistur> or at least get this for a pool in another way ?
[16:16] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[16:19] <m0zes> mistur: would 'ceph df --format json-pretty' work for you?
[16:19] <IcePic> mistur: ask for -f json-pretty, then you get bytes
[16:20] * rntavares (bd798c55@107.161.19.109) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[16:21] <mistur> m0zes, IcePic: perfect :D \o/
[16:21] <mistur> thanks
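For scripting the same thing, the JSON can be filtered down to raw byte counts, for example with jq; the stats.bytes_used field name matches infernalis-era output but is worth double-checking:

    ceph df --format json-pretty | jq '.pools[] | {name: .name, bytes_used: .stats.bytes_used}'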
[16:22] * yanzheng (~zhyan@118.116.113.70) Quit (Quit: This computer has gone to sleep)
[16:23] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) Quit (Ping timeout: 480 seconds)
[16:24] * rntavares (bd798c55@107.161.19.109) has joined #ceph
[16:26] * IvanJobs (~hardes@183.192.73.116) has joined #ceph
[16:26] * The_Ball (~pi@20.92-221-43.customer.lyse.net) has joined #ceph
[16:26] * Crisco (~ulterior@06SAAB2JL.tor-irc.dnsbl.oftc.net) Quit ()
[16:26] * cyphase (~Esvandiar@185.100.86.69) has joined #ceph
[16:27] * cyphase is now known as Guest2691
[16:30] * Kurt (~Adium@2001:628:1:5:f15f:5930:6325:dee7) Quit (Quit: Leaving.)
[16:32] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[16:33] * ade (~abradshaw@089144222191.atnat0031.highway.bob.at) Quit (Ping timeout: 480 seconds)
[16:33] * b0e (~aledermue@213.95.25.82) Quit (Ping timeout: 480 seconds)
[16:40] * b0e (~aledermue@213.95.25.82) has joined #ceph
[16:45] * vincepii (~textual@77.245.22.78) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[16:46] * rntavares (bd798c55@107.161.19.109) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[16:52] * shaunm (~shaunm@74.83.215.100) Quit (Ping timeout: 480 seconds)
[16:53] * vincepii (~textual@77.245.22.78) has joined #ceph
[16:56] * Guest2691 (~Esvandiar@6AGAABNAJ.tor-irc.dnsbl.oftc.net) Quit ()
[16:56] * vincepii (~textual@77.245.22.78) Quit ()
[16:57] * Skaag (~lunix@rrcs-67-52-140-5.west.biz.rr.com) has joined #ceph
[16:57] * swami2 (~swami@49.44.57.245) Quit (Quit: Leaving.)
[16:58] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[17:00] * rraja (~rraja@121.244.87.117) Quit (Ping timeout: 480 seconds)
[17:01] * haomaiwang (~haomaiwan@106.187.51.170) Quit (Remote host closed the connection)
[17:01] * joshd1 (~jdurgin@71-92-201-212.dhcp.gldl.ca.charter.com) has joined #ceph
[17:01] * PuyoDead (~brannmar@4MJAAEN4A.tor-irc.dnsbl.oftc.net) has joined #ceph
[17:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[17:02] * mohmultihouse (~mohmultih@gw01.mhitp.dk) Quit (Ping timeout: 480 seconds)
[17:02] * rraja (~rraja@121.244.87.117) has joined #ceph
[17:03] * rraja (~rraja@121.244.87.117) Quit ()
[17:04] * Miouge (~Miouge@188.189.65.14) has joined #ceph
[17:05] * atheism (~atheism@106.38.140.252) Quit (Ping timeout: 480 seconds)
[17:07] * ivancich (~ivancich@aa2.linuxbox.com) has joined #ceph
[17:07] * offender (~Jebula@46.29.248.238) has joined #ceph
[17:08] * fcami (~fcami@2pu44-1-78-229-253-33.fbx.proxad.net) has joined #ceph
[17:08] * bene3 (~bene@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[17:10] * kefu_ (~kefu@114.92.122.74) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[17:10] * LongyanG (~long@15255.s.time4vps.eu) has joined #ceph
[17:10] * Long_yanG (~long@15255.s.time4vps.eu) Quit (Remote host closed the connection)
[17:16] * rakeshgm (~rakesh@121.244.87.117) Quit (Ping timeout: 480 seconds)
[17:17] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[17:17] * mtb` (~mtb`@157.130.171.46) has joined #ceph
[17:17] * mtb` (~mtb`@157.130.171.46) Quit ()
[17:17] * mtb` (~mtb`@157.130.171.46) has joined #ceph
[17:18] <georgem> does rbd have a garbage collection for its rados objects, similar to radosgw?
[17:19] * kefu (~kefu@183.193.162.205) has joined #ceph
[17:23] * vata (~vata@207.96.182.162) has joined #ceph
[17:24] * xarses (~xarses@64.124.158.100) has joined #ceph
[17:26] * overclk (~quassel@117.202.103.162) has joined #ceph
[17:27] * kefu (~kefu@183.193.162.205) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[17:29] * kefu (~kefu@183.193.162.205) has joined #ceph
[17:30] * rraja (~rraja@121.244.87.117) has joined #ceph
[17:31] * PuyoDead (~brannmar@4MJAAEN4A.tor-irc.dnsbl.oftc.net) Quit ()
[17:31] * AluAlu1 (~Grimmer@65.19.167.130) has joined #ceph
[17:34] * kefu_ (~kefu@114.92.122.74) has joined #ceph
[17:35] <fcami> leseb, ktdreyer is there a recommended Ansible version for the ceph-ansible playbook? Is "latest Ansible" a reasonable assumption? I'm working on Jewel, if that matters.
[17:35] <leseb> fcami: hum 1.9.4 and 2.0.0.1 should work
[17:35] <leseb> 2.1 should work as well
[17:35] <leseb> but 1.9.4 and 2.0.0.1 are 100% sure compatible
[17:36] <leseb> actually 2.0.0.1 is probably the best one
[17:36] <fcami> leseb: noted, thank you.
[17:37] <leseb> fcami: np np
[17:37] * yanzheng (~zhyan@125.70.21.216) has joined #ceph
[17:37] * offender (~Jebula@6AGAABNB4.tor-irc.dnsbl.oftc.net) Quit ()
[17:37] * Arcturus (~storage@Relay-J.tor-exit.network) has joined #ceph
[17:38] * linjan (~linjan@86.62.112.22) Quit (Ping timeout: 480 seconds)
[17:39] * kefu (~kefu@183.193.162.205) Quit (Ping timeout: 480 seconds)
[17:39] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:39] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[17:40] * ivancich (~ivancich@aa2.linuxbox.com) Quit (Remote host closed the connection)
[17:41] * ivancich (~ivancich@aa2.linuxbox.com) has joined #ceph
[17:41] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[17:42] * yanzheng (~zhyan@125.70.21.216) Quit (Quit: This computer has gone to sleep)
[17:43] * bene3 (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[17:43] * Miouge_ (~Miouge@188.188.85.165) has joined #ceph
[17:44] * mattbenjamin (~mbenjamin@173-165-86-195-Illinois.hfc.comcastbusiness.net) has joined #ceph
[17:45] * Miouge (~Miouge@188.189.65.14) Quit (Ping timeout: 480 seconds)
[17:45] * Miouge_ is now known as Miouge
[17:45] * analbeard (~shw@support.memset.com) Quit (Ping timeout: 480 seconds)
[17:46] <jdillaman> georgem: rbd will remove objects as they are freed (discards, shrink the image, remove the image, etc)
[17:47] <georgem> jdillaman: so right away basically, when I remove the image..
[17:48] <jdillaman> georgem: correct
[17:48] * Concubidated (~cube@c-50-173-245-118.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:49] * raarts (~Adium@82-171-243-109.ip.telfort.nl) has joined #ceph
[17:50] <georgem> jdillaman: another question, any hint where I should look if formatting a 10 TB attached cinder volume backed by Ceph takes forever? the mkfs.xfs process has been in D state for many hours, I don't see the number of rados objects in the pool increasing and no traffic over the network from the compute node where the instance formatting the volume is
[17:50] * overclk_ (~quassel@117.202.99.185) has joined #ceph
[17:50] * linjan (~linjan@86.62.112.22) has joined #ceph
[17:52] * ade (~abradshaw@193.43.158.229) has joined #ceph
[17:52] <jdillaman> georgem: do you have an Ceph admin socket for that VM in question?
[17:52] <joshd1> georgem: possibly due to trying to discard the whole disk in tiny chunks - try mkfs.xfs -K
[17:53] * overclk (~quassel@117.202.103.162) Quit (Ping timeout: 480 seconds)
[17:53] * derjohn_mob (~aj@46.189.28.37) Quit (Ping timeout: 480 seconds)
[17:53] <georgem> jdillaman: no, but I can get one; joshd: thanks, I'll try that
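Spelled out, joshd's suggestion is a single flag; /dev/vdb is a placeholder for how the cinder volume shows up inside the guest:

    mkfs.xfs -K /dev/vdb   # -K skips the mkfs-time discard pass, which is what crawls on a 10 TB rbd volume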
[17:55] * sankarshan (~sankarsha@121.244.87.117) Quit (Quit: Are you sure you want to quit this channel (Cancel/Ok) ?)
[17:58] * squizzi_ (~squizzi@nat-pool-rdu-t.redhat.com) has joined #ceph
[17:59] <mtb`> random question: has anyone run into objects being unretrievable from radosgw?
[17:59] <mtb`> context: i have a k=7 m=2 erasure coded ceph cluster, and I seem to have found a number of 'magic' filesizes that when uploaded via s3 are unretrievable. i've been able to reproduce this across 3 separate clusters with different versions of hammer (v0.94.3 & v0.94.6)
[18:00] * squizzi (~squizzi@107.13.31.195) Quit (Remote host closed the connection)
[18:00] * garphy is now known as garphy`aw
[18:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[18:01] * AluAlu1 (~Grimmer@7V7AAEBF4.tor-irc.dnsbl.oftc.net) Quit ()
[18:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[18:01] * InIMoeK (~InIMoeK@95.170.93.16) Quit (Ping timeout: 480 seconds)
[18:04] * blizzow (~jburns@50.243.148.102) has joined #ceph
[18:07] * Arcturus (~storage@06SAAB2NL.tor-irc.dnsbl.oftc.net) Quit ()
[18:07] * Spikey (~rcfighter@4MJAAEN6P.tor-irc.dnsbl.oftc.net) has joined #ceph
[18:07] * pabluk_ is now known as pabluk__
[18:08] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[18:08] * yanzheng (~zhyan@125.70.21.216) has joined #ceph
[18:08] * karnan (~karnan@106.51.142.65) has joined #ceph
[18:09] * Concubidated (~cube@c-50-173-245-118.hsd1.ca.comcast.net) has joined #ceph
[18:09] * owasserm (~owasserm@bzq-82-81-161-50.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[18:11] * overclk (~quassel@117.202.99.53) has joined #ceph
[18:11] * joshd1 (~jdurgin@71-92-201-212.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[18:12] * TMM (~hp@185.5.122.2) Quit (Quit: Ex-Chat)
[18:14] * overclk_ (~quassel@117.202.99.185) Quit (Ping timeout: 480 seconds)
[18:16] * yanzheng (~zhyan@125.70.21.216) Quit (Quit: This computer has gone to sleep)
[18:19] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) Quit (Quit: Leaving.)
[18:20] * Miouge (~Miouge@188.188.85.165) Quit (Quit: Miouge)
[18:20] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) Quit (Quit: Leaving)
[18:20] * rraja (~rraja@121.244.87.117) Quit (Quit: Leaving)
[18:20] * valeech (~valeech@50-205-143-162-static.hfc.comcastbusiness.net) Quit (Quit: valeech)
[18:22] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[18:24] * codice (~toodles@75-128-34-237.static.mtpk.ca.charter.com) Quit (Remote host closed the connection)
[18:24] * kefu_ is now known as kefu
[18:25] * evelu (~erwan@46.231.131.178) Quit (Ping timeout: 480 seconds)
[18:26] * linjan (~linjan@86.62.112.22) Quit (Ping timeout: 480 seconds)
[18:30] * derjohn_mob (~aj@88.128.82.11) has joined #ceph
[18:31] * airsoftglock (~storage@tor-exit.eecs.umich.edu) has joined #ceph
[18:32] <sugoruyo> hey folks, I have 4 PGs stuck incomplete and I'm trying to figure out how to recover them without losing the data, can anyone help me with this?
[18:33] <blizzow> How do I copy/convert an image in my RBD pool to format 2 from format 1?
[18:35] * kefu is now known as kefu|afk
[18:35] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[18:37] * Spikey (~rcfighter@4MJAAEN6P.tor-irc.dnsbl.oftc.net) Quit ()
[18:37] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[18:37] * danielsj (~legion@tor-exit4-readme.dfri.se) has joined #ceph
[18:38] * yanzheng (~zhyan@125.70.21.216) has joined #ceph
[18:39] * yanzheng (~zhyan@125.70.21.216) Quit ()
[18:40] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:40] * i_m (~ivan.miro@deibp9eh1--blueice4n5.emea.ibm.com) Quit (Read error: Connection reset by peer)
[18:42] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[18:43] <m0zes> blizzow: currently it is an offline process. http://cephnotes.ksperis.com/blog/2013/07/30/convert-rbd-to-format-v2
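The linked post boils down to an export/import pipe; the pool/image names below are placeholders, and the image must be unused while it is copied:

    rbd export rbd/myimage - | rbd import --image-format 2 - rbd/myimage_v2
    rbd rm rbd/myimage                      # once the new image is verified
    rbd rename rbd/myimage_v2 rbd/myimage   # take over the old name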
[18:49] * ade (~abradshaw@193.43.158.229) Quit (Ping timeout: 480 seconds)
[18:53] * mattbenjamin (~mbenjamin@173-165-86-195-Illinois.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[18:55] <blizzow> m0zes: gross :( but thanks! :)
[18:56] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) Quit (Quit: Leaving.)
[19:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[19:01] * airsoftglock (~storage@6AGAABNFB.tor-irc.dnsbl.oftc.net) Quit ()
[19:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[19:04] * yanzheng (~zhyan@125.70.21.216) has joined #ceph
[19:05] <sugoruyo> anyone who can help figure out how to get PGs out of incomplete state?
[19:06] * BrianA (~BrianA@fw-rw.shutterfly.com) has joined #ceph
[19:07] * danielsj (~legion@06SAAB2PT.tor-irc.dnsbl.oftc.net) Quit ()
[19:07] * hyst (~hyst@marcuse-1.nos-oignons.net) has joined #ceph
[19:07] <xcezzz> sugoruyo: what is your replica size?
[19:08] * nwf (~nwf@00018577.user.oftc.net) has joined #ceph
[19:09] * ibravo (~ibravo@72.83.69.64) Quit (Quit: Leaving)
[19:10] * a1-away (~jelle@62.27.85.48) Quit (Server closed connection)
[19:10] * wjw-freebsd2 (~wjw@176.74.240.1) Quit (Ping timeout: 480 seconds)
[19:10] * a1-away (~jelle@62.27.85.48) has joined #ceph
[19:11] * delaf (~delaf@legendary.xserve.fr) Quit (Server closed connection)
[19:12] * delaf (~delaf@legendary.xserve.fr) has joined #ceph
[19:13] <PoRNo-MoRoZ> xcezzz :D
[19:13] <xcezzz> howdy lol
[19:13] <PoRNo-MoRoZ> standart question as i can see :DD
[19:13] <PoRNo-MoRoZ> about replica
[19:13] * IvanJobs (~hardes@183.192.73.116) Quit (Quit: Leaving)
[19:14] <PoRNo-MoRoZ> i'm okay
[19:14] <PoRNo-MoRoZ> aswell as my cluster :D
[19:15] * yanzheng (~zhyan@125.70.21.216) Quit (Quit: This computer has gone to sleep)
[19:16] <sugoruyo> xcezzz: it's a 3 copy pool with RBD images
[19:18] <xcezzz> sugoruyo: whats min_size?? `ceph osd pool get <poolname> min_size`
[19:20] * DanFoster (~Daniel@office.34sp.com) Quit (Quit: Leaving)
[19:22] * itwasntandy (~andrew@bash.sh) Quit (Server closed connection)
[19:22] * itwasntandy (~andrew@bash.sh) has joined #ceph
[19:25] * kefu|afk (~kefu@114.92.122.74) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[19:25] * Peltzi (peltzi@peltzi.fi) Quit (Server closed connection)
[19:25] * Peltzi (peltzi@peltzi.fi) has joined #ceph
[19:27] * yankcrime (~yankcrime@185.43.216.241) Quit (Quit: Textual IRC Client: www.textualapp.com)
[19:29] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[19:29] <georgem> joshd1: the "-K" works great, it only took 3 min to format the 10 TB with XFS
[19:29] * shylesh__ (~shylesh@45.124.226.134) has joined #ceph
[19:29] * cholcombe (~chris@12.39.178.125) Quit (Ping timeout: 480 seconds)
[19:30] <joshd> georgem: glad to hear it
[19:31] * Gibri (~aldiyen@nl3x.mullvad.net) has joined #ceph
[19:31] * mattbenjamin (~mbenjamin@173-165-86-195-Illinois.hfc.comcastbusiness.net) has joined #ceph
[19:33] <sugoruyo> xcezzz: min_size is 2, I tried lowering it to 1 but it didn't help
[19:35] * IcePic (~jj@c66.it.su.se) Quit (Server closed connection)
[19:35] * IcePic (~jj@c66.it.su.se) has joined #ceph
[19:35] <sugoruyo> there's a host that has been having problems and we've taken it out, but one of its OSDs was primary on 4 PGs; taking that OSD down+out caused those PGs to change all of their OSDs.
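Hedged first steps for a PG stuck incomplete after an OSD removal; 4.2f is a placeholder pgid:

    ceph pg dump_stuck inactive   # list the stuck PGs and their acting sets
    ceph pg 4.2f query            # inspect recovery_state for peering blockers such as
                                  # down_osds_we_would_probe naming the removed OSD
    # if peering only blocks on the dead OSD, briefly reviving it, or as a last resort
    # `ceph osd lost <id>`, is typically what lets an incomplete PG finish peering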
[19:36] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Quit: Bye guys!)
[19:37] * hyst (~hyst@06SAAB2Q4.tor-irc.dnsbl.oftc.net) Quit ()
[19:37] * pakman__ (~cheese^@static-ip-85-25-103-119.inaddr.ip-pool.com) has joined #ceph
[19:37] * rendar (~I@95.238.176.76) Quit (Ping timeout: 480 seconds)
[19:38] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[19:39] * rendar (~I@95.238.176.76) has joined #ceph
[19:40] * squizzi_ is now known as squizzi
[19:41] * saltsa (~joonas@dsl-hkibrasgw1-58c018-65.dhcp.inet.fi) Quit (Server closed connection)
[19:41] * saltsa (~joonas@dsl-hkibrasgw1-58c018-65.dhcp.inet.fi) has joined #ceph
[19:42] * billwebb (~billwebb@50-203-47-138-static.hfc.comcastbusiness.net) has joined #ceph
[19:43] * billwebb (~billwebb@50-203-47-138-static.hfc.comcastbusiness.net) Quit ()
[19:44] * billwebb (~billwebb@50-203-47-138-static.hfc.comcastbusiness.net) has joined #ceph
[19:44] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[19:44] * valeech (~valeech@pool-108-44-162-111.clppva.fios.verizon.net) has joined #ceph
[19:47] * derjohn_mob (~aj@88.128.82.11) Quit (Ping timeout: 480 seconds)
[19:49] * mykola (~Mikolaj@91.245.73.44) has joined #ceph
[19:50] * mykola (~Mikolaj@91.245.73.44) Quit (Remote host closed the connection)
[19:50] * mykola (~Mikolaj@91.245.73.44) has joined #ceph
[19:52] * shylesh__ (~shylesh@45.124.226.134) Quit (Remote host closed the connection)
[19:55] * alram (~alram@192.41.52.12) has joined #ceph
[19:57] * liiwi (liiwi@idle.fi) Quit (Server closed connection)
[19:58] * bene2 (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[19:59] * barra204 (~barra204@50.250.250.45) has joined #ceph
[19:59] * barra204 (~barra204@50.250.250.45) Quit ()
[19:59] * barra204 (~barra204@50.250.250.45) has joined #ceph
[20:00] * bene3 (~bene@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[20:00] * barra204 (~barra204@50.250.250.45) Quit ()
[20:00] * barra204 (~barra204@50.250.250.45) has joined #ceph
[20:00] * ibravo (~ibravo@72.83.69.64) has joined #ceph
[20:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[20:01] * Gibri (~aldiyen@06SAAB2RY.tor-irc.dnsbl.oftc.net) Quit ()
[20:01] * offender (~Throlkim@166.70.207.2) has joined #ceph
[20:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[20:02] * pabluk__ is now known as pabluk_
[20:02] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Server closed connection)
[20:02] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[20:02] * mattbenjamin (~mbenjamin@173-165-86-195-Illinois.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[20:03] * pabluk_ is now known as pabluk__
[20:03] * mattbenjamin (~mbenjamin@173-165-86-195-Illinois.hfc.comcastbusiness.net) has joined #ceph
[20:04] * dsl (~dsl@204.155.27.220) Quit (Quit: Leaving...)
[20:07] * pakman__ (~cheese^@7V7AAEBJK.tor-irc.dnsbl.oftc.net) Quit ()
[20:07] * Xa (~datagutt@tor4thepeople1.torexitnode.net) has joined #ceph
[20:07] * hoonetorg (~hoonetorg@77.119.226.254.static.drei.at) Quit (Remote host closed the connection)
[20:10] * marioskogias (~marioskog@tsf-484-wpa-2-188.epfl.ch) has joined #ceph
[20:11] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[20:11] * marioskogias (~marioskog@tsf-484-wpa-2-188.epfl.ch) has left #ceph
[20:15] * marioskogias (~marioskog@tsf-484-wpa-2-188.epfl.ch) has joined #ceph
[20:15] * marioskogias (~marioskog@tsf-484-wpa-2-188.epfl.ch) Quit ()
[20:20] * overclk (~quassel@117.202.99.53) Quit (Remote host closed the connection)
[20:22] <kfox1111> question... I went to the summit and discussed getting refstack to test radosgw equally with swift. they said they can't since radosgw isn't an openstack project. Now that languages other than python are starting to become acceptable to the openstack community, is it reasonable to put radosgw under the big tent?
[20:23] * hoonetorg (~hoonetorg@77.119.226.254.static.drei.at) has joined #ceph
[20:26] * Svedrin (svedrin@elwing.funzt-halt.net) Quit (Ping timeout: 480 seconds)
[20:27] * Georgyo (~georgyo@2600:3c03:e000:71::cafe:3) Quit (Quit: No Ping reply in 180 seconds.)
[20:27] * thesix (~thesix@leifhelm.mur.at) Quit (Ping timeout: 480 seconds)
[20:28] * Georgyo (~georgyo@shamm.as) has joined #ceph
[20:28] * thesix (~thesix@leifhelm.mur.at) has joined #ceph
[20:29] * tacticus (~tacticus@2400:8900::f03c:91ff:feae:5dcd) Quit (Ping timeout: 480 seconds)
[20:29] * guerby (~guerby@ip165.tetaneutral.net) Quit (Ping timeout: 480 seconds)
[20:29] * tacticus (~tacticus@2400:8900::f03c:91ff:feae:5dcd) has joined #ceph
[20:29] * bla (~b.laessig@chimeria.ext.pengutronix.de) Quit (Ping timeout: 480 seconds)
[20:29] * jayjay (~jayjay@2a00:f10:121:400:444:3cff:fe00:4bc) Quit (Ping timeout: 480 seconds)
[20:31] * bla (~b.laessig@chimeria.ext.pengutronix.de) has joined #ceph
[20:31] * offender (~Throlkim@76GAAE14J.tor-irc.dnsbl.oftc.net) Quit ()
[20:31] * rogst (~clusterfu@216.230.148.77) has joined #ceph
[20:31] * jayjay (~jayjay@185.27.175.112) has joined #ceph
[20:37] * Xa (~datagutt@4MJAAEOAP.tor-irc.dnsbl.oftc.net) Quit ()
[20:38] * danieagle (~Daniel@189.0.86.76) has joined #ceph
[20:39] * vata (~vata@207.96.182.162) Quit (Remote host closed the connection)
[20:40] * guerby (~guerby@ip165.tetaneutral.net) has joined #ceph
[20:42] * Solvius (~SaneSmith@146.0.43.126) has joined #ceph
[20:43] * ktdreyer (~kdreyer@polyp.adiemus.org) Quit (Server closed connection)
[20:43] * Svedrin (svedrin@elwing.funzt-halt.net) has joined #ceph
[20:46] * sfrode (frode@sandholtbraaten.com) Quit (Server closed connection)
[20:46] * sfrode (frode@sandholtbraaten.com) has joined #ceph
[20:51] * linjan (~linjan@176.195.204.116) has joined #ceph
[20:53] * frickler (~jens@v1.jayr.de) Quit (Server closed connection)
[20:53] * frickler (~jens@v1.jayr.de) has joined #ceph
[20:59] <grauzikas> Hello, i have one question about how best to set up a ceph server. i want to use two or 3 raid controllers and 16-24 ssd drives in total (depends on how many raid controllers i'll add), and i want to make raid10, so which method is better:
[20:59] <grauzikas> create on each raid controller raid10 (8 ssd drives on each controller) and add them to ceph as single hard drive
[20:59] <grauzikas> create on each raid controller raid10 and use software raid to... dont know :) i think this isnt good idea
[20:59] <grauzikas> any other ideas? i want to create ceph for a private cloud, raid10 for performance and security. i think i'm gonna use non-sas ssd drives (because sas ssd drives are really expensive)
[20:59] <grauzikas> Thank you all for reading my post :)
[21:00] <grauzikas> i`m planning to use 12gb/s raid controllers from LSI with 8 internal ports and i dont want to use expanders
[21:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[21:01] * rogst (~clusterfu@4MJAAEOBG.tor-irc.dnsbl.oftc.net) Quit ()
[21:01] * Pommesgabel (~OODavo@199.68.196.124) has joined #ceph
[21:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[21:02] <grauzikas> planning to use lsi 9361-8i
[21:03] <m0zes> why are you using raid at all? ceph should be handling that for you.
[21:04] <m0zes> I would export all the ssds as jbod, and run a ceph-osd service per disk (perhaps 2 or 4 per ssd if they are *fast*)
[21:05] <grauzikas> i think that with a raid controller and BBU there will be better performance, because this works in a hardware way, also replacement of disks i think should work better, i`m not an expert with ceph thats why i`m asking
[21:06] <grauzikas> also in motherboard there no that much sata connections
[21:06] <s3an2> grauzikas, does your RAID card support JBOD with BBU?
[21:07] * daiver (~daiver@95.85.8.93) has joined #ceph
[21:08] <m0zes> there are situations that the cache on the raid controller can help with *bursts* of writes. some raid hbas disable the cache with jbod mode. you can, of course, create "single-disk" raid-0 arrays to emulate jbod without throwing out the onboard cache.
[21:08] <grauzikas> exactly i`m planning to buy raid controllers only, so i can buy any of them
[21:08] <grauzikas> till today i was planning to buy
[21:08] <grauzikas> http://www.avagotech.com/products/server-storage/raid-controllers/megaraid-sas-9361-8i#specifications
[21:09] * garphy`aw is now known as garphy
[21:09] <m0zes> anyway, ceph is best suited to handling the entire replication stack, as it can be taught about the failure domains and allow it to accomodate those into the placement of the replicas of objects. i.e. keeping multiple copies of the same object off the same host.
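Concretely, the replication m0zes describes lives in pool settings and the CRUSH map rather than in controller firmware; the rbd pool here is only an example:

    ceph osd pool set rbd size 3       # keep three copies of every object
    ceph osd pool set rbd min_size 2   # keep serving I/O while one copy is missing
    # the default CRUSH rule already spreads replicas across hosts:
    #   step chooseleaf firstn 0 type host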
[21:12] * Solvius (~SaneSmith@4MJAAEOBU.tor-irc.dnsbl.oftc.net) Quit ()
[21:12] * Aal (~Sirrush@edwardsnowden1.torservers.net) has joined #ceph
[21:12] <grauzikas> i want to create an openstack private cloud which will have 12 dual cpu servers, i`m planning to use a simple ethernet switch 48x10gbe and 40gbe uplink (to which ceph will be connected) and everything i have, and i dont want to buy an expensive san server, so ceph looks really good for this job
[21:13] <grauzikas> but i need raid10 or any other solution to keep data really secure (also i`m planning to create a backup or redundancy with hdds)
[21:14] <grauzikas> and i need high I/O performance (not only MB/s but iops too) thats why i want to use ssd drives
[21:15] <m0zes> ceph has replication built-in
[21:16] <grauzikas> and i dont want to use CPU cache or servers memory to take load for drives
[21:16] <grauzikas> servers CPU cache and memory
[21:16] <ira> grauzikas: Any co-located storage solution will.
[21:18] <grauzikas> ok so your offers is just connect ssd drives directly to motherboard and use them in this way and create everything with ceph?
[21:19] <ira> I'd tend to go that way... You are going to give up some performance vs. a local raid array with any networked solution.
[21:20] <ira> Also getting good SSDs will help ;).
[21:21] * bniver (~bniver@nat-pool-bos-u.redhat.com) Quit (Remote host closed the connection)
[21:22] <grauzikas> i was thinking to create a raid10 on each of the controllers and add those two raid controllers to ceph as simple drives. You think this is not a good idea? if there will be 4 raid controllers, what then? same thing?
[21:22] <xcezzz> grauzikas: check out cern's ceph presentations
[21:22] <xcezzz> grauzikas: they go into their setup with external jbod setups for a massive ceph cluster
[21:22] <xcezzz> to run openstack/cinder etc type volumes
[21:22] <ira> grauzikas: I wouldn't bother with raid with ceph. Let ceph do its job.
[21:23] <xcezzz> grauzikas: ya dont bother with raid
[21:23] <T1> grauzikas: ceph is a way of eliminating the need for expensive raid controllers
[21:23] <T1> and yes - do not use raid for OSDs
[21:23] * daiver_ (~daiver@95.85.8.93) has joined #ceph
[21:23] <ira> I might do it JUST on the boot volume because recovering it is a PITA. But that's the only volume I'd raid ;).
[21:23] <xcezzz> https://www.youtube.com/watch?v=OopRMUYiY5E
[21:23] <grauzikas> interesting :) but then i need expanders and they cost almost the same price
[21:24] <grauzikas> raid controller you can find on ebay 12gb/s for ~400 eur
[21:24] <T1> expanders are cheap
[21:24] <ira> vs. a good server... they are cheap. Or they should be.
[21:24] <grauzikas> if the motherboard has sata3 you cant connect tons of ssd drives to one sata3 port
[21:25] * pabluk__ (~pabluk@laf94-2-78-193-105-62.fbxo.proxad.net) Quit (Server closed connection)
[21:25] <T1> yes you can
[21:25] <T1> breakout cables
[21:25] * pabluk__ (~pabluk@2a01:e34:ec16:93e0:5171:fa74:26b3:7db4) has joined #ceph
[21:25] <xcezzz> ya… 4:1
[21:25] <T1> easily 4 or 8 drives per connector
[21:25] <grauzikas> ok thank you all, you helped to me very much :) again rading and planing :)
[21:25] <grauzikas> reading*
[21:26] <T1> and do yourself a favor..
[21:26] <xcezzz> there's really no need to even raid boot volumes…
[21:26] <T1> forget EVERYTHING about using cache
[21:26] <xcezzz> just net boot everything
[21:26] <ira> xcezzz: If you have the config for it... sure ;)
[21:26] <T1> do not use write cache on anything
[21:26] <xcezzz> ^^
[21:26] <T1> it WILL result in data loss at some point
[21:26] * wwdillingham (~LobsterRo@65.112.8.139) Quit (Quit: wwdillingham)
[21:27] <grauzikas> T1 ok, thank you
[21:28] <T1> the only place you might benefit a little is write cache on an ssd-based journal that has power loss protection (ie. built-in capacitors that ensure everything written to the ssd's cache WILL be written to flash before it loses all power)
[21:28] * lookcrabs (~lookcrabs@tail.seeee.us) Quit (Read error: Connection reset by peer)
[21:28] <T1> .. and even then you might not see any performance increase at all
[21:28] <T1> (I didn't, using Intel S3710 drives)
[21:28] <ira> T1: Also if the SSD has proper caps, you'll not write amplify as badly...
[21:28] <grauzikas> you mean the ssd does the same job as a BBU?
[21:28] <T1> NO
[21:29] <T1> it's nowhere near the same
[21:29] <ira> T1 has it right.
[21:29] <xcezzz> journal = fault point
[21:29] <xcezzz> for any and all OSDs that use it
[21:29] * Pommesgabel (~OODavo@6AGAABNLS.tor-irc.dnsbl.oftc.net) Quit (Ping timeout: 480 seconds)
[21:29] <T1> a bbu can hold data in the ram-based cache for some time - perhaps long enough for you to react and replace it
[21:30] <T1> but a ssd with power loss prevention will only hold power on the ram-based cache long enough for the controller to write everything to flash before it too loses power
[21:30] <T1> it's done with capacitors
[21:30] * daiver (~daiver@95.85.8.93) Quit (Ping timeout: 480 seconds)
[21:31] <T1> they do not hold power for much more than a few seconds (probably < 1 sec when power is being drawn by data being written to flash on that tiny amount of current)
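On Linux, the volatile write cache T1 is warning about can be checked and switched off per drive; a hedged sketch for a SATA device (the device name is illustrative, and the setting does not survive a reboot, so it is usually reapplied via a udev rule or init script):

    # show the current write-cache setting
    hdparm -W /dev/sdb

    # disable the volatile write cache
    hdparm -W 0 /dev/sdb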
[21:32] <mtb`> has anyone run into objects being unretrievable from radosgw after a successful upload
[21:32] <grauzikas> i see, thank you for the information. now i need to read what you posted again and again, and then i'll get everything
[21:32] <grauzikas> again thank you
[21:33] <T1> on a side note, Dell's latest largish raid controllers have 1 or 2 GB of non-volatile cache that can survive power loss for many months
[21:33] <T1> there is a battery on the controller. but if power fails the battery only has to last as long as it takes to write the data from ram-based cache to an onboard SD card
[21:33] <T1> .. and then it powers down
[21:34] * wwdillingham (~LobsterRo@ip814bf84.g.packetsurge.net) has joined #ceph
[21:34] <T1> then that SD card can be moved to a new controller and powered up, and the cache is read in from the SD card and applied to the disks
[21:35] * fouxm (~foucault@ks01.commit.ninja) Quit (Server closed connection)
[21:35] * branto (~branto@178.253.163.131) has joined #ceph
[21:36] * fouxm (~foucault@ks01.commit.ninja) has joined #ceph
[21:42] * Aal (~Sirrush@4MJAAEOCT.tor-irc.dnsbl.oftc.net) Quit ()
[21:45] * Guest8815 (~quassel@109.74.11.233) Quit (Server closed connection)
[21:45] * kiranos (~quassel@109.74.11.233) has joined #ceph
[21:51] <wwdillingham> jdillaman: after force promoting the backup rbd images and then deleting them to recover from my split brain situation, I was able to get the daemon to resync the rbd images. however, I have a new error; I haven't touched the pools, so I'm not sure what started causing the errors: https://paste.fedoraproject.org/362681/62390472/
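The recovery wwdillingham describes maps roughly onto the Jewel-era rbd-mirror commands; a hedged sketch, with illustrative cluster, pool, and image names:

    # on the backup cluster: forcibly take ownership of the stale replica
    rbd --cluster backup mirror image promote --force rbd/myimage

    # once a primary is authoritative again, demote the local copy
    # and request a full resync from the peer
    rbd --cluster backup mirror image demote rbd/myimage
    rbd --cluster backup mirror image resync rbd/myimage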
[21:53] * BrianA (~BrianA@fw-rw.shutterfly.com) Quit (Read error: Connection reset by peer)
[21:55] * cholcombe (~chris@12.39.178.125) has joined #ceph
[21:56] <jdillaman> wwdillingham: hmm ...
[21:58] <wwdillingham> that's with debug level 20
[22:00] <jdillaman> wwdillingham: any chance you can run "rados --pool <pool name> --cluster <backup cluster> ls | grep -v rbd_data | sort" and pastebin it for me?
[22:00] <wwdillingham> jdillaman: of course, 1 sec
[22:00] <mtb`> anyone running a k=7/m=2 erasure coded cluster?
[22:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[22:01] <jdillaman> wwdillingham: it looks like it tried to open the local image, found it didn't exist, so it tries to create it but hits a collision with an existing image of the same name
[22:01] * haomaiwang (~haomaiwan@106.187.51.170) has joined #ceph
[22:02] * karnan (~karnan@106.51.142.65) Quit (Quit: Leaving)
[22:02] * davidzlap1 (~Adium@2605:e000:1313:8003:514c:5b05:5df9:c7b9) has joined #ceph
[22:03] <wwdillingham> jdillaman: https://paste.fedoraproject.org/362703/23922031/
[22:04] * TehZomB (~sese_@185.100.87.73) has joined #ceph
[22:05] <wwdillingham> jdillaman: yea that raises a question: what happens if an rbd device is created, then mirrored, then deleted from the primary, and then a totally different new device is created?
[22:05] <wwdillingham> created with the same name
[22:06] <jdillaman> wwdillingham: the goal is to propagate the deletion to the peer, but that isn't included in 10.2.0
[22:07] * libracious (~libraciou@catchpenny.cf) Quit (Remote host closed the connection)
[22:07] <jdillaman> wwdillingham: did you delete an image and recreate it after it replicated?
[22:07] * davidzlap (~Adium@2605:e000:1313:8003:90f5:10a4:d675:6c9d) Quit (Ping timeout: 480 seconds)
[22:08] <wwdillingham> jdillaman: I did not
[22:08] <jdillaman> wwdillingham: if not, can you run "rados --pool <pool> --cluster <primary cluster> getomapval rbd_directory id_3406a879e2a9e3"
[22:08] * cholcombe (~chris@12.39.178.125) Quit (Ping timeout: 480 seconds)
[22:09] <wwdillingham> jdillaman: do i run any danger of impacting an rbd device with that command?
[22:09] <jdillaman> no ... it's a R/O command to pull a reverse lookup from the directory (image id -> name)
[22:09] <wwdillingham> jdillaman: ok, 1 sec
[22:10] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[22:10] <wwdillingham> [root@ceph-mon02 tmp]# rados --pool ox60_root_disk --cluster ceph getomapval rbd_directory id_3406a879e2a9e3
[22:10] <wwdillingham> value (16 bytes) :
[22:10] <wwdillingham> 00000000 0c 00 00 00 6f 6e 65 2d 33 30 2d 33 37 38 2d 30 |....one-30-378-0|
[22:10] <wwdillingham> 00000010
[22:11] <jdillaman> wwdillingham: thx -- so what happens when you run "rbd --pool <pool> --cluster <backup cluster> info one-30-378-0"?
[22:11] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[22:12] * Jamana (~DougalJac@tor-relay.zwiebeltoralf.de) has joined #ceph
[22:12] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[22:14] * amospalla (~amospalla@0001a39c.user.oftc.net) Quit (Server closed connection)
[22:14] <wwdillingham> jdillaman: https://paste.fedoraproject.org/362704/62392828/
[22:14] * branto (~branto@178.253.163.131) Quit (Quit: Leaving.)
[22:14] * amospalla (~amospalla@0001a39c.user.oftc.net) has joined #ceph
[22:14] <jdillaman> wwdillingham: odd -- so definitely able to successfully open the image
[22:15] <jdillaman> wwdillingham: are these rbd-mirror errors repeating every 30 seconds or so?
[22:15] * mykola (~Mikolaj@91.245.73.44) Quit (Quit: away)
[22:15] <wwdillingham> Oh no
[22:15] <wwdillingham> I think i know my problem
[22:15] <wwdillingham> damn, I may have wasted your time
[22:15] <jdillaman> two rbd-mirror daemons running?
[22:16] <wwdillingham> i'm running rbd-mirror as the wrong user
[22:16] <wwdillingham> let me check :(
[22:16] <wwdillingham> no, thats not it
[22:16] <wwdillingham> i am running it as the same user i used to get the rbd info command you gave me
[22:17] <wwdillingham> jdillaman: I am only running the daemon on my backup cluster
[22:17] <wwdillingham> per your advice yesterday
[22:17] <jdillaman> wwdillingham: k
[22:17] <jdillaman> wwdillingham: is the daemon still running and outputting log entries?
[22:18] <wwdillingham> it is yes
[22:18] <jdillaman> any log entries with "ImageReplayer[2/3406a879e2a9e3]"?
[22:18] <wwdillingham> the error repeats seven times, though I have 8 objects being mirrored
[22:20] <wwdillingham> I just restarted the daemon with logging to a file, but in the past minute that string hasn't appeared in the log
[22:21] * daiver_ (~daiver@95.85.8.93) Quit (Remote host closed the connection)
[22:22] * daiver (~daiver@95.85.8.93) has joined #ceph
[22:22] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[22:22] <jdillaman> any new errors?
[22:22] <wwdillingham> jdillaman: 1/8 of the images being replicated have rbd_children; 7/8 of the mirrored images are those rbd_children. I am getting the error 7 times per 30 seconds… perhaps a correlation?
[22:22] <jdillaman> ah
[22:23] <wwdillingham> the errors generally look the same
[22:24] <jdillaman> clone replication is broken in 10.2.0
[22:25] <jdillaman> it's on my list to fix asap
[22:25] <wwdillingham> Ahh, well that means it's very problematic, as all the VMs we have are clones by design of our cloud service (OpenNebula)
[22:25] <wwdillingham> presumably for others too?
[22:26] <jdillaman> yeah -- high priority
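A clone is recognizable by its parent link, so affected images can be picked out from the CLI; a hedged sketch with illustrative pool, image, and snapshot names:

    # a clone's info output includes a "parent:" line
    rbd --pool rbd info myclone | grep parent

    # list every clone hanging off a protected snapshot
    rbd children rbd/parentimage@base-snap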
[22:26] <wwdillingham> jdillaman: that explains it, okay. is there an issue on the tracker I can watch?
[22:26] <wwdillingham> thanks for your time… let me know if I can help test anything
[22:26] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has left #ceph
[22:27] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[22:27] <jdillaman> wwdillingham: http://tracker.ceph.com/issues/14937
[22:28] <wwdillingham> much appreciated.
[22:29] <jdillaman> sorry for the churn -- we should have done a better job putting the known issues in the release notes
[22:29] <wwdillingham> jdillaman: one more thing, unrelated: what is "deep-flatten"? I can't find any explanation, and I can't seem to retroactively enable it on rbd images created pre-jewel
[22:29] <jdillaman> wwdillingham: yeah, it can only be enabled on newly created images since it changes the IO handling of cloned images
[22:30] <jdillaman> wwdillingham: basically, pre-deep-flatten when you had a copy-on-write from a parent to a clone image and had snapshots on the clone, the snapshots would still depend on the parent image
[22:30] <jdillaman> ... so if you flattened the image (with snapshots), the snapshots would still be linked to the parent
[22:31] <jdillaman> w/ deep-flatten, even with snapshots you can flatten an image and remove the link to the parent
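In Jewel the feature has to be requested when the image is created; a hedged sketch with illustrative names and sizes:

    # create an image with deep-flatten enabled (it cannot be added retroactively)
    rbd create --size 10240 --image-feature layering,deep-flatten rbd/newimage

    # with deep-flatten, flattening severs the parent link
    # even if the clone carries snapshots
    rbd flatten rbd/myclone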
[22:32] * daiver (~daiver@95.85.8.93) Quit ()
[22:33] <wwdillingham> jdillaman: that makes sense, thanks again for your help
[22:33] <jdillaman> wwdillingham: np
[22:33] * barra204 (~barra204@50.250.250.45) Quit (Ping timeout: 480 seconds)
[22:34] * TehZomB (~sese_@6AGAABNNX.tor-irc.dnsbl.oftc.net) Quit ()
[22:35] * bniver (~bniver@pool-173-48-58-27.bstnma.fios.verizon.net) has joined #ceph
[22:40] * liiwi (liiwi@idle.fi) has joined #ceph
[22:40] * The1_ (~the_one@87.104.212.66) has joined #ceph
[22:42] * Jamana (~DougalJac@06SAAB2WZ.tor-irc.dnsbl.oftc.net) Quit ()
[22:42] * shaunm (~shaunm@nat-pool-rdu-u.redhat.com) has joined #ceph
[22:47] * T1 (~the_one@87.104.212.66) Quit (Ping timeout: 480 seconds)
[22:50] * cholcombe (~chris@12.39.178.125) has joined #ceph
[22:52] * wwdillingham (~LobsterRo@ip814bf84.g.packetsurge.net) Quit (Quit: wwdillingham)
[22:53] * chiluk (~quassel@172.34.213.162.lcy-01.canonistack.canonical.com) Quit (Server closed connection)
[22:53] * chiluk (~quassel@172.34.213.162.lcy-01.canonistack.canonical.com) has joined #ceph
[22:55] * vZerberus (~dogtail@00021993.user.oftc.net) Quit (Server closed connection)
[22:55] * vZerberus (dog@msg.sys5.org) has joined #ceph
[22:55] <kfox1111> question... I went to the summit and discussed getting refstack to test radosgw equally with swift. they said they can't since radosgw isn't an openstack project. now that languages other than python are starting to become acceptable to the openstack community, is it reasonable to put radosgw up for inclusion under the big tent?
[22:58] * billwebb (~billwebb@50-203-47-138-static.hfc.comcastbusiness.net) Quit (Quit: billwebb)
[22:59] * georgem (~Adium@24.114.56.245) has joined #ceph
[23:00] * georgem (~Adium@24.114.56.245) Quit ()
[23:00] * georgem (~Adium@206.108.127.16) has joined #ceph
[23:01] * haomaiwang (~haomaiwan@106.187.51.170) Quit (Remote host closed the connection)
[23:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[23:02] * vbellur (~vijay@71.234.224.255) Quit (Ping timeout: 480 seconds)
[23:04] * Chaos_Llama (~elt@zeta.oxyl.net) has joined #ceph
[23:05] * barra204 (~barra204@50.250.250.45) has joined #ceph
[23:05] * mattbenjamin (~mbenjamin@173-165-86-195-Illinois.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[23:06] * HappyLoaf (~HappyLoaf@cpc93928-bolt16-2-0-cust133.10-3.cable.virginm.net) has joined #ceph
[23:07] * derjohn_mob (~aj@x590ccec9.dyn.telefonica.de) has joined #ceph
[23:08] * barra204 (~barra204@50.250.250.45) Quit (Remote host closed the connection)
[23:09] * barra204 (~barra204@50.250.250.45) has joined #ceph
[23:14] * mhackett (~mhack@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[23:16] * haplo37 (~haplo37@199.91.185.156) Quit (Ping timeout: 480 seconds)
[23:17] * raarts (~Adium@82-171-243-109.ip.telfort.nl) Quit (Quit: Leaving.)
[23:21] * wwdillingham (~LobsterRo@mobile-166-186-168-11.mycingular.net) has joined #ceph
[23:22] * Kilty (~LiftedKil@is.in.the.madhacker.biz) has joined #ceph
[23:25] * ibravo (~ibravo@72.83.69.64) Quit (Quit: Leaving)
[23:25] * jclm (~jclm@marriott-hotel-ottawa-yowmc.sites.intello.com) has joined #ceph
[23:26] * wwdillingham (~LobsterRo@mobile-166-186-168-11.mycingular.net) Quit (Read error: Connection reset by peer)
[23:27] <Kilty> having some problems with PG creation
[23:27] <Kilty> used juju to deploy a cluster - I have 3 mons, 3 osds, and a radosgw
[23:28] <Kilty> ceph -s on a mon node shows osds as 3 up, 3 in
[23:28] <Kilty> but all my pgs are stuck creating
[23:28] <Kilty> any pointers on where to look to troubleshoot this?
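The usual first checks for PGs stuck creating are sketched below (the pgid is illustrative); a common culprit is a CRUSH rule that cannot be satisfied, e.g. asking for more per-host replicas than there are hosts:

    ceph health detail            # names the stuck PGs and the reason
    ceph pg dump_stuck inactive   # PGs stuck creating show up as inactive
    ceph osd tree                 # verify hosts and OSDs are where CRUSH expects
    ceph pg map 0.1               # illustrative pgid: which OSDs should hold it?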
[23:31] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) Quit (Ping timeout: 480 seconds)
[23:34] * Chaos_Llama (~elt@06SAAB2YT.tor-irc.dnsbl.oftc.net) Quit ()
[23:36] * flaf (~flaf@2001:41d0:1:7044::1) Quit (Server closed connection)
[23:36] * flaf (~flaf@2001:41d0:1:7044::1) has joined #ceph
[23:40] * georgem (~Adium@206.108.127.16) Quit (Ping timeout: 480 seconds)
[23:44] * danieagle (~Daniel@189.0.86.76) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[23:53] * ifur (~osm@0001f63e.user.oftc.net) Quit (Server closed connection)
[23:53] * ifur (~osm@0001f63e.user.oftc.net) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.