#ceph IRC Log

IRC Log for 2016-05-03

Timestamps are in GMT/BST.

[0:00] * vata (~vata@207.96.182.162) Quit (Quit: Leaving.)
[0:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[0:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[0:01] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[0:05] * Scymex (~Inuyasha@4MJAAEMIT.tor-irc.dnsbl.oftc.net) has joined #ceph
[0:06] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[0:07] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[0:09] * jclm (~jclm@marriott-hotel-ottawa-yowmc.sites.intello.com) has joined #ceph
[0:12] * mog_2 (~PierreW@06SAABZ3A.tor-irc.dnsbl.oftc.net) Quit ()
[0:12] * Aal (~neobenedi@6AGAABK9F.tor-irc.dnsbl.oftc.net) has joined #ceph
[0:33] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) has joined #ceph
[0:35] * Scymex (~Inuyasha@4MJAAEMIT.tor-irc.dnsbl.oftc.net) Quit ()
[0:35] * Pettis (~Guest1390@anonymous.sec.nl) has joined #ceph
[0:36] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) Quit (Remote host closed the connection)
[0:41] * badone (~badone@66.187.239.16) has joined #ceph
[0:42] * Aal (~neobenedi@6AGAABK9F.tor-irc.dnsbl.oftc.net) Quit ()
[0:43] * bene2 (~bene@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[0:45] * xarses (~xarses@64.124.158.100) Quit (Ping timeout: 480 seconds)
[0:56] * post-factum (~post-fact@vulcan.natalenko.name) Quit (Read error: Connection reset by peer)
[0:57] * post-factum (~post-fact@vulcan.natalenko.name) has joined #ceph
[0:59] * georgem (~Adium@206.108.127.16) has joined #ceph
[1:00] * ircolle (~Adium@2601:285:201:633a:4cd8:3cde:b655:18ac) Quit (Quit: Leaving.)
[1:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[1:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[1:04] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[1:05] * Pettis (~Guest1390@4MJAAEMJW.tor-irc.dnsbl.oftc.net) Quit ()
[1:05] * Da_Pineapple (~Shesh@6AGAABLA2.tor-irc.dnsbl.oftc.net) has joined #ceph
[1:05] * vata (~vata@cable-21.246.173-197.electronicbox.net) has joined #ceph
[1:06] * rendar (~I@host38-182-dynamic.12-79-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[1:08] * kevinc (~kevinc__@client64-174.sdsc.edu) Quit (Quit: Leaving)
[1:12] * Maza (~Zyn@4MJAAEMKY.tor-irc.dnsbl.oftc.net) has joined #ceph
[1:15] * rburkholder (~overonthe@199.68.193.54) Quit (Read error: Connection reset by peer)
[1:15] * dynamicudpate (~overonthe@199.68.193.54) Quit (Write error: connection closed)
[1:16] * dynamicudpate (~overonthe@199.68.193.54) has joined #ceph
[1:16] * rburkholder (~overonthe@199.68.193.62) has joined #ceph
[1:17] * mhack (~mhack@66-168-117-78.dhcp.oxfr.ma.charter.com) Quit (Remote host closed the connection)
[1:17] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[1:20] * csoukup (~csoukup@2605:a601:9c8:6b00:8994:8094:ee36:8f24) has joined #ceph
[1:27] * LeaChim (~LeaChim@host86-150-161-6.range86-150.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[1:35] * Da_Pineapple (~Shesh@6AGAABLA2.tor-irc.dnsbl.oftc.net) Quit ()
[1:35] * Jones (~cyphase@tor-exit1-readme.dfri.se) has joined #ceph
[1:35] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[1:37] * csoukup (~csoukup@2605:a601:9c8:6b00:8994:8094:ee36:8f24) Quit (Ping timeout: 480 seconds)
[1:40] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[1:41] * rdias (~rdias@bl7-92-98.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[1:42] * Maza (~Zyn@4MJAAEMKY.tor-irc.dnsbl.oftc.net) Quit ()
[1:43] * rdias (~rdias@2001:8a0:749a:d01:d5d8:718:fe40:4d73) has joined #ceph
[1:44] * fsimonce (~simon@87.13.130.124) Quit (Quit: Coyote finally caught me)
[2:00] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[2:01] * oms101 (~oms101@p20030057EA06F800C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[2:05] * Jones (~cyphase@06SAABZ7Z.tor-irc.dnsbl.oftc.net) Quit ()
[2:05] * notarima (~nih@85.159.237.210) has joined #ceph
[2:08] * shohn (~shohn@dslb-188-102-024-152.188.102.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[2:09] * oms101 (~oms101@p20030057EA084400C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[2:12] * Kakeru (~MJXII@62.102.148.67) has joined #ceph
[2:17] * wushudoin (~wushudoin@2601:646:8202:5ed0:2ab2:bdff:fe0b:a6ee) Quit (Ping timeout: 480 seconds)
[2:27] * huangjun (~kvirc@113.57.168.154) has joined #ceph
[2:27] * wwdillingham (~LobsterRo@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[2:29] <wwdillingham> I periodically get this when running rbd-mirror; is this a normal, expected notification? librbd::object_map::LockRequest: failed to lock object map: (17) File exists
[2:30] * fint (~dr@181.176.39.53) has joined #ceph
[2:31] * yanzheng (~zhyan@118.116.113.70) has joined #ceph
[2:32] * fint (~dr@181.176.39.53) Quit (Quit: Saliendo)
[2:32] * fint (~dr@181.176.39.53) has joined #ceph
[2:35] * notarima (~nih@6AGAABLCJ.tor-irc.dnsbl.oftc.net) Quit ()
[2:35] * Aethis (~rcfighter@06SAABZ93.tor-irc.dnsbl.oftc.net) has joined #ceph
[2:37] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[2:42] * Kakeru (~MJXII@76GAAE0LO.tor-irc.dnsbl.oftc.net) Quit ()
[2:42] * allenmelon (~Frymaster@nooduitgang.schmutzig.org) has joined #ceph
[2:45] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Ping timeout: 480 seconds)
[2:47] * yanzheng (~zhyan@118.116.113.70) Quit (Quit: This computer has gone to sleep)
[3:04] * bliu (~liub@203.192.156.9) has joined #ceph
[3:05] * Aethis (~rcfighter@06SAABZ93.tor-irc.dnsbl.oftc.net) Quit ()
[3:05] * ylmson (~yuastnav@146.0.43.126) has joined #ceph
[3:06] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[3:10] * reed (~reed@75-101-54-18.dsl.static.fusionbroadband.com) has joined #ceph
[3:12] * allenmelon (~Frymaster@76GAAE0MI.tor-irc.dnsbl.oftc.net) Quit ()
[3:12] * Bonzaii (~K3NT1S_aw@chomsky.torservers.net) has joined #ceph
[3:21] * IvanJobs (~hardes@103.50.11.146) has joined #ceph
[3:22] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:27] * Mika_c (~quassel@122.146.93.152) has joined #ceph
[3:28] * atheism (~atheism@182.48.117.114) has joined #ceph
[3:31] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[3:35] * ylmson (~yuastnav@76GAAE0NB.tor-irc.dnsbl.oftc.net) Quit ()
[3:35] * Xerati (~Esge@0.tor.exit.babylon.network) has joined #ceph
[3:38] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[3:39] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[3:41] * zhaochao (~zhaochao@125.39.9.151) has joined #ceph
[3:42] * Bonzaii (~K3NT1S_aw@76GAAE0NK.tor-irc.dnsbl.oftc.net) Quit ()
[3:42] * neobenedict (~mog_@128.153.145.125) has joined #ceph
[3:43] * yanzheng (~zhyan@118.116.113.70) has joined #ceph
[3:51] * yanzheng1 (~zhyan@118.116.113.70) has joined #ceph
[3:52] * yanzheng (~zhyan@118.116.113.70) Quit (Ping timeout: 480 seconds)
[3:57] * MentalRay (~MentalRay@107.171.161.165) has joined #ceph
[3:58] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[4:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[4:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[4:05] * Xerati (~Esge@76GAAE0N9.tor-irc.dnsbl.oftc.net) Quit ()
[4:05] * xul (~luigiman@orion.enn.lu) has joined #ceph
[4:10] * fint (~dr@181.176.39.53) Quit (Quit: Saliendo)
[4:12] * neobenedict (~mog_@7V7AAEADK.tor-irc.dnsbl.oftc.net) Quit ()
[4:12] * legion (~Kalado@hessel3.torservers.net) has joined #ceph
[4:14] * flisky (~Thunderbi@36.110.40.27) has joined #ceph
[4:18] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) has joined #ceph
[4:20] * kefu (~kefu@183.193.162.205) has joined #ceph
[4:21] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) has joined #ceph
[4:21] * csoukup (~csoukup@2605:a601:9c8:6b00:4519:887:2c23:18e1) has joined #ceph
[4:22] * rmart04 (~rmart04@75-148-217-209-Houston.hfc.comcastbusiness.net) has joined #ceph
[4:23] * kefu_ (~kefu@114.92.122.74) has joined #ceph
[4:24] * jclm1 (~jclm@marriott-hotel-ottawa-yowmc.sites.intello.com) has joined #ceph
[4:25] * rmart04 (~rmart04@75-148-217-209-Houston.hfc.comcastbusiness.net) Quit ()
[4:28] * jclm (~jclm@marriott-hotel-ottawa-yowmc.sites.intello.com) Quit (Ping timeout: 480 seconds)
[4:28] * kefu (~kefu@183.193.162.205) Quit (Ping timeout: 480 seconds)
[4:31] * wwdillingham (~LobsterRo@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wwdillingham)
[4:34] * Racpatel (~Racpatel@2601:87:3:3601::675d) Quit (Quit: Leaving)
[4:35] * xul (~luigiman@76GAAE0OV.tor-irc.dnsbl.oftc.net) Quit ()
[4:35] * Crisco1 (~Heliwr@81-7-15-115.blue.kundencontroller.de) has joined #ceph
[4:37] * wwdillingham (~LobsterRo@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[4:39] * hoonetorg (~hoonetorg@77.119.226.254.static.drei.at) Quit (Ping timeout: 480 seconds)
[4:42] * legion (~Kalado@76GAAE0O4.tor-irc.dnsbl.oftc.net) Quit ()
[4:42] * Bobby (~xolotl@80.255.3.122) has joined #ceph
[4:48] * geli (~geli@geli-2015.its.utas.edu.au) Quit (Quit: Leaving.)
[4:49] * hoonetorg (~hoonetorg@77.119.226.254.static.drei.at) has joined #ceph
[4:53] * i_m (~ivan.miro@31.207.236.130) Quit (Quit: Leaving.)
[4:58] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[5:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[5:01] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) has joined #ceph
[5:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[5:03] * MentalRay (~MentalRay@107.171.161.165) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[5:05] * Crisco1 (~Heliwr@4MJAAEMPM.tor-irc.dnsbl.oftc.net) Quit ()
[5:05] * Grimhound (~verbalins@185.100.87.82) has joined #ceph
[5:06] * Mika_c (~quassel@122.146.93.152) Quit (Remote host closed the connection)
[5:07] * Mika_c (~quassel@122.146.93.152) has joined #ceph
[5:12] * Bobby (~xolotl@76GAAE0PW.tor-irc.dnsbl.oftc.net) Quit ()
[5:12] * narthollis (~rf`@192.42.116.16) has joined #ceph
[5:22] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[5:22] * i_m (~ivan.miro@88.206.104.168) has joined #ceph
[5:25] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[5:28] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[5:29] * Vacuum_ (~Vacuum@88.130.214.133) has joined #ceph
[5:35] * Grimhound (~verbalins@06SAAB0E9.tor-irc.dnsbl.oftc.net) Quit ()
[5:35] * Malcovent (~Da_Pineap@tor-amici-exit.tritn.com) has joined #ceph
[5:36] * Vacuum__ (~Vacuum@88.130.208.17) Quit (Ping timeout: 480 seconds)
[5:36] * kefu_ is now known as kefu
[5:42] * overclk (~quassel@121.244.87.117) has joined #ceph
[5:42] * narthollis (~rf`@76GAAE0QM.tor-irc.dnsbl.oftc.net) Quit ()
[5:42] * geegeegee (~Cue@91.250.241.241) has joined #ceph
[6:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[6:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[6:03] * adun153 (~ljtirazon@112.198.101.210) has joined #ceph
[6:05] * Malcovent (~Da_Pineap@4MJAAEMRA.tor-irc.dnsbl.oftc.net) Quit ()
[6:05] * kalmisto (~Tarazed@195.12.190.38) has joined #ceph
[6:06] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[6:09] * shohn (~shohn@dslb-088-073-137-088.088.073.pools.vodafone-ip.de) has joined #ceph
[6:12] * geegeegee (~Cue@6AGAABLG7.tor-irc.dnsbl.oftc.net) Quit ()
[6:15] * skorgu_ (~skorgu@pylon.skorgu.net) Quit (Remote host closed the connection)
[6:16] * skorgu (skorgu@pylon.skorgu.net) has joined #ceph
[6:16] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * jclm1 (~jclm@marriott-hotel-ottawa-yowmc.sites.intello.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * csoukup (~csoukup@2605:a601:9c8:6b00:4519:887:2c23:18e1) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * yanzheng1 (~zhyan@118.116.113.70) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * bliu (~liub@203.192.156.9) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * rburkholder (~overonthe@199.68.193.62) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * vata (~vata@cable-21.246.173-197.electronicbox.net) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * badone (~badone@66.187.239.16) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * ivancich (~ivancich@aa2.linuxbox.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * jowilkin (~jowilkin@c-98-207-136-41.hsd1.ca.comcast.net) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * KindOne (kindone@h183.41.30.71.dynamic.ip.windstream.net) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * linuxkidd (~linuxkidd@241.sub-70-210-192.myvzw.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * nhm (~nhm@c-50-171-139-246.hsd1.mn.comcast.net) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * debian112 (~bcolbert@24.126.201.64) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * ronrib (~boswortr@45.32.242.135) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * TiCPU (~owrt@2001:470:1c:40::2) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * Nats_ (~natscogs@114.31.195.238) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * ffilz (~ffilz@c-76-115-190-27.hsd1.or.comcast.net) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * sage (~quassel@2607:f298:6050:709d:11d7:393f:b703:d028) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * wer (~wer@216.197.66.226) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * trociny (~mgolub@93.183.239.2) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * arthurh (~arthurh@38.101.34.128) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * mnaser (~mnaser@162.253.53.193) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * jiffe (~jiffe@nsab.us) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * aiicore_ (~aiicore@s30.linuxpl.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * s3an2 (~root@korn.s3an.me.uk) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * Aeso (~aesospade@aesospadez.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * diegows_ (~diegows@main.woitasen.com.ar) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * Walex (~Walex@72.249.182.114) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * diq (~diq@2620:11c:f:2:c23f:d5ff:fe62:112c) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * theanalyst (theanalyst@open.source.rocks.my.socks.firrre.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * ndevos (~ndevos@nat-pool-ams2-5.redhat.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * Superdawg (~Superdawg@ec2-54-243-59-20.compute-1.amazonaws.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * kingcu (~kingcu@kona.ridewithgps.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * logan- (~logan@63.143.60.136) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * blynch_ (~blynch@vm-nat.msi.umn.edu) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * vvb (~vvb@168.235.85.239) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * Randleman (~jesse@89.105.204.182) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * Pintomatic (sid25118@id-25118.ealing.irccloud.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * getzburg (sid24913@id-24913.ealing.irccloud.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * \ask (~ask@oz.develooper.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * nolan (~nolan@2001:470:1:41:a800:ff:fe3e:ad08) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * Bosse (~bosse@erebus.klykken.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * oliveiradan (~doliveira@137.65.133.10) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * koma (~koma@0001c112.user.oftc.net) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * wwdillingham (~LobsterRo@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * IvanJobs (~hardes@103.50.11.146) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * bstillwell (~bryan@bokeoa.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * joshd (~jdurgin@206.169.83.146) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * jluis (~joao@8.184.114.89.rev.vodafone.pt) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * davidzlap (~Adium@2605:e000:1313:8003:90f5:10a4:d675:6c9d) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * lookcrabs (~lookcrabs@tail.seeee.us) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * PoRNo-MoRoZ (~hp1ng@mail.ap-team.ru) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * pdrakewe_ (~pdrakeweb@cpe-65-185-74-239.neo.res.rr.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * jmn (~jmn@nat-pool-bos-t.redhat.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * Sirenia (~sirenia@454028b1.test.dnsbl.oftc.net) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * funnel (~funnel@81.4.123.134) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * Karcaw (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * shaon (~shaon@shaon.me) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * magicrobotmonkey (~magicrobo@8.29.8.68) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * sw3 (sweaung@2400:6180:0:d0::66:100f) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * _nick (~nick@zarquon.dischord.org) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * benner (~benner@188.166.111.206) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * DeMiNe0_ (~DeMiNe0@104.131.119.74) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * gtrott (sid78444@id-78444.tooting.irccloud.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * sig_wall_ (adjkru@xn--hwgz2tba.lamo.su) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * essjayhch (sid79416@id-79416.highgate.irccloud.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * joelio (~joel@81.4.101.217) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * tacodog40k (~tacodog@dev.host) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * jnq (sid150909@0001b7cc.user.oftc.net) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * smiley_ (~smiley@205.153.36.170) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * braderhart (sid124863@braderhart.user.oftc.net) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * Gugge-47527 (gugge@92.246.2.105) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * batrick (~batrick@2600:3c00::f03c:91ff:fe96:477b) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * dis (~dis@00018d20.user.oftc.net) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * ccourtaut (~ccourtaut@178.62.125.124) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * andrewschoen (~andrewsch@50.56.86.195) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * rossdylan (rossdylan@losna.helixoide.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * masterpe (~masterpe@2a01:670:400::43) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * df (~defari@digital.el8.net) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * rektide (~rektide@eldergods.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * carter (~carter@li98-136.members.linode.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * trey (~trey@trey.user.oftc.net) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * adun153 (~ljtirazon@112.198.101.210) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * post-factum (~post-fact@vulcan.natalenko.name) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * scheuk (~scheuk@204.246.67.78) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * penguinRaider (~KiKs@146.185.31.226) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * zaitcev (~zaitcev@c-50-130-189-82.hsd1.nm.comcast.net) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * neurodrone (~neurodron@158.106.193.162) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * etienneme (~arch@88.ip-167-114-240.eu) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * branto (~branto@nat-pool-brq-t.redhat.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * dougf_ (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * niknakpaddywak (~xander.ni@outbound.lax.demandmedia.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * Rickus (~Rickus@office.protected.ca) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * leseb (~leseb@81-64-223-102.rev.numericable.fr) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * gregsfortytwo (~gregsfort@transit-86-181-132-209.redhat.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * thadood (~thadood@slappy.thunderbutt.org) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * Tene (~tene@173.13.139.236) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * xcezzz (~xcezzz@97-96-111-106.res.bhn.net) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * mfa298 (~mfa298@krikkit.yapd.net) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * darkfader (~floh@88.79.251.60) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * destrudo (~destrudo@64.142.74.180) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * arbrandes (~arbrandes@ec2-54-172-54-135.compute-1.amazonaws.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * alexbligh1 (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * fli (fli@eastside.wirebound.net) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * Sgaduuw (~eelco@willikins.srv.eelcowesemann.nl) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * DrewBeer_ (~DrewBeer@216.152.240.203) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * yebyen (~yebyen@martyfunkhouser.csh.rit.edu) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * SpamapS (~SpamapS@xencbyrum2.srihosting.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * m0zes (~mozes@ns1.beocat.ksu.edu) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * iggy (~iggy@mail.vten.us) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * wateringcan (~mattt@lnx1.defunct.ca) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * dalegaard-39554 (~dalegaard@vps.devrandom.dk) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * singler (~singler@zeta.kirneh.eu) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * scubacuda (sid109325@0001fbab.user.oftc.net) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * dustinm` (~dustinm`@105.ip-167-114-152.net) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * zerick (~zerick@irc.quassel.zerick.me) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * portante (~portante@nat-pool-bos-t.redhat.com) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * SamYaple (~SamYaple@162.209.126.134) Quit (synthon.oftc.net resistance.oftc.net)
[6:16] * KindOne_ (kindone@h183.41.30.71.dynamic.ip.windstream.net) has joined #ceph
[6:17] * KindOne_ is now known as KindOne
[6:17] * csoukup (~csoukup@2605:a601:9c8:6b00:4519:887:2c23:18e1) has joined #ceph
[6:17] * rburkholder (~overonthe@199.68.193.62) has joined #ceph
[6:19] * trey (~trey@trey.user.oftc.net) has joined #ceph
[6:20] * Keiya (~andrew_m@192.42.115.101) has joined #ceph
[6:20] * adun153 (~ljtirazon@112.198.101.210) has joined #ceph
[6:20] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) has joined #ceph
[6:20] * jclm1 (~jclm@marriott-hotel-ottawa-yowmc.sites.intello.com) has joined #ceph
[6:20] * yanzheng1 (~zhyan@118.116.113.70) has joined #ceph
[6:20] * bliu (~liub@203.192.156.9) has joined #ceph
[6:20] * vata (~vata@cable-21.246.173-197.electronicbox.net) has joined #ceph
[6:20] * post-factum (~post-fact@vulcan.natalenko.name) has joined #ceph
[6:20] * badone (~badone@66.187.239.16) has joined #ceph
[6:20] * ivancich (~ivancich@aa2.linuxbox.com) has joined #ceph
[6:20] * scheuk (~scheuk@204.246.67.78) has joined #ceph
[6:20] * penguinRaider (~KiKs@146.185.31.226) has joined #ceph
[6:20] * jowilkin (~jowilkin@c-98-207-136-41.hsd1.ca.comcast.net) has joined #ceph
[6:20] * zaitcev (~zaitcev@c-50-130-189-82.hsd1.nm.comcast.net) has joined #ceph
[6:20] * linuxkidd (~linuxkidd@241.sub-70-210-192.myvzw.com) has joined #ceph
[6:20] * nhm (~nhm@c-50-171-139-246.hsd1.mn.comcast.net) has joined #ceph
[6:20] * neurodrone (~neurodron@158.106.193.162) has joined #ceph
[6:20] * etienneme (~arch@88.ip-167-114-240.eu) has joined #ceph
[6:20] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[6:20] * ronrib (~boswortr@45.32.242.135) has joined #ceph
[6:20] * branto (~branto@nat-pool-brq-t.redhat.com) has joined #ceph
[6:20] * TiCPU (~owrt@2001:470:1c:40::2) has joined #ceph
[6:20] * Nats_ (~natscogs@114.31.195.238) has joined #ceph
[6:20] * dougf_ (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) has joined #ceph
[6:20] * ffilz (~ffilz@c-76-115-190-27.hsd1.or.comcast.net) has joined #ceph
[6:20] * sage (~quassel@2607:f298:6050:709d:11d7:393f:b703:d028) has joined #ceph
[6:20] * niknakpaddywak (~xander.ni@outbound.lax.demandmedia.com) has joined #ceph
[6:20] * wer (~wer@216.197.66.226) has joined #ceph
[6:20] * trociny (~mgolub@93.183.239.2) has joined #ceph
[6:20] * arthurh (~arthurh@38.101.34.128) has joined #ceph
[6:20] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[6:20] * Rickus (~Rickus@office.protected.ca) has joined #ceph
[6:20] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) has joined #ceph
[6:20] * mnaser (~mnaser@162.253.53.193) has joined #ceph
[6:20] * jiffe (~jiffe@nsab.us) has joined #ceph
[6:20] * aiicore_ (~aiicore@s30.linuxpl.com) has joined #ceph
[6:20] * s3an2 (~root@korn.s3an.me.uk) has joined #ceph
[6:20] * Aeso (~aesospade@aesospadez.com) has joined #ceph
[6:20] * diegows_ (~diegows@main.woitasen.com.ar) has joined #ceph
[6:20] * Walex (~Walex@72.249.182.114) has joined #ceph
[6:20] * diq (~diq@2620:11c:f:2:c23f:d5ff:fe62:112c) has joined #ceph
[6:20] * theanalyst (theanalyst@open.source.rocks.my.socks.firrre.com) has joined #ceph
[6:20] * ndevos (~ndevos@nat-pool-ams2-5.redhat.com) has joined #ceph
[6:20] * leseb (~leseb@81-64-223-102.rev.numericable.fr) has joined #ceph
[6:20] * Superdawg (~Superdawg@ec2-54-243-59-20.compute-1.amazonaws.com) has joined #ceph
[6:20] * gregsfortytwo (~gregsfort@transit-86-181-132-209.redhat.com) has joined #ceph
[6:20] * kingcu (~kingcu@kona.ridewithgps.com) has joined #ceph
[6:20] * thadood (~thadood@slappy.thunderbutt.org) has joined #ceph
[6:20] * Tene (~tene@173.13.139.236) has joined #ceph
[6:20] * logan- (~logan@63.143.60.136) has joined #ceph
[6:20] * blynch_ (~blynch@vm-nat.msi.umn.edu) has joined #ceph
[6:20] * xcezzz (~xcezzz@97-96-111-106.res.bhn.net) has joined #ceph
[6:20] * vvb (~vvb@168.235.85.239) has joined #ceph
[6:20] * mfa298 (~mfa298@krikkit.yapd.net) has joined #ceph
[6:20] * singler (~singler@zeta.kirneh.eu) has joined #ceph
[6:20] * oliveiradan (~doliveira@137.65.133.10) has joined #ceph
[6:20] * koma (~koma@0001c112.user.oftc.net) has joined #ceph
[6:20] * \ask (~ask@oz.develooper.com) has joined #ceph
[6:20] * nolan (~nolan@2001:470:1:41:a800:ff:fe3e:ad08) has joined #ceph
[6:20] * Bosse (~bosse@erebus.klykken.com) has joined #ceph
[6:20] * getzburg (sid24913@id-24913.ealing.irccloud.com) has joined #ceph
[6:20] * Pintomatic (sid25118@id-25118.ealing.irccloud.com) has joined #ceph
[6:20] * Randleman (~jesse@89.105.204.182) has joined #ceph
[6:20] * wateringcan (~mattt@lnx1.defunct.ca) has joined #ceph
[6:20] * iggy (~iggy@mail.vten.us) has joined #ceph
[6:20] * zerick (~zerick@irc.quassel.zerick.me) has joined #ceph
[6:20] * m0zes (~mozes@ns1.beocat.ksu.edu) has joined #ceph
[6:20] * SpamapS (~SpamapS@xencbyrum2.srihosting.com) has joined #ceph
[6:20] * yebyen (~yebyen@martyfunkhouser.csh.rit.edu) has joined #ceph
[6:20] * dalegaard-39554 (~dalegaard@vps.devrandom.dk) has joined #ceph
[6:20] * DrewBeer_ (~DrewBeer@216.152.240.203) has joined #ceph
[6:20] * dustinm` (~dustinm`@105.ip-167-114-152.net) has joined #ceph
[6:20] * scubacuda (sid109325@0001fbab.user.oftc.net) has joined #ceph
[6:20] * darkfader (~floh@88.79.251.60) has joined #ceph
[6:20] * SamYaple (~SamYaple@162.209.126.134) has joined #ceph
[6:20] * portante (~portante@nat-pool-bos-t.redhat.com) has joined #ceph
[6:20] * destrudo (~destrudo@64.142.74.180) has joined #ceph
[6:20] * arbrandes (~arbrandes@ec2-54-172-54-135.compute-1.amazonaws.com) has joined #ceph
[6:20] * alexbligh1 (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) has joined #ceph
[6:20] * fli (fli@eastside.wirebound.net) has joined #ceph
[6:20] * Sgaduuw (~eelco@willikins.srv.eelcowesemann.nl) has joined #ceph
[6:20] * ChanServ sets mode +v nhm
[6:20] * ChanServ sets mode +v sage
[6:20] * ChanServ sets mode +v leseb
[6:20] * wwdillingham (~LobsterRo@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[6:20] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) has joined #ceph
[6:20] * IvanJobs (~hardes@103.50.11.146) has joined #ceph
[6:20] * bstillwell (~bryan@bokeoa.com) has joined #ceph
[6:20] * joshd (~jdurgin@206.169.83.146) has joined #ceph
[6:20] * jluis (~joao@8.184.114.89.rev.vodafone.pt) has joined #ceph
[6:20] * davidzlap (~Adium@2605:e000:1313:8003:90f5:10a4:d675:6c9d) has joined #ceph
[6:20] * lookcrabs (~lookcrabs@tail.seeee.us) has joined #ceph
[6:20] * PoRNo-MoRoZ (~hp1ng@mail.ap-team.ru) has joined #ceph
[6:20] * pdrakewe_ (~pdrakeweb@cpe-65-185-74-239.neo.res.rr.com) has joined #ceph
[6:20] * jmn (~jmn@nat-pool-bos-t.redhat.com) has joined #ceph
[6:20] * Sirenia (~sirenia@454028b1.test.dnsbl.oftc.net) has joined #ceph
[6:20] * funnel (~funnel@81.4.123.134) has joined #ceph
[6:20] * smiley_ (~smiley@205.153.36.170) has joined #ceph
[6:20] * rektide (~rektide@eldergods.com) has joined #ceph
[6:20] * Karcaw (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) has joined #ceph
[6:20] * shaon (~shaon@shaon.me) has joined #ceph
[6:20] * Gugge-47527 (gugge@92.246.2.105) has joined #ceph
[6:20] * masterpe (~masterpe@2a01:670:400::43) has joined #ceph
[6:20] * magicrobotmonkey (~magicrobo@8.29.8.68) has joined #ceph
[6:20] * dis (~dis@00018d20.user.oftc.net) has joined #ceph
[6:20] * jnq (sid150909@0001b7cc.user.oftc.net) has joined #ceph
[6:20] * rossdylan (rossdylan@losna.helixoide.com) has joined #ceph
[6:20] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[6:20] * tacodog40k (~tacodog@dev.host) has joined #ceph
[6:20] * joelio (~joel@81.4.101.217) has joined #ceph
[6:20] * essjayhch (sid79416@id-79416.highgate.irccloud.com) has joined #ceph
[6:20] * df (~defari@digital.el8.net) has joined #ceph
[6:20] * sig_wall_ (adjkru@xn--hwgz2tba.lamo.su) has joined #ceph
[6:20] * gtrott (sid78444@id-78444.tooting.irccloud.com) has joined #ceph
[6:20] * DeMiNe0_ (~DeMiNe0@104.131.119.74) has joined #ceph
[6:20] * carter (~carter@li98-136.members.linode.com) has joined #ceph
[6:20] * benner (~benner@188.166.111.206) has joined #ceph
[6:20] * braderhart (sid124863@braderhart.user.oftc.net) has joined #ceph
[6:20] * _nick (~nick@zarquon.dischord.org) has joined #ceph
[6:20] * sw3 (sweaung@2400:6180:0:d0::66:100f) has joined #ceph
[6:20] * andrewschoen (~andrewsch@50.56.86.195) has joined #ceph
[6:20] * ccourtaut (~ccourtaut@178.62.125.124) has joined #ceph
[6:21] * ChanServ sets mode +v jluis
[6:27] * csoukup (~csoukup@2605:a601:9c8:6b00:4519:887:2c23:18e1) Quit (Ping timeout: 480 seconds)
[6:27] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[6:31] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Quit: Leaving)
[6:31] * deepthi (~deepthi@115.118.31.109) has joined #ceph
[6:32] * adun153 (~ljtirazon@112.198.101.210) Quit (Ping timeout: 480 seconds)
[6:35] * kalmisto (~Tarazed@6AGAABLHV.tor-irc.dnsbl.oftc.net) Quit ()
[6:35] * EdGruberman (~Kurimus@tor-exit-4.all.de) has joined #ceph
[6:37] * Skaag1 (~lunix@rrcs-67-52-140-5.west.biz.rr.com) has joined #ceph
[6:37] * EdGruberman (~Kurimus@tor-exit-4.all.de) Quit (Remote host closed the connection)
[6:38] * wwdillingham (~LobsterRo@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wwdillingham)
[6:39] * madkiss (~madkiss@31.154.44.218) Quit (Quit: Leaving.)
[6:40] * Phase (~Averad@tor-exit.gansta93.com) has joined #ceph
[6:40] * Phase is now known as Guest2553
[6:41] * Skaag (~lunix@rrcs-67-52-140-5.west.biz.rr.com) Quit (Ping timeout: 480 seconds)
[6:42] * batrick (~batrick@2600:3c00::f03c:91ff:fe96:477b) has joined #ceph
[6:42] * Keiya (~andrew_m@06SAAB0HT.tor-irc.dnsbl.oftc.net) Quit ()
[6:42] * toast (~PeterRabb@195.22.126.119) has joined #ceph
[6:43] * rraja (~rraja@121.244.87.117) has joined #ceph
[6:43] * rraja_ (~rraja@121.244.87.117) has joined #ceph
[6:43] * linuxkidd (~linuxkidd@241.sub-70-210-192.myvzw.com) Quit (Quit: Leaving)
[6:45] * rraja_ (~rraja@121.244.87.117) Quit ()
[6:54] * adun153 (~ljtirazon@112.198.101.72) has joined #ceph
[6:58] * kefu is now known as kefu|afk
[7:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[7:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[7:03] * zaitcev (~zaitcev@c-50-130-189-82.hsd1.nm.comcast.net) Quit (Quit: Bye)
[7:10] * Guest2553 (~Averad@06SAAB0IG.tor-irc.dnsbl.oftc.net) Quit ()
[7:10] * OODavo (~osuka_@tor2r.ins.tor.net.eu.org) has joined #ceph
[7:10] * dgurtner (~dgurtner@c-75-74-127-185.hsd1.fl.comcast.net) has joined #ceph
[7:12] * toast (~PeterRabb@6AGAABLJG.tor-irc.dnsbl.oftc.net) Quit ()
[7:12] * Deiz (~neobenedi@6AGAABLKF.tor-irc.dnsbl.oftc.net) has joined #ceph
[7:14] * rdas (~rdas@121.244.87.116) has joined #ceph
[7:16] * karnan (~karnan@121.244.87.117) has joined #ceph
[7:18] * kefu|afk is now known as kefu
[7:20] * vata (~vata@cable-21.246.173-197.electronicbox.net) Quit (Quit: Leaving.)
[7:21] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) Quit (Remote host closed the connection)
[7:28] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) has joined #ceph
[7:30] * yanzheng1 (~zhyan@118.116.113.70) Quit (Quit: This computer has gone to sleep)
[7:30] * toastyde1th (~toast@pool-71-255-253-39.washdc.fios.verizon.net) has joined #ceph
[7:32] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) Quit (Remote host closed the connection)
[7:32] * dgurtner (~dgurtner@c-75-74-127-185.hsd1.fl.comcast.net) Quit (Ping timeout: 480 seconds)
[7:36] * reed (~reed@75-101-54-18.dsl.static.fusionbroadband.com) Quit (Quit: Ex-Chat)
[7:36] * toastydeath (~toast@pool-71-255-253-39.washdc.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[7:40] * OODavo (~osuka_@6AGAABLKC.tor-irc.dnsbl.oftc.net) Quit ()
[7:42] * Deiz (~neobenedi@6AGAABLKF.tor-irc.dnsbl.oftc.net) Quit ()
[7:42] * Lattyware (~Unforgive@185.100.85.132) has joined #ceph
[7:43] * kefu (~kefu@114.92.122.74) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[7:45] * rakeshgm (~rakesh@106.51.26.7) has joined #ceph
[7:46] * adun153 (~ljtirazon@112.198.101.72) Quit (Ping timeout: 480 seconds)
[7:51] * penguinRaider_ (~KiKo@14.139.82.6) Quit (Remote host closed the connection)
[7:56] * adun153 (~ljtirazon@112.198.77.233) has joined #ceph
[7:56] * kawa2014 (~kawa@94.161.32.91) has joined #ceph
[7:59] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[8:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[8:01] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[8:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[8:04] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[8:07] * Miouge (~Miouge@188.189.95.174) has joined #ceph
[8:07] <Miouge> sep: Thanks for the tips yesterday
[8:09] * b0e (~aledermue@213.95.25.82) has joined #ceph
[8:12] * Lattyware (~Unforgive@7V7AAEAGZ.tor-irc.dnsbl.oftc.net) Quit ()
[8:12] * Xylios (~dusti@tor-exit.eecs.umich.edu) has joined #ceph
[8:16] * coyo (~unf@00017955.user.oftc.net) has joined #ceph
[8:17] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[8:17] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[8:32] * Hemanth (~hkumar_@121.244.87.117) has joined #ceph
[8:33] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) has joined #ceph
[8:34] * mykola (~Mikolaj@91.245.73.44) has joined #ceph
[8:36] <sep> Miouge, you're welcome :)
[8:40] * Lattyware (~zapu@tor.exit-no.de) has joined #ceph
[8:41] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) Quit (Ping timeout: 480 seconds)
[8:42] * Xylios (~dusti@7V7AAEAHL.tor-irc.dnsbl.oftc.net) Quit ()
[8:45] * shohn (~shohn@dslb-088-073-137-088.088.073.pools.vodafone-ip.de) Quit (Quit: Leaving.)
[8:45] * shohn (~shohn@dslb-088-073-137-088.088.073.pools.vodafone-ip.de) has joined #ceph
[8:53] * adun153 (~ljtirazon@112.198.77.233) Quit (Ping timeout: 480 seconds)
[8:56] * kefu (~kefu@183.193.162.205) has joined #ceph
[9:00] * kefu_ (~kefu@114.92.122.74) has joined #ceph
[9:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[9:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[9:02] * adun153 (~ljtirazon@112.198.90.10) has joined #ceph
[9:03] * kefu_ is now known as kef
[9:05] <JoeJulian> I don't suppose anyone in here has a support contact at Wiwynn? One of our ceph clusters is completely down because the sas expanders keep crashing and I can't get anyone to respond.
[9:06] * kefu (~kefu@183.193.162.205) Quit (Ping timeout: 480 seconds)
[9:06] * kef is now known as kefu
[9:06] * natarej (~natarej@2001:8003:483a:a900:da8:6a6c:304e:e427) Quit (Read error: Connection reset by peer)
[9:07] * natarej (~natarej@2001:8003:483a:a900:da8:6a6c:304e:e427) has joined #ceph
[9:10] * Lattyware (~zapu@76GAAE0TG.tor-irc.dnsbl.oftc.net) Quit ()
[9:11] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[9:12] * derjohn_mobi (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[9:12] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[9:12] * JohnO (~skrblr@Relay-J.tor-exit.network) has joined #ceph
[9:12] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[9:20] * ade (~abradshaw@85.158.226.30) has joined #ceph
[9:22] * analbeard (~shw@support.memset.com) has joined #ceph
[9:22] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[9:23] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[9:24] * adun153 (~ljtirazon@112.198.90.10) Quit (Ping timeout: 480 seconds)
[9:25] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) has joined #ceph
[9:28] * kefu (~kefu@114.92.122.74) Quit (Max SendQ exceeded)
[9:28] * kefu (~kefu@114.92.122.74) has joined #ceph
[9:29] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[9:29] * fsimonce (~simon@87.13.130.124) has joined #ceph
[9:32] * rendar (~I@host150-180-dynamic.246-95-r.retail.telecomitalia.it) has joined #ceph
[9:33] * adun153 (~ljtirazon@112.198.101.105) has joined #ceph
[9:35] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) has joined #ceph
[9:35] * rakeshgm (~rakesh@106.51.26.7) Quit (Quit: Leaving)
[9:37] * derjohn_mobi (~aj@fw.gkh-setu.de) has joined #ceph
[9:37] * yanzheng (~zhyan@118.116.113.70) has joined #ceph
[9:40] * Mattress (~Xa@195-154-81-29.rev.poneytelecom.eu) has joined #ceph
[9:42] * JohnO (~skrblr@06SAAB0NC.tor-irc.dnsbl.oftc.net) Quit ()
[9:44] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) Quit (Ping timeout: 480 seconds)
[9:47] * kmroz (~kilo@00020103.user.oftc.net) has joined #ceph
[9:49] * adun153 (~ljtirazon@112.198.101.105) Quit (Read error: Connection reset by peer)
[9:54] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[9:55] * pabluk__ is now known as pabluk_
[9:56] * mnathani2 (~mnathani_@192-0-149-228.cpe.teksavvy.com) has joined #ceph
[10:00] * adun153 (~ljtirazon@112.198.78.141) has joined #ceph
[10:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[10:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[10:01] * erwan_taf (~erwan@37.166.41.150) Quit (Ping timeout: 480 seconds)
[10:03] <sfrode> how would one go about getting the new rgw multitenancy to work with swift? it seems like the (public) buckets are still created directly under /swift/v1/<container> instead of /swift/v1/<tenant_id>/<container>. are there any hidden config options or steps to convert older configuration into the new one?
[10:04] * itamarl is now known as Guest2564
[10:04] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[10:05] * itamarl (~itamarl@194.90.7.244) Quit ()
[10:05] * ging (~ging@kronos.ging.me.uk) has joined #ceph
[10:07] * pdrakewe_ (~pdrakeweb@cpe-65-185-74-239.neo.res.rr.com) Quit (Ping timeout: 480 seconds)
[10:08] * Guest2564 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[10:09] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[10:10] * Mattress (~Xa@7V7AAEAIY.tor-irc.dnsbl.oftc.net) Quit ()
[10:10] * tuhnis (~Scymex@h-130-176.a2.corp.bahnhof.no) has joined #ceph
[10:11] * erwan_taf (~erwan@37.160.76.112) has joined #ceph
[10:11] * thomnico (~thomnico@2a01:e35:8b41:120:5145:b80f:6a00:6a05) has joined #ceph
[10:12] <ging> if i run 'ceph osd crush reweight...' should that change the weight value or the reweight value?
[10:14] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[10:14] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Ping timeout: 480 seconds)
[10:15] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[10:17] <etienneme> ging: I don't understand the difference
[10:18] <etienneme> Ah understood, I don't think there is a "reweight" value, it will edit the weight value
[10:18] * Concubidated (~cube@c-50-173-245-118.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[10:19] * LeaChim (~LeaChim@host86-150-161-6.range86-150.btcentralplus.com) has joined #ceph
[10:19] * Tene (~tene@173.13.139.236) Quit (Ping timeout: 480 seconds)
[10:21] * swami1 (~swami@49.44.57.245) has joined #ceph
[10:21] <ging> i have done something before which sets the reweight not the weight
[10:22] * Concubidated (~cube@2600:1:8969:5e4e:e177:d40a:fb36:5fa6) has joined #ceph
[10:23] <ging> so i had one osd which was filling up more than it should so i set the reweight lower and the data was balanced out, without permanently altering the weight value
[10:24] * Tene (~tene@173.13.139.236) has joined #ceph
[10:26] <ging> so i have some disks which are 0.8TB and some which are 1 so the weights are 0.8 and 1, but sometimes they don't fill up evenly
[10:27] <ging> and reweight by utilisation doesn't balance them well enough but i remember being able to do it manually and thought that was the command
[10:27] <ging> but instead it seems to alter the weight not the reweight
[10:29] <etienneme> ging: You mean that the value x on crushmap "item osd.0 weight X" was not edited?
[10:29] * dugravot61 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) has joined #ceph
[10:29] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) Quit (Read error: Connection reset by peer)
[10:29] <ging> it was but i didn't want it to be
[10:30] <ging> in the past i have been able to set the reweight
[10:30] <ging> i thought that was how
[10:34] * adun153 (~ljtirazon@112.198.78.141) Quit (Ping timeout: 480 seconds)
[10:38] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) has joined #ceph
[10:38] <etienneme> Maybe I don't understand how reweight works, but I've never heard of a reweight value (except the one that will replace the weight )
[10:40] * tuhnis (~Scymex@76GAAE0UR.tor-irc.dnsbl.oftc.net) Quit ()
[10:42] * measter (~Zyn@readme.tor.camolist.com) has joined #ceph
[10:43] * adun153 (~ljtirazon@112.198.90.242) has joined #ceph
[10:45] <ging> when i run ceph osd tree i have a weight and a reweight value
[10:46] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) Quit (Ping timeout: 480 seconds)
[10:46] <ging> when i run ceph osd reweight-by-utilization it modifies the reweight value
[10:49] <etienneme> Hum, right! I'm curious now, checking :)
[10:50] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Remote host closed the connection)
[10:51] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:7cb8:d974:71e7:7045) has joined #ceph
[10:53] * TMM (~hp@185.5.122.2) has joined #ceph
[10:56] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[10:59] <etienneme> Ok! to change weight "ceph osd crush reweight {name} {weight}" and to edit reweight "ceph osd reweight {osd-num} {weight}" you can find explanations here
[10:59] <etienneme> Weight : http://docs.ceph.com/docs/master/rados/operations/crush-map/
[11:00] <etienneme> Reweight : http://docs.ceph.com/docs/hammer/rados/operations/control/#osd-subsystem
[11:00] <etienneme> Set the weight of {osd-num} to {weight}. Two OSDs with the same weight will receive roughly the same number of I/O requests and store approximately the same amount of data. ceph osd reweight sets an override weight on the OSD. This value is in the range 0 to 1, and forces CRUSH to re-place (1-weight) of the data that would otherwise live on this drive. It does not change the weights assigned to
[11:00] <etienneme> the buckets above the OSD in the crush map, and is a corrective measure in case the normal CRUSH distribution isn't working out quite right. For instance, if one of your OSDs is at 90% and the others are at 50%, you could reduce this weight to try and compensate for it.
[11:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[11:01] <etienneme> I don't know how I never noticed this... I thought using reweight < 1 would just reduce the weight value by a %
[11:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[11:01] * sileht (~sileht@gizmo.sileht.net) Quit (Quit: WeeChat 1.5)
[11:01] * sileht (~sileht@gizmo.sileht.net) has joined #ceph
[11:05] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[11:08] <ging> ah cheers etienneme that now makes sense
[11:08] <etienneme> thanks to you :p
[11:08] <ging> so i should not have had crush in my command
[11:10] * _s1gma (~x303@hessel2.torservers.net) has joined #ceph
[11:12] * measter (~Zyn@6AGAABLPK.tor-irc.dnsbl.oftc.net) Quit ()
[11:12] * aleksag (~murmur@torlesnet2.relay.coldhak.com) has joined #ceph
[11:12] <etienneme> If you want to change the reweight value, you must remove "crush" from the command.
[11:13] * itamarl is now known as Guest2566
[11:13] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[11:13] <etienneme> (set a value close to the current one, or you will have many misplaced objects)
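The distinction the channel settles on — `ceph osd crush reweight` changes the CRUSH bucket weight, while `ceph osd reweight` sets a 0–1 override that forces (1 − reweight) of the data elsewhere — can be sketched roughly as follows (illustrative Python, not Ceph code; the product is an approximation of the combined effect):

```python
def effective_weight(crush_weight, reweight):
    """CRUSH attracts data in proportion to the bucket weight; the 0-1
    reweight override then rejects roughly (1 - reweight) of those
    placements, so the effective placement weight is roughly the
    product of the two (an approximation, not Ceph's exact math)."""
    if not 0.0 <= reweight <= 1.0:
        raise ValueError("reweight must be in [0, 1]")
    return crush_weight * reweight

# A 1.0-weight OSD reweighted to 0.8 attracts data roughly like a
# 0.8-weight OSD would, without permanently editing the CRUSH map.
print(effective_weight(1.0, 0.8))  # 0.8
```

This is why ging's temporary fix (lowering reweight on an overfull OSD) rebalanced data without touching the weights in the CRUSH map.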
[11:14] * kawa2014 (~kawa@94.161.32.91) Quit (Ping timeout: 480 seconds)
[11:15] * adun153 (~ljtirazon@112.198.90.242) Quit (Ping timeout: 480 seconds)
[11:16] * Guest2566 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[11:17] * owasserm (~owasserm@bzq-79-178-117-203.red.bezeqint.net) has joined #ceph
[11:18] * Concubidated (~cube@2600:1:8969:5e4e:e177:d40a:fb36:5fa6) Quit (Ping timeout: 480 seconds)
[11:18] * Concubidated (~cube@66.87.134.197) has joined #ceph
[11:21] * ade (~abradshaw@85.158.226.30) Quit (Ping timeout: 480 seconds)
[11:23] * vanham (~vanham@187.20.70.42) has joined #ceph
[11:23] * adun153 (~ljtirazon@112.198.101.180) has joined #ceph
[11:24] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) has joined #ceph
[11:24] <vanham> Good day everyone
[11:27] <vanham> I was wondering if anyone would point out what I'm doing wrong
[11:27] <vanham> I have 2 SSD hosts and 3 HD hosts
[11:27] * ade (~abradshaw@85.158.226.30) has joined #ceph
[11:27] <vanham> Each SSD host with 2 OSDs
[11:28] <vanham> I would like Ceph to pick 3 SSDs out of those 4 SSDs on those two hosts
[11:28] <vanham> This is my crush map: http://sprunge.us/hEAi
[11:28] <vanham> Right now it is picking one out of each host only
[11:29] <vanham> So, all the pgs in this pool are degraded
[11:29] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:7cb8:d974:71e7:7045) Quit (Ping timeout: 480 seconds)
[11:30] <etienneme> ceph -s reports how many OSD?
[11:30] <vanham> e753: 13 osds: 13 up, 13 in
[11:31] <vanham> ceph osd dump: http://sprunge.us/KhGe
[11:31] <vanham> ceph osd tree: http://sprunge.us/IPZD
[11:31] <vanham> I'm using hammer tunnables
[11:34] <vanham> So 4 out of those 13 OSDs are SSDs, 9 are HDs
[11:35] <etienneme> You have nothing weird on logs of the osd that are not used?
[11:36] * rdias (~rdias@2001:8a0:749a:d01:d5d8:718:fe40:4d73) Quit (Ping timeout: 480 seconds)
[11:37] <vanham> Actually all OSDs have data
[11:37] <vanham> They are all being used
[11:37] <vanham> So, some PGs will take OSD 2, 14 or 12, 13, etc
[11:37] <vanham> So, some PGs will take OSD [2, 14] or [12, 13], etc
[11:38] <vanham> So, all are used
[11:38] <vanham> But, only one of each host in each pg
[11:38] <etienneme> Ah ok
[11:38] <vanham> I would like 3 copies of each object
[11:38] <vanham> So one host would have two while the other would have one
[11:38] * rdias (~rdias@2001:8a0:749a:d01:d5d8:718:fe40:4d73) has joined #ceph
[11:39] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) has joined #ceph
[11:40] * _s1gma (~x303@06SAAB0RV.tor-irc.dnsbl.oftc.net) Quit ()
[11:40] * EdGruberman (~Thayli@tor-exit.gansta93.com) has joined #ceph
[11:41] <etienneme> Your pool is in production?
[11:42] <vanham> Yep
[11:42] * aleksag (~murmur@4MJAAEMX8.tor-irc.dnsbl.oftc.net) Quit ()
[11:42] <vanham> rbd-cache is active on writeback
[11:43] <vanham> There is something wrong with the ruleset. It is looking for 3 hosts when I want it to look for 3 OSDs, no matter where
[11:43] <vanham> Well, from that default-ssd root at least
[11:43] <etienneme> maybe editing "min_size 1" could force the replication, but it means that if you have some objects that have only 1 copy, requests will be blocked :/
[11:46] <vanham> On the osd pool?
[11:47] * infernix (nix@000120cb.user.oftc.net) Quit (Ping timeout: 480 seconds)
[11:47] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) Quit (Ping timeout: 480 seconds)
[11:48] <etienneme> Hum weird, your ceph osd dump outputs "min_size 1" on both pools while your crushmap says min_size 2 or 3
[11:49] <etienneme> I'm not an expert on rulesets, so I'm scared I'll tell you something wrong if I say more :p
[11:49] <vanham> The min_size on the OSD pool is the minimum quorum for read/write ops on that pool
[11:49] <vanham> The crush rule will dictate the data placement
[11:50] * Miouge (~Miouge@188.189.95.174) Quit (Ping timeout: 480 seconds)
[11:50] * infernix (nix@000120cb.user.oftc.net) has joined #ceph
[11:50] <etienneme> Hum, nevermind. Values are also different on one of my cluster ;)
[11:51] <vanham> Ok, fixed it
[11:51] <etienneme> ah?
[11:51] <vanham> On rule replicated_ruleset_ssd I changed the step from "step chooseleaf firstn 0 type host" to "step chooseleaf firstn 0 type osd"
[11:52] <etienneme> nice :)
[11:52] * DanFoster (~Daniel@office.34sp.com) has joined #ceph
[11:54] <vanham> 45 ssd pgs are active+clean already :)
[11:54] * Miouge (~Miouge@188.189.93.29) has joined #ceph
[11:54] <vanham> Thanks for the exchange! I had this problem since yesterday :)
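vanham's fix — changing `step chooseleaf firstn 0 type host` to `type osd` — can be illustrated with a toy model (hypothetical host names; the OSD IDs mirror the ones mentioned above, but this is not vanham's actual map or CRUSH's real hashing):

```python
# Two SSD hosts with two OSDs each, as in vanham's setup.
hosts = {"ssd-host-a": [2, 14], "ssd-host-b": [12, 13]}

def choose(replicas, failure_domain):
    """Toy version of 'step chooseleaf firstn N type <domain>':
    with 'host' as the failure domain, at most one OSD is taken per
    host; with 'osd', leaves are picked directly. (Simplified:
    deterministic order instead of CRUSH hashing.)"""
    if failure_domain == "host":
        picks = [osds[0] for osds in hosts.values()]
    else:  # failure_domain == "osd"
        picks = [o for osds in hosts.values() for o in osds]
    return picks[:replicas]

# With only 2 hosts, 'type host' can never satisfy size=3, so every
# PG in the pool stays degraded; 'type osd' places all 3 replicas.
print(len(choose(3, "host")))  # 2
print(len(choose(3, "osd")))   # 3
```

The trade-off, worth noting: with `type osd` one host will hold two of the three replicas, so losing that host loses two copies at once.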
[11:56] * adun153 (~ljtirazon@112.198.101.180) Quit (Quit: Leaving)
[11:57] * huangjun|2 (~kvirc@113.57.168.154) has joined #ceph
[11:57] * huangjun (~kvirc@113.57.168.154) Quit (Read error: Connection reset by peer)
[11:58] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:c876:1fb3:dcf6:668a) has joined #ceph
[12:00] * penguinRaider (~KiKs@146.185.31.226) Quit (Ping timeout: 480 seconds)
[12:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[12:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[12:03] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[12:08] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[12:09] * penguinRaider (~KiKs@146.185.31.226) has joined #ceph
[12:10] * EdGruberman (~Thayli@4MJAAEMYP.tor-irc.dnsbl.oftc.net) Quit ()
[12:10] * xanax` (~tallest_r@tor-exit.csail.mit.edu) has joined #ceph
[12:12] <etienneme> Well I did not really help :p
[12:12] <etienneme> But now I'll think about this if I have the issue!
[12:12] * darkid (~Enikma@tor.t-3.net) has joined #ceph
[12:19] * rraja (~rraja@121.244.87.117) Quit (Ping timeout: 480 seconds)
[12:20] * penguinRaider (~KiKs@146.185.31.226) Quit (Ping timeout: 480 seconds)
[12:26] * Mika_c (~quassel@122.146.93.152) Quit (Remote host closed the connection)
[12:28] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[12:29] * penguinRaider (~KiKs@146.185.31.226) has joined #ceph
[12:29] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[12:30] * simonada (~oftc-webi@static.ip-171-033-130-093.signet.nl) has joined #ceph
[12:30] * rraja (~rraja@121.244.87.117) has joined #ceph
[12:31] * b0e (~aledermue@213.95.25.82) Quit (Ping timeout: 480 seconds)
[12:32] <simonada> Hello everyone. I have a cluster where one OSD is crashing when reading the log of a PG that it doesn't own. I can see it has data from that PG on /var/lib/ceph/osd/. I wonder how that is possible. Any ideas?
[12:33] * infernix (nix@000120cb.user.oftc.net) Quit (Ping timeout: 480 seconds)
[12:35] * ade (~abradshaw@85.158.226.30) Quit (Ping timeout: 480 seconds)
[12:36] * Miouge (~Miouge@188.189.93.29) Quit (Ping timeout: 480 seconds)
[12:40] * xanax` (~tallest_r@4MJAAEMY4.tor-irc.dnsbl.oftc.net) Quit ()
[12:40] * kalleeen (~SweetGirl@marylou.nos-oignons.net) has joined #ceph
[12:41] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) has joined #ceph
[12:41] * Miouge (~Miouge@188.189.93.29) has joined #ceph
[12:42] <simonada> http://pastebin.com/BECrrW6A
[12:42] * darkid (~Enikma@6AGAABLRQ.tor-irc.dnsbl.oftc.net) Quit ()
[12:42] * SweetGirl (~Knuckx@06SAAB0UT.tor-irc.dnsbl.oftc.net) has joined #ceph
[12:42] * _{Tite}_ (~oftc-webi@209-6-251-36.c3-0.wrx-ubr1.sbo-wrx.ma.cable.rcn.com) has joined #ceph
[12:44] * wjw-freebsd (~wjw@176.74.240.1) has joined #ceph
[12:45] * b0e (~aledermue@213.95.25.82) has joined #ceph
[12:45] * swami1 (~swami@49.44.57.245) has left #ceph
[12:49] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) Quit (Ping timeout: 480 seconds)
[12:52] * TMM (~hp@185.5.122.2) Quit (Remote host closed the connection)
[12:53] * tite (~tite@209-6-251-36.c3-0.wrx-ubr1.sbo-wrx.ma.cable.rcn.com) has joined #ceph
[12:54] * tite (~tite@209-6-251-36.c3-0.wrx-ubr1.sbo-wrx.ma.cable.rcn.com) Quit ()
[12:54] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[12:56] * ade (~abradshaw@85.158.226.30) has joined #ceph
[12:56] * zhaochao (~zhaochao@125.39.9.151) Quit (Quit: ChatZilla 0.9.92 [Firefox 45.1.0/20160426232238])
[12:57] * rakeshgm (~rakesh@106.51.26.7) has joined #ceph
[12:57] * tite (~tite@209-6-251-36.c3-0.wrx-ubr1.sbo-wrx.ma.cable.rcn.com) has joined #ceph
[12:58] <vanham> simonada, since it's an assertion, maybe you should go to #ceph-dev
[12:59] * _{Tite}_ (~oftc-webi@209-6-251-36.c3-0.wrx-ubr1.sbo-wrx.ma.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[13:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[13:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[13:03] * b0e (~aledermue@213.95.25.82) Quit (Ping timeout: 480 seconds)
[13:05] * TMM (~hp@185.5.122.2) has joined #ceph
[13:05] * dgurtner (~dgurtner@c-75-74-127-185.hsd1.fl.comcast.net) has joined #ceph
[13:09] * huangjun|2 (~kvirc@113.57.168.154) Quit (Ping timeout: 480 seconds)
[13:10] * kalleeen (~SweetGirl@06SAAB0UP.tor-irc.dnsbl.oftc.net) Quit ()
[13:10] * Jaska (~Epi@edwardsnowden0.torservers.net) has joined #ceph
[13:10] * pabluk_ is now known as pabluk__
[13:11] * pabluk__ is now known as pabluk_
[13:12] * SweetGirl (~Knuckx@06SAAB0UT.tor-irc.dnsbl.oftc.net) Quit ()
[13:15] <simonada> vanham thanks!
[13:16] * b0e (~aledermue@213.95.25.82) has joined #ceph
[13:19] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Quit: Leaving)
[13:19] * vanham (~vanham@187.20.70.42) Quit (Quit: Ex-Chat)
[13:19] * itamarl is now known as Guest2571
[13:19] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[13:22] * Guest2571 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[13:26] * raarts (~Adium@82-171-243-109.ip.telfort.nl) Quit (Quit: Leaving.)
[13:29] * tite (~tite@209-6-251-36.c3-0.wrx-ubr1.sbo-wrx.ma.cable.rcn.com) Quit (Quit: [BX] It's not TV. It's BitchX.)
[13:32] * raarts (~Adium@82-171-243-109.ip.telfort.nl) has joined #ceph
[13:35] * roozbeh (2ed1dd82@107.161.19.109) has joined #ceph
[13:37] * HappyLoaf (~HappyLoaf@cpc93928-bolt16-2-0-cust133.10-3.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[13:40] <roozbeh> Hi, I have a ceph cluster with rbd block storage. everything works fine but I get big speed swings on both upload and download. Why does it happen and how can I troubleshoot this problem?
[13:40] * Jaska (~Epi@06SAAB0VO.tor-irc.dnsbl.oftc.net) Quit ()
[13:42] <roozbeh> the switch in private network is: D-Link+DGS-1008D with 8 Gb ports
[13:42] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) has joined #ceph
[13:42] * Esvandiary (~basicxman@185.100.85.132) has joined #ceph
[13:45] <Gugge-47527> roozbeh: upload slower :)
[13:45] <Gugge-47527> if you upload slowly enough, you will get the same speed all the time
[13:45] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[13:45] * jclm1 (~jclm@marriott-hotel-ottawa-yowmc.sites.intello.com) Quit (Quit: Leaving.)
[13:47] <roozbeh> Gugge-47527: Yes, upload is always slower than download, but the swing is between 60 KB/s and 70 MB/s for download
[13:48] <Gugge-47527> it was kind of a joke ... meaning if you always use the slowest speed, you wont get any difference
[13:48] <Gugge-47527> How do you download/upload from/to the rbd?
[13:48] * atheism (~atheism@182.48.117.114) Quit (Ping timeout: 480 seconds)
[13:49] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[13:49] <roozbeh> I use rbd on front server which connected to cluster via private network
[13:49] <Gugge-47527> more specifics
[13:50] <Gugge-47527> how do you map it, what filesystem is on it, what operating system is it on, what software shares the files, over what protocol
[13:50] <Gugge-47527> and so on
[13:50] <Tetard> querlas
[13:50] * Tetard facepalms
[13:51] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) Quit (Ping timeout: 480 seconds)
[13:51] * flisky (~Thunderbi@36.110.40.27) Quit (Ping timeout: 480 seconds)
[13:52] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[13:53] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[13:53] <roozbeh> I have 5 physical servers which act as OSDs, a monitor in a virtual environment. all servers based on CentOS 7. all partitions for os and osd based on XFS. all servers connected to an 8-port switch over TCP/IP
[13:53] <roozbeh> Gugge-47527: ^
[13:53] <Gugge-47527> and that was exactly 0% of the stuff i asked :)
[13:54] <roozbeh> I mapped cluster with rbd kernel module
[13:54] <roozbeh> not fuse
[13:55] <roozbeh> also rbd formatted on XFS too
[13:56] * georgem (~Adium@24.114.58.162) has joined #ceph
[13:59] <Gugge-47527> roozbeh: no more info?
[13:59] <roozbeh> what else do you need ?
[14:00] <Gugge-47527> what did i ask for?
[14:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[14:01] <nils_> roozbeh, my advice would be to isolate the components and test them separately, check the disk speed, check the fs speed, check the network speed etc..
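nils_'s isolate-and-measure approach can be sketched as a raw sequential-read check (a hypothetical helper, not a Ceph tool; in practice dd, fio, and iperf are the usual choices for the disk, filesystem, and network layers, and the page cache should be dropped first for the disk numbers to mean anything):

```python
import time

def read_throughput_mb_s(path, chunk_size=4 * 1024 * 1024):
    """Read a file sequentially and report MB/s -- the same idea as
    'dd if=... of=/dev/null bs=4M' for isolating raw disk/filesystem
    speed from the Ceph layers above it."""
    start = time.perf_counter()
    total = 0
    with open(path, "rb") as f:
        while True:
            data = f.read(chunk_size)
            if not data:
                break
            total += len(data)
    elapsed = time.perf_counter() - start
    return total / (1024 * 1024) / max(elapsed, 1e-9)

# Usage idea: measure each layer separately (raw device, filesystem,
# network, then the mapped RBD) and compare; the component whose
# numbers swing is the one to investigate.
```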
[14:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[14:03] * allaok (~allaok@161.106.4.5) has joined #ceph
[14:04] * allaok (~allaok@161.106.4.5) has left #ceph
[14:04] * The1w (~jens@217.195.184.71) has joined #ceph
[14:05] <The1w> Tetard: facepalm?
[14:05] * jclm (~jclm@72.143.50.126) has joined #ceph
[14:05] * georgem (~Adium@24.114.58.162) Quit (Ping timeout: 480 seconds)
[14:06] <The1w> there was a thread on the mailinglist where one had decent speed writing to RBDs but horrible speed when reading..
[14:06] <The1w> it came down to a kernel bug
[14:06] <Gugge-47527> the readahead bug in some kernel versions?
[14:08] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) has joined #ceph
[14:08] <Aim_> hi, i just added a new pool to my cluster, but for some reason the pgp count didn't increase on ceph status
[14:09] <Aim_> is there a command to update pgps?
[14:10] * raarts (~Adium@82-171-243-109.ip.telfort.nl) Quit (Quit: Leaving.)
[14:10] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[14:10] <roozbeh> The1w: I'm using CentOS 7 with 3.10 kernel (default version of CentOS 7)
[14:11] * olqs (~olqs@cpe90-146-85-69.liwest.at) has joined #ceph
[14:11] * Hemanth (~hkumar_@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:11] <roozbeh> nils_: yes, it's a good idea, thanks
[14:11] * pdrakeweb (~pdrakeweb@cpe-65-185-74-239.neo.res.rr.com) has joined #ceph
[14:11] * olqs (~olqs@cpe90-146-85-69.liwest.at) Quit ()
[14:12] * Esvandiary (~basicxman@4MJAAEM0U.tor-irc.dnsbl.oftc.net) Quit ()
[14:12] * adept256 (~Hideous@178-175-128-50.static.host) has joined #ceph
[14:13] <nils_> the rbd kernel module seems to be rather unmaintained, doesn't support a lot of the features. I don't know if the qemu stuff works better though.
[14:13] * itamarl is now known as Guest2577
[14:13] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[14:14] <The1w> Gugge-47527: indeed
[14:15] * Guest2577 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[14:18] * itamarl is now known as Guest2578
[14:18] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[14:22] * Guest2578 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[14:25] <roozbeh> nils_: I have a mon node and I installed that as a VM on SSD. does it cause this problem?
[14:26] <nils_> roozbeh, not likely since the client will contact the osd servers directly for the data.
[14:31] * itamarl is now known as Guest2580
[14:31] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[14:36] * Guest2580 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[14:36] * itamarl is now known as Guest2581
[14:36] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[14:38] * wgao (~wgao@106.120.101.38) Quit (Ping timeout: 480 seconds)
[14:38] * wgao (~wgao@106.120.101.38) has joined #ceph
[14:41] * Guest2581 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[14:42] <Be-El> are there any good OSD performance tuning guides? we have new hosts with 6 TB SAS hard disks and P3700 NVMe PCIe SSDs as journals, and their read performance is somewhat disappointing
[14:42] * ade (~abradshaw@85.158.226.30) Quit (Ping timeout: 480 seconds)
[14:42] * adept256 (~Hideous@7V7AAEANB.tor-irc.dnsbl.oftc.net) Quit ()
[14:42] * TomyLobo (~Yopi@06SAAB0ZR.tor-irc.dnsbl.oftc.net) has joined #ceph
[14:44] * narthollis (~Frymaster@destiny.enn.lu) has joined #ceph
[14:45] * penguinRaider (~KiKs@146.185.31.226) Quit (Remote host closed the connection)
[14:46] * roozbeh (2ed1dd82@107.161.19.109) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[14:47] <[arx]> the journal isn't part of the i/o path for reads
[14:48] * georgem (~Adium@206.108.127.16) has joined #ceph
[14:48] <Be-El> i know. write performance is another issue
[14:49] <Gugge-47527> Be-El: somewhat disappointing isnt telling much, as we dont know what you expect :)
[14:50] <Gugge-47527> what is the read MB/s and iops you get from the drives?
[14:51] * EinstCrazy (~EinstCraz@180.152.117.239) has joined #ceph
[14:51] <Be-El> i'm currently running some benchmarks. raw rados performance (rados bench rand --no-cleanup) gives ~370 MB/s after dropping caches on the storage hosts and running with a default of 16 threads
[14:53] <Gugge-47527> that isnt telling much either, without knowing how many OSDs you have :)
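For reference, a rough sketch of the kind of rados bench run being discussed above: write objects first, then measure cold-cache random reads with 16 threads. The pool name and 60s runtimes are assumptions; with DRY_RUN=1 (the default here) the commands are only printed, not executed.

```shell
#!/bin/sh
# Hedged sketch of a rados bench run like the one quoted above.
# Pool name "bench" and the 60s runtimes are assumptions.
POOL=bench
DRY_RUN=${DRY_RUN:-1}

run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# populate the pool and keep the objects around for the read test
run rados bench -p "$POOL" 60 write -t 16 --no-cleanup
# drop page caches on the storage hosts between runs for cold-cache numbers
run sh -c 'echo 3 > /proc/sys/vm/drop_caches'
# random-read phase against the objects written above
run rados bench -p "$POOL" 60 rand -t 16
# clean up the benchmark objects afterwards
run rados -p "$POOL" cleanup
```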
[14:53] * BlackDex (~BlackDex@ori.vyus.nl) Quit (Ping timeout: 480 seconds)
[14:53] * deepthi (~deepthi@115.118.31.109) Quit (Ping timeout: 480 seconds)
[14:54] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[14:55] * Miouge (~Miouge@188.189.93.29) Quit (Quit: Miouge)
[14:56] * Miouge (~Miouge@188.189.93.29) has joined #ceph
[14:58] * itamarl is now known as Guest2582
[14:58] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[14:58] <Be-El> 9 hosts with 12 hdd OSDs and up to 4 SSD OSDs, using 2x E5-2680 and 128GB RAM
[14:58] <Be-El> standard replicated pool setup with ceph 0.94.6
[14:59] * mhackett (~mhack@nat-pool-bos-u.redhat.com) has joined #ceph
[15:00] * roozbeh (2ed1dd82@107.161.19.109) has joined #ceph
[15:00] * csoukup (~csoukup@159.140.254.108) has joined #ceph
[15:01] * Guest2582 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[15:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[15:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[15:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[15:02] * ade (~abradshaw@85.158.226.30) has joined #ceph
[15:04] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[15:04] * itamarl is now known as Guest2583
[15:04] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[15:04] <Be-El> fio --rw=foo --fallocate=posix --size=20G with 4k block size on one of the OSD disks running btrfs with lzo compression gives about 120MB/s and about 30k IOPS
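A concrete fio invocation of the sort Be-El describes (the `--rw=foo` above is clearly a placeholder for the access pattern). The target path, job name, and the libaio/iodepth settings are assumptions; DRY_RUN=1 only prints the command.

```shell
#!/bin/sh
# Hedged sketch of a 4k random-read fio job like the one quoted above.
# Target path, job name, and libaio/iodepth settings are assumptions.
TARGET=/var/lib/ceph/osd/ceph-0/fio-test
DRY_RUN=${DRY_RUN:-1}

run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run fio --name=randread --filename="$TARGET" \
        --rw=randread --bs=4k --size=20G --fallocate=posix \
        --ioengine=libaio --direct=1 --iodepth=32 --group_reporting
```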
[15:05] * itamarl (~itamarl@194.90.7.244) Quit ()
[15:05] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[15:06] * Guest2583 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[15:06] * roozbeh (2ed1dd82@107.161.19.109) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[15:08] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[15:09] * thomnico (~thomnico@2a01:e35:8b41:120:5145:b80f:6a00:6a05) Quit (Ping timeout: 480 seconds)
[15:10] * thomnico (~thomnico@2a01:e35:8b41:120:6dd5:a2a6:75fb:b84c) has joined #ceph
[15:11] * itamarl (~itamarl@194.90.7.244) Quit (Quit: itamarl)
[15:12] * EinstCrazy (~EinstCraz@180.152.117.239) Quit (Remote host closed the connection)
[15:12] * TomyLobo (~Yopi@06SAAB0ZR.tor-irc.dnsbl.oftc.net) Quit ()
[15:12] * uhtr5r (~Eman@D57E4E3B.static.ziggozakelijk.nl) has joined #ceph
[15:14] * narthollis (~Frymaster@6AGAABLVT.tor-irc.dnsbl.oftc.net) Quit ()
[15:15] * galaxyAbstractor (~Helleshin@ori.enn.lu) has joined #ceph
[15:15] * garphy is now known as garphy`aw
[15:16] * garphy`aw is now known as garphy
[15:16] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Ping timeout: 480 seconds)
[15:17] * The1w (~jens@217.195.184.71) Quit (Remote host closed the connection)
[15:22] * mhackett (~mhack@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[15:22] * shaunm (~shaunm@74.83.215.100) has joined #ceph
[15:24] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[15:25] * atheism (~atheism@106.38.140.252) has joined #ceph
[15:29] <Be-El> osd bench is ok, rados bench also gives acceptable results. exporting an existing rbd gives good performance, too
[15:29] <Be-El> cephfs operations on the other hand are way too slow (both ceph-fuse and kernel ceph)
[15:30] * overclk (~quassel@121.244.87.117) Quit (Remote host closed the connection)
[15:30] * blizzow (~jburns@50.243.148.102) has joined #ceph
[15:31] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[15:31] * mhackett (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[15:32] <blizzow> Is it possible to stop ceph services on an OSD, wipe the /var/lib/ceph/osd/ceph-X/current directory, restart ceph services and have the OSD rebuild/resync?
[15:32] <Be-El> blizzow: i would propose to remove the OSD from the cluster and re-add it after zapping
[15:34] * mattbenjamin (~mbenjamin@aa2.linuxbox.com) has joined #ceph
[15:36] <blizzow> seems like a pain in the paunch.
[15:38] <Be-El> or you need another way to recreate the OSD metadata (which is also stored in the current directory)
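The remove-and-re-add procedure Be-El suggests looks roughly like this on a hammer-era (0.94) cluster. The OSD id, device, and sysvinit service syntax are assumptions; DRY_RUN=1 only prints the commands.

```shell
#!/bin/sh
# Hedged sketch of removing an OSD and re-adding it after zapping,
# per the suggestion above. OSD id and device are assumptions.
OSD=12
DEV=/dev/sdc
DRY_RUN=${DRY_RUN:-1}

run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run ceph osd out "$OSD"                # start draining data off the OSD
run service ceph stop "osd.$OSD"       # hammer-era sysvinit; adjust for systemd
run ceph osd crush remove "osd.$OSD"   # drop it from the CRUSH map
run ceph auth del "osd.$OSD"           # remove its cephx key
run ceph osd rm "$OSD"                 # delete the OSD entry itself
run ceph-disk zap "$DEV"               # wipe the partition table
run ceph-disk prepare "$DEV"           # re-create; udev activates the new OSD
```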
[15:40] * huangjun (~kvirc@117.151.49.215) has joined #ceph
[15:42] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[15:42] * uhtr5r (~Eman@06SAAB01B.tor-irc.dnsbl.oftc.net) Quit ()
[15:44] * galaxyAbstractor (~Helleshin@4MJAAEM2T.tor-irc.dnsbl.oftc.net) Quit ()
[15:45] * raindog1 (~SquallSee@4MJAAEM3K.tor-irc.dnsbl.oftc.net) has joined #ceph
[15:45] * EinstCrazy (~EinstCraz@180.152.117.239) has joined #ceph
[15:49] * aarontc (~aarontc@2001:470:e893::1:1) Quit (Quit: Bye!)
[15:52] * wwdillingham (~LobsterRo@140.247.242.44) has joined #ceph
[15:53] * EinstCrazy (~EinstCraz@180.152.117.239) Quit (Ping timeout: 480 seconds)
[15:56] <xcezzz> blizzow: https://ceph.com/community/incomplete-pgs-oh-my/
[16:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[16:02] * huangjun|2 (~kvirc@117.151.49.215) has joined #ceph
[16:04] * allaok (~allaok@161.106.4.5) has joined #ceph
[16:04] <m0zes> any thoughts on how to fix a scrub error in an ec pool? 34.260s0 deep-scrub stat mismatch, got 52904/52905 objects, 0/0 clones, 52904/52905 dirty, 0/0 omap, 0/0 hit_set_archive, 0/0 whiteouts, 143977902783/143977902783 bytes, 0/0 hit_set_archive bytes.
[16:04] <m0zes> I've tried a repair and it doesn't seem to do anything.
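For m0zes's stat mismatch, the usual sequence is a fresh deep-scrub followed by a repair if the mismatch persists. The PG id is taken from the paste above; DRY_RUN=1 only prints the commands.

```shell
#!/bin/sh
# Hedged sketch: re-run a deep-scrub on the inconsistent PG, then repair it
# if the stat mismatch persists. PG id is from the log line above.
PG=34.260
DRY_RUN=${DRY_RUN:-1}

run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run ceph pg deep-scrub "$PG"   # re-check; transient mismatches sometimes clear
run ceph health detail         # confirm which PG is still inconsistent
run ceph pg repair "$PG"       # ask the primary to repair the PG
```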
[16:05] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[16:06] * roozbeh (2ed1dd82@107.161.19.109) has joined #ceph
[16:08] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:09] * huangjun (~kvirc@117.151.49.215) Quit (Ping timeout: 480 seconds)
[16:09] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Ping timeout: 480 seconds)
[16:10] * roozbeh (2ed1dd82@107.161.19.109) Quit ()
[16:12] * bildramer (~andrew_m@edwardsnowden2.torservers.net) has joined #ceph
[16:13] * bene2 (~bene@2601:18c:8501:41d0:ea2a:eaff:fe08:3c7a) has joined #ceph
[16:13] * bene2 (~bene@2601:18c:8501:41d0:ea2a:eaff:fe08:3c7a) Quit ()
[16:14] <simonada> is it possible to use ceph-objectstore-tool to import objects from one OSD into another OSD?
[16:14] * raindog1 (~SquallSee@4MJAAEM3K.tor-irc.dnsbl.oftc.net) Quit ()
[16:15] * Xa (~loft@tor-exit.eecs.umich.edu) has joined #ceph
[16:17] * roozbeh (2ed1dd82@107.161.19.109) has joined #ceph
[16:18] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[16:18] * itamarl (~itamarl@194.90.7.244) Quit ()
[16:18] * Skaag1 (~lunix@rrcs-67-52-140-5.west.biz.rr.com) Quit (Quit: Leaving.)
[16:19] * allaok (~allaok@161.106.4.5) Quit (Quit: Leaving.)
[16:19] * allaok (~allaok@161.106.4.5) has joined #ceph
[16:20] <simonada> blizzow I tried to do that once but it didn't work for me, so I don't think that will work. it's usually recommended to use the ceph tools for this kind of procedure
[16:20] * Skaag (~lunix@rrcs-67-52-140-5.west.biz.rr.com) has joined #ceph
[16:20] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[16:22] * roozbeh (2ed1dd82@107.161.19.109) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[16:23] <blizzow> That's what I really like about HDFS/elasticsearch. Just blow away the data directories while services are stopped and the things will recover.
[16:26] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[16:26] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[16:28] * huangjun (~kvirc@117.151.49.215) has joined #ceph
[16:30] * tophyr (~textual@199.201.64.3) has joined #ceph
[16:31] * infernix (nix@000120cb.user.oftc.net) has joined #ceph
[16:31] * daviddcc (~dcasier@LAubervilliers-656-1-16-160.w217-128.abo.wanadoo.fr) has joined #ceph
[16:32] * tophyr (~textual@199.201.64.3) Quit ()
[16:33] * dsl (~dsl@mobile-107-107-191-210.mycingular.net) has joined #ceph
[16:34] * huangjun|2 (~kvirc@117.151.49.215) Quit (Ping timeout: 480 seconds)
[16:34] * huangjun|2 (~kvirc@117.151.49.215) has joined #ceph
[16:34] * Racpatel (~Racpatel@2601:87:3:3601::6f15) has joined #ceph
[16:39] <wwdillingham> is rbd mirror pool peer set ("Update mirroring peer settings.") simply used to update the UUID of the foreign peer or are there other settings which i can manipulate?
[16:39] * dsl (~dsl@mobile-107-107-191-210.mycingular.net) Quit (Remote host closed the connection)
[16:40] * huangjun (~kvirc@117.151.49.215) Quit (Ping timeout: 480 seconds)
[16:42] * bildramer (~andrew_m@4MJAAEM4F.tor-irc.dnsbl.oftc.net) Quit ()
[16:42] * Jase (~zviratko@06SAAB05K.tor-irc.dnsbl.oftc.net) has joined #ceph
[16:43] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[16:44] * jclm1 (~jclm@72.143.50.126) has joined #ceph
[16:44] * Xa (~loft@06SAAB039.tor-irc.dnsbl.oftc.net) Quit ()
[16:45] * Salamander_1 (~SquallSee@6AGAABLZ8.tor-irc.dnsbl.oftc.net) has joined #ceph
[16:45] * thomnico (~thomnico@2a01:e35:8b41:120:6dd5:a2a6:75fb:b84c) Quit (Remote host closed the connection)
[16:45] * allaok (~allaok@161.106.4.5) Quit (Quit: Leaving.)
[16:45] * allaok (~allaok@161.106.4.5) has joined #ceph
[16:45] * yanzheng (~zhyan@118.116.113.70) Quit (Quit: This computer has gone to sleep)
[16:46] * Hemanth (~hkumar_@103.228.221.191) has joined #ceph
[16:47] * TheBall (~pi@20.92-221-43.customer.lyse.net) Quit (Read error: Connection reset by peer)
[16:47] * The_BallPI (~pi@20.92-221-43.customer.lyse.net) has joined #ceph
[16:48] * branto (~branto@nat-pool-brq-t.redhat.com) Quit (Quit: Leaving.)
[16:49] * dougf_ (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) Quit (Quit: bye)
[16:49] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) has joined #ceph
[16:49] * jclm (~jclm@72.143.50.126) Quit (Ping timeout: 480 seconds)
[16:50] * squizzi (~squizzi@107.13.31.195) has joined #ceph
[16:50] * neurodrone (~neurodron@158.106.193.162) Quit (Quit: neurodrone)
[16:51] * neurodrone (~neurodron@158.106.193.162) has joined #ceph
[16:51] * allaok (~allaok@161.106.4.5) has left #ceph
[16:52] * dugravot61 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) Quit (Quit: Leaving.)
[16:52] * dgurtner (~dgurtner@c-75-74-127-185.hsd1.fl.comcast.net) Quit (Ping timeout: 480 seconds)
[16:56] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[16:56] * joshd1 (~jdurgin@71-92-201-212.dhcp.gldl.ca.charter.com) has joined #ceph
[16:59] * xarses (~xarses@64.124.158.100) has joined #ceph
[17:01] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) Quit (Ping timeout: 480 seconds)
[17:02] * atheism (~atheism@106.38.140.252) Quit (Ping timeout: 480 seconds)
[17:02] * reed (~reed@75-101-54-18.dsl.static.fusionbroadband.com) has joined #ceph
[17:05] * atheism (~atheism@106.38.140.252) has joined #ceph
[17:08] * Skaag1 (~lunix@rrcs-67-52-140-5.west.biz.rr.com) has joined #ceph
[17:08] * Skaag (~lunix@rrcs-67-52-140-5.west.biz.rr.com) Quit (Read error: Connection reset by peer)
[17:08] * BlackDex (~BlackDex@ori.vyus.nl) has joined #ceph
[17:10] * BlackDex (~BlackDex@ori.vyus.nl) Quit (Remote host closed the connection)
[17:10] * Skaag1 (~lunix@rrcs-67-52-140-5.west.biz.rr.com) Quit (Read error: Connection reset by peer)
[17:10] * Skaag (~lunix@rrcs-67-52-140-5.west.biz.rr.com) has joined #ceph
[17:12] * Jase (~zviratko@06SAAB05K.tor-irc.dnsbl.oftc.net) Quit ()
[17:12] * n0x1d1 (~straterra@atlantic480.us.unmetered.com) has joined #ceph
[17:13] * batrick (~batrick@2600:3c00::f03c:91ff:fe96:477b) Quit (Quit: WeeChat 1.4)
[17:14] * Salamander_1 (~SquallSee@6AGAABLZ8.tor-irc.dnsbl.oftc.net) Quit ()
[17:15] * IvanJobs (~hardes@103.50.11.146) Quit (Ping timeout: 480 seconds)
[17:17] * allaok (~allaok@161.106.4.5) has joined #ceph
[17:18] * batrick (~batrick@2600:3c00::f03c:91ff:fe96:477b) has joined #ceph
[17:18] * wushudoin (~wushudoin@38.99.12.237) has joined #ceph
[17:21] * Skaag (~lunix@rrcs-67-52-140-5.west.biz.rr.com) Quit (Quit: Leaving.)
[17:25] * overclk (~quassel@117.202.97.179) has joined #ceph
[17:26] * ade (~abradshaw@85.158.226.30) Quit (Ping timeout: 480 seconds)
[17:28] * atheism (~atheism@106.38.140.252) Quit (Ping timeout: 480 seconds)
[17:28] * Concubidated1 (~cube@c-50-173-245-118.hsd1.ca.comcast.net) has joined #ceph
[17:28] * Concubidated (~cube@66.87.134.197) Quit (Read error: Connection reset by peer)
[17:29] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[17:32] * ade (~abradshaw@62.218.20.250) has joined #ceph
[17:35] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[17:36] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:36] * kefu (~kefu@114.92.122.74) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[17:39] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[17:41] * kefu (~kefu@114.92.122.74) has joined #ceph
[17:42] * n0x1d1 (~straterra@6AGAABL05.tor-irc.dnsbl.oftc.net) Quit ()
[17:42] * delcake (~Shesh@cloud.tor.ninja) has joined #ceph
[17:45] * offer (~Spessu@06SAAB079.tor-irc.dnsbl.oftc.net) has joined #ceph
[17:47] * Miouge (~Miouge@188.189.93.29) Quit (Ping timeout: 480 seconds)
[17:49] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[17:52] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Quit: Bye guys!)
[17:55] * bara (~bara@213.175.37.12) has joined #ceph
[18:00] * th0m_ (~smuxi@static-qvn-qvu-164067.business.bouyguestelecom.com) has joined #ceph
[18:00] <th0m_> hi everybody
[18:00] <th0m_> i have a very severe problem on my ceph cluster in production
[18:00] <th0m_> all my monitor are killed
[18:00] <th0m_> ...
[18:01] <th0m_> my cluster seems blocked
[18:03] <th0m_> traceback from my monitor : http://pastebin.com/TQf4FECp
[18:04] * huangjun|2 (~kvirc@117.151.49.215) Quit (Ping timeout: 480 seconds)
[18:10] * penguinRaider (~KiKo@14.139.82.6) has joined #ceph
[18:11] * allaok (~allaok@161.106.4.5) Quit (Quit: Leaving.)
[18:12] * delcake (~Shesh@76GAAE044.tor-irc.dnsbl.oftc.net) Quit ()
[18:13] * linuxkidd (~linuxkidd@241.sub-70-210-192.myvzw.com) has joined #ceph
[18:13] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:13] * rraja (~rraja@121.244.87.117) Quit (Ping timeout: 480 seconds)
[18:14] * offer (~Spessu@06SAAB079.tor-irc.dnsbl.oftc.net) Quit ()
[18:18] * ade (~abradshaw@62.218.20.250) Quit (Quit: Too sexy for his shirt)
[18:25] * joshd1 (~jdurgin@71-92-201-212.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[18:26] * pabluk_ is now known as pabluk__
[18:28] * kefu (~kefu@114.92.122.74) Quit (Read error: Connection reset by peer)
[18:28] * kefu (~kefu@114.92.122.74) has joined #ceph
[18:30] * dvanders (~dvanders@dvanders-pro.cern.ch) Quit (Ping timeout: 480 seconds)
[18:30] * simonada (~oftc-webi@static.ip-171-033-130-093.signet.nl) Quit (Quit: Page closed)
[18:32] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) Quit (Quit: Leaving.)
[18:32] * shylesh__ (~shylesh@45.124.227.2) has joined #ceph
[18:40] * dsl (~dsl@mobile-107-107-185-20.mycingular.net) has joined #ceph
[18:42] * MJXII (~Aethis@192.42.115.101) has joined #ceph
[18:45] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[18:46] * dsl (~dsl@mobile-107-107-185-20.mycingular.net) Quit (Remote host closed the connection)
[18:50] * dgurtner (~dgurtner@c-75-74-127-185.hsd1.fl.comcast.net) has joined #ceph
[18:53] * Hemanth (~hkumar_@103.228.221.191) Quit (Ping timeout: 480 seconds)
[18:54] * TMM (~hp@185.5.122.2) Quit (Quit: Ex-Chat)
[19:02] * daviddcc (~dcasier@LAubervilliers-656-1-16-160.w217-128.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[19:03] * Skaag (~lunix@rrcs-67-52-140-5.west.biz.rr.com) has joined #ceph
[19:04] * kefu (~kefu@114.92.122.74) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[19:06] * wwdillingham_ (~LobsterRo@65.112.8.202) has joined #ceph
[19:08] * DanFoster (~Daniel@office.34sp.com) Quit (Quit: Leaving)
[19:09] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) Quit (Quit: Leaving)
[19:10] * sugoruyo (~georgev@paarthurnax.esc.rl.ac.uk) has joined #ceph
[19:12] <sugoruyo> hey folks, I have a cluster with 4 PGs stuck incomplete and wondering how to get them working again
[19:12] * MJXII (~Aethis@76GAAE06K.tor-irc.dnsbl.oftc.net) Quit ()
[19:12] * wwdillingham (~LobsterRo@140.247.242.44) Quit (Ping timeout: 480 seconds)
[19:12] * wwdillingham_ is now known as wwdillingham
[19:14] * bara (~bara@213.175.37.12) Quit (Ping timeout: 480 seconds)
[19:14] * ZombieL (~ricin@exit.tor.uwaterloo.ca) has joined #ceph
[19:15] * wjw-freebsd (~wjw@176.74.240.1) Quit (Ping timeout: 480 seconds)
[19:22] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[19:27] * dvanders (~dvanders@46.227.20.178) has joined #ceph
[19:29] * bene2 (~bene@2601:18c:8501:41d0:ea2a:eaff:fe08:3c7a) has joined #ceph
[19:31] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[19:36] * overclk (~quassel@117.202.97.179) Quit (Quit: No Ping reply in 180 seconds.)
[19:37] * overclk (~quassel@117.202.97.179) has joined #ceph
[19:38] * overclk (~quassel@117.202.97.179) Quit (Remote host closed the connection)
[19:41] * dgurtner (~dgurtner@c-75-74-127-185.hsd1.fl.comcast.net) Quit (Ping timeout: 480 seconds)
[19:44] * ZombieL (~ricin@06SAAB1BK.tor-irc.dnsbl.oftc.net) Quit ()
[19:45] * KeeperOfTheSoul (~Xerati@tor-exit.gansta93.com) has joined #ceph
[19:45] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[19:45] * jclm (~jclm@72.143.50.126) has joined #ceph
[19:45] * sileht (~sileht@gizmo.sileht.net) Quit (Ping timeout: 480 seconds)
[19:47] * sileht (~sileht@gizmo.sileht.net) has joined #ceph
[19:48] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[19:51] * jclm1 (~jclm@72.143.50.126) Quit (Ping timeout: 480 seconds)
[19:55] <wwdillingham> my rbd-mirror daemon has gotten into a split-brain situation, I did a demote on the backup cluster and forced a resync on all objects. My struggle is trying to determine which images are split-brain; the rbd-mirror daemon log at log level 10 shows this:
[19:55] <wwdillingham> 2016-05-03 13:54:31.289666 7f9826ffd700 -1 rbd::mirror::image_replayer::BootstrapRequest: 0x7f97d0018730 handle_get_remote_tags: split-brain detected -- skipping image replay
[19:55] <wwdillingham> 2016-05-03 13:54:31.317695 7f9826ffd700 10 rbd::mirror::image_replayer::BootstrapRequest: 0x7f97d0009500 handle_get_remote_tags: decoded remote tag: [mirror_uuid=, predecessor_mirror_uuid=]
[19:55] <wwdillingham> i am using the rbd-mirroring at the pool level.
[19:57] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[19:58] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[20:04] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[20:06] * shylesh__ (~shylesh@45.124.227.2) Quit (Remote host closed the connection)
[20:14] * KeeperOfTheSoul (~Xerati@6AGAABL6A.tor-irc.dnsbl.oftc.net) Quit ()
[20:15] * drdanick1 (~darks@06SAAB1DK.tor-irc.dnsbl.oftc.net) has joined #ceph
[20:15] * georgem (~Adium@206.108.127.16) Quit (Read error: Connection reset by peer)
[20:15] * georgem (~Adium@206.108.127.16) has joined #ceph
[20:17] * Hemanth (~hkumar_@103.228.221.191) has joined #ceph
[20:20] * i_m (~ivan.miro@88.206.104.168) Quit (Ping timeout: 480 seconds)
[20:25] * wgao (~wgao@106.120.101.38) Quit (Ping timeout: 480 seconds)
[20:32] * i_m (~ivan.miro@88.206.104.168) has joined #ceph
[20:32] * chasmo77 (~chas77@158.183-62-69.ftth.swbr.surewest.net) Quit (Quit: It's just that easy)
[20:33] * vata (~vata@207.96.182.162) has joined #ceph
[20:35] * wgao (~wgao@106.120.101.38) has joined #ceph
[20:39] <jdillaman> wwdillingham: any clue about how you got split-brain?
[20:40] * ircuser-1 (~Johnny@158.183-62-69.ftth.swbr.surewest.net) has joined #ceph
[20:42] * raindog (~Inuyasha@4MJAAENDC.tor-irc.dnsbl.oftc.net) has joined #ceph
[20:44] * drdanick1 (~darks@06SAAB1DK.tor-irc.dnsbl.oftc.net) Quit ()
[20:45] * anadrom (~Crisco@128.153.145.125) has joined #ceph
[20:57] * Miouge (~Miouge@188.189.93.29) has joined #ceph
[20:58] * Hemanth (~hkumar_@103.228.221.191) Quit (Ping timeout: 480 seconds)
[20:58] * georgem (~Adium@206.108.127.16) Quit (Read error: Connection reset by peer)
[20:59] * georgem (~Adium@206.108.127.16) has joined #ceph
[21:01] * Hemanth (~hkumar_@103.228.221.191) has joined #ceph
[21:01] * dgurtner (~dgurtner@c-75-74-127-185.hsd1.fl.comcast.net) has joined #ceph
[21:07] <wwdillingham> jdillaman: no, I did start and kill quite a few VMs though before this happened
[21:08] <jdillaman> wwdillingham: running "rbd mirror image info <image-spec>" on both clusters shows only one primary?
[21:08] * ibravo (~ibravo@72.83.69.64) has joined #ceph
[21:08] <th0m_> hi everybody
[21:08] <wwdillingham> jdillaman: also, im unclear as to whether i should be enabling the journaling feature on my remote peer pool, just on my primary or on both
[21:08] <th0m_> i have a very severe problem on my ceph cluster in production
[21:08] <wwdillingham> it appears the number of rbd devices with journaling enabled on the peer cluster corresponds to the number of images being skipped because of split brain
[21:08] <th0m_> osd down still seen up ...
[21:09] <wwdillingham> jdillaman: I will check
[21:09] <jdillaman> wwdillingham: when rbd-mirror pulls the primary image from the remote cluster to the local cluster it will create an image with journaling enabled
[21:09] <jdillaman> wwdillingham: only images with journaling enabled will be able to be mirrored
[21:10] <wwdillingham> jdillaman: I thought that to be true, but wasn't sure, as i believe i read that the primary is inferred based on which peer image had journaling enabled (i never explicitly set the primary)
[21:11] <jdillaman> wwdillingham: primary is implicitly defined when you create the image -- the other (created by rbd-mirror) is implicitly non-primary since it was created from the primary
[21:11] <jdillaman> ... technically implicitly defined when you enable journaling
[21:12] <wwdillingham> so i have a cluster now that is dedicated entirely to receiving rbd-mirrors as a backup, should that cluster not have journaling enabled as a rbd_default_feature ?
[21:12] * m0zes (~mozes@ns1.beocat.ksu.edu) Quit (Ping timeout: 480 seconds)
[21:12] * dvanders (~dvanders@46.227.20.178) Quit (Ping timeout: 480 seconds)
[21:12] * raindog (~Inuyasha@4MJAAENDC.tor-irc.dnsbl.oftc.net) Quit ()
[21:12] * jacoo (~hifi@5.254.102.185) has joined #ceph
[21:12] <wwdillingham> when it begins to receive backups though, it will create the journal?
[21:12] * aarontc (~aarontc@2001:470:e893::1:1) has joined #ceph
[21:13] <wwdillingham> ive enabled journaling on all rbd devices on my primary cluster
[21:14] <wwdillingham> jdillaman: comparing two rbd devices in the primary / backup cluster (the mirroring primary true/false flag is as expected)
[21:14] <jdillaman> wwdillingham: you can enable journaling via rbd_default_features on your backup cluster, it won't affect the non-primary status of an image when rbd-mirror creates them
[21:14] * anadrom (~Crisco@06SAAB1EK.tor-irc.dnsbl.oftc.net) Quit ()
[21:15] * nih (~cheese^@6AGAABL8U.tor-irc.dnsbl.oftc.net) has joined #ceph
[21:15] <jdillaman> wwdillingham: when you say you did a "demote" on the backup images, you just ran "rbd mirror image demote <image-spec>"?
[21:15] * i_m (~ivan.miro@88.206.104.168) Quit (Remote host closed the connection)
[21:15] <wwdillingham> jdillaman: yes, and I did that to all images in the pool in a little loop
[21:15] * m0zes (~mozes@ns1.beocat.ksu.edu) has joined #ceph
[21:16] <wwdillingham> jdillaman: is there a way to get a list of which rbd images are split brain?
[21:17] * rendar (~I@host150-180-dynamic.246-95-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[21:17] <jdillaman> wwdillingham: we just added support for collecting replication status on a pool and per image basis to master. we'll get it backported to jewel hopefully in the next point release
[21:18] * lj (~liujun@111.202.176.44) Quit (Read error: Connection reset by peer)
[21:20] * rendar (~I@host150-180-dynamic.246-95-r.retail.telecomitalia.it) has joined #ceph
[21:20] <jdillaman> wwdillingham: any chance you can pastebin a larger section of your rbd-mirror log with "debug rbd_mirror = 20"?
[21:21] <wwdillingham> jdillaman: sure, im currently running in foreground, but will start logging to a file and crank it to 20
[21:21] * DeMiNe0_ (~DeMiNe0@104.131.119.74) Quit (Ping timeout: 480 seconds)
[21:21] * DeMiNe0 (~DeMiNe0@104.131.119.74) has joined #ceph
[21:21] * csoukup (~csoukup@159.140.254.108) Quit (Ping timeout: 480 seconds)
[21:21] <wwdillingham> jdillaman: also, the only daemon complaining with any errors is the non-primary daemon
[21:22] <jdillaman> wwdillingham: the daemons just pull updates from one another -- in this case, you have no primary images on your backup cluster so the rbd-mirror daemon on the primary cluster is idle
[21:26] <wwdillingham> jdillaman: so in theory could i do pool level replication and have some images be the primary and some be the secondary ?
[21:27] <wwdillingham> two-way mirroring lets say
[21:27] <jdillaman> wwdillingham: yup -- split the load if you want
[21:27] <jdillaman> ... plus helps to support failback
[21:28] <jdillaman> (do it incrementally)
[21:28] <wwdillingham> jdillaman: while i have you on the line, a few more questions… is it required that the pool names be identical for primary and backup ?
[21:29] <jdillaman> wwdillingham: yes, pool names must match 1-to-1 between the peers
[21:29] <jdillaman> wwdillingham: ... but you can have non-replicated pools that exist only on one cluster
[21:29] <wwdillingham> also, in my case i will always be replicating one way; my backup cluster will only be for disaster recovery, clients only ever write to the primary…
[21:30] <wwdillingham> what happens when the backup cluster goes offline, how will that impact my primary rbd devices on the primary cluster?
[21:31] <wwdillingham> i presume the rbd_journal increases in size until it can flush to its backup, but other than this size increase requirement will it have any impact on my primary rbd device being able to continue to receive writes?
[21:31] <jdillaman> wwdillingham: right now the journal will continue to be filled until the backup cluster ACKs the events
[21:31] <jdillaman> wwdillingham: should be no impact to the primary cluster's writes
[21:32] <jdillaman> wwdillingham: have a future feature on being able to "disconnect" very laggy peers from the journal to prevent eating up unbounded space
[21:33] <wwdillingham> jdillaman: thanks, that was my next question
[21:34] <wwdillingham> jdillaman: is the journal used exclusively for mirroring? in the meantime, if we have an unbounded space problem, could i delete the rbd_journal object and not impact my primary rbd device?
[21:34] <jdillaman> wwdillingham: yes -- right now it's exclusively used for mirroring. you can just disable the journaling feature to remove the journal
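Dropping a journal by disabling the feature, as jdillaman describes, is a one-liner per image. The pool/image name below is an assumption; DRY_RUN=1 only prints the commands.

```shell
#!/bin/sh
# Hedged sketch: disabling the journaling feature removes the image's
# journal objects. Pool/image name is an assumption.
IMG=rbd/vm-disk-1
DRY_RUN=${DRY_RUN:-1}

run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run rbd feature disable "$IMG" journaling   # deletes the journal objects
run rbd feature enable  "$IMG" journaling   # re-enable later to resume mirroring
```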
[21:36] <jdillaman> wwdillingham: when you ran "rbd mirror image demote <image-spec>" on the backup cluster images, you didn't get an error stating that they are not primary?
[21:36] <wwdillingham> jdillaman: thanks. One more question if you don't mind. Let's say I have rbd devices which aren't doing anything (vms powered off); will those lack an rbd journal because of no writes and therefore not be mirrored until they begin to get IO, or would they be mirrored in their consistent state?
[21:37] <wwdillingham> jdillaman: no, they all said "scheduling sync" or something to that effect
[21:37] <jdillaman> wwdillingham: the journal is sparsely allocated -- it's actually striped over a collection of objects that are created as events are added and older journal objects fill up
[21:38] <wwdillingham> jdillaman: i cant say that with 100% certainty
[21:38] <jdillaman> wwdillingham: oh, you also ran "rbd mirror image resync <image-spec>"?
[21:38] <wwdillingham> oooo, sorry i misread
[21:38] <wwdillingham> yes, on the backup cluster i attempted to demote all of the images
[21:39] <wwdillingham> but i got a bunch of errors saying they werent the primary
[21:39] <wwdillingham> i think (on the bu cluster) did a resync
[21:39] <jdillaman> ok -- that is the expected result if you try to demote a non-primary image
[21:39] <jdillaman> (it ignores your request)
[21:39] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[21:39] <wwdillingham> / i then / i think /
[21:40] <jdillaman> probably the resync that caused the split-brain -- you can force promote them, delete them, and it will recover
[21:40] <wwdillingham> for i in `rbd --cluster bu --id backup_receiver ls ox60_root_disk`; do rbd --cluster bu --id backup_receiver mirror image demote ox60_root_disk/$i; done
[21:41] <wwdillingham> for i in `rbd --cluster bu --id backup_receiver ls ox60_root_disk`; do rbd --cluster bu --id backup_receiver mirror image resync ox60_root_disk/$i && sleep 20; done
[21:41] <wwdillingham> this is what i did
[21:41] * dgurtner (~dgurtner@c-75-74-127-185.hsd1.fl.comcast.net) Quit (Ping timeout: 480 seconds)
[21:42] * jacoo (~hifi@06SAAB1FE.tor-irc.dnsbl.oftc.net) Quit ()
[21:42] <jdillaman> thanks, that's probably enough to go on
[21:43] <jdillaman> you shouldn't ever need to demote / resync in this case
[21:43] <wwdillingham> jdillaman: because all of my primaries are on one cluster and all my primaries on the other?
[21:44] <wwdillingham> err all of my backups on one and primaries on the other
[21:44] <jdillaman> wwdillingham: you would only ever run "rbd mirror image demote" against a primary image if you were attempting to perform an orderly failover to your other cluster
[21:44] * nih (~cheese^@6AGAABL8U.tor-irc.dnsbl.oftc.net) Quit ()
[21:45] <jdillaman> wwdillingham: and you would only need to run "rbd mirror image resync" if you did a "rbd mirror image promote --force" for non-orderly failover
[21:45] <jdillaman> wwdillingham: in your case -- creating journaled images in the primary cluster should just result in backup images appearing and being synced to your backup cluster w/o operator intervention
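jdillaman's two failover paths can be sketched as follows. The cluster names and pool/image name are assumptions; DRY_RUN=1 only prints the commands.

```shell
#!/bin/sh
# Hedged sketch of the two failover modes described above.
# Cluster names and the pool/image name are assumptions.
IMG=ox60_root_disk/vm-1
DRY_RUN=${DRY_RUN:-1}

run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# orderly failover: demote the primary first, then promote the backup
run rbd --cluster primary mirror image demote  "$IMG"
run rbd --cluster bu      mirror image promote "$IMG"

# non-orderly failover (primary cluster is gone): force the promotion,
# then demote + resync the old primary once it returns to resolve the
# resulting split-brain
run rbd --cluster bu mirror image promote --force "$IMG"
run rbd --cluster primary mirror image demote "$IMG"
run rbd --cluster primary mirror image resync "$IMG"
```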
[21:47] <wwdillingham> jdillaman: Thanks, that's very helpful
[21:47] <jdillaman> wwdillingham: thank you for testing it out
[21:48] <wwdillingham> jdillaman: so, can a split brain be resolved, can i just tell the system to do a full sync
[21:48] <wwdillingham> tell the bu cluster to forget about whatever its hung up on and just attempt to pull down the image as it exists on the primary?
[21:48] * Hemanth is now known as kumar
[21:48] * mykola (~Mikolaj@91.245.73.44) Quit (Quit: away)
[21:49] <wwdillingham> jdillaman: the rbd-mirroring dropped at a perfect time, i was putting together some janky export import diff thing and then the clouds parted and rbd-mirror shone down upon me
[21:49] <jdillaman> wwdillingham: haha
[21:50] <th0m_> hi guys
[21:51] <th0m_> All monitor on my production cluster crash since this morning
[21:51] <th0m_> Pastebin here http://pastebin.com/TQf4FECp
[21:51] <jdillaman> wwdillingham: to recover from this, just force promote the images in your backup cluster and delete the images from the backup cluster
[21:51] <jdillaman> wwdillingham: also, for your use case, you won't need to run rbd-mirror in your primary cluster since you don't want to pull updates from your backup cluster
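The recovery jdillaman describes, sketched for one image (cluster and pool/image names are assumptions; DRY_RUN=1 only prints the commands):

```shell
#!/bin/sh
# Hedged sketch: recover a split-brained backup image by force-promoting it
# so it becomes deletable, then removing it; rbd-mirror re-creates and
# re-syncs a fresh non-primary copy from the primary cluster.
# Cluster and pool/image names are assumptions.
IMG=ox60_root_disk/vm-1
DRY_RUN=${DRY_RUN:-1}

run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run rbd --cluster bu mirror image promote --force "$IMG"  # make it deletable
run rbd --cluster bu rm "$IMG"                            # drop the stale copy
# rbd-mirror on the bu cluster now bootstraps a fresh non-primary image
```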
[21:51] <th0m_> anyone can help me please?
[21:52] * sudocat1 (~dibarra@192.185.1.20) has joined #ceph
[21:54] <wwdillingham> jdillaman: why would i promote them in the backup cluster and then delete them?
[21:54] <oliveiradan> Hey Guys, quick question... have you seen "rbd: sysfs write failed: (6) No such device or address"? this is based on jewel (10.2) and a virtualized environment (KVM).
[21:55] <jdillaman> wwdillingham: i'm pretty sure the remove op will fail if you attempt it on the backup since it's a non-primary image
[21:55] <oliveiradan> it used to work before... that's the interesting part.
[21:56] <oliveiradan> I can create the rbd image, but cannot map it.
[21:56] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[21:56] <wwdillingham> jdillaman: so, curious, will it allow me to promote the non-primary to a primary if there already is a primary on my "primary cluster"?
[21:57] <jdillaman> oliveiradan: with Jewel -- new rbd images have features enabled by default that are not supported by krbd
[21:57] <wwdillingham> and would i run the risk then of data from my backup rbd device propagating "upstream" to my primary rbd device?
[21:57] <jdillaman> oliveiradan: https://github.com/ceph/ceph/blob/master/doc/release-notes.rst#L200
[21:58] <jdillaman> wwdillingham: the primary cluster's rbd-mirror daemon will detect the split brain and not do anything
[21:58] <wwdillingham> oliveiradan: from what i understand the rbd kernel driver doesnt work in jewel
[21:58] <oliveiradan> jdillaman: Shouldn't the default behavior/features be preserved on how they used to work, so customers wouldn't face the same problem?
[21:58] <jdillaman> ... but you can just turn off the primary rbd-mirror daemon since you don't want to use it anyway
[21:59] <wwdillingham> jdillaman: so the rbd-mirror daemon on the bu cluster is all that is needed ?
[21:59] <jdillaman> wwdillingham: yeah
[21:59] <jdillaman> (for your use-case)
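A minimal sketch of the resulting one-way setup (pool, cluster, and client names are placeholders; assumes Jewel's per-image journaling mode):

```shell
# Enable per-image (journal-based) mirroring on the pool in both clusters.
rbd --cluster primary mirror pool enable mypool image
rbd --cluster backup  mirror pool enable mypool image

# Tell the backup pool where to pull from: register the primary as a peer.
rbd --cluster backup mirror pool peer add mypool client.admin@primary

# Run the rbd-mirror daemon only on the backup side; replication is
# pull-based, so nothing needs to run on the primary for this use case.
rbd-mirror --cluster backup
```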
[22:00] <wwdillingham> is there a reason you suggest i run the daemon on the bu cluster over the primary cluster? (i could run it in both places, i presume, since it would be authed to r/w from both clusters)
[22:01] * erwan_taf (~erwan@37.160.76.112) Quit (Ping timeout: 480 seconds)
[22:01] <jdillaman> oliveiradan: the kernel is always behind, so that is a hard rule to use. instead, we documented how to revert to the old behavior and provided an easy way to fix it if you created images with the new features enabled. the same issues can happen if you enable optimal tunables w/ older kernels.
[22:02] * sudocat1 (~dibarra@192.185.1.20) Quit (Quit: Leaving.)
[22:02] <jdillaman> oliveiradan: there is active development to get the kernel back up to librbd feature capabilities
[22:02] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[22:03] <oliveiradan> jdillaman: It does make sense! thanks a lot. I will give it a try and see if I can get it to work then.
[22:04] * cmorandin (~cmorandin@boc06-4-78-216-15-170.fbx.proxad.net) has joined #ceph
[22:06] <wwdillingham> jdillaman: thanks again for the help
[22:07] <jdillaman> np
[22:07] <wwdillingham> ill try and generate some logs for you if you are inclined to have a look
[22:07] * csoukup (~csoukup@2605:a601:9c8:6b00:82a:4075:413f:7786) has joined #ceph
[22:07] <jdillaman> wwdillingham: i think it was the resync so i think i'm all set
[22:09] * drnexus (~cmorandin@boc06-4-78-216-15-170.fbx.proxad.net) has joined #ceph
[22:09] <wwdillingham> sounds good
[22:10] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[22:11] * erwan_taf (~erwan@37.161.38.139) has joined #ceph
[22:11] * Skaag (~lunix@rrcs-67-52-140-5.west.biz.rr.com) Quit (Quit: Leaving.)
[22:11] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[22:14] * ggg (~Keiya@93.115.95.202) has joined #ceph
[22:17] * ricin (~ghostnote@edwardsnowden0.torservers.net) has joined #ceph
[22:17] <blizzow> Is there a way to force a re-check of clock skew? ceph -s is returning that one of my mons has clock skew
[22:17] <blizzow> I restarted the ntp service on it, so I'm pretty sure they're all in sync
[22:18] <wwdillingham> blizzow: restart the ceph daemon on the skewed host
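wwdillingham's suggestion as a sketch (the systemd unit name varies by release and distro; assumes a hostname-keyed mon unit):

```shell
# With ntp already back in sync, bounce the monitor on the skewed host
# so the clock-skew check re-runs when it rejoins quorum.
sudo systemctl restart ceph-mon@$(hostname -s)

# Verify: health detail names the skewed mon and the measured offset.
ceph -s
ceph health detail
```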
[22:21] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[22:23] <blizzow> wwdillingham: Thanks.
[22:29] * allaok (~allaok@ARennes-658-1-7-90.w83-199.abo.wanadoo.fr) has joined #ceph
[22:31] * allaok (~allaok@ARennes-658-1-7-90.w83-199.abo.wanadoo.fr) has left #ceph
[22:35] * kumar (~hkumar_@103.228.221.191) Quit (Ping timeout: 480 seconds)
[22:37] * dvanders (~dvanders@46.227.20.178) has joined #ceph
[22:39] * Miouge (~Miouge@188.189.93.29) Quit (Ping timeout: 480 seconds)
[22:44] * ggg (~Keiya@4MJAAENG2.tor-irc.dnsbl.oftc.net) Quit ()
[22:44] * Jase (~homosaur@4MJAAENH1.tor-irc.dnsbl.oftc.net) has joined #ceph
[22:47] * ricin (~ghostnote@76GAAE1B4.tor-irc.dnsbl.oftc.net) Quit ()
[22:47] * tallest_red (~Peaced@edwardsnowden2.torservers.net) has joined #ceph
[22:50] * puffy (~puffy@c-71-198-18-187.hsd1.ca.comcast.net) has joined #ceph
[22:51] * Skaag (~lunix@rrcs-67-52-140-5.west.biz.rr.com) has joined #ceph
[22:56] * wwdillingham (~LobsterRo@65.112.8.202) Quit (Quit: wwdillingham)
[23:03] * analbeard (~shw@host109-157-136-215.range109-157.btcentralplus.com) has joined #ceph
[23:03] * georgem (~Adium@24.114.52.151) has joined #ceph
[23:05] <analbeard> hi guys, what's the best way to find all the pgs for a given pool?
[23:05] <analbeard> or rather, the pgs for all the objects in a pool
[23:06] * jclm (~jclm@72.143.50.126) Quit (Quit: Leaving.)
[23:14] * Jase (~homosaur@4MJAAENH1.tor-irc.dnsbl.oftc.net) Quit ()
[23:17] * tallest_red (~Peaced@7V7AAEAYQ.tor-irc.dnsbl.oftc.net) Quit ()
[23:19] * drnexus (~cmorandin@boc06-4-78-216-15-170.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[23:19] * cmorandin (~cmorandin@boc06-4-78-216-15-170.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[23:20] <georgem> analbeard: "ceph pg dump" -> the first column is the pg, the first number in the pg is the pool index
[23:21] <georgem> analbeard: and "ceph osd lspools" tells you the pool index for each pool
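Putting georgem's two commands together (pool id 1 and the pool name are placeholders):

```shell
# Find the pool's numeric id:
ceph osd lspools                 # e.g. "0 rbd,1 mypool,"

# PG ids have the form <pool-id>.<seq>, so filter the dump on the prefix:
ceph pg dump pgs_brief 2>/dev/null | awk '$1 ~ /^1\./ {print $1}'

# If your release has it, there is also a direct form:
ceph pg ls-by-pool mypool
```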
[23:21] * haplo37 (~haplo37@199.91.185.156) Quit (Remote host closed the connection)
[23:25] * dvanders (~dvanders@46.227.20.178) Quit (Ping timeout: 480 seconds)
[23:25] * jclm (~jclm@marriott-hotel-ottawa-yowmc.sites.intello.com) has joined #ceph
[23:27] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) Quit (Ping timeout: 480 seconds)
[23:28] * shohn (~shohn@dslb-088-073-137-088.088.073.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[23:28] * shohn (~shohn@dslb-088-073-137-088.088.073.pools.vodafone-ip.de) has joined #ceph
[23:37] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) has joined #ceph
[23:41] * georgem (~Adium@24.114.52.151) Quit (Ping timeout: 480 seconds)
[23:47] * Kyso_ (~EdGruberm@chomsky.torservers.net) has joined #ceph
[23:49] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) Quit (Remote host closed the connection)
[23:52] <analbeard> @georgem: thanks, I had sorta fudged something with that but I wasn't sure if there was a cleaner way to achieve the same thing
[23:59] * puffy (~puffy@c-71-198-18-187.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.