#ceph IRC Log

Index

IRC Log for 2016-06-22

Timestamps are in GMT/BST.

[0:00] * squizzi (~squizzi@107.13.31.195) Quit (Ping timeout: 480 seconds)
[0:00] * DJComet (~Kurimus@marylou.nos-oignons.net) has joined #ceph
[0:03] * Monello_xxx (~wiycfddbp@185.10.189.138) has joined #ceph
[0:05] * mattbenjamin (~mbenjamin@12.118.3.106) Quit (Ping timeout: 480 seconds)
[0:06] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[0:08] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[0:10] * ventifus (~ventifus@2604:b500:a:15:18d9:636c:42c3:869b) has joined #ceph
[0:12] * rendar (~I@host58-75-dynamic.0-87-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[0:13] * dgurtner (~dgurtner@host86-146-94-84.range86-146.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[0:14] * sleinen (~Adium@2001:620:0:82::101) Quit (Ping timeout: 480 seconds)
[0:16] * whatevsz (~quassel@185.22.140.109) Quit (Ping timeout: 480 seconds)
[0:17] * allaok (~allaok@ARennes-658-1-132-195.w90-32.abo.wanadoo.fr) Quit (Quit: Leaving.)
[0:21] * Monello_xxx (~wiycfddbp@185.10.189.138) Quit (Quit: Ciao)
[0:26] * davidzlap (~Adium@2605:e000:1313:8003:70ec:82ae:55ea:933b) has joined #ceph
[0:26] * davidzlap1 (~Adium@2605:e000:1313:8003:70ec:82ae:55ea:933b) Quit (Read error: Connection reset by peer)
[0:27] * saintpablo (~saintpabl@gw01.mhitp.dk) has joined #ceph
[0:30] * bene (~bene@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[0:30] * DJComet (~Kurimus@06SAAEBUI.tor-irc.dnsbl.oftc.net) Quit ()
[0:30] * offender (~Nanobot@tor.idolf.dk) has joined #ceph
[0:31] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) Quit (Quit: bye)
[0:34] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[0:34] * saintpablos (~saintpabl@185.85.5.78) has joined #ceph
[0:35] * ivancich_ (~ivancich@12.118.3.106) has joined #ceph
[0:36] * ivancich (~ivancich@12.118.3.106) Quit (Read error: Connection reset by peer)
[0:38] * cathode (~cathode@50.232.215.114) has joined #ceph
[0:40] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[0:42] * saintpablo (~saintpabl@gw01.mhitp.dk) Quit (Ping timeout: 480 seconds)
[0:43] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[0:48] * ivancich (~ivancich@12.118.3.106) has joined #ceph
[0:49] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[0:51] * mhack (~mhack@24-151-36-149.dhcp.nwtn.ct.charter.com) Quit (Remote host closed the connection)
[0:52] * ivancich_ (~ivancich@12.118.3.106) Quit (Quit: ivancich_)
[0:56] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:959c:fd25:b144:9b3c) has joined #ceph
[0:58] * saintpablos (~saintpabl@185.85.5.78) Quit (Ping timeout: 480 seconds)
[1:00] * offender (~Nanobot@7V7AAGJGJ.tor-irc.dnsbl.oftc.net) Quit ()
[1:00] * Hazmat (~storage@06SAAEBW3.tor-irc.dnsbl.oftc.net) has joined #ceph
[1:01] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[1:01] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[1:03] * DLX (~owner@mail.tw.co.nz) has joined #ceph
[1:09] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:959c:fd25:b144:9b3c) Quit (Ping timeout: 480 seconds)
[1:11] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[1:11] * infernix (nix@000120cb.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:11] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Read error: Connection reset by peer)
[1:13] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[1:13] * DLX (~owner@mail.tw.co.nz) Quit (Quit: Konversation terminated!)
[1:14] * davidzlap1 (~Adium@2605:e000:1313:8003:70ec:82ae:55ea:933b) has joined #ceph
[1:15] * infernix (nix@2001:41f0::2) has joined #ceph
[1:15] * davidzlap (~Adium@2605:e000:1313:8003:70ec:82ae:55ea:933b) Quit (Read error: Connection reset by peer)
[1:15] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Ping timeout: 480 seconds)
[1:16] * johnavp19891 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[1:16] <- *johnavp19891* To prove that you are human, please enter the result of 8+3
[1:21] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[1:26] * cathode (~cathode@50.232.215.114) Quit (Quit: Leaving)
[1:29] * fsimonce (~simon@host107-37-dynamic.251-95-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[1:30] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[1:30] * Mattress (~Peaced@torrelay6.tomhek.net) has joined #ceph
[1:31] * Hazmat (~storage@06SAAEBW3.tor-irc.dnsbl.oftc.net) Quit ()
[1:33] * sudocat (~dibarra@192.185.1.20) Quit (Quit: Leaving.)
[1:37] * krypto (~krypto@G68-90-105-114.sbcis.sbc.com) Quit (Read error: Connection reset by peer)
[1:45] * brians (~brian@80.111.114.175) Quit (Quit: Textual IRC Client: www.textualapp.com)
[1:48] * brians (~brian@80.111.114.175) has joined #ceph
[1:49] * brians (~brian@80.111.114.175) Quit (Max SendQ exceeded)
[1:49] * brians (~brian@80.111.114.175) has joined #ceph
[1:51] * rdias (~rdias@2001:8a0:749a:d01:152d:c8f2:7994:6ee0) Quit (Ping timeout: 480 seconds)
[1:53] * davidzlap (~Adium@2605:e000:1313:8003:70ec:82ae:55ea:933b) has joined #ceph
[1:53] * davidzlap1 (~Adium@2605:e000:1313:8003:70ec:82ae:55ea:933b) Quit (Read error: Connection reset by peer)
[1:57] * davidzlap1 (~Adium@2605:e000:1313:8003:70ec:82ae:55ea:933b) has joined #ceph
[1:57] * davidzlap (~Adium@2605:e000:1313:8003:70ec:82ae:55ea:933b) Quit (Read error: Connection reset by peer)
[1:57] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) has joined #ceph
[2:00] * Mattress (~Peaced@06SAAEBXZ.tor-irc.dnsbl.oftc.net) Quit ()
[2:00] * AluAlu (~cooey@192.42.116.16) has joined #ceph
[2:02] * jermudgeon (~jhaustin@wpc-pe-l2.whitestone.link) Quit (Quit: jermudgeon)
[2:10] * penguinRaider_ (~KiKo@14.139.82.6) Quit (Quit: Leaving)
[2:12] * dnunez (~dnunez@c-73-38-0-185.hsd1.ma.comcast.net) Quit (Read error: Connection timed out)
[2:14] * dnunez (~dnunez@c-73-38-0-185.hsd1.ma.comcast.net) has joined #ceph
[2:16] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[2:17] * valeech (~valeech@wsip-70-166-79-23.ga.at.cox.net) Quit (Quit: valeech)
[2:18] * Brochacho (~alberto@c-73-45-127-198.hsd1.il.comcast.net) Quit (Quit: Brochacho)
[2:18] * Skaag (~lunix@65.200.54.234) Quit (Quit: Leaving.)
[2:19] * ventifus (~ventifus@2604:b500:a:15:18d9:636c:42c3:869b) Quit (Quit: Leaving)
[2:27] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[2:27] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) Quit (Ping timeout: 480 seconds)
[2:28] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[2:29] * xarses (~xarses@64.124.158.100) Quit (Ping timeout: 480 seconds)
[2:30] * AluAlu (~cooey@06SAAEBY3.tor-irc.dnsbl.oftc.net) Quit ()
[2:35] * ItsCriminalAFK (~measter@tor-exit.eecs.umich.edu) has joined #ceph
[2:43] * rdias (~rdias@2001:8a0:749a:d01:152d:c8f2:7994:6ee0) has joined #ceph
[2:49] * jdillaman (~jdillaman@pool-108-18-97-95.washdc.fios.verizon.net) has joined #ceph
[2:50] * dougf_ (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) has joined #ceph
[2:51] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) Quit (Ping timeout: 480 seconds)
[3:01] * johnavp19891 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[3:01] * rdias (~rdias@2001:8a0:749a:d01:152d:c8f2:7994:6ee0) Quit (Ping timeout: 480 seconds)
[3:02] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[3:03] * Nicola-1_ (~Nicola-19@x4db42015.dyn.telefonica.de) has joined #ceph
[3:05] * ItsCriminalAFK (~measter@4MJAAGUJ0.tor-irc.dnsbl.oftc.net) Quit ()
[3:05] * Azru (~Plesioth@163.172.158.208) has joined #ceph
[3:07] * antongribok (~antongrib@216.207.42.140) Quit (Quit: Leaving...)
[3:09] * Nicola-1980 (~Nicola-19@x55b31f16.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[3:13] * dnunez (~dnunez@c-73-38-0-185.hsd1.ma.comcast.net) Quit (Read error: Connection timed out)
[3:14] * dnunez (~dnunez@c-73-38-0-185.hsd1.ma.comcast.net) has joined #ceph
[3:22] * rdias (~rdias@2001:8a0:749a:d01:152d:c8f2:7994:6ee0) has joined #ceph
[3:24] * dnunez (~dnunez@c-73-38-0-185.hsd1.ma.comcast.net) Quit (Quit: Leaving)
[3:26] * jinxing (~jinxing@58.247.117.134) has joined #ceph
[3:26] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Quit: jermudgeon)
[3:35] * Azru (~Plesioth@06SAAEB1T.tor-irc.dnsbl.oftc.net) Quit ()
[3:35] * ItsCriminalAFK (~bildramer@185.36.100.145) has joined #ceph
[3:35] * haomaiwang (~haomaiwan@li1068-35.members.linode.com) Quit (Remote host closed the connection)
[3:36] * haomaiwang (~haomaiwan@li1068-35.members.linode.com) has joined #ceph
[3:40] * sebastian-w (~quassel@212.218.8.138) Quit (Read error: Connection reset by peer)
[3:40] * sebastian-w (~quassel@212.218.8.138) has joined #ceph
[3:41] * winston-d_ (uid98317@id-98317.richmond.irccloud.com) has joined #ceph
[3:43] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) has joined #ceph
[3:44] * haomaiwang (~haomaiwan@li1068-35.members.linode.com) Quit (Ping timeout: 480 seconds)
[3:46] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) Quit (Ping timeout: 480 seconds)
[3:46] * yanzheng1 (~zhyan@125.70.21.146) has joined #ceph
[3:50] * EinstCrazy (~EinstCraz@106.120.121.78) has joined #ceph
[3:55] * linuxkidd (~linuxkidd@134.sub-70-210-193.myvzw.com) has joined #ceph
[3:56] * rdias (~rdias@2001:8a0:749a:d01:152d:c8f2:7994:6ee0) Quit (Ping timeout: 480 seconds)
[4:04] * shyu (~Frank@218.241.172.114) has joined #ceph
[4:04] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[4:05] * ItsCriminalAFK (~bildramer@7V7AAGJNW.tor-irc.dnsbl.oftc.net) Quit ()
[4:08] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[4:09] * Mraedis (~Oddtwang@politkovskaja.torservers.net) has joined #ceph
[4:13] * haomaiwang (~haomaiwan@li1068-35.members.linode.com) has joined #ceph
[4:14] * wjw-freebsd (~wjw@176.74.240.9) Quit (Ping timeout: 480 seconds)
[4:15] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[4:28] * rdias (~rdias@2001:8a0:749a:d01:152d:c8f2:7994:6ee0) has joined #ceph
[4:29] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Ping timeout: 480 seconds)
[4:36] * davidzlap1 (~Adium@2605:e000:1313:8003:70ec:82ae:55ea:933b) Quit (Read error: Connection reset by peer)
[4:36] * davidzlap (~Adium@2605:e000:1313:8003:70ec:82ae:55ea:933b) has joined #ceph
[4:37] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) has joined #ceph
[4:39] * Mraedis (~Oddtwang@06SAAEB34.tor-irc.dnsbl.oftc.net) Quit ()
[4:39] * Grum (~cheese^@06SAAEB5A.tor-irc.dnsbl.oftc.net) has joined #ceph
[4:48] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[4:49] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[4:51] * rdias (~rdias@2001:8a0:749a:d01:152d:c8f2:7994:6ee0) Quit (Ping timeout: 480 seconds)
[4:52] * rdias (~rdias@bl7-92-98.dsl.telepac.pt) has joined #ceph
[4:53] * jinxing (~jinxing@58.247.117.134) Quit (Quit: jinxing)
[4:57] * jinxing (~jinxing@58.247.119.250) has joined #ceph
[4:59] * linuxkidd (~linuxkidd@134.sub-70-210-193.myvzw.com) Quit (Quit: Leaving)
[5:00] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has left #ceph
[5:04] * ibravo (~ibravo@72.198.142.104) Quit (Quit: Leaving)
[5:09] * Grum (~cheese^@06SAAEB5A.tor-irc.dnsbl.oftc.net) Quit ()
[5:13] * sleinen (~Adium@2001:620:0:82::100) has joined #ceph
[5:18] * Racpatel (~Racpatel@2601:641:200:4c30:4e34:88ff:fe87:9abf) Quit (Ping timeout: 480 seconds)
[5:21] * IvanJobs_ (~ivanjobs@103.50.11.146) has joined #ceph
[5:28] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Ping timeout: 480 seconds)
[5:33] * sleinen (~Adium@2001:620:0:82::100) Quit (Ping timeout: 480 seconds)
[5:39] * SEBI1 (~Thononain@192.42.116.16) has joined #ceph
[5:50] * jermudgeon (~jhaustin@tab.mdu.whitestone.link) has joined #ceph
[5:52] * Vacuum__ (~Vacuum@88.130.221.106) has joined #ceph
[5:53] * jermudgeon_ (~jhaustin@tab.mdu.whitestone.link) has joined #ceph
[5:58] * jermudgeon (~jhaustin@tab.mdu.whitestone.link) Quit (Ping timeout: 480 seconds)
[5:59] * Vacuum_ (~Vacuum@88.130.192.29) Quit (Ping timeout: 480 seconds)
[5:59] * jermudgeon_ is now known as jermudgeon
[6:01] * jinxing (~jinxing@58.247.119.250) Quit (Quit: jinxing)
[6:09] * SEBI1 (~Thononain@7V7AAGJSB.tor-irc.dnsbl.oftc.net) Quit ()
[6:09] * EinstCrazy (~EinstCraz@106.120.121.78) Quit (Remote host closed the connection)
[6:09] * tritonx (~Tumm@broadband-77-37-218-145.nationalcablenetworks.ru) has joined #ceph
[6:14] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[6:17] * jermudgeon (~jhaustin@tab.mdu.whitestone.link) Quit (Ping timeout: 480 seconds)
[6:17] * IvanJobs_ (~ivanjobs@103.50.11.146) Quit (Remote host closed the connection)
[6:20] * tobiash (~quassel@212.118.206.70) has joined #ceph
[6:20] * tobiash_ (~quassel@212.118.206.70) Quit (Ping timeout: 480 seconds)
[6:28] * shyu (~Frank@218.241.172.114) Quit (Ping timeout: 480 seconds)
[6:28] * jermudgeon (~jhaustin@tab.mdu.whitestone.link) has joined #ceph
[6:31] * deepthi (~deepthi@115.118.209.38) has joined #ceph
[6:35] * squ (~Thunderbi@00020d26.user.oftc.net) has joined #ceph
[6:35] * swami1 (~swami@49.32.0.228) has joined #ceph
[6:35] * jermudgeon (~jhaustin@tab.mdu.whitestone.link) Quit (Read error: No route to host)
[6:36] * kawa2014 (~kawa@5.87.252.222) has joined #ceph
[6:39] * tritonx (~Tumm@4MJAAGUTE.tor-irc.dnsbl.oftc.net) Quit ()
[6:39] * sleinen (~Adium@80-254-69-63.dynamic.monzoon.net) has joined #ceph
[6:41] * sleinen1 (~Adium@2001:620:0:82::101) has joined #ceph
[6:41] * jermudgeon (~jhaustin@tab.mdu.whitestone.link) has joined #ceph
[6:44] * jinxing (~jinxing@58.247.117.134) has joined #ceph
[6:47] * sleinen (~Adium@80-254-69-63.dynamic.monzoon.net) Quit (Ping timeout: 480 seconds)
[6:54] * rdias (~rdias@bl7-92-98.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[6:56] * davidzlap1 (~Adium@2605:e000:1313:8003:70ec:82ae:55ea:933b) has joined #ceph
[6:57] * sleinen1 (~Adium@2001:620:0:82::101) Quit (Ping timeout: 480 seconds)
[6:57] * davidzlap (~Adium@2605:e000:1313:8003:70ec:82ae:55ea:933b) Quit (Read error: Connection reset by peer)
[6:58] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Quit: Leaving)
[6:58] * rdias (~rdias@bl7-92-98.dsl.telepac.pt) has joined #ceph
[6:58] * jermudgeon (~jhaustin@tab.mdu.whitestone.link) Quit (Ping timeout: 480 seconds)
[7:03] * Racpatel (~Racpatel@2601:641:200:4c30:4e34:88ff:fe87:9abf) has joined #ceph
[7:03] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[7:07] * jermudgeon (~jhaustin@tab.biz.whitestone.link) has joined #ceph
[7:09] * Redshift (~Averad@93.115.95.206) has joined #ceph
[7:15] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[7:16] * reed (~reed@142-254-30-170.dsl.dynamic.fusionbroadband.com) Quit (Quit: Ex-Chat)
[7:17] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[7:18] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) has joined #ceph
[7:21] * gauravbafna (~gauravbaf@49.32.0.202) has joined #ceph
[7:22] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[7:26] * Racpatel (~Racpatel@2601:641:200:4c30:4e34:88ff:fe87:9abf) Quit (Ping timeout: 480 seconds)
[7:39] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[7:39] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[7:39] * Redshift (~Averad@4MJAAGUU9.tor-irc.dnsbl.oftc.net) Quit ()
[7:39] * Kyso (~drupal@relay1.tor.openinternet.io) has joined #ceph
[7:54] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[8:00] * linjan_ (~linjan@176.195.252.240) Quit (Ping timeout: 480 seconds)
[8:05] * micw (~micw@p50992bfa.dip0.t-ipconnect.de) has joined #ceph
[8:05] <micw> hi
[8:05] * nathani (~nathani@2607:f2f8:ac88::) Quit (Quit: WeeChat 1.4)
[8:06] <micw> how can i tune cephfs for metadata-heavy operations (like running rsync against it)?
[8:09] * Kyso (~drupal@06SAAECB5.tor-irc.dnsbl.oftc.net) Quit ()
[8:11] * EinstCrazy (~EinstCraz@114.111.167.229) has joined #ceph
[8:13] * rdas (~rdas@121.244.87.116) has joined #ceph
[8:13] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[8:13] * liumxnl (~liumxnl@45.32.74.135) has joined #ceph
[8:13] * liumxnl (~liumxnl@45.32.74.135) Quit (Remote host closed the connection)
[8:15] * davidzlap (~Adium@2605:e000:1313:8003:70ec:82ae:55ea:933b) has joined #ceph
[8:16] * liumxnl (~liumxnl@45.32.74.135) has joined #ceph
[8:16] * liumxnl (~liumxnl@45.32.74.135) Quit (Remote host closed the connection)
[8:16] * liumxnl (~liumxnl@45.32.74.135) has joined #ceph
[8:16] * liumxnl (~liumxnl@45.32.74.135) Quit (Remote host closed the connection)
[8:17] * davidzlap1 (~Adium@2605:e000:1313:8003:70ec:82ae:55ea:933b) Quit (Read error: Connection reset by peer)
[8:18] * liumxnl (~liumxnl@45.32.74.135) has joined #ceph
[8:18] * liumxnl (~liumxnl@45.32.74.135) Quit (Remote host closed the connection)
[8:19] * liumxnl (~liumxnl@45.32.74.135) has joined #ceph
[8:19] * liumxnl (~liumxnl@45.32.74.135) Quit (Remote host closed the connection)
[8:19] * liumxnl (~liumxnl@45.32.74.135) has joined #ceph
[8:19] * liumxnl (~liumxnl@45.32.74.135) Quit (Remote host closed the connection)
[8:21] * nathani (~nathani@2607:f2f8:ac88::) has joined #ceph
[8:23] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[8:24] * rdias (~rdias@bl7-92-98.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[8:33] * rdias (~rdias@bl7-92-98.dsl.telepac.pt) has joined #ceph
[8:34] <krogon> need some help, how to switch these mds servers from standby to active: https://gist.github.com/krogon-intel/a4a01baa7e28a5a7b9128ac1853e671e
[8:35] * briner (~briner@129.194.16.54) has joined #ceph
[8:35] * dec (~dec@223.119.197.104.bc.googleusercontent.com) Quit (Read error: Connection reset by peer)
[8:38] * squ (~Thunderbi@00020d26.user.oftc.net) Quit (Read error: Connection reset by peer)
[8:38] * ade (~abradshaw@dslb-092-078-139-021.092.078.pools.vodafone-ip.de) has joined #ceph
[8:39] * squ (~Thunderbi@00020d26.user.oftc.net) has joined #ceph
[8:40] * davidzlap1 (~Adium@2605:e000:1313:8003:70ec:82ae:55ea:933b) has joined #ceph
[8:40] * davidzlap (~Adium@2605:e000:1313:8003:70ec:82ae:55ea:933b) Quit (Ping timeout: 480 seconds)
[8:40] * tpetr (~tpetr@nat-pool-brq-t.redhat.com) has joined #ceph
[8:41] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Quit: Leaving...)
[8:41] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[8:44] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Remote host closed the connection)
[8:44] * sese_ (~ricin@67.ip-92-222-38.eu) has joined #ceph
[8:47] <ronrib> krogon: have you created a fs?
[8:47] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[8:47] <ronrib> micw: cephfs doesn't seem to have many tuning options yet
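One knob that did exist around this time is the MDS inode cache size. A minimal sketch, assuming a single MDS named mds.0; the daemon name and the value are illustrative assumptions, not from the log:

```shell
# Raise the MDS inode cache at runtime (the default in this era was 100000).
# "mds.0" and the value 500000 are hypothetical examples.
ceph tell mds.0 injectargs '--mds-cache-size 500000'
# To persist across restarts, add to ceph.conf under [mds]:
#   mds cache size = 500000
```

A larger inode cache mainly helps metadata-heavy workloads like the rsync runs micw describes, at the cost of MDS memory.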
[8:48] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[8:48] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit ()
[8:50] * post-factum (~post-fact@vulcan.natalenko.name) Quit (Read error: Connection reset by peer)
[8:50] <krogon> I have the fs already created and it worked before the outage
[8:50] <micw> i guess most operations run on the metadata pool and mds.
[8:50] <micw> i have 3 nodes, so i can switch the metadata pool's replication size between 2 and 3, and i can add more active mds
[8:51] <sep> micw, you can do things with the metadatapool tho. like place it on ssd
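sep's suggestion of placing the metadata pool on SSD can be sketched for a pre-Luminous release like this; it assumes an SSD-only CRUSH ruleset (id 1 here) already exists in the crush map, and that the pool is named cephfs_metadata — both are assumptions:

```shell
# Point the CephFS metadata pool at a hypothetical SSD-only CRUSH ruleset.
ceph osd pool set cephfs_metadata crush_ruleset 1
ceph osd pool get cephfs_metadata crush_ruleset   # verify the change took
```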
[8:51] <Be-El> krogon: did you try 'ceph mds cluster_up' already or reducing the number of active mds to one?
[8:51] <krogon> I got "mds rank 0 is damaged" and I cleared the error with ceph mds rmfailed
[8:51] <micw> how might this influence speed?
[8:51] <krogon> now all mds are standby
[8:52] <krogon> yes, I did cluster_down and cluster_up, and I also started the mds daemons sequentially, one after another
[8:55] <Be-El> krogon: which ceph release do you use?
[8:56] * rendar (~I@host183-125-dynamic.183-80-r.retail.telecomitalia.it) has joined #ceph
[8:57] <krogon> ceph version 9.2.1 (752b6a3020c3de74e07d2a8b4c5e48dab5a6b6fd)
[8:58] * post-factum (~post-fact@vulcan.natalenko.name) has joined #ceph
[8:58] <micw> krogon, i had the problem that i did not see some messages from daemons in the logs (especially when startup fails). So i started them manually with debug enabled. e.g. sudo -u ceph ceph-mds -i 0 --pid-file /var/run/ceph/mds.0.pid -c /etc/ceph/ceph.conf --cluster ceph --setuser ceph --setgroup ceph -d
[8:58] <Be-El> are there any useful error messages/warnings in the logs if you restart one mds?
[8:58] * davidzlap1 (~Adium@2605:e000:1313:8003:70ec:82ae:55ea:933b) Quit (Read error: Connection reset by peer)
[8:58] * davidzlap (~Adium@2605:e000:1313:8003:70ec:82ae:55ea:933b) has joined #ceph
[9:01] * EinstCrazy (~EinstCraz@114.111.167.229) Quit (Remote host closed the connection)
[9:02] * EinstCrazy (~EinstCraz@114.111.167.229) has joined #ceph
[9:02] <Be-El> krogon: http://www.spinics.net/lists/ceph-users/msg27825.html
[9:03] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[9:03] * dneary (~dneary@213.23.104.157) has joined #ceph
[9:04] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[9:04] * saintpablo (~saintpabl@gw01.mhitp.dk) has joined #ceph
[9:05] * analbeard (~shw@support.memset.com) has joined #ceph
[9:05] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[9:06] * lmb (~Lars@tmo-102-168.customers.d1-online.com) has joined #ceph
[9:10] * EinstCrazy (~EinstCraz@114.111.167.229) Quit (Ping timeout: 480 seconds)
[9:11] * EinstCrazy (~EinstCraz@114.111.167.229) has joined #ceph
[9:13] * rdias (~rdias@bl7-92-98.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[9:14] * sese_ (~ricin@06SAAECEF.tor-irc.dnsbl.oftc.net) Quit ()
[9:14] * Kyso_ (~maku@cry.ip-eend.nl) has joined #ceph
[9:14] * jcsp (~jspray@213.175.37.12) has joined #ceph
[9:15] * ircolle (~ircolle@nat-pool-brq-t.redhat.com) has joined #ceph
[9:17] * rdias (~rdias@bl7-92-98.dsl.telepac.pt) has joined #ceph
[9:19] <sep> anyone knows when the bonus techtalk from yesterday will be uploaded on youtube ?
[9:20] * dneary (~dneary@213.23.104.157) Quit (Ping timeout: 480 seconds)
[9:21] * thesix (~thesix@leifhelm.mur.at) Quit (Ping timeout: 480 seconds)
[9:21] * jclm (~jclm@86.188.165.219) Quit (Quit: Leaving.)
[9:22] * thesix (~thesix@leifhelm.mur.at) has joined #ceph
[9:29] * wjw-freebsd (~wjw@176.74.240.9) has joined #ceph
[9:31] * madkiss (~madkiss@tmo-110-181.customers.d1-online.com) has joined #ceph
[9:31] * wjw-freebsd (~wjw@176.74.240.9) Quit (Read error: Connection reset by peer)
[9:32] * Steppy (~dleeuw@143.121.192.183) has joined #ceph
[9:32] * Steppy (~dleeuw@143.121.192.183) has left #ceph
[9:35] * dugravot6 (~dugravot6@194.199.223.4) Quit (Ping timeout: 480 seconds)
[9:37] * wjw-freebsd (~wjw@176.74.240.9) has joined #ceph
[9:39] * lmb (~Lars@tmo-102-168.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[9:40] * fsimonce (~simon@host107-37-dynamic.251-95-r.retail.telecomitalia.it) has joined #ceph
[9:43] * Titin (~textual@ALyon-658-1-213-185.w90-14.abo.wanadoo.fr) has joined #ceph
[9:44] * Kyso_ (~maku@06SAAECFO.tor-irc.dnsbl.oftc.net) Quit ()
[9:44] * AG_Scott (~lmg@37.48.80.101) has joined #ceph
[9:50] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[9:51] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[9:52] * dneary (~dneary@62.214.2.210) has joined #ceph
[9:56] * linjan_ (~linjan@86.62.112.22) has joined #ceph
[9:56] * jermudgeon (~jhaustin@tab.biz.whitestone.link) Quit (Quit: jermudgeon)
[9:58] * allaok (~allaok@machine107.orange-labs.com) has left #ceph
[9:59] * DanFoster (~Daniel@office.34sp.com) has joined #ceph
[10:04] * flisky (~Thunderbi@106.38.61.190) has joined #ceph
[10:04] * madkiss (~madkiss@tmo-110-181.customers.d1-online.com) Quit (Quit: Leaving.)
[10:14] * AG_Scott (~lmg@7V7AAGJ1F.tor-irc.dnsbl.oftc.net) Quit ()
[10:14] * blank (~Jyron@tor-exit1-readme.dfri.se) has joined #ceph
[10:15] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[10:16] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[10:20] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:959c:fd25:b144:9b3c) has joined #ceph
[10:21] * dugravot6 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) has joined #ceph
[10:21] * smerz (~ircircirc@37.74.194.90) Quit (Quit: Leaving)
[10:23] <sep> rbd on erasure backed pool. i assume it's not possible to do much about the read performance, except migrate to replicated pools? am using it for veeam backup. and merging the oldest incremental into the old full backup is very slow. i assume none of them are in cache tier since they are the oldest files there is.
[10:31] * swami2 (~swami@49.44.57.236) has joined #ceph
[10:32] * micw (~micw@p50992bfa.dip0.t-ipconnect.de) Quit (Quit: Leaving)
[10:37] * swami1 (~swami@49.32.0.228) Quit (Ping timeout: 480 seconds)
[10:40] * KindOne_ (kindone@h99.17.40.69.dynamic.ip.windstream.net) has joined #ceph
[10:43] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) has joined #ceph
[10:44] * blank (~Jyron@4MJAAGU2L.tor-irc.dnsbl.oftc.net) Quit ()
[10:44] * EinstCrazy (~EinstCraz@114.111.167.229) Quit (Remote host closed the connection)
[10:44] * PeterRabbit (~Sigma@hessel0.torservers.net) has joined #ceph
[10:46] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[10:46] * KindOne_ is now known as KindOne
[10:48] * sleinen (~Adium@2001:620:0:82::104) has joined #ceph
[10:55] * shylesh (~shylesh@121.244.87.118) has joined #ceph
[10:56] * dneary (~dneary@62.214.2.210) Quit (Ping timeout: 480 seconds)
[10:57] * dneary (~dneary@62.214.2.210) has joined #ceph
[11:00] * briner (~briner@129.194.16.54) Quit (Quit: briner)
[11:00] * briner (~briner@129.194.16.54) has joined #ceph
[11:00] * sickology (~mio@vpn.bcs.hr) has joined #ceph
[11:02] * briner (~briner@129.194.16.54) Quit ()
[11:02] * deepthi (~deepthi@115.118.209.38) Quit (Ping timeout: 480 seconds)
[11:02] * briner (~briner@2001:620:600:1000:5d26:8eaa:97f0:8115) has joined #ceph
[11:04] * briner (~briner@2001:620:600:1000:5d26:8eaa:97f0:8115) Quit ()
[11:07] * sickolog1 (~mio@vpn.bcs.hr) Quit (Ping timeout: 481 seconds)
[11:08] * sleinen (~Adium@2001:620:0:82::104) Quit (Read error: Connection reset by peer)
[11:08] * sleinen (~Adium@2001:620:0:82::104) has joined #ceph
[11:09] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[11:10] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[11:12] * dneary (~dneary@62.214.2.210) Quit (Ping timeout: 480 seconds)
[11:14] * PeterRabbit (~Sigma@4MJAAGU3M.tor-irc.dnsbl.oftc.net) Quit ()
[11:15] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[11:18] * karnan (~karnan@121.244.87.117) has joined #ceph
[11:18] * NTTEC (~nttec@49.146.70.133) has joined #ceph
[11:19] * nardial (~ls@dslb-084-063-234-150.084.063.pools.vodafone-ip.de) has joined #ceph
[11:20] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[11:21] * micw (~micw@ip92346aec.dynamic.kabel-deutschland.de) has joined #ceph
[11:21] <micw> hi
[11:23] <micw> can i simply create a big ceph block device, format it ext4 and mount it as "local" big disk?
[11:24] <rotbeard> micw, of course
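A minimal sketch of what micw is asking about; the image name, size, and mountpoint are examples, not from the log:

```shell
# Create, map, format, and mount an RBD image as a "local" big disk.
rbd create bigdisk --size 1048576      # 1 TiB image (size is given in MiB)
sudo rbd map bigdisk                   # kernel client exposes it, e.g. /dev/rbd0
sudo mkfs.ext4 /dev/rbd0
sudo mount /dev/rbd0 /mnt/bigdisk
```

Note that this yields a single-client filesystem: unlike CephFS, an ext4-formatted RBD image must never be mounted by two hosts at once.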
[11:25] * NTTEC (~nttec@49.146.70.133) Quit (Remote host closed the connection)
[11:25] * NTTEC (~nttec@49.146.70.133) has joined #ceph
[11:28] * sleinen (~Adium@2001:620:0:82::104) Quit (Read error: Connection reset by peer)
[11:28] * sleinen (~Adium@2001:620:0:82::104) has joined #ceph
[11:30] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Remote host closed the connection)
[11:32] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[11:34] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[11:35] * dugravot6 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) Quit (Quit: Leaving.)
[11:36] <micw> would this better work with rsync workload than cephfs?
[11:41] * herrsergio (~herrsergi@00021432.user.oftc.net) Quit (Ping timeout: 480 seconds)
[11:41] * TMM (~hp@185.5.121.201) has joined #ceph
[11:44] * Nijikokun (~Silentkil@37.48.81.27) has joined #ceph
[11:45] * swami2 (~swami@49.44.57.236) Quit (Ping timeout: 480 seconds)
[11:46] * thomnico (~thomnico@2a01:e35:8b41:128::30) has joined #ceph
[11:56] * sleinen (~Adium@2001:620:0:82::104) Quit (Ping timeout: 480 seconds)
[11:57] * sleinen (~Adium@2001:620:0:82::104) has joined #ceph
[12:01] * thomnico (~thomnico@2a01:e35:8b41:128::30) Quit (Ping timeout: 480 seconds)
[12:02] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) Quit (Remote host closed the connection)
[12:02] * gauravba_ (~gauravbaf@49.32.28.201) has joined #ceph
[12:04] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[12:04] * swami1 (~swami@49.32.0.228) has joined #ceph
[12:06] * yibo (~YiboCai@101.230.208.200) Quit (Quit: Leaving)
[12:08] * lmb (~Lars@tmo-102-168.customers.d1-online.com) has joined #ceph
[12:09] * gauravbafna (~gauravbaf@49.32.0.202) Quit (Ping timeout: 480 seconds)
[12:11] * sleinen (~Adium@2001:620:0:82::104) Quit (Ping timeout: 480 seconds)
[12:11] * Titin (~textual@ALyon-658-1-213-185.w90-14.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[12:13] <post-factum> http://ceph.com/rpm-hammer/rhel7/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found
[12:13] <post-factum> is that ok for ceph repo?
[12:13] <Walex> micw: what you can do is a lot more than what may be sensible to do. But then most system administrators are "syntacticists"...
[12:13] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[12:14] * Nijikokun (~Silentkil@7V7AAGJ5B.tor-irc.dnsbl.oftc.net) Quit ()
[12:14] <Walex> micw: CephFS would probably be a lot better.
[12:15] <Walex> sep: "erasure backed pool" "merging the oldest incremental into the old full backup is very slow". I guess you are seeing massive write amplifications. That's part of the package.
[12:16] <Walex> sep, "erasure backed pool" does not mean "lots more space and nothing else gets worse" :-)
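The setup sep describes — a replicated cache tier fronting an erasure-coded base pool so RBD can use it — looks roughly like this in the jewel era; the pool names are hypothetical:

```shell
# Front an EC base pool with a replicated writeback cache tier for RBD use.
ceph osd tier add ec-backup cache-pool
ceph osd tier cache-mode cache-pool writeback
ceph osd tier set-overlay ec-backup cache-pool
# Reads that miss the cache tier still pay the EC reconstruction cost,
# which matches the slow merges of old (cold) backup files sep sees.
```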
[12:18] * nardial (~ls@dslb-084-063-234-150.084.063.pools.vodafone-ip.de) Quit (Quit: Leaving)
[12:19] <micw> Walex, better in performance?
[12:21] * t4nk960 (~oftc-webi@117.247.186.15) has joined #ceph
[12:21] <t4nk960> hi
[12:22] <zdzichu> is there a way to completely obliterate cephfs? I shut down MDSs, did "ceph fs rm", recreated the filesystem and cephfs-data-scan was able to recover old files
[12:22] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[12:22] <t4nk960> I cannot change access for containers with ceph-jewel
[12:22] <t4nk960> I am using radosgw as object storage in openstack liberty, with ceph jewel. Currently I can create public and private containers, but I cannot change the access of containers, i.e. change a public container to private and vice versa. There is a pop-up saying "Success: Successfully updated container access to public.", but the access does not change
[12:23] <t4nk960> I tried with ceph-infernalis, but couldn't recreate this with infernalis. Could this be a bug with ceph jewel?
[12:24] * dugravot6 (~dugravot6@l-p-dn-in-4a.lionnois.site.univ-lorraine.fr) has joined #ceph
[12:24] <Be-El> zdzichu: if you want to get rid of cephfs completely, you need to remove the data and metadata pool
[12:25] <Be-El> zdzichu: a MDS instance does not have any local state; everything is stored in the (meta)data pools
[12:27] <zdzichu> Be-El: well, data pool is used for other stuff, too
[12:27] <zdzichu> Be-El: removing metadata pool will make the file unrecoverable?
[12:27] <Be-El> zdzichu: in that case i would propose to mount cephfs and remove the files using rm
[12:28] * t4nk960 (~oftc-webi@117.247.186.15) Quit (Quit: Page closed)
[12:28] * t4nk460 (~oftc-webi@117.247.186.15) has joined #ceph
[12:28] * t4nk460 (~oftc-webi@117.247.186.15) Quit ()
[12:29] <Be-El> zdzichu: afaik parts of the filesystem structure are stored in the xattrs of the objects in the data pool, so a recovery using the rados cli might be possible
[12:30] * lmb (~Lars@tmo-102-168.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[12:30] <Be-El> zdzichu: if you are in control of the rados object names and are 100% absolutely sure that no other ceph-based application is using a similar pattern, you can try to remove the cephfs-related rados objects by their names. but i would definitely prefer to use standard file removal commands, even if it may take more time
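[Editor's note: the two cleanup options Be-El describes above can be sketched as shell commands. This is a hedged sketch, not the exact commands from the conversation; the filesystem name (cephfs), pool names (cephfs_data, cephfs_metadata), and mountpoint are assumptions.]

```shell
# Option 1: tear the filesystem down completely.
# Destroys both pools -- only safe if nothing else shares them.
ceph fs rm cephfs --yes-i-really-mean-it
ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it

# Option 2: the data pool is shared with other applications, so keep
# the pools and remove files through a normal mount instead.
rm -rf /mnt/cephfs/*
```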
[12:33] * wjw-freebsd (~wjw@176.74.240.9) Quit (Read error: Connection reset by peer)
[12:33] <zdzichu> right now I have a bunch of empty directories in my cephfs, using space, which rm -rf is unable to remove ("Directory not empty")
[12:33] * wjw-freebsd (~wjw@176.74.240.9) has joined #ceph
[12:34] <zdzichu> so I'm looking for a way to permanently remove this cephfs and recreate it
[12:35] <Be-El> are there any directory entries starting with a '.' or any running processes that still might have open filehandles for some files?
[12:35] <zdzichu> no and no
[12:36] <zdzichu> I'm using ceph-fuse at the moment
[12:36] <zdzichu> seems a bit more stable than kernel client
[12:36] <Be-El> but has its own problems, too... but that's a different story
[12:37] * squ (~Thunderbi@00020d26.user.oftc.net) Quit (Quit: squ)
[12:37] <Be-El> did you umount all cephfs mountpoints except one?
[12:37] <sep> Walex, of course. but the idea is to make it as good as possible, and see if it's workable or useless.
[12:38] <zdzichu> Be-El: yes, I had to restart two client stations because ceph-fuse hung, but now it is mounted only on one station, in one place
[12:38] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[12:38] <Be-El> zdzichu: and you cannot remove the directories after umount and remounting on that last station?
[12:38] <zdzichu> yes
[12:39] <Be-El> zdzichu: and umount, restart of mds server and remount also did not help?
[12:39] * wjw-freebsd (~wjw@176.74.240.9) Quit (Read error: Connection reset by peer)
[12:40] <zdzichu> well, mds restarts by itself every few seconds
[12:40] <zdzichu> but I will try to restart all of them
[12:40] <zdzichu> version is ceph-mds-10.2.2-1.fc25.x86_64, BTW
[12:40] <Be-El> stop all of them and only start the one with the lowest rank
[12:41] * wjw-freebsd (~wjw@176.74.240.9) has joined #ceph
[12:42] <Be-El> and increase the debug level for that mds to help debugging the actual problem
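[Editor's note: Be-El's advice to raise the debug level on the remaining MDS can be sketched as below. A hedged sketch; the daemon name (mds.a) is an assumption, substitute your own.]

```shell
# From any node with admin keyring: inject a higher mds debug level
# into the running daemon.
ceph tell mds.a injectargs '--debug-mds 20 --debug-ms 1'

# Or, on the MDS host itself, via the admin socket:
ceph daemon mds.a config set debug_mds 20
```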
[12:42] * kawa2014 (~kawa@5.87.252.222) Quit (Ping timeout: 480 seconds)
[12:43] * kawa2014 (~kawa@5.86.41.224) has joined #ceph
[12:48] * kalleeen (~Deiz@06SAAECNH.tor-irc.dnsbl.oftc.net) has joined #ceph
[12:52] * jinxing (~jinxing@58.247.117.134) Quit (Quit: jinxing)
[12:53] * ircolle (~ircolle@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[12:54] <zdzichu> Be-El: https://paste.debian.net/plain/743534 repeats constantly
[12:54] <zdzichu> this "/dane" is from first, removed cephfs
[12:56] * shyu (~Frank@218.241.172.114) has joined #ceph
[12:58] <micw> i fail to map an image because the kernel client misses some image features. but how can i see which features are supported?
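[Editor's note: for micw's question, a hedged sketch of how to see which image features are enabled and strip the ones an older kernel client rejects. The pool/image names are assumptions; the exact feature list to disable depends on the kernel version, and dmesg usually names the unsupported ones after a failed map.]

```shell
# Show the image's enabled features (the "features:" line).
rbd info rbd/myimage

# Disable the features commonly unsupported by older kernel clients,
# then retry the map.
rbd feature disable rbd/myimage deep-flatten fast-diff object-map exclusive-lock
rbd map rbd/myimage
```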
[12:59] * sleinen (~Adium@2001:620:0:82::102) has joined #ceph
[13:00] <Be-El> zdzichu: and the second try was at /tmp2?
[13:14] <zdzichu> yes
[13:15] <zdzichu> uh wait, those were directories inside cephfs
[13:15] <Be-El> and now inode 10000000 collides for both attempts, resulting in a failed assertion
[13:15] <zdzichu> with cephfs mounted at /mnt/ceph, first was /mnt/ceph/dane/ , second /mnt/ceph/tmp2/
[13:18] * kalleeen (~Deiz@06SAAECNH.tor-irc.dnsbl.oftc.net) Quit ()
[13:18] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Quit: Leaving)
[13:24] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[13:25] * owlbot (~supybot@pct-empresas-50.uc3m.es) has joined #ceph
[13:25] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[13:27] * evelu (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) has joined #ceph
[13:27] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Remote host closed the connection)
[13:29] * neo (~oftc-webi@pct-empresas-133.uc3m.es) has joined #ceph
[13:29] * neo is now known as Guest400
[13:33] * garphy is now known as garphy`aw
[13:33] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[13:34] * Guest400 (~oftc-webi@pct-empresas-133.uc3m.es) Quit ()
[13:35] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Remote host closed the connection)
[13:37] * haomaiwang (~haomaiwan@li1068-35.members.linode.com) Quit (Remote host closed the connection)
[13:37] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[13:38] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[13:39] * bene (~bene@2601:193:4101:f410:ea2a:eaff:fe08:3c7a) has joined #ceph
[13:39] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[13:41] * i_m (~ivan.miro@deibp9eh1--blueice2n1.emea.ibm.com) has joined #ceph
[13:42] <IvanJobs> I have a question about ceph-deploy: how can I configure ceph-deploy to use ssh port 2014 instead of the default 22?
[13:43] <sep> not sure about ceph-deploy. but what about making a ssh_config file for the ceph-deploy running user ?
[13:43] <micw> IvanJobs, if it internally uses openssh-client, it can be configured with .ssh/config
[13:43] <post-factum> IvanJobs: ~/.ssh/config
[13:44] <micw> ;-)
[13:44] <post-factum> meh
[13:44] <sep> :)
[13:44] <IvanJobs> wow, thx guys, micw, post-factum
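[Editor's note: the ~/.ssh/config approach suggested above can be sketched as follows. Host names and the user name are assumptions; the file goes in the home directory of the user running ceph-deploy.]

```
# ~/.ssh/config
Host node1 node2 node3
    Port 2014
    User cephdeploy
```

ceph-deploy shells out to ssh, so any per-host setting openssh honors here (port, user, identity file) applies without ceph-deploy needing its own option.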
[13:44] <post-factum> ok, i see ppl alive here. what's going on with ceph repos?
[13:44] <sep> pool size in this channel set to 3 obviously...
[13:45] <post-factum> http://ceph.com/rpm-hammer/rhel7/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found
[13:45] <ceph-ircslackbot1> <dvanders> http://download.ceph.com/rpm-hammer/el7/x86_64/repodata/repomd.xml ?
[13:45] <ceph-ircslackbot1> <dvanders> rhel7 -> el7
[13:45] <post-factum> yes, but http://ceph.com/rpm-hammer/rhel7/x86_64 is pre-configured in ceph-release rpm
[13:46] <post-factum> and ceph-release rpm is not updated
[13:46] <ceph-ircslackbot1> <dvanders> well, rhel7 is not there at all now
[13:47] <ceph-ircslackbot1> <dvanders> did you try the el7 ceph-release here: http://download.ceph.com/rpm-hammer/el7/noarch/
[13:47] <post-factum> yes
[13:47] <post-factum> baseurl=http://ceph.com/rpm-hammer/rhel7/$basearch
[13:47] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[13:48] <micw> can ceph intercept when 2 clients try to bind or mount the same rbd?
[13:48] <ceph-ircslackbot1> <dvanders> post-factum: that's clearly a bug, you're right.
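[Editor's note: until the ceph-release rpm is fixed, the workaround discussed above is to edit the repo file by hand, swapping rhel7 for el7. A sketch of the corrected file; this is not the official ceph-release content.]

```
# /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages
baseurl=http://download.ceph.com/rpm-hammer/el7/$basearch
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
```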
[13:48] * brianjjo (~Pettis@188.213.166.6) has joined #ceph
[13:49] <post-factum> i believe we should have ceph-release rpm updated
[13:50] <ceph-ircslackbot1> <dvanders> i think adeza maintains that
[13:51] <post-factum> how could one find adeza?
[13:51] <ceph-ircslackbot1> <dvanders> adeza@redhat.com
[13:55] * gauravba_ (~gauravbaf@49.32.28.201) Quit (Remote host closed the connection)
[13:57] <post-factum> thanks, mailed him, CCing ceph-users
[13:57] <ceph-ircslackbot1> <dvanders> np
[13:57] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Remote host closed the connection)
[13:58] * dneary (~dneary@62.214.2.210) has joined #ceph
[13:58] * dneary (~dneary@62.214.2.210) Quit ()
[14:01] * lmb (~Lars@tmo-102-168.customers.d1-online.com) has joined #ceph
[14:01] * garphy`aw is now known as garphy
[14:04] * evelu (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) Quit (Ping timeout: 480 seconds)
[14:04] * dugravot6 (~dugravot6@l-p-dn-in-4a.lionnois.site.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[14:10] * lmb (~Lars@tmo-102-168.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[14:11] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[14:15] * shylesh (~shylesh@121.244.87.118) Quit (Remote host closed the connection)
[14:15] * rwheeler (~rwheeler@pool-173-48-195-215.bstnma.fios.verizon.net) Quit (Quit: Leaving)
[14:18] * brianjjo (~Pettis@06SAAECPO.tor-irc.dnsbl.oftc.net) Quit ()
[14:19] * anadrom (~Xeon06@7V7AAGKAU.tor-irc.dnsbl.oftc.net) has joined #ceph
[14:19] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[14:22] * haomaiwang (~haomaiwan@li1068-35.members.linode.com) has joined #ceph
[14:23] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[14:23] * NTTEC (~nttec@49.146.70.133) Quit (Remote host closed the connection)
[14:24] * NTTEC (~nttec@49.146.70.133) has joined #ceph
[14:24] * haomaiwang (~haomaiwan@li1068-35.members.linode.com) Quit (Remote host closed the connection)
[14:24] * haomaiwang (~haomaiwan@li1068-35.members.linode.com) has joined #ceph
[14:25] * jcsp (~jspray@213.175.37.12) Quit (Ping timeout: 480 seconds)
[14:25] * NTTEC (~nttec@49.146.70.133) Quit (Read error: Connection reset by peer)
[14:25] * NTTEC (~nttec@49.146.70.133) has joined #ceph
[14:28] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[14:30] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit ()
[14:30] * dugravot6 (~dugravot6@nat-persul-montet.wifi.univ-lorraine.fr) has joined #ceph
[14:33] * bniver (~bniver@nat-pool-bos-u.redhat.com) has joined #ceph
[14:35] * dugravot6 (~dugravot6@nat-persul-montet.wifi.univ-lorraine.fr) Quit (Remote host closed the connection)
[14:37] * dugravot6 (~dugravot6@nat-persul-montet.wifi.univ-lorraine.fr) has joined #ceph
[14:40] * jinxing (~jinxing@58.33.4.210) has joined #ceph
[14:45] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Remote host closed the connection)
[14:48] * EthanL (~lamberet@cce02cs4036-fa12-z.ams.hpecore.net) Quit (Ping timeout: 480 seconds)
[14:48] * anadrom (~Xeon06@7V7AAGKAU.tor-irc.dnsbl.oftc.net) Quit ()
[14:49] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[14:52] * yanzheng1 (~zhyan@125.70.21.146) Quit (Quit: This computer has gone to sleep)
[14:52] * itwasntandy (~andrew@bash.sh) Quit (Ping timeout: 480 seconds)
[14:53] * yanzheng1 (~zhyan@125.70.21.146) has joined #ceph
[14:57] * itwasntandy (~andrew@bash.sh) has joined #ceph
[14:59] * EthanL (~lamberet@cce02cs4036-fa12-z.ams.hpecore.net) has joined #ceph
[15:00] * onyb (~ani07nov@119.82.105.66) has joined #ceph
[15:01] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[15:02] * georgem (~Adium@206.108.127.16) has joined #ceph
[15:03] * mattbenjamin (~mbenjamin@12.118.3.106) has joined #ceph
[15:06] * mhack (~mhack@24-151-36-149.dhcp.nwtn.ct.charter.com) has joined #ceph
[15:06] * khyron (~khyron@fixed-190-159-187-190-159-75.iusacell.net) Quit (Ping timeout: 480 seconds)
[15:08] * dougf_ (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) Quit (Quit: bye)
[15:08] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) has joined #ceph
[15:12] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[15:17] * owlbot (~supybot@pct-empresas-50.uc3m.es) Quit (Remote host closed the connection)
[15:18] * owlbot (~supybot@pct-empresas-50.uc3m.es) has joined #ceph
[15:21] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[15:22] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[15:24] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[15:24] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[15:26] <scheuk> hello, any radosgw performance metrics experts out there?
[15:27] <scheuk> I'm trying to understand the get_initial_lat and the put_initial_lat metrics
[15:27] <scheuk> also
[15:27] <scheuk> the difference between the gets/puts/requests metrics
[15:28] * IvanJobs (~ivanjobs@183.192.78.179) has joined #ceph
[15:28] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[15:30] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[15:31] * valeech (~valeech@wsip-70-166-79-23.ga.at.cox.net) has joined #ceph
[15:33] * winston-d_ (uid98317@id-98317.richmond.irccloud.com) Quit (Quit: Connection closed for inactivity)
[15:35] * gregmark (~Adium@68.87.42.115) has joined #ceph
[15:35] * shyu (~Frank@218.241.172.114) Quit (Ping timeout: 480 seconds)
[15:36] * vincepii (~textual@77.245.22.67) has joined #ceph
[15:39] <vincepii> Hello! I am looking for some clarification on this "Tip" from the troubleshooting page: "DO NOT mount kernel clients directly on the same node as your Ceph Storage Cluster, because kernel conflicts can arise. However, you can mount kernel clients within virtual machines (VMs) on a single node."
[15:39] * wes_dillingham (~wes_dilli@140.247.242.44) has joined #ceph
[15:39] <vincepii> if you have a node that is NOT an OSD, can you map an rbd there?
[15:39] <vincepii> (but this node is still part of ceph)
[15:39] <rkeene> vincepii, Of course
[15:40] <vincepii> so the thing to avoid is to map rbd images on nodes that also are OSDs?
[15:41] <T1w> yes or possibly MONs
[15:41] * micw (~micw@ip92346aec.dynamic.kabel-deutschland.de) Quit (Ping timeout: 480 seconds)
[15:41] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[15:42] <vincepii> and what about cephfs, can you mount it on ceph nodes (OSDs, MONs or MDSs)?
[15:43] <vincepii> I mean, for sure you can, but will there be risk for the kernel deadlock?
[15:44] <T1w> I would not suggest it
[15:45] * m0zes__ (~mozes@n117m02.cis.ksu.edu) has joined #ceph
[15:45] <T1w> if possible use a userspace client (fuse based one) and not a kernel based client, but it seems silly to mount a cephfs on part of the nodes that serve the fs to clients - it's high risk for something bad to happen
[15:45] <rkeene> cephfs is still talking to RADOS -- really the problem is overstated and you're not likely to hit a deadlock with RBDs mapped on an OSD host -- the issue is that I/O scheduling becomes contentious.
[15:48] * PuyoDead (~Zombiekil@216.218.134.12) has joined #ceph
[15:48] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Read error: Connection reset by peer)
[15:49] <vincepii> rkeene so I/O slows down for the whole host, not just for the ceph volume?
[15:50] * yanzheng1 (~zhyan@125.70.21.146) Quit (Quit: This computer has gone to sleep)
[15:51] * jermudgeon (~jhaustin@tab.biz.whitestone.link) has joined #ceph
[15:51] <rkeene> It creates more I/O scheduling overhead, since a write(2) will happen twice -- it's not a huge deal
[15:52] <rkeene> Now if you SWAP over an RBD, well... may God have mercy on your soul because the kswapd kernel thread runs at a much higher priority than the OSD process so the OSD process may be unrunnable because the swapper is swapping
[15:52] * yanzheng (~zhyan@125.70.21.146) has joined #ceph
[15:54] <vincepii> ok, it's very useful info you are giving me, thanks. But talking about likelihood of kernel deadlocks, do you see bugs being reported about it or people actually complaining about it here or on mailing list? Just trying to get a statistic :D
[15:55] <rkeene> Kernel deadlocks for userspace I/O systems like Ceph, NBD, etc are mostly related to swap AFAIK... I'm not an expert on Ceph but I do have an RBD that I map and have a filesystem on on a Ceph OSD node (for a few years now), but it is low-volume I/O
[15:56] * wes_dillingham (~wes_dilli@140.247.242.44) Quit (Quit: wes_dillingham)
[15:58] * yanzheng (~zhyan@125.70.21.146) Quit (Quit: This computer has gone to sleep)
[15:58] <vincepii> wait, getting confused about "userspace I/O systems": when you map an rbd image, you are using the rbd kernel module, no?
[16:01] * swami1 (~swami@49.32.0.228) Quit (Quit: Leaving.)
[16:01] * wes_dillingham (~wes_dilli@140.247.242.44) has joined #ceph
[16:01] * vata (~vata@207.96.182.162) has joined #ceph
[16:04] <rkeene> But all the I/O goes through userspace (ceph-osd)
[16:05] <rkeene> userspace (your app) -> kernel (fs) -> kernel (rbd) -> userspace (ceph-osd) -> kernel (fs) -> kernel (block) -> ... eventually disk
[16:06] * NTTEC (~nttec@49.146.70.133) Quit (Remote host closed the connection)
[16:07] <vincepii> cool, thanks :)
[16:12] * jermudgeon (~jhaustin@tab.biz.whitestone.link) Quit (Quit: jermudgeon)
[16:15] * IvanJobs (~ivanjobs@183.192.78.179) Quit ()
[16:16] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[16:16] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has left #ceph
[16:16] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[16:18] * PuyoDead (~Zombiekil@7V7AAGKF6.tor-irc.dnsbl.oftc.net) Quit ()
[16:18] * Behedwin (~hassifa@ns316491.ip-37-187-129.eu) has joined #ceph
[16:18] * ntpttr_laptop (~ntpttr@192.55.55.37) has joined #ceph
[16:19] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[16:28] * squizzi (~squizzi@107.13.31.195) has joined #ceph
[16:28] * rwheeler (~rwheeler@nat-pool-bos-u.redhat.com) has joined #ceph
[16:28] * evelu (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) has joined #ceph
[16:29] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[16:30] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[16:33] * yanzheng (~zhyan@125.70.21.146) has joined #ceph
[16:33] * tpetr (~tpetr@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[16:34] * tpetr (~tpetr@nat-pool-brq-t.redhat.com) has joined #ceph
[16:34] * vincepii (~textual@77.245.22.67) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[16:36] * vincepii (~textual@77.245.22.67) has joined #ceph
[16:41] * yanzheng (~zhyan@125.70.21.146) Quit (Quit: This computer has gone to sleep)
[16:41] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[16:41] * mshaffer (~Adium@2607:fad0:32:a02:c9d3:869:56ff:8672) has joined #ceph
[16:41] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[16:42] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[16:48] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:48] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) has joined #ceph
[16:48] * Behedwin (~hassifa@4MJAAGVHW.tor-irc.dnsbl.oftc.net) Quit ()
[16:48] <mshaffer> I'm looking for some help with a problematic PG. Whenever an osd that this PG is on is restarted, I get stuck with 2 degraded objects and 1 misplaced object. I haven't had any success finding out how to correct this. When the osd is down, a ceph pg query completes, but once the osd starts, that command hangs indefinitely. I'm not sure what further info would be of use but happy to supply whatever could help. Any help would be greatly appreciated
[16:48] * flisky (~Thunderbi@106.38.61.190) Quit (Ping timeout: 480 seconds)
[16:53] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) Quit (Quit: Leaving.)
[16:53] * Fapiko (~dug@relay1.tor.openinternet.io) has joined #ceph
[16:53] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[16:55] * gauravbafna (~gauravbaf@122.172.241.208) has joined #ceph
[16:58] <johnavp1989> Having an issue mounting cephfs with a secretfile. It mounts properly if I pass it the secret, but if I stick the secret in a file and use secretfile= it fails with wrong fs type, bad option, bad superblock
[16:59] <rkeene> johnavp1989, Maybe the secret in the file is wrong or inaccessible ?
[16:59] * evelu (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) Quit (Ping timeout: 480 seconds)
[17:00] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[17:01] * ventifus (~ventifus@2604:b500:a:15:18d9:636c:42c3:869b) has joined #ceph
[17:01] <johnavp1989> rkeene: I think I have the format correct according to the docs. It's just a file with nothing but the secret, named admin.secret. I'm mounting with sudo so I don't see why permissions would be an issue. I tried completely opening the permissions with the same result
[17:01] * gauravbafna (~gauravbaf@122.172.241.208) Quit (Read error: Connection reset by peer)
[17:02] * ventifus (~ventifus@2604:b500:a:15:18d9:636c:42c3:869b) Quit ()
[17:02] <rkeene> johnavp1989, It could be inaccessible due to namespacing issues if you're trying to do this in a fs namespace/chroot
[17:03] * saintpablo (~saintpabl@gw01.mhitp.dk) Quit (Ping timeout: 480 seconds)
[17:03] <johnavp1989> rkeene: syslog seems to tell me it just doesn't like the option: libceph: bad option at 'secretfile=/etc/ceph/admin.secret'
[17:03] * gauravbafna (~gauravbaf@122.167.71.122) has joined #ceph
[17:04] <johnavp1989> no namespaces/chroot, and it works with the secret= option
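[Editor's note: the "libceph: bad option at 'secretfile='" error above typically means the kernel received the option raw. secretfile= is handled by the mount.ceph userspace helper (shipped in ceph-common), which reads the file and passes the key to the kernel; without the helper installed, the kernel rejects the unknown option, while secret= works because the kernel understands it directly. A hedged sketch; monitor address, paths, and the Debian-style package manager are assumptions.]

```shell
# Install the helper that understands secretfile=
sudo apt-get install ceph-common        # provides /sbin/mount.ceph

# Then the secretfile form works:
sudo mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
     -o name=admin,secretfile=/etc/ceph/admin.secret
```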
[17:06] * rwheeler (~rwheeler@nat-pool-bos-u.redhat.com) Quit (Quit: Leaving)
[17:06] <bene> perf mtg this week?
[17:07] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:07] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[17:08] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[17:09] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[17:10] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[17:11] <bene> sage^
[17:11] * wushudoin (~wushudoin@2601:646:8281:cfd:2ab2:bdff:fe0b:a6ee) has joined #ceph
[17:11] * gauravbafna (~gauravbaf@122.167.71.122) Quit (Ping timeout: 480 seconds)
[17:11] * EthanL (~lamberet@cce02cs4036-fa12-z.ams.hpecore.net) Quit (Ping timeout: 480 seconds)
[17:13] <sage> skipping it for this week i guess!
[17:13] * dugravot6 (~dugravot6@nat-persul-montet.wifi.univ-lorraine.fr) Quit (Quit: Leaving.)
[17:15] * sleinen (~Adium@2001:620:0:82::102) Quit (Ping timeout: 480 seconds)
[17:15] * ntpttr_laptop (~ntpttr@192.55.55.37) Quit (Remote host closed the connection)
[17:15] * ntpttr_laptop (~ntpttr@134.134.139.83) has joined #ceph
[17:17] * tpetr (~tpetr@nat-pool-brq-t.redhat.com) Quit (Quit: Leaving)
[17:18] * gauravbafna (~gauravbaf@122.172.229.206) has joined #ceph
[17:20] * EthanL (~lamberet@cce02cs4036-fa12-z.ams.hpecore.net) has joined #ceph
[17:21] * joshd1 (~jdurgin@71-92-201-212.dhcp.gldl.ca.charter.com) has joined #ceph
[17:22] * xarses (~xarses@64.124.158.100) has joined #ceph
[17:23] * Fapiko (~dug@7V7AAGKKL.tor-irc.dnsbl.oftc.net) Quit ()
[17:23] * _s1gma (~blip2@ori.enn.lu) has joined #ceph
[17:23] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[17:24] * i_m (~ivan.miro@deibp9eh1--blueice2n1.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[17:25] * swami1 (~swami@27.7.165.79) has joined #ceph
[17:27] * gauravbafna (~gauravbaf@122.172.229.206) Quit (Ping timeout: 480 seconds)
[17:27] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[17:28] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[17:28] * billwebb (~billwebb@50-203-47-138-static.hfc.comcastbusiness.net) has joined #ceph
[17:29] * rraja (~rraja@121.244.87.117) has joined #ceph
[17:31] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[17:33] * vincepii (~textual@77.245.22.67) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[17:35] * vincepii (~textual@77.245.22.67) has joined #ceph
[17:36] * ntpttr_laptop (~ntpttr@134.134.139.83) Quit (Remote host closed the connection)
[17:36] * ntpttr_laptop (~ntpttr@134.134.139.83) has joined #ceph
[17:38] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[17:42] * derjohn_mob (~aj@88.128.80.159) has joined #ceph
[17:47] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[17:48] * lmb (~Lars@tmo-102-168.customers.d1-online.com) has joined #ceph
[17:49] * sleinen (~Adium@2001:620:0:82::100) has joined #ceph
[17:50] * dnunez (~dnunez@nat-pool-bos-t.redhat.com) has joined #ceph
[17:51] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[17:52] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[17:53] * _s1gma (~blip2@7V7AAGKMX.tor-irc.dnsbl.oftc.net) Quit ()
[17:53] * PeterRabbit (~Kyso_@tor.les.net) has joined #ceph
[17:59] * linjan_ (~linjan@86.62.112.22) Quit (Ping timeout: 480 seconds)
[18:02] * garphy is now known as garphy`aw
[18:02] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[18:11] * jinxing (~jinxing@58.33.4.210) Quit (Quit: jinxing)
[18:13] * bniver (~bniver@nat-pool-bos-u.redhat.com) Quit (Remote host closed the connection)
[18:15] * vincepii (~textual@77.245.22.67) Quit (Quit: Textual IRC Client: www.textualapp.com)
[18:15] * ade (~abradshaw@dslb-092-078-139-021.092.078.pools.vodafone-ip.de) Quit (Quit: Too sexy for his shirt)
[18:15] * danieagle (~Daniel@187.75.17.48) has joined #ceph
[18:17] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) has joined #ceph
[18:18] * Animazing (~Wut@94.242.217.235) Quit (Ping timeout: 480 seconds)
[18:18] * bniver (~bniver@nat-pool-bos-u.redhat.com) has joined #ceph
[18:23] * PeterRabbit (~Kyso_@7V7AAGKO4.tor-irc.dnsbl.oftc.net) Quit ()
[18:23] * Jamana1 (~Chaos_Lla@06SAAEC4E.tor-irc.dnsbl.oftc.net) has joined #ceph
[18:24] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[18:25] * TMM (~hp@185.5.121.201) Quit (Quit: Ex-Chat)
[18:25] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[18:28] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:29] * Racpatel (~Racpatel@2601:641:200:4c30:4e34:88ff:fe87:9abf) has joined #ceph
[18:29] * jclm (~jclm@86.188.165.219) has joined #ceph
[18:31] * jclm (~jclm@86.188.165.219) Quit ()
[18:32] * reed (~reed@216.38.134.18) has joined #ceph
[18:35] * joshd1 (~jdurgin@71-92-201-212.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[18:38] * derjohn_mob (~aj@88.128.80.159) Quit (Ping timeout: 480 seconds)
[18:38] * ntpttr_laptop (~ntpttr@134.134.139.83) Quit (Quit: Leaving)
[18:39] * Racpatel (~Racpatel@2601:641:200:4c30:4e34:88ff:fe87:9abf) Quit (Quit: Leaving)
[18:40] * mykola (~Mikolaj@193.93.217.46) has joined #ceph
[18:41] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[18:42] * Racpatel (~Racpatel@2601:641:200:4c30:4e34:88ff:fe87:9abf) has joined #ceph
[18:43] * wgao (~wgao@106.120.101.38) Quit (Read error: Connection timed out)
[18:43] * kawa2014 (~kawa@5.86.41.224) Quit (Ping timeout: 480 seconds)
[18:43] * danieagle (~Daniel@187.75.17.48) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[18:44] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) Quit (Quit: Leaving.)
[18:45] * sleinen (~Adium@2001:620:0:82::100) Quit (Ping timeout: 480 seconds)
[18:45] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Quit: jermudgeon)
[18:46] * wgao (~wgao@106.120.101.38) has joined #ceph
[18:48] * reed (~reed@216.38.134.18) Quit (Quit: Ex-Chat)
[18:48] * DanFoster (~Daniel@office.34sp.com) Quit (Quit: Leaving)
[18:49] * reed (~reed@216.38.134.18) has joined #ceph
[18:50] * Vacuum_ (~Vacuum@88.130.218.46) has joined #ceph
[18:52] * ventifus (~ventifus@2604:b500:a:15:18d9:636c:42c3:869b) has joined #ceph
[18:53] * Jamana1 (~Chaos_Lla@06SAAEC4E.tor-irc.dnsbl.oftc.net) Quit ()
[18:55] * Dav1 (~user@static-206-226-72-81.cust.tzulo.com) has joined #ceph
[18:56] * Dav1 (~user@static-206-226-72-81.cust.tzulo.com) has left #ceph
[18:57] * Vacuum__ (~Vacuum@88.130.221.106) Quit (Ping timeout: 480 seconds)
[19:00] * jermudgeon (~jhaustin@wpc-pe-l2.whitestone.link) has joined #ceph
[19:05] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[19:10] * gauravbafna (~gauravbaf@122.172.229.206) has joined #ceph
[19:10] * shylesh (~shylesh@45.124.227.167) has joined #ceph
[19:15] * linjan (~linjan@176.195.252.240) has joined #ceph
[19:17] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) has joined #ceph
[19:23] * Rehevkor (~Knuckx@exit1.ipredator.se) has joined #ceph
[19:29] * lmb (~Lars@tmo-102-168.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[19:33] * cathode (~cathode@50.232.215.114) has joined #ceph
[19:35] * bniver (~bniver@nat-pool-bos-u.redhat.com) Quit (Remote host closed the connection)
[19:36] * gauravbafna (~gauravbaf@122.172.229.206) Quit (Remote host closed the connection)
[19:37] * dugravot6 (~dugravot6@4cy54-1-88-187-244-6.fbx.proxad.net) has joined #ceph
[19:42] <wes_dillingham> does anyone know of a project that is a web app which allows ldap auth login and on the backend creates an rgw user and serves up keys / is there any built in ldap integration to RGW frontend auth?
[19:43] * swami1 (~swami@27.7.165.79) Quit (Quit: Leaving.)
[19:45] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[19:45] <SamYaple> wes_dillingham: you _could_ do this with OpenStack's Keystone, but it's probably not exactly what you're looking for, I don't think
[19:46] <SamYaple> depending on your needs you might be able to get it to do what you need though
[19:46] <wes_dillingham> Yea, I don't run openstack, but I suppose maybe I could use standalone keystone to do this?
[19:46] <wes_dillingham> I will check that out SamYaple
[19:47] * vikhyat (~vumrao@123.252.241.152) has joined #ceph
[19:48] * rakeshgm (~rakesh@121.244.87.117) Quit (Ping timeout: 480 seconds)
[19:48] * dugravot6 (~dugravot6@4cy54-1-88-187-244-6.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[19:53] * Rehevkor (~Knuckx@7V7AAGKUK.tor-irc.dnsbl.oftc.net) Quit ()
[19:54] * evelu (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) has joined #ceph
[19:55] * evelu (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) Quit (Remote host closed the connection)
[19:55] * evelu (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) has joined #ceph
[19:57] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:959c:fd25:b144:9b3c) Quit (Ping timeout: 480 seconds)
[20:07] * mgolub (~Mikolaj@193.93.217.58) has joined #ceph
[20:12] * mykola (~Mikolaj@193.93.217.46) Quit (Ping timeout: 480 seconds)
[20:13] * agsha (~agsha@124.40.246.234) has joined #ceph
[20:14] * henrique (~henriquet@router2.lsd.ufcg.edu.br) Quit (Read error: Connection reset by peer)
[20:17] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[20:18] * kicker__ (~kicker@216.160.243.23) has joined #ceph
[20:23] * zviratko (~nupanick@195.228.45.176) has joined #ceph
[20:23] * vikhyat (~vumrao@123.252.241.152) Quit (Quit: Leaving)
[20:26] * shylesh (~shylesh@45.124.227.167) Quit (Remote host closed the connection)
[20:26] <SamYaple> wes_dillingham: you could, but it does have limitations
[20:26] * jermudgeon (~jhaustin@wpc-pe-l2.whitestone.link) Quit (Quit: jermudgeon)
[20:27] <SamYaple> wes_dillingham: for example you would need to use the swift emulation and not RGW directly (i believe)
[20:27] <SamYaple> wes_dillingham: there is likely a better way to do what you want; what i suggested with keystone is the only way _I_ know of though
[20:27] * jermudgeon (~jhaustin@31.207.56.59) has joined #ceph
[20:29] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[20:36] * davidzlap1 (~Adium@2605:e000:1313:8003:bcd8:792b:43bf:3ca7) has joined #ceph
[20:37] * davidzlap (~Adium@2605:e000:1313:8003:70ec:82ae:55ea:933b) Quit (Read error: Connection reset by peer)
[20:37] * bniver (~bniver@pool-173-48-58-27.bstnma.fios.verizon.net) has joined #ceph
[20:42] * davidzlap (~Adium@2605:e000:1313:8003:bcd8:792b:43bf:3ca7) has joined #ceph
[20:42] * jermudgeon_ (~jhaustin@wpc-pe-l2.whitestone.link) has joined #ceph
[20:42] * davidzlap1 (~Adium@2605:e000:1313:8003:bcd8:792b:43bf:3ca7) Quit (Read error: Connection reset by peer)
[20:47] * jermudgeon (~jhaustin@31.207.56.59) Quit (Ping timeout: 480 seconds)
[20:47] * jermudgeon_ is now known as jermudgeon
[20:53] * zviratko (~nupanick@06SAAEDAP.tor-irc.dnsbl.oftc.net) Quit ()
[20:53] * Rehevkor (~offender@Relay-J.tor-exit.network) has joined #ceph
[20:57] * kicker__ (~kicker@216.160.243.23) Quit (Read error: Connection reset by peer)
[20:57] * kicker__ (~kicker@216.160.243.23) has joined #ceph
[21:00] * jcsp (~jspray@80.120.160.35) has joined #ceph
[21:01] * rendar (~I@host183-125-dynamic.183-80-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[21:05] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:d97a:3df0:7bfd:9f48) has joined #ceph
[21:06] * billwebb (~billwebb@50-203-47-138-static.hfc.comcastbusiness.net) Quit (Quit: billwebb)
[21:07] * billwebb (~billwebb@50-203-47-138-static.hfc.comcastbusiness.net) has joined #ceph
[21:07] * agsha (~agsha@124.40.246.234) Quit (Remote host closed the connection)
[21:10] * bene (~bene@2601:193:4101:f410:ea2a:eaff:fe08:3c7a) Quit (Quit: Konversation terminated!)
[21:10] * mattbenjamin (~mbenjamin@12.118.3.106) Quit (Ping timeout: 480 seconds)
[21:12] * kicker__ (~kicker@216.160.243.23) Quit (Ping timeout: 480 seconds)
[21:17] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[21:21] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:d97a:3df0:7bfd:9f48) Quit (Ping timeout: 480 seconds)
[21:21] * T1 (~the_one@87.104.212.66) Quit (Quit: Where did the client go?)
[21:23] * Rehevkor (~offender@7V7AAGKYN.tor-irc.dnsbl.oftc.net) Quit ()
[21:23] * clarjon1 (~AG_Clinto@edwardsnowden1.torservers.net) has joined #ceph
[21:24] * derjohn_mob (~aj@x4db252b0.dyn.telefonica.de) has joined #ceph
[21:27] * rendar (~I@host183-125-dynamic.183-80-r.retail.telecomitalia.it) has joined #ceph
[21:27] * T1 (~the_one@87.104.212.66) has joined #ceph
[21:28] * gauravbafna (~gauravbaf@122.167.206.36) has joined #ceph
[21:30] * kicker__ (~kicker@104.244.84.18) has joined #ceph
[21:34] * wgao (~wgao@106.120.101.38) Quit (Ping timeout: 480 seconds)
[21:34] * wgao (~wgao@106.120.101.38) has joined #ceph
[21:35] * rwheeler (~rwheeler@pool-173-48-195-215.bstnma.fios.verizon.net) has joined #ceph
[21:36] * gauravbafna (~gauravbaf@122.167.206.36) Quit (Ping timeout: 480 seconds)
[21:40] * kicker__ (~kicker@104.244.84.18) Quit (Ping timeout: 480 seconds)
[21:41] * cyphase_eviltwin (~cyphase@2601:640:c401:969a:468a:5bff:fe29:b5fd) has joined #ceph
[21:42] * cyphase_eviltwin is now known as cyphase
[21:43] * cyphase is now known as Guest434
[21:43] * gauravbafna (~gauravbaf@122.178.253.241) has joined #ceph
[21:45] * kicker__ (~kicker@104.244.84.18) has joined #ceph
[21:46] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[21:46] * mattstroud (~kicker@104.244.84.18) has joined #ceph
[21:51] * gauravbafna (~gauravbaf@122.178.253.241) Quit (Ping timeout: 480 seconds)
[21:51] * onyb (~ani07nov@119.82.105.66) Quit (Quit: raise SystemExit())
[21:53] * clarjon1 (~AG_Clinto@4MJAAGVW1.tor-irc.dnsbl.oftc.net) Quit ()
[21:53] * click1 (~PappI@atlantic480.us.unmetered.com) has joined #ceph
[21:53] * kicker__ (~kicker@104.244.84.18) Quit (Ping timeout: 480 seconds)
[21:54] * Guest434 is now known as cyphase
[21:54] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[21:56] * cyphase (~cyphase@000134f2.user.oftc.net) Quit (Quit: cyphase.com)
[21:56] * cyphase (~cyphase@000134f2.user.oftc.net) has joined #ceph
[21:57] * sleinen (~Adium@2001:620:0:82::101) has joined #ceph
[22:01] * billwebb (~billwebb@50-203-47-138-static.hfc.comcastbusiness.net) Quit (Quit: billwebb)
[22:05] * mattstroud1 (~kicker@104.244.84.18) has joined #ceph
[22:05] * mattstroud (~kicker@104.244.84.18) Quit (Read error: Connection reset by peer)
[22:06] * micw (~micw@p2003000603F72E1610CB037CCF530789.dip0.t-ipconnect.de) has joined #ceph
[22:06] <micw> hi
[22:06] * mgolub (~Mikolaj@193.93.217.58) Quit (Quit: away)
[22:07] * davidzlap1 (~Adium@2605:e000:1313:8003:bcd8:792b:43bf:3ca7) has joined #ceph
[22:07] * fandi (~fandi@112.78.178.16) has joined #ceph
[22:07] <fandi> hi all
[22:07] <fandi> i have ceph cluster
[22:07] * davidzlap (~Adium@2605:e000:1313:8003:bcd8:792b:43bf:3ca7) Quit (Read error: Connection reset by peer)
[22:08] <micw> today i did some tests with my backup (i backup to ceph using rsync). cephfs is very slow with this kind of workload (rsyncing a simple root fs a 2nd time takes ~25 minutes). with rbd formatted with ext4 and mounted, the same sync takes 3-5 minutes.
[22:08] <fandi> i do ceph df, is that true if i have total 165 T ?
[22:08] <fandi> root@node-1:~# ceph df
[22:08] <fandi> GLOBAL:
[22:08] <fandi> SIZE AVAIL RAW USED %RAW USED
[22:08] <fandi> 226T 165T 62640G 27.03
[22:09] <micw> drawback with rbd is that i have a single point of failure (the system where the rbd is mounted)
[22:09] <micw> is there a chance to get a comparable speed with cephfs?
[22:09] <micw> fandi, do you have quotas enabled?
[22:10] <micw> if not, you see raw cluster stats
[22:10] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[22:10] <fandi> hi micw, how to check quotas?
[22:10] <micw> dont know ;)
[22:10] <micw> but i have read (and seen) that df lies a bit
[22:10] <micw> it shows cluster internal stuff. e.g. if you have 3 replicas, USED shows size*3
[22:11] <micw> but if you do a "ls -alh" cephfs has the nice feature to show usage per folder
[22:11] <micw> what does ceph -s say?
[22:11] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[22:12] <micw> btw. how many disks/machines do you have in this cluster?
[22:14] <micw> oh, did not notice that you mean "ceph df", i thought you meant "df" on ceph ;)
[22:16] * garphy`aw is now known as garphy
[22:21] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[22:21] * debian112 (~bcolbert@24.126.201.64) Quit (Read error: No route to host)
[22:22] * linjan (~linjan@176.195.252.240) Quit (Ping timeout: 480 seconds)
[22:22] <fandi> hi micw,
[22:23] <fandi> this is my ceph -s result
[22:23] <fandi> https://gist.github.com/fandikurnia/91c76cc8b752f2ec564ea777c24acbd8
[22:23] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[22:23] * click1 (~PappI@7V7AAGK1N.tor-irc.dnsbl.oftc.net) Quit ()
[22:23] * xolotl (~Sliker@7V7AAGK23.tor-irc.dnsbl.oftc.net) has joined #ceph
[22:23] <fandi> are you sure ceph fs already stable ?
[22:23] <micw> i've asked this question yesterday on ceph-devel ;)
[22:24] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) Quit (Quit: Leaving.)
[22:24] <micw> short answer: if you use one mds, yes
[22:24] <micw> you have 165TB avail
[22:24] <micw> 21TB of data that uses 62TB of space
[22:25] <micw> means 3x redundancy
[22:25] * EthanL (~lamberet@cce02cs4036-fa12-z.ams.hpecore.net) Quit (Ping timeout: 480 seconds)
[22:26] <fandi> yups, we set 3x redundancy, so i have 165 T / 3 equal 55 T
[22:27] * garphy is now known as garphy`aw
[22:27] <fandi> is that true? btw micw thanks
[22:27] <micw> seems so. unfortunately you cannot really see that you have 55T
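The raw-vs-usable arithmetic micw and fandi work through above can be sketched as follows. This is only an illustration of the accounting they describe: `ceph df` reports RAW figures, and with size=3 replication the usable numbers are roughly RAW divided by 3. The numbers are copied from the `ceph df` output pasted earlier in the conversation, and the 3x replication factor is what fandi states, not something queried from the cluster.

```python
# Approximate usable capacity for a 3x-replicated Ceph cluster,
# using the GLOBAL figures from `ceph df` quoted above.

def usable_tb(raw_tb: float, replicas: int = 3) -> float:
    """Usable capacity is roughly the raw figure divided by the replica count."""
    return raw_tb / replicas

raw_total_tb = 226.0    # SIZE column (TB)
raw_avail_tb = 165.0    # AVAIL column (TB)
raw_used_gb = 62640     # RAW USED column (GB)

print(f"usable total ~ {usable_tb(raw_total_tb):.0f} TB")
print(f"usable avail ~ {usable_tb(raw_avail_tb):.0f} TB")
print(f"actual data  ~ {raw_used_gb / 1024 / 3:.0f} TB")
```

Running this gives roughly 75 TB usable total and the 55 TB available that fandi arrives at; the ~20 TB of actual data matches micw's "21TB of data that uses 62TB of space" once unit rounding is accounted for.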
[22:28] * onyb (~ani07nov@119.82.105.66) has joined #ceph
[22:28] <micw> what kind of workload do you run on this machines?
[22:29] <fandi> ok micw, maybe we have 40T. actually we use for openstack backend storage
[22:30] * Nicola-1980 (~Nicola-19@x4db42015.dyn.telefonica.de) has joined #ceph
[22:31] <micw> nice. may i rent one with 10TB?
[22:31] <fandi> :)
[22:31] <fandi> i have to ask my boss :D
[22:32] <micw> since yesterday i have 40TB on 3 machines for backup purposes
[22:32] <micw> unfortunately cephfs performs really bad with rsync workload :(
[22:32] <micw> not as bad as glusterfs ;)
[22:33] <micw> but 5x worse than a locally mounted rbd
[22:33] <T1> read up on how cephfs works and you'll see why
[22:33] <fandi> :)
[22:33] <T1> I am not surprised
[22:33] <micw> T1: it's because of mds bottleneck?
[21:34] <T1> mostly, yes
[22:34] <micw> are there any good alternatives?
[22:34] <micw> i considered running 3 active mds
[22:34] <T1> the actual data flows directly between the client and the OSDs
[22:34] <micw> but got a warning from devs M)
[22:35] <micw> yeah but (repeated) rsync is 90% metadata
[22:35] <T1> so it's only the MDS thats adding overhead compared to a RBD mounted on a single machine somewhere
[22:35] * garphy`aw is now known as garphy
[22:35] * EthanL (~lamberet@cce02cs4036-fa12-z.ams.hpecore.net) has joined #ceph
[22:35] <micw> speed difference between rbd and cephfs in this scenario is at least 5x
[22:35] <T1> give it a few versions and things might speed up
[22:36] <T1> but given that you use it for backup purposes, having an RBD mounted on a single machine with no failover does not seem like such a bad idea
[22:36] <micw> maybe. i've tried different things, metadata always seems to be the bottleneck on a distributed fs
[22:37] <micw> T1, the reason we chose ceph was failover ;)
[22:37] * Nicola-1_ (~Nicola-19@x4db42015.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[22:37] <micw> or ha
[22:37] <micw> so it's a bit disappointing
[22:37] <T1> if you use a good journaled filesystem inside the RBD, you should be able to perform failover to another client without any loss
[22:37] * EthanL (~lamberet@cce02cs4036-fa12-z.ams.hpecore.net) has left #ceph
[22:38] <T1> well.. failover for the backup device mountpoint(!) seems like overkill
[22:38] <micw> yeah but i have to script it myself (using linux-ha or such)
[22:38] <T1> the actual data is still safe inside the rbd
[22:38] <micw> my initial concept was:
[22:38] <micw> having 3 nodes with mon and 4 osds each
[22:38] <micw> mounted cephfs on each node
[22:38] <T1> google it around - someone has published a working pacemaker setup
[22:39] <T1> (at least I remember it as pacemaker..)
[22:39] <micw> resolving backupcluster.mydomain to all 3 nodes
[22:39] <T1> ah, yes
[22:39] <T1> here we are..
[22:39] <T1> https://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/
[22:39] <micw> so if my backup clients (ssh/rsync) connect, they will always reach one random node
[22:39] <T1> you can send your thanks to leseb
[22:39] * sbfox (~Adium@vancouver.xmatters.com) has joined #ceph
[22:40] <T1> I asked him about that setup a while ago, and he said it was still running
[22:40] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[22:40] <micw> yeah, looks like what i need
[22:40] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[22:40] <micw> unfortunately my hosting provider does not support floating ip addresses directly
[22:41] <micw> but i can bind an extra ssh port only on the machine that has the mount and use client failover
[22:42] * sbfox (~Adium@vancouver.xmatters.com) Quit ()
[22:42] <micw> thank you for the link
[22:43] <T1> np
[22:44] * georgem (~Adium@24.114.70.16) has joined #ceph
[22:45] <micw> are there any experiences running gfs or such on rbd?
[22:45] <T1> no idea
[22:45] * georgem (~Adium@24.114.70.16) Quit ()
[22:45] * georgem (~Adium@206.108.127.16) has joined #ceph
[22:47] <rkeene> Test it out :-)
[22:49] <micw> need more time than a day has :-(
[22:49] <rkeene> Just need more days
[22:50] <micw> yeah. unfortunately i spent most time planned for the backup with glusterfs
[22:50] * mattstroud1 (~kicker@104.244.84.18) Quit (Read error: Connection reset by peer)
[22:50] * mattstroud1 (~kicker@104.244.84.18) has joined #ceph
[22:51] * wonko_be_ (bernard@november.openminds.be) Quit (Remote host closed the connection)
[22:51] * wonko_be (bernard@november.openminds.be) has joined #ceph
[22:51] * mattstroud1 (~kicker@104.244.84.18) Quit ()
[22:53] * xolotl (~Sliker@7V7AAGK23.tor-irc.dnsbl.oftc.net) Quit ()
[22:53] * maku (~JohnO@06SAAEDHU.tor-irc.dnsbl.oftc.net) has joined #ceph
[22:54] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) has joined #ceph
[22:54] <micw> would it solve my metadata performance problem if i enable multiple mds?
[22:57] <rkeene> I don't think so -- but I haven't upgraded to 10.2 yet
[22:58] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) has joined #ceph
[23:10] * wes_dillingham (~wes_dilli@140.247.242.44) Quit (Ping timeout: 480 seconds)
[23:17] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[23:18] * bitserker (~toni@81.184.9.72.dyn.user.ono.com) has joined #ceph
[23:18] * bitserker (~toni@81.184.9.72.dyn.user.ono.com) Quit (Remote host closed the connection)
[23:20] * rraja (~rraja@121.244.87.117) Quit (Quit: Leaving)
[23:22] * davidzlap1 (~Adium@2605:e000:1313:8003:bcd8:792b:43bf:3ca7) Quit (Read error: Connection reset by peer)
[23:22] * davidzlap (~Adium@2605:e000:1313:8003:bcd8:792b:43bf:3ca7) has joined #ceph
[23:23] * maku (~JohnO@06SAAEDHU.tor-irc.dnsbl.oftc.net) Quit ()
[23:32] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[23:32] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[23:36] * valeech_ (~valeech@wsip-70-166-79-23.ga.at.cox.net) has joined #ceph
[23:36] * valeech (~valeech@wsip-70-166-79-23.ga.at.cox.net) Quit (Read error: Connection reset by peer)
[23:36] * valeech_ is now known as valeech
[23:39] * valeech (~valeech@wsip-70-166-79-23.ga.at.cox.net) Quit (Read error: Connection reset by peer)
[23:39] * valeech (~valeech@wsip-70-166-79-23.ga.at.cox.net) has joined #ceph
[23:41] * georgem (~Adium@206.108.127.16) Quit (Ping timeout: 480 seconds)
[23:43] * dnunez (~dnunez@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[23:49] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[23:53] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) Quit (Ping timeout: 480 seconds)
[23:56] * davidzlap1 (~Adium@2605:e000:1313:8003:bcd8:792b:43bf:3ca7) has joined #ceph
[23:56] * davidzlap (~Adium@2605:e000:1313:8003:bcd8:792b:43bf:3ca7) Quit (Read error: Connection reset by peer)
[23:58] * sleinen (~Adium@2001:620:0:82::101) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.