#ceph IRC Log

Index

IRC Log for 2016-06-07

Timestamps are in GMT/BST.

[0:06] * _28_ria (~kvirc@opfr028.ru) Quit (Read error: Connection reset by peer)
[0:06] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[0:07] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[0:08] * mattbenjamin (~mbenjamin@12.118.3.106) Quit (Quit: Leaving.)
[0:08] * N3X15 (~Grimhound@06SAADLTG.tor-irc.dnsbl.oftc.net) Quit ()
[0:08] * Hazmat (~csharp@tor-exit-4.all.de) has joined #ceph
[0:10] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[0:13] * danieagle (~Daniel@177.188.65.64) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[0:17] * jermudgeon (~jhaustin@199.200.6.147) has joined #ceph
[0:19] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[0:20] * ircolle (~Adium@2601:285:201:633a:b482:ba8e:ae5d:2e88) Quit (Quit: Leaving.)
[0:21] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[0:22] * srk (~oftc-webi@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[0:31] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[0:35] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[0:38] * Hazmat (~csharp@4MJAAF253.tor-irc.dnsbl.oftc.net) Quit ()
[0:38] * Redshift (~mrapple@7V7AAFTFW.tor-irc.dnsbl.oftc.net) has joined #ceph
[0:42] * jermudgeon (~jhaustin@199.200.6.147) Quit (Quit: jermudgeon)
[0:43] * hellertime (~Adium@pool-173-48-155-219.bstnma.fios.verizon.net) has joined #ceph
[0:43] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[0:45] * joelc (~joelc@cpe-24-28-78-20.austin.res.rr.com) Quit (Remote host closed the connection)
[0:45] * ircolle (~Adium@2601:285:201:633a:f074:ba:45ed:a9ea) has joined #ceph
[0:46] * hellertime (~Adium@pool-173-48-155-219.bstnma.fios.verizon.net) Quit ()
[0:47] * _28_ria (~kvirc@opfr028.ru) has joined #ceph
[0:56] * joelc (~joelc@cpe-24-28-78-20.austin.res.rr.com) has joined #ceph
[1:01] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[1:01] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[1:03] * dgurtner (~dgurtner@178.197.239.98) Quit (Ping timeout: 480 seconds)
[1:03] * lcurtis (~lcurtis@47.19.105.250) Quit (Remote host closed the connection)
[1:04] * dneary (~dneary@pool-96-233-46-27.bstnma.fios.verizon.net) has joined #ceph
[1:07] * jermudgeon (~jhaustin@199.200.6.148) has joined #ceph
[1:08] * Redshift (~mrapple@7V7AAFTFW.tor-irc.dnsbl.oftc.net) Quit ()
[1:08] * curtis864 (~Neon@ns316491.ip-37-187-129.eu) has joined #ceph
[1:10] * jermudgeon (~jhaustin@199.200.6.148) Quit ()
[1:13] <flaf> I'have tried to set ???mds standby replay = true??? in the global section of my ceph.conf, I have restarted all the mds daemons (id=ceph01,ceph02,ceph03) but when I launch ???ceph mds dump???, one mds is missing. Is it normal?
[1:15] <flaf> I can see one mds active (currently ceph03), one in "standby-replay" state (ceph02) but one is not mentioned (ceph01).
[1:15] * davidz (~davidz@2605:e000:1313:8003:46b:c01d:af13:2a90) Quit (Quit: Leaving.)
[1:16] * joelc (~joelc@cpe-24-28-78-20.austin.res.rr.com) Quit (Remote host closed the connection)
[1:17] * hellertime (~Adium@pool-173-48-155-219.bstnma.fios.verizon.net) has joined #ceph
[1:17] * davidzlap (~Adium@2605:e000:1313:8003:91cd:f442:aa19:ea6) has joined #ceph
[1:19] * oms101 (~oms101@p20030057EA13E100C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:22] <flaf> Furthermore, the ???ceph -v??? is curious => ???fsmap e37: 1/1/1 up {0=ceph02=up:standby-replay}, 1 up:standby???. It's not mentioned that "ceph03" is the active mds (according to the output of ???ceph mds dump???).
[1:27] * oms101 (~oms101@p20030057EA071A00C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:29] * khyron (~khyron@fixed-190-159-187-190-159-75.iusacell.net) has joined #ceph
[1:30] * ircolle (~Adium@2601:285:201:633a:f074:ba:45ed:a9ea) Quit (Quit: Leaving.)
[1:32] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[1:33] * hellertime (~Adium@pool-173-48-155-219.bstnma.fios.verizon.net) Quit (Quit: Leaving.)
[1:35] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[1:38] * curtis864 (~Neon@4MJAAF28A.tor-irc.dnsbl.oftc.net) Quit ()
[1:38] * bildramer1 (~Freddy@tor-exit-4.all.de) has joined #ceph
[1:41] * vata (~vata@cable-192.222.249.207.electronicbox.net) has joined #ceph
[1:41] * bildramer1 (~Freddy@4MJAAF29T.tor-irc.dnsbl.oftc.net) Quit (Remote host closed the connection)
[1:43] <ronrib> do you have the mds defined in the config? eg: [mds.ceph] host = ceph
[1:44] <flaf> ronrib: no.
[1:44] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[1:44] <flaf> In fact, I realize that ???ceph mds dump??? doesn't print mds in standby state.
[1:44] <ronrib> give that a shot, i've found the mds may not even start without that in there
[1:45] * rf`2 (~Hideous@tor-exit7-readme.dfri.se) has joined #ceph
[1:45] <flaf> ronrib: in my case, mds start well.
[1:46] <flaf> the strange thing is that ???ceph -s??? doesn't print the active mds but the standby-replay mds.
[1:48] * sudocat (~dibarra@192.185.1.20) Quit (Quit: Leaving.)
[1:54] * andreww (~xarses@64.124.158.100) Quit (Ping timeout: 480 seconds)
[1:54] <ronrib> i've only just started playing around with the mds, sorry I haven't come across that one
[1:54] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Quit: jermudgeon)
[1:54] <flaf> ronrib: no problem. Thx.
[1:55] * Skaag (~lunix@65.200.54.234) Quit (Quit: Leaving.)
[1:56] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[2:00] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[2:00] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[2:07] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[2:09] <praveen> Hi
[2:09] <praveen> May I know how to enable dout logs
[2:10] <praveen> and where the logs gets logged
[2:11] * cathode (~cathode@50.232.215.114) Quit (Quit: Leaving)
[2:14] * rf`2 (~Hideous@4MJAAF298.tor-irc.dnsbl.oftc.net) Quit ()
[2:14] * Behedwin (~Mattress@192.87.28.28) has joined #ceph
[2:25] <badone> praveen: for which daemon? The logs should end up in /var/log/ceph/
[2:26] <badone> praveen: this is an introduction, http://docs.ceph.com/docs/jewel/rados/troubleshooting/log-and-debug/
[2:26] <badone> or http://docs.ceph.com/docs/master/rados/troubleshooting/log-and-debug/ for master branch
[2:38] * agsha (~agsha@124.40.246.234) has joined #ceph
[2:40] * blizzow (~jburns@50.243.148.102) Quit (Ping timeout: 480 seconds)
[2:40] <flaf> Another thing I have noticed with a standby-replay mds, it's in its logs. I have every seconds the 2 same lines ???mds.0.0 standby_replay_restart (as standby)??? and ???mds.0.0 replay_done (as standby)???. Is it normal?
[2:41] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[2:44] * Behedwin (~Mattress@4MJAAF3A4.tor-irc.dnsbl.oftc.net) Quit ()
[2:45] * Bromine (~Izanagi@4MJAAF3CI.tor-irc.dnsbl.oftc.net) has joined #ceph
[2:47] * hellertime (~Adium@pool-173-48-155-219.bstnma.fios.verizon.net) has joined #ceph
[2:49] * agsha (~agsha@124.40.246.234) Quit (Remote host closed the connection)
[2:52] * wushudoin (~wushudoin@2601:646:9501:d2b2:2ab2:bdff:fe0b:a6ee) Quit (Ping timeout: 480 seconds)
[2:54] * andreww (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[2:58] * dneary (~dneary@pool-96-233-46-27.bstnma.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[3:02] * georgem (~Adium@104-222-119-175.cpe.teksavvy.com) has joined #ceph
[3:06] * MentalRay (~MentalRay@107.171.161.165) has joined #ceph
[3:08] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) Quit (Ping timeout: 480 seconds)
[3:08] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[3:11] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) Quit (Quit: Leaving)
[3:12] * georgem (~Adium@104-222-119-175.cpe.teksavvy.com) Quit (Quit: Leaving.)
[3:12] * georgem (~Adium@206.108.127.16) has joined #ceph
[3:14] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Quit: jermudgeon)
[3:14] * Bromine (~Izanagi@4MJAAF3CI.tor-irc.dnsbl.oftc.net) Quit ()
[3:17] * joelc (~joelc@cpe-24-28-78-20.austin.res.rr.com) has joined #ceph
[3:18] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) has joined #ceph
[3:19] * BillyBobJohn (~Zeis@163.172.152.231) has joined #ceph
[3:25] * joelc (~joelc@cpe-24-28-78-20.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[3:32] * yanzheng (~zhyan@125.70.23.87) has joined #ceph
[3:35] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:49] * BillyBobJohn (~Zeis@4MJAAF3D1.tor-irc.dnsbl.oftc.net) Quit ()
[3:50] * Vacuum_ (~Vacuum@88.130.200.119) has joined #ceph
[3:56] <hellertime> I seem to have a large number of mds session stuck in the killing state, but when I checkout hosts listed as holding the connections I see nothing in netstat as being connected...
[3:56] <hellertime> could this is a kernel issue on the client end? I
[3:57] * Vacuum__ (~Vacuum@88.130.217.57) Quit (Ping timeout: 480 seconds)
[3:57] <hellertime> I tried to evict the sessions but they won't dissapear...
[3:59] * kefu (~kefu@183.193.187.174) has joined #ceph
[4:02] * MentalRay (~MentalRay@107.171.161.165) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[4:06] * karnan (~karnan@106.51.128.50) has joined #ceph
[4:07] * penguinRaider (~KiKo@67.159.20.163) Quit (Ping timeout: 480 seconds)
[4:07] * scg (~zscg@c-50-169-194-217.hsd1.ma.comcast.net) has joined #ceph
[4:11] * squizzi (~squizzi@107.13.31.195) Quit (Remote host closed the connection)
[4:13] * kefu_ (~kefu@114.92.104.47) has joined #ceph
[4:15] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) Quit (Read error: Connection reset by peer)
[4:18] * zaitcev (~zaitcev@c-50-130-189-82.hsd1.nm.comcast.net) Quit (Quit: Bye)
[4:19] <flaf> hellertime: the only thing I can tell you is in my Jewel cluster, I have no "ghost" session. All sessions I can read are "real".
[4:19] * KeeperOfTheSoul (~drupal@tor2r.ins.tor.net.eu.org) has joined #ceph
[4:20] * kefu (~kefu@183.193.187.174) Quit (Ping timeout: 480 seconds)
[4:21] <flaf> hellertime: maybe a restart of the active mds could remove your "ghost" sessions.
[4:21] * destrudo (~destrudo@tomba.sonic.net) has joined #ceph
[4:22] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) has joined #ceph
[4:22] <hellertime> so I stopped all mds
[4:22] * bene (~bene@2601:193:4003:4c7a:ea2a:eaff:fe08:3c7a) Quit (Quit: Konversation terminated!)
[4:22] <hellertime> I restarted one
[4:22] <hellertime> its now up:active
[4:22] <hellertime> but the sessions remain
[4:22] <hellertime> I'm now running at mds_debug level 20
[4:22] <hellertime> I saw that one client was marked as the laggiest ??? so I tried evicting it, that hung the mds
[4:23] * destrudo_ (~destrudo@tomba.sonic.net) Quit (Ping timeout: 480 seconds)
[4:23] <hellertime> I rebooted the client, but still see ~ same number of sessions
[4:24] <hellertime> watching the debug logs now ??? nothing exciting??? as far as I can tell
[4:24] <flaf> Which command gives you the laggiest client?
[4:24] <hellertime> its reported when running at dedug level 20
[4:24] <hellertime> just in the stdout log
[4:25] <hellertime> thing is mds says its up. cluster is HEALTHY_OK, but all new clients hang
[4:25] <flaf> Sorry, I have no clue.
[4:27] * deepthi (~deepthi@106.197.84.234) has joined #ceph
[4:32] <hellertime> I fear that rebooting the nodes listed in the session ls ??? won't fix this
[4:32] * penguinRaider (~KiKo@67.159.20.163) has joined #ceph
[4:39] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[4:42] * penguinRaider (~KiKo@67.159.20.163) Quit (Ping timeout: 480 seconds)
[4:48] * kefu_ is now known as kefu
[4:49] * KeeperOfTheSoul (~drupal@4MJAAF3GF.tor-irc.dnsbl.oftc.net) Quit ()
[4:49] * demonspork (~Tarazed@x1-6-f4-6d-04-50-45-d9.cpe.webspeed.dk) has joined #ceph
[4:51] <hellertime> I tried resetting the session map for mds0, but still seeing these "killing" sessions
[4:51] * NTTEC (~nttec@203.177.235.23) has joined #ceph
[4:55] * penguinRaider (~KiKo@14.139.82.6) has joined #ceph
[5:00] <hellertime> ruh ro: heartbeat_map is_healthy 'MDS' had timed out after 15
[5:08] * penguinRaider (~KiKo@14.139.82.6) Quit (Ping timeout: 480 seconds)
[5:09] * _28_ria (~kvirc@opfr028.ru) Quit (Read error: Connection reset by peer)
[5:10] * agsha (~agsha@124.40.246.234) has joined #ceph
[5:13] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[5:14] * shyu (~shyu@218.241.172.114) has joined #ceph
[5:18] * penguinRaider (~KiKo@67.159.20.163) has joined #ceph
[5:18] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[5:19] * demonspork (~Tarazed@06SAADL62.tor-irc.dnsbl.oftc.net) Quit ()
[5:19] * Architect (~mLegion@static-ip-85-25-103-119.inaddr.ip-pool.com) has joined #ceph
[5:19] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[5:19] * overclk (~quassel@117.202.96.153) has joined #ceph
[5:21] <hellertime> what types of issues can arise from truncating a mds journal? can I lose an entire filesystem or will it only have the potential to cause issues with changes within some delta?
[5:23] * agsha (~agsha@124.40.246.234) Quit (Remote host closed the connection)
[5:24] * Vacuum__ (~Vacuum@88.130.217.103) has joined #ceph
[5:28] <hellertime> how large should I expect an exported mds journal to be?
[5:28] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[5:28] * _28_ria (~kvirc@opfr028.ru) has joined #ceph
[5:31] * Vacuum_ (~Vacuum@88.130.200.119) Quit (Ping timeout: 480 seconds)
[5:31] * karnan (~karnan@106.51.128.50) Quit (Ping timeout: 480 seconds)
[5:32] * NTTEC (~nttec@203.177.235.23) Quit (Read error: Connection reset by peer)
[5:33] * NTTEC (~nttec@203.177.235.23) has joined #ceph
[5:46] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[5:49] * Architect (~mLegion@7V7AAFTST.tor-irc.dnsbl.oftc.net) Quit ()
[5:53] * kefu (~kefu@114.92.104.47) Quit (Max SendQ exceeded)
[5:54] * kefu (~kefu@ec2-52-192-226-216.ap-northeast-1.compute.amazonaws.com) has joined #ceph
[5:57] * NTTEC (~nttec@203.177.235.23) Quit (Remote host closed the connection)
[5:57] * kefu (~kefu@ec2-52-192-226-216.ap-northeast-1.compute.amazonaws.com) Quit (Max SendQ exceeded)
[5:58] * kefu (~kefu@ec2-52-192-226-216.ap-northeast-1.compute.amazonaws.com) has joined #ceph
[6:01] * NTTEC (~nttec@119.93.91.136) has joined #ceph
[6:04] * scg (~zscg@c-50-169-194-217.hsd1.ma.comcast.net) Quit (Quit: Leaving)
[6:06] * vata (~vata@cable-192.222.249.207.electronicbox.net) Quit (Quit: Leaving.)
[6:06] * _28_ria (~kvirc@opfr028.ru) Quit (Read error: Connection reset by peer)
[6:06] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[6:07] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[6:07] * _28_ria (~kvirc@opfr028.ru) has joined #ceph
[6:08] * agsha (~agsha@124.40.246.234) has joined #ceph
[6:13] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Remote host closed the connection)
[6:19] * ZombieTree (~Esge@93.174.90.30) has joined #ceph
[6:21] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[6:24] * swami1 (~swami@49.44.57.243) has joined #ceph
[6:25] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[6:30] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[6:30] * swami2 (~swami@49.32.0.140) has joined #ceph
[6:32] * karnan (~karnan@121.244.87.117) has joined #ceph
[6:32] * swami1 (~swami@49.44.57.243) Quit (Ping timeout: 480 seconds)
[6:39] * kefu is now known as kefu|afk
[6:42] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[6:42] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) Quit (Quit: WeeChat 1.5)
[6:46] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[6:49] * ZombieTree (~Esge@7V7AAFTVC.tor-irc.dnsbl.oftc.net) Quit ()
[6:49] * kefu|afk (~kefu@ec2-52-192-226-216.ap-northeast-1.compute.amazonaws.com) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[6:49] * mog_ (~sese_@h-130-176.a2.corp.bahnhof.no) has joined #ceph
[6:52] * kefu (~kefu@183.193.187.174) has joined #ceph
[6:53] * hellertime (~Adium@pool-173-48-155-219.bstnma.fios.verizon.net) Quit (Quit: Leaving.)
[6:54] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[6:55] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) has joined #ceph
[7:11] * penguinRaider (~KiKo@67.159.20.163) Quit (Ping timeout: 480 seconds)
[7:13] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Quit: Leaving)
[7:15] * kefu (~kefu@183.193.187.174) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[7:18] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[7:18] * jermudgeon (~jhaustin@tab.biz.whitestone.link) has joined #ceph
[7:19] * mog_ (~sese_@7V7AAFTWH.tor-irc.dnsbl.oftc.net) Quit ()
[7:19] * hifi1 (~Tenk@tor-exit.insane.us.to) has joined #ceph
[7:20] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[7:20] * penguinRaider (~KiKo@14.139.82.6) has joined #ceph
[7:30] * rdas (~rdas@121.244.87.116) has joined #ceph
[7:35] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[7:40] * sankarshan (~sankarsha@121.244.87.117) has joined #ceph
[7:43] * wgao (~wgao@106.120.101.38) has joined #ceph
[7:47] * praveen_ (~praveen@121.244.155.11) has joined #ceph
[7:47] * praveen (~praveen@121.244.155.11) Quit (Read error: Connection reset by peer)
[7:47] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[7:48] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[7:49] * jermudgeon (~jhaustin@tab.biz.whitestone.link) Quit (Quit: jermudgeon)
[7:49] * hifi1 (~Tenk@4MJAAF3N7.tor-irc.dnsbl.oftc.net) Quit ()
[7:51] * matj345314 (~matj34531@141.255.254.208) has joined #ceph
[7:54] * verbalins (~Throlkim@7V7AAFTYR.tor-irc.dnsbl.oftc.net) has joined #ceph
[7:54] * kefu (~kefu@183.193.187.174) has joined #ceph
[7:57] * kefu_ (~kefu@114.92.104.47) has joined #ceph
[7:59] * sankarshan (~sankarsha@121.244.87.117) Quit (Quit: Are you sure you want to quit this channel (Cancel/Ok) ?)
[7:59] * mohmultihouse (~mohmultih@gw01.mhitp.dk) has joined #ceph
[8:00] * NTTEC (~nttec@119.93.91.136) Quit (Read error: Connection reset by peer)
[8:00] * NTTEC (~nttec@119.93.91.136) has joined #ceph
[8:01] * NTTEC_ (~nttec@203.177.235.23) has joined #ceph
[8:04] * kefu (~kefu@183.193.187.174) Quit (Ping timeout: 480 seconds)
[8:05] * lmb (~Lars@ip5b41f0a4.dynamic.kabel-deutschland.de) Quit (Ping timeout: 480 seconds)
[8:07] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Remote host closed the connection)
[8:08] * NTTEC (~nttec@119.93.91.136) Quit (Ping timeout: 480 seconds)
[8:10] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[8:12] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[8:15] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[8:19] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[8:20] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[8:24] * verbalins (~Throlkim@7V7AAFTYR.tor-irc.dnsbl.oftc.net) Quit ()
[8:26] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[8:28] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[8:34] * janos_ (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (Read error: Connection reset by peer)
[8:34] * janos_ (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[8:35] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) has joined #ceph
[8:36] * joshd1 (~jdurgin@71-92-201-212.dhcp.gldl.ca.charter.com) has joined #ceph
[8:42] * linjan (~linjan@86.62.112.22) has joined #ceph
[8:43] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) has joined #ceph
[8:52] * yanzheng1 (~zhyan@125.70.23.87) has joined #ceph
[8:52] * yanzheng (~zhyan@125.70.23.87) Quit (Ping timeout: 480 seconds)
[8:53] * xiucai (~hualingso@219.145.57.146) has joined #ceph
[8:54] * Peaced (~Pommesgab@exit1.torproxy.org) has joined #ceph
[8:54] <xiucai> hi, everyone:) i just use a new ceph rbd pool instead the old ceph rbd pool, it downloads image when i create vms, why?
[8:54] <xiucai> it's already configured rbd_store_pool=ssd in /etc/glance/glance-api.conf, ssd is my new ceph rbd pool name.
[8:56] <vikhyat> xiucai: just check if you have show_image_direct_url=true
[8:56] <vikhyat> xiucai: just check if you have show_image_direct_url=true and enable_v2_api=True
[8:58] * b0e (~aledermue@213.95.25.82) has joined #ceph
[9:04] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[9:11] <chengpeng> anyone configure ceph radosgw with nginx
[9:11] <chengpeng> ?
[9:13] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Remote host closed the connection)
[9:15] * tumeric (~jcastro@89.152.250.115) has joined #ceph
[9:19] * mhuang (~mhuang@42.120.74.88) has joined #ceph
[9:19] * rendar (~I@host69-75-dynamic.0-87-r.retail.telecomitalia.it) has joined #ceph
[9:22] * dgurtner (~dgurtner@178.197.233.244) has joined #ceph
[9:24] * Peaced (~Pommesgab@4MJAAF3SE.tor-irc.dnsbl.oftc.net) Quit ()
[9:24] * Maza (~Shadow386@anonymous.sec.nl) has joined #ceph
[9:24] * analbeard (~shw@support.memset.com) has joined #ceph
[9:26] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[9:28] * thomnico (~thomnico@2a01:e35:8b41:120:9d38:bba2:2fc2:41a) has joined #ceph
[9:29] * winston-d_ (uid98317@id-98317.richmond.irccloud.com) has joined #ceph
[9:31] * penguinRaider (~KiKo@14.139.82.6) Quit (Ping timeout: 480 seconds)
[9:34] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[9:37] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[9:38] * xiucai (~hualingso@219.145.57.146) Quit (Ping timeout: 480 seconds)
[9:39] * joshd1 (~jdurgin@71-92-201-212.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[9:40] * dlan (~dennis@116.228.88.131) Quit (Remote host closed the connection)
[9:41] * penguinRaider (~KiKo@204.152.207.173) has joined #ceph
[9:48] * xiucai (~hualingso@124.114.71.83) has joined #ceph
[9:49] * thomnico (~thomnico@2a01:e35:8b41:120:9d38:bba2:2fc2:41a) Quit (Quit: Ex-Chat)
[9:49] * thomnico (~thomnico@2a01:e35:8b41:120:9d38:bba2:2fc2:41a) has joined #ceph
[9:51] * dlan (~dennis@116.228.88.131) has joined #ceph
[9:52] * ade (~abradshaw@85.158.226.30) has joined #ceph
[9:53] * _28_ria (~kvirc@opfr028.ru) Quit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
[9:53] * dvanders (~dvanders@130.246.253.64) has joined #ceph
[9:54] * Maza (~Shadow386@7V7AAFT2J.tor-irc.dnsbl.oftc.net) Quit ()
[9:54] * tokie (~zapu@7V7AAFT3N.tor-irc.dnsbl.oftc.net) has joined #ceph
[9:54] * NTTEC_ (~nttec@203.177.235.23) Quit (Remote host closed the connection)
[9:56] * Frank_ (~Frank@149.210.210.150) Quit (Quit: Leaving)
[9:56] * thomnico (~thomnico@2a01:e35:8b41:120:9d38:bba2:2fc2:41a) Quit (Quit: Ex-Chat)
[9:56] * thomnico (~thomnico@2a01:e35:8b41:120:9d38:bba2:2fc2:41a) has joined #ceph
[9:57] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[9:59] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[10:01] <IvanJobs> chengpeng: just use civetweb.
[10:01] <IvanJobs> why do u want to use nginx instead?
[10:03] <chengpeng> to unified with other web server in company
[10:05] * Miouge (~Miouge@94.136.92.20) Quit (Quit: Miouge)
[10:07] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[10:10] <mistur> Is anyone here will be at the ceph day at CERN next Tuesday ?
[10:10] * Lokta (~Lokta@carbon.coe.int) has joined #ceph
[10:10] <Lokta> Hi everyone !
[10:11] <Lokta> is anyone experienced with crush ruleset and willing to give a bit of feedback ? thx :)
[10:15] * lmb (~Lars@tmo-114-0.customers.d1-online.com) has joined #ceph
[10:18] * kefu_ is now known as kefu
[10:24] * tokie (~zapu@7V7AAFT3N.tor-irc.dnsbl.oftc.net) Quit ()
[10:27] * TMM (~hp@185.5.121.201) has joined #ceph
[10:28] * aldiyen (~Sirrush@185.100.87.73) has joined #ceph
[10:29] * RhinoBeetle (~bradley@vpn1.updata.net) has joined #ceph
[10:34] * b0e (~aledermue@213.95.25.82) Quit (Ping timeout: 480 seconds)
[10:40] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) has joined #ceph
[10:43] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[10:53] * penguinRaider (~KiKo@204.152.207.173) Quit (Ping timeout: 480 seconds)
[10:56] * secate (~Secate@dsl-197-245-164-90.voxdsl.co.za) has joined #ceph
[10:58] * aldiyen (~Sirrush@4MJAAF3WJ.tor-irc.dnsbl.oftc.net) Quit ()
[10:58] * LorenXo (~JamesHarr@06SAADML0.tor-irc.dnsbl.oftc.net) has joined #ceph
[11:01] <Heebie> Lokta: did you post your crush ruleset somewhere? (pastebin etc..)
[11:01] * lmb (~Lars@tmo-114-0.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[11:02] * mhuang (~mhuang@42.120.74.88) Quit (Quit: This computer has gone to sleep)
[11:02] <tumeric> Hello guys, is it possible to mount cephfs via the internet?
[11:02] <tumeric> I am sure I was able to do this in the past.
[11:02] <tumeric> But apparently not in the newest jewel version, I get connection timed out
[11:03] <tumeric> major showstopper :(
[11:03] <Heebie> tumeric: If you expose your CEPH cluster directly to the Internet on the right ports, but that sounds like an AWFUL idea... performance would likely be abysmal with that amount of latency that would introduce. You're probably better off exposing it via RADOS, or something like iSCSI or NFS over a VPN.
[11:03] * mhuang (~mhuang@42.120.74.88) has joined #ceph
[11:05] <tumeric> Well, for now we need a filesystem.
[11:05] <tumeric> Something POSIX
[11:05] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Remote host closed the connection)
[11:05] <Lokta> @Heebie http://pastebin.com/jbVkkgcz
[11:05] <tumeric> I spent a lot of time learning how ceph works, and right now I feel comfortable with it. Happens that I need to mount it somewhere via internet, meaning they are in the same datacenter just not in the same rack
[11:05] <Lokta> I have two servers with two OSD on each
[11:06] <Lokta> half of them are ISCSI
[11:06] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[11:06] <Lokta> and i want the ISCSI to be primary and the local to be secondary on the other server
[11:06] <Lokta> so i can afford to lose 1/2 server
[11:06] <Lokta> or the bay
[11:07] <tumeric> Heebie, as for testing, how I would be able to mount it in a machine in a different rack? I would need a VPN, right?
[11:07] <tafelpoot> tumeric: might be a silly question, do you have any connection at all between the 2 sites? are you able to ping the cluster?
[11:07] <Lokta> i have, ofc, allowed primary affinity in my conf
[11:08] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:9135:a55b:27ec:cc48) has joined #ceph
[11:09] * penguinRaider (~KiKo@204.152.207.173) has joined #ceph
[11:09] <tumeric> tafelpoot, yes I am able to ping the monitor, nothing else though.
[11:09] * untoreh (~fra@151.50.200.100) Quit (Remote host closed the connection)
[11:10] <tumeric> What someone told me here yesterday was that the client (cephfs) needs to be directly connected to all the OSD's
[11:11] <tumeric> So I think I might have a big showstopper here.
[11:11] <tumeric> Latency wouldn't be an issue as the servers are in the same DC, except in different racks.
[11:11] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[11:12] <tafelpoot> and no option to extend the network in some way?
[11:12] <tumeric> This part I dont knowm I would have to see with the DC guys.
[11:12] <tumeric> But I think it is a long shot.
[11:12] <tafelpoot> vpn? or even physical ... If they are in the same DC, you should be able to pull some wire from one to the other side?
[11:13] * rraja (~rraja@121.244.87.117) has joined #ceph
[11:13] <tumeric> I think VPN would introduce greate latency and slow speeds.
[11:13] <tumeric> The best option is the second one, for sure.
[11:14] <tafelpoot> tumeric: make sure the vpn is over UDP, it helps a lot
[11:14] <tafelpoot> I am not sure it will work.. but it sounds like the best guess
[11:16] <tumeric> What I found strange is that I was able to make it work before.
[11:16] <tumeric> Are you aware of any changes in jewel?
[11:19] <tafelpoot> nope, sorry, I don't use cephfs and am not really a power ceph user
[11:20] <IcePic> seems unlikely for any changes of that magnitude (as in, "didnt need ip access between X and Y before but does now")
[11:20] <T1w> a requirement for cephfs is that all clients and all OSDs can see/reach each other
[11:20] <T1w> otherwise no data can flow
[11:20] * Miouge (~Miouge@94.136.92.20) Quit (Quit: Miouge)
[11:21] <T1w> .. and that has been a requirement a really long long time
[11:21] <T1w> it's not changed in jewel
[11:21] <T1w> it was a requirement in hammer
[11:22] <T1w> and probably ever since cephfs inception
[11:22] <T1w> it's needed due to the "no single point of failure"
[11:22] <T1w> (among other things)
[11:23] <T1w> and since the data between ceph and the clients does not flow through the MDSs (the MDS is only needed when opening (either r/w) a new file) - once the file has been opened the data flows directly between the client and a number of OSDs
[11:24] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) Quit (Quit: Leaving.)
[11:24] <T1w> .. which means that a crashed/missing/unavailable MDS doesn't affect already opened files - data still flows (can be read/written)
[11:26] <tumeric> Ok, thanks
[11:26] * lmb (~Lars@tmo-113-150.customers.d1-online.com) has joined #ceph
[11:26] <tumeric> T1w, so it has be direct connection/VPN right?
[11:27] <T1w> unless you are talking a very limited number of clients and a limited number of OSDs then VPN might be possible to get working, but performance would be aweful
[11:28] <T1w> if you scale up the number of clients and number of OSDs then the VPN endpoint would have to handle an absurd amount of TCP connections on behalf of all clients
[11:28] <T1w> .. which might cause problems
[11:28] * LorenXo (~JamesHarr@06SAADML0.tor-irc.dnsbl.oftc.net) Quit ()
[11:29] <T1w> cephfs is not really meant to be used over anything else than a L2 network
[11:29] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:9135:a55b:27ec:cc48) Quit (Ping timeout: 480 seconds)
[11:29] <T1w> as soon as you involve VPNs or routing (L3 network) you are almost asking for trouble
[11:30] <T1w> if can be done, but there are many pitfalls
[11:30] <T1w> and it's on your own head..
[11:32] * lmb_ (~Lars@tmo-112-57.customers.d1-online.com) has joined #ceph
[11:33] * mhuang (~mhuang@42.120.74.88) Quit (Quit: This computer has gone to sleep)
[11:35] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:9135:a55b:27ec:cc48) has joined #ceph
[11:38] * lmb (~Lars@tmo-113-150.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[11:38] <Be-El> the easiest setup imho is a NFS gateway. it only requires a single port to be opened (NFS v4 TCP) and can be secured using the standard NFS security measurements (client ip address limitation / kerberos setup for encryption + user authentication)
[11:43] <tumeric> Be-El, thanks. Do you have any documentation on that?
[11:50] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[11:51] * krion (~seb@lisa.zetla.fr) has joined #ceph
[11:51] <krion> hi
[11:52] <krion> i used to have a poc of radosgw on ceph 9.x
[11:52] <krion> now configuring my production on 10.x, and having different behaviour
[11:52] <krion> the most annoying one is that radosgw recreate default.* pool, even if .* pool are already there
[11:53] <krion> i assume there is a trouble with the region set but can't find out what's wrong
[11:58] <krion> http://www.spinics.net/lists/ceph-users/msg27291.html
[11:58] <krion> similar
[12:08] * kefu is now known as kefu|afk
[12:10] * mohmultihhouse (~mohmultih@gw01.mhitp.dk) has joined #ceph
[12:15] * mohmultihouse (~mohmultih@gw01.mhitp.dk) Quit (Ping timeout: 480 seconds)
[12:23] <RhinoBeetle> Hi all, I've been testing cache tiering, and have noticed something I think is strange behaviour: Deleting a rados object does not evict the object from the cache pool. Is this expected behaviour?
[12:25] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[12:25] * winston-d_ (uid98317@id-98317.richmond.irccloud.com) Quit (Quit: Connection closed for inactivity)
[12:26] * viisking (~viisking@183.80.255.12) has joined #ceph
[12:27] * xiucai (~hualingso@124.114.71.83) has left #ceph
[12:27] <viisking> hi all
[12:28] <viisking> I got an issue with this command
[12:28] <viisking> radosgw-admin user create --uid="oslo-user" --display-name="Oslo User" --system
[12:28] <viisking> 2016-06-07 12:20:33.463220 7fd42ea74a40 0 Cannot find zone id= (name=), switching to local zonegroup configuration
[12:28] <viisking> is there anyone has experience with this?
[12:29] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[12:33] * mog_1 (~SEBI@185.100.85.101) has joined #ceph
[12:37] * b0e (~aledermue@213.95.25.82) has joined #ceph
[12:39] * bene (~bene@2601:193:4003:4c7a:ea2a:eaff:fe08:3c7a) has joined #ceph
[12:46] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[12:50] * itamarl is now known as Guest3454
[12:50] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[12:51] * NTTEC (~nttec@122.53.162.158) has joined #ceph
[12:52] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[12:54] * Guest3454 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[12:56] * itamarl is now known as Guest3456
[12:56] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[12:58] * Guest3456 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[13:00] * gregmark (~Adium@68.87.42.115) has joined #ceph
[13:01] * itamarl is now known as Guest3458
[13:01] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[13:02] * jlayton (~jlayton@107.13.84.55) Quit (Read error: Connection reset by peer)
[13:03] * mog_1 (~SEBI@7V7AAFUA1.tor-irc.dnsbl.oftc.net) Quit ()
[13:04] * Guest3458 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[13:05] * jlayton (~jlayton@2606:a000:1125:4074:c5:7ff:fe41:3227) has joined #ceph
[13:06] * itamarl is now known as Guest3459
[13:06] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[13:07] <sep> are there any reviews of what performance one can expect from ceph on different hardware and tuning configurations ? some of my coworkers show up with things like ceph vs scaleio and ceph vs storepool. and unsuprisingly all those reports say that their product is awesome and we should run buy it immidiatly. :) .. looking for more fair comparisons where actualy using best practices.
[13:09] * hellertime (~Adium@72.246.3.14) has joined #ceph
[13:09] * Guest3459 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[13:10] * jlayton (~jlayton@2606:a000:1125:4074:c5:7ff:fe41:3227) Quit (Read error: Connection reset by peer)
[13:10] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) has joined #ceph
[13:10] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) has joined #ceph
[13:12] * jlayton (~jlayton@107.13.84.55) has joined #ceph
[13:15] * dgurtner (~dgurtner@178.197.233.244) Quit (Ping timeout: 480 seconds)
[13:16] * itamarl is now known as Guest3461
[13:16] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[13:17] * wjw-freebsd (~wjw@176.74.240.1) has joined #ceph
[13:18] <BranchPredictor> sep: regarding scaleio, last time I checked, all perf mesurements were under emc nda, in other words, you can't publish any scaleio benchmarks without their written consent, so it's fair to assume all scaleio benchmarks are emc's own. or otherwise blessed by emc.
[13:19] * Guest3461 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[13:20] * shylesh (~shylesh@121.244.87.118) has joined #ceph
[13:20] * joelc (~joelc@cpe-24-28-78-20.austin.res.rr.com) has joined #ceph
[13:20] <RhinoBeetle> Hi all, I've been testing cache tiering, and have noticed something I think is strange behaviour: Deleting a rados object does not evict the object from the cache pool. Is this expected behaviour?
[13:21] * itamarl is now known as Guest3463
[13:21] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[13:24] * Guest3463 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[13:25] * itamarl is now known as Guest3465
[13:25] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[13:25] <hellertime> oh boy. something is off with my cluster still. the monitor stores keep growing until they are critical, and I have to compact them manually??? meanwhile I can't get the osd to stabilize (all of a sudden, they were fine for the entire month of may) strange. not even sure where to start on this one.
[13:27] <sep> BranchPredictor, indeed. so i am looking for real unbiased benchmarks for various deployments. more to see what kind of performance one can expect from different hardware price levels.
[13:28] * joelc (~joelc@cpe-24-28-78-20.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[13:29] * Guest3465 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[13:31] * NTTEC (~nttec@122.53.162.158) Quit (Remote host closed the connection)
[13:32] * itamarl is now known as Guest3466
[13:32] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[13:33] * Guest3466 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[13:33] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Quit: Leaving)
[13:34] * lmb_ (~Lars@tmo-112-57.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[13:36] * dvanders (~dvanders@130.246.253.64) Quit (Ping timeout: 480 seconds)
[13:36] * shylesh (~shylesh@121.244.87.118) Quit (Remote host closed the connection)
[13:37] * itamarl is now known as Guest3468
[13:37] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[13:40] * Guest3468 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[13:44] * itamarl is now known as Guest3469
[13:44] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[13:47] * itamarl is now known as Guest3470
[13:47] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[13:47] * Guest3469 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[13:52] * Guest3470 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[13:55] * itamarl is now known as Guest3473
[13:55] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[13:57] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Remote host closed the connection)
[13:58] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[13:58] * kefu|afk (~kefu@114.92.104.47) Quit (Max SendQ exceeded)
[13:58] * b0e (~aledermue@213.95.25.82) Quit (Ping timeout: 480 seconds)
[13:59] * dgurtner (~dgurtner@178.197.233.244) has joined #ceph
[13:59] * kefu (~kefu@114.92.104.47) has joined #ceph
[14:00] * Guest3473 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[14:02] * itamarl is now known as Guest3475
[14:02] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[14:05] * viisking (~viisking@183.80.255.12) Quit (Read error: Connection reset by peer)
[14:05] * viisking (~viisking@183.80.255.12) has joined #ceph
[14:05] * Guest3475 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[14:05] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[14:06] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) Quit (Ping timeout: 480 seconds)
[14:06] * itamarl is now known as Guest3476
[14:06] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[14:10] * gregmark (~Adium@68.87.42.115) has joined #ceph
[14:10] * Guest3476 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[14:11] * b0e (~aledermue@213.95.25.82) has joined #ceph
[14:14] * itamarl is now known as Guest3477
[14:14] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[14:17] * Guest3477 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[14:17] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) has joined #ceph
[14:19] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) has joined #ceph
[14:21] * itamarl is now known as Guest3478
[14:21] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[14:24] * IvanJobs (~ivanjobs@103.50.11.146) Quit ()
[14:24] * Guest3478 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[14:24] * itamarl is now known as Guest3481
[14:24] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[14:28] * itamarl is now known as Guest3482
[14:28] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[14:29] * Guest3481 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[14:30] * dvanders (~dvanders@130.246.253.64) has joined #ceph
[14:30] * derjohn_mob (~aj@46.189.28.49) has joined #ceph
[14:32] * Guest3482 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[14:33] * _s1gma (~Random@hessel2.torservers.net) has joined #ceph
[14:33] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[14:33] * itamarl is now known as Guest3484
[14:33] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[14:36] * Guest3484 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[14:38] * dvanders (~dvanders@130.246.253.64) Quit (Ping timeout: 480 seconds)
[14:41] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[14:41] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[14:44] * itamarl is now known as Guest3489
[14:44] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[14:46] * dvanders (~dvanders@130.246.253.64) has joined #ceph
[14:47] * georgem (~Adium@206.108.127.16) has joined #ceph
[14:47] * Guest3489 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[14:49] <The_Ball> I'm observing an interesting behaviour, I increased PGs in a pool displacing ~50% of the objects, then while watching the rebuild stats and playing with Cephfs removed a large amount of small files from the Cephfs mount and the number of objects misplaced started rising
[14:49] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[14:50] * itamarl is now known as Guest3491
[14:50] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[14:50] <The_Ball> Doh, sorry I'm an idiot, the number of misplaced objects did not increase, but the percentage of misplaced did increase because the total number of objects decreased from the deletion
[14:52] * mohmultihhouse (~mohmultih@gw01.mhitp.dk) Quit (Ping timeout: 480 seconds)
[14:52] * Guest3491 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[14:53] * smerz (~ircircirc@37.74.194.90) Quit (Quit: Leaving)
[14:53] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[14:53] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[14:55] * itamarl is now known as Guest3492
[14:55] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[14:55] * mohmultihouse (~mohmultih@mail.ballisager.com) has joined #ceph
[14:58] * Guest3492 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[14:59] * itamarl is now known as Guest3493
[14:59] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[15:02] * shaunm (~shaunm@74.83.215.100) has joined #ceph
[15:02] * mhuang (~mhuang@58.100.83.154) has joined #ceph
[15:03] * _s1gma (~Random@4MJAAF38F.tor-irc.dnsbl.oftc.net) Quit ()
[15:03] * hgjhgjh (~ZombieL@tor-exit.eecs.umich.edu) has joined #ceph
[15:03] * Guest3493 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[15:03] * jlayton (~jlayton@107.13.84.55) Quit (Read error: Connection reset by peer)
[15:03] * jlayton (~jlayton@2606:a000:1125:602c:c5:7ff:fe41:3227) has joined #ceph
[15:04] * rraja_ (~rraja@121.244.87.117) has joined #ceph
[15:05] * rraja (~rraja@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:05] * chrone (~chrone@103.5.50.202) has joined #ceph
[15:06] <chrone> Morning all..
[15:09] * itamarl is now known as Guest3495
[15:09] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[15:10] * wes_dillingham (~wes_dilli@140.247.242.44) has joined #ceph
[15:13] * rakeshgm (~rakesh@121.244.87.117) Quit (Remote host closed the connection)
[15:13] * Guest3495 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[15:13] * itamarl is now known as Guest3497
[15:13] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[15:13] <sep> The_Ball, :)
[15:14] * musca (musca@tyrael.eu) has joined #ceph
[15:15] * khyron (~khyron@fixed-190-159-187-190-159-75.iusacell.net) Quit (Quit: The computer fell asleep)
[15:15] * NTTEC (~nttec@122.53.162.158) has joined #ceph
[15:15] * khyron (~khyron@fixed-190-159-187-190-159-75.iusacell.net) has joined #ceph
[15:17] <chrone> Guys, need help on fixing the object store rgw in Jewel where it could not pass S3test.py due to this bug: http://tracker.ceph.com/issues/15937
[15:18] * Guest3497 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[15:18] * itamarl is now known as Guest3499
[15:19] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[15:19] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[15:21] * Guest3499 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[15:23] <hellertime> my monitor store is 17G??? does that seem excessively large?
[15:23] * khyron (~khyron@fixed-190-159-187-190-159-75.iusacell.net) Quit (Ping timeout: 480 seconds)
[15:24] <wes_dillingham> whats sort of local disk io requirements to metadata servers have? I see some documentation saying they are CPU intensive but no info on how much disk io they may do??? would money be better spent elsewhere than on ssd for my mds?
[15:26] * itamarl is now known as Guest3501
[15:26] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[15:26] * scg (~zscg@valis.gnu.org) has joined #ceph
[15:29] * bitserker (~toni@88.87.194.130) has joined #ceph
[15:29] * Guest3501 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[15:29] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:30] * valeech_ (~valeech@pool-108-44-162-111.clppva.fios.verizon.net) has joined #ceph
[15:30] * itamarl is now known as Guest3502
[15:30] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[15:31] <irq0> hi, is the ceph days switzerland registration already closed?
[15:31] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[15:32] * valeech (~valeech@pool-108-44-162-111.clppva.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[15:32] * valeech_ is now known as valeech
[15:33] * hgjhgjh (~ZombieL@06SAADMXO.tor-irc.dnsbl.oftc.net) Quit ()
[15:34] * Guest3502 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[15:34] * musca (musca@tyrael.eu) has left #ceph
[15:34] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Quit: Bye guys! (??????????????????? ?????????)
[15:37] * mohmultihouse (~mohmultih@mail.ballisager.com) Quit (Ping timeout: 480 seconds)
[15:38] * itamarl is now known as Guest3503
[15:38] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[15:39] * ade (~abradshaw@85.158.226.30) Quit (Ping timeout: 480 seconds)
[15:40] * shyu (~shyu@218.241.172.114) Quit (Ping timeout: 480 seconds)
[15:40] * mohmultihouse (~mohmultih@gw01.mhitp.dk) has joined #ceph
[15:42] * vata (~vata@207.96.182.162) has joined #ceph
[15:42] * Guest3503 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[15:43] * itamarl is now known as Guest3504
[15:43] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[15:44] * NTTEC (~nttec@122.53.162.158) Quit (Remote host closed the connection)
[15:45] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[15:46] * Guest3504 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[15:47] * bara (~bara@213.175.37.12) has joined #ceph
[15:47] * masterom1 (~ivan@93-142-13-172.adsl.net.t-com.hr) has joined #ceph
[15:53] * masteroman (~ivan@93-142-28-162.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[15:54] * itamarl is now known as Guest3505
[15:54] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[15:57] * NTTEC (~nttec@122.53.162.158) has joined #ceph
[15:57] * NTTEC (~nttec@122.53.162.158) Quit (Remote host closed the connection)
[15:58] * itamarl is now known as Guest3506
[15:58] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[15:59] * Guest3505 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[16:02] * Guest3506 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[16:04] * mohmultihouse (~mohmultih@gw01.mhitp.dk) Quit (Ping timeout: 480 seconds)
[16:04] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[16:04] * chrone (~chrone@00021f98.user.oftc.net) Quit (Quit: brb)
[16:04] * andreww (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:07] * lcurtis_ (~lcurtis@47.19.105.250) has joined #ceph
[16:09] * mattbenjamin (~mbenjamin@12.118.3.106) has joined #ceph
[16:09] * chrone (~chrone@103.5.50.202) has joined #ceph
[16:09] * matj345314 (~matj34531@141.255.254.208) Quit (Quit: matj345314)
[16:09] * chrone is now known as Guest3508
[16:10] * Guest3508 is now known as chrone
[16:10] * agsha_ (~agsha@124.40.246.234) has joined #ceph
[16:11] * kefu (~kefu@114.92.104.47) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[16:12] * rraja_ (~rraja@121.244.87.117) Quit (Ping timeout: 480 seconds)
[16:12] * danieagle (~Daniel@187.74.77.234) has joined #ceph
[16:13] * lmb_ (~Lars@charybdis-ext.suse.de) has joined #ceph
[16:17] * agsha (~agsha@124.40.246.234) Quit (Ping timeout: 480 seconds)
[16:17] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[16:18] * brad_mssw (~brad@66.129.88.50) Quit (Remote host closed the connection)
[16:20] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[16:21] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[16:22] * deepthi (~deepthi@106.197.84.234) Quit (Quit: Leaving)
[16:24] * joelc (~joelc@rrcs-71-41-248-34.sw.biz.rr.com) has joined #ceph
[16:25] * ade (~abradshaw@213.90.43.182) has joined #ceph
[16:25] * brad_mssw (~brad@66.129.88.50) Quit (Remote host closed the connection)
[16:25] * kefu (~kefu@183.193.187.174) has joined #ceph
[16:25] * bene (~bene@2601:193:4003:4c7a:ea2a:eaff:fe08:3c7a) Quit (Remote host closed the connection)
[16:26] * bene (~bene@2601:193:4003:4c7a:ea2a:eaff:fe08:3c7a) has joined #ceph
[16:27] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[16:30] * joshd1 (~jdurgin@71-92-201-212.dhcp.gldl.ca.charter.com) has joined #ceph
[16:33] * Silentspy (~hyst@tor-exit-node-nibbana.dson.org) has joined #ceph
[16:33] * itamarl is now known as Guest3513
[16:33] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[16:34] * yanzheng1 (~zhyan@125.70.23.87) Quit (Quit: This computer has gone to sleep)
[16:35] * wushudoin (~wushudoin@2601:646:9501:d2b2:2ab2:bdff:fe0b:a6ee) has joined #ceph
[16:36] * Guest3513 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[16:36] * swami2 (~swami@49.32.0.140) Quit (Quit: Leaving.)
[16:36] * wushudoin (~wushudoin@2601:646:9501:d2b2:2ab2:bdff:fe0b:a6ee) Quit ()
[16:37] * wushudoin (~wushudoin@2601:646:9501:d2b2:2ab2:bdff:fe0b:a6ee) has joined #ceph
[16:37] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[16:37] * andreww (~xarses@64.124.158.100) has joined #ceph
[16:39] * itamarl is now known as Guest3514
[16:39] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[16:40] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit ()
[16:41] * Guest3514 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[16:43] * whatevsz__ (~quassel@b9168e90.cgn.dg-w.de) Quit (Remote host closed the connection)
[16:44] <The_Ball> hellertime, http://ceph.com/planet/ceph-monitor-store-taking-up-a-lot-of-space/
[16:44] * whatevsz (~quassel@b9168e90.cgn.dg-w.de) has joined #ceph
[16:45] * kefu_ (~kefu@114.92.104.47) has joined #ceph
[16:46] <hellertime> isn't setting that config the same as calling `ceph tell mds.* compact'?
[16:46] * itamarl is now known as Guest3515
[16:46] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[16:47] * Guest3515 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[16:52] * kefu (~kefu@183.193.187.174) Quit (Ping timeout: 480 seconds)
[16:53] * itamarl is now known as Guest3517
[16:53] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[16:55] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[16:57] * gkoof3ovk (david@sugi.qzx.se) has joined #ceph
[16:57] * Guest3517 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[16:58] * itamarl is now known as Guest3518
[16:58] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[16:59] * lmb_ (~Lars@charybdis-ext.suse.de) Quit (Ping timeout: 480 seconds)
[17:01] * Guest3518 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[17:03] * Silentspy (~hyst@06SAADM2S.tor-irc.dnsbl.oftc.net) Quit ()
[17:03] * ircolle (~Adium@2601:285:201:633a:f074:ba:45ed:a9ea) has joined #ceph
[17:03] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) has joined #ceph
[17:03] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) Quit (Read error: Connection reset by peer)
[17:06] * bene (~bene@2601:193:4003:4c7a:ea2a:eaff:fe08:3c7a) Quit (Quit: Konversation terminated!)
[17:07] * itamarl is now known as Guest3519
[17:07] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[17:09] * vivo (~viisking@183.80.255.12) has joined #ceph
[17:10] * Guest3519 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[17:11] <The_Ball> hellertime, probably, but I don't know
[17:12] * linjan (~linjan@86.62.112.22) Quit (Ping timeout: 480 seconds)
[17:12] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:13] <The_Ball> hellertime, perhaps a full compact can only be done before the osd fully starts, I would try it on one osd and see if it makes a difference
[17:13] * vbellur (~vijay@nat-pool-bos-u.redhat.com) has joined #ceph
[17:13] * itamarl is now known as Guest3521
[17:13] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[17:15] * viisking (~viisking@183.80.255.12) Quit (Ping timeout: 480 seconds)
[17:17] * Guest3521 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[17:21] * itamarl is now known as Guest3522
[17:21] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[17:22] <The_Ball> hellertime, s/osd/mon/
[17:24] * mattbenjamin (~mbenjamin@12.118.3.106) Quit (Ping timeout: 480 seconds)
[17:24] * brad_mssw (~brad@66.129.88.50) Quit (Remote host closed the connection)
[17:25] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[17:26] * Guest3522 (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[17:26] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[17:28] * brad_mssw (~brad@66.129.88.50) Quit (Remote host closed the connection)
[17:28] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[17:28] * blizzow (~jburns@50.243.148.102) has joined #ceph
[17:29] * chrone81 (~chronic@202.171.27.122) has joined #ceph
[17:29] * kefu_ is now known as kefu
[17:29] * mhuang (~mhuang@58.100.83.154) Quit (Quit: This computer has gone to sleep)
[17:29] * chrone81 is now known as Guest3523
[17:29] * brad_mssw (~brad@66.129.88.50) Quit (Remote host closed the connection)
[17:30] * itamarl (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[17:30] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[17:31] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Quit: Ex-Chat)
[17:32] * chrone is now known as Guest3524
[17:32] * khyron (~khyron@200.77.224.239) has joined #ceph
[17:32] * Guest3523 is now known as chrone
[17:33] * Gecko1986 (~Crisco@193.90.12.87) has joined #ceph
[17:36] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[17:36] * Guest3524 (~chrone@00021f98.user.oftc.net) Quit (Ping timeout: 480 seconds)
[17:36] * karnan (~karnan@121.244.87.117) Quit (Ping timeout: 480 seconds)
[17:36] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[17:37] * RhinoBeetle (~bradley@vpn1.updata.net) Quit (Ping timeout: 480 seconds)
[17:38] * Skaag (~lunix@65.200.54.234) has joined #ceph
[17:39] * Brochacho (~alberto@2601:243:504:6aa:8c39:923c:2ad3:ecc8) has joined #ceph
[17:41] * bara (~bara@213.175.37.12) Quit (Remote host closed the connection)
[17:47] * karnan (~karnan@121.244.87.117) has joined #ceph
[17:48] * ska (~skatinolo@cpe-173-174-111-177.austin.res.rr.com) has joined #ceph
[17:50] * bara (~bara@213.175.37.12) has joined #ceph
[17:54] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[17:56] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[17:57] * cathode (~cathode@50.232.215.114) has joined #ceph
[17:59] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[18:01] * bitserker1 (~toni@88.87.194.130) has joined #ceph
[18:01] * bitserker (~toni@88.87.194.130) Quit (Quit: Leaving.)
[18:03] * Gecko1986 (~Crisco@06SAADM6B.tor-irc.dnsbl.oftc.net) Quit ()
[18:05] * valeech (~valeech@pool-108-44-162-111.clppva.fios.verizon.net) Quit (Quit: valeech)
[18:07] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) Quit (Quit: Leaving.)
[18:08] * TMM (~hp@185.5.121.201) Quit (Quit: Ex-Chat)
[18:09] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) has joined #ceph
[18:10] * lmb_ (~Lars@nat.nue.novell.com) has joined #ceph
[18:10] * penguinRaider (~KiKo@204.152.207.173) Quit (Ping timeout: 480 seconds)
[18:11] * messi2001 (~viisking@183.80.255.12) has joined #ceph
[18:14] * vbellur (~vijay@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[18:17] * vivo (~viisking@183.80.255.12) Quit (Ping timeout: 480 seconds)
[18:19] * penguinRaider (~KiKo@204.152.207.173) has joined #ceph
[18:21] * lmb_ (~Lars@nat.nue.novell.com) Quit (Ping timeout: 480 seconds)
[18:21] * derjohn_mob (~aj@46.189.28.49) Quit (Ping timeout: 480 seconds)
[18:25] * kefu is now known as kefu|afk
[18:26] * vbellur (~vijay@nat-pool-bos-t.redhat.com) has joined #ceph
[18:26] * jermudgeon_ (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[18:27] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[18:29] * karnan (~karnan@121.244.87.117) Quit (Quit: Leaving)
[18:29] * tumeric (~jcastro@89.152.250.115) Quit (Quit: Leaving)
[18:30] * valeech (~valeech@pool-108-44-162-111.clppva.fios.verizon.net) has joined #ceph
[18:30] * kefu|afk (~kefu@114.92.104.47) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[18:30] <hellertime> The_Ball: ok set it in the conf??? sadly my mon stores just keep growing out of control....
[18:31] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Ping timeout: 480 seconds)
[18:31] * jermudgeon_ is now known as jermudgeon
[18:33] * skrblr (~rushworld@watchme.tor-exit.network) has joined #ceph
[18:33] <hellertime> compacting doesn't even seem to actually reduce the size of the store at all
[18:33] <hellertime> df before and after is roughly the same
[18:37] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Quit: jermudgeon)
[18:40] * ircolle (~Adium@2601:285:201:633a:f074:ba:45ed:a9ea) Quit (Quit: Leaving.)
[18:41] * swami1 (~swami@27.7.170.84) has joined #ceph
[18:43] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[18:44] * ASBishop (~ASBishop@143.166.116.80) has joined #ceph
[18:46] <hellertime> so I seem to be suffering from major osd churn on my cluster, which it causing the mon store to grow unbounded ??? any thoughts on how to stabilize? should I stop all my osd and start them one by one?
[18:49] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:51] * joelc (~joelc@rrcs-71-41-248-34.sw.biz.rr.com) Quit (Remote host closed the connection)
[18:53] * ASBishop (~ASBishop@143.166.116.80) has left #ceph
[18:57] * andrewschoen (~andrewsch@50.56.86.195) Quit (Ping timeout: 480 seconds)
[19:00] * bara (~bara@213.175.37.12) Quit (Quit: Bye guys! (??????????????????? ?????????)
[19:00] * wjw-freebsd (~wjw@176.74.240.1) Quit (Ping timeout: 480 seconds)
[19:02] * khyron (~khyron@200.77.224.239) Quit (Ping timeout: 480 seconds)
[19:03] * skrblr (~rushworld@06SAADNAB.tor-irc.dnsbl.oftc.net) Quit ()
[19:03] * Throlkim (~ChauffeR@tor1.mysec-arch.net) has joined #ceph
[19:04] * penguinRaider (~KiKo@204.152.207.173) Quit (Ping timeout: 480 seconds)
[19:07] * vbellur (~vijay@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[19:11] * lmb_ (~Lars@tmo-102-189.customers.d1-online.com) has joined #ceph
[19:12] * vivo (~viisking@183.80.255.12) has joined #ceph
[19:13] * dvanders (~dvanders@130.246.253.64) Quit (Ping timeout: 480 seconds)
[19:15] * rakeshgm (~rakesh@106.51.27.212) has joined #ceph
[19:16] * rakeshgm (~rakesh@106.51.27.212) Quit ()
[19:17] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) has joined #ceph
[19:17] * rakeshgm (~rakesh@106.51.27.212) has joined #ceph
[19:18] * penguinRaider (~KiKo@204.152.207.173) has joined #ceph
[19:18] * messi2001 (~viisking@183.80.255.12) Quit (Ping timeout: 480 seconds)
[19:19] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[19:22] * vbellur (~vijay@nat-pool-bos-u.redhat.com) has joined #ceph
[19:23] * rakeshgm (~rakesh@106.51.27.212) Quit (Quit: Leaving)
[19:23] * reed (~reed@75-101-54-18.dsl.static.fusionbroadband.com) has joined #ceph
[19:25] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) Quit (Quit: Leaving)
[19:26] * thomnico (~thomnico@2a01:e35:8b41:120:9d38:bba2:2fc2:41a) Quit (Quit: Ex-Chat)
[19:26] * thomnico (~thomnico@2a01:e35:8b41:120:9d38:bba2:2fc2:41a) has joined #ceph
[19:33] * Throlkim (~ChauffeR@06SAADNBR.tor-irc.dnsbl.oftc.net) Quit ()
[19:33] * Epi (~Guest1390@tor-exit.insane.us.to) has joined #ceph
[19:35] * matj345314 (~matj34531@element.planetq.org) has joined #ceph
[19:36] * rakeshgm (~rakesh@106.51.27.212) has joined #ceph
[19:37] * joelc (~joelc@rrcs-71-41-248-34.sw.biz.rr.com) has joined #ceph
[19:37] * thomnico (~thomnico@2a01:e35:8b41:120:9d38:bba2:2fc2:41a) Quit (Quit: Ex-Chat)
[19:37] * thomnico (~thomnico@2a01:e35:8b41:120:9d38:bba2:2fc2:41a) has joined #ceph
[19:39] * agsha (~agsha@124.40.246.234) has joined #ceph
[19:40] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:9135:a55b:27ec:cc48) Quit (Ping timeout: 480 seconds)
[19:42] * cathode (~cathode@50.232.215.114) Quit (Quit: Changing servers)
[19:45] * agsha_ (~agsha@124.40.246.234) Quit (Ping timeout: 480 seconds)
[19:46] * mykola (~Mikolaj@193.93.217.33) has joined #ceph
[19:46] * jclm (~jclm@ip68-96-198-45.lv.lv.cox.net) has joined #ceph
[19:46] * dgurtner (~dgurtner@178.197.233.244) Quit (Ping timeout: 480 seconds)
[19:48] <kjetijor> hellertime: When you say churn.. is it osd(s) "flapping" between either out/in or up/down ? I'd go log-spelunking on osd(s) that's been "flapping" and potentially monitors.
[19:49] * gregmark (~Adium@68.87.42.115) has joined #ceph
[19:54] * thomnico (~thomnico@2a01:e35:8b41:120:9d38:bba2:2fc2:41a) Quit (Quit: Ex-Chat)
[19:55] * thomnico (~thomnico@2a01:e35:8b41:120:9d38:bba2:2fc2:41a) has joined #ceph
[19:58] * rakeshgm (~rakesh@106.51.27.212) Quit (Quit: Leaving)
[20:03] * Epi (~Guest1390@06SAADNDA.tor-irc.dnsbl.oftc.net) Quit ()
[20:03] * mason1 (~Ralth@ns398717.ip-37-59-42.eu) has joined #ceph
[20:06] * matj345314 (~matj34531@element.planetq.org) Quit (Quit: matj345314)
[20:07] * matj345314 (~matj34531@element.planetq.org) has joined #ceph
[20:12] <hellertime> yeah going up/down they all seem to be aborting when applying a transaction...
[20:13] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) Quit (Quit: Leaving.)
[20:14] <hellertime> I've stopped all my mds (I ran an osd with debug 20 and it seems to be lots of mds replay)
[20:14] * messi2001 (~viisking@183.80.255.12) has joined #ceph
[20:14] <hellertime> trying to get the damn osd back online and healthy .. then I'll restart mds
[20:19] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[20:20] * vivo (~viisking@183.80.255.12) Quit (Ping timeout: 480 seconds)
[20:20] * ade (~abradshaw@213.90.43.182) Quit (Quit: Too sexy for his shirt)
[20:20] * jclm (~jclm@ip68-96-198-45.lv.lv.cox.net) Quit (Quit: Leaving.)
[20:25] <hellertime> I can't seem to get the mon store to stop growing!?!?! :(
[20:26] * chrone (~chronic@00021f98.user.oftc.net) Quit (Quit: Leaving)
[20:32] * andrewschoen (~andrewsch@50.56.86.195) has joined #ceph
[20:33] * mason1 (~Ralth@7V7AAFU0P.tor-irc.dnsbl.oftc.net) Quit ()
[20:35] * swami1 (~swami@27.7.170.84) Quit (Quit: Leaving.)
[20:37] * Vidi (~Harryhy@4MJAAF4TU.tor-irc.dnsbl.oftc.net) has joined #ceph
[20:37] * matj345314 (~matj34531@element.planetq.org) Quit (Quit: matj345314)
[20:38] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[20:39] * joshd1 (~jdurgin@71-92-201-212.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[20:40] * bene (~bene@2601:18c:8501:41d0:ea2a:eaff:fe08:3c7a) has joined #ceph
[20:40] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[20:41] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:9135:a55b:27ec:cc48) has joined #ceph
[20:43] * penguinRaider (~KiKo@204.152.207.173) Quit (Ping timeout: 480 seconds)
[20:44] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) Quit (Read error: No route to host)
[20:45] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) has joined #ceph
[20:47] * Skaag (~lunix@65.200.54.234) Quit (Quit: Leaving.)
[20:52] * penguinRaider (~KiKo@204.152.207.173) has joined #ceph
[20:53] * todin (tuxadero@kudu.in-berlin.de) Quit (Read error: No route to host)
[20:55] * Skaag (~lunix@65.200.54.234) has joined #ceph
[20:55] * matj345314 (~matj34531@element.planetq.org) has joined #ceph
[20:57] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:9135:a55b:27ec:cc48) Quit (Ping timeout: 480 seconds)
[20:57] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[20:58] * rraja (~rraja@121.244.87.117) has joined #ceph
[20:58] <kjetijor> yea; anecdotally high churn on the map(s) maintained by the monitors does lead to on-disk-growth. (I've had a similar experience with accidental blacklist churn)
[20:59] * lmb_ (~Lars@tmo-102-189.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[21:01] * blizzow (~jburns@50.243.148.102) Quit (Quit: Leaving.)
[21:01] * praveen_ (~praveen@121.244.155.11) Quit (Read error: Connection reset by peer)
[21:01] * praveen (~praveen@121.244.155.11) has joined #ceph
[21:02] * praveen_ (~praveen@121.244.155.11) has joined #ceph
[21:02] * praveen (~praveen@121.244.155.11) Quit (Read error: Connection reset by peer)
[21:03] * praveen (~praveen@121.244.155.11) has joined #ceph
[21:03] * praveen_ (~praveen@121.244.155.11) Quit (Read error: Connection reset by peer)
[21:04] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[21:05] * Kruge_ (~Anus@198.211.99.93) has joined #ceph
[21:05] * bvi (~Bastiaan@185.56.32.1) has joined #ceph
[21:06] * Kruge (~Anus@198.211.99.93) Quit (Read error: Connection reset by peer)
[21:07] * agsha (~agsha@124.40.246.234) Quit (Remote host closed the connection)
[21:07] * Vidi (~Harryhy@4MJAAF4TU.tor-irc.dnsbl.oftc.net) Quit ()
[21:07] * agsha (~agsha@124.40.246.234) has joined #ceph
[21:11] * post-factum (~post-fact@vulcan.natalenko.name) Quit (Killed (NickServ (Too many failed password attempts.)))
[21:11] * post-factum (~post-fact@vulcan.natalenko.name) has joined #ceph
[21:15] * messi2001 (~viisking@183.80.255.12) Quit (Read error: Connection reset by peer)
[21:15] * messi2001 (~viisking@183.80.255.12) has joined #ceph
[21:15] * praveen (~praveen@121.244.155.11) Quit (Read error: Connection reset by peer)
[21:16] * praveen (~praveen@121.244.155.11) has joined #ceph
[21:19] * praveen_ (~praveen@121.244.155.11) has joined #ceph
[21:19] * praveen (~praveen@121.244.155.11) Quit (Read error: Connection reset by peer)
[21:21] * rraja (~rraja@121.244.87.117) Quit (Quit: Leaving)
[21:24] * bniver is now known as bniver|away
[21:25] * m0zes__ (~mozes@n117m02.cis.ksu.edu) has joined #ceph
[21:25] * rendar (~I@host69-75-dynamic.0-87-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[21:26] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[21:34] * bvi (~Bastiaan@185.56.32.1) Quit (Ping timeout: 480 seconds)
[21:35] * thomnico (~thomnico@2a01:e35:8b41:120:9d38:bba2:2fc2:41a) Quit (Quit: Ex-Chat)
[21:37] <hellertime> interestingly compacting this 20GB monitor store results in ??? ~20GB of monitor store :/
[21:37] * scg (~zscg@valis.gnu.org) Quit (Remote host closed the connection)
[21:37] * brianjjo (~oracular@193.90.12.87) has joined #ceph
[21:37] <hellertime> worst is I have two clusters with roughly same amount of data (actually the other has more data than this one ??? I'm syncing it over at the moment) ??? and that cluster's monitor store is 500MB!
[21:40] * scg (~zscg@valis.gnu.org) has joined #ceph
[21:40] <zdzichu> IIRC monitor holds 500 copies of previous cluster maps?
[21:40] <hellertime> can I purge the monitor store somehow?
[21:40] * praveen_ (~praveen@121.244.155.11) Quit (Remote host closed the connection)
[21:40] * johnavp19891 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[21:40] <- *johnavp19891* To prove that you are human, please enter the result of 8+3
[21:44] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[21:44] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[21:51] * rendar (~I@host69-75-dynamic.0-87-r.retail.telecomitalia.it) has joined #ceph
[21:53] <hellertime> if its not possible to purge. ugh. forward progress seems unlikely :/
[21:55] <georgem> hellertime: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-December/006456.html
[21:55] <hellertime> oh nice. thanks. I'll try this
[21:55] <georgem> hellertime: that is to limit the number of maps sent to the OSDs
[21:56] <hellertime> does that also limit the number of maps retained in the store?
[21:57] <georgem> hellertime: you can check your values with: ceph --admin-daemon /var/run/ceph/ceph-mon.asok config show | grep map
[21:57] <georgem> adapt for you local env
[22:04] * allaok (~allaok@ARennes-658-1-231-16.w2-13.abo.wanadoo.fr) has joined #ceph
[22:04] * allaok (~allaok@ARennes-658-1-231-16.w2-13.abo.wanadoo.fr) has left #ceph
[22:06] * overclk (~quassel@117.202.96.153) Quit (Remote host closed the connection)
[22:07] * brianjjo (~oracular@7V7AAFU5R.tor-irc.dnsbl.oftc.net) Quit ()
[22:07] * Kalado (~JamesHarr@tor02.zencurity.dk) has joined #ceph
[22:10] * matj345314 (~matj34531@element.planetq.org) Quit (Quit: matj345314)
[22:10] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[22:17] * messi2001 (~viisking@183.80.255.12) Quit (Read error: Connection reset by peer)
[22:17] * messi2001 (~viisking@183.80.255.12) has joined #ceph
[22:20] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[22:26] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[22:27] * Nicola-1980 (~Nicola-19@x4db48960.dyn.telefonica.de) has joined #ceph
[22:28] <hellertime> well those inferalis settings appear to have reduced the mon store size to 5G??? thats interesting..
[22:33] * rwheeler (~rwheeler@pool-173-48-195-215.bstnma.fios.verizon.net) has joined #ceph
[22:37] * mykola (~Mikolaj@193.93.217.33) Quit (Quit: away)
[22:37] * Kalado (~JamesHarr@06SAADNLP.tor-irc.dnsbl.oftc.net) Quit ()
[22:37] * galaxyAbstractor (~adept256@185.100.87.73) has joined #ceph
[22:39] * vbellur (~vijay@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[22:45] * khyron (~khyron@200.77.224.239) has joined #ceph
[22:47] * bvi (~Bastiaan@152-64-132-5.ftth.glasoperator.nl) has joined #ceph
[22:50] * bniver|away is now known as bniver
[22:58] * wes_dillingham (~wes_dilli@140.247.242.44) has left #ceph
[23:07] * galaxyAbstractor (~adept256@4MJAAF4ZP.tor-irc.dnsbl.oftc.net) Quit ()
[23:07] * Coestar1 (~delcake@ori.enn.lu) has joined #ceph
[23:14] * scg (~zscg@valis.gnu.org) Quit (Quit: Leaving)
[23:18] * messi2001 (~viisking@183.80.255.12) Quit (Read error: Connection reset by peer)
[23:18] * messi2001 (~viisking@183.80.255.12) has joined #ceph
[23:21] * joelc (~joelc@rrcs-71-41-248-34.sw.biz.rr.com) Quit (Remote host closed the connection)
[23:25] * danieagle (~Daniel@187.74.77.234) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[23:26] * andreww (~xarses@64.124.158.100) Quit (Ping timeout: 480 seconds)
[23:30] * wjw-freebsd (~wjw@176.74.240.9) has joined #ceph
[23:31] <m0zes__> does bluestore/rocksdb get rid of the issues with filestore/leveldb and millions of omap keys?
[23:33] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:9135:a55b:27ec:cc48) has joined #ceph
[23:34] <m0zes__> also, I???ve identified the slowest objects that need to be exported out of my cephfs metadata pool to fix the suicide timeout issues I???ve been hitting. It looks like they are ???stray??? inodes. possibly caused by a user creating and deleting millions of files simultaneously??? objects in the 600.00000000-609.0000000 range.
[23:37] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:37] * Coestar1 (~delcake@4MJAAF407.tor-irc.dnsbl.oftc.net) Quit ()
[23:37] * xENO_ (~Behedwin@edwardsnowden2.torservers.net) has joined #ceph
[23:39] <m0zes__> is there a running count of omap key->values on an object that I can query through rados?
[23:39] <m0zes__> I???d like to know how many omap keys I have on each of the ???stray??? inodes.
[23:40] <flaf> Hi, I have no idea for your question (sorry) but probably the fact to not use a POSIX filesystem will improve perfs concerning omap.
[23:42] <gregsfortytwo> m0zes__: there's ah omap header you can probably read through the ceph_objectstore_tool
[23:43] <m0zes__> is there a map for the header? or should I be source diving?
[23:43] * hellertime (~Adium@72.246.3.14) Quit (Quit: Leaving.)
[23:43] * praveen (~praveen@122.171.101.185) has joined #ceph
[23:43] <gregsfortytwo> mmm, no, it doesn't look like it's trivial :(
[23:43] <gregsfortytwo> ceph/src/mds/CDir.cc::_omap_fetched
[23:44] <gregsfortytwo> you might also be able to just get ceph_objectstore_tool to tell you the number of omaps associated with the object, but I don't know if it keeps that metadata around
[23:45] <gregsfortytwo> we do have a couple different things coming up that should prevent this problem in future; either the dirfrags (need more testing, probably work) or the stray delete queue (doesn't exist yet)...
[23:45] <gregsfortytwo> but for now, if you're *really* ambitious
[23:46] <gregsfortytwo> ...hmm, we actually don't have a way to do a quick reset of the stray directory, never mind that idea
[23:47] * vata (~vata@207.96.182.162) Quit (Quit: Leaving.)
[23:47] * khyron (~khyron@200.77.224.239) Quit (Quit: The computer fell asleep)
[23:47] * khyron (~khyron@200.77.224.239) has joined #ceph
[23:49] * andreww (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[23:49] <m0zes__> dirfrags sound interesting, and I wouldn???t mind slowing down the people who have tons files in a single directory even more (as a cost for keeping this from happening again), but I???m not sure I???m comfortable enabling it yet.
[23:49] <gregsfortytwo> yeah, that's probably wise
[23:49] <gregsfortytwo> just, sorry you ran into this, but we do have plans for it :)
[23:50] <m0zes__> the problem as I see it, is that I cannot keep people from deleting millions of files at a time, and with only 10 ???stray??? inodes, we may hit this issue again.
[23:51] <m0zes__> I can yell and kick and scream and ban people for creating millions of files in a single directory, but I cannot tell when those inodes are ???filling up??? until it is (probably) too late.
[23:52] <gregsfortytwo> yeah
[23:53] <gregsfortytwo> we've got an open bug that won't take too long and I imagine will go into a jewel point release, to just disallow directories getting too large
[23:53] <m0zes__> that would be wonderful
[23:53] <gregsfortytwo> but that still won't prevent mass deletions from blowing up the stray dir, which I hadn't considered
[23:54] <gregsfortytwo> ...I guess we could do the crudest and most horrible semantics imaginable where we actually disallow deletes if the stray dir is too large
[23:54] <gregsfortytwo> but I don't like that idea too much
[23:54] <gregsfortytwo> hrm
[23:54] <gregsfortytwo> adding stray dirs dynamically is probably right out
[23:54] <m0zes__> honestly if it keeps the cluster alive, I???d be okay with that temporarily???
[23:55] <m0zes__> because I???ve been down more than a week because of this.
[23:55] <gregsfortytwo> it might not be too hard to do, and set it up as a default-off option
[23:55] * khyron (~khyron@200.77.224.239) Quit (Ping timeout: 480 seconds)
[23:55] * Lokta (~Lokta@carbon.coe.int) Quit (Ping timeout: 480 seconds)
[23:56] <m0zes__> should I file a bug/feature request?
[23:57] <gregsfortytwo> I'm commenting on http://tracker.ceph.com/issues/16164
[23:57] <gregsfortytwo> that's probably enough for now
[23:57] <m0zes__> that sounds great. thanks.
[23:58] <gregsfortytwo> so why are your OSDs rebalancing to begin with?
[23:58] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) Quit (Remote host closed the connection)
[23:58] <gregsfortytwo> I'm wondering if they died because the MDS restarted/failed over and had to actually request a load of the stray dir
[23:58] <gregsfortytwo> or if it was something else
[23:59] <gregsfortytwo> I have the strong suspicion that once you do the rebalance you'll need to loosen the OSD timeouts a lot in order for the MDS to load up those dirs
[23:59] <gregsfortytwo> and probably increase the MDS trimming limits as well to let it bring them down, before letting users back in

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.