#ceph IRC Log

IRC Log for 2016-04-21

Timestamps are in GMT/BST.

[0:03] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[0:05] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[0:08] * davidzlap1 (~Adium@cpe-172-91-154-245.socal.res.rr.com) has joined #ceph
[0:12] * davidzlap (~Adium@2605:e000:1313:8003:a0eb:23b5:ea94:e066) Quit (Ping timeout: 480 seconds)
[0:13] * vicente (~vicente@1-163-220-225.dynamic.hinet.net) has joined #ceph
[0:14] * ghostnote (~hyst@76GAAEPQ3.tor-irc.dnsbl.oftc.net) Quit ()
[0:15] * JustEra (~JustEra@217.138.37.156) Quit (Ping timeout: 480 seconds)
[0:21] * vicente (~vicente@1-163-220-225.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[0:28] * Bartek (~Bartek@37-128-72-8.adsl.inetia.pl) has joined #ceph
[0:31] * dneary (~dneary@50-206-118-3-static.hfc.comcastbusiness.net) has joined #ceph
[0:32] * JustEra (~JustEra@my83-216-94-242.cust.relish.net) has joined #ceph
[0:38] * haplo37 (~haplo37@199.91.185.156) Quit (Remote host closed the connection)
[0:38] * cathode (~cathode@50.232.215.114) has joined #ceph
[0:39] * gregmark1 (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[0:44] * Rehevkor (~hgjhgjh@4MJAAEC9X.tor-irc.dnsbl.oftc.net) has joined #ceph
[0:44] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) Quit (Quit: WeeChat 1.4)
[0:46] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) has joined #ceph
[0:52] * Bartek (~Bartek@37-128-72-8.adsl.inetia.pl) Quit (Remote host closed the connection)
[0:53] * dneary (~dneary@50-206-118-3-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[0:55] * nils_ (~nils@port-17233.pppoe.wtnet.de) has joined #ceph
[0:55] * vata (~vata@207.96.182.162) Quit (Quit: Leaving.)
[0:58] <aarontc> I have pgs in inconsistent state but no ERRs in any OSD logs (using default logging configuration)... How can I investigate the problem?
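A minimal sketch of how such a pg is usually investigated, assuming a hypothetical pg id 0.6 and default log paths (exact messages vary by release):

    ceph health detail | grep inconsistent    # which pgs are inconsistent, and which OSDs they map to
    ceph pg deep-scrub 0.6                    # re-run the scrub; mismatches are logged on the acting OSDs
    grep ERR /var/log/ceph/ceph-osd.*.log     # scrub errors are logged even at default levels
    ceph pg repair 0.6                        # only once you know which replica is the bad one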
[1:03] * JustEra (~JustEra@my83-216-94-242.cust.relish.net) Quit (Quit: This computer has gone to sleep)
[1:04] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[1:08] * csoukup (~csoukup@2605:a601:9c8:6b00:941c:22f3:48b8:af14) has joined #ceph
[1:09] * davidzlap1 (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit (Quit: Leaving.)
[1:13] * rendar (~I@host126-178-dynamic.36-79-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[1:14] * vicente (~vicente@1-163-220-225.dynamic.hinet.net) has joined #ceph
[1:14] * Rehevkor (~hgjhgjh@4MJAAEC9X.tor-irc.dnsbl.oftc.net) Quit ()
[1:16] * csoukup (~csoukup@2605:a601:9c8:6b00:941c:22f3:48b8:af14) Quit (Ping timeout: 480 seconds)
[1:17] * The_Ball (~pi@20.92-221-43.customer.lyse.net) Quit (Ping timeout: 480 seconds)
[1:22] * vicente (~vicente@1-163-220-225.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[1:26] * The_Ball (~pi@20.92-221-43.customer.lyse.net) has joined #ceph
[1:29] * vicente (~vicente@1-163-220-225.dynamic.hinet.net) has joined #ceph
[1:30] * davidzlap (~Adium@2605:e000:1313:8003:8c2f:fea2:3c75:8114) has joined #ceph
[1:30] * haomaiwang (~haomaiwan@180.sub-70-193-56.myvzw.com) has joined #ceph
[1:35] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[1:37] * oms101 (~oms101@p20030057EA016600C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:39] * haomaiwang (~haomaiwan@180.sub-70-193-56.myvzw.com) Quit (Ping timeout: 480 seconds)
[1:39] * kefu (~kefu@114.92.122.74) has joined #ceph
[1:42] * wushudoin (~wushudoin@2601:646:8201:7769:2ab2:bdff:fe0b:a6ee) Quit (Ping timeout: 480 seconds)
[1:43] * thansen (~thansen@17.253.sfcn.org) Quit (Quit: Ex-Chat)
[1:46] * oms101 (~oms101@p20030057EA013F00C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:46] * daiver (~daiver@2606:a000:111b:c12b:e425:7629:a19:9dbb) has joined #ceph
[1:46] * vicente (~vicente@1-163-220-225.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[1:50] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) Quit (Ping timeout: 480 seconds)
[1:53] * curtis864 (~tritonx@anonymous6.sec.nl) has joined #ceph
[1:54] * daiver (~daiver@2606:a000:111b:c12b:e425:7629:a19:9dbb) Quit (Ping timeout: 480 seconds)
[1:54] * Skaag1 (~lunix@65.200.54.234) Quit (Quit: Leaving.)
[1:56] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) has joined #ceph
[1:57] * xarses (~xarses@64.124.158.100) Quit (Ping timeout: 480 seconds)
[2:03] * daiver (~daiver@2606:a000:111b:c12b:acd1:2bbf:1ebe:2e31) has joined #ceph
[2:05] * daiver (~daiver@2606:a000:111b:c12b:acd1:2bbf:1ebe:2e31) Quit (Remote host closed the connection)
[2:05] <loth> Are there any causes for a pg query hanging? "EINTR: problem getting command descriptions from pg.3.261"
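pg query is served by the pg's primary OSD, so it tends to hang or return EINTR when that OSD is down or wedged. A sketch for locating the daemon to look at, reusing the pg id from the message above (the OSD id is a placeholder):

    ceph pg map 3.261    # prints the up and acting OSD sets for the pg
    ceph osd find 12     # host and address of an OSD; substitute the primary's id from the map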
[2:05] * daiver (~daiver@2606:a000:111b:c12b:acd1:2bbf:1ebe:2e31) has joined #ceph
[2:05] * xarses (~xarses@50.141.33.27) has joined #ceph
[2:08] * xarses_ (~xarses@172.56.30.200) has joined #ceph
[2:13] * xarses (~xarses@50.141.33.27) Quit (Ping timeout: 480 seconds)
[2:15] * daiver (~daiver@2606:a000:111b:c12b:acd1:2bbf:1ebe:2e31) Quit (Remote host closed the connection)
[2:15] * daiver (~daiver@2606:a000:111b:c12b:acd1:2bbf:1ebe:2e31) has joined #ceph
[2:23] * curtis864 (~tritonx@76GAAEPTN.tor-irc.dnsbl.oftc.net) Quit ()
[2:23] * galaxyAbstractor (~dug@chomsky.torservers.net) has joined #ceph
[2:23] * daiver (~daiver@2606:a000:111b:c12b:acd1:2bbf:1ebe:2e31) Quit (Ping timeout: 480 seconds)
[2:24] * kefu (~kefu@114.92.122.74) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[2:26] * xarses_ (~xarses@172.56.30.200) Quit (Ping timeout: 480 seconds)
[2:29] * huangjun (~kvirc@113.57.168.154) has joined #ceph
[2:34] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[2:36] * huangjun|2 (~kvirc@113.57.168.154) has joined #ceph
[2:36] * huangjun (~kvirc@113.57.168.154) Quit (Read error: Connection reset by peer)
[2:39] * vicente (~vicente@1-163-220-225.dynamic.hinet.net) has joined #ceph
[2:44] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[2:45] * vata (~vata@cable-21.246.173-197.electronicbox.net) has joined #ceph
[2:47] * vicente (~vicente@1-163-220-225.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[2:49] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Quit: Leaving.)
[2:49] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[2:53] * galaxyAbstractor (~dug@6AGAAA4XJ.tor-irc.dnsbl.oftc.net) Quit ()
[2:53] * Silentspy (~mLegion@torsrvn.snydernet.net) has joined #ceph
[3:04] * badone (~badone@66.187.239.16) Quit (Remote host closed the connection)
[3:07] * BrianA1 (~BrianA@fw-rw.shutterfly.com) Quit (Read error: Connection reset by peer)
[3:08] * Lea (~LeaChim@host86-176-19-208.range86-176.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[3:15] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) has joined #ceph
[3:18] * Mika_c (~quassel@122.146.93.152) has joined #ceph
[3:23] * Silentspy (~mLegion@4MJAAEDB6.tor-irc.dnsbl.oftc.net) Quit ()
[3:23] * kiasyn (~Hejt@85.159.237.210) has joined #ceph
[3:30] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:31] * davidzlap (~Adium@2605:e000:1313:8003:8c2f:fea2:3c75:8114) Quit (Quit: Leaving.)
[3:32] * daiver (~daiver@cpe-98-26-71-226.nc.res.rr.com) has joined #ceph
[3:32] * daiver_ (~daiver@2606:a000:111b:c12b:4858:e9c5:10cf:d1f2) has joined #ceph
[3:40] * vicente (~vicente@1-163-220-225.dynamic.hinet.net) has joined #ceph
[3:40] * daiver (~daiver@cpe-98-26-71-226.nc.res.rr.com) Quit (Ping timeout: 480 seconds)
[3:47] * flisky (~Thunderbi@36.110.40.24) has joined #ceph
[3:48] * vicente (~vicente@1-163-220-225.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[3:52] * MentalRay (~MentalRay@107.171.161.165) has joined #ceph
[3:53] * kiasyn (~Hejt@6AGAAA4Z5.tor-irc.dnsbl.oftc.net) Quit ()
[3:53] * curtis864 (~zapu@edwardsnowden2.torservers.net) has joined #ceph
[3:53] * yanzheng (~zhyan@125.70.21.113) has joined #ceph
[3:54] * derjohn_mobi (~aj@x590d52c3.dyn.telefonica.de) has joined #ceph
[3:55] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) has joined #ceph
[4:02] * derjohn_mob (~aj@x590c3e22.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[4:02] <loth> Is there any way to force a incomplete pg to delete its data and rejoin the cluster?
[4:03] * Racpatel (~Racpatel@2601:87:3:3601::30ed) Quit (Ping timeout: 480 seconds)
[4:04] <Anticimex> what could be limiting the number of active pgs that are recovering?
[4:05] <Anticimex> have 36 osds in one pool and when they're recovering a 15% pool outage, it's typically at 3-6 pgs recovering at any one point in time. i'd expect that to be at least #osd/3 (replication level)
[4:05] <Anticimex> though the logic around that isn't completely obvious from docs
[4:05] * kefu (~kefu@183.193.162.205) has joined #ceph
[4:06] * IvanJobs (~hardes@103.50.11.146) has joined #ceph
[4:09] <nils_> Anticimex: you can set that in the osd configuration
[4:09] * Racpatel (~Racpatel@2601:87:3:3601::30ed) has joined #ceph
[4:16] * DG1 (~Adium@inet-hqmc04-o.oracle.com) has left #ceph
[4:19] * Racpatel (~Racpatel@2601:87:3:3601::30ed) Quit (Ping timeout: 480 seconds)
[4:21] <Anticimex> i tried "osd recovery max active", but that wasn't it
[4:23] * curtis864 (~zapu@6AGAAA41J.tor-irc.dnsbl.oftc.net) Quit ()
[4:24] <lurbs> There's both osd_recovery_max_active and osd_max_backfills. Slightly different cases.
[4:25] <lurbs> http://docs.ceph.com/docs/master/rados/operations/pg-states/
[4:25] <lurbs> You can change them live with:
[4:25] <lurbs> ceph tell osd.* injectargs '--osd_recovery_max_active 1'
[4:25] <lurbs> ceph tell osd.* injectargs '--osd_max_backfills 1'
[4:26] <lurbs> Or whatever you need the value to be. Not sure if it will immediately stop operations over the limits, or just not start another when one finishes.
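injectargs only changes the running daemons; to make such throttles survive a restart they would also go into ceph.conf, for example (the values here are illustrative, not recommendations):

    [osd]
    osd_recovery_max_active = 1
    osd_max_backfills = 1

The active value can be checked on any OSD host through the admin socket:

    ceph daemon osd.0 config get osd_max_backfills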
[4:28] * Racpatel (~Racpatel@2601:87:3:3601:4e34:88ff:fe87:9abf) has joined #ceph
[4:29] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) has joined #ceph
[4:30] * vbellur (~vijay@rrcs-70-60-101-195.midsouth.biz.rr.com) has joined #ceph
[4:30] * daiver_ (~daiver@2606:a000:111b:c12b:4858:e9c5:10cf:d1f2) Quit (Remote host closed the connection)
[4:30] * daiver (~daiver@cpe-98-26-71-226.nc.res.rr.com) has joined #ceph
[4:32] * murmur1 (~JWilbur@107.181.161.205) has joined #ceph
[4:33] * Mika_c (~quassel@122.146.93.152) Quit (Remote host closed the connection)
[4:34] * Mika_c (~quassel@122.146.93.152) has joined #ceph
[4:38] * daiver (~daiver@cpe-98-26-71-226.nc.res.rr.com) Quit (Ping timeout: 480 seconds)
[4:58] * MentalRay (~MentalRay@107.171.161.165) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[5:02] * murmur1 (~JWilbur@06SAABI7C.tor-irc.dnsbl.oftc.net) Quit ()
[5:02] * superdug (~Diablothe@tor00.telenet.unc.edu) has joined #ceph
[5:09] * kefu_ (~kefu@114.92.122.74) has joined #ceph
[5:11] * mattbenjamin (~mbenjamin@rrcs-70-60-101-195.midsouth.biz.rr.com) has joined #ceph
[5:13] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[5:13] * kefu (~kefu@183.193.162.205) Quit (Ping timeout: 480 seconds)
[5:17] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) has joined #ceph
[5:22] * Vacuum_ (~Vacuum@88.130.221.129) has joined #ceph
[5:23] <huangjun|2> Has anyone changed the mtu (like, set to 9000) on a ceph cluster?
[5:23] <lurbs> We run with an MTU of 9000, yes.
[5:24] <nils_> yeah me too
[5:24] <huangjun|2> did it improve performance a lot?
[5:24] * huangjun|2 is now known as huangjun
[5:24] <nils_> honestly I've never used anything else on 10Gig networks
[5:25] <huangjun> we tested with rados bench, it shows no difference from mtu=1500
[5:25] <nils_> you'll probably see a lot more interrupts though?
[5:26] <huangjun> but iperf shows an improvement from 9.41Gb/s to 9.90Gb/s
[5:27] <nils_> probably the backing store being slower and higher latency would mask many issues with the network
[5:27] <huangjun> we don't monitor the cpu
[5:27] * xarses_ (~xarses@172.56.39.81) has joined #ceph
[5:28] <huangjun> if the backing store is faster than the network, like memstore, mtu 9000 may be better than mtu 1500?
[5:29] * Vacuum__ (~Vacuum@i59F79B6F.versanet.de) Quit (Ping timeout: 480 seconds)
[5:30] * joshd (~jdurgin@rrcs-70-60-101-195.midsouth.biz.rr.com) has joined #ceph
[5:32] * superdug (~Diablothe@06SAABI71.tor-irc.dnsbl.oftc.net) Quit ()
[5:33] <nils_> likely.
[5:33] <nils_> the bigger advantage is that you'll have fewer interrupts.
[5:35] <huangjun> or should we also increase the tcp read and receive buffer sizes?
[5:36] * andreww (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[5:36] * Bj_o_rn (~vend3r@tor.piratenpartei-nrw.de) has joined #ceph
[5:40] <nils_> huangjun: I think if you reach almost line speed in iperf you're probably set with regards to that, however I wonder how that affects latency. Haven't had time to test that more thoroughly
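A quick way to set and sanity-check a 9000-byte MTU end to end; the interface name and peer address are placeholders, and every switch port on the path must also accept jumbo frames:

    ip link set dev eth0 mtu 9000
    ping -M do -s 8972 <peer>    # 8972 = 9000 - 20 (IP) - 8 (ICMP); fails if any hop would fragment
    iperf -c <peer>              # throughput check once the ping goes through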
[5:40] * haomaiwang (~haomaiwan@180.sub-70-193-56.myvzw.com) has joined #ceph
[5:42] * MentalRay (~MentalRay@107.171.161.165) has joined #ceph
[5:42] * MentalRay (~MentalRay@107.171.161.165) Quit ()
[5:42] * xarses_ (~xarses@172.56.39.81) Quit (Ping timeout: 480 seconds)
[5:43] <huangjun> yes, maybe the bottleneck is not network
[5:45] <nils_> what's the performance problem you're trying to fix?
[5:45] * overclk (~quassel@121.244.87.117) has joined #ceph
[5:45] <nils_> if you aren't optimizing for the hell of it?
[5:46] <huangjun> How to improve iscsi + tgt + librbd latency?
[5:49] <nils_> so you're exposing rbd devices via iscsi?
[5:52] * mattbenjamin (~mbenjamin@rrcs-70-60-101-195.midsouth.biz.rr.com) Quit (Ping timeout: 480 seconds)
[5:52] <nils_> it just so happens I have two systems at my disposal, I might just go ahead and set up a small ceph test cluster.
[5:56] <nils_> specifically I wonder what might happen if I pin an OSD process to a specific CPU and then also pin a network queue with its specific interrupts to that CPU
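A rough sketch of that pinning experiment, with placeholder CPU, IRQ, and process choices (on a host with several OSDs you would pick the PID explicitly rather than use pidof; the IRQ numbers for a NIC's queues come from /proc/interrupts):

    taskset -cp 2 $(pidof -s ceph-osd)       # pin one ceph-osd process to CPU 2
    echo 4 > /proc/irq/<irq>/smp_affinity    # hex bitmask 0x4 = CPU 2; route that queue's interrupts there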
[5:58] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) Quit (Remote host closed the connection)
[6:00] * nils__ (~nils@port-19141.pppoe.wtnet.de) has joined #ceph
[6:03] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) has joined #ceph
[6:05] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[6:06] * Bj_o_rn (~vend3r@76GAAEPXN.tor-irc.dnsbl.oftc.net) Quit ()
[6:07] * nils_ (~nils@port-17233.pppoe.wtnet.de) Quit (Ping timeout: 480 seconds)
[6:10] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) Quit (Remote host closed the connection)
[6:20] * valeech (~valeech@wsip-70-166-79-23.ga.at.cox.net) Quit (Quit: valeech)
[6:21] * fdmanana (~fdmanana@74.203.127.5) has joined #ceph
[6:21] * MentalRay (~MentalRay@107.171.161.165) has joined #ceph
[6:27] * joshd (~jdurgin@rrcs-70-60-101-195.midsouth.biz.rr.com) Quit (Ping timeout: 480 seconds)
[6:28] * compass (~compass@10.93.0.5) Quit (Ping timeout: 480 seconds)
[6:36] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[6:39] * swami1 (~swami@49.44.57.237) has joined #ceph
[6:41] <IvanJobs> I ran into a core dump when running "rados bench -p scbench 10 seq"; I think it's a bug in ceph 0.94.6. Anyone have any idea about this? http://paste.ubuntu.com/15959894/
[6:45] * winston-d_ (uid98317@id-98317.richmond.irccloud.com) Quit (Quit: Connection closed for inactivity)
[6:45] * OODavo (~spidu_@109.201.133.100) has joined #ceph
[6:46] * swami2 (~swami@49.32.0.113) has joined #ceph
[6:47] * swami1 (~swami@49.44.57.237) Quit (Ping timeout: 480 seconds)
[6:47] <IvanJobs> FYI, "rados bench scbench 10 write" works fine.
[6:49] * vata (~vata@cable-21.246.173-197.electronicbox.net) Quit (Quit: Leaving.)
[6:51] <IvanJobs> FYI, "rados bench scbench 10 rand" works fine, just "seq" core dumps.
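Worth noting for anyone reproducing this: seq reads back objects left behind by an earlier write run, so the usual sequence is the following (pool name taken from the messages above). A crash rather than an error message still looks like a genuine bug, though:

    rados bench -p scbench 10 write --no-cleanup    # leave the benchmark objects in place
    rados bench -p scbench 10 seq                   # sequential reads of those objects
    rados -p scbench cleanup                        # remove them afterwards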
[6:51] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Quit: Leaving)
[6:51] * gopher_49 (~gopher_49@75.66.43.16) has joined #ceph
[6:51] * gopher_49 (~gopher_49@75.66.43.16) Quit ()
[6:53] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Quit: Leaving)
[6:55] * fdmanana (~fdmanana@74.203.127.5) Quit (Ping timeout: 480 seconds)
[6:56] * yatin (~yatin@161.163.44.8) has joined #ceph
[7:00] * flisky (~Thunderbi@36.110.40.24) Quit (Quit: flisky)
[7:03] * daiver (~daiver@2606:a000:111b:c12b:4858:e9c5:10cf:d1f2) has joined #ceph
[7:11] * daiver (~daiver@2606:a000:111b:c12b:4858:e9c5:10cf:d1f2) Quit (Ping timeout: 480 seconds)
[7:15] * OODavo (~spidu_@06SAABJBA.tor-irc.dnsbl.oftc.net) Quit ()
[7:15] * Arcturus (~sixofour@93.115.95.201) has joined #ceph
[7:16] * yatin (~yatin@161.163.44.8) Quit (Remote host closed the connection)
[7:19] * yatin (~yatin@161.163.44.8) has joined #ceph
[7:36] * Arcturus (~sixofour@06SAABJB8.tor-irc.dnsbl.oftc.net) Quit (Ping timeout: 480 seconds)
[7:36] * aldiyen (~PuyoDead@edwardsnowden2.torservers.net) has joined #ceph
[7:37] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) has joined #ceph
[7:43] <huangjun> IvanJobs: maybe a bug, you can post the log to http://tracker.ceph.com/projects/ceph/issues?set_filter=1
[7:50] * compass (~compass@10.93.0.5) has joined #ceph
[7:57] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[8:04] * daiver (~daiver@cpe-98-26-71-226.nc.res.rr.com) has joined #ceph
[8:06] * aldiyen (~PuyoDead@06SAABJCV.tor-irc.dnsbl.oftc.net) Quit ()
[8:07] * mohmultihhouse (~mohmultih@gw01.mhitp.dk) has joined #ceph
[8:11] * PuyoDead (~mr_flea@Relay-J.tor-exit.network) has joined #ceph
[8:12] * dyasny (~dyasny@46-117-8-108.bb.netvision.net.il) has joined #ceph
[8:12] * daiver (~daiver@cpe-98-26-71-226.nc.res.rr.com) Quit (Ping timeout: 480 seconds)
[8:14] * mohmultihhouse (~mohmultih@gw01.mhitp.dk) Quit (Read error: Connection reset by peer)
[8:16] * deepthi (~deepthi@121.244.87.117) has joined #ceph
[8:29] * MentalRay (~MentalRay@107.171.161.165) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[8:32] * angdraug (~angdraug@2620:10d:c090:180::8400) has joined #ceph
[8:37] * linjan_ (~linjan@176.193.210.34) Quit (Ping timeout: 480 seconds)
[8:39] * angdraug (~angdraug@2620:10d:c090:180::8400) Quit (Quit: Leaving)
[8:39] * itamarl (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[8:39] <IvanJobs> huangjun: I created this issue http://tracker.ceph.com/issues/15556
[8:40] * rendar (~I@host165-182-dynamic.37-79-r.retail.telecomitalia.it) has joined #ceph
[8:41] * PuyoDead (~mr_flea@6AGAAA5BB.tor-irc.dnsbl.oftc.net) Quit ()
[8:41] * tritonx (~Grimhound@65.19.167.130) has joined #ceph
[8:42] * fsimonce (~simon@host201-70-dynamic.26-79-r.retail.telecomitalia.it) has joined #ceph
[8:43] * Nacer (~Nacer@AToulouse-554-1-13-145.w92-149.abo.wanadoo.fr) has joined #ceph
[8:44] * kefu_ is now known as kefu
[8:51] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[8:57] * mohmultihouse (~mohmultih@gw01.mhitp.dk) has joined #ceph
[9:00] * dugravot6 (~dugravot6@nat-persul-montet.wifi.univ-lorraine.fr) has joined #ceph
[9:07] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[9:09] * Nacer (~Nacer@AToulouse-554-1-13-145.w92-149.abo.wanadoo.fr) Quit (Remote host closed the connection)
[9:10] * Nacer (~Nacer@AToulouse-554-1-13-145.w92-149.abo.wanadoo.fr) has joined #ceph
[9:11] * tritonx (~Grimhound@76GAAEP05.tor-irc.dnsbl.oftc.net) Quit ()
[9:14] * dugravot6 (~dugravot6@nat-persul-montet.wifi.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[9:15] * dgurtner (~dgurtner@178.197.231.84) has joined #ceph
[9:15] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[9:16] * allaok (~allaok@machine107.orange-labs.com) Quit ()
[9:16] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[9:17] * adun153 (~ljtirazon@112.198.78.58) has joined #ceph
[9:18] * Nacer (~Nacer@AToulouse-554-1-13-145.w92-149.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[9:27] * b0e (~aledermue@213.95.25.82) has joined #ceph
[9:31] * analbeard (~shw@support.memset.com) has joined #ceph
[9:41] * Phase (~jacoo@37.48.81.27) has joined #ceph
[9:42] * Phase is now known as Guest1291
[9:42] * derjohn_mobi (~aj@x590d52c3.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[9:43] * adun153 (~ljtirazon@112.198.78.58) Quit (Quit: Leaving)
[9:45] * dugravot6 (~dugravot6@nat-persul-montet.wifi.univ-lorraine.fr) has joined #ceph
[9:58] * allaok (~allaok@machine107.orange-labs.com) Quit (Quit: Leaving.)
[9:58] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[9:59] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[10:03] * allaok (~allaok@machine107.orange-labs.com) Quit (Quit: Leaving.)
[10:03] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[10:05] * daiver (~daiver@2606:a000:111b:c12b:4858:e9c5:10cf:d1f2) has joined #ceph
[10:05] * allaok (~allaok@machine107.orange-labs.com) Quit ()
[10:05] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[10:07] * _28_ria (~kvirc@opfr028.ru) Quit (Read error: Connection reset by peer)
[10:07] * kefu (~kefu@114.92.122.74) Quit (Max SendQ exceeded)
[10:08] * kefu (~kefu@114.92.122.74) has joined #ceph
[10:08] * allaok (~allaok@machine107.orange-labs.com) Quit ()
[10:08] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[10:10] * _28_ria (~kvirc@opfr028.ru) has joined #ceph
[10:11] * Guest1291 (~jacoo@06SAABJF6.tor-irc.dnsbl.oftc.net) Quit ()
[10:13] * daiver (~daiver@2606:a000:111b:c12b:4858:e9c5:10cf:d1f2) Quit (Ping timeout: 480 seconds)
[10:13] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[10:14] * rdas (~rdas@121.244.87.116) has joined #ceph
[10:16] * i_m (~ivan.miro@88.206.104.168) has joined #ceph
[10:16] <huangjun> kefu: can we run performance test on teuthology ?
[10:17] <kefu> huangjun, AFAIK, ceph-qa-suite focuses on functional tests.
[10:18] <huangjun> okay, thanks
[10:18] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[10:18] <kefu> yw
[10:20] * derjohn_mobi (~aj@2001:6f8:1337:0:7009:e24a:1a7:a7ff) has joined #ceph
[10:25] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[10:28] * linjan_ (~linjan@86.62.112.22) has joined #ceph
[10:30] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) has joined #ceph
[10:31] <BranchPredictor> huangjun: you may be looking for CBT (https://github.com/ceph/cbt)
[10:32] * Lea (~LeaChim@host86-176-19-208.range86-176.btcentralplus.com) has joined #ceph
[10:33] <huangjun> BranchPredictor: CBT is a general performance test tool, we want an automated framework to do performance tests
[10:39] <kefu> huangjun, maybe https://github.com/01org/CeTune ?
[10:40] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[10:41] * Dinnerbone (~ulterior@192.87.28.82) has joined #ceph
[10:46] <huangjun> kefu: thanks, i will have a look. does the ceph lab use this tool?
[10:48] * rraja (~rraja@121.244.87.117) has joined #ceph
[10:50] <kefu> i am not sure, mark might know more about it in this area.
[10:50] <BranchPredictor> Mark is using cbt, for sure.
[10:50] <kefu> huangjun, but not sure about CeTune.
[10:51] * hroussea (~hroussea@000200d7.user.oftc.net) Quit (Ping timeout: 480 seconds)
[10:51] <huangjun> it's maintained by intel
[10:57] * dugravot6 (~dugravot6@nat-persul-montet.wifi.univ-lorraine.fr) Quit (Quit: Leaving.)
[10:59] * dugravot6 (~dugravot6@nat-persul-montet.wifi.univ-lorraine.fr) has joined #ceph
[11:04] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Quit: Leaving...)
[11:04] * deepthi (~deepthi@121.244.87.117) Quit (Ping timeout: 480 seconds)
[11:05] * dugravot6 (~dugravot6@nat-persul-montet.wifi.univ-lorraine.fr) Quit (Quit: Leaving.)
[11:05] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[11:11] * rraja (~rraja@121.244.87.117) Quit (Ping timeout: 480 seconds)
[11:11] * Dinnerbone (~ulterior@06SAABJHY.tor-irc.dnsbl.oftc.net) Quit ()
[11:14] * deepthi (~deepthi@121.244.87.118) has joined #ceph
[11:18] <yatin> jobs are failing https://jenkins.ceph.com/job/ceph-pull-requests/
[11:18] <yatin> just curious, is anyone looking into this?
[11:27] * kawa2014 (~kawa@83.111.58.108) has joined #ceph
[11:30] * wgao (~wgao@106.120.101.38) Quit (Ping timeout: 480 seconds)
[11:32] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[11:36] * Nacer (~Nacer@37.160.152.88) has joined #ceph
[11:38] * deepthi (~deepthi@121.244.87.118) Quit (Ping timeout: 480 seconds)
[11:38] * wgao (~wgao@106.120.101.38) has joined #ceph
[11:38] * andreww (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[11:39] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[11:41] * Sirrush (~ggg@37.48.81.27) has joined #ceph
[11:49] * JustEra (~JustEra@217.138.37.156) has joined #ceph
[11:50] * IvanJobs (~hardes@103.50.11.146) Quit (Read error: Connection reset by peer)
[11:52] * deepthi (~deepthi@121.244.87.117) has joined #ceph
[11:57] * Mika_c (~quassel@122.146.93.152) Quit (Remote host closed the connection)
[11:59] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[12:00] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[12:03] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) has joined #ceph
[12:04] * derjohn_mobi (~aj@2001:6f8:1337:0:7009:e24a:1a7:a7ff) Quit (Remote host closed the connection)
[12:07] * daiver (~daiver@2606:a000:111b:c12b:4858:e9c5:10cf:d1f2) has joined #ceph
[12:10] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[12:10] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[12:10] * huangjun (~kvirc@113.57.168.154) Quit (Read error: Connection reset by peer)
[12:10] * huangjun|2 (~kvirc@113.57.168.154) has joined #ceph
[12:11] * Sirrush (~ggg@76GAAEP4L.tor-irc.dnsbl.oftc.net) Quit ()
[12:11] * spidu_ (~Tenk@06SAABJKE.tor-irc.dnsbl.oftc.net) has joined #ceph
[12:11] * dugravot61 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) has joined #ceph
[12:11] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) Quit (Write error: connection closed)
[12:12] * dugravot61 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) Quit (Remote host closed the connection)
[12:13] * derjohn_mob (~aj@2001:6f8:1337:0:3107:c753:b6fb:9acd) has joined #ceph
[12:13] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) has joined #ceph
[12:15] * daiver (~daiver@2606:a000:111b:c12b:4858:e9c5:10cf:d1f2) Quit (Ping timeout: 480 seconds)
[12:15] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[12:18] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[12:26] <jnq> is there any documentation of what permissions particular functions need with respect to the different caps? for example, what permissions are needed to mount and write to a cephfs vs rados? it seems to have changed in a recent upgrade.
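No direct answer came up in channel; for reference, a commonly cited cap set for a hammer-era cephfs client looks like the sketch below (the client name and pool name are placeholders; newer releases tightened and extended the mds cap syntax, which may be the change jnq is seeing):

    ceph auth get-or-create client.fsuser \
        mon 'allow r' \
        mds 'allow' \
        osd 'allow rw pool=cephfs_data'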
[12:28] * kefu_ (~kefu@114.92.122.74) has joined #ceph
[12:31] * JustEra (~JustEra@217.138.37.156) Quit (Ping timeout: 480 seconds)
[12:32] * JustEra (~JustEra@217.138.37.156) has joined #ceph
[12:32] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[12:32] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[12:33] * TMM (~hp@185.5.122.2) has joined #ceph
[12:34] * kefu (~kefu@114.92.122.74) Quit (Ping timeout: 480 seconds)
[12:34] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) has joined #ceph
[12:36] * ksingh (~Adium@86.114.156.226) has joined #ceph
[12:39] * kefu_ is now known as kefu|afk
[12:41] * kefu|afk is now known as kefu_
[12:41] * spidu_ (~Tenk@06SAABJKE.tor-irc.dnsbl.oftc.net) Quit ()
[12:41] * wjw-freebsd (~wjw@176.74.240.1) has joined #ceph
[12:41] * Kottizen (~Moriarty@Relay-J.tor-exit.network) has joined #ceph
[12:42] * kefu_ is now known as kefu|afk
[12:44] * bara (~bara@213.175.37.12) has joined #ceph
[12:46] * derjohn_mob (~aj@2001:6f8:1337:0:3107:c753:b6fb:9acd) Quit (Remote host closed the connection)
[12:52] * JustEra (~JustEra@217.138.37.156) Quit (Ping timeout: 480 seconds)
[12:53] * blo (~blo@2a01cb0400809e0091555c39d8fc4555.ipv6.abo.wanadoo.fr) has joined #ceph
[12:56] * derjohn_mob (~aj@2001:6f8:1337:0:85c1:5adf:ddd3:2e4f) has joined #ceph
[13:00] * JustEra (~JustEra@217.138.37.156) has joined #ceph
[13:08] * kingcu (~kingcu@kona.ridewithgps.com) Quit (Ping timeout: 480 seconds)
[13:09] * zhaochao (~zhaochao@124.202.191.137) Quit (Ping timeout: 480 seconds)
[13:11] * Kottizen (~Moriarty@06SAABJK0.tor-irc.dnsbl.oftc.net) Quit ()
[13:11] * aldiyen1 (~Jones@tor.metaether.net) has joined #ceph
[13:11] * kingcu (~kingcu@kona.ridewithgps.com) has joined #ceph
[13:11] * deepthi (~deepthi@121.244.87.117) Quit (Ping timeout: 480 seconds)
[13:12] * mohmultihouse (~mohmultih@gw01.mhitp.dk) Quit (Quit: Leaving)
[13:12] * mohmultihouse (~mohmultih@gw01.mhitp.dk) has joined #ceph
[13:16] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[13:16] <blo> Hey all, I'm trying to disable a readproxy cache tier, but before doing so I tried to flush an object to disk with rados cache-try-flush; it does not seem to appear on disk afterward
[13:19] * huangjun|2 (~kvirc@113.57.168.154) Quit (Ping timeout: 480 seconds)
[13:20] <blo> Do you know if there are other parameters to configure to allow the flush to proceed?
[13:22] * ffilz (~ffilz@c-76-115-190-27.hsd1.or.comcast.net) Quit (Ping timeout: 480 seconds)
[13:22] * kefu|afk (~kefu@114.92.122.74) Quit (Max SendQ exceeded)
[13:22] * ffilz (~ffilz@c-76-115-190-27.hsd1.or.comcast.net) has joined #ceph
[13:23] * kefu (~kefu@114.92.122.74) has joined #ceph
[13:23] <blo> I would like to confirm that flushed objects are actually written on disk before evicting them
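For reference, a sketch of the usual sequence for draining and detaching a cache tier, with placeholder pool names. Note that while the overlay is still set, client reads of the base pool are proxied through the cache tier, which can make a successfully flushed object look as if it never reached disk:

    ceph osd tier cache-mode cachepool forward    # stop new writes landing in the tier
    rados -p cachepool cache-flush-evict-all      # flush dirty objects to the base pool, then evict
    ceph osd tier remove-overlay basepool
    ceph osd tier remove basepool cachepool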
[13:23] * JustEra (~JustEra@217.138.37.156) Quit (Ping timeout: 480 seconds)
[13:31] * bara (~bara@213.175.37.12) Quit (Ping timeout: 480 seconds)
[13:33] * mahesh (~mahesh@122.171.157.65) has joined #ceph
[13:33] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[13:35] <mahesh> has anyone used ceph with mainframes?
[13:41] * aldiyen1 (~Jones@4MJAAEDLH.tor-irc.dnsbl.oftc.net) Quit ()
[13:41] * colde1 (~lmg@76GAAEP64.tor-irc.dnsbl.oftc.net) has joined #ceph
[13:44] * daiver (~daiver@cpe-98-26-71-226.nc.res.rr.com) has joined #ceph
[13:50] * yatin (~yatin@161.163.44.8) Quit (Remote host closed the connection)
[13:51] * mattbenjamin (~mbenjamin@rrcs-70-60-101-195.midsouth.biz.rr.com) has joined #ceph
[13:52] * daiver (~daiver@cpe-98-26-71-226.nc.res.rr.com) Quit (Ping timeout: 480 seconds)
[13:54] * valeech (~valeech@wsip-70-166-79-23.ga.at.cox.net) has joined #ceph
[13:55] * haomaiwang (~haomaiwan@180.sub-70-193-56.myvzw.com) Quit (Remote host closed the connection)
[13:56] * yatin (~yatin@161.163.44.8) has joined #ceph
[13:57] * yatin (~yatin@161.163.44.8) Quit (Remote host closed the connection)
[13:57] * Hemanth (~hkumar_@121.244.87.117) has joined #ceph
[14:00] * JustEra (~JustEra@217.138.37.156) has joined #ceph
[14:03] * Nacer (~Nacer@37.160.152.88) Quit (Remote host closed the connection)
[14:04] * deepthi (~deepthi@115.117.165.212) has joined #ceph
[14:05] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) Quit (Remote host closed the connection)
[14:09] * csoukup (~csoukup@136.63.84.142) has joined #ceph
[14:11] * colde1 (~lmg@76GAAEP64.tor-irc.dnsbl.oftc.net) Quit ()
[14:12] * blo (~blo@2a01cb0400809e0091555c39d8fc4555.ipv6.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[14:12] * daiver (~daiver@216.85.162.38) has joined #ceph
[14:12] * daiver (~daiver@216.85.162.38) Quit (Remote host closed the connection)
[14:13] * daiver (~daiver@95.85.8.93) has joined #ceph
[14:20] * blo (~blo@2a01cb0400809e0020100494682ce72e.ipv6.abo.wanadoo.fr) has joined #ceph
[14:28] * kawa2014 (~kawa@83.111.58.108) Quit (Ping timeout: 480 seconds)
[14:30] * pabluk__ is now known as pabluk_
[14:32] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) Quit (Quit: Leaving)
[14:34] * huangjun (~kvirc@117.152.64.187) has joined #ceph
[14:36] * kawa2014 (~kawa@178.162.199.143) has joined #ceph
[14:37] * wyang (~wyang@116.216.0.53) has joined #ceph
[14:38] * Racpatel (~Racpatel@2601:87:3:3601:4e34:88ff:fe87:9abf) Quit (Quit: Leaving)
[14:38] * Racpatel (~Racpatel@2601:87:3:3601:4e34:88ff:fe87:9abf) has joined #ceph
[14:39] * daiver_ (~daiver@216.85.162.38) has joined #ceph
[14:41] * mattbenjamin (~mbenjamin@rrcs-70-60-101-195.midsouth.biz.rr.com) Quit (Ping timeout: 480 seconds)
[14:44] * deepthi (~deepthi@115.117.165.212) Quit (Ping timeout: 480 seconds)
[14:47] * daiver (~daiver@95.85.8.93) Quit (Ping timeout: 480 seconds)
[14:50] * wyang (~wyang@116.216.0.53) Quit (Read error: Connection reset by peer)
[14:52] * allaok (~allaok@machine107.orange-labs.com) Quit (Remote host closed the connection)
[14:55] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[14:55] * allaok (~allaok@machine107.orange-labs.com) Quit ()
[14:57] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[15:01] * wyang (~wyang@116.216.0.53) has joined #ceph
[15:01] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[15:03] * EinstCrazy (~EinstCraz@101.85.207.66) has joined #ceph
[15:04] * EinstCrazy (~EinstCraz@101.85.207.66) Quit (Read error: No route to host)
[15:05] * haomaiwang (~haomaiwan@2600:1004:b069:6936:6920:cc36:fab0:4cf6) has joined #ceph
[15:11] * Pommesgabel (~matx@static-ip-85-25-103-119.inaddr.ip-pool.com) has joined #ceph
[15:13] * lmb (~Lars@74.203.127.200) has joined #ceph
[15:19] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:23] * JustEra (~JustEra@217.138.37.156) Quit (Ping timeout: 480 seconds)
[15:24] * JustEra (~JustEra@217.138.37.156) has joined #ceph
[15:24] * EinstCrazy (~EinstCraz@101.85.207.66) has joined #ceph
[15:24] * thomnico (~thomnico@12.237.105.2) has joined #ceph
[15:29] * csoukup (~csoukup@136.63.84.142) Quit (Ping timeout: 480 seconds)
[15:30] * wyang (~wyang@116.216.0.53) Quit (Read error: Connection reset by peer)
[15:31] * wyang (~wyang@116.216.0.53) has joined #ceph
[15:31] * tumeric (~jcastro@bl10-198-227.dsl.telepac.pt) has joined #ceph
[15:31] * lmb (~Lars@74.203.127.200) Quit (Ping timeout: 480 seconds)
[15:34] * thomnico (~thomnico@12.237.105.2) Quit (Ping timeout: 480 seconds)
[15:41] * Pommesgabel (~matx@7V7AAD6DM.tor-irc.dnsbl.oftc.net) Quit ()
[15:41] * Sirrush (~ricin@212.83.40.239) has joined #ceph
[15:43] * wyang (~wyang@116.216.0.53) Quit (Read error: Connection reset by peer)
[15:54] * tumeric (~jcastro@bl10-198-227.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[15:54] * JustEra (~JustEra@217.138.37.156) Quit (Quit: This computer has gone to sleep)
[15:56] * JustEra (~JustEra@217.138.37.156) has joined #ceph
[15:56] * overclk (~quassel@121.244.87.117) Quit (Remote host closed the connection)
[15:57] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[15:57] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[16:00] * dsl (~dsl@204.155.27.221) has joined #ceph
[16:01] * dvanders (~dvanders@dvanders-pro.cern.ch) Quit (Ping timeout: 480 seconds)
[16:07] * wyang (~wyang@116.216.0.53) has joined #ceph
[16:08] * deepthi (~deepthi@115.117.165.212) has joined #ceph
[16:09] * allaok (~allaok@machine107.orange-labs.com) Quit (Remote host closed the connection)
[16:11] * Sirrush (~ricin@4MJAAEDN5.tor-irc.dnsbl.oftc.net) Quit ()
[16:11] * mollstam (~slowriot@exit1.ipredator.se) has joined #ceph
[16:12] * SDub (~SDub@stpaul-nat.cray.com) Quit (Quit: Leaving)
[16:17] * blo (~blo@2a01cb0400809e0020100494682ce72e.ipv6.abo.wanadoo.fr) Quit (Read error: Connection reset by peer)
[16:17] * yanzheng (~zhyan@125.70.21.113) Quit (Quit: This computer has gone to sleep)
[16:17] * blo (~blo@2a01cb0400809e0020100494682ce72e.ipv6.abo.wanadoo.fr) has joined #ceph
[16:18] * thomnico (~thomnico@12.237.105.2) has joined #ceph
[16:19] * haomaiwang (~haomaiwan@2600:1004:b069:6936:6920:cc36:fab0:4cf6) Quit (Remote host closed the connection)
[16:20] * yanzheng (~zhyan@125.70.21.113) has joined #ceph
[16:20] * kawa2014 (~kawa@178.162.199.143) Quit (Ping timeout: 480 seconds)
[16:22] * yanzheng (~zhyan@125.70.21.113) Quit ()
[16:24] * wyang (~wyang@116.216.0.53) Quit (Read error: Connection reset by peer)
[16:24] * haomaiwang (~haomaiwan@2600:1004:b069:6936:6920:cc36:fab0:4cf6) has joined #ceph
[16:26] * ksingh (~Adium@86.114.156.226) Quit (Quit: Leaving.)
[16:26] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) Quit (Quit: Leaving.)
[16:27] * linjan_ (~linjan@86.62.112.22) Quit (Remote host closed the connection)
[16:27] * wyang (~wyang@116.216.0.53) has joined #ceph
[16:27] * linjan (~linjan@86.62.112.22) has joined #ceph
[16:29] * dvanders (~dvanders@2001:1458:202:16b::101:124a) has joined #ceph
[16:30] * ksingh (~Adium@86.114.156.226) has joined #ceph
[16:30] <ska> Is there any way to show Pool utilization?
[16:32] * kawa2014 (~kawa@83.111.58.108) has joined #ceph
[16:33] <ska> Is there any way to infer the max size of a pool based on pg values???
[16:33] * haomaiwang (~haomaiwan@2600:1004:b069:6936:6920:cc36:fab0:4cf6) Quit (Remote host closed the connection)
[16:34] * daiver (~daiver@95.85.8.93) has joined #ceph
[16:34] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[16:34] * deepthi (~deepthi@115.117.165.212) Quit (Ping timeout: 480 seconds)
[16:34] * ksingh (~Adium@86.114.156.226) Quit ()
[16:35] * mohmultihouse (~mohmultih@gw01.mhitp.dk) Quit (Ping timeout: 480 seconds)
[16:36] * vata (~vata@207.96.182.162) has joined #ceph
[16:37] * allaok (~allaok@161.105.181.113) has joined #ceph
[16:41] * mollstam (~slowriot@76GAAEQA6.tor-irc.dnsbl.oftc.net) Quit ()
[16:41] * wyang (~wyang@116.216.0.53) Quit (Read error: Connection reset by peer)
[16:41] * daiver_ (~daiver@216.85.162.38) Quit (Ping timeout: 480 seconds)
[16:44] <m0zes> ska: ceph df
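Related commands that break usage down per pool; ceph df's per-pool MAX AVAIL column is also the closest thing to the "max size of a pool" asked about above:

    ceph df           # cluster totals plus per-pool USED / %USED / MAX AVAIL
    ceph df detail    # adds per-pool object counts and read/write totals
    rados df          # per-pool stats as reported by the rados tool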
[16:46] * wyang (~wyang@116.216.0.53) has joined #ceph
[16:47] * Kurt (~Adium@2001:628:1:5:805c:8ce7:1a25:9e13) Quit (Quit: Leaving.)
[16:52] * swami2 (~swami@49.32.0.113) Quit (Quit: Leaving.)
[16:53] * haomaiwang (~haomaiwan@2600:1004:b069:6936:8443:ebb6:c761:eaa0) has joined #ceph
[16:55] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[16:59] * wyang (~wyang@116.216.0.53) Quit (Read error: Connection reset by peer)
[16:59] * dvanders_ (~dvanders@dvanders-pro.cern.ch) has joined #ceph
[17:02] * haomaiwang (~haomaiwan@2600:1004:b069:6936:8443:ebb6:c761:eaa0) Quit (Ping timeout: 480 seconds)
[17:03] * dvanders (~dvanders@2001:1458:202:16b::101:124a) Quit (Ping timeout: 480 seconds)
[17:03] * JustEra (~JustEra@217.138.37.156) Quit (Ping timeout: 480 seconds)
[17:04] * JustEra (~JustEra@217.138.37.156) has joined #ceph
[17:06] * sudocat (~dibarra@2602:306:8bc7:4c50:8821:9325:ff01:1a6b) Quit (Ping timeout: 480 seconds)
[17:06] * wyang (~wyang@116.216.0.53) has joined #ceph
[17:07] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:09] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[17:15] * kawa2014 (~kawa@83.111.58.108) Quit (Ping timeout: 480 seconds)
[17:19] * allaok (~allaok@161.105.181.113) Quit (Ping timeout: 480 seconds)
[17:21] * herrsergio (~herrsergi@00021432.user.oftc.net) Quit (Remote host closed the connection)
[17:22] * herrsergio (~herrsergi@ec2-107-21-210-136.compute-1.amazonaws.com) has joined #ceph
[17:23] * wyang (~wyang@116.216.0.53) Quit (Read error: Connection reset by peer)
[17:24] * kawa2014 (~kawa@tsn109-201-152-238.dyn.nltelcom.net) has joined #ceph
[17:25] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) Quit (Quit: Leaving.)
[17:25] * wjw-freebsd (~wjw@176.74.240.1) Quit (Ping timeout: 480 seconds)
[17:26] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[17:26] * wyang (~wyang@116.216.0.53) has joined #ceph
[17:27] * kefu (~kefu@114.92.122.74) Quit (Max SendQ exceeded)
[17:27] * kefu (~kefu@107.191.53.152) has joined #ceph
[17:30] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:32] * wushudoin (~wushudoin@38.140.108.2) has joined #ceph
[17:32] * itamarl (~itamarl@194.90.7.244) Quit (Quit: itamarl)
[17:33] * kawa2014 (~kawa@76GAAEQDO.tor-irc.dnsbl.oftc.net) Quit (Ping timeout: 480 seconds)
[17:35] * csoukup (~csoukup@159.140.254.106) has joined #ceph
[17:37] * haomaiwang (~haomaiwan@2600:1004:b069:6936:4427:5a3c:365d:819d) has joined #ceph
[17:37] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[17:37] * Hemanth (~hkumar_@121.244.87.117) Quit (Ping timeout: 480 seconds)
[17:37] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) has joined #ceph
[17:39] * danieagle (~Daniel@179.110.217.188) has joined #ceph
[17:39] * wyang (~wyang@116.216.0.53) Quit (Read error: Connection reset by peer)
[17:41] * n0x1d (~Frymaster@06SAABJT4.tor-irc.dnsbl.oftc.net) has joined #ceph
[17:45] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[17:45] * haomaiwang (~haomaiwan@2600:1004:b069:6936:4427:5a3c:365d:819d) Quit (Ping timeout: 480 seconds)
[17:45] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:46] * kawa2014 (~kawa@83.111.58.108) has joined #ceph
[17:46] * wyang (~wyang@116.216.0.53) has joined #ceph
[17:49] * dgurtner (~dgurtner@178.197.231.84) Quit (Ping timeout: 480 seconds)
[17:53] * xarses_ (~xarses@64.124.158.100) has joined #ceph
[17:57] * haomaiwang (~haomaiwan@180.sub-70-193-56.myvzw.com) has joined #ceph
[17:57] * wyang (~wyang@116.216.0.53) Quit (Quit: This computer has gone to sleep)
[17:58] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[17:58] * dsl (~dsl@204.155.27.221) Quit (Remote host closed the connection)
[17:59] * dsl (~dsl@204.155.27.221) has joined #ceph
[18:01] * vbellur (~vijay@rrcs-70-60-101-195.midsouth.biz.rr.com) Quit (Ping timeout: 480 seconds)
[18:02] * dalgaaf (uid15138@id-15138.ealing.irccloud.com) has joined #ceph
[18:02] * squizzi (~squizzi@74.203.127.200) has joined #ceph
[18:06] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[18:06] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) has joined #ceph
[18:06] * dsl (~dsl@204.155.27.221) Quit (Remote host closed the connection)
[18:10] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:11] * n0x1d (~Frymaster@06SAABJT4.tor-irc.dnsbl.oftc.net) Quit ()
[18:11] * andrew_m (~Kakeru@93.115.95.202) has joined #ceph
[18:11] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[18:16] * haomaiwang (~haomaiwan@180.sub-70-193-56.myvzw.com) Quit (Remote host closed the connection)
[18:17] <Anticimex> yay, jewel
[18:17] * huangjun (~kvirc@117.152.64.187) Quit (Ping timeout: 480 seconds)
[18:17] <Anticimex> sage, gregsfortytwo: i didn't find any video streaming covering vault 2016 (1min searching tho)
[18:19] * vbellur (~vijay@74.203.127.200) has joined #ceph
[18:20] * linjan (~linjan@86.62.112.22) Quit (Ping timeout: 480 seconds)
[18:21] * Skaag (~lunix@65.200.54.234) has joined #ceph
[18:22] * mahesh (~mahesh@122.171.157.65) Quit ()
[18:22] * cathode (~cathode@50.232.215.114) Quit (Quit: Leaving)
[18:23] * dneary (~dneary@50-206-118-3-static.hfc.comcastbusiness.net) has joined #ceph
[18:25] <Anticimex> very nicely formatted & detailed release notes on http://ceph.com/releases/v10-2-0-jewel-released/ :)
[18:28] * vbellur (~vijay@74.203.127.200) Quit (Ping timeout: 480 seconds)
[18:30] * evelu (~erwan@46.231.131.178) Quit (Ping timeout: 480 seconds)
[18:34] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) Quit (Quit: Leaving.)
[18:36] * Racpatel (~Racpatel@2601:87:3:3601:4e34:88ff:fe87:9abf) Quit (Quit: Leaving)
[18:38] * squizzi (~squizzi@74.203.127.200) Quit (Ping timeout: 480 seconds)
[18:41] * andrew_m (~Kakeru@76GAAEQFB.tor-irc.dnsbl.oftc.net) Quit ()
[18:41] * Rosenbluth (~mLegion@hessel3.torservers.net) has joined #ceph
[18:45] * TMM (~hp@185.5.122.2) Quit (Quit: Ex-Chat)
[18:46] * JustEra (~JustEra@217.138.37.156) Quit (Quit: This computer has gone to sleep)
[18:52] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:52] * derjohn_mob (~aj@2001:6f8:1337:0:85c1:5adf:ddd3:2e4f) Quit (Ping timeout: 480 seconds)
[18:52] * shylesh__ (~shylesh@45.124.227.15) has joined #ceph
[18:55] * Lea (~LeaChim@host86-176-19-208.range86-176.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[18:56] * kawa2014 (~kawa@83.111.58.108) Quit (Quit: Leaving)
[18:59] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) has joined #ceph
[19:04] * Lea (~LeaChim@host86-176-19-208.range86-176.btcentralplus.com) has joined #ceph
[19:05] <loth> Hi all, is there any way to mark a placement group as lost in 0.94 ?
[19:05] * neurodrone (~neurodron@158.106.193.162) Quit (Quit: neurodrone)
[19:05] * neurodrone (~neurodron@162.243.191.67) has joined #ceph
[19:07] * dneary (~dneary@50-206-118-3-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[19:08] * compass (~compass@10.93.0.5) Quit (Remote host closed the connection)
[19:11] * Rosenbluth (~mLegion@7V7AAD6GB.tor-irc.dnsbl.oftc.net) Quit ()
[19:11] * Deiz (~lobstar@hessel3.torservers.net) has joined #ceph
[19:11] * davidzlap (~Adium@2605:e000:1313:8003:8c2f:fea2:3c75:8114) has joined #ceph
[19:18] * lcurtis_ (~lcurtis@47.19.105.250) has joined #ceph
[19:18] * JustEra (~JustEra@217.138.37.156) has joined #ceph
[19:18] * haomaiwang (~haomaiwan@2600:1004:b069:6936:e8b3:bc7c:c79b:7b7e) has joined #ceph
[19:18] * fdmanana (~fdmanana@74.203.127.5) has joined #ceph
[19:20] * swami1 (~swami@27.7.165.186) has joined #ceph
[19:20] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Quit: Bye guys!)
[19:22] * DG1 (~Adium@inet-hqmc01-o.oracle.com) has joined #ceph
[19:22] * ingard (~cake@tu.rd.vc) Quit (Ping timeout: 480 seconds)
[19:22] <DG1> Has anyone tried using swift tempurls with radosgw?
[19:23] * BrianA (~BrianA@fw-rw.shutterfly.com) has joined #ceph
[19:26] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) Quit (Quit: Leaving.)
[19:26] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) has joined #ceph
[19:27] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) Quit (Quit: Leaving)
[19:34] * Titin (~textual@LFbn-1-1560-65.w90-65.abo.wanadoo.fr) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[19:35] * EinstCrazy (~EinstCraz@101.85.207.66) Quit (Remote host closed the connection)
[19:35] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[19:40] * thomnico (~thomnico@12.237.105.2) Quit (Ping timeout: 480 seconds)
[19:41] * Deiz (~lobstar@6AGAAA59V.tor-irc.dnsbl.oftc.net) Quit ()
[19:43] * mattbenjamin (~mbenjamin@rrcs-70-60-101-195.midsouth.biz.rr.com) has joined #ceph
[19:45] * clusterfudge1 (~straterra@93.171.205.34) has joined #ceph
[19:50] * gillesMo (~gillesMo@00012912.user.oftc.net) has joined #ceph
[19:50] * vbellur (~vijay@74.203.127.200) has joined #ceph
[19:56] * hoonetorg (~hoonetorg@77.119.226.254.static.drei.at) has joined #ceph
[19:59] * fdmanana (~fdmanana@74.203.127.5) Quit (Ping timeout: 480 seconds)
[20:02] <diq> so no video stream of today's talk?
[20:05] * swami1 (~swami@27.7.165.186) Quit (Ping timeout: 480 seconds)
[20:06] * kefu (~kefu@107.191.53.152) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[20:11] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[20:11] * LDA (~kvirc@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[20:12] * Hemanth (~hkumar_@103.228.221.185) has joined #ceph
[20:15] * clusterfudge1 (~straterra@06SAABJX7.tor-irc.dnsbl.oftc.net) Quit ()
[20:15] * Kizzi (~qable@185-35-137-250.v4.as62454.net) has joined #ceph
[20:17] * lmb (~Lars@74.203.127.200) has joined #ceph
[20:20] * dneary (~dneary@50-206-118-3-static.hfc.comcastbusiness.net) has joined #ceph
[20:20] * JustEra (~JustEra@217.138.37.156) Quit (Quit: This computer has gone to sleep)
[20:21] * Gandle (~boob@00021b85.user.oftc.net) Quit (Remote host closed the connection)
[20:22] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[20:25] * LDA|2 (~kvirc@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[20:27] * LDA (~kvirc@host217-114-156-249.pppoe.mark-itt.net) Quit (Ping timeout: 480 seconds)
[20:36] * dneary (~dneary@50-206-118-3-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[20:36] * blo (~blo@2a01cb0400809e0020100494682ce72e.ipv6.abo.wanadoo.fr) Quit (Quit: Leaving)
[20:37] * espeer_ (~quassel@phobos.isoho.st) Quit (Read error: Connection reset by peer)
[20:38] * shylesh__ (~shylesh@45.124.227.15) Quit (Remote host closed the connection)
[20:39] * espeer (~quassel@phobos.isoho.st) has joined #ceph
[20:41] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[20:43] * haomaiwang (~haomaiwan@2600:1004:b069:6936:e8b3:bc7c:c79b:7b7e) Quit (Remote host closed the connection)
[20:44] * vbellur (~vijay@74.203.127.200) has left #ceph
[20:45] * Kizzi (~qable@4MJAAEDUE.tor-irc.dnsbl.oftc.net) Quit ()
[20:46] * rendar (~I@host165-182-dynamic.37-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[20:47] * pabluk_ is now known as pabluk__
[20:48] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) has joined #ceph
[20:49] * rendar (~I@host165-182-dynamic.37-79-r.retail.telecomitalia.it) has joined #ceph
[20:49] * linjan (~linjan@176.193.210.34) has joined #ceph
[20:53] * lmb (~Lars@74.203.127.200) Quit (Ping timeout: 480 seconds)
[20:54] <TMM> So, I had an interesting problem today, I lost connectivity between all my osds and monitors due to someone dorking the network config
[20:55] <TMM> Everything came back fine, but it took like a minute for all osds and monitors to connect to each other again
[20:55] <T1> that doesn't sound good
[20:55] <TMM> is there a way to improve that time?
[20:55] <TMM> T1, yeah, it was very interesting
[20:55] <T1> wan't it because of tcp timeouts?
[20:55] <T1> wasn't even
[20:55] <TMM> 'may you live in interesting times' interesting
[20:55] <T1> haha, yeah
[20:56] <T1> I'm contemplating upgrading to Jewel at some point in time
[20:56] * xarses_ is now known as xarses
[20:56] <TMM> I don't think it was tcp timeouts alone at least
[20:56] <T1> it SHOULD be possible with no downtime for me
[20:56] <TMM> It took quite a while for the 'peering' states to go away
[20:56] <T1> buuut I'll probably wait at least for 10.1 to come out
[20:57] <TMM> Jewel will mark CephFS as production stable, right?
[20:57] <xarses> loth: yes, there has been a way for many versions now; I think it's under the `ceph pg` sub group, but I don't recall exactly
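The subcommand xarses is reaching for is most likely mark_unfound_lost, which does exist in 0.94; a sketch, reusing loth's pg id from earlier and a placeholder OSD id:

    ceph pg 3.261 mark_unfound_lost revert     # roll unfound objects back to their previous version
    ceph pg 3.261 mark_unfound_lost delete     # or forget them entirely
    ceph osd lost 12 --yes-i-really-mean-it    # declare a whole OSD permanently gone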
[20:57] <T1> yes
[20:57] <TMM> yeah, I'll probably upgrade sooner rather than later to that
[20:57] <T1> but there are probably still some things that need to be ironed out
[20:57] <TMM> people keep bitching about not providing file shares
[20:57] <T1> mmm - and it would be nice to drop nfs
[20:58] <TMM> yeah
[20:58] <T1> but I'll probably also wait at least until v11 or v12 before moving everything
[20:58] <T1> hot zones in the filesystem
[20:58] <T1> and tens of millions of files..
[20:58] <TMM> well, I'm sure that the rbd stuff in jewel is fine
[20:59] <T1> yeah
[20:59] <TMM> I can at least provisionally start supporting cephfs here for test and acceptance
[20:59] <T1> I've read all the stuff on the mailing list about depricating v1 images
[20:59] <TMM> and tell the users I'll laugh at them if they lose data in production before I tell them it's ok
[20:59] <TMM> that way I can test the data access patterns a bit
[20:59] <T1> but I would still like not to have to move everything several times over
[20:59] <TMM> yeah, makes sense
[21:00] <T1> so for now I'm not going away from v1 rbds and nfs
[21:00] <T1> we'll probably go over to cephfs at some point, but not the first time it's marked production ready
[21:01] <T1> I want to hear warstories from others first..
[21:01] <T1> but going from hammer to jewel is not a bad idea - from an oldish LTS release to the latest
[21:02] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[21:05] <TMM> I'm on hammer now too
[21:05] <TMM> hammer with jemalloc now actually
[21:05] <TMM> I love to do non-standard things it seems
[21:05] <T1> heh
[21:06] <T1> I have not even changed the scheduler on my rhel7 machines.. :)
[21:07] <TMM> I had giant problems with tcmalloc, even with an increased thread cache size
[21:07] <TMM> load on my osds has decreased a lot
[21:07] <T1> I've actually never had any problem of any kind..
[21:08] <TMM> I was wondering why people called me adventurous when I said I deployed an all flash cluster
[21:08] * haomaiwang (~haomaiwan@2600:1004:b069:6936:61ff:87c1:a1fa:378c) has joined #ceph
[21:08] <TMM> now I know :P
[21:08] <T1> heh
[21:08] <TMM> it has actually been mostly fine
[21:09] <TMM> but cpu load on the osds goes up a lot
[21:10] <TMM> probably because it can process so many more requests than on platters
[21:11] * Hemanth (~hkumar_@103.228.221.185) Quit (Quit: Leaving)
[21:12] * haomaiwang (~haomaiwan@2600:1004:b069:6936:61ff:87c1:a1fa:378c) Quit (Remote host closed the connection)
[21:12] * haomaiwang (~haomaiwan@2600:1004:b069:6936:61ff:87c1:a1fa:378c) has joined #ceph
[21:13] * haomaiwang (~haomaiwan@2600:1004:b069:6936:61ff:87c1:a1fa:378c) Quit (Remote host closed the connection)
[21:13] * thomnico (~thomnico@12.237.105.2) has joined #ceph
[21:13] * squizzi (~squizzi@74.203.127.200) has joined #ceph
[21:14] <TMM> I'm excited to see how my new osd nodes will perform though
[21:14] <TMM> Probably also a little on the exciting side
[21:15] * AGaW (~eXeler0n@06SAABJ07.tor-irc.dnsbl.oftc.net) has joined #ceph
[21:19] * haomaiwang (~haomaiwan@2600:1004:b069:6936:61ff:87c1:a1fa:378c) has joined #ceph
[21:23] * gillesMo (~gillesMo@00012912.user.oftc.net) Quit (Ping timeout: 480 seconds)
[21:24] * sudocat (~dibarra@192.185.1.20) Quit (Remote host closed the connection)
[21:27] * PoRNo-MoRoZ (~hp1ng@mail.ap-team.ru) has joined #ceph
[21:28] * mattbenjamin (~mbenjamin@rrcs-70-60-101-195.midsouth.biz.rr.com) Quit (Ping timeout: 480 seconds)
[21:29] <TMM> is there a way to reduce the tcp timeouts used by the osds btw? There are some kernel settings that I can tweak but it'd be nicer to only change them for osds I think
[21:29] * Gjax (~martin@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[21:35] * Bartek (~Bartek@dynamic-78-8-238-183.ssp.dialog.net.pl) has joined #ceph
[21:38] * dyasny (~dyasny@46-117-8-108.bb.netvision.net.il) Quit (Ping timeout: 480 seconds)
[21:41] <TMM> "osd op thread timeout" perhaps?
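"osd op thread timeout" is an internal watchdog for stuck worker threads, not a network timer. The knobs closer to what's being asked are the messenger and heartbeat settings; a sketch, with defaults as documented for hammer (worth double-checking per release):

    [global]
    ms tcp read timeout = 300     # close idle connections sooner (default 900 s)
    [osd]
    osd heartbeat interval = 6    # how often OSDs ping their peers (default)
    osd heartbeat grace = 20      # seconds without a reply before a peer is reported down (default)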
[21:42] <PoRNo-MoRoZ> hola
[21:42] <PoRNo-MoRoZ> one of my osds can't start ><
[21:42] <PoRNo-MoRoZ> probably due to fs corruption
[21:42] <PoRNo-MoRoZ> can this cause it ?
[21:42] <PoRNo-MoRoZ> 2016-04-21 22:39:16.426198 7f0b6ae4a900 -1 filestore(/var/lib/ceph/osd/ceph-3) could not find 1/00000040//head in index: (2) No such file or directory
[21:42] <PoRNo-MoRoZ> 2016-04-21 22:39:16.426222 7f0b6ae4a900 -1 filestore(/var/lib/ceph/osd/ceph-3) could not find 1/00000040//head in index: (2) No such file or directory
[21:42] <PoRNo-MoRoZ> 2016-04-21 22:39:17.227838 7f0b6ae4a900 -1 filestore(/var/lib/ceph/osd/ceph-3) could not find 1/00000040//head in index: (2) No such file or directory
[21:42] <TMM> PoRNo-MoRoZ, if it just the one, just nuke it
[21:42] <PoRNo-MoRoZ> i got some unfounds
[21:42] <PoRNo-MoRoZ> ><
[21:43] <TMM> oh
[21:43] <PoRNo-MoRoZ> that linked to that osd
[21:43] <TMM> then don't do that
[21:43] <TMM> how did that happen?
[21:43] <PoRNo-MoRoZ> i manually got problem objects
[21:43] <PoRNo-MoRoZ> but don't know what to do with them
[21:43] <PoRNo-MoRoZ> dunno, looks like i'm newb:D
[21:43] <PoRNo-MoRoZ> actually messed up with cache
[21:43] <PoRNo-MoRoZ> and journals
[21:43] * infernix (nix@spirit.infernix.net) Quit (Ping timeout: 480 seconds)
[21:43] <PoRNo-MoRoZ> restarted osds alot also
[21:44] * Bartek (~Bartek@dynamic-78-8-238-183.ssp.dialog.net.pl) Quit (Ping timeout: 480 seconds)
[21:44] <PoRNo-MoRoZ> actually restarts killed it
[21:44] <TMM> yeah, you don't want to be restarting stuff while your cluster isn't healthy
[21:45] <PoRNo-MoRoZ> i felt some kinda power when i played with ceph a lot and it was really stable :D
[21:45] <PoRNo-MoRoZ> and now
[21:45] * thomnico (~thomnico@12.237.105.2) Quit (Ping timeout: 480 seconds)
[21:45] <PoRNo-MoRoZ> it does this
[21:45] <PoRNo-MoRoZ> if i got objects that reported to be unfound
[21:45] <PoRNo-MoRoZ> i got copies from both osds
[21:45] <PoRNo-MoRoZ> what i should do now?
[21:45] * AGaW (~eXeler0n@06SAABJ07.tor-irc.dnsbl.oftc.net) Quit ()
[21:45] * Bartek (~Bartek@dynamic-78-8-227-166.ssp.dialog.net.pl) has joined #ceph
[21:45] * infernix (nix@2001:41f0::2) has joined #ceph
[21:46] <TMM> I don't know, sorry. But there are some very knowlegable people in this channel
[21:46] <TMM> if you hang around someone will help
[21:46] <PoRNo-MoRoZ> hope so
[21:46] <xcezzz> unfound objects… you were able to mount and you have access to the osd data?
[21:46] <PoRNo-MoRoZ> yep
[21:46] <PoRNo-MoRoZ> osd can start
[21:46] <PoRNo-MoRoZ> and crashed like after 20 secs
[21:47] <PoRNo-MoRoZ> it even showing up 'up' in osd tree
[21:47] <PoRNo-MoRoZ> for a sec
[21:47] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[21:47] <TMM> is your cluster otherwise healthy?
[21:47] <xcezzz> oh… well that actually may be a really easy fix…
[21:47] <PoRNo-MoRoZ> yesterday i upgraded to infernalis
[21:47] <PoRNo-MoRoZ> except this unfound - it will be
[21:47] <PoRNo-MoRoZ> when the rebalance is done
[21:47] <PoRNo-MoRoZ> probably :)
[21:48] <TMM> PoRNo-MoRoZ, you may want to read this, it actually looks kind of straightforward, and it sounds like what happened to you : http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/#failures-osd-unfound
[21:48] <xcezzz> https://ceph.com/community/incomplete-pgs-oh-my/
[21:48] <xcezzz> but…
[21:48] <xcezzz> you may not have to go through the objectstore import deal
[21:48] <xcezzz> if the osds are otherwise healthy
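The troubleshooting doc linked above ends at marking the objects lost once every possible location has been probed; roughly, and strictly as a last resort after recovery settles (pg id taken from the examples below):

    ceph pg 1.2f0 mark_unfound_lost revert   # roll unfound objects back to their previous version
    # or, on versions that support it, 'mark_unfound_lost delete' to forget them entirely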
[21:48] * thomnico (~thomnico@12.237.105.2) has joined #ceph
[21:49] <TMM> I'd defer to xcezzz so they may tell you this advice is bad, but I'd personally wait until your rebalance is done before doing anything else
[21:49] <PoRNo-MoRoZ> i'm gonna read this
[21:49] <PoRNo-MoRoZ> thanks
[21:49] <xcezzz> PoRNo-MoRoZ: whats pool size?
[21:49] <xcezzz> for the one with unfound pgs
[21:49] <PoRNo-MoRoZ> about 12tb i think
[21:50] <PoRNo-MoRoZ> raw
[21:50] <xcezzz> sorry
[21:50] <xcezzz> replica size
[21:50] <PoRNo-MoRoZ> 2
[21:50] <TMM> ouch
[21:51] * erhudy (uid89730@id-89730.ealing.irccloud.com) Quit (Quit: Connection closed for inactivity)
[21:51] <xcezzz> ceph osd pool get <POOLNAME> min_size
[21:51] <PoRNo-MoRoZ> 1
[21:52] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) Quit (Remote host closed the connection)
[21:52] <PoRNo-MoRoZ> looks like the osd that can't start has fresher copies of 4 unfound objects
[21:53] <xcezzz> so you are able to see the unfounds on both drives that are acting?
[21:54] <PoRNo-MoRoZ> yep, problematic osd mounts ok
[21:54] <xcezzz> ceph health detail | grep unfound
[21:54] <PoRNo-MoRoZ> and does start, but crashes soon after
[21:54] <PoRNo-MoRoZ> pg 1.2f0 is active+recovering+undersized+degraded+remapped, acting [16], 1 unfound
[21:54] <PoRNo-MoRoZ> pg 1.7d is active+recovering+undersized+degraded+remapped, acting [16], 2 unfound
[21:54] <PoRNo-MoRoZ> pg 1.85 is active+recovering+undersized+degraded, acting [6], 1 unfound
[21:54] <PoRNo-MoRoZ> yeah i got object names
[21:54] <PoRNo-MoRoZ> from these pg id's
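Given those pg ids, ceph can also be asked directly which objects are unfound and which OSDs it still wants to probe; a short sketch:

    ceph pg 1.2f0 list_missing    # names of the unfound objects and known locations
    ceph pg 1.2f0 query           # 'recovery_state' shows which osds are being probed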
[21:55] <TMM> I know this doesn't really help you now, but running at a min size of 1 isn't very safe
[21:55] <xcezzz> ^^^
[21:55] <PoRNo-MoRoZ> yep
[21:55] <PoRNo-MoRoZ> :)
[21:55] <PoRNo-MoRoZ> risky boy
[21:55] <TMM> For the reasons you're seeing now
[21:55] <TMM> you really should consider going for min 2, size 3
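For reference, both are per-pool settings and can be raised online once there is capacity for the extra replicas; raising size triggers the copies to be created (pool name is a placeholder):

    ceph osd pool set <poolname> size 3       # replicas to keep
    ceph osd pool set <poolname> min_size 2   # replicas required before accepting I/O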
[21:55] <xcezzz> is there any recovery traffic?
[21:55] <PoRNo-MoRoZ> i'm migrating stuff, and don't have all planned hdds at this time
[21:55] <PoRNo-MoRoZ> i'm gonna use 2+3 ofc
[21:56] <PoRNo-MoRoZ> yep
[21:56] <PoRNo-MoRoZ> about 2gbits i think
[21:56] <PoRNo-MoRoZ> based on monitoring
[21:56] * wwdillingham (~LobsterRo@189.149.136.30) has joined #ceph
[21:56] <PoRNo-MoRoZ> ceph -s says different values
[21:56] * haomaiwang (~haomaiwan@2600:1004:b069:6936:61ff:87c1:a1fa:378c) Quit (Remote host closed the connection)
[21:56] <PoRNo-MoRoZ> sometimes 1.5 gbps, sometimes 180 mbps
[21:56] <xcezzz> just wondering if ceph -w or ceph status mentions any recovering objects
[21:56] <PoRNo-MoRoZ> it's definitely not frozen at all
[21:57] <PoRNo-MoRoZ> a lot )
[21:57] <PoRNo-MoRoZ> recovery 62989/8078819 objects degraded (0.780%)
[21:57] <PoRNo-MoRoZ> recovery 681114/8078819 objects misplaced (8.431%)
[21:57] <xcezzz> well… it may figure itself out…
[21:57] <TMM> I'd highly recommend waiting for recovery to be done before touching anything else
[21:57] <xcezzz> but if the pgs that are 'unfound' are still in a remapped/degraded state you def wanna wait for recovery
[21:57] <PoRNo-MoRoZ> should i do anything with the problematic osd ?
[21:57] <PoRNo-MoRoZ> that can't start
[21:58] <xcezzz> no leave it be...
[21:58] <PoRNo-MoRoZ> okay i'm waiting
[21:58] <TMM> not until everything else is done
[21:58] <xcezzz> at least until recovery is done… that way the pg statuses you're looking at are only the ones with problems
[21:58] <TMM> yeah
[21:58] <TMM> +1
[21:58] <xcezzz> but you do have a few options in the end…
[21:59] <TMM> 1 problem at a time, doing multiple things at once is what got you in this mess ;)
[21:59] <TMM> you're still in a universe governed by the cap theorem :P
[21:59] <TMM> ceph be not magical ;)
[22:00] <PoRNo-MoRoZ> damn too bad it's not 'pew pew magic'
[22:00] <PoRNo-MoRoZ> :D
[22:00] <xcezzz> actually… ceph is magical… just at size 3 lol
[22:00] <PoRNo-MoRoZ> but it's near
[22:00] <TMM> hahaha
[22:00] <PoRNo-MoRoZ> :D for me 2 is near 3
[22:00] <PoRNo-MoRoZ> so it fits
[22:00] <xcezzz> haha… until this happens
[22:00] <TMM> I run my ec pools at 7+3
[22:01] <TMM> my cross power domain pools at 2+4 and my single power domains at 2+3
[22:01] <xcezzz> so check it… the mount points for the osd not running.. you'll be able to see the objects and pg names on them… you can even 'hide' bad pgs… and try to start the osd… you probably got in a state where the pgs are not consistent with the rest of the cluster… so 'hiding' them will let it just think 'ok, fixed'… but you really gotta wait for recovery to be sure and deal with one problem at a time
[22:02] <TMM> yeah, you may end up making things worse if you hide the wrong ones, more unfound objects
[22:02] <PoRNo-MoRoZ> how do i hide them ?
[22:02] <xcezzz> mkdir .hidden && mv 1.9e .hidden/
[22:02] <PoRNo-MoRoZ> using that link 'incomplete pgs' ?
[22:02] <PoRNo-MoRoZ> ah, that simple? :D
[22:03] <PoRNo-MoRoZ> basically just move it
[22:03] <PoRNo-MoRoZ> i know problematic objects, can i hide only them ?
[22:03] * daiver (~daiver@95.85.8.93) Quit (Remote host closed the connection)
[22:03] <xcezzz> ya just out of the way… but srsly don't do anything unless recovery is finished
[22:03] * daiver (~daiver@216.85.162.34) has joined #ceph
[22:04] <xcezzz> but ya that article i linked will definitely help if it really comes down to it
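Spelled out, the 'hide' trick is just moving the pg's directory out of the OSD's data dir while the daemon is stopped; a sketch assuming a filestore OSD at the default path, the pg id from the example above, and only after recovery is done:

    systemctl stop ceph-osd@3                      # or the sysvinit equivalent
    cd /var/lib/ceph/osd/ceph-3/current
    mkdir ../.hidden && mv 1.9e_head ../.hidden/   # filestore pg dirs are named <pgid>_head
    systemctl start ceph-osd@3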
[22:05] * Kupo1 (~t.wilson@23.111.255.162) has joined #ceph
[22:05] <PoRNo-MoRoZ> should i disable scrub to speed up recovery ?
[22:05] <Kupo1> hey all, is there any way to see what placement groups are used by what RBD volumes?
[22:07] <ska> In "ceph df -fjson-pretty" why is .stats.total_avail_bytes not equal to .pools[].stats.max_avail ??
[22:08] <xcezzz> Kupo1: the cern ceph scripts will let you do that easily
[22:09] * aarontc (~aarontc@2001:470:e893::1:1) Quit (Quit: Bye!)
[22:09] <Kupo1> xcezzz: you have a link? :)
[22:09] <TMM> PoRNo-MoRoZ, I would really just not touch anything. disabling scrub will probably not cause any additional trouble but still
[22:09] <xcezzz> PoRNo-MoRoZ: well yes you could… but a caveat…
[22:09] <xcezzz> if it's done… and you re-enable scrubs
[22:09] <TMM> PoRNo-MoRoZ, leave it be, really, it's the best advice in 90% of all cases in my experience. It's what I've been drilling into my engineers
[22:09] <xcezzz> it could trigger the maximum scrub allowed to fire off immediately
[22:10] <TMM> ^^^
[22:10] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[22:10] <xcezzz> soo… while it is an option
[22:10] <xcezzz> you will probably need to start manual scrubs with it still disabled so it's more of a gentle scrub
[22:10] <xcezzz> and the cern ceph scripts will help with that
[22:11] <xcezzz> https://github.com/cernceph/ceph-scripts/tree/master/tools
[22:11] * aarontc (~aarontc@2001:470:e893::1:1) has joined #ceph
[22:11] <xcezzz> but any itching or inkling you have to do anything right now… just don't do it
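The flags under discussion, for reference; as noted above, unsetting them can kick off the whole scrub backlog at once, up to the configured per-OSD maximum:

    ceph osd set noscrub
    ceph osd set nodeep-scrub
    # later, once the cluster is calm:
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub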
[22:11] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[22:12] * daiver (~daiver@216.85.162.34) Quit (Ping timeout: 480 seconds)
[22:12] <TMM> yeah
[22:12] <PoRNo-MoRoZ> :)
[22:12] <PoRNo-MoRoZ> thanks
[22:12] <PoRNo-MoRoZ> @
[22:12] <PoRNo-MoRoZ> !
[22:13] <PoRNo-MoRoZ> 6.9% left ..
[22:13] <xcezzz> is it still serving clients?
[22:13] <PoRNo-MoRoZ> some frozen
[22:13] <PoRNo-MoRoZ> most of them
[22:13] <TMM> do you have rbd clients?
[22:13] <PoRNo-MoRoZ> vms
[22:13] <TMM> ok, so yes
[22:13] <xcezzz> a whole bunch of dead vms
[22:14] <xcezzz> lol
[22:14] * haomaiwang (~haomaiwan@180.sub-70-193-56.myvzw.com) has joined #ceph
[22:14] <PoRNo-MoRoZ> :DD
[22:14] * csoukup (~csoukup@159.140.254.106) Quit (Ping timeout: 480 seconds)
[22:14] <PoRNo-MoRoZ> exactly
[22:14] * haomaiwang (~haomaiwan@180.sub-70-193-56.myvzw.com) Quit (Remote host closed the connection)
[22:14] <TMM> they might continue eventually
[22:14] <PoRNo-MoRoZ> hope so
[22:14] <xcezzz> ya just be patient… did you mess with the tunables when you upgraded? the upgrade shouldn't have caused a rebalance unless you did…
[22:14] <PoRNo-MoRoZ> that's so weird cluster
[22:15] <xcezzz> u
[22:15] <PoRNo-MoRoZ> i just rebalanced myself
[22:15] <PoRNo-MoRoZ> got some new disks
[22:15] <PoRNo-MoRoZ> i mean
[22:15] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has left #ceph
[22:15] <PoRNo-MoRoZ> i got synology
[22:15] <PoRNo-MoRoZ> with chrooted debian
[22:15] <PoRNo-MoRoZ> and ceph inside it
[22:15] <PoRNo-MoRoZ> on old kernel
[22:15] <PoRNo-MoRoZ> like 3.10
[22:15] <TMM> what's the processor on that?
[22:15] <xcezzz> o dear
[22:15] <PoRNo-MoRoZ> i3
[22:15] <PoRNo-MoRoZ> :D
[22:15] <TMM> how many osds?
[22:15] * offer (~Behedwin@46.166.138.175) has joined #ceph
[22:15] <PoRNo-MoRoZ> at this moment 6
[22:16] <xcezzz> uhhh how much memory in there?
[22:16] <TMM> how much storage space?
[22:16] <PoRNo-MoRoZ> 32gb
[22:16] <PoRNo-MoRoZ> disks 4tbs
[22:16] * BrianA2 (~BrianA@fw-rw.shutterfly.com) has joined #ceph
[22:16] <PoRNo-MoRoZ> we got 2 nodes more
[22:16] <PoRNo-MoRoZ> for that pool
[22:16] <PoRNo-MoRoZ> and 3 nodes for another
[22:16] <TMM> that's enough, but only barely
[22:16] <PoRNo-MoRoZ> yep, cutting edge :D
[22:16] <xcezzz> i was gonna say cutting something else… lol
[22:17] <PoRNo-MoRoZ> don't have all osds in cluster atm, so it's choppy
[22:17] <TMM> you may want to buy some cheap-ass UPSes
[22:17] <TMM> and wire them up with their serial ports so that you shut down the whole lot if you lose power, that'll probably work out better for you than trying to run the cluster with a major degradation with that hardware
[22:18] <Kupo1> xcezzz: I don't see anything on https://github.com/cernceph/ceph-scripts/tree/master/tools that will match specific placement groups to RBDs for me; I had to delete two PGs and want to see which RBDs were affected
[22:18] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[22:18] <ska> Is max_avail for pools based on replication factors?
[22:18] <TMM> if you lose a single osd, it should be fine
[22:18] <TMM> but if you lose a box you should really just turn everything off
[22:19] <BrianA2> sweet. 2.0 is released. Now I can just redo some systems I broke :)
[22:19] <xcezzz> Kupo1: oh u wanna go the other way
[22:19] * BrianA (~BrianA@fw-rw.shutterfly.com) Quit (Ping timeout: 480 seconds)
[22:19] <Kupo1> yeah
[22:19] <xcezzz> rbd info imagename
[22:19] <xcezzz> take the first part of the rbd prefix
[22:20] <xcezzz> err wait still wrong way lol
[22:20] <PoRNo-MoRoZ> ah
[22:20] <PoRNo-MoRoZ> one more question
[22:20] <ska> I show that in my example max_avail is about 2/3 of .stats.total_avail_bytes ....
[22:20] <PoRNo-MoRoZ> not related to my problem :D
[22:21] <Kupo1> xcezzz: I think i might be able to make that work, if i can get all the pg data and filter it down
[22:21] <PoRNo-MoRoZ> i set up weights by size in GB, but on one node osds always filled 20-30% more
[22:21] <Kupo1> xcezzz: how do i go from rbd_data.c939e42bb1c4b3 to pg?
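One way to get from an rbd block-name prefix to pgs without extra tooling, assuming the pool is named 'rbd'; listing a big pool is slow, so this is a sketch rather than a recipe:

    # every object of that image is named rbd_data.<prefix>.<offset>
    rados -p rbd ls | grep rbd_data.c939e42bb1c4b3 > objs.txt
    # map each object to its pg and acting set
    while read obj; do ceph osd map rbd "$obj"; done < objs.txt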
[22:22] <TMM> PoRNo-MoRoZ, with so few hosts I'm not that surprised
[22:23] <PoRNo-MoRoZ> i know that :D
[22:23] <PoRNo-MoRoZ> thanks
[22:23] <PoRNo-MoRoZ> gonna ask for more money for that cluster
[22:23] <PoRNo-MoRoZ> we need way more
[22:23] <TMM> yeah, you really want a minimum of 5 hosts
[22:23] * offer (~Behedwin@46.166.138.175) Quit (Ping timeout: 480 seconds)
[22:24] <PoRNo-MoRoZ> currently 6, but on different pools
[22:24] <TMM> and with that hardware I'd myself highly recommend you get a couple of dedicated mon boxes
[22:24] <PoRNo-MoRoZ> i should grow to 50 disks i think
[22:24] <xcezzz> Kupo1: you can use the rados tool to look em up
[22:24] <PoRNo-MoRoZ> well i got some xeons in that cluster
[22:24] <xcezzz> i can find the current option at the moment i got sidetracked onto something else
[22:24] <PoRNo-MoRoZ> actually 4 xeon systems )
[22:24] <TMM> you really need 5 hosts per pool, minimum
[22:25] <PoRNo-MoRoZ> okay
[22:25] <xcezzz> i started with 3… 12tb boxes 24TB each, bonded 1gbps for balance and 1gbps for public
[22:26] <xcezzz> worked great, though rebalances and adding drives were slow
[22:26] <PoRNo-MoRoZ> what about btrfs ?
[22:26] * danieagle (~Daniel@179.110.217.188) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[22:26] * jackhill (~jackhill@bog.hcoop.net) has joined #ceph
[22:27] <xcezzz> nothing special really… supposedly a bit faster writes and trims
[22:27] <PoRNo-MoRoZ> i'm a bit scared to say
[22:27] <PoRNo-MoRoZ> but our cluster on btrfs
[22:27] <PoRNo-MoRoZ> is that bad ?
[22:27] <PoRNo-MoRoZ> we found with synthetic tests that btrfs is better than xfs
[22:28] <PoRNo-MoRoZ> but now i dunno
[22:28] <m0zes> faster on a clean (non-fragmented) system
[22:28] * allaok (~allaok@ARennes-658-1-103-68.w83-199.abo.wanadoo.fr) has joined #ceph
[22:28] <PoRNo-MoRoZ> damn
[22:28] <m0zes> fragmentation degrades that performance quickly.
[22:28] * Bartek (~Bartek@dynamic-78-8-227-166.ssp.dialog.net.pl) Quit (Ping timeout: 480 seconds)
[22:28] <PoRNo-MoRoZ> xfs is not as affected as btrfs ?
[22:29] <m0zes> not nearly as badly. very few filesystems like fragmentation.
[22:30] <PoRNo-MoRoZ> currently btrfs writing all the time
[22:30] <PoRNo-MoRoZ> i mean in the idle state i can see every osd has some writes
[22:30] <PoRNo-MoRoZ> btrfs-cleaner
[22:30] <PoRNo-MoRoZ> or worker
[22:31] <PoRNo-MoRoZ> is it normal ?
[22:32] <xcezzz> my reasoning…
[22:32] * wwdillingham (~LobsterRo@189.149.136.30) Quit (Quit: wwdillingham)
[22:33] <xcezzz> XFS is a high-performance 64-bit journaling file system created by Silicon Graphics, Inc (SGI) in 1993.
[22:33] <xcezzz> The development of Btrfs began in 2007,
[22:33] <TMM> you want to balance your btrfs nodes from time to time
[22:33] <TMM> but for that to not kill your performance completely you really need pretty fast disks and osds
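A hedged example of such a balance, filtered so it only rewrites mostly-empty chunks instead of the whole filesystem (mount point assumed to be the OSD data dir):

    btrfs balance start -dusage=50 -musage=50 /var/lib/ceph/osd/ceph-3
    btrfs balance status /var/lib/ceph/osd/ceph-3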
[22:33] <DG1> has anyone used Swift tempurls with Ceph object gateway?
[22:36] <TMM> PoRNo-MoRoZ, waaaiiiittt a second
[22:36] <TMM> PoRNo-MoRoZ, you're running btrfs on a production cluster on 3.10?!
[22:36] <PoRNo-MoRoZ> single node
[22:36] <PoRNo-MoRoZ> rest 4.2 or 4.4
[22:37] <xcezzz> o man that sounds like a management nightmare
[22:37] <PoRNo-MoRoZ> that synology .. i just hate it )
[22:37] <TMM> you really cannot run btrfs on anything older than 3.16
[22:37] <TMM> seriously
[22:37] <PoRNo-MoRoZ> yep ><
[22:37] <TMM> I've been running btrfs on production workloads for about 2 years
[22:37] * Bartek (~Bartek@dynamic-78-8-227-166.ssp.dialog.net.pl) has joined #ceph
[22:38] <TMM> before 3.16 I habitually lost entire filesystems
[22:38] <PoRNo-MoRoZ> the funny thing is that node is one of the most stable
[22:38] <PoRNo-MoRoZ> oh god my english is so bad
[22:38] <TMM> oh yeah, btrfs hasn't crashed kernels since like 3.2 or something
[22:38] <PoRNo-MoRoZ> :D
[22:39] * squizzi (~squizzi@74.203.127.200) Quit (Ping timeout: 480 seconds)
[22:39] <TMM> but it'll eat filesystems
[22:39] <PoRNo-MoRoZ> scrubbing can handle this while i'm moving to xfs ?
[22:39] <TMM> one thing at a time man, fix your lost objects first
[22:40] <PoRNo-MoRoZ> yeah, ofc not right now :D
[22:40] <TMM> you just want to out one of the btrfs osds, format the disk to xfs, add the osd back in, wait for backfill
[22:40] <TMM> rinse, repeat
[22:40] <TMM> one at a time
[22:40] <TMM> but do it per OSD, not for the entire host
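The per-OSD cycle described above, as a rough sketch (osd id and device are placeholders; infernalis-era tooling assumed):

    ceph osd out 3                         # drain data off osd.3
    # ...wait until the cluster is active+clean again...
    systemctl stop ceph-osd@3
    ceph osd crush remove osd.3 && ceph auth del osd.3 && ceph osd rm 3
    mkfs.xfs -f /dev/sdX1                  # reformat the data partition as xfs
    # re-create the osd (e.g. ceph-disk prepare) and wait for backfill before the next one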
[22:41] <PoRNo-MoRoZ> 5% left
[22:42] <PoRNo-MoRoZ> speed is dropping as far as i can see
[22:42] <xcezzz> that'll happen… fewer pgs running at a time
[22:42] <xcezzz> if you're replacing/migrating drives
[22:42] <TMM> yeah, you only get so many threads to do these operations
[22:42] <xcezzz> do nobackfill
[22:42] <ska> I see in the C++ code that the max_avail for pools is ' = avail * k / (m + k);'
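So max_avail is usable-after-redundancy space, not raw space; a worked example, hedged since the exact pool profiles aren't known here:

    erasure-coded, k=2, m=1:  max_avail = avail * 2 / (2 + 1) = 2/3 of avail
    replicated, size n:       the same logic reduces to avail / n

A ~2/3 ratio like the one observed above would be consistent with a k=2/m=1 profile or similar overhead.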
[22:43] <xcezzz> remove it… add it back before turning nobackfill off
[22:43] <xcezzz> so you don't waste a backfill cycle
[22:43] <TMM> ah yeah, what xcezzz just said
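The flag workflow being suggested, roughly:

    ceph osd set nobackfill      # pause backfill cluster-wide
    # swap/remove/add the disks, let the osdmap settle
    ceph osd unset nobackfill    # then pay for only one backfill pass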
[22:43] <PoRNo-MoRoZ> oh, didn't know about that flag
[22:43] <PoRNo-MoRoZ> thanks
[22:44] <xcezzz> ya you can even use it if u wanna stop backfills due to speed issues or whatever…
[22:44] <xcezzz> but usually the best idea for backfill eating up iops is to lower the recovery priority / osd max backfills etc
[22:45] <PoRNo-MoRoZ> i've currently got injected settings above the ceph defaults
[22:45] <PoRNo-MoRoZ> to speed things up
[22:45] <PoRNo-MoRoZ> usually i keep them at the lowest possible
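Injecting those throttles at runtime looks roughly like this; values are illustrative, and injected settings last only until the daemons restart:

    # gentle: favor client I/O
    ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1'
    # aggressive: favor recovery during a quiet window
    ceph tell osd.* injectargs '--osd_max_backfills 4 --osd_recovery_max_active 8'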
[22:45] <PoRNo-MoRoZ> oh god i love irc
[22:46] <PoRNo-MoRoZ> i should install my bouncer again
[22:46] <PoRNo-MoRoZ> :D
[22:46] <xcezzz> ya… i have a few scripts lying around so if need be i can # sleep 21600; ./recoveryio; sleep 21600; ./clientio to have it go up and down outside peak times
[22:46] * derjohn_mob (~aj@x590d52c3.dyn.telefonica.de) has joined #ceph
[22:49] * Gjax (~martin@93-167-84-102-static.dk.customer.tdc.net) Quit (Quit: Leaving)
[22:50] * haomaiwang (~haomaiwan@2600:1004:b069:6936:454c:38ce:c625:16d2) has joined #ceph
[22:50] * dneary (~dneary@50-206-118-3-static.hfc.comcastbusiness.net) has joined #ceph
[22:51] <PoRNo-MoRoZ> https://i.gyazo.com/e031c7f48517e59d53e2570a07d72983.png
[22:51] <PoRNo-MoRoZ> this is how my problems look :D
[22:53] * luigiman (~SaneSmith@76GAAEQMS.tor-irc.dnsbl.oftc.net) has joined #ceph
[22:54] * ksingh1 (~Adium@64.169.30.57) has joined #ceph
[22:56] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) Quit (Ping timeout: 480 seconds)
[22:57] * haomaiwang (~haomaiwan@2600:1004:b069:6936:454c:38ce:c625:16d2) Quit (Remote host closed the connection)
[23:01] * wushudoin (~wushudoin@38.140.108.2) Quit (Quit: Leaving)
[23:01] * scuttle|afk is now known as scuttlemonkey
[23:01] * wushudoin (~wushudoin@38.140.108.2) has joined #ceph
[23:04] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[23:08] * allaok (~allaok@ARennes-658-1-103-68.w83-199.abo.wanadoo.fr) has left #ceph
[23:08] <PoRNo-MoRoZ> every object is 4mb ?
[23:10] * thomnico (~thomnico@12.237.105.2) Quit (Remote host closed the connection)
[23:10] * Bartek (~Bartek@dynamic-78-8-227-166.ssp.dialog.net.pl) Quit (Ping timeout: 480 seconds)
[23:10] <PoRNo-MoRoZ> on replication the client first writes to the master (head/primary) osd, then this master writes to its slaves
[23:11] <PoRNo-MoRoZ> is that correct ?
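That is the RADOS model: the client writes only to the pg's primary OSD, which forwards the write to the replicas and acks the client once all copies are safe. Which OSD is primary for a given object can be checked directly (pool/object names are placeholders):

    ceph osd map <pool> <objectname>
    # prints the pg plus e.g. 'acting ([16,3], p16)' - p16 marks the primary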
[23:12] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[23:13] * Racpatel (~Racpatel@2601:87:3:3601::30ed) has joined #ceph
[23:13] * linjan (~linjan@176.193.210.34) Quit (Ping timeout: 480 seconds)
[23:14] <ksingh1> yes by default object size is 4M
[23:14] <PoRNo-MoRoZ> i mean can i estimate the remaining size from the remaining objects ?
[23:14] <PoRNo-MoRoZ> 200k objects
[23:15] <PoRNo-MoRoZ> 781gb
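Checking that estimate: 200,000 objects x 4 MiB = 800,000 MiB ~= 781 GiB, which matches, so the object count is a fair proxy (less any tail objects smaller than a full 4 MiB).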
[23:15] * haplo37 (~haplo37@199.91.185.156) Quit (Remote host closed the connection)
[23:16] * haomaiwang (~haomaiwan@180.sub-70-193-56.myvzw.com) has joined #ceph
[23:19] * thomnico (~thomnico@12.237.105.2) has joined #ceph
[23:23] * luigiman (~SaneSmith@76GAAEQMS.tor-irc.dnsbl.oftc.net) Quit ()
[23:24] * haomaiwang (~haomaiwan@180.sub-70-193-56.myvzw.com) Quit (Ping timeout: 480 seconds)
[23:28] * csharp (~clarjon1@195.22.126.119) has joined #ceph
[23:34] * lmb (~Lars@74.203.127.5) has joined #ceph
[23:36] * scuttlemonkey is now known as scuttle|afk
[23:39] * LDA|2 (~kvirc@host217-114-156-249.pppoe.mark-itt.net) Quit (Ping timeout: 480 seconds)
[23:44] * wushudoin (~wushudoin@38.140.108.2) Quit (Quit: Leaving)
[23:44] * wushudoin (~wushudoin@38.140.108.2) has joined #ceph
[23:47] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[23:49] * rendar (~I@host165-182-dynamic.37-79-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[23:49] * vata (~vata@207.96.182.162) Quit (Quit: Leaving.)
[23:51] * ksingh1 (~Adium@64.169.30.57) Quit (Quit: Leaving.)
[23:51] <Kupo1> xcezzz: "(1:24:33 PM) xcezzz: Kupo1: you can use the rados tool to look em up" which tool is that?
[23:52] * Gandle (~boob@00021b85.user.oftc.net) has joined #ceph
[23:55] * Bartek (~Bartek@dynamic-78-8-227-166.ssp.dialog.net.pl) has joined #ceph
[23:58] * csharp (~clarjon1@76GAAEQNE.tor-irc.dnsbl.oftc.net) Quit ()
[23:58] * Zyn (~Jourei@tor-exit.simpelbuntu.nl) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.