#ceph IRC Log

IRC Log for 2016-09-16

Timestamps are in GMT/BST.

[0:05] * fsimonce (~simon@host98-71-dynamic.1-87-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[0:05] * srk (~Siva@32.97.110.55) Quit (Ping timeout: 480 seconds)
[0:07] <cetex> also, when changing size of a metadata pool for cephfs (2replicas -> 3 replicas), what would be the reason for all IO to/from that pool to stall for some time?
[0:19] * sleinen1 (~Adium@2001:620:0:82::101) Quit (Ping timeout: 480 seconds)
[0:20] * DoDzy (~biGGer@tsn109-201-154-183.dyn.nltelcom.net) has joined #ceph
[0:43] * tsg__ (~tgohad@jfdmzpr05-ext.jf.intel.com) Quit (Ping timeout: 480 seconds)
[0:45] * tdb_ (~tdb@myrtle.kent.ac.uk) Quit (Ping timeout: 480 seconds)
[0:49] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[0:50] * DoDzy (~biGGer@tsn109-201-154-183.dyn.nltelcom.net) Quit ()
[0:54] * tdb (~tdb@myrtle.kent.ac.uk) has joined #ceph
[0:54] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[1:00] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[1:02] * diver (~diver@cpe-2606-A000-111B-C12B-7195-8E8-347F-73C2.dyn6.twc.com) has joined #ceph
[1:02] * srk (~Siva@2605:6000:ed04:ce00:ac6b:7704:1eed:3e9a) has joined #ceph
[1:04] * kiasyn (~Blueraven@nl3x.mullvad.net) has joined #ceph
[1:04] * lcurtis_ (~lcurtis@47.19.105.250) Quit (Read error: Connection reset by peer)
[1:08] * tdb (~tdb@myrtle.kent.ac.uk) Quit (Ping timeout: 480 seconds)
[1:09] * srk_ (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[1:10] * diver (~diver@cpe-2606-A000-111B-C12B-7195-8E8-347F-73C2.dyn6.twc.com) Quit (Remote host closed the connection)
[1:11] * diver (~diver@cpe-2606-A000-111B-C12B-7195-8E8-347F-73C2.dyn6.twc.com) has joined #ceph
[1:13] * kristen (~kristen@134.134.139.82) Quit (Quit: Leaving)
[1:13] * hoonetorg (~hoonetorg@77.119.226.254.static.drei.at) Quit (Ping timeout: 480 seconds)
[1:14] * neurodrone_ (~neurodron@162.243.191.67) Quit (Quit: neurodrone_)
[1:14] * ntpttr_ (~ntpttr@134.134.139.78) has joined #ceph
[1:16] * srk (~Siva@2605:6000:ed04:ce00:ac6b:7704:1eed:3e9a) Quit (Ping timeout: 480 seconds)
[1:17] * foxxx0 (~fox@valhalla.nano-srv.net) Quit (Ping timeout: 480 seconds)
[1:17] * Nixx (~quassel@bulbasaur.sjorsgielen.nl) Quit (Ping timeout: 480 seconds)
[1:17] * Nixx (~quassel@bulbasaur.sjorsgielen.nl) has joined #ceph
[1:18] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[1:18] * foxxx0 (~fox@valhalla.nano-srv.net) has joined #ceph
[1:18] * ntpttr_ (~ntpttr@134.134.139.78) Quit (Remote host closed the connection)
[1:19] * diver (~diver@cpe-2606-A000-111B-C12B-7195-8E8-347F-73C2.dyn6.twc.com) Quit (Ping timeout: 480 seconds)
[1:20] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[1:20] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[1:23] * hoonetorg (~hoonetorg@77.119.226.254.static.drei.at) has joined #ceph
[1:25] * [0x4A6F]_ (~ident@p4FC278EE.dip0.t-ipconnect.de) has joined #ceph
[1:28] * [0x4A6F] (~ident@0x4a6f.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:28] * [0x4A6F]_ is now known as [0x4A6F]
[1:28] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:28] * srk_ (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[1:34] * kiasyn (~Blueraven@26XAABYVA.tor-irc.dnsbl.oftc.net) Quit ()
[1:34] * narthollis (~CobraKhan@108.61.122.221) has joined #ceph
[1:38] * corevoid (~corevoid@ip68-5-125-61.oc.oc.cox.net) has joined #ceph
[1:38] * hoonetorg (~hoonetorg@77.119.226.254.static.drei.at) Quit (Read error: Connection reset by peer)
[1:43] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) has joined #ceph
[1:43] <corevoid> Hi. So, maybe someone has an idea of where to go on this. I have just set up 2 rgw instances in a multisite setup. They are working nicely. I have added a couple of test buckets and some files to make sure it works is all. The status shows both are caught up. Nobody else is accessing or using them. However, the CPU load on both hosts is sitting at like 3.00, with the radosgw process taking up 99% CPU constantly. I do not see anything in the
[1:43] <corevoid> logs happening at all. Any thoughts or direction?
[1:43] * doppelgrau1 (~doppelgra@132.252.235.172) Quit (Quit: Leaving.)
[1:43] * tdb (~tdb@myrtle.kent.ac.uk) has joined #ceph
[1:46] * mhack (~mhack@24-151-36-149.dhcp.nwtn.ct.charter.com) has joined #ceph
[1:47] * oms101 (~oms101@p20030057EA025D00C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:47] * vata (~vata@207.96.182.162) Quit (Quit: Leaving.)
[1:53] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[1:54] * diver (~diver@cpe-2606-A000-111B-C12B-198D-2DD4-4C86-EAB9.dyn6.twc.com) has joined #ceph
[1:54] * hoonetorg (~hoonetorg@77.119.226.254.static.drei.at) has joined #ceph
[1:55] * oms101 (~oms101@p20030057EA01D400C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:58] * salwasser (~Adium@c-76-118-229-231.hsd1.ma.comcast.net) has joined #ceph
[2:01] * Skaag (~lunix@65.200.54.234) Quit (Quit: Leaving.)
[2:03] * salwasser (~Adium@c-76-118-229-231.hsd1.ma.comcast.net) Quit (Quit: Leaving.)
[2:04] * narthollis (~CobraKhan@2RTAAAEEQ.tor-irc.dnsbl.oftc.net) Quit ()
[2:04] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[2:09] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[2:10] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[2:13] * diver (~diver@cpe-2606-A000-111B-C12B-198D-2DD4-4C86-EAB9.dyn6.twc.com) Quit (Remote host closed the connection)
[2:14] * diver (~diver@cpe-98-26-71-226.nc.res.rr.com) has joined #ceph
[2:14] * diver_ (~diver@cpe-2606-A000-111B-C12B-4D23-A08F-5F67-B16A.dyn6.twc.com) has joined #ceph
[2:15] * ron-slc_ (~Ron@173-165-129-118-utah.hfc.comcastbusiness.net) has joined #ceph
[2:17] * ron-slc (~Ron@173-165-129-118-utah.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[2:20] * srk_ (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[2:22] * diver (~diver@cpe-98-26-71-226.nc.res.rr.com) Quit (Ping timeout: 480 seconds)
[2:22] * srk_ (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[2:24] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[2:26] * Jaska (~Wijk@213.61.149.100) has joined #ceph
[2:27] * KindOne_ (kindone@h20.149.29.71.dynamic.ip.windstream.net) has joined #ceph
[2:29] * g-unit (~gunit@ip68-230-63-130.ph.ph.cox.net) has joined #ceph
[2:31] <g-unit> Hello. Does anyone have experience running Ceph on ZFS? If so, what are the caveats of using a single disk vdev vs raidz2 under ceph?
[2:32] <ben1> no experience doing such, but raidz2 is kind of not how ceph is meant to work
[2:33] * madkiss (~madkiss@2a02:8109:8680:2000:b8e3:4e23:c7f2:b34d) Quit (Read error: Connection reset by peer)
[2:33] <ben1> you're meant to have 3 copies on 3 different machines of your data or such
[2:33] <ben1> and you're going to lose iops
[2:33] * madkiss (~madkiss@2a02:8109:8680:2000:95b7:2e70:7e03:1e22) has joined #ceph
[2:34] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[2:34] * KindOne_ is now known as KindOne
[2:35] * vata (~vata@96.127.202.136) has joined #ceph
[2:37] <g-unit> I was thinking of having a degree of error-correction at the node layer, thereby limiting the intra-node traffic during an osd "rebuild". Instead the zpool would be in a degraded state, but the osds would only be rebuilt upon node failure+restoration.
[2:37] <g-unit> ^the osds would only be rebuilt
[2:39] * KindOne_ (kindone@h121.178.190.173.ip.windstream.net) has joined #ceph
[2:42] * g-unit (~gunit@ip68-230-63-130.ph.ph.cox.net) Quit (Quit: Textual IRC Client: www.textualapp.com)
[2:42] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[2:42] * madkiss (~madkiss@2a02:8109:8680:2000:95b7:2e70:7e03:1e22) Quit (Read error: Connection reset by peer)
[2:43] * madkiss (~madkiss@2a02:8109:8680:2000:95b7:2e70:7e03:1e22) has joined #ceph
[2:45] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[2:45] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[2:45] * KindOne_ is now known as KindOne
[2:49] * tdb_ (~tdb@myrtle.kent.ac.uk) has joined #ceph
[2:49] * tdb (~tdb@myrtle.kent.ac.uk) Quit (Read error: Connection reset by peer)
[2:53] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[2:54] * KegelB (~chatzilla@24-148-87-69.c3-0.grn-ubr2.chi-grn.il.cable.rcn.com) has joined #ceph
[2:56] * Jaska (~Wijk@9J5AAAS0Q.tor-irc.dnsbl.oftc.net) Quit ()
[3:09] <diver_> I tried to start OSD on the ZoL, latest version & CentOS
[3:10] <diver_> OSD simply didn't start
[3:10] <diver_> stuck on 'loading pgs'
[3:15] * jfaj (~jan@p4FC5BF85.dip0.t-ipconnect.de) has joined #ceph
[3:19] * hgjhgjh (~MKoR@178-175-128-50.static.host) has joined #ceph
[3:21] * EinstCrazy (~EinstCraz@2001:da8:8001:822:c821:1ff2:4e37:a8e6) has joined #ceph
[3:21] * jfaj__ (~jan@p4FC2523D.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[3:22] * EinstCrazy (~EinstCraz@2001:da8:8001:822:c821:1ff2:4e37:a8e6) Quit (Remote host closed the connection)
[3:22] * EinstCrazy (~EinstCraz@2001:da8:8001:822:c821:1ff2:4e37:a8e6) has joined #ceph
[3:22] * wushudoin (~wushudoin@38.140.108.2) Quit (Ping timeout: 480 seconds)
[3:23] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[3:24] * xarses (~xarses@73.93.154.115) has joined #ceph
[3:24] * xarses (~xarses@73.93.154.115) Quit (Remote host closed the connection)
[3:24] * aj__ (~aj@x4db2641a.dyn.telefonica.de) has joined #ceph
[3:24] * xarses (~xarses@73.93.154.115) has joined #ceph
[3:27] * mhack (~mhack@24-151-36-149.dhcp.nwtn.ct.charter.com) Quit (Ping timeout: 480 seconds)
[3:32] * derjohn_mobi (~aj@x590cfca8.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[3:37] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[3:38] * KindOne_ (kindone@h121.178.190.173.ip.windstream.net) has joined #ceph
[3:39] * borei1 (~dan@216.13.217.230) Quit (Ping timeout: 480 seconds)
[3:41] * xarses (~xarses@73.93.154.115) Quit (Read error: Connection reset by peer)
[3:41] * xarses_ (~xarses@73.93.154.115) has joined #ceph
[3:41] * sebastian-w (~quassel@212.218.8.138) has joined #ceph
[3:42] * sebastian-w_ (~quassel@212.218.8.138) Quit (Read error: Connection reset by peer)
[3:42] * srk (~Siva@2605:6000:ed04:ce00:e914:4b9a:224:fd2a) has joined #ceph
[3:44] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[3:44] * KindOne_ is now known as KindOne
[3:45] * tsg (~tgohad@192.55.54.36) has joined #ceph
[3:48] * diver_ (~diver@cpe-2606-A000-111B-C12B-4D23-A08F-5F67-B16A.dyn6.twc.com) Quit (Remote host closed the connection)
[3:49] * diver (~diver@cpe-2606-A000-111B-C12B-4D23-A08F-5F67-B16A.dyn6.twc.com) has joined #ceph
[3:49] * hgjhgjh (~MKoR@26XAABYX4.tor-irc.dnsbl.oftc.net) Quit ()
[3:50] * t4nk682 (~oftc-webi@117.247.104.70) has joined #ceph
[3:50] <t4nk682> is it possible to add ssd journaling in to an existing production ceph
[3:54] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[3:56] * KegelB (~chatzilla@24-148-87-69.c3-0.grn-ubr2.chi-grn.il.cable.rcn.com) Quit (Quit: ChatZilla 0.9.92 [Firefox 48.0.2/20160823121617])
[3:57] * diver (~diver@cpe-2606-A000-111B-C12B-4D23-A08F-5F67-B16A.dyn6.twc.com) Quit (Ping timeout: 480 seconds)
[4:02] * srk (~Siva@2605:6000:ed04:ce00:e914:4b9a:224:fd2a) Quit (Ping timeout: 480 seconds)
[4:02] <t4nk682> is it possible to add ssd journaling in to an existing production ceph
[4:07] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Ping timeout: 480 seconds)
[4:11] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[4:11] * Mousey (~dotblank@108.61.123.88) has joined #ceph
[4:12] * tsg (~tgohad@192.55.54.36) Quit (Remote host closed the connection)
[4:17] * andreww (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[4:20] * Jeffrey4l_ (~Jeffrey@110.244.238.95) Quit (Ping timeout: 480 seconds)
[4:24] * xarses_ (~xarses@73.93.154.115) Quit (Ping timeout: 480 seconds)
[4:24] * srk (~Siva@2605:6000:ed04:ce00:2cdf:b7ff:1268:31ed) has joined #ceph
[4:27] * haplo37 (~haplo37@199.91.185.156) Quit (Remote host closed the connection)
[4:30] * EinstCrazy (~EinstCraz@2001:da8:8001:822:c821:1ff2:4e37:a8e6) Quit (Remote host closed the connection)
[4:30] * EinstCrazy (~EinstCraz@58.246.118.131) has joined #ceph
[4:37] <corevoid> @t4nk682 I don't know if it is correct or not, but I have moved my journals to different disks before without an issue. I know you can recreate the journal. So, as long as you do it one OSD at a time, it should be fine.
[4:38] * davidzlap (~Adium@2605:e000:1313:8003:c9ce:633a:915e:1395) Quit (Quit: Leaving.)
[4:38] * EinstCrazy (~EinstCraz@58.246.118.131) Quit (Ping timeout: 480 seconds)
[4:38] * corevoid (~corevoid@ip68-5-125-61.oc.oc.cox.net) Quit (Quit: Leaving)
[4:41] * georgem (~Adium@69-165-135-139.dsl.teksavvy.com) has joined #ceph
[4:41] * Mousey (~dotblank@108.61.123.88) Quit ()
[4:43] * srk_ (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[4:49] * srk (~Siva@2605:6000:ed04:ce00:2cdf:b7ff:1268:31ed) Quit (Ping timeout: 480 seconds)
[4:49] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) has joined #ceph
[4:51] * EinstCrazy (~EinstCraz@2001:da8:8001:822:c821:1ff2:4e37:a8e6) has joined #ceph
[4:53] * srk_ (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[4:57] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[5:02] * zack_dolby (~textual@p845d32.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[5:08] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[5:16] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[5:17] * t4nk682 (~oftc-webi@117.247.104.70) has left #ceph
[5:18] * Vacuum__ (~Vacuum@88.130.204.79) has joined #ceph
[5:21] * EinstCrazy (~EinstCraz@2001:da8:8001:822:c821:1ff2:4e37:a8e6) Quit (Remote host closed the connection)
[5:21] * davidzlap (~Adium@rrcs-74-87-213-28.west.biz.rr.com) has joined #ceph
[5:21] * EinstCrazy (~EinstCraz@2001:da8:8001:822:c821:1ff2:4e37:a8e6) has joined #ceph
[5:23] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[5:24] * t4nk590 (~oftc-webi@117.247.104.70) has joined #ceph
[5:24] <t4nk590> it's possible to add ssd journaling into a production ceph. Is there any document available?
[5:25] * Vacuum_ (~Vacuum@88.130.198.181) Quit (Ping timeout: 480 seconds)
[5:25] <lurbs> t4nk590: http://www.spinics.net/lists/ceph-users/msg05061.html
[5:27] <lurbs> But personally I wouldn't trust that 'sage' guy. Bit of a flake. :)
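The journal move corevoid describes (and the ceph-users post lurbs links) can be sketched roughly as below for a pre-BlueStore FileStore OSD, done one OSD at a time. The OSD id and partition path are examples, not from the log; test on a non-critical OSD first.

```shell
ID=12                                        # example OSD id
ceph osd set noout                           # don't rebalance during the move
systemctl stop ceph-osd@$ID
ceph-osd -i $ID --flush-journal              # drain the old journal safely
ln -sf /dev/disk/by-partuuid/SSD_PART_UUID /var/lib/ceph/osd/ceph-$ID/journal
ceph-osd -i $ID --mkjournal                  # initialize the journal on the SSD
systemctl start ceph-osd@$ID
ceph osd unset noout
```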
[5:29] * EinstCrazy (~EinstCraz@2001:da8:8001:822:c821:1ff2:4e37:a8e6) Quit (Ping timeout: 480 seconds)
[5:30] * tsg (~tgohad@fmdmzpr01-ext.fm.intel.com) has joined #ceph
[5:35] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[5:36] * srk (~Siva@2605:6000:ed04:ce00:f8a7:dd7d:cc7a:b501) has joined #ceph
[5:52] * tsg (~tgohad@fmdmzpr01-ext.fm.intel.com) Quit (Remote host closed the connection)
[5:53] * tsg (~tgohad@134.134.139.76) has joined #ceph
[5:55] * georgem (~Adium@69-165-135-139.dsl.teksavvy.com) Quit (Quit: Leaving.)
[6:00] * zack_dolby (~textual@p15168-ipngn9301marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[6:03] * zack_dolby (~textual@p15168-ipngn9301marunouchi.tokyo.ocn.ne.jp) Quit ()
[6:04] * praveen (~praveen@122.172.122.231) Quit (Remote host closed the connection)
[6:04] * walcubi_ (~walcubi@p5795B441.dip0.t-ipconnect.de) has joined #ceph
[6:08] * EinstCrazy (~EinstCraz@2001:da8:8001:822:d9e:b27:732b:a5be) has joined #ceph
[6:10] * joshd (~jdurgin@125.16.34.66) has joined #ceph
[6:10] * jcsp (~jspray@125.16.34.66) has joined #ceph
[6:11] * walcubi (~walcubi@p5797A1F7.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[6:28] * spidu_ (~Sketchfil@exit0.radia.tor-relays.net) has joined #ceph
[6:39] * praveen_ (~praveen@122.172.122.231) has joined #ceph
[6:47] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[6:48] * srk (~Siva@2605:6000:ed04:ce00:f8a7:dd7d:cc7a:b501) Quit (Ping timeout: 480 seconds)
[6:49] * diver (~diver@cpe-2606-A000-111B-C12B-398D-B66E-9D4C-1F8C.dyn6.twc.com) has joined #ceph
[6:50] * wkennington (~wkenningt@c-71-204-170-241.hsd1.ca.comcast.net) has joined #ceph
[6:51] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) has joined #ceph
[6:58] * diver (~diver@cpe-2606-A000-111B-C12B-398D-B66E-9D4C-1F8C.dyn6.twc.com) Quit (Ping timeout: 480 seconds)
[6:58] * spidu_ (~Sketchfil@635AAAMU5.tor-irc.dnsbl.oftc.net) Quit ()
[6:59] * KindOne_ (kindone@h94.130.30.71.dynamic.ip.windstream.net) has joined #ceph
[6:59] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Quit: Leaving)
[7:01] * tsg (~tgohad@134.134.139.76) Quit (Ping timeout: 480 seconds)
[7:04] * dgurtner (~dgurtner@178.197.234.12) has joined #ceph
[7:05] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[7:05] * KindOne_ is now known as KindOne
[7:08] * jowilkin (~jowilkin@2601:644:4000:b0bf:ea2a:eaff:fe08:3f1d) Quit (Ping timeout: 480 seconds)
[7:13] * morse (~morse@supercomputing.univpm.it) Quit (Ping timeout: 480 seconds)
[7:18] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[7:24] * EinstCrazy (~EinstCraz@2001:da8:8001:822:d9e:b27:732b:a5be) Quit (Remote host closed the connection)
[7:24] * EinstCrazy (~EinstCraz@2001:da8:8001:822:d9e:b27:732b:a5be) has joined #ceph
[7:29] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[7:30] * KindOne_ (kindone@h212.229.28.71.dynamic.ip.windstream.net) has joined #ceph
[7:31] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[7:31] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[7:32] * EinstCrazy (~EinstCraz@2001:da8:8001:822:d9e:b27:732b:a5be) Quit (Ping timeout: 480 seconds)
[7:33] * KindOne_ (kindone@h212.229.28.71.dynamic.ip.windstream.net) Quit (Remote host closed the connection)
[7:37] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[7:38] * KindOne (kindone@h212.229.28.71.dynamic.ip.windstream.net) has joined #ceph
[7:43] * karnan (~karnan@125.16.34.66) has joined #ceph
[7:48] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) has joined #ceph
[7:56] * vikhyat (~vumrao@49.248.206.76) has joined #ceph
[7:57] * EinstCrazy (~EinstCraz@2001:da8:8001:822:fd77:c844:828f:f438) has joined #ceph
[8:00] * davidzlap (~Adium@rrcs-74-87-213-28.west.biz.rr.com) Quit (Quit: Leaving.)
[8:04] * EinstCrazy (~EinstCraz@2001:da8:8001:822:fd77:c844:828f:f438) Quit (Remote host closed the connection)
[8:07] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[8:09] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[8:14] * EinstCrazy (~EinstCraz@58.246.118.131) has joined #ceph
[8:14] * dgurtner (~dgurtner@178.197.234.12) Quit (Read error: Connection reset by peer)
[8:21] * ledgr (~ledgr@88-222-11-185.meganet.lt) has joined #ceph
[8:26] * EinstCrazy (~EinstCraz@58.246.118.131) Quit (Read error: Connection reset by peer)
[8:28] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[8:30] * rendar (~I@host224-58-dynamic.49-79-r.retail.telecomitalia.it) has joined #ceph
[8:31] * rakeshgm (~rakesh@125.16.34.66) has joined #ceph
[8:35] * EinstCrazy (~EinstCraz@2001:da8:8001:822:4c3a:f150:2f64:ea4f) has joined #ceph
[8:37] * karnan (~karnan@125.16.34.66) Quit (Quit: Leaving)
[8:37] * karnan (~karnan@125.16.34.66) has joined #ceph
[8:38] * jclm (~jclm@77.95.96.78) has joined #ceph
[8:38] * jclm (~jclm@77.95.96.78) Quit ()
[8:38] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) Quit (Quit: doppelgrau)
[8:45] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[8:49] * post-factum (~post-fact@vulcan.natalenko.name) Quit (Killed (NickServ (Too many failed password attempts.)))
[8:49] * post-factum (~post-fact@vulcan.natalenko.name) has joined #ceph
[8:50] * karnan (~karnan@125.16.34.66) Quit (Ping timeout: 480 seconds)
[8:52] * kuku (~kuku@119.93.91.136) has joined #ceph
[8:54] * schegi (~schegi@81.169.147.212) has joined #ceph
[9:00] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[9:05] * praveen_ (~praveen@122.172.122.231) Quit (Remote host closed the connection)
[9:07] * sleinen (~Adium@macsl.switch.ch) has joined #ceph
[9:13] * ade (~abradshaw@194.169.251.11) has joined #ceph
[9:17] * doppelgrau (~doppelgra@132.252.235.172) has joined #ceph
[9:22] * schegi (~schegi@81.169.147.212) Quit (Quit: leaving)
[9:22] * schegi (~schegi@81.169.147.212) has joined #ceph
[9:23] * ledgr (~ledgr@88-222-11-185.meganet.lt) Quit (Remote host closed the connection)
[9:24] * ledgr (~ledgr@88-222-11-185.meganet.lt) has joined #ceph
[9:25] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[9:33] * ledgr (~ledgr@88-222-11-185.meganet.lt) Quit (Remote host closed the connection)
[9:33] * ledgr (~ledgr@88-222-11-185.meganet.lt) has joined #ceph
[9:35] * northrup (~northrup@173.14.101.193) has joined #ceph
[9:35] <northrup> I am beating my head against a wall...
[9:35] <northrup> I am setting mds_cache_size in my ceph.conf under the [mds] block and going ALL the way to restarting the damn server...
[9:36] <northrup> and when I do "ceph --show-config | grep -i mds_cache" it's still the damned default "mds_cache_size = 100000"
[9:36] <northrup> HOW do you make this change then?
[9:37] * Jeffrey4l (~Jeffrey@120.11.26.103) has joined #ceph
[9:37] <singler_> did you try setting it in global section?
[9:37] <northrup> no... because the docs has it in the [mds] section...
[9:37] <northrup> I'll try that
[9:37] <singler_> maybe mds is searching only for [mds.somename] and does not include [mds]
[9:39] <northrup> ((facepalm)) it did take in the general settings
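What finally worked for northrup, written out as a ceph.conf fragment; the cache size value and the MDS daemon name are examples, not from the log:

```ini
# Either form should be picked up by the MDS; northrup found [global]
# worked where a plain [mds] section did not.
[global]
mds_cache_size = 500000

# or scoped to one named daemon (replace "a" with your MDS name):
[mds.a]
mds_cache_size = 500000
```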
[9:40] * singler_ is now known as singler
[9:44] * praveen_ (~praveen@121.244.155.11) has joined #ceph
[9:46] * ledgr (~ledgr@88-222-11-185.meganet.lt) Quit (Remote host closed the connection)
[9:46] * Dw_Sn (~Dw_Sn@00020a72.user.oftc.net) has joined #ceph
[10:05] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[10:06] * aj__ (~aj@x4db2641a.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[10:16] * fsimonce (~simon@host98-71-dynamic.1-87-r.retail.telecomitalia.it) has joined #ceph
[10:20] * northrup (~northrup@173.14.101.193) Quit (Quit: Textual IRC Client: www.textualapp.com)
[10:26] * TMM (~hp@dhcp-077-248-009-229.chello.nl) Quit (Quit: Ex-Chat)
[10:28] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:405b:e9e5:f08e:901e) has joined #ceph
[10:29] * karnan (~karnan@125.16.34.66) has joined #ceph
[10:30] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[10:30] * penguinRaider (~KiKo@50.115.124.86) has joined #ceph
[10:32] * karnan (~karnan@125.16.34.66) Quit ()
[10:32] * karnan (~karnan@125.16.34.66) has joined #ceph
[10:43] * aj__ (~aj@46.246.49.50) has joined #ceph
[10:44] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[10:49] <Jeeves_> Hi: https://en.wikipedia.org/wiki/Amazon_S3 states a file limit of 5TB for S3. Does this also apply to RadosGW?
[10:49] * wkennington (~wkenningt@c-71-204-170-241.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[10:51] * Linkmark (~Linkmark@252.146-78-194.adsl-static.isp.belgacom.be) has joined #ceph
[10:52] <IcePic> takes a while to test. ;)
[10:52] <IcePic> "Oh, forgot to chunk my file, hope your apache frontend has 5TB ram" ;)
[10:52] <Jeeves_> :)
[10:53] * rwheeler (~rwheeler@125.16.34.66) has joined #ceph
[10:57] * EinstCrazy (~EinstCraz@2001:da8:8001:822:4c3a:f150:2f64:ea4f) Quit (Remote host closed the connection)
[10:59] <singler> maybe there is a limit on non-multipart upload?
[10:59] <singler> I think AWs limits to 5GB
[10:59] <singler> AWS
[11:00] <singler> http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectOperations.html You can upload objects of up to 5 GB in size in a single operation. For objects greater than 5 GB you must use the multipart upload API.
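The limits singler quotes can be sanity-checked with a little arithmetic. This treats GB as GiB for simplicity and assumes RadosGW mirrors the published S3 caps (in practice RGW's limits are configurable):

```shell
single_put_max=$((5 * 1024 ** 3))   # 5 GiB: max single-operation PUT
object_max=$((5 * 1024 ** 4))       # 5 TiB: max object size overall
part_max=$((5 * 1024 ** 3))         # 5 GiB: max size of one multipart part

size=$object_max                    # worst case: a full-size 5 TiB object
parts=$(( (size + part_max - 1) / part_max ))   # ceiling division
echo "parts needed at max part size: $parts"    # 1024, well under S3's 10,000-part cap
```

So even a maximum-size object fits comfortably within one multipart upload.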
[11:07] * karnan (~karnan@125.16.34.66) Quit (Ping timeout: 480 seconds)
[11:08] * kefu (~kefu@114.92.125.128) has joined #ceph
[11:17] * rwheeler (~rwheeler@125.16.34.66) Quit (Quit: Leaving)
[11:17] * joshd (~jdurgin@125.16.34.66) Quit (Quit: Leaving.)
[11:18] <CypressXt> Hi again :). Just another thing. I'm removing some files from an RBD image and the available size doesn't change even when unmapping the image 0.0 ?
[11:19] * karnan (~karnan@125.16.34.66) has joined #ceph
[11:19] * TMM (~hp@185.5.121.201) has joined #ceph
[11:21] <Gugge-47527> CypressXt: what available size?
[11:21] <Gugge-47527> the one shown by the filesystem in the RBD, or the one shown by ceph?
[11:22] * kefu (~kefu@114.92.125.128) Quit (Max SendQ exceeded)
[11:22] * kefu (~kefu@114.92.125.128) has joined #ceph
[11:23] <CypressXt> Gugge-47527: the one shown by ceph
[11:23] <Gugge-47527> you need trim/discard support to get something deleted from an RBD
[11:25] <CypressXt> Gugge-47527: what's this thing ?
[11:26] <singler> it's like with SSDs. You need to tell underlying system that blocks are unused
[11:27] * jcsp (~jspray@125.16.34.66) Quit (Ping timeout: 480 seconds)
[11:27] <CypressXt> ok with the mount option like: mount -o discard /dev/rbd0 /cephmount/ ?
[11:28] <IcePic> can you run manual fstrim also?
[11:29] <CypressXt> apparently the mount option doesn't work but fstrim does. Is there a way to enable it without running the fstrim command manually ?
[11:32] <CypressXt> * discard option worked with the sync cmd. My bad
[11:38] * Xa (~ghostnote@46.166.188.238) has joined #ceph
[11:39] <Gugge-47527> CypressXt: if you use krbd, i'm not sure discard is supported
[11:39] <Gugge-47527> but maybe it is in new enough kernels :)
[11:39] * kefu (~kefu@114.92.125.128) Quit (Max SendQ exceeded)
[11:39] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Ping timeout: 480 seconds)
[11:40] * kefu (~kefu@114.92.125.128) has joined #ceph
[11:42] * LiamMon (~liam.monc@classical.moncur.eu) Quit (Quit: leaving)
[11:42] * LiamMon (~liam.monc@163.172.181.66) has joined #ceph
[12:02] * smithfarm (~smithfarm@213.151.95.130) has joined #ceph
[12:08] * Xa (~ghostnote@5AEAABPKL.tor-irc.dnsbl.oftc.net) Quit ()
[12:10] * sto_ is now known as sto
[12:11] * Vacuum_ (~Vacuum@88.130.198.189) has joined #ceph
[12:16] * tsg (~tgohad@134.134.139.82) has joined #ceph
[12:18] * Vacuum__ (~Vacuum@88.130.204.79) Quit (Ping timeout: 480 seconds)
[12:20] * tsg_ (~tgohad@134.134.139.82) has joined #ceph
[12:20] * tsg (~tgohad@134.134.139.82) Quit (Remote host closed the connection)
[12:34] <SamYaple> CypressXt: yea that option is on the filesystem itself when you mount
[12:35] <SamYaple> CypressXt: really though, its generally considered a bad idea to have discard running all the time. the recommended path is to run fstrim periodically
[12:36] <SamYaple> thats more due to ssd bios issues, i dont know how well it translates into the ceph implementation
[12:36] <SamYaple> we ran with discard for a while but then turned it off and now have a cron that runs fstrim once a day during downtime
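The periodic-fstrim setup SamYaple describes could look like this cron fragment; the mount path and schedule are assumptions, not from the log:

```
# /etc/cron.d/fstrim-rbd: trim the RBD-backed filesystem nightly during
# a low-traffic window, instead of mounting with -o discard
30 3 * * * root /sbin/fstrim /cephmount
```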
[12:40] * dneary (~dneary@80.169.137.53) has joined #ceph
[12:41] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[12:48] <IcePic> its about when to "pay" for the trim, in terms of io
[12:49] * t4nk590 (~oftc-webi@117.247.104.70) Quit (Ping timeout: 480 seconds)
[12:56] <SamYaple> IcePic: its more than that. continuous trim isn't actually handled well at all on some ssds. as in it will lock up the ssd for long periods of time when a periodic fstrim would not
[12:58] <SamYaple> IcePic: http://permalink.gmane.org/gmane.comp.file-systems.ext4/41974
[12:59] * sankarshan (~sankarsha@125.16.34.66) has joined #ceph
[12:59] * sankarshan (~sankarsha@125.16.34.66) Quit (Remote host closed the connection)
[13:02] * ccourtaut_ (~ccourtaut@93.31.173.157) Quit (Ping timeout: 480 seconds)
[13:03] * ccourtaut (~ccourtaut@157.173.31.93.rev.sfr.net) has joined #ceph
[13:04] * diver (~diver@cpe-98-26-71-226.nc.res.rr.com) has joined #ceph
[13:04] * diver_ (~diver@95.85.8.93) has joined #ceph
[13:05] * diver__ (~diver@cpe-2606-A000-111B-C12B-9EA-8E68-26C0-AAF9.dyn6.twc.com) has joined #ceph
[13:07] * ggarg (~ggarg@host-82-135-29-34.customer.m-online.net) Quit (Remote host closed the connection)
[13:08] * smithfarm (~smithfarm@213.151.95.130) Quit (Ping timeout: 480 seconds)
[13:12] * dgurtner (~dgurtner@217.192.177.51) has joined #ceph
[13:12] * diver (~diver@cpe-98-26-71-226.nc.res.rr.com) Quit (Ping timeout: 480 seconds)
[13:12] * diver_ (~diver@95.85.8.93) Quit (Ping timeout: 480 seconds)
[13:13] * diver__ (~diver@cpe-2606-A000-111B-C12B-9EA-8E68-26C0-AAF9.dyn6.twc.com) Quit (Ping timeout: 480 seconds)
[13:19] * derjohn_mob (~aj@b2b-94-79-172-98.unitymedia.biz) has joined #ceph
[13:20] <CypressXt> ok thx guys :)
[13:22] * smithfarm (~smithfarm@213.151.95.130) has joined #ceph
[13:23] * csharp (~SEBI@exit1.radia.tor-relays.net) has joined #ceph
[13:26] * aj__ (~aj@46.246.49.50) Quit (Ping timeout: 480 seconds)
[13:28] * Racpatel (~Racpatel@2601:87:3:31e3::77ec) has joined #ceph
[13:39] * diver (~diver@216.85.162.38) has joined #ceph
[13:40] * diver_ (~diver@95.85.8.93) has joined #ceph
[13:41] * bene2 (~bene@2601:193:4101:f410:ea2a:eaff:fe08:3c7a) has joined #ceph
[13:42] * karnan (~karnan@125.16.34.66) Quit (Ping timeout: 480 seconds)
[13:45] * smithfarm (~smithfarm@213.151.95.130) Quit (Ping timeout: 480 seconds)
[13:47] * diver (~diver@216.85.162.38) Quit (Ping timeout: 480 seconds)
[13:47] * georgem (~Adium@24.114.68.202) has joined #ceph
[13:49] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[13:52] * donatas (~donatas@88-119-196-104.static.zebra.lt) has joined #ceph
[13:53] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[13:53] <donatas> hi, i have ceph-fuse mount between 3 nodes, one node does `ls -la /home` very quickly, others just stuck, where should I take a look first?
[13:53] * csharp (~SEBI@26XAABY8M.tor-irc.dnsbl.oftc.net) Quit ()
[13:55] * joshd (~jdurgin@121.244.54.198) has joined #ceph
[13:56] * smithfarm (~smithfarm@213.151.95.130) has joined #ceph
[14:00] * valeech (~valeech@pool-96-247-203-33.clppva.fios.verizon.net) has joined #ceph
[14:02] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) Quit (Quit: treenerd_)
[14:02] * eXeler0n (~AotC@185.3.135.178) has joined #ceph
[14:03] * ade (~abradshaw@194.169.251.11) Quit (Ping timeout: 480 seconds)
[14:05] * karnan (~karnan@125.16.34.66) has joined #ceph
[14:08] * karnan (~karnan@125.16.34.66) Quit ()
[14:08] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[14:09] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[14:12] * rwheeler (~rwheeler@202.62.94.195) has joined #ceph
[14:12] * ircolle (~ircolle@202.62.94.195) has joined #ceph
[14:13] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[14:14] * ircolle (~ircolle@202.62.94.195) has left #ceph
[14:24] * shaunm (~shaunm@ms-208-102-105-216.gsm.cbwireless.com) Quit (Ping timeout: 480 seconds)
[14:24] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[14:26] * georgem (~Adium@24.114.68.202) Quit (Quit: Leaving.)
[14:30] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[14:30] * thomnico (~thomnico@2a01:e35:8b41:120:d16f:9087:ec5b:969f) has joined #ceph
[14:35] * eXeler0n (~AotC@185.3.135.178) Quit (Ping timeout: 480 seconds)
[14:37] * rendar (~I@host224-58-dynamic.49-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[14:51] * georgem (~Adium@206.108.127.16) has joined #ceph
[14:56] * donatas (~donatas@88-119-196-104.static.zebra.lt) Quit (Quit: leaving)
[14:56] * KindOne_ (kindone@h155.160.186.173.dynamic.ip.windstream.net) has joined #ceph
[14:59] * rendar (~I@host125-183-dynamic.46-79-r.retail.telecomitalia.it) has joined #ceph
[15:02] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[15:02] * KindOne_ is now known as KindOne
[15:04] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) Quit (Ping timeout: 480 seconds)
[15:05] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[15:05] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[15:08] * KindOne_ (kindone@h191.174.16.98.dynamic.ip.windstream.net) has joined #ceph
[15:13] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[15:13] * KindOne_ is now known as KindOne
[15:18] * mhack (~mhack@24-151-36-149.dhcp.nwtn.ct.charter.com) has joined #ceph
[15:19] * srk (~Siva@2605:6000:ed04:ce00:7041:6878:78e4:6270) has joined #ceph
[15:24] * Jeeves_ (~mark@2a03:7900:1:1:4cac:cad7:939b:67f4) Quit (Remote host closed the connection)
[15:25] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[15:26] * Jeeves_ (~mark@2a03:7900:1:1:4cac:cad7:939b:67f4) has joined #ceph
[15:27] * srk (~Siva@2605:6000:ed04:ce00:7041:6878:78e4:6270) Quit (Ping timeout: 480 seconds)
[15:31] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[15:32] * Branczu (branch@predictor.org.pl) Quit (Quit: http://www.hellcore-mailer.pl)
[15:34] * BranchPredictor (branch@00021630.user.oftc.net) has joined #ceph
[15:37] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Ping timeout: 480 seconds)
[15:47] * shaunm (~shaunm@cpe-192-180-17-174.kya.res.rr.com) has joined #ceph
[15:48] * kefu (~kefu@114.92.125.128) Quit (Max SendQ exceeded)
[15:48] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:49] * kefu (~kefu@114.92.125.128) has joined #ceph
[15:51] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[15:51] <mistur> Hello
[15:51] * kefu (~kefu@114.92.125.128) Quit ()
[15:52] * kuku (~kuku@112.202.168.251) has joined #ceph
[15:52] <mistur> how can I determine which pool is affected by "too many PGs per OSD"?
[15:53] <mistur> running jewel 10.2.2
[15:55] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[15:55] * salwasser (~Adium@72.246.3.14) has joined #ceph
[15:55] <singler> mistur: it does not target a specific pool; it means there are too many PGs in total
[15:55] * srk (~Siva@2605:6000:ed04:ce00:7041:6878:78e4:6270) has joined #ceph
[15:55] <mistur> singler: ok
[15:56] <mistur> I thought it was for a specific pool
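For context on the exchange above: the "too many PGs per OSD" warning is computed cluster-wide, roughly as the sum over all pools of pg_num times replica size, divided by the number of OSDs, and compared against a mon warn threshold (300 per OSD by default in the jewel era). A minimal sketch of that arithmetic; the pool figures below are made up for illustration:

```python
# Rough sketch of how "too many PGs per OSD" is derived. The default warn
# threshold (mon_pg_warn_max_per_osd) was 300 in jewel-era releases.
# Pool numbers below are hypothetical.

def pgs_per_osd(pools, num_osds):
    """pools: list of (pg_num, replica_size) tuples for every pool."""
    total_pg_replicas = sum(pg_num * size for pg_num, size in pools)
    return total_pg_replicas / num_osds

# Example: three pools on a 10-OSD cluster.
pools = [(1024, 3), (512, 3), (256, 2)]
ratio = pgs_per_osd(pools, 10)
print(ratio)        # 512.0 -> well above the default warn threshold of 300
print(ratio > 300)  # True
```

This also shows why the warning cannot point at one pool: every pool's PG replicas contribute to the same per-OSD average.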
[16:05] * diver__ (~diver@216.85.162.34) has joined #ceph
[16:05] * smithfarm (~smithfarm@213.151.95.130) Quit (Read error: Connection reset by peer)
[16:06] * andreww (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:06] * smithfarm (~smithfarm@213.151.95.130) has joined #ceph
[16:08] * srk (~Siva@2605:6000:ed04:ce00:7041:6878:78e4:6270) Quit (Ping timeout: 480 seconds)
[16:10] * derjohn_mob (~aj@b2b-94-79-172-98.unitymedia.biz) Quit (Ping timeout: 480 seconds)
[16:11] * diver_ (~diver@95.85.8.93) Quit (Ping timeout: 480 seconds)
[16:13] * vata (~vata@96.127.202.136) Quit (Quit: Leaving.)
[16:16] * srk (~Siva@2605:6000:ed04:ce00:7041:6878:78e4:6270) has joined #ceph
[16:17] <CypressXt> to give you some feedback on yesterday's discussion: I finally managed to reduce the freeze time of my ceph cluster (1 mon, 4 osd) to ~3-4sec instead of 900sec by using this configuration: http://pastebin.com/bTQDdWhJ
[16:22] <CypressXt> if I set "mon osd report timeout = 4" any lower than 4, the freeze time is shorter, but when I plug the lost osd back in, it can't rejoin the cluster properly after going down.
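The pastebin itself is not archived here, but the option CypressXt names is the monitor's OSD report grace period, which defaults to 900 seconds and matches the original 900-second freeze. A hypothetical ceph.conf fragment along the lines being discussed (not the actual pastebin contents):

```ini
[global]
# Default is 900 seconds; CypressXt reports ~3-4s freezes with a value of 4.
# Setting this very low makes the monitors declare unresponsive OSDs down
# much sooner, at the cost of flapping when an OSD is merely slow to rejoin.
mon osd report timeout = 4
```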
[16:24] * diver__ (~diver@216.85.162.34) Quit (Ping timeout: 480 seconds)
[16:24] * peetaur2_ (~peter@i4DF67CD2.pool.tripleplugandplay.com) has joined #ceph
[16:26] <singler> CypressXt: try setting "ceph osd set nodown" and "ceph osd set noout" before plugging it in
[16:26] <singler> see if it comes up
[16:27] <singler> you can unset these settings with "unset" instead of "set"
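The flag sequence singler describes can be sketched as a small maintenance script (these are standard ceph CLI cluster flags; it has to run against a live cluster with admin credentials):

```shell
#!/bin/sh
# Stop the monitors from marking OSDs down/out while a disk is re-added.
ceph osd set nodown
ceph osd set noout

# ... plug the OSD back in here and wait for it to rejoin ...

# Clear the flags again once the OSD is stable, so normal failure
# handling resumes.
ceph osd unset nodown
ceph osd unset noout
```

While noout is set, a down OSD will not be marked out, so no rebalancing is triggered during the maintenance window.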
[16:27] * diver (~diver@216.85.162.34) has joined #ceph
[16:28] * srk_ (~Siva@2605:6000:ed04:ce00:13f:91c0:54d9:5252) has joined #ceph
[16:33] <CypressXt> singler: yep I tried that, but the osd starts to flap between up and down until it finally goes down. Then I have to restart the ceph-osd service to bring it back up and working properly.
[16:33] * squizzi (~squizzi@107.13.237.240) has joined #ceph
[16:35] * srk (~Siva@2605:6000:ed04:ce00:7041:6878:78e4:6270) Quit (Ping timeout: 480 seconds)
[16:36] * smithfarm (~smithfarm@213.151.95.130) Quit (Ping timeout: 480 seconds)
[16:36] * doppelgrau (~doppelgra@132.252.235.172) Quit (Quit: Leaving.)
[16:37] * andreww (~xarses@64.124.158.3) has joined #ceph
[16:37] * kristen (~kristen@134.134.139.82) has joined #ceph
[16:39] * diver_ (~diver@216.85.162.38) has joined #ceph
[16:39] * malevolent (~quassel@192.146.172.118) Quit (Read error: Connection reset by peer)
[16:40] * malevolent (~quassel@192.146.172.118) has joined #ceph
[16:41] * vata (~vata@207.96.182.162) has joined #ceph
[16:41] <kiranos> Hi we get an error from a client
[16:41] <kiranos> "FULL or reached pool quota"
[16:41] <kiranos> but I've checked the pool
[16:41] <kiranos> ceph df
[16:41] <kiranos> pool1 33 3060G 3.03 8073G 1148120
[16:41] <kiranos> ceph osd pool get-quota pool1
[16:41] <kiranos> quotas for pool 'pool1':
[16:41] <kiranos> max objects: N/A
[16:41] <kiranos> max bytes : N/A
[16:41] <kiranos> its not either, anyone seen this or know how to check what the client is complaining about?
[16:42] * vikhyat (~vumrao@49.248.206.76) Quit (Quit: Leaving)
[16:43] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[16:44] * dgurtner (~dgurtner@217.192.177.51) Quit (Ping timeout: 480 seconds)
[16:44] <peetaur2_> kiranos: I can't likely solve your problem, but I think it would be helpful to know os, kernel, ceph version, and some column headings on that or `ceph osd df detail`
[16:45] <peetaur2_> and you could look at logs
[16:45] * diver (~diver@216.85.162.34) Quit (Ping timeout: 480 seconds)
[16:47] * Kurt (~Adium@2001:628:1:5:983f:a4ca:a2bb:967f) Quit (Quit: Leaving.)
[16:48] * diver (~diver@95.85.8.93) has joined #ceph
[16:48] * srk_ (~Siva@2605:6000:ed04:ce00:13f:91c0:54d9:5252) Quit (Ping timeout: 480 seconds)
[16:50] * wushudoin (~wushudoin@2601:646:8200:c9f0:2ab2:bdff:fe0b:a6ee) has joined #ceph
[16:54] * joshd (~jdurgin@121.244.54.198) Quit (Quit: Leaving.)
[16:55] * diver_ (~diver@216.85.162.38) Quit (Ping timeout: 480 seconds)
[16:55] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[16:56] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[16:57] * dgurtner (~dgurtner@178.197.234.12) has joined #ceph
[16:58] * The_Ball (~pi@20.92-221-43.customer.lyse.net) Quit (Ping timeout: 480 seconds)
[16:58] * thomnico (~thomnico@2a01:e35:8b41:120:d16f:9087:ec5b:969f) Quit (Ping timeout: 480 seconds)
[16:58] * tsg_ (~tgohad@134.134.139.82) Quit (Ping timeout: 480 seconds)
[17:05] <SamYaple> kiranos: is the _cluster_ FULL?
[17:09] * walcubi_ is now known as walcubi
[17:10] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[17:10] * walcubi (~walcubi@p5795B441.dip0.t-ipconnect.de) Quit (Quit: Leaving)
[17:10] * grauzikas (grauzikas@78-56-222-78.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[17:10] * walcubi (~walcubi@p5795B441.dip0.t-ipconnect.de) has joined #ceph
[17:11] <peetaur2_> that's why I wanted ceph osd df, for the % columns
[17:12] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[17:12] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) Quit (Quit: Leaving.)
[17:12] * rakeshgm (~rakesh@125.16.34.66) Quit (Ping timeout: 480 seconds)
[17:13] * Linkmark (~Linkmark@252.146-78-194.adsl-static.isp.belgacom.be) Quit (Quit: Leaving)
[17:18] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) has joined #ceph
[17:19] <walcubi> Now storing 1 billion objects. =)
[17:20] <walcubi> And there are two disks that keep dying - probably the speed at which it's trying to backfill just causes them to blow out each time I try.
[17:21] * dgurtner_ (~dgurtner@178.197.234.12) has joined #ceph
[17:22] * dgurtner (~dgurtner@178.197.234.12) Quit (Ping timeout: 480 seconds)
[17:24] * rakeshgm (~rakesh@125.16.34.66) has joined #ceph
[17:25] * rakeshgm (~rakesh@125.16.34.66) Quit ()
[17:25] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[17:25] <walcubi> I have osd_max_backfills=1 and osd_recovery_max_active=1 - other recovery settings are defaults.
[17:25] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) has joined #ceph
[17:26] <CypressXt> thx for help guys, bye
[17:26] <walcubi> Anything else that could be done to make the easing back into the cluster as painless as possible?
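For the question above: beyond osd_max_backfills and osd_recovery_max_active, the usual jewel-era lever is lowering the recovery op priority, and these can be injected at runtime without restarting OSDs. A sketch (the values are illustrative, not tuned recommendations, and osd.12 is a hypothetical OSD id):

```shell
# Throttle backfill/recovery cluster-wide without a restart.
ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'

# Alternatively, ease a re-added OSD in gradually by stepping its CRUSH
# weight up instead of dropping it in at full weight all at once.
ceph osd crush reweight osd.12 0.2   # then repeat with progressively larger weights
```

The gradual-reweight approach spreads the backfill load over several smaller data movements rather than one large one.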
[17:26] * CypressXt (~clement@205.37.194.178.dynamic.wline.res.cust.swisscom.ch) Quit (Quit: WeeChat 1.5)
[17:27] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[17:27] <walcubi> It's pretty bad when it takes a day for 120M objects to backfill one way, only to have to send them back the other way because the disks it's recovering to have slowed to a crawl and started blowing out.
[17:29] * Dw_Sn (~Dw_Sn@00020a72.user.oftc.net) Quit (Quit: leaving)
[17:29] <peetaur2_> walcubi: are you adding back the same disks? get new ones and they should endure any load you give them :D
[17:29] <walcubi> These disks are pretty much brand new
[17:30] * ntpttr_ (~ntpttr@134.134.139.83) has joined #ceph
[17:30] <peetaur2_> what kind of disks are they?
[17:30] <walcubi> SSDs
[17:30] * tsg_ (~tgohad@134.134.139.77) has joined #ceph
[17:30] <peetaur2_> cheap ones?
[17:31] <peetaur2_> only issues I had with SSDS (not in ceph...I don't have a real ceph cluster yet) were fixed with firmware updates
[17:31] <walcubi> Looks like they're intel SSDs
[17:31] <walcubi> Should be production grade
[17:33] <walcubi> There are 30 of them, all identical with maybe a small variance of ages.
[17:33] <walcubi> I don't think it's the disks, more the underlying filesystem that just slows to a crawl.
[17:33] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[17:34] <peetaur2_> what do you mean "blowing out"?
[17:34] <walcubi> Otherwise they would be slow from the get-go
[17:34] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) Quit (Quit: Leaving.)
[17:34] <walcubi> From the graphs, it looks like what happens when a formula 1 tyre goes.
[17:35] * smithfarm (~smithfarm@217.30.64.210) has joined #ceph
[17:35] * jowilkin (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) has joined #ceph
[17:35] <peetaur2_> ok well all I can do is give you a list of notes/options I read on https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/a-year-with-cinder-and-ceph-at-twc
[17:35] <peetaur2_> https://bpaste.net/show/e3b9f3b1c5e3
[17:35] <walcubi> OSD load goes 3x higher than average, internal disk usage starts increasing in a linear rate.
[17:35] <peetaur2_> but they were mainly talking about the rest of the cluster going slow due to having to send data to those new osd disks
[17:35] <walcubi> Yeah
[17:35] <walcubi> I'm seeing that too
[17:36] <peetaur2_> ok well, something to play with anyway :)
[17:36] <walcubi> Same when going the other way also.
[17:36] * derjohn_mob (~aj@x4db2641a.dyn.telefonica.de) has joined #ceph
[17:36] <walcubi> Would be nice if all new writes went to *other* disks when you mark an osd as down.
[17:36] <peetaur2_> down, or out?
[17:37] <walcubi> s/down/out/ ;-)
[17:37] <peetaur2_> doesn't all of everything go from an osd when it's out?
[17:37] <walcubi> Yes and no
[17:37] <walcubi> If it's slowed to a crawl, read latencies go through the roof
[17:38] <peetaur2_> ok I guess I know what you mean.... you mark it out, and it begins migrating, but new data and old data may still be saved/read from it while it's going out?
[17:38] <peetaur2_> I would expect they would only allow reads, and only if the other copies are missing/busy
[17:38] <walcubi> My guess is that there are PGs in backfill+waiting status
[17:38] <walcubi> Maybe I'll see better results once I've completed the migration.
[17:39] <walcubi> But it looks like the rate at which I'm writing images into Ceph is 3x what it's able to recover
[17:39] <walcubi> So the number of "misplaced" objects goes up rather than down
[17:40] <peetaur2_> did you try nobackfill?
[17:40] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[17:41] <walcubi> Only around 300M more images to go and all will be complete.
[17:41] <walcubi> peetaur2_, nobackfill?
[17:41] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[17:41] <peetaur2_> in my bpaste link which is the stuff I got from that presentation
[17:41] <walcubi> I don't think I can do that in my setup
[17:42] <peetaur2_> ok... why not? will I break my cluster if I try it? :D
[17:42] * peetaur2_ doesn't have a production cluster yet
[17:42] <walcubi> Well I am far too sensible to fly around with wings. :-P
[17:43] <walcubi> I don't have the luxury of having 2x the number of servers for the amount of data I'm storing.
[17:44] <peetaur2_> do you mean you have size 1?
[17:44] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[17:44] <walcubi> Exactly. :-D
[17:45] <peetaur2_> ah, well that's a different use case :)
[17:45] <peetaur2_> what do you do when disks go down?
[17:45] <walcubi> well clients have a very sensitive timeout.
[17:46] <walcubi> But by and large, in our old setup, we've been using the same SSDs for 3 years now.
[17:46] <peetaur2_> maybe you can add some hdds and size 2 during rebuild (and slow it down terribly :D)
[17:46] <walcubi> Yeah, I had a look, and that's just not doable for the moment.
[17:47] <walcubi> The way we are, there's no locality of reference in what gets read from storage.
[17:47] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Remote host closed the connection)
[17:47] <walcubi> And we need to cope with at least 1000 stat() requests per disk per second.
[17:48] <peetaur2_> well there are always dirty hacks.... like kill the osd, replace its disk with a raid1 with that disk + missing, and start osd again .... then add the new ssd in there...when mirroring is complete, remove the old one :)
[17:48] <peetaur2_> that would be some quick downtime
[17:49] * kuku (~kuku@112.202.168.251) Quit (Remote host closed the connection)
[17:49] <peetaur2_> gtg cya
[17:50] * peetaur2_ is now known as peetaur2
[17:51] <walcubi> Perhaps it'll get better with bluestore.
[17:54] <walcubi> Don't know how space effective the new system will be, but with the current filestore backend, it's about a factor of 2:1.
[17:54] <walcubi> 2 physical data, 1 internal filesystem baggage.
[17:55] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[17:55] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[17:56] <walcubi> And I would like some of that 100GB of internal filesystem baggage back.
[17:56] <walcubi> x30 disks.
[17:58] * ntpttr_ (~ntpttr@134.134.139.83) Quit (Remote host closed the connection)
[17:59] * dgurtner_ (~dgurtner@178.197.234.12) Quit (Ping timeout: 480 seconds)
[18:03] * tsg__ (~tgohad@192.55.54.43) has joined #ceph
[18:08] * tsg_ (~tgohad@134.134.139.77) Quit (Remote host closed the connection)
[18:13] * borei1 (~dan@216.13.217.230) has joined #ceph
[18:19] * Skaag (~lunix@65.200.54.234) has joined #ceph
[18:25] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[18:26] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[18:33] * dneary (~dneary@80.169.137.53) Quit (Ping timeout: 480 seconds)
[18:40] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:41] * tokie (~Jebula@46.166.186.250) has joined #ceph
[18:51] * squizzi_ (~squizzi@2001:420:2240:1268:ad85:b28:ee1c:890) has joined #ceph
[18:53] * tokie (~Jebula@2RTAAAE14.tor-irc.dnsbl.oftc.net) Quit (Ping timeout: 480 seconds)
[18:56] * praveen_ (~praveen@121.244.155.11) Quit (Remote host closed the connection)
[18:56] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[19:00] * squizzi (~squizzi@107.13.237.240) Quit (Quit: bye)
[19:01] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[19:04] * sleinen1 (~Adium@2001:620:0:2d:a65e:60ff:fedb:f305) has joined #ceph
[19:04] * sleinen1 (~Adium@2001:620:0:2d:a65e:60ff:fedb:f305) Quit ()
[19:07] * tsg__ (~tgohad@192.55.54.43) Quit (Remote host closed the connection)
[19:10] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[19:10] * TMM (~hp@185.5.121.201) Quit (Quit: Ex-Chat)
[19:10] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[19:11] * sleinen (~Adium@macsl.switch.ch) Quit (Ping timeout: 480 seconds)
[19:12] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[19:17] * tsg__ (~tgohad@jfdmzpr05-ext.jf.intel.com) has joined #ceph
[19:25] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[19:27] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[19:38] * shaunm (~shaunm@cpe-192-180-17-174.kya.res.rr.com) Quit (Ping timeout: 480 seconds)
[19:40] * allenmelon (~cyphase@tsn109-201-154-179.dyn.nltelcom.net) has joined #ceph
[19:41] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[19:42] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[19:46] * praveen_ (~praveen@122.172.122.231) has joined #ceph
[19:49] * sleinen (~Adium@2001:620:1000:4:a65e:60ff:fedb:f305) has joined #ceph
[19:54] * TMM (~hp@dhcp-077-248-009-229.chello.nl) has joined #ceph
[19:56] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[19:59] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[20:02] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[20:03] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[20:09] * allenmelon (~cyphase@tsn109-201-154-179.dyn.nltelcom.net) Quit ()
[20:11] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[20:12] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[20:19] * Behedwin (~Mattress@tsn109-201-154-179.dyn.nltelcom.net) has joined #ceph
[20:26] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[20:31] * rakeshgm (~rakesh@106.51.28.220) has joined #ceph
[20:32] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[20:33] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[20:39] * davidzlap (~Adium@2605:e000:1313:8003:b442:f2b6:b5d2:a6fa) has joined #ceph
[20:41] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[20:46] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[20:49] * Behedwin (~Mattress@tsn109-201-154-179.dyn.nltelcom.net) Quit ()
[21:08] * rakeshgm (~rakesh@106.51.28.220) Quit (Quit: Peace :))
[21:08] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[21:12] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[21:15] * mykola (~Mikolaj@91.245.74.66) has joined #ceph
[21:18] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) Quit (Ping timeout: 480 seconds)
[21:19] * bene2 (~bene@2601:193:4101:f410:ea2a:eaff:fe08:3c7a) Quit (Quit: Konversation terminated!)
[21:26] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:405b:e9e5:f08e:901e) Quit (Ping timeout: 480 seconds)
[21:27] * rakeshgm (~rakesh@106.51.28.220) has joined #ceph
[21:27] * rakeshgm (~rakesh@106.51.28.220) Quit ()
[21:30] * blynch (~blynch@vm-nat.msi.umn.edu) has joined #ceph
[21:31] * shaunm (~shaunm@ms-208-102-105-216.gsm.cbwireless.com) has joined #ceph
[21:35] * diver__ (~diver@95.85.8.93) has joined #ceph
[21:37] * diver (~diver@95.85.8.93) Quit (Ping timeout: 480 seconds)
[21:44] * salwasser (~Adium@72.246.3.14) Quit (Quit: Leaving.)
[21:44] * kristen (~kristen@134.134.139.82) Quit (Quit: Leaving)
[21:45] * penguinRaider (~KiKo@50.115.124.86) Quit (Ping timeout: 480 seconds)
[21:46] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[21:59] * penguinRaider (~KiKo@14.139.82.6) has joined #ceph
[22:19] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[22:22] * jowilkin (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) Quit (Ping timeout: 480 seconds)
[22:29] * rwheeler (~rwheeler@202.62.94.195) Quit (Ping timeout: 480 seconds)
[22:29] * diver (~diver@216.85.162.34) has joined #ceph
[22:31] * diver_ (~diver@216.85.162.34) has joined #ceph
[22:33] * KindOne (kindone@h239.134.30.71.dynamic.ip.windstream.net) has joined #ceph
[22:35] * Jeffrey4l_ (~Jeffrey@119.251.244.248) has joined #ceph
[22:36] * diver__ (~diver@95.85.8.93) Quit (Ping timeout: 480 seconds)
[22:38] * diver (~diver@216.85.162.34) Quit (Ping timeout: 480 seconds)
[22:38] * Jeffrey4l (~Jeffrey@120.11.26.103) Quit (Ping timeout: 480 seconds)
[22:39] * diver_ (~diver@216.85.162.34) Quit (Ping timeout: 480 seconds)
[22:39] * mykola (~Mikolaj@91.245.74.66) Quit (Quit: away)
[22:42] * cathode (~cathode@50-232-215-114-static.hfc.comcastbusiness.net) has joined #ceph
[22:52] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[22:53] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[22:53] * jowilkin (~jowilkin@c-98-207-136-41.hsd1.ca.comcast.net) has joined #ceph
[22:57] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[22:59] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[23:11] * thansen (~thansen@17.253.sfcn.org) has joined #ceph
[23:13] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:21] * Racpatel (~Racpatel@2601:87:3:31e3::77ec) Quit (Quit: Leaving)
[23:25] * diver (~diver@95.85.8.93) has joined #ceph
[23:27] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[23:29] * Lunk2 (~Kurimus@64.ip-37-187-176.eu) has joined #ceph
[23:30] * diver (~diver@95.85.8.93) Quit ()
[23:30] * tsg_ (~tgohad@192.55.54.40) has joined #ceph
[23:31] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[23:33] * tsg__ (~tgohad@jfdmzpr05-ext.jf.intel.com) Quit (Remote host closed the connection)
[23:34] * tsg_ (~tgohad@192.55.54.40) Quit (Remote host closed the connection)
[23:42] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[23:43] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[23:46] * vata (~vata@207.96.182.162) Quit (Quit: Leaving.)
[23:47] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) has joined #ceph
[23:50] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[23:53] * ircuser-1 (~Johnny@158.183-62-69.ftth.swbr.surewest.net) has joined #ceph
[23:57] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[23:57] * squizzi_ (~squizzi@2001:420:2240:1268:ad85:b28:ee1c:890) Quit (Ping timeout: 480 seconds)
[23:58] * tsg_ (~tgohad@134.134.139.76) has joined #ceph
[23:58] * srk (~Siva@2605:6000:ed04:ce00:ac40:80be:c1b0:28cb) has joined #ceph
[23:59] * Lunk2 (~Kurimus@26XAABZNC.tor-irc.dnsbl.oftc.net) Quit ()

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.