#ceph IRC Log

IRC Log for 2014-09-05

Timestamps are in GMT/BST.

[0:00] <rkdemon> done keyring store.db upstart
[0:00] <rkdemon> lurbs: /var/lib/ceph/mon/ceph-ceph1 has the above things
[0:00] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[0:00] <lurbs> But ceph-a doesn't?
[0:01] <rkdemon> there isn't a ceph-a
[0:01] <lurbs> Ah.
[0:01] <rkdemon> I think the ceph.conf file is wrong
[0:01] <rkdemon> mon.a should be mon.ceph1 ?
[0:01] <lurbs> Yeah.
[0:01] <lurbs> Was just about to say.
[0:01] <rkdemon> :-)
[0:02] <rkdemon> It is my day 2 and I feel like I am getting into quicksand
[0:02] <rkdemon> So if I tear down my ceph cluster and restart with the ceph.conf I pasted .. you think I should get it up...
[0:02] * BManojlovic (~steki@37.19.108.8) has joined #ceph
[0:02] <rkdemon> The defaults for 768 will not be taken so I will need to issue those commands manually after a ceph install. Will that be ok?
[0:03] <lurbs> rkdemon: I'd try swapping out mon.a for mon.ceph1, etc, in ceph.conf and starting the daemons.
[0:03] <lurbs> Not sure why or how that got out of sync to begin with.
[0:05] <lurbs> Our config doesn't even contain those mon.X blocks BTW, so they may not even be required.
[0:07] <rkdemon> lurbs: I will get rid of it.. correct the latest ceph quick start does not have it.. another guide on the site had it so I dumped it for perhaps the ip address specification but it isn't required.. that info is redundant in the file
[0:11] <lurbs> Yeah, the 'mon_host' line should suffice.
[0:11] <rkdemon> lurbs: I changed /etc/ceph/ceph.conf on ceph1 and the service restarted fine
[0:11] <rkdemon> If I tear down the cluster and redo it .. I will need those manual config steps for the 768 defaults correct ?
[0:11] <rkdemon> lurbs: I restart the services etc.. unfortunately the ceph is stilll not healthy
[0:11] <lurbs> If you're using ceph-deploy then I think you can set them in the config file on the server you're running the deploy from, and then push it out to the nodes as you create them ('ceph-deploy config push $host', or something) so that they will be in effect for when the cluster is first installed.
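A minimal ceph.conf along the lines lurbs describes (no per-mon [mon.X] sections, just the mon_host line) might look like the sketch below. The fsid, hostnames and addresses are placeholders, and the pool-default lines are only a guess at what rkdemon's "768 defaults" refers to:

    [global]
    fsid = 00000000-0000-0000-0000-000000000000    # placeholder
    mon_initial_members = ceph1, ceph2, ceph3      # placeholder hostnames
    mon_host = 10.0.0.1, 10.0.0.2, 10.0.0.3        # placeholder addresses
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
    osd_pool_default_size = 2          # two replicas, as discussed above
    osd_pool_default_pg_num = 768      # assumption: the "768 defaults" rkdemon mentions
    osd_pool_default_pgp_num = 768

Pushed out from the admin node roughly as lurbs suggests, e.g. ceph-deploy --overwrite-conf config push ceph1 ceph2 ceph3, before the OSDs are created, so the defaults apply from the start.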
[0:11] <lurbs> That's not very nice of it. What does 'ceph health detail' say now?
[0:11] <rkdemon> lurbs: thanks.. I will try that
[0:11] <rkdemon> ceph health
[0:11] <rkdemon> HEALTH_WARN 896 pgs stuck unclean
[0:12] <rkdemon> pg 0.17e is stuck unclean since forever, current state active+remapped, last acting [20,10,9]
[0:12] * marrusl (~mark@2604:2000:60e3:8900:c044:9727:f15b:71a3) Quit (Quit: sync && halt)
[0:12] <rkdemon> its a scroll of such lines
[0:12] <lurbs> Looks like the OSD daemons need restarting.
[0:12] <lurbs> Dropping the replicas from 3 to 2 doesn't seem to stick properly until then.
[0:13] <rkdemon> service ceph-osd restart on each of the osd nodes
[0:13] <lurbs> ceph-osd-all, I think.
[0:14] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) has left #ceph
[0:14] <lurbs> That line is saying that placement group 0.17e is trying to use OSDs 20,10,9 for its three replicas, even though it should only now have two, and that the CRUSH map can't place data in those as they're not all on different nodes. Not sure why it requires a daemon restart to fix.
[0:14] <lurbs> That's probably a bug, actually.
[0:17] * BManojlovic (~steki@37.19.108.8) Quit (Remote host closed the connection)
[0:18] * BManojlovic (~steki@37.19.108.8) has joined #ceph
[0:18] <rkdemon> for i in {0..11}; do sudo service ceph-osd restart id=$i; done .. I just did this and for i in {11..23}; do sudo service ceph-osd restart id=$i; done
[0:19] <rkdemon> I don't have ceph-osd-all.. I am on dumpling.. is that a firefly feature.. ?
[0:19] <lurbs> Not sure, sorry.
[0:19] <rkdemon> lurbs: ceph health
[0:19] <rkdemon> HEALTH_OK
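Pulling the thread above together, the sequence that took the cluster from HEALTH_WARN to HEALTH_OK was roughly the sketch below; the pool name rbd is an assumption, since the log does not say which pools had their replica count dropped from 3 to 2:

    ceph osd pool set rbd size 2                                     # drop replicas to 2 (assumed pool name)
    for i in {0..23}; do sudo service ceph-osd restart id=$i; done   # restart every OSD (the Upstart syntax rkdemon used above)
    ceph health detail                                               # confirm the stuck-unclean PGs have cleared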
[0:20] <rkdemon> lurbs: Thank you so much.. I now need to tear it down and redo it !! HAHA!! BTW How did you arrive at our ceph prvice ?Do you develop ceph or gained experience through using it ?
[0:20] <lurbs> rkdemon: https://www.youtube.com/watch?v=xos2MnVxe-c
[0:20] <lurbs> Just a user.
[0:21] <rkdemon> hahahaha!!
[0:21] <rkdemon> Exaclty the sentiment!!
[0:21] <rkdemon> If u don't mind me asking.. did u take formal training anywhere ? Or the headbanging approach ?
[0:22] <rkdemon> lurbs; thank you .. you have been more than helpful and this has been the best help I have ever received on IRC!!
[0:22] <dmick> !norris lurbs
[0:22] <kraken> The original title for Alien vs. Predator was Alien and Predator vs lurbs. The film was cancelled shortly after going into preproduction. No one would pay nine dollars to see a movie fourteen seconds long.
[0:23] <lurbs> We had fghaas (who was in here earlier) come over for a training session quite some time ago, who did a couple of days of OpenStack/Ceph training, but since then it's been headbanging. :)
[0:23] <lurbs> rkdemon: You're welcome. Now I just need to figure out how to timesheet this. ;)
[0:24] <rkdemon> I am looking for training and I know it is not for advertising but a recommendation on how to get fghaas's to get me training and him/her to make a bunch of money from my company would be great (please pm that info if you don't mind)
[0:25] <lurbs> Done.
[0:26] * sreddy (~oftc-webi@32.97.110.56) Quit (Remote host closed the connection)
[0:28] * tab_ (~oftc-webi@89-212-99-37.dynamic.t-2.net) Quit (Remote host closed the connection)
[0:29] * steki (~steki@37.19.108.8) has joined #ceph
[0:29] <loicd> rkdemon: why not go to the source http://www.inktank.com/university/ ?
[0:31] <lurbs> We got fghaas in because we also wanted training in OpenStack at the same time - for pure Ceph then straight to Inktank is definitely worth considering.
[0:32] * BManojlovic (~steki@37.19.108.8) Quit (Ping timeout: 480 seconds)
[0:33] <loicd> there seems to be a ceph + openstack course also http://www.inktank.com/university/ceph120/ but maybe it did not exist at the time ?
[0:33] <lurbs> Wasn't aware of that one. Was quite some time ago though.
[0:34] * jobewan (~jobewan@snapp.centurylink.net) Quit (Quit: Leaving)
[0:36] <lurbs> loicd: Shout me a place on that course, and I'll tell you how it compares. ;)
[0:36] * rendar (~I@95.234.176.198) Quit ()
[0:36] * tab (~oftc-webi@89-212-99-37.dynamic.t-2.net) has joined #ceph
[0:37] * Cybertinus (~Cybertinu@2a00:6960:1:1:0:24:107:1) Quit (Remote host closed the connection)
[0:37] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[0:38] <loicd> :-)
[0:38] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[0:39] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) has joined #ceph
[0:45] * Cybert1nus (~Cybertinu@cybertinus.customer.cloud.nl) has joined #ceph
[0:45] * markbby1 (~Adium@168.94.245.2) Quit (Quit: Leaving.)
[0:48] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) Quit (Ping timeout: 480 seconds)
[0:50] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) has joined #ceph
[0:51] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[0:53] * Cybert1nus (~Cybertinu@cybertinus.customer.cloud.nl) Quit (Ping timeout: 480 seconds)
[0:55] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:56] * dmsimard is now known as dmsimard_away
[0:57] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[0:59] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) Quit (Ping timeout: 480 seconds)
[1:01] * yanzheng (~zhyan@171.221.139.239) has joined #ceph
[1:02] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) has joined #ceph
[1:05] * LeaChim (~LeaChim@host86-135-182-184.range86-135.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[1:09] * fsimonce (~simon@host135-17-dynamic.8-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[1:10] * nitti (~nitti@c-66-41-30-224.hsd1.mn.comcast.net) has joined #ceph
[1:14] * oms101 (~oms101@p20030057EA3E6F00C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:15] * zack_dolby (~textual@p843a3d.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz...)
[1:18] <Pedras> join #noops
[1:18] <Pedras> :)
[1:21] <tab> how does Ceph solve power-failure case? Does it start replicate data to another rack or it waits for the operator to make a move?
[1:21] <tab> let's say we have two racks and one goes down due to power failure for some time
[1:22] <iggy> tab: after a certain interval, it automatically starts recovering
[1:22] <iggy> the interval is configurable and I think it's 120s by default
[1:22] <tab> so power failure is a durability issue for ceph? although data are still on disks within that rack, just temporarily not accessible?
[1:23] <iggy> how does ceph know it's a power failure and not a dead node?
[1:23] <tab> is this configurable per rack or per OSD?
[1:23] * oms101 (~oms101@p20030057EA395D00C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:23] * yanzheng (~zhyan@171.221.139.239) Quit (Quit: This computer has gone to sleep)
[1:23] <iggy> I think it's a global configurable
[1:23] <iggy> but don't quote me on that
[1:23] <lurbs> 'mon osd down out interval', defaults to 300.
[1:24] <tab> what do you mean by global?
[1:24] <iggy> ^ 300
[1:24] * yanzheng (~zhyan@171.221.139.239) has joined #ceph
[1:25] <tab> but does Ceph then check on the first rack, before it starts replicating data, whether it will have enough space to do that automatically?
[1:25] <iggy> no
[1:25] <jcsp> when an OSD stops responding (could be because of power, dead node, whatever), ceph marks it 'down'. At this stage the data is usually still accessible from other replicas (default 3-way replication). After some time ('mon osd down out interval') the OSD is marked as 'out', at which point ceph starts making new copies in case something else fails.
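The interval jcsp and lurbs refer to is an ordinary ceph.conf option; a sketch, assuming it is set cluster-wide under [global] (which matches iggy's guess that it is a global setting rather than per rack or per OSD):

    [global]
    mon osd down out interval = 300    # seconds a 'down' OSD waits before being marked 'out'; 300 is the default lurbs cites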
[1:25] <joshd1> carmstrong: that hang on the host may have been http://tracker.ceph.com/issues/8818, which started occurring in 3.15 (fixed in the stable kernel trees now). it's certainly separate from the issue inside the container, so it seems there'd need to be extra debugging added to the rbd module to figure out what the EINVAL is coming from
[1:25] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) Quit (Quit: leaving)
[1:25] <carmstrong> joshd1: gotcha. thanks for all your help
[1:25] <carmstrong> I'm going down the route of just using the radosgw for now for blob storage
[1:25] <carmstrong> and we'll revisit the RBD volume in the future
[1:27] <tab> on what basis Ceph decides when OSD is bad? is there any script that parse for interesting disk trouble notes from kernel logs?
[1:28] <tab> and it than moves osd out of service somehow?
[1:29] <joshd1> carmstrong: you're welcome, that makes sense for now. I'll add a bug about the container issue
[1:31] <lurbs> I've just added a new node into the cluster, and have started using 'ceph osd crush reweight' to shuffle data on the drives. A weird thing happened, though.
[1:31] <lurbs> After having put the weight on the first drive up to its final value, the disk still only contained a small subset of the data, and was part of only a fraction of the number of PGs that I'd expect.
[1:31] <carmstrong> joshd1: thanks. shoot me a link when it's open so I can subscribe to it, please
[1:32] <lurbs> But then when I increased the weight on the second drive, it also increased (by a factor of at least 2) the number of PGs that contained the first drive.
[1:32] * rkdemon (~rkdemon@pool-71-244-62-208.dllstx.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[1:32] <lurbs> Default CRUSH map, BTW.
[1:36] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[1:37] <joshd1> carmstrong: http://tracker.ceph.com/issues/9355
[1:38] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[1:38] <carmstrong> joshd1: thanks! watching that issue
[1:38] * yanzheng (~zhyan@171.221.139.239) Quit (Quit: This computer has gone to sleep)
[1:39] <carmstrong> joshd1: I also commented on 8818, in case anyone else with my kernel and coreos stumbles across it
[1:41] * nitti (~nitti@c-66-41-30-224.hsd1.mn.comcast.net) Quit (Remote host closed the connection)
[1:42] <joshd1> good idea, thanks!
[1:48] * steki (~steki@37.19.108.8) Quit (Ping timeout: 480 seconds)
[1:48] <flaf> Hi, I'm testing cephfs and I have questions. One Ceph cluster can provide juste one cephfs, is it correct?
[1:55] * thomnico (~thomnico@2a01:e35:8b41:120:d4d5:4c7d:6707:1912) Quit (Quit: Ex-Chat)
[1:56] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[2:02] * monsterzz (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[2:11] * joshd1 (~jdurgin@2602:306:c5db:310:6d90:cc4b:79c0:1eb8) Quit (Quit: Leaving.)
[2:16] <flaf> Second question: when I create my cephfs in my cluster, can I choose the size of my cephfs (100G, 500G etc.)?
[2:17] <carmstrong> can I have multiple radosgw machines, and load balance them with nginx upstreams? I assume so based on the namespacing in the config file (i.e. [client.radosgw.{instance-name}])
[2:20] * dneary (~dneary@96.237.180.105) has joined #ceph
[2:22] * yanzheng (~zhyan@171.221.139.239) has joined #ceph
[2:27] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[2:29] * monsterz_ (~monsterzz@94.19.146.224) has joined #ceph
[2:29] * monsterzz (~monsterzz@94.19.146.224) Quit (Read error: Connection reset by peer)
[2:29] * cronix1 (~cronix@5.199.139.166) Quit (Read error: Operation timed out)
[2:30] * cronix1 (~cronix@5.199.139.166) has joined #ceph
[2:36] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) has joined #ceph
[2:37] * monsterz_ (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[2:37] <flaf> Ok, sorry, for my first question I have found the answer in the dev documentation.
[2:37] <flaf> http://ceph.com/docs/master/cephfs/createfs/
[2:38] <flaf> -> "at present only one filesystem may exist at a time"
[2:39] * tab (~oftc-webi@89-212-99-37.dynamic.t-2.net) Quit (Remote host closed the connection)
[2:39] * rmoe (~quassel@12.164.168.117) Quit (Read error: Operation timed out)
[2:44] * KevinPerks (~Adium@2606:a000:80a1:1b00:80d5:8f07:a8ea:5c4d) Quit (Quit: Leaving.)
[2:53] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[2:55] * xarses (~andreww@12.164.168.117) Quit (Read error: Operation timed out)
[2:56] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[3:00] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[3:04] * monsterzz (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[3:10] * lucas1 (~Thunderbi@222.247.57.50) has joined #ceph
[3:13] * zack_dolby (~textual@e0109-114-22-3-142.uqwimax.jp) has joined #ceph
[3:15] * zerick (~eocrospom@190.187.21.53) Quit (Read error: Operation timed out)
[3:18] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[3:21] * steveeJ (~junky@HSI-KBW-085-216-022-246.hsi.kabelbw.de) Quit (Quit: Leaving)
[3:22] * lucas1 (~Thunderbi@222.247.57.50) Quit (Quit: lucas1)
[3:32] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[3:34] * zhaochao (~zhaochao@111.204.252.1) has joined #ceph
[3:35] * Sysadmin88 (~IceChat77@94.8.80.73) Quit (Ping timeout: 480 seconds)
[3:41] * dneary (~dneary@96.237.180.105) Quit (Ping timeout: 480 seconds)
[3:53] * jtaguinerd (~jtaguiner@112.205.18.40) has joined #ceph
[3:57] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[4:05] * monsterzz (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[4:09] * adamcrume (~quassel@2601:9:6680:47:d90c:73bf:4474:dda6) Quit (Remote host closed the connection)
[4:16] * JayJ__ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[4:18] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[4:24] * rkdemon (~rkdemon@pool-71-244-62-208.dllstx.fios.verizon.net) has joined #ceph
[4:29] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[4:33] * jtaguinerd1 (~jtaguiner@203.215.116.66) has joined #ceph
[4:38] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Ping timeout: 480 seconds)
[4:39] * jdillaman (~jdillaman@pool-108-18-232-208.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[4:39] * jtaguinerd (~jtaguiner@112.205.18.40) Quit (Ping timeout: 480 seconds)
[4:41] * JayJ__ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit (Remote host closed the connection)
[4:41] * vbellur (~vijay@122.167.104.136) has joined #ceph
[4:42] * JayJ__ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[4:42] * jdillaman (~jdillaman@pool-108-18-232-208.washdc.fios.verizon.net) has joined #ceph
[4:58] * JayJ__ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit (Quit: Computer has gone to sleep.)
[5:01] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[5:02] * hijacker (~hijacker@bgva.sonic.taxback.ess.ie) Quit (Read error: Connection timed out)
[5:03] * hijacker (~hijacker@bgva.sonic.taxback.ess.ie) has joined #ceph
[5:08] * lucas1 (~Thunderbi@222.240.148.130) has joined #ceph
[5:09] * bandrus1 (~Adium@216.57.72.205) has joined #ceph
[5:11] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[5:20] * vbellur (~vijay@122.167.104.136) Quit (Ping timeout: 480 seconds)
[5:21] * lalatenduM (~lalatendu@122.172.34.85) has joined #ceph
[5:24] * lucas1 (~Thunderbi@222.240.148.130) Quit (Quit: lucas1)
[5:24] * Vacuum_ (~vovo@i59F7AFE5.versanet.de) has joined #ceph
[5:31] * Vacuum (~vovo@i59F79388.versanet.de) Quit (Ping timeout: 480 seconds)
[5:34] * toabctl (~toabctl@toabctl.de) Quit (Remote host closed the connection)
[5:35] * KevinPerks (~Adium@2606:a000:80a1:1b00:74b8:4d15:f65a:14a) has joined #ceph
[5:35] * longguang (~chatzilla@123.126.33.253) has joined #ceph
[5:36] * toabctl (~toabctl@toabctl.de) has joined #ceph
[5:36] <longguang> how to let ceph refresh storage useage?
[5:38] * bandrus1 is now known as bandrus
[5:40] * zack_dolby (~textual@e0109-114-22-3-142.uqwimax.jp) Quit (Ping timeout: 480 seconds)
[5:45] * bkunal (~bkunal@121.244.87.115) has joined #ceph
[5:52] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[5:53] * saurabh (~saurabh@121.244.87.117) has joined #ceph
[5:56] * zack_dolby (~textual@e0109-114-22-3-142.uqwimax.jp) has joined #ceph
[5:58] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) Quit (Ping timeout: 480 seconds)
[5:58] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) has joined #ceph
[6:00] * jamespage (~jamespage@culvain.gromper.net) Quit (Quit: Coyote finally caught me)
[6:00] * jamespage (~jamespage@culvain.gromper.net) has joined #ceph
[6:00] * ircolle (~Adium@2601:1:a580:145a:8927:c0ac:8784:7f5a) has joined #ceph
[6:02] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[6:02] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[6:04] * yanzheng (~zhyan@171.221.139.239) Quit (Quit: This computer has gone to sleep)
[6:09] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) Quit (Read error: Connection reset by peer)
[6:09] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) has joined #ceph
[6:12] * ccheng (~ccheng@c-50-165-131-154.hsd1.in.comcast.net) has joined #ceph
[6:12] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) Quit (Remote host closed the connection)
[6:13] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) has joined #ceph
[6:16] * ccheng (~ccheng@c-50-165-131-154.hsd1.in.comcast.net) Quit ()
[6:17] * ccheng (~ccheng@c-50-165-131-154.hsd1.in.comcast.net) has joined #ceph
[6:17] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[6:19] * ircolle (~Adium@2601:1:a580:145a:8927:c0ac:8784:7f5a) Quit (Quit: Leaving.)
[6:20] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:20] * lalatenduM (~lalatendu@122.172.34.85) Quit (Quit: Leaving)
[6:24] * bandrus (~Adium@216.57.72.205) Quit (Quit: Leaving.)
[6:25] * Nacer (~Nacer@2001:41d0:fe82:7200:6986:3ba5:e2a1:e6b0) has joined #ceph
[6:30] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[6:31] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: No route to host)
[6:31] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[6:34] * madkiss (~madkiss@81.16.159.83) has joined #ceph
[6:34] * rkdemon (~rkdemon@pool-71-244-62-208.dllstx.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[6:39] * dgbaley27 (~matt@c-98-245-167-2.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[6:47] * rdas (~rdas@121.244.87.115) has joined #ceph
[6:48] * madkiss (~madkiss@81.16.159.83) Quit (Quit: Leaving.)
[7:03] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:05] * yanzheng (~zhyan@171.221.139.239) has joined #ceph
[7:10] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[7:14] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[7:15] * Nacer (~Nacer@2001:41d0:fe82:7200:6986:3ba5:e2a1:e6b0) Quit (Ping timeout: 480 seconds)
[7:17] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[7:20] * ccheng (~ccheng@c-50-165-131-154.hsd1.in.comcast.net) Quit (Remote host closed the connection)
[7:20] * ashishchandra (~ashish@49.32.0.175) has joined #ceph
[7:23] * lucas1 (~Thunderbi@222.240.148.130) has joined #ceph
[7:28] * lucas1 (~Thunderbi@222.240.148.130) Quit (Quit: lucas1)
[7:30] * lucas1 (~Thunderbi@218.76.25.66) has joined #ceph
[7:30] * bkunal (~bkunal@121.244.87.115) Quit (Read error: Operation timed out)
[7:33] <Xiol> hi guys, we've been doing some ceph and OS upgrades, and we had some CRUSH problems after restarting OSDs which caused some backfilling. We've corrected the CRUSH problems now, but the backfilling is still happening, which is fine, but I need to restart the rest of the OSDs in the cluster to upgrade to firefly and i'm coming to the end of my maintenance windows. Am I ok to restart the OSDs whilst i still have degraded PGs (replica size 2!) or should I wait u
[7:34] <Xiol> Excuse the terrible English etc, I've been awake nearly 24 hours now
[7:34] * PureNZ (~paul@122-62-45-132.jetstream.xtra.co.nz) has joined #ceph
[7:34] * PureNZ (~paul@122-62-45-132.jetstream.xtra.co.nz) has left #ceph
[7:37] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) has joined #ceph
[7:37] * peedu (~peedu@170.91.235.80.dyn.estpak.ee) has joined #ceph
[7:39] * peedu_ (~peedu@185.46.20.35) has joined #ceph
[7:39] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[7:43] * analbeard (~shw@host86-155-107-230.range86-155.btcentralplus.com) has joined #ceph
[7:45] * peedu (~peedu@170.91.235.80.dyn.estpak.ee) Quit (Ping timeout: 480 seconds)
[7:45] * pinoysk__ (~pinoyskul@121.54.54.145) has joined #ceph
[7:47] * Pedras (~Adium@50.185.218.255) Quit (Quit: Leaving.)
[7:47] <pinoysk__> anyone familiar on how to set this? http://paste.openstack.org/show/106290/
[7:47] * pinoysk__ is now known as pinoyskull_
[7:48] * bkunal (~bkunal@121.244.87.124) has joined #ceph
[7:54] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[7:55] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[7:59] <fghaas> pinoyskull_: that's a current bug in firefly
[8:00] <fghaas> it's fixed in the firefly *branch*, but hasn't seen a release yet
[8:00] * lucas1 (~Thunderbi@218.76.25.66) Quit (Quit: lucas1)
[8:00] <pinoyskull_> got it
[8:01] <fghaas> I just ran into this last week; see http://irclogs.ceph.widodh.nl/index.php?date=2014-08-27 for context
[8:01] <fghaas> so, sadly, tell osd.X bench is currently non-functional :(
[8:02] <pinoyskull_> another question, my ceph has a slow write performance, my setup is raid1 SSD for journal, 7.2k rpm 4TB OSDs x 9 on 1 ceph node, i have 3, and the ceph network is 20GB
[8:03] <fghaas> so first of all drop that SSD for your journal
[8:03] <fghaas> bah, nonsense
[8:04] <fghaas> s/SSD/SSD RAID-1/
[8:04] <fghaas> IOW, don't put your SSDs into a RAID is what I meant to say
[8:05] <fghaas> but other than that, I've heard "my ceph has a slow write performance" so many times from people who are just misinterpreting their benchmark results that it ain't even funny :)
[8:05] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[8:06] * ashishchandra (~ashish@49.32.0.175) Quit (Ping timeout: 480 seconds)
[8:07] * michalefty (~micha@p20030071CF183000CC0DF21572B9B31A.dip0.t-ipconnect.de) has joined #ceph
[8:07] * michalefty (~micha@p20030071CF183000CC0DF21572B9B31A.dip0.t-ipconnect.de) has left #ceph
[8:08] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[8:10] <cooldharma06> hi all
[8:11] <cooldharma06> i am facing following error when running this command -> ceph-deploy osd activate node2:/path node3:/path
[8:11] * analbeard (~shw@host86-155-107-230.range86-155.btcentralplus.com) Quit (Quit: Leaving.)
[8:12] <cooldharma06> error - > http://pastebin.com/jTCFcBrq
[8:15] * lucas1 (~Thunderbi@222.240.148.154) has joined #ceph
[8:15] <pinoyskull_> fghaas: the reason we put the SSD to raid1 is for it to have fault tolerance
[8:16] <fghaas> you don't need fault tolerance for your journal
[8:17] <fghaas> but if you're putting a whopping total of 9 journals on a single SSD (which is what you do if it's two SSDs in a RAID-1), I can guarantee you you're going to bottleneck on your journal device
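A sketch of the layout fghaas is arguing for: rather than a RAID-1 pair holding all nine journals, give each SSD its own share of journals using ceph-deploy's host:data:journal syntax. All device names here are hypothetical:

    # first SSD (/dev/sda) takes the journals for four spinners
    ceph-deploy osd prepare node1:sdc:/dev/sda node1:sdd:/dev/sda node1:sde:/dev/sda node1:sdf:/dev/sda
    # second SSD (/dev/sdb) takes the journals for the remaining five
    ceph-deploy osd prepare node1:sdg:/dev/sdb node1:sdh:/dev/sdb node1:sdi:/dev/sdb node1:sdj:/dev/sdb node1:sdk:/dev/sdb

Losing one SSD this way only takes out the OSDs journaled on it, and Ceph's own replication covers the rest, which is fghaas's point about not needing RAID for the journal.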
[8:20] * ashishchandra (~ashish@49.32.0.167) has joined #ceph
[8:20] * cronix1 (~cronix@5.199.139.166) Quit (Ping timeout: 480 seconds)
[8:21] <cooldharma06> any suggestions.?
[8:28] * green_man (~green_man@129.94.63.39) has joined #ceph
[8:28] * b0e (~aledermue@213.95.25.82) has joined #ceph
[8:28] <Kioob> cooldharma06: if I understand the output correctly, you force a different path than the standard one (/var/local/osd1 vs /var/lib/ceph/osd/ceph-1), and somewhere in the process ceph-disk switches to the standard one
[8:29] <Kioob> maybe you have to put this non standard path in ceph.conf ?
[8:30] <cooldharma06> i am following this guide and i am newbie, making experiments with ceph
[8:30] <cooldharma06> http://ceph.com/docs/master/start/quick-ceph-deploy/
[8:30] <Kioob> then you should probably not change the standard path
[8:30] * vbellur (~vijay@121.244.87.117) has joined #ceph
[8:30] * mathias (~mathias@p5083D74A.dip0.t-ipconnect.de) has joined #ceph
[8:31] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) Quit (Ping timeout: 480 seconds)
[8:32] * fghaas (~florian@zid-vpnn044.uibk.ac.at) has joined #ceph
[8:32] * dgurtner (~dgurtner@125-236.197-178.cust.bluewin.ch) has joined #ceph
[8:34] * cronix1 (~cronix@5.199.139.166) has joined #ceph
[8:35] * peedu (~peedu@170.91.235.80.dyn.estpak.ee) has joined #ceph
[8:36] <cooldharma06> no i am not getting. and my ceph.conf is - > http://pastebin.com/2hqL3032
[8:37] <cooldharma06> can u explain me clearly.
[8:40] <mathias> I lost my admin node from which I ran ceph-deploy. Now I want to create a new mon with ceph-deploy mon add and it complains about not finding the mon keyring. I figured its probably the file mon01:/var/lib/ceph/mon/ceph-mon01/keyring but there is no parameter for tell ceph-deploy its now there. Any naming convention the file needs to follow for ceph-deploy to recognize?
[8:41] * Kioob (~Kioob@2a01:e34:ec0a:c0f0:21e:8cff:fe07:45b6) Quit (Quit: Leaving.)
[8:41] * Kioob (~Kioob@2a01:e34:ec0a:c0f0:21e:8cff:fe07:45b6) has joined #ceph
[8:41] * singler (~singler@zeta.kirneh.eu) Quit (Quit: leaving)
[8:41] <Kioob> cooldharma06: try with using /var/lib/ceph/osd/ceph-1/ instead of /var/local/osd1 when you call ceph-deploy
[8:42] * monsterzz (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[8:42] * peedu_ (~peedu@185.46.20.35) Quit (Ping timeout: 480 seconds)
[8:43] <cooldharma06> for ceph-0 which path i have to use /var/local or /var/lib
[8:45] <Kioob> what I'm saying is that, from your log, it seems that ceph-deploy doesn't properly handle this non-standard path. So an easy way to check this is by trying with the standard one.
[8:46] <Kioob> But if you don't want to, well, I can't help you. But maybe someone who uses "ceph-deploy" can.
[8:47] * lcavassa (~lcavassa@89.184.114.246) has joined #ceph
[8:51] * singler (~singler@178.62.28.20) has joined #ceph
[8:53] * KevinPerks (~Adium@2606:a000:80a1:1b00:74b8:4d15:f65a:14a) Quit (Quit: Leaving.)
[8:54] * green_man (~green_man@129.94.63.39) Quit (Quit: Leaving)
[8:54] * thomnico (~thomnico@2a01:e35:8b41:120:d4d5:4c7d:6707:1912) has joined #ceph
[8:55] * green_man (~green_man@129.94.63.39) has joined #ceph
[8:55] * lucas1 (~Thunderbi@222.240.148.154) Quit (Quit: lucas1)
[8:56] <cooldharma06> kioob now also some error -> http://pastebin.com/BVTVD8zQ
[8:57] <Kioob> "ceph-disk: Error: No cluster conf found in /etc/ceph with fsid 62c60864-f3b6-41b4-99ab-7c4d0bf1acad"
[9:05] <cooldharma06> oh i have to run this one -> ceph-deploy mon create in node2 .?
[9:06] * boichev (~boichev@213.169.56.130) Quit (Quit: Nettalk6 - www.ntalk.de)
[9:08] * garphy`aw is now known as garphy
[9:15] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[9:16] * ashishchandra (~ashish@49.32.0.167) Quit (Ping timeout: 480 seconds)
[9:17] <cooldharma06> oh sorry kioob, i found it, i had not run gatherkeys. sorry, my mistake only
[9:19] * analbeard (~shw@support.memset.com) has joined #ceph
[9:26] * mgarcesMZ (~mgarces@5.206.228.5) has joined #ceph
[9:28] * ashishchandra (~ashish@49.32.0.227) has joined #ceph
[9:30] * vbellur (~vijay@121.244.87.124) has joined #ceph
[9:32] * cok (~chk@2a02:2350:18:1012:8c85:a2d5:18fe:9487) has joined #ceph
[9:33] * fsimonce (~simon@host135-17-dynamic.8-79-r.retail.telecomitalia.it) has joined #ceph
[9:34] * steki (~steki@91.195.39.5) has joined #ceph
[9:35] <mgarcesMZ> hi
[9:37] * rendar (~I@host174-179-dynamic.7-87-r.retail.telecomitalia.it) has joined #ceph
[9:38] * dgurtner (~dgurtner@125-236.197-178.cust.bluewin.ch) Quit (Read error: Connection reset by peer)
[9:39] * pinoyskull_ (~pinoyskul@121.54.54.145) Quit (Ping timeout: 480 seconds)
[9:42] * hyperbaba (~hyperbaba@mw-at-rt-nat.mediaworksit.net) has joined #ceph
[9:44] * Georgyo (~georgyo@shamm.as) Quit (Quit: No Ping reply in 180 seconds.)
[9:45] * vbellur (~vijay@121.244.87.124) Quit (Read error: Operation timed out)
[9:46] * phantomcircuit (~phantomci@2600:3c01::f03c:91ff:fe73:6892) Quit (Ping timeout: 480 seconds)
[9:46] * Andreas-IPO_ (~andreas@2a01:2b0:2000:11::cafe) has joined #ceph
[9:46] * phantomcircuit (~phantomci@2600:3c01::f03c:91ff:fe73:6892) has joined #ceph
[9:46] * Georgyo (~georgyo@shamm.as) has joined #ceph
[9:49] * Andreas-IPO (~andreas@2a01:2b0:2000:11::cafe) Quit (Ping timeout: 480 seconds)
[9:49] * singler (~singler@178.62.28.20) Quit (Quit: leaving)
[9:50] * singler (~singler@178.62.28.20) has joined #ceph
[9:53] * dgurtner (~dgurtner@217.192.177.51) has joined #ceph
[9:57] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz...)
[9:58] * vbellur (~vijay@121.244.87.117) has joined #ceph
[10:00] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[10:06] * michalefty1 (~micha@pD9E0372A.dip0.t-ipconnect.de) has joined #ceph
[10:08] * tab (~oftc-webi@194.249.247.164) has joined #ceph
[10:09] <tab> Mon config reference: "When a Ceph Storage Cluster gets close to its maximum capacity (i.e., mon osd full ratio), Ceph prevents you from writing to or reading from Ceph OSD Daemons as a safety measure to prevent data loss. " Why is good to protect cluster from reading in this case?
[10:10] * bkunal (~bkunal@121.244.87.124) Quit (Ping timeout: 480 seconds)
[10:10] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Read error: Operation timed out)
[10:13] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[10:14] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz...)
[10:16] * branto (~borix@ip-213-220-214-245.net.upcbroadband.cz) has joined #ceph
[10:20] * bkunal (~bkunal@121.244.87.115) has joined #ceph
[10:20] * yanzheng (~zhyan@171.221.139.239) Quit (Quit: This computer has gone to sleep)
[10:22] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[10:22] * karnan (~karnan@121.244.87.117) has joined #ceph
[10:24] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[10:29] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[10:37] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[10:38] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[10:40] * darkling (~hrm@00012bd0.user.oftc.net) has joined #ceph
[10:42] * rkdemon (~rkdemon@pool-71-244-62-208.dllstx.fios.verizon.net) has joined #ceph
[10:47] * ashishchandra (~ashish@49.32.0.227) Quit (Ping timeout: 480 seconds)
[10:52] * blackmen (~Ajit@121.244.87.115) has joined #ceph
[11:00] * zack_dolby (~textual@e0109-114-22-3-142.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz...)
[11:00] * ashishchandra (~ashish@49.32.0.247) has joined #ceph
[11:06] <stefano> q
[11:07] <stefano> oops sorry
[11:08] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[11:10] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[11:14] * jtaguinerd1 (~jtaguiner@203.215.116.66) Quit (Quit: Leaving.)
[11:15] * madkiss (~madkiss@chaoscdn63.syseleven.net) has joined #ceph
[11:15] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[11:16] * mgarcesMZ (~mgarces@5.206.228.5) Quit (Quit: mgarcesMZ)
[11:16] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[11:17] * bkunal (~bkunal@121.244.87.115) Quit (Ping timeout: 480 seconds)
[11:20] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[11:22] * linjan (~linjan@176.195.196.165) has joined #ceph
[11:22] * rkdemon (~rkdemon@pool-71-244-62-208.dllstx.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[11:23] * vbellur (~vijay@121.244.87.124) has joined #ceph
[11:23] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) Quit (Ping timeout: 480 seconds)
[11:23] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[11:26] * jtaguinerd (~jtaguiner@203.215.116.66) has joined #ceph
[11:27] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) has joined #ceph
[11:30] * bkunal (~bkunal@121.244.87.124) has joined #ceph
[11:32] * jtaguinerd (~jtaguiner@203.215.116.66) Quit (Quit: Leaving.)
[11:34] <cooldharma06> i am following this guide for the setup with 2 nodes -> http://ceph.com/docs/master/start/quick-ceph-deploy/
[11:36] <cooldharma06> and i am getting error when 'ceph-deploy osd activate' and my error is -> http://pastebin.com/7tM4MaUH
[11:47] <cooldharma06> can i install like this mon, osd0,admin-node,ceph-deploy in one node and osd1 in another node.
[11:53] * shyu (~shyu@203.114.244.88) has joined #ceph
[11:53] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[11:54] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[11:57] * yanzheng (~zhyan@171.221.139.239) has joined #ceph
[11:58] * Nacer (~Nacer@37.161.109.48) has joined #ceph
[11:59] * michalefty1 (~micha@pD9E0372A.dip0.t-ipconnect.de) has left #ceph
[12:01] * Nacer (~Nacer@37.161.109.48) Quit (Remote host closed the connection)
[12:07] * yanzheng (~zhyan@171.221.139.239) Quit (Quit: This computer has gone to sleep)
[12:10] * yanzheng (~zhyan@171.221.139.239) has joined #ceph
[12:12] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[12:12] * b0e (~aledermue@213.95.25.82) Quit (Ping timeout: 480 seconds)
[12:14] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (Ping timeout: 480 seconds)
[12:15] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[12:15] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[12:15] * madkiss (~madkiss@chaoscdn63.syseleven.net) Quit (Quit: Leaving.)
[12:15] * madkiss (~madkiss@chaoscdn63.syseleven.net) has joined #ceph
[12:22] * AbyssOne is now known as a1-away
[12:22] * rotbart (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[12:25] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Ping timeout: 480 seconds)
[12:27] * yanzheng (~zhyan@171.221.139.239) Quit (Quit: This computer has gone to sleep)
[12:32] * b0e (~aledermue@213.95.25.82) has joined #ceph
[12:32] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[12:32] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[12:33] * joerocklin (~joe@cpe-65-185-149-56.woh.res.rr.com) Quit (Ping timeout: 480 seconds)
[12:34] * cok (~chk@2a02:2350:18:1012:8c85:a2d5:18fe:9487) has left #ceph
[12:39] * karnan (~karnan@121.244.87.117) Quit (Ping timeout: 480 seconds)
[12:40] * joerocklin (~joe@cpe-65-185-149-56.woh.res.rr.com) has joined #ceph
[12:42] * karnan (~karnan@121.244.87.117) has joined #ceph
[12:43] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[12:44] * shyu (~shyu@203.114.244.88) Quit (Quit: Leaving)
[12:44] * shyu (~shyu@203.114.244.88) has joined #ceph
[12:44] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit ()
[12:46] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[12:47] * peedu_ (~peedu@185.46.20.35) has joined #ceph
[12:48] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[12:49] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit ()
[12:53] * peedu (~peedu@170.91.235.80.dyn.estpak.ee) Quit (Ping timeout: 480 seconds)
[13:00] * steki (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[13:00] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[13:02] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[13:03] <mathias> I dont know what caused it but running ceph -w now results in a connection timeout. How do I debug this?
[13:06] * vbellur (~vijay@121.244.87.124) Quit (Read error: Operation timed out)
[13:08] * i_m (~ivan.miro@deibp9eh1--blueice4n1.emea.ibm.com) has joined #ceph
[13:13] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Ping timeout: 480 seconds)
[13:13] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[13:14] <tab> on what basis Ceph decides if disk is bad and automatically removes it from the operation?
[13:14] <tab> is there any log parsing script that detects possible disk errors that are written to kernel/messages log?
[13:15] <mathias> tab: I didnt read that ceph actually does that - where did you get that from?
[13:16] <joao> I'm pretty sure that's not a feature
[13:16] <tab> ok i am just asking/guessing. So how does it then decide which OSD is bad? Purely on current response?
[13:17] <joao> there were discussions about doing that a while ago but general conclusion was "that's to be detected and managed by external tools"
[13:17] <tab> aha ok thx to both of you.
[13:17] <joao> scrub will detect crc mismatch across pgs
[13:17] <joao> and will mark pgs as corrupt
[13:18] <tab> maybe also this question for you two
[13:18] <tab> Mon config reference: "When a Ceph Storage Cluster gets close to its maximum capacity (i.e., mon osd full ratio), Ceph prevents you from writing to or reading from Ceph OSD Daemons as a safety measure to prevent data loss. " Why is good to protect cluster from reading in this case?
[13:18] <joao> but it will fall to the operator to do something about it (e.g., repair)
[13:18] * vbellur (~vijay@121.244.87.117) has joined #ceph
[13:18] <joao> that's a different thing
[13:18] <joao> monitors need available disk space to write new epochs of maps
[13:19] <joao> you run out of disk space, you may end up with corrupt maps or missing maps
[13:19] <joao> corrupt maps or missing maps *may* lead to chaos
[13:19] <tab> so even reading is not possible due to this epochs?
[13:19] <joao> there's no switch, to my knowledge, to have the cluster working on a read-only basis
[13:20] <joao> even if that were possible, there's a lot of things that could go wrong
[13:20] <tab> aha. is that also due to strongly consistency operation of Ceph?
[13:20] <joao> for instance, if an osd would happen to fail during that window, and pgs needed to be moved to maintain replication size
[13:21] * madkiss (~madkiss@chaoscdn63.syseleven.net) Quit (Quit: Leaving.)
[13:21] <joao> we would need to generate a new map epoch, but the monitors would not be able to generate said map epoch
[13:21] <joao> (they're read-only because they lack the disk space to commit that map to disk)
[13:21] * scalability-junk (sid6422@id-6422.ealing.irccloud.com) has joined #ceph
[13:21] <joao> what would happen next is beyond me
[13:22] <joao> probably we'd end up waiting to shuffle data around, which would reduce the amount of replicas if osds kept on failing
[13:23] * mgarcesMZ (~mgarces@5.206.228.5) has joined #ceph
[13:23] <tab> aha ok. that is than something to consider :)
[13:23] <joao> this is not directly related with strong consistency, but I think that it helps maintaining it
[13:24] <joao> simply by not allowing new map epochs if the cluster may not be able to take it ensures that strong consistency is maintained (unlike allowing epochs without being sure if they'd reach a stable state in proper condition)
[13:25] <joao> but aside from that, this is more like a way to avoid dealing with issues we don't really have a good answer for
[13:25] <joao> the monitors will let you know however if they're running out of disk space in advance
[13:25] <joao> they won't just shutdown out of the blue
[13:26] <tab> yes. that is configurable
[13:26] <joao> oh wait
[13:26] <joao> I totally read that the wrong way
[13:26] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[13:26] <joao> that's about mon osd full ratio, I thought it was about the mon data avail stuff
[13:27] <joao> that's different
[13:27] <tab> ok. i listen :)
[13:27] <joao> but mostly the same thing, except that you don't have nodes committing suicide (as the mons would)
[13:28] <joao> I actually thought that would allow reads from the osd, but I may be wrong
[13:29] <tab> i was reading this: http://ceph.com/docs/master/rados/configuration/mon-config-ref/
[13:29] <tab> actual this: http://ceph.com/docs/master/rados/configuration/mon-config-ref/#storage-capacity
[13:29] <joao> the whole purpose of that is that, given that a given osd is responsible for keeping replicas of pgs, if you have a pg full you will not be able to modify its replicas
[13:30] <joao> therefore, you must not be able to modify any replicas of those objects, given you can't guarantee the replication level of said replicas (given one of the osds won't be able to write the updates to disk)
[13:30] <tab> writing part i understood, but reading was not clear for me, why is this also prevented in this case
[13:30] <joao> you either shuffle pgs out of that osd (setting weight should do it I think), or add a new osd or remove said osd
[13:31] * diegows (~diegows@190.190.5.238) has joined #ceph
[13:31] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[13:31] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[13:32] <joao> that I'm not sure
[13:32] <tab> ok
[13:32] <joao> may have something to do with shuffling data to accommodate a new replica on some other osd
[13:34] <tab> I think also that somehow possible disk failure (based on disk kernel logs) should be notified to operator of the cluster in advance, so that someone has the time to prepare in to gradualy remove OSD device and add into operation new OSD....
[13:34] <joao> although, even reading will generate new map epochs and the osd will want those map epochs too, and may want to write them to disk; and not having enough disk space would certainly trigger a failure while writing those maps, so that could be it
[13:34] <joao> but I'm just theorizing about it
[13:35] <tab> ok that's great yes
[13:35] * cok (~chk@2a02:2350:1:1203:15d8:95e2:32d7:4348) has joined #ceph
[13:35] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[13:36] <joao> tab, same argument can be given for that as well as figuring out whether a disk is bad: that's the job for external monitoring tools
[13:36] <joao> beyond the scope of ceph
[13:37] <tab> yes i read that and i understand. i am just saying that it would be also great for ceph to know how to deal with this ... i am just saying .. :)
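For reference, the thresholds under discussion come from the storage-capacity section tab linked; a sketch of how they would appear in ceph.conf, using the defaults documented there (placing them under [global] is an assumption):

    [global]
    mon osd full ratio = .95        # Ceph blocks I/O once any OSD hits 95% used (the behaviour discussed above)
    mon osd nearfull ratio = .85    # HEALTH_WARN at 85%, giving the operator the advance notice tab asks about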
[13:37] * shyu (~shyu@203.114.244.88) Quit (Remote host closed the connection)
[13:37] <mathias> running ceph -w now results in a connection timeout. How do I debug this?
[13:38] <jcsp> mathias: first check your ceph-mon processes are running on the mon servers
[13:39] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[13:39] * rotbart (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Read error: Operation timed out)
[13:39] <tab> joao: is there also any code written for ceph objects, that are saved through radosgw, to be used for the purposes of data processing - on storage data compute ?
[13:40] <mathias> jcsp: yes ps aux shows one process on each mon node
[13:41] <mathias> jcsp: netstat shows them listening on port 6789
[13:41] * boichev (~boichev@213.169.56.130) has joined #ceph
[13:41] <tab> Does Ceph extends it's functionality only through classes? http://ceph.com/docs/master/architecture/#extending-ceph
[13:41] <jcsp> mathias, ok next check their status with "ceph daemon mon.<id> mon_status" run locally on the mon server where <id> is the id of the mon on that server
[13:42] <mathias> jcsp: I am getting admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[13:42] <mathias> what file is it looking for?
[13:43] <jcsp> it's looking for /var/run/ceph/ceph-mon.<id>.asok
[13:43] <joao> tab, I recall seeing something about that (or conversation about that) but can't remember what that would be called
[13:43] <joao> jcsp or loicd, do you have an idea on what tab may be referring to?
[13:43] * loicd reading
[13:44] <jcsp> joao/tab: is the question about something like RADOS classes for RGW, or about RADOS classes themselves?
[13:44] <joao> I think its about rados classes; I'm guessing rgw would just be the means to get data to the node
[13:45] <mathias> jcsp: /var/run/ceph/ceph-mon.mon01.asok exists - I ran "ceph@mon01:~$ ceph daemon mon.01 mon_status"
[13:45] <jcsp> mathias, looks like you needed "ceph daemon mon.mon01"
[13:46] <jcsp> you put mon in the name of your mon, so now you have to type mon.mon :-)
[13:46] <tab> could be rados classes, but I think that saving data through RGW has a different namespace than, for example, saving data through RBD. So I am interested in classes that know how to deal with HTTP RESTful objects saved to RADOS
[13:46] <mathias> jcsp: that worked: http://pastebin.com/tEaxzfXc
[13:47] <jcsp> mathias: so your mons are not in quorum, and you only have two
[13:47] <loicd> tab: you have read this http://ceph.com/community/blog/tag/lua/ ? Not that it answers your question but it may be related.
[13:47] <jcsp> mathias: are there really supposed to be only two?
[13:47] <mathias> thats correct - I though I would extend the mons from 1 to 3 step one by one and after adding the second mon I broke something
[13:47] <jcsp> ah...
[13:47] <joao> did you start the second monitor?
[13:47] * ashishchandra (~ashish@49.32.0.247) Quit (Ping timeout: 480 seconds)
[13:48] <mathias> joao: yes the ceph-mon process is running on that other node, too
[13:48] <joao> one mon appears to be down
[13:50] <mathias> the second node doesnt seem to know about the first though: http://pastebin.com/AA4a5vsw
[13:51] <joao> which node did you add last? mon02?
[13:51] <joao> and how did you do it?
[13:51] <mathias> yes
[13:51] <mathias> ceph-deploy
[13:52] <tab> loicd: yes that's something i am looking for. Lua seems similar to ZeroVM
[13:52] <joao> mathias, mind pasting both monitors ceph.conf?
[13:52] <mathias> joao: I did ceph-deploy mon create mon02
[13:53] <joao> huh
[13:53] * fghaas (~florian@zid-vpnn044.uibk.ac.at) has left #ceph
[13:53] <alfredodeza> mathias: you should've done 'mon add mon02' not create
[13:53] <joao> wait, have to check how ceph-deploy works
[13:53] <joao> well, alfredo is here :)
[13:53] * dmsimard_away is now known as dmsimard
[13:53] <joao> that, I knew something was strange
[13:54] <alfredodeza> `ceph-deploy mon --help` will tell you the differences
[13:54] <mathias> joao: http://pastebin.com/uHFV11su
[13:54] <alfredodeza> you can also specify the address for that monitor
[13:54] <alfredodeza> ceph-deploy mon add node1 --address 192.168.1.10
[13:55] <tab> joao: just for clarification, when Ceph could not read/write to a disk, is it then declared dead? I guess there are some bad responses returned to the monitors on these I/O operations, in case the disk is not responding?
[13:55] <joao> yeah, both monitors know nothing about each other, although it seems that mon01 has mon02 in its monmap now
[13:55] <mathias> alfredodeza: so mon create is just for the first one and then "add" is the right way to go?
[13:56] <mathias> so now I guesst purge followed by mon add should help, right?
[13:56] <alfredodeza> create is for creating them initially, add is for adding a monitor to an existing cluster
[13:56] <alfredodeza> you can destroy the mon I think?
[13:56] <joao> alfredodeza, I'm guessing ceph-deploy will need a quorum to add a new monitor, right?
[13:57] <alfredodeza> joao: I don't recall
[13:57] <alfredodeza> but I can check
[13:57] <joao> I'm sure it will
[13:57] <joao> don't bother
[13:57] <joao> there's no good way to do it otherwise
[13:57] <joao> none that you'd tolerate doing anyway :p
[13:58] * ashishchandra (~ashish@49.32.0.247) has joined #ceph
[13:58] <alfredodeza> :)
[13:58] * alfredodeza goes for coffee
[13:59] <mathias> ok so just did "ceph-deploy purge mon02" followed by "ceph-deploy purgedata mon02" - now mon01 still shows mon02 in the output of monstatus
[13:59] <mathias> going to deploy mon add mon02 now
[13:59] <joao> I doubt it will work
[14:00] <joao> if ceph-deploy runs something analogous to 'ceph mon add' at some point, you'll get stuck trying to run that
[14:00] <joao> if it comes to that, let me know
[14:00] <mathias> hmm
[14:00] <mathias> so whats next?
[14:00] <joao> easiest way will be to adjust mon01's monmap
[14:00] <mathias> how to?
[14:01] <joao> just a sec, there's a doc for that
[14:02] <joao> http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-mon/#most-common-monitor-issues
[14:02] <joao> actually, more straight to the point: http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-mon/#recovering-a-monitor-s-broken-monmap
[14:02] <joao> aww
[14:02] <joao> that doesn't help that much with editing the monmap
[14:03] <joao> okay
[14:03] <joao> kill mon01, run 'ceph-mon -i mon01 --extract-monmap /tmp/monmap -d'
[14:03] <joao> then 'monmaptool --rm mon02 /tmp/monmap'
[14:04] <joao> check map using 'monmaptool --print /tmp/monmap' and make sure mon01 is the only one in there
[14:04] <joao> run 'ceph-mon -i mon01 --inject-monmap /tmp/monmap -d'
[14:04] <joao> restart mon
[14:04] <joao> your cluster should start responding again, and you'll be able to run ceph-deploy mon add
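joao's recipe, gathered into one runnable sketch; the stop/start lines assume an Ubuntu/Upstart install (they are not from the conversation), and mon01/mon02 are the monitor ids used above:

    sudo stop ceph-mon id=mon01                           # stop the surviving monitor first
    ceph-mon -i mon01 --extract-monmap /tmp/monmap -d     # dump its current monmap
    monmaptool --rm mon02 /tmp/monmap                     # remove the half-added mon02 entry
    monmaptool --print /tmp/monmap                        # verify mon01 is now the only member
    ceph-mon -i mon01 --inject-monmap /tmp/monmap -d      # write the edited map back
    sudo start ceph-mon id=mon01                          # restart; with a quorum of one the cluster responds again

After that, ceph-deploy mon add mon02 can be retried, as joao says.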
[14:04] * MrBy (~MrBy@85.115.23.42) has joined #ceph
[14:05] <mathias> up and running! awsome! thx joao!
[14:05] <joao> np
[14:08] <mathias> now mon01 blocked my "ceph -w" because it was not in quorum? is that what happend?
[14:08] * i_m (~ivan.miro@deibp9eh1--blueice4n1.emea.ibm.com) Quit (Quit: Leaving.)
[14:08] * i_m (~ivan.miro@deibp9eh1--blueice4n1.emea.ibm.com) has joined #ceph
[14:10] * dgurtner (~dgurtner@217.192.177.51) Quit (Ping timeout: 480 seconds)
[14:11] * yguang11_ (~yguang11@vpn-nat.corp.tw1.yahoo.com) has joined #ceph
[14:11] * yanzheng (~zhyan@171.221.139.239) has joined #ceph
[14:13] <tab> joao: just for clarification, when Ceph could not read/write to a disk, is it then declared dead? I guess there are some bad responses returned to the monitors on these I/O operations, in case the disk is not responding?
[14:13] <mathias> something is very broken here :D I ran "ceph-deploy mon add mon02" which failed with a complaint about the admin key: http://pastebin.com/fYMQRMSF It is correct, that the key did not get pushed to mon02 - "only" the client.admin keyring did - but shouldnt it then push the admin key to the node?
[14:13] <joao> did you gather keys?
[14:14] <mathias> I beliege everything I need is in my cwd: http://pastebin.com/ZDz5m4jR
[14:14] <mathias> s/beliege/believe/
[14:14] <kraken> mathias meant to say: I believe everything I need is in my cwd: http://pastebin.com/ZDz5m4jR
[14:18] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) Quit (Ping timeout: 480 seconds)
[14:21] * maxxware (~maxx@149.210.133.105) Quit (Quit: leaving)
[14:22] * maxxware (~maxx@149.210.133.105) has joined #ceph
[14:22] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[14:23] * saurabh (~saurabh@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:25] <mathias> anyone?
[14:28] * danieljh (~daniel@0001b4e9.user.oftc.net) Quit (Quit: Lost terminal)
[14:28] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Ping timeout: 480 seconds)
[14:32] * tinklebear (~tinklebea@66.55.152.53) has joined #ceph
[14:33] * dgurtner (~dgurtner@217.192.177.51) has joined #ceph
[14:34] * rotbart (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[14:35] <am88b> I'm running "ceph-deploy osd prepare" and I get "config file /etc/ceph/ceph.conf exists with different content; use --overwrite-conf to overwrite"... but it is completely the same! (md5sum made) and I have restarted all ceph daemons to be sure that running conf is also same. I can't use --overwrite-config for 'osd prepare'. I'm totally confused about this error...
[14:35] * JayJ__ (~jayj@157.130.21.226) has joined #ceph
[14:35] <mathias> I removed the client.admin section that pointed to admin.key from ceph.conf and ran "mon add mon02" again - didnt look bad at all until it is not stuck for 5min: http://pastebin.com/uZ2TmTer
[14:38] <Gugge-47527> look at the ceph mon log on mon02
[14:38] * rkdemon (~rkdemon@pool-71-244-62-208.dllstx.fios.verizon.net) has joined #ceph
[14:39] <Gugge-47527> sorry, its "ceph mon add mon02 192.168.10.54" that hangs ... most likely because you have no quorum on your mons anymore
[14:39] <Gugge-47527> after the failed first attempt
[14:39] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:39] * boichev2 (~boichev@213.169.56.130) has joined #ceph
[14:39] * zhaochao (~zhaochao@111.204.252.1) has left #ceph
[14:40] * cok (~chk@2a02:2350:1:1203:15d8:95e2:32d7:4348) Quit (Quit: Leaving.)
[14:41] <Gugge-47527> mathias: what happens if you log into mon02 and start the mon manually now?
[14:42] * markbby (~Adium@168.94.245.4) has joined #ceph
[14:43] * saurabh (~saurabh@121.244.87.124) has joined #ceph
[14:44] * boichev (~boichev@213.169.56.130) Quit (Ping timeout: 480 seconds)
[14:48] * hyperbaba (~hyperbaba@mw-at-rt-nat.mediaworksit.net) Quit (Ping timeout: 480 seconds)
[14:48] * bkunal (~bkunal@121.244.87.124) Quit (Ping timeout: 480 seconds)
[14:49] * linuxkidd_ (~linuxkidd@rtp-isp-nat-pool1-1.cisco.com) has joined #ceph
[14:50] * rkdemon (~rkdemon@pool-71-244-62-208.dllstx.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[14:55] * danieljh (~daniel@0001b4e9.user.oftc.net) has joined #ceph
[14:58] * saurabh (~saurabh@121.244.87.124) Quit (Ping timeout: 480 seconds)
[14:58] * apolloJess (~Thunderbi@202.60.8.252) has joined #ceph
[14:59] * rkdemon (~rkdemon@pool-71-244-62-208.dllstx.fios.verizon.net) has joined #ceph
[14:59] * maxxware (~maxx@149.210.133.105) Quit (Quit: leaving)
[15:00] * KevinPerks (~Adium@2606:a000:80a1:1b00:456b:bf96:6ed7:2eb1) has joined #ceph
[15:00] * apolloJess (~Thunderbi@202.60.8.252) has left #ceph
[15:01] * maxxware (~maxx@149.210.133.105) has joined #ceph
[15:02] * tinklebear (~tinklebea@66.55.152.53) Quit (Quit: Nettalk6 - www.ntalk.de)
[15:04] * JayJ__ (~jayj@157.130.21.226) Quit (Quit: Computer has gone to sleep.)
[15:05] * yguang11_ (~yguang11@vpn-nat.corp.tw1.yahoo.com) Quit (Ping timeout: 480 seconds)
[15:09] * maxxware (~maxx@149.210.133.105) Quit (Quit: leaving)
[15:09] * maxxware (~maxx@149.210.133.105) has joined #ceph
[15:09] * ccheng (~ccheng@c-50-165-131-154.hsd1.in.comcast.net) has joined #ceph
[15:11] * saurabh (~saurabh@121.244.87.117) has joined #ceph
[15:13] * bkunal (~bkunal@1.23.195.13) has joined #ceph
[15:14] <mathias> Gugge-47527: the mons start but mon_status on mon01 shows both mons, the same on mon02 shows mon01 only
[15:15] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) has joined #ceph
[15:15] <mathias> http://pastebin.com/UZpKDrs8
[15:15] * rdas (~rdas@121.244.87.115) Quit (Quit: Leaving)
[15:17] * hybrid512 (~walid@195.200.167.70) Quit (Quit: Leaving.)
[15:17] * ccheng (~ccheng@c-50-165-131-154.hsd1.in.comcast.net) Quit (Remote host closed the connection)
[15:19] * linuxkidd_ (~linuxkidd@rtp-isp-nat-pool1-1.cisco.com) Quit (Quit: Leaving)
[15:20] <absynth__> hm, anyone know what "cleversafe" is?
[15:20] <absynth__> their spec sheet http://www.cleversafe.com/images/pdf/cleversafe-products sounds _exactly_ like ceph
[15:22] <mgarcesMZ> caringo also had something very similar to ceph... I think it was called CAStor or something
[15:22] * JayJ__ (~jayj@157.130.21.226) has joined #ceph
[15:23] <absynth__> Combines erasure coding and object storage to store data reliably,
[15:23] <absynth__> and with a high level of availability, at significantly lower cost than
[15:23] <absynth__> solutions based on RAID and replication
[15:23] <absynth__> i wouldn't be surprised if this were copied&pasted from the ceph firefly specsheet
[15:23] * JayJ__ (~jayj@157.130.21.226) Quit ()
[15:24] * cok (~chk@2a02:2350:18:1012:3c80:5f36:6692:a0b5) has joined #ceph
[15:24] * marrusl (~mark@2604:2000:60e3:8900:409c:a141:606f:ff70) has joined #ceph
[15:25] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[15:26] <mgarcesMZ> http://www.caringo.com/solutions/active-archive.html
[15:27] * linuxkidd_ (~linuxkidd@cpe-076-182-096-100.nc.res.rr.com) has joined #ceph
[15:29] * peedu (~peedu@185.46.20.35) has joined #ceph
[15:29] * peedu_ (~peedu@185.46.20.35) Quit (Read error: Connection reset by peer)
[15:29] <mgarcesMZ> also: http://www.caringo.com/products/swarm.html
[15:31] <mgarcesMZ> we investigated swarm (it was called CAStor) before, but the pricing led us to drop it... then I discovered Ceph :)
[15:31] * rkdemon (~rkdemon@pool-71-244-62-208.dllstx.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[15:35] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) has left #ceph
[15:35] <mgarcesMZ> one thing I miss from RadosGW... is that when creating an object, the unique ID must be created on the client side
[15:36] <mgarcesMZ> I would love to have the server handle the UID
[15:36] * pressureman (~pressurem@62.217.45.26) has joined #ceph
[15:37] * hybrid512 (~walid@195.200.167.70) has joined #ceph
[15:38] <mgarcesMZ> know what I mean?
[15:40] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[15:41] * madkiss (~madkiss@business-176-094-041-213.static.arcor-ip.net) has joined #ceph
[15:42] * linuxkidd_ (~linuxkidd@cpe-076-182-096-100.nc.res.rr.com) Quit (Ping timeout: 480 seconds)
[15:45] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[15:46] * peedu (~peedu@185.46.20.35) Quit (Ping timeout: 480 seconds)
[15:46] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has left #ceph
[15:47] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) has joined #ceph
[15:47] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[15:50] * rwheeler (~rwheeler@173.48.207.57) Quit (Quit: Leaving)
[15:52] * ashishchandra (~ashish@49.32.0.247) Quit (Quit: Leaving)
[15:54] * mathias (~mathias@p5083D74A.dip0.t-ipconnect.de) Quit (Quit: leaving)
[15:57] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[16:03] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[16:08] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) Quit (Ping timeout: 480 seconds)
[16:10] * danieagle_ (~Daniel@179.184.165.184.static.gvt.net.br) has joined #ceph
[16:10] * danieagle_ (~Daniel@179.184.165.184.static.gvt.net.br) Quit (Remote host closed the connection)
[16:10] * vbellur (~vijay@122.166.179.14) has joined #ceph
[16:11] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) has joined #ceph
[16:17] * saurabh (~saurabh@121.244.87.117) Quit (Quit: Leaving)
[16:17] * JayJ__ (~jayj@157.130.21.226) has joined #ceph
[16:25] * JayJ__ (~jayj@157.130.21.226) Quit (Quit: Computer has gone to sleep.)
[16:28] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[16:29] * ccheng (~ccheng@csdhcp-120-248.cs.purdue.edu) has joined #ceph
[16:29] <ccheng> cd ../.. ; make unittest_sharedptr_registry && ./unittest_sharedptr_registry # --gtest_filter=*.* --log-to-stderr=true
[16:30] * JayJ__ (~jayj@157.130.21.226) has joined #ceph
[16:30] <ccheng> complains: "make: *** No rule to make target `../src/gtest/lib/libgtest.a', needed by `unittest_sharedptr_registry'. Stop."
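In ceph source trees of that vintage src/gtest was (as far as I recall) a git submodule, so this error usually just means the submodule was never checked out; a hedged sketch of the fix, assuming an autotools build from a git clone:

    # from the top of the ceph source tree
    git submodule update --init --recursive
    ./autogen.sh && ./configure
    make unittest_sharedptr_registry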
[16:32] * markbby1 (~Adium@168.94.245.1) has joined #ceph
[16:32] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[16:33] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) has joined #ceph
[16:34] * lupu (~lupu@86.107.101.214) has joined #ceph
[16:37] * Gugge-47527 (gugge@kriminel.dk) Quit (Quit: Bye)
[16:39] * steveeJ (~junky@HSI-KBW-085-216-022-246.hsi.kabelbw.de) has joined #ceph
[16:40] * Gugge-47527 (gugge@kriminel.dk) has joined #ceph
[16:46] * markbby (~Adium@168.94.245.3) has joined #ceph
[16:47] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[16:48] * madkiss (~madkiss@business-176-094-041-213.static.arcor-ip.net) Quit (Quit: Leaving.)
[16:48] * peedu (~peedu@153.26.46.176.dyn.estpak.ee) has joined #ceph
[16:49] * markbby1 (~Adium@168.94.245.1) Quit (Remote host closed the connection)
[16:49] * peedu (~peedu@153.26.46.176.dyn.estpak.ee) Quit ()
[16:50] * JayJ__ (~jayj@157.130.21.226) Quit (Quit: Computer has gone to sleep.)
[16:54] * tab_ (~oftc-webi@89-212-99-37.dynamic.t-2.net) has joined #ceph
[16:56] * linjan (~linjan@176.195.196.165) Quit (Ping timeout: 480 seconds)
[16:56] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[16:57] * JayJ__ (~jayj@157.130.21.226) has joined #ceph
[16:59] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:01] * micka (~micka@178.23.33.193) has joined #ceph
[17:01] * drankis (~drankis__@89.111.13.198) Quit (Quit: Leaving)
[17:02] <micka> Hi, is anyone here ? :)
[17:02] * cok (~chk@2a02:2350:18:1012:3c80:5f36:6692:a0b5) Quit (Quit: Leaving.)
[17:02] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:02] * JayJ__ (~jayj@157.130.21.226) Quit ()
[17:02] <micka> I have some questions about ceph ...
[17:03] <joao> go ahead and ask them then
[17:03] <micka> ok thx :)
[17:03] * pressureman (~pressurem@62.217.45.26) Quit (Quit: Ex-Chat)
[17:05] <micka> So I have 2 NAS units and I would like to make them work with Ceph: do I need to install Ceph on a server and connect the 2 NAS units by a switch, or must I install Ceph on the NAS units themselves?
[17:06] <jcsp> micka: it depends what you mean by NAS. If you mean some servers with disks in, then you would install Ceph on them. If you mean a NAS appliance that serves NFS/SMB, then you would not normally use that with ceph at all.
[17:07] <micka> it is 2 Netgear ReadyNAS 3200
[17:07] * rotbart (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[17:07] <micka> So I can't use Ceph with these 2 Netgear units?
[17:09] <jcsp> if that's the only storage you have, then you could expose some iSCSI volumes from the netgear onto some linux servers, and run ceph from there. But that's not a typical deployment and the performance would probably not be amazing.
[17:09] <darkling> If you can reflash the firmware on it to include the various pieces of ceph server software, then yes you can, but it's probably going to be tricky.
[17:10] <micka> Ok thx :)
[17:12] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[17:12] * i_m (~ivan.miro@deibp9eh1--blueice4n1.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[17:14] * jtaguinerd (~jtaguiner@112.198.77.224) has joined #ceph
[17:14] * linuxkidd (~linuxkidd@cpe-066-057-017-151.nc.res.rr.com) Quit (Quit: Leaving)
[17:14] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[17:15] <micka> if I take a server and install Ceph on it, is there a way to make Ceph detect the storage on the NAS? (NAS and Ceph server linked by a switch)
[17:18] <absynth__> what?!
[17:18] <absynth__> no
[17:19] <micka> So Ceph and a NAS are incompatible ?
[17:19] <gregmark> micka: just like jcsp said, YES, but you'd have to mount the NAS volumes on the server or attach them with iSCSI. That would work, but would suck.
[17:19] <steveeJ> micka: in theory you could somehow get a ceph osd to store files on that NAS, but in practice you don't want that for at least performance reasons
[17:21] <gregmark> If your server connects to the NAS on a different VLAN or physical segment, I suppose it would be alright.
[17:21] <gregmark> You definitely wouldn't want OSDs talking on the same network as your NAS
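For the record, the "expose iSCSI from the NAS and run OSDs on top" idea that jcsp and gregmark describe would look roughly like this with open-iscsi (target IQN, portal address, hostname and device are all made up):

    # discover and log in to the NAS's iSCSI target
    sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.50
    sudo iscsiadm -m node -T iqn.2014-09.com.example:nas1.lun0 -p 192.168.1.50 --login
    # the LUN appears as a local block device (e.g. /dev/sdc) that could then be prepared as an OSD
    ceph-deploy osd prepare cephserver1:/dev/sdc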
[17:22] * zerick (~eocrospom@190.118.28.252) has joined #ceph
[17:22] <micka> ok thx for all this info.
[17:26] <micka> To explain the context: I have a Proxmox server with several VMs, the storage is on 2 NAS units, and in the middle I have 1 switch. So I thought: can I use Ceph to make it work better?
[17:28] <SpComb> micka: you would run ceph on the NAS's that have the disks
[17:28] <micka> but i don't want to reflash the firmware
[17:29] <micka> so if I add a server connected to the switch, with Ceph installed, will it work?
[17:31] * BManojlovic (~steki@91.195.39.5) Quit (Remote host closed the connection)
[17:33] * hybrid512 (~walid@195.200.167.70) Quit (Quit: Leaving.)
[17:35] * markbby1 (~Adium@168.94.245.2) has joined #ceph
[17:35] * markbby (~Adium@168.94.245.3) Quit (Remote host closed the connection)
[17:39] * bandrus (~Adium@216.57.72.205) has joined #ceph
[17:40] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[17:40] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[17:41] * garphy is now known as garphy`aw
[17:43] * jobewan (~jobewan@snapp.centurylink.net) has joined #ceph
[17:44] <jcsp> micka: adding in a ceph layer isn't going to make your existing storage "better" automatically. What are you trying to achieve?
[17:46] <micka> I would like to set up a storage cluster with Ceph, for the scalability, and then if I need more capacity I will just add new NAS units to the cluster...
[17:47] <micka> But this will only work with a server that has its storage inside, like DAS, if I understand correctly
[17:48] * JayJ__ (~jayj@157.130.21.226) has joined #ceph
[17:50] <micka> (sorry, I'm just a French trainee who needs to implement a DFS at a company, so sorry for my bad English and comprehension)
[17:52] * yanzheng (~zhyan@171.221.139.239) Quit (Quit: This computer has gone to sleep)
[17:55] * adamcrume (~quassel@50.247.81.99) has joined #ceph
[17:58] * alram (~alram@38.122.20.226) has joined #ceph
[18:01] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:05] * micka (~micka@178.23.33.193) Quit (Quit: Quitte)
[18:06] * JayJ__ (~jayj@157.130.21.226) Quit (Quit: Computer has gone to sleep.)
[18:07] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[18:09] * squisher (~david@2601:0:580:8be:8111:afa4:c922:daec) has joined #ceph
[18:10] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[18:12] * mgarcesMZ (~mgarces@5.206.228.5) Quit (Ping timeout: 480 seconds)
[18:14] * mourgaya (~kvirc@233.50.84.79.rev.sfr.net) has joined #ceph
[18:14] * mourgaya|2 (~kvirc@233.50.84.79.rev.sfr.net) has joined #ceph
[18:18] * zack_dolby (~textual@p843a3d.tokynt01.ap.so-net.ne.jp) has joined #ceph
[18:22] * designated_ (~rroberts@host-177-39-52-24.midco.net) Quit (Ping timeout: 480 seconds)
[18:24] * dgurtner (~dgurtner@217.192.177.51) Quit (Ping timeout: 480 seconds)
[18:27] * nitti (~nitti@c-66-41-30-224.hsd1.mn.comcast.net) has joined #ceph
[18:32] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[18:33] * linuxkidd (~linuxkidd@rtp-isp-nat1.cisco.com) has joined #ceph
[18:36] * linjan (~linjan@78.108.207.168) has joined #ceph
[18:36] * todayman (~quassel@magellan.acm.jhu.edu) Quit (Ping timeout: 480 seconds)
[18:39] * zerick (~eocrospom@190.118.28.252) Quit (Ping timeout: 480 seconds)
[18:42] * branto (~borix@ip-213-220-214-245.net.upcbroadband.cz) has left #ceph
[18:42] * mourgaya (~kvirc@233.50.84.79.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[18:43] * mourgaya|2 (~kvirc@233.50.84.79.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[18:44] * linjan (~linjan@78.108.207.168) Quit (Ping timeout: 480 seconds)
[18:44] * monsterzz (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[18:44] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[18:46] * alram_ (~alram@38.122.20.226) has joined #ceph
[18:46] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) has joined #ceph
[18:47] * JayJ__ (~jayj@157.130.21.226) has joined #ceph
[18:49] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) has joined #ceph
[18:50] * zerick (~eocrospom@190.118.28.252) has joined #ceph
[18:51] * darkling (~hrm@00012bd0.user.oftc.net) Quit (Quit: Absinthe makes the heart grow fonder.)
[18:51] * JayJ__ (~jayj@157.130.21.226) Quit ()
[18:53] * alram (~alram@38.122.20.226) Quit (Ping timeout: 480 seconds)
[18:55] * adamcrume (~quassel@50.247.81.99) Quit (Remote host closed the connection)
[18:55] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[18:55] * linjan (~linjan@93.91.1.170) has joined #ceph
[18:57] * lcavassa (~lcavassa@89.184.114.246) Quit (Quit: Leaving)
[18:57] * JayJ__ (~jayj@157.130.21.226) has joined #ceph
[18:58] * dupont-y (~dupont-y@2a01:e34:ec92:8070:3400:81b0:ef39:d655) has joined #ceph
[19:00] * JayJ__ (~jayj@157.130.21.226) Quit ()
[19:01] * JayJ__ (~jayj@157.130.21.226) has joined #ceph
[19:02] * todayman (~quassel@magellan.acm.jhu.edu) has joined #ceph
[19:04] * jtaguinerd (~jtaguiner@112.198.77.224) Quit (Quit: Leaving.)
[19:05] * sjustwork (~sam@2607:f298:a:607:f88e:c8d2:ee4c:2534) has joined #ceph
[19:06] * JayJ__ (~jayj@157.130.21.226) Quit (Remote host closed the connection)
[19:07] * JayJ__ (~jayj@157.130.21.226) has joined #ceph
[19:08] * houkouonchi-dc (~sandon@gw.sepia.ceph.com) has joined #ceph
[19:09] * xarses (~andreww@12.164.168.117) has joined #ceph
[19:09] * nitti (~nitti@c-66-41-30-224.hsd1.mn.comcast.net) Quit (Remote host closed the connection)
[19:13] * linjan (~linjan@93.91.1.170) Quit (Read error: Operation timed out)
[19:13] * sleinen1 (~Adium@2001:620:0:68::100) Quit (Ping timeout: 480 seconds)
[19:17] * todin (tuxadero@kudu.in-berlin.de) has joined #ceph
[19:21] * zerick (~eocrospom@190.118.28.252) Quit (Ping timeout: 480 seconds)
[19:22] * adamcrume (~quassel@2601:9:6680:47:bd1f:a39d:fcfb:7ee4) has joined #ceph
[19:22] * mourgaya|2 (~kvirc@233.50.84.79.rev.sfr.net) has joined #ceph
[19:23] * mourgaya|2 (~kvirc@233.50.84.79.rev.sfr.net) Quit ()
[19:24] * mourgaya (~kvirc@233.50.84.79.rev.sfr.net) has joined #ceph
[19:25] * blackmen (~Ajit@121.244.87.115) Quit (Quit: Leaving)
[19:25] * oms101 (~oms101@p20030057EA395D00C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Quit: Leaving)
[19:25] * mourgaya (~kvirc@233.50.84.79.rev.sfr.net) Quit ()
[19:26] * mourgaya (~kvirc@233.50.84.79.rev.sfr.net) has joined #ceph
[19:30] * zerick (~eocrospom@190.118.28.252) has joined #ceph
[19:31] * thomnico (~thomnico@2a01:e35:8b41:120:d4d5:4c7d:6707:1912) Quit (Quit: Ex-Chat)
[19:33] * Nacer (~Nacer@2001:41d0:fe82:7200:583a:d0d2:1fd6:22db) has joined #ceph
[19:47] <loicd> ceph user committee meeting in 15 minutes ?
[19:47] <mourgaya> yes!
[19:51] * michalefty (~micha@ip25045ed2.dynamic.kabel-deutschland.de) has joined #ceph
[19:51] * michalefty (~micha@ip25045ed2.dynamic.kabel-deutschland.de) has left #ceph
[19:53] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) has joined #ceph
[19:57] * linuxkidd_ (~linuxkidd@cpe-066-057-017-151.nc.res.rr.com) has joined #ceph
[19:59] * erice_ (~erice@c-98-245-48-79.hsd1.co.comcast.net) has joined #ceph
[19:59] <mourgaya> we can start the meetup ! who will participate?
[20:00] <scuttlemonkey> mourgaya: hey there, the user committee mojo kicking off is it?
[20:01] <mourgaya> it is done!
[20:01] <loicd> :-)
[20:01] <loicd> mourgaya: what's the first topic today ?
[20:02] * loicd pulls his mail
[20:02] <mourgaya> you proposed to speak about the cycle of stable releases
[20:02] <loicd> ah, right
[20:02] <loicd> how do you feel about that ?
[20:02] <mourgaya> 3 or 4 months!
[20:02] * mcms (~mcms@46.224.106.218) has joined #ceph
[20:03] * sputnik13 (~sputnik13@172.56.32.40) has joined #ceph
[20:03] <loicd> how often do you upgrade your Ceph cluster ?
[20:03] <scuttlemonkey> loicd: so you want to see dev cycles move to 3/yr instead of 4/yr?
[20:03] <mourgaya> in fact not so often :-)
[20:03] <loicd> scuttlemonkey: ahaha
[20:04] * loicd discussed with Yann Dupont the other day and he mostly does not upgrade his cluster.
[20:04] <dupont-y> hello loic :!)
[20:04] <dupont-y> well, I've been forced :)
[20:04] <loicd> dupont-y: here he is !
[20:04] <scuttlemonkey> well that's the age old question...why upgrade if it isn't broken :P
[20:04] <dupont-y> now 2 of my 5 clusters are using firefly
[20:04] <loicd> how so ?
[20:04] <mourgaya> scuttlemonkey: agree with you!
[20:05] <scuttlemonkey> the problem is state-of-the-art needs to move fast
[20:05] <scuttlemonkey> the trick is finding the middle ground where real people can live
[20:05] <loicd> dupont-y: did you know that dumpling and firefly receive more attention than emperor ?
[20:05] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) has joined #ceph
[20:06] <dupont-y> I was sticking with emperor because it was perfectly stable; I upgraded to firefly because one of my colleagues goofed on the sources.list of some OSDs, so an upgrade pulled in firefly on some OSDs
[20:06] <mourgaya> I think functionality and bug fixes must drive this!
[20:06] <loicd> mourgaya: +1
[20:06] <scuttlemonkey> loicd: so what is the actual proposal here?
[20:06] <scuttlemonkey> feature-driven (rather than time-driven) release schedules?
[20:07] <dupont-y> so I had to upgrade the clusters on firefly, But it's perfectly stable since.
[20:07] <scuttlemonkey> or just relaxing the stable release schedule?
[20:07] <mourgaya> now, my cluster is also on firefly version!
[20:07] <dupont-y> Time-driven is good I think, because you have to release. Feature-driven is hard to predict
[20:07] <loicd> scuttlemonkey: I guess I'm trying to figure out if any user cares if releases are on a 3 or 4 month frequency.
[20:07] <dupont-y> I'll go for 4 months :)
[20:08] <scuttlemonkey> I could go either way
[20:08] <erice_> As a user, I would like to see 2 or 3 a year, as it's too hard to schedule 4 updates in 12 months
[20:08] <dupont-y> Release often is good, BUT too fast isn't possible on production environment
[20:08] <scuttlemonkey> having longer release schedules would allow for more QA/testing time potentially
[20:08] <mourgaya> I vote for 6 months except in case of bug correction!
[20:08] <scuttlemonkey> however, the 3 month cadence really forces us to revisit work being done and have CDS to continue moving the community forward
[20:09] <loicd> why would it matter if you can skip one release and be on a 6/8 month cycle if you want ?
[20:09] <scuttlemonkey> I'd rather see dev cycles continue to be 3mos, but only do a "stable" release every other one (so 6 mos stable cycle)
[20:09] <loicd> erice_: ^
[20:09] <scuttlemonkey> loicd: yes, that
[20:10] <erice_> There is nothing wrong with having 2 named releases a year, plus the dev branch. That is what I am seeing with the Linux zfs releases
[20:10] <dupont-y> 1 stable every 6 months could be OK, yes.
[20:10] <loicd> the problem I guess is that it puts a lot of pressure on the dev / support team to have one stable release in flight every 3/4 month
[20:11] * sputnik13 (~sputnik13@172.56.32.40) Quit (Ping timeout: 480 seconds)
[20:11] <mourgaya> loicd: +1
[20:11] <scuttlemonkey> I mean really, aren't we just talking about formalizing what is already the spirit of Sage's release cycle?
[20:11] <loicd> dupont-y: how far back in time would you like ceph releases to be supported ?
[20:11] <scuttlemonkey> every other is a "more-stable" stable release
[20:11] <dupont-y> And really, in production, you don't like to change versions too often. You need stability.
[20:11] <mourgaya> what is important a new release or stability?
[20:11] * zerick (~eocrospom@190.118.28.252) Quit (Ping timeout: 480 seconds)
[20:12] <loicd> scuttlemonkey: I don't think any user is aware of that subtlety. Do you ?
[20:12] <mcms> I think support time is more important than release cycles
[20:12] <scuttlemonkey> loicd: I agree completely...and I think it's a good idea to formalize it
[20:12] <loicd> mcms: what time frame are you thinking about ?
[20:12] <mourgaya> scuttlemonkey: +1
[20:12] <scuttlemonkey> just saying that I think the spirit is there...a recommendation to formalize it to 2 stable releases would probably be met agreeably
[20:12] <dupont-y> loicd: ideally, a long time, just like you have Long Term Support on kernel, for example
[20:13] <dupont-y> That is, 2 years ? 3 years ?
[20:13] <scuttlemonkey> yeah, our "LTS" right now is still 18mos, right?
[20:13] <scuttlemonkey> or did that get extended in the Firefly stuff?
[20:13] <loicd> scuttlemonkey: is there a URL where it is written ?
[20:13] <loicd> I guess all this is in flux with Red Hat in the loop
[20:15] <loicd> my impression is that we will now have longer LTS ;-)
[20:15] <loicd> a *lot* longer
[20:15] <loicd> 10 years ?
[20:15] * loicd contemplates backporting fixes to firefly in 2024
[20:15] <mcms> If there is a stable release every T months, T months of support is enough, and for example 4T for those who don't want to always update
[20:15] <scuttlemonkey> loicd: no URL that I can find
[20:15] <scuttlemonkey> there has been a slide floating around about it though
[20:16] <loicd> mcms: I see
[20:16] <dupont-y> mcms: I don't really agree here. In a production environment you WON'T change versions unless forced. That is: a very important need for new features, or a bug that's only fixed by a newer version. We're very conservative
[20:16] <mourgaya> dupont-y: +1
[20:16] <scuttlemonkey> https://dl.dropboxusercontent.com/u/5334652/release_schedule.png
[20:17] <loicd> 18 months it is
[20:17] <mourgaya> scuttlemonkey: in fact, I always wait at least 3 months before testing a release!
[20:17] <loicd> that being said, backporting has been more flexible than what you typically see in OS distributions
[20:17] <dupont-y> mourgaya: same here !
[20:18] * jobewan (~jobewan@snapp.centurylink.net) Quit (Quit: Leaving)
[20:18] * steveeJ (~junky@HSI-KBW-085-216-022-246.hsi.kabelbw.de) Quit (Remote host closed the connection)
[20:18] <loicd> for instance the ioprio feature has been backported to dumpling and that's not something you (as a user) would require from upstream. I think.
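The feature referred to here is presumably the OSD disk-thread ioprio settings (an assumption on my part); a hedged sketch of how they are used, either in ceph.conf or injected into running OSDs:

    # ceph.conf, [osd] section:
    osd disk thread ioprio class = idle
    osd disk thread ioprio priority = 7

    # or injected at runtime:
    ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class idle --osd_disk_thread_ioprio_priority 7'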
[20:19] <loicd> mourgaya: you're waiting for people to stumble into problems so you have a more stable release, right ?
[20:19] <mourgaya> a very long term support will generate a lot of complications for developers,
[20:19] <mourgaya> loicd: yes!
[20:19] <mourgaya> I think a 4-year LTS would be good for me!
[20:19] <loicd> mourgaya: if what is expected is feature backports, yes. If this is critical bug fixes / security problems, maybe not so much ?
[20:20] <dupont-y> loicd: yes, in a production environment you wait for things to stabilize first and closely watch the mailing lists. It's the same for the kernel. I stick to LTS kernels, unless forced
[20:20] <dupont-y> and when a new LTS (kernel) is out, I wait at least until version .3 or .4.
[20:20] <mourgaya> so an LTS policy like the kernel's is a good consensus
[20:22] <mourgaya> scuttlemonkey: can you formalize something about this?
[20:23] <scuttlemonkey> mourgaya: yeah, I think I can take the gist of what was said and get the communications more clear
[20:23] <scuttlemonkey> I think the sentiment is a match...just need the language to reflect that
[20:24] <mourgaya> Is this ok for all of us?
[20:24] <loicd> yes
[20:24] <erice_> yes
[20:24] * loicd feels weird to no longer be on the user side of things
[20:24] <scuttlemonkey> what's next?
[20:24] <scuttlemonkey> loicd: hehe
[20:24] <dupont-y> yes :)
[20:25] <mcms> ok
[20:25] <mourgaya> we can speak about the next point, that is, Contributor credits updates!
[20:25] <loicd> The contributor list updates https://wiki.ceph.com/Community/Ceph_contributors_list_maintenance_guide
[20:25] <loicd> that will be a quick one
[20:25] <loicd> I was doing this by myself and called for help
[20:25] <loicd> M Ranga Swami Reddy stepped in
[20:26] <scuttlemonkey> nice
[20:26] <loicd> and has been doing the work for this release
[20:26] <mourgaya> loicd: and is that enough?
[20:26] <loicd> we've had nice interactions and he will publish the next list
[20:26] <loicd> mourgaya: yes :-)
[20:26] <scuttlemonkey> I'd love to see the user committee (rather than Red Hat, or any corp entity) start calling out developer milestones if that becomes possible
[20:26] <loicd> Abhishek L will be next and me again : rotating every three release
[20:27] <scuttlemonkey> "these 6 names were new committers" ... "these 8 names reached 100 commits" ... etc
[20:27] <scuttlemonkey> would that be feasible and/or interesting?
[20:27] <loicd> https://github.com/ceph/ceph/pull/2378 is swami's work
[20:27] <loicd> that's all
[20:27] * zerick (~eocrospom@190.118.28.252) has joined #ceph
[20:27] <scuttlemonkey> loicd: ^^
[20:28] <mourgaya> ok
[20:28] <scuttlemonkey> my dream is to do more to recognize the great contributions of our community
[20:28] <loicd> scuttlemonkey: I guess so
[20:28] <scuttlemonkey> saying "nice work" is the first step in that
[20:28] <mcms> scuttlemonkey: There could be badges and so on, like many OS projects
[20:28] <scuttlemonkey> lest it be too easy for the corporate overlords to forget how much blood, sweat, and tears comes from people who aren't on their payroll
[20:28] <loicd> scuttlemonkey: it sort of conflicts with http://metrics.ceph.com/ though, doesn't it ?
[20:29] <scuttlemonkey> I was thinking that if we made it a part of the process I could ask Bitergia to include it
[20:29] <scuttlemonkey> it's just an idea at this point...but I'd like to run with it
[20:29] <loicd> +1 on discussing this with Bitergia
[20:29] <scuttlemonkey> and I'd like the "atta boy" to come from the community...not as a hand out from daddy
[20:30] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[20:30] <scuttlemonkey> :)
[20:30] <scuttlemonkey> ok, I'll take that as an action and talk w/ Bitergia
[20:31] <loicd> cool
[20:31] <scuttlemonkey> when they come back with what they need we'll see about making it a part of this workflow
[20:31] <scuttlemonkey> glad that the work is being distributed though
[20:31] <scuttlemonkey> thanks for pushing that out Loic
[20:31] <scuttlemonkey> mourgaya: what's next?
[20:32] * zerick (~eocrospom@190.118.28.252) Quit (Read error: Operation timed out)
[20:32] <mourgaya> so the next point is Upcoming meetups!
[20:32] * steveeJ (~junky@HSI-KBW-085-216-022-246.hsi.kabelbw.de) has joined #ceph
[20:32] <scuttlemonkey> nice!
[20:32] <loicd> ah, that's an easy one too : Berlin + Paris
[20:32] <loicd> mourgaya: have you heard of others ?
[20:32] <mourgaya> the link is https://wiki.ceph.com/Community/Meetups
[20:33] <scuttlemonkey> so I see the Paris meetup is actually Ceph Day
[20:33] <mourgaya> london also perhaps!
[20:33] <loicd> oh ?
[20:33] <scuttlemonkey> does the French language summary mention that they have to register on eventbrite?
[20:33] <mourgaya> I think all of us will be there
[20:34] <loicd> scuttlemonkey: I don't follow ?
[20:34] <scuttlemonkey> http://ceph.com/cephdays/paris/
[20:34] <scuttlemonkey> http://www.meetup.com/Ceph-in-Paris/
[20:34] <loicd> there is a link to it, yes
[20:34] * zerick (~eocrospom@190.118.28.252) has joined #ceph
[20:34] <scuttlemonkey> I see 17 people on meetup.com...do those folks realize they need a ticket?
[20:34] * squisher (~david@2601:0:580:8be:8111:afa4:c922:daec) Quit (Quit: Leaving)
[20:35] <loicd> to attend the meetup ?
[20:35] <loicd> ah !
[20:35] <loicd> there is a confusion : this is *after* the Ceph day :-)
[20:35] <scuttlemonkey> ahhhh
[20:35] <loicd> not during the ceph day
[20:35] <scuttlemonkey> cool
[20:35] <scuttlemonkey> as long as that's clear
[20:35] <loicd> the goal is to get wasted on behalf of Ceph
[20:35] <scuttlemonkey> haha, even better
[20:35] <scuttlemonkey> I expect someone to point me at several bottles of good french wine
[20:35] <scuttlemonkey> :)
[20:36] <dupont-y> :)
[20:36] <mourgaya> scuttlemonkey: :-)
[20:36] <scuttlemonkey> it also occurs to me that these international meetups would be so much easier if I spoke like 10 languages
[20:36] <scuttlemonkey> I should get on that
[20:36] <mourgaya> It will be a great day!
[20:36] <loicd> mourgaya: dupont-y is there a meetup planned in Brittany some time soon ?
[20:37] <mourgaya> we have to finish some development with inkscope and have a meetup after, I think end of November!
[20:37] * loicd would like to add the topic of FOSDEM to the agenda
[20:37] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) Quit (Quit: Leaving.)
[20:37] <scuttlemonkey> FOSDEM 2015?
[20:37] <mourgaya> FOSDEM is around February?
[20:37] <scuttlemonkey> 31Jan-01Feb
[20:38] <mourgaya> so we can make a meetup at Bruxelle :-)
[20:38] <loicd> xarses: http://www.meetup.com/SF-Bay-Area-Ceph-User-group what about a meetup mid october ? I'll be around ;-)
[20:38] * BManojlovic (~steki@95.180.4.243) has joined #ceph
[20:38] * LeaChim (~LeaChim@host81-159-253-189.range81-159.btcentralplus.com) has joined #ceph
[20:39] <loicd> mourgaya: yes, the problem is that deadlines come real fast for FOSDEM
[20:39] <mourgaya> when?
[20:39] <loicd> https://fosdem.org/2015/news/2014-07-01-call-for-participation/
[20:39] <loicd> 15 September
[20:39] <loicd> deadline for developer room proposals
[20:39] <loicd> yerk !
[20:39] <scuttlemonkey> loicd: wow, yeah that's sooner than I thought
[20:40] <scuttlemonkey> I only had the Oct deadline
[20:40] <scuttlemonkey> I'll add that to our calendar
[20:40] <mourgaya> loicd: ok forget it!
[20:40] <scuttlemonkey> I'm sure we'll want to submit some stuff
[20:40] <xarses> loicd: sounds good, when abouts?
[20:40] <loicd> no, we have a shot for a "Distributed Storage Room" and Ceph in it, don't you think ?
[20:41] <scuttlemonkey> loicd: yeah, I think we can pool resources and get both Ceph and Gluster resources to back it
[20:41] <loicd> xarses: I'll be in Los Angeles 13th -> 23rd. But that was just a trick to get you in ;-)
[20:41] * linuxkidd_ (~linuxkidd@cpe-066-057-017-151.nc.res.rr.com) Quit (Quit: Leaving)
[20:41] <scuttlemonkey> might be fun to find at least one other storage person and have a mini-debate or something :P
[20:41] <loicd> scuttlemonkey: +1
[20:41] <scuttlemonkey> Swift?
[20:41] <mourgaya> scuttlemonkey: yes
[20:41] <scuttlemonkey> ExtremeFS?
[20:41] <loicd> all of them
[20:42] <scuttlemonkey> k, lemme get Danielle working on logistics and see if we can pull it together
[20:42] <scuttlemonkey> if you know folks who might be interested send them my way
[20:43] <mourgaya> what about meetup in asia?
[20:43] <scuttlemonkey> speaking of meetups...would it be useful to have a "drinkup" after every ceph day?
[20:44] <loicd> I'm not aware of any so far mourgaya
[20:44] <scuttlemonkey> loicd: there are a couple
[20:44] * mcms_ (~mcms@46.225.78.96) has joined #ceph
[20:44] <loicd> scuttlemonkey: where ?
[20:44] <scuttlemonkey> I know the Shanghai one was actually gathering steam
[20:44] <scuttlemonkey> and the Kuala Lumpur guys have had a couple informal ones (although they still haven't added them to the list)
[20:44] <dupont-y> scuttlemonkey : YES !
[20:44] <loicd> ah, there are plans to, yes
[20:45] <scuttlemonkey> dupont-y: ok, I'll make that part of the package and get them on the list here once we nail down locations
[20:45] <loicd> could you maybe send a mail to the community list with the people from Kuala Lumpur in cc for the record ?
[20:45] <loicd> scuttlemonkey: ^
[20:46] <mourgaya> and what about spain?
[20:46] <scuttlemonkey> loicd: I'll do one better, I'll get Dr. Ong to mail the list and try to get him to finally use meetup and the wiki :)
[20:46] <scuttlemonkey> mourgaya: Joao has run a couple in Lisbon
[20:46] <mourgaya> scuttlemonkey: great!
[20:46] <scuttlemonkey> that's as close as we have gotten to Spain I think
[20:46] <loicd> scuttlemonkey: he did ?
[20:46] <scuttlemonkey> http://www.meetup.com/Ceph-Lisbon/
[20:46] <joao> I ran *one*
[20:46] <scuttlemonkey> and one before meetup
[20:47] <scuttlemonkey> joao: didn't you guys have a drinkup when Taco first got hired?
[20:47] <scuttlemonkey> or did that never get off the ground?
[20:47] <joao> in Lisbon? nope
[20:47] <scuttlemonkey> ahh
[20:47] <mourgaya> so there is not enough communication or information on it!
[20:47] <scuttlemonkey> shame
[20:47] <loicd> dam, I forgot about this one, my apologies joao
[20:47] <joao> I organized a meetup in Lisbon, but due to lack of interest I dropped it
[20:47] <scuttlemonkey> joao: you know what this means, right?
[20:47] <scuttlemonkey> time to try again! :)
[20:47] <joao> eh
[20:47] <mourgaya> joao: it was at bruxelle!
[20:48] <joao> ah
[20:48] * mcms (~mcms@46.224.106.218) Quit (Read error: Operation timed out)
[20:48] <joao> mourgaya, yes, that one I attended
[20:48] <joao> I guess I got my wires crossed
[20:48] <mourgaya> :-)
[20:48] <joao> thought we were talking about Lisbon
[20:49] <scuttlemonkey> think there would be more interest inland?
[20:49] <scuttlemonkey> maybe Badajoz?
[20:49] <mourgaya> What is the current channel to announce meetups?
[20:49] <scuttlemonkey> mourgaya: wiki/lists is all really
[20:49] <loicd> ceph-user
[20:49] <scuttlemonkey> if it's coming up I'm happy to push it on twitter/facebook/g+
[20:50] <scuttlemonkey> someone would just have to poke me w/ a semi-sharp stick
[20:50] <mourgaya> scuttlemonkey: great
[20:50] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[20:50] <joao> I'd happily attend a meetup in spain
[20:51] <scuttlemonkey> anyone want to start the ball rolling on that and see who might be willing to attend one
[20:51] <scuttlemonkey> ?
[20:52] <joao> I'm not comfortable organizing anything beyond Lisbon
[20:52] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[20:52] <joao> let alone in spain
[20:52] <loicd> :-)
[20:52] <scuttlemonkey> hehe
[20:52] <scuttlemonkey> ok, tell you what
[20:53] <scuttlemonkey> I'm due for some list writeups...I'll push existing meetups and throw out some potential new sites and see if anyone bites
[20:53] <mourgaya> joao: try with a ceph sangria party!
[20:53] * adamcrume_ (~quassel@2601:9:6680:47:1954:4d59:7b88:2d33) has joined #ceph
[20:53] <loicd> mourgaya: dupont-y should we do something during the OpenStack summit in Paris ?
[20:53] <mourgaya> loicd: I will be there !
[20:53] <loicd> there are plans for a Ceph Summit (it has been submitted, not sure it will be accepted) but it's a user centric thing
[20:53] <scuttlemonkey> loicd: I'll most likely be there for that as well
[20:53] <dupont-y> loicd: I'll probably go there too
[20:54] <mourgaya> loicd: let's think about this!
[20:54] <scuttlemonkey> there were >50 ceph talks submitted...I think only 1 got accepted
[20:54] <xarses> =(
[20:54] <scuttlemonkey> so we should definitely do _something_ to foster the interest
[20:54] <mourgaya> only one?
[20:54] <scuttlemonkey> since there were so many interested people
[20:54] <scuttlemonkey> yep
[20:55] <scuttlemonkey> I think it was 1 ceph talk and a couple of panels w/ Ceph people
[20:55] <scuttlemonkey> unless I missed something
[20:55] <scuttlemonkey> (which is entirely possible)
[20:56] * mcms_ (~mcms@46.225.78.96) Quit (Ping timeout: 480 seconds)
[20:56] <mourgaya> ok, then let's finish this meetup with the last point: rados gateway use cases!
[20:57] <scuttlemonkey> mourgaya: just "what are the use cases?"
[20:57] <loicd> dupont-y: you are using it, right ?
[20:57] <mourgaya> scuttlemonkey: I would like to find some!
[20:57] <dupont-y> loicd: not yet
[20:57] <scuttlemonkey> https://www.pcextreme.nl/beta/objects/en/
[20:57] <dupont-y> just rbd for the moment
[20:57] <scuttlemonkey> https://www.dreamhost.com//cloud/storage/
[20:57] <mourgaya> I use it
[20:58] * adamcrume (~quassel@2601:9:6680:47:bd1f:a39d:fcfb:7ee4) Quit (Ping timeout: 480 seconds)
[20:58] <dupont-y> loicd: but just because I don't use it doesn't mean I don't see the use case ...
[20:58] <scuttlemonkey> those are the two banner use cases for public consumption
[20:58] <mourgaya> :-)
[20:58] <loicd> :-)
[20:58] * vbellur (~vijay@122.166.179.14) Quit (Read error: Operation timed out)
[20:59] <dupont-y> I wanted to use rgw for owncloud.
[20:59] <mourgaya> do you know someone who would want to talk about this at these companies?
[20:59] <dupont-y> But I need 2 access paths to the data: one is an HTTP interface, so owncloud (which can use RGW), but also Samba access. So no RGW here.
[21:00] <scuttlemonkey> mourgaya: pcextreme is Wido's company
[21:00] <mourgaya> ok
[21:00] <scuttlemonkey> he gave a talk at Ceph Day Frankfurt that I think we captured on video
[21:00] <scuttlemonkey> http://youtu.be/k9gmRe5qxKU
[21:00] <mourgaya> I think it is worth showcasing a use case for the rados gateway!
[21:01] <scuttlemonkey> and slides: http://www.slideshare.net/Inktank_Ceph/building-auroraobjects
[21:01] <mourgaya> scuttlemonkey: thanks
[21:01] <loicd> mourgaya: thanks for organizing this :-)
[21:01] * loicd runs to a meeting
[21:01] <scuttlemonkey> I have one thing I'd like to add
[21:01] * scuttlemonkey waves to loicd
[21:02] <mourgaya> loicd: you are welcome !
[21:02] <scuttlemonkey> next week I'll be publishing a big chunk of data that will be moving from /docs/master/ to the wiki
[21:02] <scuttlemonkey> and a bunch of things I'd like to add to the wiki
[21:02] <scuttlemonkey> I could _really_ use some community help adding content to the wiki
[21:02] <scuttlemonkey> FAQs, procedural doc, etc
[21:02] <loicd> scuttlemonkey: how blocking are the pain points I sent you ?
[21:03] * loicd waiting for the meeting to start...
[21:03] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[21:03] <scuttlemonkey> loicd: yeah, there are things that could be smoother
[21:03] <scuttlemonkey> but I hadn't heard anyone say they couldn't contribute
[21:03] <scuttlemonkey> (sorry I haven't responded on that, but I did read it!)
[21:03] * zerick (~eocrospom@190.118.28.252) Quit (Ping timeout: 480 seconds)
[21:04] <loicd> I discovered that someone did not contribute because of it and said nothing, just moved on. I wonder what's the actual churn.
[21:04] * rendar (~I@host174-179-dynamic.7-87-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[21:04] <scuttlemonkey> hmm
[21:04] <scuttlemonkey> yeah, I need to revisit that
[21:04] <scuttlemonkey> the login solution is a real bummer
[21:05] * rkdemon (~rkdemon@rrcs-67-79-20-162.sw.biz.rr.com) has joined #ceph
[21:05] <scuttlemonkey> no real action item on that yet
[21:05] <scuttlemonkey> but just wanted to get it out there that a content push is coming
[21:05] <scuttlemonkey> and the beginnings should hit next week
[21:05] <scuttlemonkey> any help is ++
[21:05] <mourgaya> scuttlemonkey: +
[21:06] <mourgaya> :-)
[21:06] <scuttlemonkey> mourgaya: was there anything else?
[21:06] <mourgaya> no that is all
[21:06] <scuttlemonkey> cool beans
[21:06] <scuttlemonkey> thanks for organizing mourgaya
[21:06] <mourgaya> thank you for all participants!
[21:06] * zerick (~eocrospom@190.118.28.252) has joined #ceph
[21:06] <dupont-y> yes, thanks eric
[21:06] * rendar (~I@host174-179-dynamic.7-87-r.retail.telecomitalia.it) has joined #ceph
[21:06] <erice> thanks for hosting this.
[21:07] <mourgaya> let's have a drink at the cephdays at Paris
[21:07] <scuttlemonkey> you bet :)
[21:07] <mourgaya> no I am sure!
[21:12] * vbellur (~vijay@122.172.250.159) has joined #ceph
[21:14] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) has joined #ceph
[21:14] * JoeJulian (~JoeJulian@shared.gaealink.net) has joined #ceph
[21:14] * mourgaya (~kvirc@233.50.84.79.rev.sfr.net) Quit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
[21:14] <scalability-junk> Oh drinks... always good :P
[21:14] * mourgaya (~kvirc@233.50.84.79.rev.sfr.net) has joined #ceph
[21:16] * mourgaya (~kvirc@233.50.84.79.rev.sfr.net) Quit ()
[21:20] * vbellur (~vijay@122.172.250.159) Quit (Ping timeout: 480 seconds)
[21:26] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) Quit (Quit: Leaving.)
[21:30] * vbellur (~vijay@122.167.213.68) has joined #ceph
[21:34] * zerick (~eocrospom@190.118.28.252) Quit (Ping timeout: 480 seconds)
[21:39] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) has joined #ceph
[21:39] * houkouonchi-dc (~sandon@gw.sepia.ceph.com) Quit (Read error: Connection reset by peer)
[21:39] * JCL (~JCL@2601:9:5980:39b:4825:9364:c857:1693) Quit (Quit: Leaving.)
[21:41] * madkiss (~madkiss@46.115.160.87) has joined #ceph
[21:41] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) Quit ()
[21:44] * zerick (~eocrospom@190.118.30.195) has joined #ceph
[21:45] * JCL (~JCL@2601:9:5980:39b:b52d:4cf:f1d0:f293) has joined #ceph
[21:52] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[21:53] * sleinen1 (~Adium@2001:620:0:68::103) has joined #ceph
[22:00] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[22:01] * rwheeler (~rwheeler@173.48.207.57) has joined #ceph
[22:01] * vbellur (~vijay@122.167.213.68) Quit (Ping timeout: 480 seconds)
[22:02] * rkdemon (~rkdemon@rrcs-67-79-20-162.sw.biz.rr.com) Quit (Ping timeout: 480 seconds)
[22:02] * sleinen1 (~Adium@2001:620:0:68::103) Quit (Ping timeout: 480 seconds)
[22:05] * rkdemon (~rkdemon@12.250.199.170) has joined #ceph
[22:08] * BManojlovic (~steki@95.180.4.243) Quit (Ping timeout: 480 seconds)
[22:12] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[22:12] * vbellur (~vijay@122.178.251.4) has joined #ceph
[22:13] * rkdemon (~rkdemon@12.250.199.170) Quit (Ping timeout: 480 seconds)
[22:15] * jobewan (~jobewan@snapp.centurylink.net) has joined #ceph
[22:22] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) has joined #ceph
[22:26] * Jeff10 (~oftc-webi@WE.ST.HMC.Edu) has joined #ceph
[22:26] * vbellur (~vijay@122.178.251.4) Quit (Ping timeout: 480 seconds)
[22:26] * kfei (~root@61-227-13-158.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[22:32] * BManojlovic (~steki@77.243.20.79) has joined #ceph
[22:33] * jobewan (~jobewan@snapp.centurylink.net) Quit (Quit: Leaving)
[22:33] * jobewan (~jobewan@snapp.centurylink.net) has joined #ceph
[22:34] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[22:35] * jobewan (~jobewan@snapp.centurylink.net) Quit ()
[22:35] <carmstrong> do all of the hosts in the cluster need the gateway instances added to ceph.conf with [client.radosgw.{instance-name}]?
[22:38] * kfei (~root@114-27-88-249.dynamic.hinet.net) has joined #ceph
[22:39] * rkdemon (~rkdemon@pool-71-244-62-208.dllstx.fios.verizon.net) has joined #ceph
[22:40] <carmstrong> also, confused by the "data directory" for the radosgw - is it storing state? I figured it was stateless, since it's backed by the actual cluster
[22:46] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) Quit (Quit: Leaving.)
[22:49] <devicenull> my mons have all decided to fill up their hard drives with logs
[22:49] <devicenull> yay :/
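A hedged first-aid for monitors flooding their disks with logs is to drop the debug levels at runtime and rotate the logs (the mon id below assumes the usual hostname-based naming, and the logrotate config path is the one shipped by the ceph packages; adjust as needed):

    # on each mon host, lower the noisiest debug subsystems via the admin socket
    sudo ceph daemon mon.$(hostname -s) config set debug_mon 1/5
    sudo ceph daemon mon.$(hostname -s) config set debug_paxos 1/5
    # then force a rotation of the ceph logs to reclaim space
    sudo logrotate -f /etc/logrotate.d/ceph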
[22:52] <tab_> hey, can someone tell me how the ceph OSD daemon declares that a disk has failed/is dead? Does it write to some disk location, run an ls command, or something else?
[22:54] <absynth__> it checks if writes that _should_ have been successful can be read back, i think
[22:56] <tab_> so the OSD daemon does some data writes to disk and calculates a crc or something over the file
[22:56] * markbby1 (~Adium@168.94.245.2) Quit (Quit: Leaving.)
[22:56] <tab_> i guess it does not write something big, just a few bytes
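Nobody picked this up in-channel, but roughly speaking an OSD does not probe the disk so much as bail out when the backing filestore returns I/O errors, and the monitors mark it down/out when its peers stop getting heartbeat replies. A hedged sketch of the knobs involved (values shown are approximate defaults for releases of that era):

    # ceph.conf
    filestore fail eio = true          # OSD asserts/exits when the backing FS returns EIO
    osd heartbeat grace = 20           # seconds without heartbeat replies before peers report an OSD down
    mon osd down out interval = 300    # seconds a down OSD waits before being marked out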
[22:57] <dmick> carmstrong: the [client.radosgw.<instance>] stanza will only be used by that radosgw instance
[22:57] <carmstrong> dmick: ok. so it doesn't need to be in the ceph.conf of any of the other instances
[22:57] <dmick> but it will save years from your life if you treat ceph.conf as a shared identical resource
[22:57] <carmstrong> but it can, it just won't be used
[22:57] <carmstrong> ok :)
[22:57] <carmstrong> that's what I'm doing now - I write it one place (in etcd), and it's templated out on all machines
[22:58] <carmstrong> so including that stanza on all hosts is a lot easier for me
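For reference, the stanza being discussed typically looks something like this (instance name, host and paths are placeholders; as dmick says, only the radosgw whose name matches the section actually reads it):

    [client.radosgw.gateway]
    host = gwhost1
    keyring = /etc/ceph/ceph.client.radosgw.keyring
    rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
    log file = /var/log/ceph/radosgw.log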
[23:04] * dgbaley27 (~matt@ucb-np1-206.colorado.edu) has joined #ceph
[23:07] * johntwilkins (~john@c-50-131-97-162.hsd1.ca.comcast.net) has joined #ceph
[23:07] * madkiss (~madkiss@46.115.160.87) Quit (Ping timeout: 480 seconds)
[23:12] * PureNZ (~paul@122-62-45-132.jetstream.xtra.co.nz) has joined #ceph
[23:21] * PureNZ (~paul@122-62-45-132.jetstream.xtra.co.nz) Quit (Ping timeout: 480 seconds)
[23:23] * Sysadmin88 (~IceChat77@2.124.167.78) has joined #ceph
[23:28] * bandrus (~Adium@216.57.72.205) Quit (Quit: Leaving.)
[23:29] * bandrus (~Adium@216.57.72.205) has joined #ceph
[23:30] <carmstrong> dmick: do you happen to know if the gateway stores any stateful data to the local filesystem? still not sure what the "data directory" stores
[23:34] <JayJ__> I need help figuring out why "nova volume-detach" leaves the cinder volume in the "detaching" state forever. Cinder backend is Ceph. There are no errors in any logs. The rbd command shows the volume still in the Ceph pool. It appears that the only way to detach is to terminate the instance. Any thoughts folks?
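One hedged recovery path for a volume wedged in "detaching" (after checking the nova-compute and cinder-volume logs; the volume id is a placeholder and 'volumes' is assumed to be the usual cinder RBD pool name) is to reset the Cinder state and retry the detach:

    # reset the stuck volume back to 'available' (admin credentials required)
    cinder reset-state --state available <volume-id>
    # confirm the backing RBD image is still there
    rbd -p volumes info volume-<volume-id>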
[23:43] * linuxkidd (~linuxkidd@rtp-isp-nat1.cisco.com) Quit (Read error: Connection reset by peer)
[23:48] * dmick (~dmick@2607:f298:a:607:649e:a8fc:9d86:f096) Quit (Ping timeout: 480 seconds)
[23:50] <rkdemon> Good evening y'all.
[23:50] <rkdemon> I needed to install calamari on a ubuntu trusty system and all the guides I see online are for calamari on 12.0x
[23:50] <rkdemon> Any pointers on the best guide for getting calamari to work with my ceph cluster on ubuntu 14?
[23:55] <tab_> Does swift treats chunked data, which is written to the same container, as each would be it's own object, meaning it would write them to different disks? Some sort of striping chunked data to disks... ?
[23:56] <tab_> never mind question, wrong window :)
[23:58] * dmick (~dmick@2607:f298:a:607:f502:aaa7:2b88:838a) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.