#ceph IRC Log


IRC Log for 2016-08-23

Timestamps are in GMT/BST.

[0:05] * fsimonce (~simon@host203-44-dynamic.183-80-r.retail.telecomitalia.it) Quit (Remote host closed the connection)
[0:13] * squizzi (~squizzi@107.13.237.240) Quit (Quit: bye)
[0:14] * bene2 (~bene@2601:193:4101:f410:ea2a:eaff:fe08:3c7a) Quit (Ping timeout: 480 seconds)
[0:16] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[0:16] * sudocat1 (~dibarra@192.185.1.20) Quit (Read error: Connection reset by peer)
[0:30] <jiffe> so I'm curious: when I take an osd out, it goes all out moving things around for a while but gradually slows down. When there's not much left to move it just crawls, and that last bit seems to take a long time
[0:32] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[0:33] * Pulp (~Pulp@63-221-50-195.dyn.estpak.ee) Quit (Read error: Connection reset by peer)
[0:34] <jiffe> the reason I think I preferred raid to one osd per drive is that the recovery time of raid was much quicker than ceph's recovery
[0:34] <jiffe> 6 hours with raid vs a couple days with ceph
[0:36] * komljen (~chatzilla@217.197.142.111) Quit (Ping timeout: 480 seconds)
[0:42] * xarses_ (~xarses@64.124.158.32) has joined #ceph
[0:49] * andreww (~xarses@64.124.158.32) Quit (Ping timeout: 480 seconds)
[0:50] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[0:53] * srk (~Siva@2605:6000:ed04:ce00:a1df:47d6:84b9:4850) has joined #ceph
[0:53] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit ()
[1:07] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) Quit (Ping timeout: 480 seconds)
[1:09] <wak-work> jiffe, you can tune recovery to be much faster if you are just using the defaults
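[Editor's note: a minimal sketch of the kind of tuning wak-work refers to, assuming jewel-era option names; the values are illustrative, not recommendations. Raising them speeds recovery at the cost of client I/O.]
    # inspect current values on a running osd (run on that osd's host)
    ceph daemon osd.0 config show | grep -E 'osd_max_backfills|osd_recovery'
    # raise backfill/recovery concurrency cluster-wide at runtime
    ceph tell osd.* injectargs '--osd-max-backfills 4 --osd-recovery-max-active 8'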
[1:09] * kuku (~kuku@119.93.91.136) has joined #ceph
[1:14] * oms101 (~oms101@p20030057EA321500C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:19] * owasserm (~owasserm@a212-238-239-152.adsl.xs4all.nl) has joined #ceph
[1:20] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[1:23] * oms101 (~oms101@p20030057EA69E600C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:24] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[1:24] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:36] * srk_ (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[1:42] * srk (~Siva@2605:6000:ed04:ce00:a1df:47d6:84b9:4850) Quit (Ping timeout: 480 seconds)
[1:45] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[1:47] * wushudoin (~wushudoin@2601:646:8281:cfd:2ab2:bdff:fe0b:a6ee) Quit (Ping timeout: 480 seconds)
[1:47] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[1:47] * bvi (~Bastiaan@185.56.32.1) Quit (Quit: Leaving)
[1:49] * srk_ (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[1:52] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[1:54] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[1:57] * xarses_ (~xarses@64.124.158.32) Quit (Ping timeout: 480 seconds)
[2:02] * zigo (~quassel@182.54.233.6) Quit (Remote host closed the connection)
[2:03] * zigo (~quassel@gplhost-3-pt.tunnel.tserv18.fra1.ipv6.he.net) has joined #ceph
[2:09] <jiffe> I may have to look into what I can tweak
[2:09] <jiffe> so after everything recovered I have 1 PG still in down+incomplete
[2:10] <jiffe> doing a query on that pg I see peering_blocked_by_detail says peering_blocked_by_history_les_bound
[2:11] * mhack (~mhack@24-151-36-149.dhcp.nwtn.ct.charter.com) Quit (Remote host closed the connection)
[2:14] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[2:17] <jiffe> so from what I'm reading it doesn't sound like I can fix this without adding a new osd with the same id as the one this pg is trying to peer with
[2:21] <jiffe> I see someone recommending `ceph osd lost [ID]`, is that safe to do?
[2:23] * Jeffrey4l_ (~Jeffrey@110.252.71.112) has joined #ceph
[2:35] * blizzow (~jburns@50.243.148.102) Quit (Remote host closed the connection)
[2:36] <jiffe> this is the query output for the pg I am trying to fix: http://nsab.us/public/ceph
[2:36] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[2:39] <jiffe> from the looks of it this pg is up on two other osds so I'm not sure what it's trying to do with the osd I just took out
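[Editor's note: for reference, the command jiffe asks about above is shown below; the osd id 29 comes from his later message at 14:46. Marking an osd lost lets peering proceed without that osd's copy of the pg history, but it can acknowledge real data loss, so it is a last resort.]
    ceph osd lost 29 --yes-i-really-mean-it   # flag is required; destructive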
[2:39] * chunmei (~chunmei@134.134.139.82) Quit (Remote host closed the connection)
[2:50] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[2:58] * BrianA1 (~BrianA@c-24-130-77-13.hsd1.ca.comcast.net) has joined #ceph
[2:58] * BrianA1 (~BrianA@c-24-130-77-13.hsd1.ca.comcast.net) has left #ceph
[3:02] * yanzheng (~zhyan@125.70.21.51) has joined #ceph
[3:12] * Vacuum__ (~Vacuum@88.130.193.126) has joined #ceph
[3:12] * dnunez (~dnunez@209-6-91-147.c3-0.smr-ubr1.sbo-smr.ma.cable.rcn.com) Quit (Quit: Leaving)
[3:18] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[3:18] * Vacuum_ (~Vacuum@88.130.210.59) Quit (Ping timeout: 480 seconds)
[3:22] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Quit: jermudgeon)
[3:26] * thetrav (~thetrav@203.35.9.142) has joined #ceph
[3:31] * kefu (~kefu@114.92.101.38) has joined #ceph
[3:35] <thetrav> I'm trying to set up ceph radosgw to integrate with keystone. I'm getting "Unauthorized" as an error message whenever I invoke the openstack APIs, however all I can see in the ceph log is "keystone auth". How do I increase the verbosity of the radosgw log?
[3:37] * EinstCrazy (~EinstCraz@211-72-118-98.HINET-IP.hinet.net) has joined #ceph
[3:37] <thetrav> also there is mention of with-nss and having to convert some certificates; however, my keystone install uses http, so there are no certificates to convert... Anyone know if that will break things?
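[Editor's note: a hedged sketch of the jewel-era radosgw options in play in this exchange; the section name matches the doc thetrav links, and the url, token, and roles are placeholders, not a verified config.]
    [client.radosgw.gateway]
    debug rgw = 20                                      # raise radosgw log verbosity
    rgw keystone url = http://keystone.example.com:35357
    rgw keystone admin token = <shared-admin-token>     # admin-token auth, as discussed below
    rgw keystone accepted roles = Member, admin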
[3:39] * georgem (~Adium@107-179-157-134.cpe.teksavvy.com) has joined #ceph
[3:39] * TehZomB (~Silentspy@185.3.135.154) has joined #ceph
[3:41] * jfaj__ (~jan@p4FC24EAA.dip0.t-ipconnect.de) has joined #ceph
[3:42] * georgem (~Adium@107-179-157-134.cpe.teksavvy.com) Quit ()
[3:42] * sebastian-w_ (~quassel@212.218.8.138) Quit (Remote host closed the connection)
[3:43] * georgem (~Adium@206.108.127.16) has joined #ceph
[3:43] * sebastian-w (~quassel@212.218.8.138) has joined #ceph
[3:47] * jfaj_ (~jan@p4FC5B053.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[3:48] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[3:54] * srk_ (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[3:57] <thetrav> ok found that, unfortunately it's not helpful :/ doesn't show me what it's sending to keystone, nor what the response is... In fact, looking at the logs for keystone, it doesn't look like ceph is sending anything at all
[3:57] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:f1b4:a64b:aad6:ad14) Quit (Ping timeout: 480 seconds)
[4:00] * vbellur (~vijay@71.234.224.255) has joined #ceph
[4:04] * m8x (~user@182.150.27.112) has joined #ceph
[4:07] * efirs (~firs@98.207.153.155) has joined #ceph
[4:08] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit (Quit: Leaving.)
[4:09] * TehZomB (~Silentspy@61TAABHSB.tor-irc.dnsbl.oftc.net) Quit ()
[4:12] <thetrav> so ... internet looks a lot like ceph => keystone integration basically doesn't work? Has anyone gotten it working?
[4:12] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[4:14] * Arcturus (~Zeis@exit0.liskov.tor-relays.net) has joined #ceph
[4:16] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[4:17] * wkennington (~wakIII@104.132.1.96) has joined #ceph
[4:18] <SamYaple> thetrav: plenty of people have gotten it working
[4:18] <thetrav> ok
[4:18] <thetrav> so
[4:19] <thetrav> http://docs.ceph.com/docs/jewel/radosgw/keystone/ <- implies that I should use the client.radosgw.gateway section
[4:19] <thetrav> some mailing list response says that is ignored, as radosgw is started by the user ceph
[4:20] <thetrav> so, I remove the [client.radosgw.gateway] line, and put all the conf in global, and suddenly I get more logging detail
[4:20] <SamYaple> they did something with radosgw. moved it away from apache2 to something else. but it still should be reading the ceph.conf
[4:21] <SamYaple> right. they moved to civetweb
[4:21] <thetrav> it is, just not that section
[4:22] <thetrav> I suspect my new error is around the identity v2 vs v3 apis
[4:22] <SamYaple> should be [client.rgw.gateway] no?
[4:23] <SamYaple> anyway. v3 is working. you need a minimum of jewel
[4:24] <thetrav> I have jewel
[4:24] <thetrav> client.rgw.gateway is different to the doc I linked above
[4:24] <thetrav> also, I found on another mailing list somewhere that it uses admintoken auth
[4:24] <thetrav> which is likely disabled on this openstack
[4:24] * thetrav checks
[4:25] <SamYaple> you shouldnt be using admintoken anyway
[4:25] * srk_ (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[4:25] <thetrav> is there an alternative?
[4:25] <thetrav> the ceph docs appear to be silent on the matter
[4:25] <SamYaple> i don't think so. sorry, wasn't trying to be unhelpful
[4:25] <SamYaple> just want that gone from keystone
[4:25] <thetrav> right. well, in principle I totally agree with you
[4:26] <thetrav> but in reality... I want my object storage
[4:26] <SamYaple> in this case, yes i believe you have to use the admin token to get this to work. but the admin token is going away in Octavia I believe
[4:26] <SamYaple> might be Newton, though i doubt it
[4:27] <thetrav> hmm, well it looks like it's enabled anyway
[4:27] * wkennington (~wakIII@104.132.1.96) Quit (Quit: Leaving)
[4:27] <thetrav> must be some other problem
[4:27] <SamYaple> it is by default
[4:27] <SamYaple> before mitaka it was the only way to bootstrap
[4:27] <SamYaple> but that's beside the point
[4:27] <thetrav> so keystone is http rather than https
[4:27] * wkennington (~wakIII@104.132.1.96) has joined #ceph
[4:27] <thetrav> that could be related
[4:27] <SamYaple> it should be as straightforward as getting those options populated and in the correct section
[4:28] <SamYaple> http vs https shouldnt matter here
[4:28] <SamYaple> mind you, this is not an ideal implementation even when you do manage to get it to work
[4:28] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[4:28] <SamYaple> you won't be able to have two projects (even in different domains) create the same-named container
[4:28] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[4:29] <SamYaple> it's best thought of as a single container for all of openstack, segregated via ACLs
[4:29] <thetrav> hmm, maybe it's not enabled
[4:30] <thetrav> admin_token is in the pipeline in paste-config, but there's no admin_token mentioned in keystone.conf
[4:30] <SamYaple> then it defaults to None (and is basically disabled)
[4:31] <thetrav> in which case ceph will likely never work with this setup >_<
[4:31] <SamYaple> correct
[4:31] <SamYaple> well never is a long ways away
[4:31] <SamYaple> not until the next LTS at the soonest, i would say, is reasonable
[4:32] <SamYaple> considering the scope of work hasnt even been defined
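[Editor's note: for completeness, jewel's radosgw also documents a keystone v3 service-user mode that avoids the shared admin token; a hedged sketch with placeholder credentials, not a verified config.]
    rgw keystone api version = 3
    rgw keystone admin user = rgw
    rgw keystone admin password = <secret>
    rgw keystone admin domain = default
    rgw keystone admin project = service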
[4:32] * Vaelatern (~Vaelatern@cvgateway.utdallas.edu) Quit (Ping timeout: 480 seconds)
[4:35] * tgmedia (~tom@202.14.217.2) has joined #ceph
[4:35] * tgmedia (~tom@202.14.217.2) Quit ()
[4:37] * adamcrume (~quassel@2601:647:cb01:f890:6869:1e7a:8ba4:857b) Quit (Quit: No Ping reply in 180 seconds.)
[4:38] * tgmedia (~tom@202.14.217.2) has joined #ceph
[4:38] * adamcrume (~quassel@2601:647:cb01:f890:a288:69ff:fe70:6caa) has joined #ceph
[4:41] * Vaelatern (~Vaelatern@cvgateway.utdallas.edu) has joined #ceph
[4:41] <tgmedia> hi guys, quick question in regards to ceph -> rbd -> quotas per pool. I'd like to set a quota for the max_bytes of a pool so that I can limit the amount a ceph client can use. This is all working; however, if data gets written into the rbd-mapped device (/dev/rbd0 --> /mnt/rbd0) and the pool reaches its capacity (full), it sets the ceph cluster health to WARN and reports that the pool is full. The write operation on the client stops and hangs after that.
[4:41] <tgmedia> increasing the quota of the pool clears the cluster health warning
[4:42] <tgmedia> but the write is stuck forever
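[Editor's note: a minimal sketch of the pool-quota commands in question; the pool name and size are placeholders. Clients block rather than fail when a pool hits its quota, which matches the stuck write tgmedia describes.]
    ceph osd pool set-quota rbd max_bytes 107374182400   # 100 GiB cap
    ceph osd pool get-quota rbd                          # show current quota
    ceph osd pool set-quota rbd max_bytes 0              # 0 removes the quota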
[4:44] * Arcturus (~Zeis@exit0.liskov.tor-relays.net) Quit ()
[4:49] * jermudgeon (~jhaustin@199.200.6.73) has joined #ceph
[4:52] * srk_ (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[4:55] * davidzlap (~Adium@rrcs-74-87-213-28.west.biz.rr.com) has joined #ceph
[4:58] * Vaelatern (~Vaelatern@cvgateway.utdallas.edu) Quit (Ping timeout: 480 seconds)
[5:04] * sankarshan (~sankarsha@121.244.87.116) has joined #ceph
[5:05] * wjw-freebsd3 (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[5:08] * Vaelatern (~Vaelatern@cvgateway.utdallas.edu) has joined #ceph
[5:17] * yuelongguang_ (~chatzilla@114.134.84.144) has joined #ceph
[5:20] * mollstam (~galaxyAbs@46.166.188.229) has joined #ceph
[5:22] * yuelongguang (~chatzilla@114.134.84.144) Quit (Ping timeout: 480 seconds)
[5:22] * yuelongguang_ is now known as yuelongguang
[5:23] * wkennington (~wakIII@104.132.1.96) Quit (Ping timeout: 480 seconds)
[5:28] * chunmei (~chunmei@134.134.137.73) has joined #ceph
[5:30] * FashyAttr0x (~FashyAttr@ip72-213-169-219.ok.ok.cox.net) has joined #ceph
[5:30] * sankarshan (~sankarsha@121.244.87.116) Quit (Quit: Are you sure you want to quit this channel (Cancel/Ok) ?)
[5:31] * masber (~masber@129.94.15.152) Quit (Read error: Connection reset by peer)
[5:32] * sankarshan (~sankarsha@121.244.87.116) has joined #ceph
[5:34] * Mikko (~Mikko@109-108-30-118.bb.dnainternet.fi) Quit (Quit: This computer has gone to sleep)
[5:34] * kefu (~kefu@114.92.101.38) Quit (Max SendQ exceeded)
[5:35] * kefu (~kefu@114.92.101.38) has joined #ceph
[5:38] * Vaelatern (~Vaelatern@cvgateway.utdallas.edu) Quit (Ping timeout: 480 seconds)
[5:38] * EinstCrazy (~EinstCraz@211-72-118-98.HINET-IP.hinet.net) Quit (Quit: Leaving...)
[5:38] * FashyAttr0x (~FashyAttr@ip72-213-169-219.ok.ok.cox.net) Quit (Ping timeout: 480 seconds)
[5:44] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[5:45] * Vacuum_ (~Vacuum@88.130.198.71) has joined #ceph
[5:47] * Mika_c (~Mika@122.146.93.152) has joined #ceph
[5:48] * vimal (~vikumar@114.143.165.227) has joined #ceph
[5:50] * mollstam (~galaxyAbs@46.166.188.229) Quit ()
[5:52] * Vacuum__ (~Vacuum@88.130.193.126) Quit (Ping timeout: 480 seconds)
[5:53] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) Quit (Ping timeout: 480 seconds)
[5:54] * DoDzy (~Da_Pineap@46.166.188.234) has joined #ceph
[5:55] * cyphase_eviltwin (~cyphase@2601:640:c401:969a:468a:5bff:fe29:b5fd) has joined #ceph
[5:57] * cyphase (~cyphase@000134f2.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:03] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[6:08] * vimal (~vikumar@114.143.165.227) Quit (Quit: Leaving)
[6:08] * [0x4A6F]_ (~ident@p508CD4FE.dip0.t-ipconnect.de) has joined #ceph
[6:10] * [0x4A6F] (~ident@0x4a6f.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:10] * [0x4A6F]_ is now known as [0x4A6F]
[6:20] * srk_ (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[6:24] * DoDzy (~Da_Pineap@46.166.188.234) Quit ()
[6:25] * jermudgeon (~jhaustin@199.200.6.73) Quit (Quit: jermudgeon)
[6:27] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[6:30] * EinstCrazy (~EinstCraz@211-72-118-98.HINET-IP.hinet.net) has joined #ceph
[6:31] * vimal (~vikumar@121.244.87.116) has joined #ceph
[6:36] * chunmei (~chunmei@134.134.137.73) Quit (Ping timeout: 480 seconds)
[6:47] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[6:59] * thetrav (~thetrav@203.35.9.142) Quit (Read error: Connection reset by peer)
[6:59] * kefu (~kefu@114.92.101.38) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[7:05] * brad- (~Brad@TMA-1.brad-x.com) has joined #ceph
[7:11] * brad[] (~Brad@TMA-1.brad-x.com) Quit (Ping timeout: 480 seconds)
[7:13] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[7:14] * swami1 (~swami@49.44.57.239) has joined #ceph
[7:14] <ivve> is there any benefit/point at all to having ssd journals on sata drives that run an erasure pool?
[7:16] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Quit: Leaving)
[7:24] * epicguy (~epicguy@41.164.8.42) has joined #ceph
[7:32] * Mikko (~Mikko@dfs61tydv6d0m267n35xt-3.rev.dnainternet.fi) has joined #ceph
[7:34] * Mikko (~Mikko@dfs61tydv6d0m267n35xt-3.rev.dnainternet.fi) Quit ()
[7:42] * davidzlap (~Adium@rrcs-74-87-213-28.west.biz.rr.com) Quit (Quit: Leaving.)
[7:50] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) has joined #ceph
[8:00] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[8:02] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[8:04] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) Quit (Read error: Connection reset by peer)
[8:04] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) has joined #ceph
[8:05] <ivve> is there any benefit/point at all to having ssd journals on sata drives that run an erasure pool?
[8:15] * post-factum (~post-fact@vulcan.natalenko.name) Quit (Killed (NickServ (Too many failed password attempts.)))
[8:15] * post-factum (~post-fact@vulcan.natalenko.name) has joined #ceph
[8:16] * i_m (~ivan.miro@88.206.123.152) has joined #ceph
[8:22] <IcePic> yes, EC pools are slow in writing, mostly since they need to wait for N+M drives to ack their parts, so having a journal accept incoming data would be a win
[8:22] * bviktor (~bviktor@213.16.80.50) has joined #ceph
[8:22] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[8:22] <Be-El> men
[8:22] <IcePic> "k+m"
[8:23] <Be-El> oops, wrong window
[8:23] <IcePic> or, have a cache pool in front of the EC pool
[8:29] * karnan (~karnan@121.244.87.117) has joined #ceph
[8:39] * ade (~abradshaw@p4FF798C9.dip0.t-ipconnect.de) has joined #ceph
[8:46] * EinstCrazy (~EinstCraz@211-72-118-98.HINET-IP.hinet.net) Quit (Read error: Connection reset by peer)
[8:46] <ivve> i see
[8:47] <ivve> well i have a cache pool in front
[8:47] <ivve> however im wondering if both are overkill
[8:48] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[8:50] * saintpablo (~saintpabl@gw01.mhitp.dk) has joined #ceph
[8:54] <ivve> ceph osd erasure-code-profile get isa_k2_m1 ||| directory=/usr/lib64/ceph/erasure-code k=2 m=1 plugin=isa ruleset-failure-domain=host ruleset-root=sata technique=reed_sol_van
[8:54] <ivve> shouldn't this work if i have 3 nodes with 27 osds?
[8:55] <ivve> my cluster doesn't want to create the pgs.. tried with 1024 pgs on 27x1tb
[8:58] <ivve> works with the default profile ... grrr
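[Editor's note: a sketch of how a profile like the one ivve pasted would be created and used, assuming the isa plugin is installed; the pre-luminous ruleset-* option names match the paste above.]
    ceph osd erasure-code-profile set isa_k2_m1 \
        k=2 m=1 plugin=isa technique=reed_sol_van \
        ruleset-failure-domain=host ruleset-root=sata
    ceph osd erasure-code-profile get isa_k2_m1
    ceph osd pool create isa_ec_pool 1024 1024 erasure isa_k2_m1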
[9:01] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[9:01] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[9:02] <ivve> i think i know what it might be.. my tree uses root for this profile
[9:06] * swami2 (~swami@49.38.3.191) has joined #ceph
[9:08] * krypto (~krypto@106.51.24.252) has joined #ceph
[9:11] * swami1 (~swami@49.44.57.239) Quit (Ping timeout: 480 seconds)
[9:14] * kutija (~kutija@89.216.27.139) has joined #ceph
[9:17] * raphaelsc (~raphaelsc@177.42.73.142) Quit (Remote host closed the connection)
[9:18] * saintpablo (~saintpabl@gw01.mhitp.dk) Quit (Quit: Leaving)
[9:19] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[9:20] * analbeard (~shw@support.memset.com) has joined #ceph
[9:23] * swami2 (~swami@49.38.3.191) Quit (Read error: Connection timed out)
[9:24] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[9:24] * EinstCrazy (~EinstCraz@101.78.195.62) has joined #ceph
[9:25] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) Quit (Quit: doppelgrau)
[9:25] * swami1 (~swami@49.38.3.191) has joined #ceph
[9:25] <IcePic> ivve: don't know if two layers would help. The ceph docs were a bit like "this is hard to bench, suits certain io patterns" and so on, so I can't help you with qualified guesses
[9:28] * komljen (~chatzilla@217.197.142.111) has joined #ceph
[9:28] * chunmei (~chunmei@jfdmzpr04-ext.jf.intel.com) has joined #ceph
[9:35] <ivve> understandable
[9:36] <ivve> I was hoping for some experience :)
[9:36] <IcePic> and the hard part could be not "will it make it <just a bit> faster?" but rather "is it worth spending ssd money on"
[9:36] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[9:36] <ivve> yeah
[9:36] <ivve> i have set it up with ssds now
[9:36] <ivve> so i will bench it
[9:36] <ivve> move the journals, bench again
[9:36] <ivve> see what happens
[9:36] <ivve> :)
[9:36] * madkiss (~madkiss@178.115.129.28.wireless.dyn.drei.com) has joined #ceph
[9:42] * kefu (~kefu@114.92.101.38) has joined #ceph
[9:42] * EinstCrazy (~EinstCraz@101.78.195.62) Quit (Ping timeout: 480 seconds)
[9:43] <IcePic> \o/
[9:43] <IcePic> science!
[9:45] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[9:46] <ivve> however i have one thing i don't quite understand. erasure-code failure domain... i set it to disktype (which is hostname-sata) below root, and below the disktypes we have osds directly.. if i set ruleset-failure-domain to disktype i don't get enough osds (even when k+m=3)
[9:49] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[9:50] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[9:51] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[9:53] <ivve> trying to understand it but i'm guessing i don't.. in the docs it says "Ensure that no two chunks are in a bucket with the same failure domain. For instance, if the failure domain is host no two chunks will be stored on the same host. It is used to create a ruleset step such as step chooseleaf host."
[9:53] * Mikko (~Mikko@dfs61tyb5qxwkg9f1qs7y-3.rev.dnainternet.fi) has joined #ceph
[9:53] * chunmei (~chunmei@jfdmzpr04-ext.jf.intel.com) Quit (Ping timeout: 480 seconds)
[9:53] * kefu (~kefu@114.92.101.38) Quit (Max SendQ exceeded)
[9:53] <Be-El> ivve: are the disktype bucket above the host buckets or are they subbuckets of the hosts?
[9:54] <ivve> pastebin inc
[9:54] * kefu (~kefu@114.92.101.38) has joined #ceph
[9:54] <ivve> http://pastebin.com/YBYd98CJ
[9:54] <ivve> this is just part of the tree
[9:55] <ivve> but the relevant one i guess
[9:57] <ivve> this is the erasure profile
[9:57] <Be-El> do you have a second root for the ssds?
[9:57] <ivve> directory=/usr/lib64/ceph/erasure-code
[9:57] <ivve> k=2
[9:57] <ivve> m=1
[9:57] <ivve> plugin=isa
[9:57] <ivve> ruleset-failure-domain=osd
[9:57] <ivve> ruleset-root=sata
[9:57] <ivve> technique=reed_sol_van
[9:57] <ivve> yea
[9:58] <ivve> ill just paste the whole thing
[9:58] <ivve> http://pastebin.com/bXJcSrFT
[9:59] <ivve> but if i change failuredomain to "disktype" which to me sounds like the chunks should go on each host (ie raid5)
[9:59] <ivve> and the two datas on the two other, but separate
[10:00] <ivve> but if i try to create a pool like that it fails to create pgs
[10:00] <ivve> so i'm guessing it wants more osds per host?
[10:00] * thomnico (~thomnico@2a01:e35:8b41:120:f83a:7515:94ca:897f) has joined #ceph
[10:01] * doppelgrau (~doppelgra@132.252.235.172) has joined #ceph
[10:01] <ivve> pool 6 'ssd_cache' replicated size 2 min_size 1 crush_ruleset 2 object_hash rjenkins pg_num 512 pgp_num 512 last_change 518 flags hashpspool,incomplete_clones tier_of 19 cache_mode writeback target_bytes 600000000000 hit_set bloom{false_positive_probability: 0.05, target_size: 0, seed: 0} 600s x1 stripe_width 0
[10:01] <ivve> pool 19 'isa_ec_pool' erasure size 3 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 514 lfor 514 flags hashpspool tiers 6 read_tier 6 write_tier 6 stripe_width 4096
[10:01] <ivve> those are the pools
[10:02] <ivve> so with the current setup, where failuredomain = osd
[10:02] <Be-El> your crush map is not ok
[10:02] <Be-El> you have the same osd at different locations
[10:02] <ivve> oh
[10:02] <Be-El> e.g. osd.0 is under the default and sata root
[10:03] <Be-El> unfortunately you cannot use a single root and a bucket like disktype below the host buckets to differentiate between hdd and ssd
[10:04] <Be-El> the easiest setup is having two distinct roots (e.g. default, ssd), host entries below them, and finally osds below the hosts
[10:04] * DanFoster (~Daniel@2a00:1ee0:3:1337:14cb:7d02:10a7:4a85) has joined #ceph
[10:04] <Be-El> the host bucket names for the second root have to be different from the similar buckets in the first root, e.g. hostname-ssd
[10:05] <Be-El> or you keep your crush tree, but remove the default root to ensure that each entry only shows up once
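[Editor's note: a minimal decompiled-crushmap excerpt illustrating Be-El's suggestion of two distinct roots with per-type host buckets; bucket names, ids, and weights are placeholders.]
    root sata {
        id -10
        alg straw
        hash 0  # rjenkins1
        item node1-sata weight 9.000
        item node2-sata weight 9.000
        item node3-sata weight 9.000
    }
    root ssd {
        id -20
        alg straw
        hash 0  # rjenkins1
        item node1-ssd weight 1.000
        item node2-ssd weight 1.000
        item node3-ssd weight 1.000
    }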
[10:08] <ivve> i understand that
[10:08] <ivve> but shouldn't it be possible to mix pools over storage?
[10:09] * komljen (~chatzilla@217.197.142.111) Quit (Quit: ChatZilla 0.9.92 [Firefox 48.0.1/20160817112116])
[10:09] <ivve> i.e right now i have it set to failuredomain = osd
[10:09] <ivve> and it works, with this setup
[10:10] <ivve> now i was thinking about deleting the default
[10:10] * fsimonce (~simon@host203-44-dynamic.183-80-r.retail.telecomitalia.it) has joined #ceph
[10:10] * b0e (~aledermue@213.95.25.82) has joined #ceph
[10:10] <ivve> but my impression was that it should be able to work, i mean stuff will be written onto disk, its just a matter of space?
[10:13] <Be-El> you want one replica in one kind of storage, and the other replicas in a different kind?
[10:13] <ivve> i mean if it is invalid like this i shouldn't be able to compile the crushmap even?
[10:13] <Be-El> or just different pools with different kinds of backing storage (hdd vs. ssd)?
[10:14] <ivve> yea as an example, not that im going to set it up
[10:14] <ivve> if you look at the roots ssd and sata
[10:14] <ivve> that is mostly how things will work
[10:14] * Mikko (~Mikko@dfs61tyb5qxwkg9f1qs7y-3.rev.dnainternet.fi) Quit (Quit: This computer has gone to sleep)
[10:14] <ivve> disktype really just is a buckettype below host
[10:14] <ivve> using host, rack or even my created disktype shouldn't matter
[10:15] <ivve> as long as it is one level above?
[10:15] <ivve> type 0 osd
[10:15] <ivve> type 1 disktype
[10:15] <ivve> type 2 host
[10:15] <ivve> type 3 chassis
[10:15] <ivve> type 4 rack
[10:15] <ivve> ... and then type 11 root
[10:16] <ivve> okay better example
[10:16] <ivve> say i have 6 disks
[10:16] <ivve> and 2 pools
[10:16] <ivve> 1 pool uses all 6
[10:16] <ivve> 2nd pool just uses 4 of the 6
[10:16] <ivve> there is no other way to do it other than this way?
[10:17] <ivve> well, in the sense that the pool obeys the ruleset when the root is set up in a way like mine...
[10:18] <ivve> maybe i should paste my rules
[10:19] <Be-El> a pool will always distribute its pgs across all entities valid for the crush ruleset. if you want a pool to span a certain number of hosts only, you need a crush ruleset that is able to restrict lookups to these hosts only
[10:19] <Be-El> and the best way to achieve this is adding another layer between root and host, e.g. rack, chassis
[10:20] <ivve> yeah, my way was to create a new type, disktype between osd and host
[10:21] <ivve> http://pastebin.com/aRZkyLxy
[10:21] <ivve> maybe the rule is incorrect?
[10:21] <Be-El> you need to fix the crush tree first and ensure that every osd has a single path only
[10:22] <ivve> so back to the root, that is the problem that is causing failuredomain = disktype not to work in erasure?
[10:22] <ivve> ill try it
[10:24] * Mikko (~Mikko@dfs61tydyv9rycr3wjgty-3.rev.dnainternet.fi) has joined #ceph
[10:30] * MrFusion (~ryan@d207-81-7-44.bchsia.telus.net) Quit (Read error: Connection reset by peer)
[10:33] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[10:47] * TMM (~hp@185.5.121.201) has joined #ceph
[10:47] * Ryfi (~ryan@d207-81-7-44.bchsia.telus.net) has joined #ceph
[10:48] * MrFusion (~ryan@209-207-112-193.ip.van.radiant.net) has joined #ceph
[10:48] * Ryfi (~ryan@d207-81-7-44.bchsia.telus.net) Quit (Read error: Connection reset by peer)
[10:50] <ivve> Be-El: yeah, got it solved
[10:51] <ivve> now i created a new profile with disktype as faildomain
[10:52] <ivve> so
[10:52] <ivve> with this new setup
[10:52] <ivve> i should be able to have a profile where m=3?
[10:52] <ivve> ie k=7 m=3
[10:53] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:f1b4:a64b:aad6:ad14) has joined #ceph
[10:54] <ivve> nah, that didn't work
[10:55] <ivve> so since k+m=10 and i only have 3 domains, it falls short by 7
[10:55] <ivve> so my total is 3, max is k=2 m=1
[11:00] <ivve> got it now lol :)
[11:00] <ivve> putting it on osd i can decrease overhead by increasing k&m
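[Editor's note: the constraint ivve worked out above: crush places each of the k+m chunks in a distinct failure domain, so k+m cannot exceed the domain count (3 hosts allow at most k=2, m=1). With the failure domain dropped to osd, his 27 osds permit wider profiles; a hypothetical sketch:]
    ceph osd erasure-code-profile set isa_k7_m3 \
        k=7 m=3 plugin=isa ruleset-failure-domain=osd ruleset-root=sata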
[11:00] * thomnico (~thomnico@2a01:e35:8b41:120:f83a:7515:94ca:897f) Quit (Quit: Ex-Chat)
[11:03] * Mikko (~Mikko@dfs61tydyv9rycr3wjgty-3.rev.dnainternet.fi) Quit (Quit: This computer has gone to sleep)
[11:05] * Mikko (~Mikko@dfs61tycyvhj-6l56w35t-3.rev.dnainternet.fi) has joined #ceph
[11:09] * madkiss (~madkiss@178.115.129.28.wireless.dyn.drei.com) Quit (Quit: Leaving.)
[11:15] * srk_ (~Siva@2605:6000:ed04:ce00:4839:83e:8295:5844) has joined #ceph
[11:18] * sankarshan (~sankarsha@121.244.87.116) Quit (Quit: Are you sure you want to quit this channel (Cancel/Ok) ?)
[11:23] * srk_ (~Siva@2605:6000:ed04:ce00:4839:83e:8295:5844) Quit (Ping timeout: 480 seconds)
[11:26] * brians__ (~brian@80.111.114.175) has joined #ceph
[11:26] * brians (~brian@80.111.114.175) Quit (Read error: Connection reset by peer)
[11:40] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Ping timeout: 480 seconds)
[11:46] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[11:47] * adun153 (~adun153@130.105.147.50) has joined #ceph
[11:52] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Quit: Bye guys!)
[11:53] * IvanJobs_ (~ivanjobs@103.50.11.146) Quit (Ping timeout: 480 seconds)
[11:54] * bara (~bara@213.175.37.12) has joined #ceph
[11:57] * m8x (~user@182.150.27.112) has left #ceph
[11:57] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[11:57] * jamespag` is now known as jamespage
[11:57] * EinstCrazy (~EinstCraz@101.78.195.62) has joined #ceph
[11:58] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[12:02] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[12:03] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[12:05] * branto (~branto@178-253-140-72.3pp.slovanet.sk) has joined #ceph
[12:05] * EinstCrazy (~EinstCraz@101.78.195.62) Quit (Ping timeout: 480 seconds)
[12:07] * Mika_c (~Mika@122.146.93.152) Quit (Remote host closed the connection)
[12:14] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[12:15] * smf68 (~Ian2128@31.220.4.161) has joined #ceph
[12:18] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[12:27] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[12:28] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[12:28] * Nats (~natscogs@114.31.195.238) Quit (Read error: No route to host)
[12:32] * ashah (~ashah@121.244.87.117) has joined #ceph
[12:40] * i_m (~ivan.miro@88.206.123.152) Quit (Ping timeout: 480 seconds)
[12:41] * Nicho1as (~nicho1as@00022427.user.oftc.net) has joined #ceph
[12:45] * smf68 (~Ian2128@5AEAAA56W.tor-irc.dnsbl.oftc.net) Quit ()
[12:46] * chunmei (~chunmei@134.134.137.73) has joined #ceph
[12:47] * icey (~Chris@pool-71-162-145-72.phlapa.fios.verizon.net) has joined #ceph
[12:54] * kefu (~kefu@114.92.101.38) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[12:54] * kuku (~kuku@112.203.56.253) has joined #ceph
[12:57] * i_m (~ivan.miro@88.206.123.152) has joined #ceph
[12:58] * Racpatel (~Racpatel@2601:87:0:24af::313b) has joined #ceph
[13:04] * sankarshan (~sankarsha@121.244.87.116) has joined #ceph
[13:06] * bara (~bara@213.175.37.12) Quit (Ping timeout: 480 seconds)
[13:10] * chunmei (~chunmei@134.134.137.73) Quit (Ping timeout: 480 seconds)
[13:12] * evelu (~erwan@poo40-1-78-231-184-196.fbx.proxad.net) has joined #ceph
[13:15] * Racpatel (~Racpatel@2601:87:0:24af::313b) Quit (Ping timeout: 480 seconds)
[13:16] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[13:19] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[13:19] * dneary (~dneary@207.236.147.202) has joined #ceph
[13:26] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) has joined #ceph
[13:29] * i_m (~ivan.miro@88.206.123.152) Quit (Quit: Leaving.)
[13:29] * i_m (~ivan.miro@88.206.123.152) has joined #ceph
[13:29] * kuku (~kuku@112.203.56.253) Quit (Remote host closed the connection)
[13:36] * bene2 (~bene@2601:193:4101:f410:ea2a:eaff:fe08:3c7a) has joined #ceph
[13:36] * karnan (~karnan@121.244.87.117) Quit (Quit: Leaving)
[13:36] * karnan (~karnan@121.244.87.117) has joined #ceph
[13:37] * karnan (~karnan@121.244.87.117) Quit ()
[13:37] * karnan (~karnan@121.244.87.117) has joined #ceph
[13:38] * Hemanth (~hkumar_@121.244.87.117) has joined #ceph
[13:39] * karnan (~karnan@121.244.87.117) Quit ()
[13:39] * karnan (~karnan@121.244.87.117) has joined #ceph
[13:42] * georgem (~Adium@24.114.49.249) has joined #ceph
[13:42] * georgem (~Adium@24.114.49.249) Quit ()
[13:42] * georgem (~Adium@206.108.127.16) has joined #ceph
[13:43] * i_m (~ivan.miro@88.206.123.152) Quit (Ping timeout: 480 seconds)
[13:48] * Enikma (~chrisinaj@104.238.169.61) has joined #ceph
[13:49] * evelu (~erwan@poo40-1-78-231-184-196.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[14:01] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) Quit (Ping timeout: 480 seconds)
[14:06] * srk_ (~Siva@2605:6000:ed04:ce00:69c4:9e42:1e8:8120) has joined #ceph
[14:12] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) has joined #ceph
[14:12] * krypto (~krypto@106.51.24.252) Quit (Read error: Connection reset by peer)
[14:13] * krypto (~krypto@106.51.24.252) has joined #ceph
[14:15] * kefu (~kefu@114.92.101.38) has joined #ceph
[14:16] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[14:16] * ashah (~ashah@121.244.87.117) Quit (Remote host closed the connection)
[14:17] * ashah (~ashah@121.244.87.117) has joined #ceph
[14:18] * Enikma (~chrisinaj@104.238.169.61) Quit ()
[14:22] * srk_ (~Siva@2605:6000:ed04:ce00:69c4:9e42:1e8:8120) Quit (Ping timeout: 480 seconds)
[14:24] * kefu_ (~kefu@114.92.101.38) has joined #ceph
[14:25] * Racpatel (~Racpatel@2601:87:0:24af::313b) has joined #ceph
[14:26] * yuelongguang_ (~chatzilla@114.134.84.144) has joined #ceph
[14:29] * yuelongguang (~chatzilla@114.134.84.144) Quit (Ping timeout: 480 seconds)
[14:29] * yuelongguang_ is now known as yuelongguang
[14:31] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[14:31] * kefu (~kefu@114.92.101.38) Quit (Ping timeout: 480 seconds)
[14:35] * bviktor (~bviktor@213.16.80.50) Quit (Read error: Connection reset by peer)
[14:35] * cyphase_eviltwin (~cyphase@2601:640:c401:969a:468a:5bff:fe29:b5fd) Quit (Ping timeout: 480 seconds)
[14:36] * bviktor (~bviktor@213.16.80.50) has joined #ceph
[14:36] * cyphase (~cyphase@000134f2.user.oftc.net) has joined #ceph
[14:39] * vimal (~vikumar@121.244.87.116) Quit (Quit: Leaving)
[14:40] * vimal (~vikumar@121.244.87.116) has joined #ceph
[14:41] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[14:42] * vimal (~vikumar@121.244.87.116) Quit ()
[14:46] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Remote host closed the connection)
[14:46] <jiffe> I've removed an osd and now have a pg in this state: http://nsab.us/public/ceph, can I safely claim osd 29 as lost to fix this?
[14:46] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[14:47] * ashah (~ashah@121.244.87.117) Quit (Quit: Leaving)
[14:49] <TMM> jiffe, did you only just remove it?
[14:49] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[14:50] <TMM> jiffe, looks like something else went wrong before
[14:50] * i_m (~ivan.miro@31.173.100.99) has joined #ceph
[14:50] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[14:50] * georgem (~Adium@206.108.127.16) has joined #ceph
[14:51] <TMM> jiffe, I think you're going to need ignore_history_les=true to get this pg back online
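[Editor's note: the full name of the option TMM mentions is osd_find_best_info_ignore_history_les; a hedged sketch of applying it temporarily to the pg's primary (osd id 12 is a placeholder). Leaving it set is dangerous, so revert it once the pg peers.]
    ceph tell osd.12 injectargs '--osd-find-best-info-ignore-history-les=true'
    # wait for the pg to peer, then revert:
    ceph tell osd.12 injectargs '--osd-find-best-info-ignore-history-les=false'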
[14:53] * adun153 (~adun153@130.105.147.50) Quit (Remote host closed the connection)
[14:54] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Quit: Bye guys!)
[14:56] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Ping timeout: 480 seconds)
[14:59] * lobstar (~allenmelo@46.166.188.229) has joined #ceph
[15:00] * bara (~bara@213.175.37.12) has joined #ceph
[15:00] * mattbenjamin (~mbenjamin@76-206-42-50.lightspeed.livnmi.sbcglobal.net) has joined #ceph
[15:01] * mattbenjamin (~mbenjamin@76-206-42-50.lightspeed.livnmi.sbcglobal.net) Quit ()
[15:02] * sudocat1 (~dibarra@192.185.1.20) has joined #ceph
[15:03] * dneary (~dneary@207.236.147.202) Quit (Ping timeout: 480 seconds)
[15:09] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[15:09] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[15:20] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[15:22] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[15:26] * Jeffrey4l_ (~Jeffrey@110.252.71.112) Quit (Ping timeout: 480 seconds)
[15:26] * rraja (~rraja@121.244.87.117) has joined #ceph
[15:29] * lobstar (~allenmelo@5AEAAA6A1.tor-irc.dnsbl.oftc.net) Quit ()
[15:29] * T1w (~jens@node3.survey-it.dk) Quit (Remote host closed the connection)
[15:29] * vimal (~vikumar@114.143.165.227) has joined #ceph
[15:30] * bviktor (~bviktor@213.16.80.50) Quit (Ping timeout: 480 seconds)
[15:34] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[15:38] * bara (~bara@213.175.37.12) Quit (Ping timeout: 480 seconds)
[15:38] * dnunez (~dnunez@nat-pool-bos-u.redhat.com) has joined #ceph
[15:39] * dneary (~dneary@173.243.39.74) has joined #ceph
[15:42] * madkiss (~madkiss@178.115.129.28.wireless.dyn.drei.com) has joined #ceph
[15:42] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:43] * wes_dillingham (~wes_dilli@ipb986b5dc.g.packetsurge.net) has joined #ceph
[15:47] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[15:51] * gregmark (~Adium@68.87.42.115) has joined #ceph
[15:56] * kefu_ (~kefu@114.92.101.38) Quit (Ping timeout: 480 seconds)
[15:57] * wes_dillingham (~wes_dilli@ipb986b5dc.g.packetsurge.net) Quit (Quit: wes_dillingham)
[15:58] * mattbenjamin (~mbenjamin@76-206-42-50.lightspeed.livnmi.sbcglobal.net) has joined #ceph
[16:02] * wes_dillingham (~wes_dilli@65.112.8.131) has joined #ceph
[16:03] * chunmei (~chunmei@jfdmzpr04-ext.jf.intel.com) has joined #ceph
[16:04] * Hemanth (~hkumar_@121.244.87.117) Quit (Ping timeout: 480 seconds)
[16:04] * dneary (~dneary@173.243.39.74) Quit (Ping timeout: 480 seconds)
[16:04] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:04] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[16:05] * sudocat1 (~dibarra@192.185.1.20) Quit (Read error: Connection reset by peer)
[16:05] * squizzi (~squizzi@107.13.237.240) has joined #ceph
[16:12] * madkiss (~madkiss@178.115.129.28.wireless.dyn.drei.com) Quit (Quit: Leaving.)
[16:20] * mattbenjamin (~mbenjamin@76-206-42-50.lightspeed.livnmi.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[16:21] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[16:22] * kefu (~kefu@114.92.101.38) has joined #ceph
[16:25] * oliveiradan2 (~doliveira@67.214.238.80) has joined #ceph
[16:25] * swami1 (~swami@49.38.3.191) Quit (Quit: Leaving.)
[16:28] * dneary (~dneary@173.243.39.74) has joined #ceph
[16:36] * Nacer (~Nacer@LStLambert-656-1-8-107.w90-63.abo.wanadoo.fr) has joined #ceph
[16:37] * chunmei (~chunmei@jfdmzpr04-ext.jf.intel.com) Quit (Ping timeout: 480 seconds)
[16:37] * madkiss (~madkiss@178.115.129.28.wireless.dyn.drei.com) has joined #ceph
[16:37] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[16:38] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[16:40] * xarses (~xarses@64.124.158.32) has joined #ceph
[16:41] * madkiss (~madkiss@178.115.129.28.wireless.dyn.drei.com) Quit ()
[16:41] * xarses (~xarses@64.124.158.32) Quit (Remote host closed the connection)
[16:41] * kefu is now known as kefu|afk
[16:41] * xarses (~xarses@64.124.158.32) has joined #ceph
[16:42] * jowilkin (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) Quit (Ping timeout: 480 seconds)
[16:45] * valeech_ (~valeech@pool-96-247-203-33.clppva.fios.verizon.net) has joined #ceph
[16:46] * yanzheng (~zhyan@125.70.21.51) Quit (Quit: This computer has gone to sleep)
[16:47] * valeech (~valeech@pool-96-247-203-33.clppva.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[16:47] * valeech_ is now known as valeech
[16:49] * oliveiradan2 (~doliveira@67.214.238.80) Quit (Ping timeout: 480 seconds)
[16:49] * wes_dillingham (~wes_dilli@65.112.8.131) Quit (Quit: wes_dillingham)
[16:50] * vimal (~vikumar@114.143.165.227) Quit (Quit: Leaving)
[16:50] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[16:52] * kefu (~kefu@114.92.101.38) has joined #ceph
[16:54] * dneary (~dneary@173.243.39.74) Quit (Ping timeout: 480 seconds)
[16:54] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[16:57] * kefu|afk (~kefu@114.92.101.38) Quit (Ping timeout: 480 seconds)
[16:58] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[17:02] * kutija (~kutija@89.216.27.139) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[17:04] * vbellur (~vijay@71.234.224.255) Quit (Ping timeout: 480 seconds)
[17:06] * raphaelsc (~raphaelsc@2804:7f2:2080:7c8f:5e51:4fff:fe86:bbae) has joined #ceph
[17:06] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:06] * wushudoin (~wushudoin@2601:646:8281:cfd:2ab2:bdff:fe0b:a6ee) has joined #ceph
[17:06] * wushudoin (~wushudoin@2601:646:8281:cfd:2ab2:bdff:fe0b:a6ee) Quit ()
[17:06] * wushudoin (~wushudoin@2601:646:8281:cfd:2ab2:bdff:fe0b:a6ee) has joined #ceph
[17:07] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[17:08] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[17:09] * yanzheng (~zhyan@125.70.21.51) has joined #ceph
[17:11] * babilen (~babilen@babilen.user.oftc.net) has left #ceph
[17:14] * raphaelsc (~raphaelsc@2804:7f2:2080:7c8f:5e51:4fff:fe86:bbae) Quit (Ping timeout: 480 seconds)
[17:14] * raphaelsc (~raphaelsc@2804:7f2:2080:7c8f:5e51:4fff:fe86:bbae) has joined #ceph
[17:15] * Nacer (~Nacer@LStLambert-656-1-8-107.w90-63.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[17:18] * mattbenjamin (~mbenjamin@12.118.3.106) has joined #ceph
[17:22] * sankarshan (~sankarsha@121.244.87.116) Quit (Quit: Are you sure you want to quit this channel (Cancel/Ok) ?)
[17:22] * i_m (~ivan.miro@31.173.100.99) Quit (Read error: Connection reset by peer)
[17:22] * wes_dillingham (~wes_dilli@140.247.242.44) has joined #ceph
[17:23] * i_m (~ivan.miro@83.149.37.139) has joined #ceph
[17:26] * mykola (~Mikolaj@91.245.79.118) has joined #ceph
[17:27] * vbellur (~vijay@nat-pool-bos-t.redhat.com) has joined #ceph
[17:28] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Ping timeout: 480 seconds)
[17:29] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[17:30] * wjw-freebsd3 (~wjw@smtp.digiware.nl) has joined #ceph
[17:31] * vimal (~vikumar@114.143.165.227) has joined #ceph
[17:34] * TMM (~hp@185.5.121.201) Quit (Quit: Ex-Chat)
[17:34] * kefu (~kefu@114.92.101.38) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[17:36] * kefu (~kefu@114.92.101.38) has joined #ceph
[17:37] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[17:38] * sep (~sep@95.62-50-191.enivest.net) Quit (Ping timeout: 480 seconds)
[17:40] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[17:43] * kefu is now known as kefu|afk
[17:45] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[17:46] * stein_ (~stein@185.56.185.82) has joined #ceph
[17:46] * stein (~stein@185.56.185.82) Quit (Read error: Connection reset by peer)
[17:48] * i_m (~ivan.miro@83.149.37.139) Quit (Ping timeout: 480 seconds)
[17:49] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[17:49] * sep (~sep@95.62-50-191.enivest.net) has joined #ceph
[17:54] * yanzheng (~zhyan@125.70.21.51) Quit (Quit: This computer has gone to sleep)
[17:57] * ivve (~zed@c83-248-116-21.bredband.comhem.se) has joined #ceph
[18:01] * georgem1 (~Adium@206.108.127.16) has joined #ceph
[18:01] * georgem (~Adium@206.108.127.16) Quit (Read error: Connection reset by peer)
[18:02] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[18:06] * yanzheng (~zhyan@125.70.21.51) has joined #ceph
[18:06] * vimal (~vikumar@114.143.165.227) Quit (Quit: Leaving)
[18:08] * mattch (~mattch@w5430.see.ed.ac.uk) Quit (Ping timeout: 480 seconds)
[18:13] * i_m (~ivan.miro@83.149.37.139) has joined #ceph
[18:13] * bara (~bara@213.175.37.12) has joined #ceph
[18:15] * yanzheng (~zhyan@125.70.21.51) Quit (Quit: This computer has gone to sleep)
[18:21] * i_m (~ivan.miro@83.149.37.139) Quit (Ping timeout: 480 seconds)
[18:23] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Coyote finally caught me)
[18:23] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[18:25] * GeoTracer (~Geoffrey@41.77.153.99) Quit (Quit: Leaving)
[18:27] * GeoTracer (~Geoffrey@41.77.153.99) has joined #ceph
[18:28] * GeoTracer (~Geoffrey@41.77.153.99) Quit ()
[18:29] * GeoTracer (~Geoffrey@41.77.153.99) has joined #ceph
[18:30] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[18:36] * i_m (~ivan.miro@83.149.37.139) has joined #ceph
[18:39] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[18:39] * cathode (~cathode@50.232.215.114) has joined #ceph
[18:39] * doppelgrau (~doppelgra@132.252.235.172) Quit (Quit: Leaving.)
[18:41] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:42] * rraja (~rraja@121.244.87.117) Quit (Ping timeout: 480 seconds)
[18:42] * bara (~bara@213.175.37.12) Quit (Ping timeout: 480 seconds)
[18:45] * rendar (~I@host158-39-dynamic.57-82-r.retail.telecomitalia.it) has joined #ceph
[18:46] * i_m (~ivan.miro@83.149.37.139) Quit (Ping timeout: 480 seconds)
[18:48] * ade (~abradshaw@p4FF798C9.dip0.t-ipconnect.de) Quit (Quit: Too sexy for his shirt)
[18:53] * yanzheng (~zhyan@125.70.21.51) has joined #ceph
[18:53] * yanzheng (~zhyan@125.70.21.51) Quit ()
[18:56] * branto (~branto@178-253-140-72.3pp.slovanet.sk) Quit (Quit: Leaving.)
[18:56] * Arfed (~Jamana@45.32.239.246) has joined #ceph
[19:00] * rotbeard (~redbeard@2a02:908:df13:bb00:5069:8502:824e:76fb) has joined #ceph
[19:01] * TheSov (~TheSov@108-75-213-57.lightspeed.cicril.sbcglobal.net) has joined #ceph
[19:01] <TheSov> will jewel get parallel reads?
[19:04] * georgem (~Adium@206.108.127.16) has joined #ceph
[19:04] * georgem1 (~Adium@206.108.127.16) Quit (Read error: Connection reset by peer)
[19:04] * kefu|afk is now known as kefu
[19:04] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[19:07] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has left #ceph
[19:10] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[19:13] * smithfarm (~smithfarm@217.30.64.210) has joined #ceph
[19:14] <smithfarm> kefu: ping - quick scrub question
[19:14] <kefu> smithfarm, yes.
[19:14] <smithfarm> when a manual deep-scrub is run on a PG, does that cancel normal scrubs that have already been scheduled?
[19:14] <smithfarm> and reset timers?
[19:14] <smithfarm> for ordinary scrubs?
[19:14] <kefu> it does not cancel
[19:15] <smithfarm> that's what I thought
[19:15] <kefu> but timestamp is reset for sure.
[19:15] <kefu> so the next scrub will be postponed i think.
[19:16] <kefu> smithfarm, wait a sec.
[19:16] <kefu> lemme read the code.
[19:16] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[19:16] <smithfarm> scrubs and deep scrubs are scheduled by putting them into some kind of queue, right?
[19:17] <kefu> smithfarm right
[19:17] <smithfarm> OSDService::ScrubJob::ScrubJob
[19:18] <kefu> i am trying to understand if the same PG can be scheduled into the queue twice at the same time.
[19:19] <smithfarm> context is that "ceph osd deep-scrub" is being run manually on PGs at regular intervals
[19:19] * DanFoster (~Daniel@2a00:1ee0:3:1337:14cb:7d02:10a7:4a85) Quit (Quit: Leaving)
[19:19] <smithfarm> the question is, does this "reset the timer" for the regularly-scheduled scrubs and deep-scrubs (i.e. including the ones that might already be in the queue)
[19:22] <kefu> smithfarm. once a scrub finishes, the timestamp for a pg's scrub is reset to now().
[19:22] <smithfarm> makes sense
[19:23] <kefu> and both scrub and deep scrub set this timestamp.
[19:23] <smithfarm> and then when an already scheduled scrub comes up on the queue, it sees the timestamp and postpones?
[19:23] <smithfarm> for example, sequence of events:
[19:23] <smithfarm> 1. scrub gets scheduled automatically on the queue
[19:24] <smithfarm> 2. user runs "ceph osd deep-scrub"
[19:24] <smithfarm> 3. 5 minutes later, the previously scheduled scrub starts
[19:24] <smithfarm> in this case, the scrub will be postponed according to the latest timestamp?
[19:25] * chunmei (~chunmei@134.134.137.75) has joined #ceph
[19:26] * Arfed (~Jamana@61TAABIGP.tor-irc.dnsbl.oftc.net) Quit ()
[19:27] <kefu> smithfarm. i don't think so.
[19:27] <smithfarm> so it's possible that a PG could be scrubbed (automatically) 5 minutes after a manual scrub
[19:28] <kefu> yes.
[19:28] <smithfarm> and it might even be deep-scrubbed 5 minutes later, in the same way
[19:28] <kefu> b/c the stamp by which we order the scrub jobs is set when the scrub job is queued.
[19:28] <smithfarm> right
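[Editor's note: the commands and scheduling knobs under discussion, for reference; the interval values are the jewel-era defaults to the best of my knowledge, so treat them as approximate.]
    ceph pg deep-scrub <pgid>        # manual deep scrub of a single pg
    ceph osd deep-scrub <osd-id>     # deep-scrub all pgs on an osd
    # scheduling intervals (seconds):
    #   osd_scrub_min_interval  = 86400    (1 day)
    #   osd_scrub_max_interval  = 604800   (7 days)
    #   osd_deep_scrub_interval = 604800   (7 days)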
[19:29] <kefu> smithfarm, maybe we can continue the discussion tmr if anything else.
[19:29] <smithfarm> kefu: sure - you've helped a lot thanks
[19:29] <kefu> it's a little bit late in my tz.
[19:29] <smithfarm> get some sleep!
[19:29] <kefu> thanks
[19:29] <kefu> ttyl
[19:29] * kefu is now known as kefu|afk
[19:33] * i_m (~ivan.miro@31.173.100.14) has joined #ceph
[19:35] * rraja (~rraja@122.166.180.14) has joined #ceph
[19:38] * Hemanth (~hkumar_@103.228.221.141) has joined #ceph
[19:42] * Hemanth (~hkumar_@103.228.221.141) Quit ()
[19:42] * MrFusion (~ryan@209-207-112-193.ip.van.radiant.net) Quit (Quit: Leaving)
[19:42] * Hemanth (~hkumar_@103.228.221.141) has joined #ceph
[19:49] * nhm (~nhm@c-50-171-139-246.hsd1.mn.comcast.net) Quit (Read error: Connection reset by peer)
[19:50] * davidz (~davidz@2605:e000:1313:8003:35bc:2156:758e:fbad) has joined #ceph
[19:54] * Pulp (~Pulp@63-221-50-195.dyn.estpak.ee) has joined #ceph
[19:57] * Nicho1as (~nicho1as@00022427.user.oftc.net) Quit (Quit: A man from the Far East; using WeeChat 1.5)
[19:59] * dneary (~dneary@173.243.39.74) has joined #ceph
[19:59] * krypto (~krypto@106.51.24.252) Quit (Ping timeout: 480 seconds)
[20:00] * krypto (~krypto@G68-90-105-99.sbcis.sbc.com) has joined #ceph
[20:02] * rraja (~rraja@122.166.180.14) Quit (Quit: Leaving)
[20:04] * Mikko (~Mikko@dfs61tycyvhj-6l56w35t-3.rev.dnainternet.fi) Quit (Quit: This computer has gone to sleep)
[20:16] * gregmark (~Adium@68.87.42.115) has joined #ceph
[20:19] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[20:22] * rotbeard (~redbeard@2a02:908:df13:bb00:5069:8502:824e:76fb) Quit (Quit: Leaving)
[20:32] * wes_dillingham (~wes_dilli@140.247.242.44) Quit (Quit: wes_dillingham)
[20:34] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[20:38] * Mikko (~Mikko@109-108-30-118.bb.dnainternet.fi) has joined #ceph
[20:42] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[20:48] * Nexus (~cmorandin@boc06-4-78-216-15-170.fbx.proxad.net) has joined #ceph
[20:58] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) has joined #ceph
[20:59] * luigiman (~galaxyAbs@5.153.233.18) has joined #ceph
[20:59] * georgem1 (~Adium@206.108.127.16) has joined #ceph
[20:59] * georgem (~Adium@206.108.127.16) Quit (Read error: Connection reset by peer)
[21:04] * scg (~zscg@181.122.4.166) has joined #ceph
[21:06] * madkiss (~madkiss@178.115.129.28.wireless.dyn.drei.com) has joined #ceph
[21:08] * bene2 is now known as bene2_afk
[21:12] * Nexus (~cmorandin@boc06-4-78-216-15-170.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[21:18] * krypto (~krypto@G68-90-105-99.sbcis.sbc.com) Quit (Quit: Leaving)
[21:21] * haplo37 (~haplo37@107.190.37.90) has joined #ceph
[21:24] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[21:29] * luigiman (~galaxyAbs@61TAABIKP.tor-irc.dnsbl.oftc.net) Quit ()
[21:30] * karnan (~karnan@106.51.131.100) has joined #ceph
[21:35] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) Quit (Remote host closed the connection)
[21:38] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[21:39] <GooseYArd> hm, sometime between 10.0.5 and 10.2.2, builds with --disable-server wind up with a link error since ceph_json.o is missing from something that librados links against
[21:39] <GooseYArd> im digging through the makefile templates now, if this sounds familiar to anybody let me know
[21:40] * sudocat1 (~dibarra@192.185.1.20) has joined #ceph
[21:42] * salwasser (~Adium@72.246.3.14) Quit (Quit: Leaving.)
[21:42] * rendar (~I@host158-39-dynamic.57-82-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[21:42] * salwasser (~Adium@a23-79-238-10.deploy.static.akamaitechnologies.com) has joined #ceph
[21:44] * salwasser1 (~Adium@72.246.3.14) has joined #ceph
[21:44] * salwasser (~Adium@a23-79-238-10.deploy.static.akamaitechnologies.com) Quit (Read error: Connection reset by peer)
[21:44] * salwasser1 (~Adium@72.246.3.14) Quit ()
[21:46] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[21:47] * kefu|afk (~kefu@114.92.101.38) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[21:47] * kefu (~kefu@114.92.101.38) has joined #ceph
[21:50] * madkiss (~madkiss@178.115.129.28.wireless.dyn.drei.com) Quit (Quit: Leaving.)
[21:51] * xarses_ (~xarses@172.56.38.76) has joined #ceph
[21:51] * xarses_ (~xarses@172.56.38.76) Quit (Remote host closed the connection)
[21:52] * xarses_ (~xarses@172.56.38.76) has joined #ceph
[21:53] * xarses (~xarses@64.124.158.32) Quit (Ping timeout: 480 seconds)
[21:53] * georgem (~Adium@206.108.127.16) has joined #ceph
[21:53] * georgem1 (~Adium@206.108.127.16) Quit (Read error: Connection reset by peer)
[21:53] * i_m (~ivan.miro@31.173.100.14) Quit (Ping timeout: 480 seconds)
[21:54] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[21:54] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:f1b4:a64b:aad6:ad14) Quit (Ping timeout: 480 seconds)
[21:55] * kefu (~kefu@114.92.101.38) Quit (Ping timeout: 480 seconds)
[21:58] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[22:01] * xarses (~xarses@64.124.158.32) has joined #ceph
[22:06] * xarses_ (~xarses@172.56.38.76) Quit (Ping timeout: 480 seconds)
[22:09] * rendar (~I@host158-39-dynamic.57-82-r.retail.telecomitalia.it) has joined #ceph
[22:09] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[22:10] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[22:18] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[22:21] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:fdc8:c4d5:24bd:956f) has joined #ceph
[22:24] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[22:28] * Hemanth (~hkumar_@103.228.221.141) Quit (Ping timeout: 480 seconds)
[22:35] * georgem (~Adium@24.114.66.172) has joined #ceph
[22:41] * karnan (~karnan@106.51.131.100) Quit (Remote host closed the connection)
[22:41] * mykola (~Mikolaj@91.245.79.118) Quit (Quit: away)
[22:43] * georgem (~Adium@24.114.66.172) Quit (Quit: Leaving.)
[22:43] * georgem (~Adium@206.108.127.16) has joined #ceph
[22:46] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:fdc8:c4d5:24bd:956f) Quit (Ping timeout: 480 seconds)
[22:53] * danieagle (~Daniel@177.138.169.68) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[22:55] * jackhill (~jackhill@bog.hcoop.net) Quit (Server closed connection)
[22:55] * jackhill (~jackhill@bog.hcoop.net) has joined #ceph
[22:58] * vbellur (~vijay@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[22:59] * dneary (~dneary@173.243.39.74) Quit (Ping timeout: 480 seconds)
[23:01] * georgem1 (~Adium@24.114.66.172) has joined #ceph
[23:01] * georgem1 (~Adium@24.114.66.172) has left #ceph
[23:06] * georgem (~Adium@206.108.127.16) Quit (Ping timeout: 480 seconds)
[23:09] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[23:10] * ivve (~zed@c83-248-116-21.bredband.comhem.se) Quit (Ping timeout: 480 seconds)
[23:10] * smithfarm (~smithfarm@217.30.64.210) Quit (Ping timeout: 480 seconds)
[23:11] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[23:14] * bniver (~bniver@pool-98-110-180-234.bstnma.fios.verizon.net) has joined #ceph
[23:14] * badone (~badone@66.187.239.16) Quit (Quit: k?thxbyebyenow)
[23:19] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[23:27] * tgmedia (~tom@202.14.217.2) has left #ceph
[23:32] * sudocat1 (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[23:39] * raphaelsc (~raphaelsc@2804:7f2:2080:7c8f:5e51:4fff:fe86:bbae) Quit (Remote host closed the connection)
[23:39] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[23:40] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.