#ceph IRC Log

Index

IRC Log for 2014-06-06

Timestamps are in GMT/BST.

[0:00] <seapasulli> anyone see errors like this :: 2014-06-05 16:59:46.621236 7f9367d43700 1 -- 10.16.64.24:6832/18129 --> 10.16.64.5:0/2038654 -- osd_op_reply(5 rbd_header.107a25574e68a [call rbd.get_stripe_unit_count] v0'0 uv0 ondisk = -8 (Exec format error)) v6 -- ?+0 0x7f938d07ca80 con 0x7f938ad6a160
[0:01] * sprachgenerator (~sprachgen@173.150.196.199) Quit (Ping timeout: 480 seconds)
[0:01] <seapasulli> dcurtiss: I sent my request in this morning and received mine automatically, I believe within 20 minutes
[0:01] * ajazdzewski (~quassel@2001:4dd0:ff00:9081:9934:7752:90d8:e7b7) has joined #ceph
[0:01] <seapasulli> looks like one of my disks is having issues. Just wondering if anyone else has seen such an error
[0:05] * gregsfortytwo (~Adium@129.210.115.14) has joined #ceph
[0:05] * sarob (~sarob@2001:4998:effd:600:6c59:7bed:37d6:9e6) has joined #ceph
[0:09] * koleosfuscus (~koleosfus@adsl-84-226-68-69.adslplus.ch) has joined #ceph
[0:09] * Cube (~Cube@66-87-66-229.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[0:10] * Cube (~Cube@66.87.131.17) has joined #ceph
[0:13] * sarob (~sarob@2001:4998:effd:600:6c59:7bed:37d6:9e6) Quit (Ping timeout: 480 seconds)
[0:14] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[0:15] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) has joined #ceph
[0:16] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[0:17] * rendar (~I@host123-161-dynamic.1-87-r.retail.telecomitalia.it) Quit ()
[0:22] * ajazdzewski (~quassel@2001:4dd0:ff00:9081:9934:7752:90d8:e7b7) Quit (Remote host closed the connection)
[0:25] * joef1 (~Adium@2601:9:2a00:690:8089:eb40:ba18:575) has joined #ceph
[0:25] * joef (~Adium@2620:79:0:131:8186:713a:4d50:3ff7) Quit (Remote host closed the connection)
[0:25] * joef1 (~Adium@2601:9:2a00:690:8089:eb40:ba18:575) has left #ceph
[0:26] * erice (~erice@host-sb226.res.openband.net) has joined #ceph
[0:28] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[0:28] * JayJ (~jayj@157.130.21.226) has joined #ceph
[0:28] * Sysadmin88 (~IceChat77@94.4.22.173) has joined #ceph
[0:36] * JayJ (~jayj@157.130.21.226) Quit (Ping timeout: 480 seconds)
[0:37] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:37] * sprachgenerator (~sprachgen@173.150.161.97) has joined #ceph
[0:39] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[0:45] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[0:48] * sprachgenerator (~sprachgen@173.150.161.97) Quit (Read error: Operation timed out)
[0:52] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[0:53] * KaZeR (~kazer@64.201.252.132) Quit (Ping timeout: 480 seconds)
[0:56] * danieagle (~Daniel@191.250.136.251) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[0:57] * bandrus (~Adium@66-87-119-105.pools.spcsdns.net) Quit (Quit: Leaving.)
[0:59] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:00] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[1:00] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[1:02] * ircolle (~Adium@mobile-166-137-217-150.mycingular.net) Quit (Read error: Connection reset by peer)
[1:03] * sputnik13 (~sputnik13@207.8.121.241) Quit ()
[1:03] * KaZeR (~kazer@c-67-161-64-186.hsd1.ca.comcast.net) has joined #ceph
[1:03] * zack_dolby (~textual@pdf8519e7.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:04] * Vacum (~vovo@88.130.193.115) Quit (Ping timeout: 480 seconds)
[1:05] * koleosfuscus (~koleosfus@adsl-84-226-68-69.adslplus.ch) Quit (Quit: koleosfuscus)
[1:08] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) Quit (Remote host closed the connection)
[1:08] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:09] * jdmason (~jon@192.55.55.39) Quit (Remote host closed the connection)
[1:11] * steki (~steki@cable-94-189-165-169.dynamic.sbb.rs) Quit (Ping timeout: 480 seconds)
[1:11] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[1:11] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[1:14] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[1:15] <KaZeR> can someone please help me fix this? 63 stale+active+clean
[1:15] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[1:20] * stj (~s@tully.csail.mit.edu) Quit (Quit: I accidentally the whole program.)
[1:21] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[1:21] * The_Bishop (~bishop@f055213012.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[1:21] * doubleg (~doubleg@69.167.130.11) Quit (Quit: Lost terminal)
[1:23] * oms101 (~oms101@p20030057EA342800EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:25] * The_Bishop (~bishop@f055213012.adsl.alicedsl.de) has joined #ceph
[1:32] * oms101 (~oms101@p20030057EA2FD500EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[1:33] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) Quit (Quit: Leaving.)
[1:33] <dcurtiss> KaZeR: http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/#stuck-placement-groups
[1:33] <dcurtiss> "stale - The placement group status has not been updated by a ceph-osd, indicating that all nodes storing this placement group may be down."
[1:34] <dcurtiss> "For stuck stale placement groups, it is normally a matter of getting the right ceph-osd daemons running again."
[1:34] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:34] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[1:35] <KaZeR> dcurtiss, thanks, but i only have one OSD down, and my replica ratio is 3
[1:35] * analbeard (~shw@host86-155-192-138.range86-155.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[1:35] <KaZeR> actually the stale status disappeared 3 or 4 minutes ago.. it's now in backfilling..
[1:36] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) has joined #ceph
[1:36] <KaZeR> but i had it for a couple of hours before so it's really weird
[1:36] * huangjun (~kvirc@111.173.98.164) has joined #ceph
[1:37] * evl (~chatzilla@139.216.138.39) has joined #ceph
[1:37] <classicsnail> hi guys, I have 4 mds deployed, max_mds is set at 2, and I have 3 in standby, one in resolve
[1:37] * LeaChim (~LeaChim@host86-174-77-240.range86-174.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[1:37] <classicsnail> is there any way to force one of the standbys into a pairing mode?
[1:37] <classicsnail> this is on firefly
[1:38] * newbie|2 (~kvirc@111.173.98.164) has joined #ceph
[1:41] * gregsfortytwo (~Adium@129.210.115.14) Quit (Quit: Leaving.)
[1:41] * jdmason (~jon@192.55.55.39) has joined #ceph
[1:43] * dennis__ (~chatzilla@ip-95-223-87-252.unitymediagroup.de) has joined #ceph
[1:44] * huangjun (~kvirc@111.173.98.164) Quit (Ping timeout: 480 seconds)
[1:45] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[1:45] * gregsfortytwo (~Adium@129.210.115.14) has joined #ceph
[1:45] <dennis__> what are the disk space requirements for a ceph-deploy based cluster? I just tried to create a 3-node cluster but the journal files fill up the disks and I see no option to specify the journal size.
[1:46] * gregsfortytwo (~Adium@129.210.115.14) Quit ()
[1:47] <lurbs> You should be able to set the journal size in the ceph.conf local to where you're running ceph-deploy, using 'osd journal size'.
[1:47] <lurbs> https://ceph.com/docs/master/rados/configuration/osd-config-ref/
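For reference, a minimal sketch of that setting in the local ceph.conf (the 1024 MB value is only an example):

    [osd]
    osd journal size = 1024    ; journal size in MB, read when ceph-deploy creates the OSDs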
[1:47] * sprachgenerator (~sprachgen@173.150.212.105) has joined #ceph
[1:49] * gregphone (~gregphone@66-87-119-167.pools.spcsdns.net) has joined #ceph
[1:49] * davidzlap (~Adium@ip68-4-173-198.oc.oc.cox.net) Quit (Quit: Leaving.)
[1:49] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:50] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) Quit (Quit: Leaving.)
[1:51] * erice (~erice@host-sb226.res.openband.net) Quit (Ping timeout: 480 seconds)
[1:53] * gregphone (~gregphone@66-87-119-167.pools.spcsdns.net) Quit ()
[1:53] * gregphone (~gregphone@66.87.119.167) has joined #ceph
[1:56] * stj (~stj@2001:470:8b2d:bb8:21d:9ff:fe29:8a6a) has joined #ceph
[1:56] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[1:57] * spekzor (spekzor@d.clients.kiwiirc.com) has joined #ceph
[1:57] * nwat (~textual@eduroam-248-28.ucsc.edu) has joined #ceph
[1:57] <spekzor> hi
[1:58] <spekzor> we've just upgraded to 80.1 (came from 0.72) on a 3 node cluster and after restarting the osds
[1:58] <spekzor> 2% of placement groups were degraded
[1:58] <spekzor> we have 18 osds
[1:58] <spekzor> is this normal?
[1:59] <spekzor> restart took only a few seconds
[1:59] <classicsnail> it was for every update I've done; there was some repair required on all of them
[2:00] <classicsnail> when I then set the optimal tunables (don't do this just yet if you're using the kernel cephfs driver on linux 3.14 or below), it resynced all the pgs again
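For reference, a sketch of checking and switching CRUSH tunable profiles (switching is what triggers the resync classicsnail describes, so it is best done deliberately):

    ceph osd crush show-tunables      # what the cluster currently uses
    ceph osd crush tunables optimal   # only once all clients (including kernels) are new enough
    ceph osd crush tunables legacy    # roll back if older clients still need to connect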
[2:00] * sjusthm (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[2:00] <spekzor> we had perfect performance before the upgrade but during recovery we have high io wait on vms
[2:00] <spekzor> client io is minimal
[2:00] <spekzor> 100 iops
[2:01] * davidzlap (~Adium@ip68-4-173-198.oc.oc.cox.net) has joined #ceph
[2:01] <spekzor> som osds disks max out (iostat)
[2:01] * yanfali_lap (~yanfali@75-101-14-52.static.sonic.net) Quit (Quit: yanfali_lap)
[2:01] <spekzor> is this normal? and can i expect the performance to recover after recovery is complete?
[2:02] <classicsnail> I've seen it in certain situations, where I have a lot of clients requesting a lot of small files while a recovery is occurring
[2:03] <classicsnail> um, http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-July/002624.html for example
[2:04] * stj (~stj@2001:470:8b2d:bb8:21d:9ff:fe29:8a6a) Quit (Quit: leaving.)
[2:04] * stj (~stj@2001:470:8b2d:bb8:21d:9ff:fe29:8a6a) has joined #ceph
[2:05] <spekzor> 21169 GB used, 45866 GB / 67035 GB avail
[2:05] <spekzor> 9251/5494286 objects degraded (0.168%)
[2:05] <spekzor> 253 active+recovery_wait
[2:05] <spekzor> 2 active+recovery_wait+degraded+remapped
[2:05] <spekzor> 421 active+clean
[2:05] <spekzor> 28 active+recovering
[2:05] <spekzor> recovery io 19872 kB/s, 5 objects/s
[2:05] <spekzor> client io 1419 B/s rd, 13939 kB/s wr, 84 op/s
[2:05] <spekzor> sorry for that
[2:05] <spekzor> recovery is taking ages
[2:06] <classicsnail> for me, the final few thousand objects were much slower than the initial few tens of millions
[2:06] <spekzor> hmm
[2:07] <spekzor> if i restart an osd it says lots of objects degraded (when it's down) and when it's up again it instantly recovers to 0.168%
[2:07] * brytown (~Adium@2620:79:0:8204:f9b7:1ce1:5e4e:7334) has joined #ceph
[2:08] <spekzor> 133 requests are blocked > 32 sec;
[2:08] * brytown (~Adium@2620:79:0:8204:f9b7:1ce1:5e4e:7334) has left #ceph
[2:08] * xarses (~andreww@12.164.168.117) Quit (Read error: Operation timed out)
[2:09] <KaZeR> spekzor, facing the exact same issue here. my VMs are almost dead
[2:10] <KaZeR> recovery io 210 MB/s, 38 objects/s
[2:10] <KaZeR> client io 976 kB/s rd, 17173 kB/s wr, 205 op/s
[2:10] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[2:10] <spekzor> and it's 2:00 in the night so i hope that everything will be ok after recovery or else the phone will be hot
[2:10] <KaZeR> i've tried to tweak some recovery settings using ceph osd tell but no go
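For reference, the runtime knobs usually tweaked here are the recovery/backfill throttles via injectargs; a sketch with example values only (lowering them favours client IO, raising them favours recovery speed):

    ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1 --osd_recovery_op_priority 1'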
[2:10] <spekzor> me too
[2:10] <Kupo1> Anyone know if it's possible to copy child snapshots for backup purposes?
[2:11] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[2:11] <spekzor> did you also upgrade?
[2:11] * gregphone (~gregphone@66.87.119.167) Quit (Quit: Rooms • iPhone IRC Client • http://www.roomsapp.mobi)
[2:12] * yanfali_lap (~yanfali@75-101-14-52.static.sonic.net) has joined #ceph
[2:13] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[2:13] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[2:15] <KaZeR> spekzor, no, i removed two OSDs with the intent of moving their journals to SSD
[2:15] <spekzor> how many osds total?
[2:15] <spekzor> how many nodes?
[2:16] <KaZeR> right now 12 OSDs on 5 nodes
[2:16] <spekzor> right
[2:16] <spekzor> funny thing is that we've added osds before, when we went from 9 to 12, but things weren't slow then
[2:17] <KaZeR> not even during the backfilling ?
[2:18] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) Quit (Quit: Leaving.)
[2:19] <spekzor> slower but not as slow as now
[2:19] <KaZeR> interesting
[2:20] <spekzor> funny thing is that we have: recovery 9215/5494370 objects degraded (0.168%); you'd expect that (with an object size of 4 MB) roughly 36 GB shouldn't take that long
[2:21] <KaZeR> i think (but i'm not sure) that it could be related to new IOs on your cluster
[2:22] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) has joined #ceph
[2:23] <spekzor> at least 2 out of 9 hard disks in each host are at 99% utilisation
[2:24] <spekzor> why would pgs get degraded after an upgrade
[2:24] <spekzor> we still use legacy tinables
[2:24] <spekzor> tunables
[2:24] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) Quit (Ping timeout: 480 seconds)
[2:26] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[2:26] * sarob_ (~sarob@2001:4998:effd:600:c841:bcb8:1f3e:2738) has joined #ceph
[2:29] * zack_dolby (~textual@em111-188-196-71.pool.e-mobile.ne.jp) has joined #ceph
[2:31] * dpippenger (~Adium@66-192-9-78.static.twtelecom.net) has joined #ceph
[2:31] * rturk is now known as rturk|afk
[2:31] <spekzor> root@ceph01:~# grep 'slow request' /var/log/ceph/ceph.log | awk '{print $3}' | sort | uniq -c | sort -n
[2:31] <spekzor> 2 osd.15
[2:31] <spekzor> 2 osd.2
[2:31] <spekzor> 12 osd.4
[2:31] <spekzor> 31 osd.5
[2:31] <spekzor> 177 osd.1
[2:31] <spekzor> 361 osd.17
[2:31] <spekzor> 570 osd.13
[2:31] <spekzor> 1329 osd.7
[2:31] <spekzor> 1574 osd.8
[2:32] <dennis__> thanks, setting osd journal size did the trick
[2:33] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[2:33] <dennis__> now the cluster is running...but all pgs are shown as incomplete?
[2:33] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[2:34] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[2:34] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[2:34] * bandrus (~Adium@66.87.119.3) has joined #ceph
[2:34] * sarob_ (~sarob@2001:4998:effd:600:c841:bcb8:1f3e:2738) Quit (Ping timeout: 480 seconds)
[2:35] <sage> dennis__: this is a brand new cluster?
[2:35] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:36] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[2:38] <dennis__> yes, but I noticed that ntpd wasn't running so that might have something to do with it. I called ntpdate and started ntpd on all nodes and then restarted the mons one after the other but now it still says "clock skew detected on mon.ceph2"
[2:38] <spekzor> what could a cluster be doing if it's maxing out disks and taking ages to recover just 9000 objects while pulling recovery speeds not over 2 MB/sec
[2:38] <spekzor> any help really appreciated, i'm a bit stressed out here
[2:39] * sprachgenerator (~sprachgen@173.150.212.105) Quit (Quit: sprachgenerator)
[2:40] <sage> spekzor: small objects?
[2:40] <lurbs> dennis: It can take quite some time for a clock skew state to clear, even if it's now in sync. I've resorted to restarting the monitor(s) in order to clear it.
[2:41] <spekzor> eh regular object size i guess
[2:41] <spekzor> 2 pgs degraded; 29 pgs recovering; 272 pgs recovery_wait; 301 pgs stuck unclean; 45 requests are blocked > 32 sec; recovery 9158/5494415 objects degraded (0.167%);
[2:41] <spekzor> 21170 GB used, 45865 GB / 67035 GB avail
[2:42] <dennis__> by restart do you mean reboot? i used ntpdate to force the correct time because ntpd takes a while to sync up but that doesn't seem to help.
[2:43] <spekzor> btw sage, i've added ceph to our university (of applied science, not as fancy as the real shit) curriculum and students love it
[2:43] <dennis__> is there a way to get the times that the mons report?
[2:43] <lurbs> dennis: Nope, just the service, after running ntpdate to force set the time, and setting up ntpd to keep it in sync.
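A sketch of that sequence on an Ubuntu node (the NTP server is a placeholder; whether the monitor restart is sysvinit- or upstart-style depends on how the node was deployed):

    sudo service ntp stop
    sudo ntpdate pool.ntp.org          # force-set the clock
    sudo service ntp start             # keep it in sync from now on
    sudo service ceph restart mon      # sysvinit; on upstart: sudo restart ceph-mon-all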
[2:44] <sage> spekzor: nice
[2:44] <sage> spekzor: it's making slow progress, or no progress at all?
[2:44] <spekzor> very slow but still
[2:44] <lurbs> We see it on reboot of the monitor nodes from time to time - if the monitor service comes up before the NTP daemon has things sorted correctly.
[2:44] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[2:45] <spekzor> it's doing 10 objects a minute..
[2:45] <sage> what version?
[2:45] <spekzor> recovery op prio is 2, client is at default
[2:45] <spekzor> 80.1 on 12.04 lts
[2:45] <spekzor> 3.11.0-23-generic
[2:46] <sage> can you pastebin ceph pg dump | grep recovering ?
[2:46] <spekzor> sure, hang on
[2:47] <spekzor> http://pastebin.com/2kF9Bh6v
[2:47] <dennis__> i did an ntpdate call again and that did the trick apparently. no more skew reported.
[2:47] <dennis__> however the cluster still reports all 192 pg's as incomplete
[2:48] * dpippenger (~Adium@66-192-9-78.static.twtelecom.net) has left #ceph
[2:49] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[2:49] <dennis__> health detail tells me that reducing rbd min_size, data min_size or metadata min_size from 2 might help
[2:49] <dennis__> not sure what exactly that is supposed to mean though
[2:50] * ircolle (~Adium@mobile-166-137-217-150.mycingular.net) has joined #ceph
[2:51] <spekzor> sage, you also need a pg query?
[2:53] * hitsumabushi_ (hitsumabus@b.clients.kiwiirc.com) has joined #ceph
[2:54] <spekzor> or a couple of beers, or apple pie, you name it
[2:58] <dennis__> is it normal that when i look in the ceph.log i see something like "pgmap v69" followed by "pgmap v70", v71, v72, etc. even though the cluster is completely fresh and empty and nothing is really happening?
[2:58] * gregphone (~gregphone@66-87-119-167.pools.spcsdns.net) has joined #ceph
[2:58] * gregphone (~gregphone@66-87-119-167.pools.spcsdns.net) Quit ()
[2:59] <spekzor> dennis__ i guess that's just the version increment of your pg map that gets updated now and then
[2:59] * davidzlap (~Adium@ip68-4-173-198.oc.oc.cox.net) Quit (Quit: Leaving.)
[3:00] * davidzlap (~Adium@ip68-4-173-198.oc.oc.cox.net) has joined #ceph
[3:00] <spekzor> mine is at pgmap v11952384:
[3:00] <spekzor> hope it's an int64 :)
[3:01] <dennis__> ok, i guessed it had something to do with versioning but i wasn't sure if it is only supposed to grow when something special in the cluster happens
[3:01] <sage> spekzor: | grep recovering (not recover) ?
[3:02] <dennis__> hm, the clock skew is back. it almost seems that the timing in these VMs is way off
[3:02] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:02] <spekzor> sorry: http://pastebin.com/bnb3PV8e
[3:03] <sage> spekzor: we did see a problem where the recovery throttling stalled out incorrectly, but not for several weeks (and it made no sense). you might try identifying which node(s) are common to the pgs that are recovering and restarting one or more of them to see if that gets things moving along more quickly
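A rough sketch of how one might follow that suggestion (osd.8 is a hypothetical example; the up/acting sets are the bracketed OSD lists in the pg dump output):

    ceph pg dump | grep recovering        # inspect the up/acting sets for OSDs they share
    sudo service ceph restart osd.8       # restart a common OSD (upstart: restart ceph-osd id=8)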
[3:04] <spekzor> i cranked up recovery io priority to 20, it's a bit faster now.
[3:04] <spekzor> i restarted all osds in sequence on all nodes. no cigar
[3:05] <spekzor> is it normal for a cluster to have a couple of percent in a degraded state after an upgrade (still using legacy tunables)
[3:05] * Vacum (~vovo@i59F79A68.versanet.de) has joined #ceph
[3:06] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[3:09] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[3:09] * narb (~Jeff@38.99.52.10) Quit (Quit: narb)
[3:10] <dennis__> so what can i do to get the pgs out of their incomplete state?
[3:10] * joef (~Adium@2601:9:2a00:690:a5a2:d2f7:1cbc:d98f) has joined #ceph
[3:10] * lucas1 (~Thunderbi@222.240.148.130) has joined #ceph
[3:10] <dennis__> the cluster reports 3 mons with quorum and 3 osds as up and in
[3:11] <dennis__> which looks healthy to me
[3:11] <spekzor> dennis__ what is your replica count?
[3:12] <spekzor> ceph osd dump | grep size (look at the digit after size)
[3:12] * joef (~Adium@2601:9:2a00:690:a5a2:d2f7:1cbc:d98f) has left #ceph
[3:12] <dennis__> replicated size 3 min_size 2
[3:13] <spekzor> and what does ceph -s say?
[3:14] <dennis__> cluster 3c93823a-8d7e-4fe1-b009-8c54586f79d3
[3:14] <dennis__> health HEALTH_WARN 192 pgs incomplete; 192 pgs stuck inactive; 192 pgs stuck unclean
[3:14] <dennis__> monmap e2: 3 mons at {ceph1=192.168.100.110:6789/0,ceph2=192.168.100.111:6789/0,ceph3=192.168.100.112:6789/0}, election epoch 16, quorum 0,1,2 ceph1,ceph2,ceph3
[3:14] <dennis__> osdmap e10: 3 osds: 3 up, 3 in
[3:14] <dennis__> pgmap v98: 192 pgs, 3 pools, 0 bytes data, 0 objects
[3:14] <dennis__> 4284 MB used, 8496 MB / 13465 MB avail
[3:14] <dennis__> 192 incomplete
[3:15] <spekzor> try querying a pg, first do ceph pg dump | grep incomplete and then take the first column (like 4.4f) and query it with ceph pg [pgid] query | less
[3:15] <spekzor> look at the bottom to see if there are clues
[3:18] <dennis__> hm, recovery_state->name->"Started\/Primary\/Peering"
[3:18] <spekzor> sage, any suggestions? can i go to bed and sleep or should i be concerned
[3:19] <dennis__> and recovery_state->name->"Started"
[3:20] <dennis__> other than that, the output doesn't show any obvious clues
[3:21] <dennis__> peer_info is an empty array. not sure if that is correct or not.
[3:23] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[3:23] * ircolle (~Adium@mobile-166-137-217-150.mycingular.net) Quit (Read error: Connection reset by peer)
[3:24] * yanfali_lap (~yanfali@75-101-14-52.static.sonic.net) Quit (Quit: yanfali_lap)
[3:25] * spekzor (spekzor@d.clients.kiwiirc.com) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[3:26] * ircolle (~Adium@mobile-166-137-217-150.mycingular.net) has joined #ceph
[3:27] * bandrus1 (~Adium@66-87-119-42.pools.spcsdns.net) has joined #ceph
[3:27] * sarob (~sarob@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) has joined #ceph
[3:27] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[3:31] * ircolle (~Adium@mobile-166-137-217-150.mycingular.net) Quit ()
[3:31] <dmick> dennis__: what do the 'acting' and 'up' sets look like for a broken pg
[3:31] * yanfali_lap (~yanfali@75-101-14-52.static.sonic.net) has joined #ceph
[3:32] * bandrus (~Adium@66.87.119.3) Quit (Ping timeout: 480 seconds)
[3:33] <dennis__> acting and up is 0 for all pgs
[3:33] <dmick> so that means crush is giving you only one OSD
[3:33] <dmick> but you've asked for 3-way replication
[3:33] <dmick> so it's a crush problem
[3:33] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[3:34] <dennis__> that makes sense
[3:35] * bandrus1 (~Adium@66-87-119-42.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[3:35] * sarob (~sarob@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[3:36] * reed (~reed@75-101-54-131.dsl.static.sonic.net) Quit (Quit: Ex-Chat)
[3:37] <dennis__> ceph osd tree looks good though. three hosts with one osd each and all marked as up.
[3:41] * yanfali_lap (~yanfali@75-101-14-52.static.sonic.net) Quit (Quit: yanfali_lap)
[3:42] * spekzor (spekzor@d.clients.kiwiirc.com) has joined #ceph
[3:42] <spekzor> sage, i dropped out. did you say anything after my last line?
[3:43] <spekzor> i'm still hoping you or somebody can help me out.
[3:43] <spekzor> root@ceph01:~# grep 'slow request' /var/log/ceph/ceph.log | awk '{print $3}' | sort | uniq -c | sort
[3:43] <spekzor> 1282 osd.12
[3:43] <spekzor> 12 osd.4
[3:43] <spekzor> 1337 osd.17
[3:43] <spekzor> 1450 osd.13
[3:43] <spekzor> 2323 osd.7
[3:43] <spekzor> 252 osd.5
[3:43] <spekzor> 2925 osd.8
[3:43] <spekzor> 2 osd.15
[3:43] <spekzor> 2 osd.2
[3:43] <spekzor> 418 osd.1
[3:44] <spekzor> sorry for that
[3:44] <dennis__> the crushmap looks ok although it says "hash 0" and "weight 0.000" everywhere.
[3:50] <dmick> dennis__: can you pastebin the decompiled map
[3:51] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[3:51] <dmick> dennis__: does ceph osd dump show all osds 'up in weight 1'?
[3:52] <spekzor> i still have a cluster with only 4127 out of 5494553 objects that are recovering but it takes a very long time. sometimes it goes a bit faster but then stops again. most of the disks are maxed out and the client is only doing 200 iops. just upgraded to 80.1
[3:54] <dennis__> http://pastebin.com/SLuLNHQx
[3:55] <dennis__> yep, all 'up in weight 1'
[3:55] <dmick> weight 0 in the crushmap strikes me as a problem
[3:56] * vbellur (~vijay@122.167.205.178) Quit (Ping timeout: 480 seconds)
[3:58] <dmick> can you try this command:
[3:58] <dmick> df -P -k $osd_data/. | tail -1 | awk '{ print sprintf("%.2f",$2/1073741824) }'
[3:58] <dmick> replacing $osd_data with one of your osd's data directories
[3:58] <dmick> (that, from the init script, is what should have set your initial osd weights)
[4:00] <dennis__> 0.00
[4:01] <newbie|2> [61745.758334] libceph: mon0 192.168.2.102:6789 feature set mismatch, my 4a042aca < server's 504a042aca, missing 5000000000
[4:01] * newbie|2 is now known as huangjun
[4:01] <huangjun> what should i set to disable this?
[4:01] <dmick> that's a problem, then. Perhaps you could figure out why that's saying 0
[4:03] <dmick> are they very small drives?
[4:03] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Read error: No route to host)
[4:04] * stewiem2000 (~stewiem20@195.10.250.233) Quit (Read error: Operation timed out)
[4:04] <dmick> (that's calculating terabytes, and should probably have a check for 0)
[4:04] <dmick> a quick fix would be to weight them all 1.00 for now
[4:04] <dmick> (it's a relative number)
[4:04] <dennis__> yes, very small. the plan was to kick the cluster around a little bit to test how it behaves so i don't really need a lot of space but quick scrubbing/recovery times would be nice.
[4:05] <huangjun> ceph server version is 0.80 on centos 6.4, and kernel client on ubuntu 14.04, but cannot mount
[4:05] <huangjun> says feature not supported
[4:05] <dmick> ok. I'll file an issue on "zero is bad mmk"
[4:05] <dennis__> ok, what is the best way to reset the weight manually to 1.0?
[4:05] <dmick> http://ceph.com/docs/master/rados/operations/crush-map/#editing-a-crush-map
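As a quicker alternative to decompiling and editing the map by hand, the weights can also be reset straight from the CLI; a sketch assuming three OSDs numbered 0-2:

    ceph osd crush reweight osd.0 1.0
    ceph osd crush reweight osd.1 1.0
    ceph osd crush reweight osd.2 1.0
    ceph osd tree     # weights should now read 1.00 and the PGs should start peering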
[4:08] <dennis__> perfect, now it shows the pgs as peering and the number shrinks steadily
[4:08] <dennis__> HEALTH_OK
[4:09] * cephalopod (~chris@194.28.69.111.static.snap.net.nz) has joined #ceph
[4:09] <dennis__> thanks for the help! not only did that fix my problem but i also learned something in the process :)
[4:10] <dmick> yeah. and you id'ed a bug for us, so thank you
[4:13] <huangjun> dmick: how to resolve this, [61745.758334] libceph: mon0 192.168.2.102:6789 feature set mismatch, my 4a042aca < server's 504a042aca, missing 5000000000
[4:13] <huangjun> [61745.765381] libceph: mon0 192.168.2.102:6789 socket error on read
[4:13] <dmick> dennis__: http://tracker.ceph.com/issues/8551
[4:14] * zhaochao (~zhaochao@124.205.245.26) has joined #ceph
[4:14] * stewiem2000 (~stewiem20@195.10.250.233) has joined #ceph
[4:15] <dennis__> cool, thanks
[4:15] <dmick> huangjun: google can help you a lot with things like this. The issue is that the kernel client is lagging the cluster's features (which is not uncommon)
[4:16] <dmick> the fix will be a later kernel, if there is one
[4:16] <dmick> (in order to get the later Ceph kernel modules)
[4:17] * sarob (~sarob@2001:4998:effd:600:9043:ba77:8e17:25e0) has joined #ceph
[4:17] <huangjun> one way is to get the newest stable linux kernel, another is to see if the server end can disable these features
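If the mismatch really does come from newer CRUSH tunables (an assumption here; the feature bits aren't decoded in this log), the server-side workaround in this era was to fall back to an older tunables profile, giving up the placement improvements they bring:

    ceph osd crush show-tunables      # what the cluster currently requires of clients
    ceph osd crush tunables legacy    # or 'bobtail'; older kernel clients can then connect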
[4:18] * erice (~erice@50.240.86.181) has joined #ceph
[4:19] * sarob (~sarob@2001:4998:effd:600:9043:ba77:8e17:25e0) Quit (Remote host closed the connection)
[4:19] * sarob (~sarob@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) has joined #ceph
[4:22] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) Quit (Quit: leaving)
[4:22] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[4:23] * yguang11 (~yguang11@2406:2000:ef96:e:d0d1:dd97:7748:70c0) has joined #ceph
[4:23] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit (Quit: Computer has gone to sleep.)
[4:24] <cephalopod> Could anyone help me figure out what would cause rbd: add failed: (34) Numerical result out of range when trying to rbd map?
[4:24] * lucas1 (~Thunderbi@222.240.148.130) Quit (Ping timeout: 480 seconds)
[4:25] <cephalopod> I hit that when setting up a new cluster, and thought maybe I did something wrong, so started from scratch again following the quick start guide, but still hit the same problem
[4:26] <dmick> there are a lot of google results for that string cephalopod
[4:26] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) has joined #ceph
[4:26] * zhaochao (~zhaochao@124.205.245.26) Quit (Remote host closed the connection)
[4:27] * sarob (~sarob@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[4:27] <cephalopod> many of which I have looked at. I didn't think it was so obvious, but that response will get me to look again
[4:27] <dmick> ok, just checking if you've looked
[4:28] <dmick> I'm looking too
[4:28] <dmick> and I agree they're not very satisfying
[4:29] <dmick> anything in the kernel log?
[4:29] <dmick> (which kernel? what's the format of the rbd image you're trying to map?)
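A sketch of how one might gather what dmick is asking for (the pool and image names are placeholders):

    uname -r                  # kernel version
    rbd info rbd/myimage      # image format and features
    rbd map rbd/myimage
    dmesg | tail              # any libceph/rbd errors from the map attempt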
[4:29] <cephalopod> I tried format two, which just got me "operation not supported"
[4:30] <cephalopod> when I try to map, dmesg just shows mon0 10.30.83.29:6789 session established
[4:30] <cephalopod> I've tried on a couple different kernels, arch/fedora etc
[4:30] <cephalopod> the one I'm looking at right now is 3.11.10-301.fc20.x86_64
[4:31] <janos_> that's old for f20
[4:32] <cephalopod> ah, yeah, default install from pxe, one sec
[4:37] <cephalopod> ok, just double checked, and the same thing for 3.14.5-200.fc20.x86_64 and 3.14.4-1-ARCH
[4:39] * stewiem2000 (~stewiem20@195.10.250.233) Quit (Read error: Connection timed out)
[4:39] * stewiem2000 (~stewiem20@195.10.250.233) has joined #ceph
[4:39] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) Quit (Quit: Leaving.)
[4:45] <cephalopod> ceph version 0.80.1 (a38fe1169b6d2ac98b427334c12d7cf81f809b74)
[4:46] * erice (~erice@50.240.86.181) Quit (Ping timeout: 480 seconds)
[4:51] * dennis__ (~chatzilla@ip-95-223-87-252.unitymediagroup.de) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 26.0/20131209182739])
[4:57] * nwat (~textual@eduroam-248-28.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[4:57] * dereky (~derek@proxy00.umiacs.umd.edu) has joined #ceph
[5:00] * yguang11 (~yguang11@2406:2000:ef96:e:d0d1:dd97:7748:70c0) Quit (Remote host closed the connection)
[5:00] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[5:02] * vbellur (~vijay@209.132.188.8) has joined #ceph
[5:05] * davidzlap (~Adium@ip68-4-173-198.oc.oc.cox.net) Quit (Quit: Leaving.)
[5:06] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) has joined #ceph
[5:07] * shang (~ShangWu@ipvpn110138.netvigator.com) has joined #ceph
[5:15] * scuttlemonkey (~scuttlemo@72.11.211.243) has joined #ceph
[5:15] * ChanServ sets mode +o scuttlemonkey
[5:19] * lucas1 (~Thunderbi@222.240.148.154) has joined #ceph
[5:25] * lucas1 (~Thunderbi@222.240.148.154) Quit (Quit: lucas1)
[5:36] * Vacum_ (~vovo@88.130.213.129) has joined #ceph
[5:37] <spekzor> can anyone help me? i have a cluster upgraded to 80.1 a few hours ago, but the cluster just can't seem to finish the last few degraded objects
[5:40] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[5:41] * vbellur (~vijay@209.132.188.8) Quit (Ping timeout: 480 seconds)
[5:42] * lucas1 (~Thunderbi@218.76.25.66) has joined #ceph
[5:43] * Vacum (~vovo@i59F79A68.versanet.de) Quit (Ping timeout: 480 seconds)
[5:43] * lucas1 (~Thunderbi@218.76.25.66) Quit ()
[5:46] <spekzor> here is the health detail list http://pastebin.com/ASgSB4DR
[5:49] * zhaochao (~zhaochao@124.205.245.26) has joined #ceph
[5:51] * vbellur (~vijay@nat-pool-blr-t.redhat.com) has joined #ceph
[5:55] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) has joined #ceph
[5:57] * Cube (~Cube@66.87.131.17) Quit (Quit: Leaving.)
[5:58] * hitsumabushi_ (hitsumabus@b.clients.kiwiirc.com) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[6:00] * Muhlemmer (~kvirc@cable-90-50.zeelandnet.nl) has joined #ceph
[6:01] * haomaiwang (~haomaiwan@124.161.78.105) has joined #ceph
[6:02] * joef (~Adium@c-67-188-220-98.hsd1.ca.comcast.net) has joined #ceph
[6:02] * joef (~Adium@c-67-188-220-98.hsd1.ca.comcast.net) Quit ()
[6:08] * stewiem2000 (~stewiem20@195.10.250.233) Quit (Read error: Connection timed out)
[6:09] * stewiem2000 (~stewiem20@195.10.250.233) has joined #ceph
[6:16] * scuttlemonkey (~scuttlemo@72.11.211.243) Quit (Ping timeout: 480 seconds)
[6:16] <sage> spekzor: back. when you say the disks are maxed out, you mean they are very busy? but no client ops?
[6:17] * Muhlemmer (~kvirc@cable-90-50.zeelandnet.nl) Quit (Ping timeout: 480 seconds)
[6:17] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[6:17] <sage> is it specific disks, or all of them? (i'm guessing it's the ones with the slow request messages?)
[6:17] * shang (~ShangWu@ipvpn110138.netvigator.com) Quit (Remote host closed the connection)
[6:17] * haomaiwang (~haomaiwan@124.161.78.105) Quit (Remote host closed the connection)
[6:17] <sage> if you can pick a single osd that is busy but not apparently doing much client work, please do
[6:18] * haomaiwang (~haomaiwan@li634-52.members.linode.com) has joined #ceph
[6:18] <sage> ceph daemon osd.NNN config set debug_ms 1 ; ceph daemon osd.NNN config set debug_osd 20 ; let it go for a few minutes, and then set the debug levels back to 0. and then open a ticket at tracker.ceph.com and attach or link to the log?
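Spelled out, the capture might look like this on the node hosting the busy OSD (osd.7 is just an example; the log path is the Ubuntu default):

    ceph daemon osd.7 config set debug_ms 1
    ceph daemon osd.7 config set debug_osd 20
    sleep 300                                  # let it log for a few minutes
    ceph daemon osd.7 config set debug_ms 0
    ceph daemon osd.7 config set debug_osd 0
    # then attach /var/log/ceph/ceph-osd.7.log to a ticket at tracker.ceph.com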
[6:18] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:25] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[6:28] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:28] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[6:33] * scuttlemonkey (~scuttlemo@72.11.211.243) has joined #ceph
[6:33] * ChanServ sets mode +o scuttlemonkey
[6:33] * haomaiwa_ (~haomaiwan@124.161.78.105) has joined #ceph
[6:40] * haomaiwang (~haomaiwan@li634-52.members.linode.com) Quit (Ping timeout: 480 seconds)
[6:41] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) has joined #ceph
[6:45] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[6:50] <spekzor> hi, sage
[6:50] <spekzor> will try
[6:52] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) has left #ceph
[6:56] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[6:56] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit ()
[6:56] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[6:57] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit ()
[6:58] * michalefty (~micha@p20030071CF63F800E471F6D248F304EA.dip0.t-ipconnect.de) has joined #ceph
[7:01] * ajazdzewski (~quassel@2001:4dd0:ff00:9081:30fe:1707:d6ea:dba3) has joined #ceph
[7:05] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[7:08] * dereky (~derek@proxy00.umiacs.umd.edu) Quit (Ping timeout: 480 seconds)
[7:09] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:10] * m0e (~Moe@41.45.107.223) Quit (Ping timeout: 480 seconds)
[7:11] * stewiem2000 (~stewiem20@195.10.250.233) Quit (Read error: Operation timed out)
[7:12] * stewiem2000 (~stewiem20@195.10.250.233) has joined #ceph
[7:14] * ajazdzewski (~quassel@2001:4dd0:ff00:9081:30fe:1707:d6ea:dba3) Quit (Ping timeout: 480 seconds)
[7:19] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[7:21] * lalatenduM (~lalatendu@nat-pool-blr-t.redhat.com) has joined #ceph
[7:21] * cephalopod (~chris@194.28.69.111.static.snap.net.nz) Quit (Remote host closed the connection)
[7:28] * michalefty (~micha@p20030071CF63F800E471F6D248F304EA.dip0.t-ipconnect.de) has left #ceph
[7:29] * lalatenduM (~lalatendu@nat-pool-blr-t.redhat.com) Quit (Quit: Leaving)
[7:29] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[7:30] * hitsumabushi_ (hitsumabus@b.clients.kiwiirc.com) has joined #ceph
[7:32] * rotbeard (~redbeard@2a02:908:df11:9480:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[7:32] * scuttlemonkey (~scuttlemo@72.11.211.243) Quit (Ping timeout: 480 seconds)
[7:33] * lalatenduM (~lalatendu@209.132.188.8) has joined #ceph
[7:34] * yguang11_ (~yguang11@2406:2000:ef96:e:d0d1:dd97:7748:70c0) has joined #ceph
[7:34] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[7:35] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[7:35] * sleinen (~Adium@2001:620:0:26:a4ec:c48f:b068:683d) Quit (Quit: Leaving.)
[7:37] * yanzheng (~zhyan@jfdmzpr01-ext.jf.intel.com) has joined #ceph
[7:37] * stewiem2000 (~stewiem20@195.10.250.233) Quit (Read error: Connection timed out)
[7:38] * sleinen1 (~Adium@2001:620:0:26:549e:5929:f482:14a2) has joined #ceph
[7:39] * stewiem2000 (~stewiem20@195.10.250.233) has joined #ceph
[7:39] * ajazdzewski (~quassel@2001:4dd0:ff00:9081:30fe:1707:d6ea:dba3) has joined #ceph
[7:40] * vbellur (~vijay@nat-pool-blr-t.redhat.com) Quit (Ping timeout: 480 seconds)
[7:46] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[7:47] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[7:50] * spekzor (spekzor@d.clients.kiwiirc.com) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[7:50] * yguang11_ (~yguang11@2406:2000:ef96:e:d0d1:dd97:7748:70c0) Quit (Remote host closed the connection)
[7:50] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[7:54] * vbellur (~vijay@209.132.188.8) has joined #ceph
[7:54] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[7:55] * yguang11 (~yguang11@2406:2000:ef96:e:ace3:76e9:4203:f5ae) has joined #ceph
[7:58] * sleinen1 (~Adium@2001:620:0:26:549e:5929:f482:14a2) Quit (Quit: Leaving.)
[7:58] * ajazdzewski (~quassel@2001:4dd0:ff00:9081:30fe:1707:d6ea:dba3) Quit (Remote host closed the connection)
[8:02] * fdmanana_ (~fdmanana@bl10-253-137.dsl.telepac.pt) has joined #ceph
[8:03] * Cube (~Cube@netblock-75-79-17-138.dslextreme.com) has joined #ceph
[8:03] * steki (~steki@91.195.39.5) has joined #ceph
[8:04] * lucas1 (~Thunderbi@222.247.57.50) has joined #ceph
[8:04] * Cube (~Cube@netblock-75-79-17-138.dslextreme.com) Quit ()
[8:07] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[8:07] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Remote host closed the connection)
[8:07] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[8:07] * tws_1 (~traviss@rrcs-24-123-86-154.central.biz.rr.com) Quit (Read error: Operation timed out)
[8:07] * haomaiwa_ (~haomaiwan@124.161.78.105) Quit (Remote host closed the connection)
[8:08] * haomaiwang (~haomaiwan@li634-52.members.linode.com) has joined #ceph
[8:08] * JCL1 (~JCL@c-24-23-166-139.hsd1.ca.comcast.net) has joined #ceph
[8:09] * tws_ (~traviss@rrcs-24-123-86-154.central.biz.rr.com) has joined #ceph
[8:09] * fdmanana (~fdmanana@bl13-158-240.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[8:09] * ikrstic (~ikrstic@77-46-245-216.dynamic.isp.telekom.rs) has joined #ceph
[8:10] * haomaiwa_ (~haomaiwan@124.161.78.105) has joined #ceph
[8:11] * JCL (~JCL@2601:9:5980:39b:582a:a327:c20:46fa) Quit (Ping timeout: 480 seconds)
[8:12] * madkiss (~madkiss@212095007082.public.telering.at) has joined #ceph
[8:12] * dereky (~derek@proxy00.umiacs.umd.edu) has joined #ceph
[8:15] * hijacker (~hijacker@bgva.sonic.taxback.ess.ie) Quit (Remote host closed the connection)
[8:15] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[8:16] * haomaiwang (~haomaiwan@li634-52.members.linode.com) Quit (Read error: Operation timed out)
[8:17] * steveeJ (~junky@client248.amh.kn.studentenwohnheim-bw.de) Quit (Quit: Leaving)
[8:18] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) Quit (Quit: Leaving.)
[8:19] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[8:19] * madkiss (~madkiss@212095007082.public.telering.at) Quit (Quit: Leaving.)
[8:19] * ade (~abradshaw@193.202.255.218) has joined #ceph
[8:22] * thb (~me@port-22022.pppoe.wtnet.de) has joined #ceph
[8:23] * thb is now known as Guest12736
[8:24] * jeremy__s (~jeremy@LReunion-151-3-51.w193-253.abo.wanadoo.fr) has joined #ceph
[8:24] * yuriw (~Adium@ABordeaux-654-1-80-223.w109-214.abo.wanadoo.fr) has joined #ceph
[8:25] * hitsumabushii (~hitsumabu@KD106132090218.au-net.ne.jp) has joined #ceph
[8:25] * hitsumabushii (~hitsumabu@KD106132090218.au-net.ne.jp) Quit (Max SendQ exceeded)
[8:25] * davidzlap (~Adium@ip68-4-173-198.oc.oc.cox.net) has joined #ceph
[8:26] * hitsumabushii (~hitsumabu@KD106132090218.au-net.ne.jp) has joined #ceph
[8:26] * vbellur (~vijay@209.132.188.8) Quit (Ping timeout: 480 seconds)
[8:28] * yguang11_ (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[8:31] * sleinen (~Adium@194.230.53.158) has joined #ceph
[8:32] * yguang11 (~yguang11@2406:2000:ef96:e:ace3:76e9:4203:f5ae) Quit (Ping timeout: 480 seconds)
[8:33] * yguang11_ (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Remote host closed the connection)
[8:33] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[8:33] * aldavud (~aldavud@213.55.176.220) has joined #ceph
[8:33] * michalefty (~micha@p20030071CF63F800004787769D04AC61.dip0.t-ipconnect.de) has joined #ceph
[8:33] * michalefty (~micha@p20030071CF63F800004787769D04AC61.dip0.t-ipconnect.de) has left #ceph
[8:34] * sleinen1 (~Adium@2001:620:0:26:11bd:d533:b49d:45ac) has joined #ceph
[8:36] * fghaas (~florian@91-119-141-13.dynamic.xdsl-line.inode.at) has joined #ceph
[8:37] * zack_dolby (~textual@em111-188-196-71.pool.e-mobile.ne.jp) Quit (Ping timeout: 480 seconds)
[8:37] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Remote host closed the connection)
[8:37] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[8:37] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[8:37] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[8:37] * vbellur (~vijay@nat-pool-blr-t.redhat.com) has joined #ceph
[8:39] * sleinen (~Adium@194.230.53.158) Quit (Ping timeout: 480 seconds)
[8:39] * Sysadmin88 (~IceChat77@94.4.22.173) Quit (Quit: Hard work pays off in the future, laziness pays off now)
[8:40] * yguang11_ (~yguang11@2406:2000:ef96:e:e8f2:a086:600c:bd8) has joined #ceph
[8:40] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[8:40] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[8:41] * stewiem2000 (~stewiem20@195.10.250.233) Quit (Read error: Connection timed out)
[8:41] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[8:42] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Read error: Operation timed out)
[8:44] <ingard> hi guys. anyone know if there are calamari deb packages in the works?
[8:48] * davidzlap (~Adium@ip68-4-173-198.oc.oc.cox.net) Quit (Quit: Leaving.)
[8:51] * thomnico (~thomnico@2a01:e35:8b41:120:295b:75af:854b:eac5) has joined #ceph
[8:53] * zack_dolby (~textual@em114-49-30-44.pool.e-mobile.ne.jp) has joined #ceph
[8:54] * Guest12736 (~me@port-22022.pppoe.wtnet.de) Quit (Quit: Leaving.)
[8:54] * jeremy__s (~jeremy@LReunion-151-3-51.w193-253.abo.wanadoo.fr) Quit (Quit: leaving)
[8:56] * analbeard (~shw@support.memset.com) has joined #ceph
[8:57] * jeremy___s (~jeremy__s@LReunion-151-3-51.w193-253.abo.wanadoo.fr) has joined #ceph
[9:01] * ghartz (~ghartz@ircad17.u-strasbg.fr) Quit (Remote host closed the connection)
[9:02] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Remote host closed the connection)
[9:03] * ajazdzewski (~quassel@lpz-66.sprd.net) has joined #ceph
[9:03] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[9:07] * saurabh (~saurabh@nat-pool-blr-t.redhat.com) has joined #ceph
[9:10] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[9:13] * yuriw (~Adium@ABordeaux-654-1-80-223.w109-214.abo.wanadoo.fr) Quit (Quit: Leaving.)
[9:14] * yguang11_ (~yguang11@2406:2000:ef96:e:e8f2:a086:600c:bd8) Quit (Ping timeout: 480 seconds)
[9:16] * haomaiwa_ (~haomaiwan@124.161.78.105) Quit (Remote host closed the connection)
[9:16] * haomaiwang (~haomaiwan@li634-52.members.linode.com) has joined #ceph
[9:17] * bandrus (~Adium@98.238.176.251) has joined #ceph
[9:19] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[9:25] * evl (~chatzilla@139.216.138.39) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 29.0.1/20140514131124])
[9:27] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:31] * haomaiwa_ (~haomaiwan@124.161.78.105) has joined #ceph
[9:32] * aldavud (~aldavud@213.55.176.220) Quit (Ping timeout: 480 seconds)
[9:33] * madkiss (~madkiss@212095007082.public.telering.at) has joined #ceph
[9:35] * ikrstic (~ikrstic@77-46-245-216.dynamic.isp.telekom.rs) Quit (Quit: Konversation terminated!)
[9:37] * haomaiwang (~haomaiwan@li634-52.members.linode.com) Quit (Ping timeout: 480 seconds)
[9:37] * CAPSLOCK2000 (~oftc@541856CC.cm-5-1b.dynamic.ziggo.nl) Quit (Ping timeout: 480 seconds)
[9:40] * zerick (~eocrospom@190.118.43.113) has joined #ceph
[9:41] * koleosfuscus (~koleosfus@ws11-189.unine.ch) has joined #ceph
[9:44] * lalatenduM (~lalatendu@209.132.188.8) Quit (Ping timeout: 480 seconds)
[9:45] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Ping timeout: 480 seconds)
[9:45] * hitsumabushii (~hitsumabu@KD106132090218.au-net.ne.jp) Quit (Remote host closed the connection)
[9:47] * sleinen1 (~Adium@2001:620:0:26:11bd:d533:b49d:45ac) Quit (Quit: Leaving.)
[9:47] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[9:48] * lucas1 (~Thunderbi@222.247.57.50) Quit (Quit: lucas1)
[9:49] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Ping timeout: 480 seconds)
[9:49] * sleinen (~Adium@194.230.53.158) has joined #ceph
[9:50] * zack_dolby (~textual@em114-49-30-44.pool.e-mobile.ne.jp) Quit (Ping timeout: 480 seconds)
[9:51] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[9:51] * ChanServ sets mode +v andreask
[9:51] * sleinen1 (~Adium@2001:620:0:26:a9d6:d684:e046:23fd) has joined #ceph
[9:52] * fghaas (~florian@91-119-141-13.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[9:52] * allsystemsarego (~allsystem@188.27.188.69) has joined #ceph
[9:53] * lalatenduM (~lalatendu@nat-pool-blr-t.redhat.com) has joined #ceph
[9:54] * kwaegema (~kwaegema@daenerys.ugent.be) has joined #ceph
[9:56] * sleinen1 (~Adium@2001:620:0:26:a9d6:d684:e046:23fd) Quit ()
[9:56] * sleinen (~Adium@194.230.53.158) Quit (Read error: Connection reset by peer)
[10:05] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has left #ceph
[10:10] * lucas1 (~Thunderbi@222.240.148.130) has joined #ceph
[10:13] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[10:16] * lucas1 (~Thunderbi@222.240.148.130) Quit (Quit: lucas1)
[10:16] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[10:16] * yguang11_ (~yguang11@2406:2000:ef96:e:e8f2:a086:600c:bd8) has joined #ceph
[10:17] * ghartz (~ghartz@ircad17.u-strasbg.fr) has joined #ceph
[10:17] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[10:21] * LeaChim (~LeaChim@host86-174-77-240.range86-174.btcentralplus.com) has joined #ceph
[10:24] <mo-> is there a way to make the OSDs fail over to using the public network for communication when the cluster network fails?
[10:25] * vbellur (~vijay@nat-pool-blr-t.redhat.com) Quit (Ping timeout: 480 seconds)
[10:26] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:26] * yguang11_ (~yguang11@2406:2000:ef96:e:e8f2:a086:600c:bd8) Quit (Remote host closed the connection)
[10:26] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[10:26] * madkiss1 (~madkiss@zid-vpnn018.uibk.ac.at) has joined #ceph
[10:27] * sleinen (~Adium@194.230.53.158) has joined #ceph
[10:29] * sleinen1 (~Adium@2001:620:0:26:b91d:52de:daf6:8524) has joined #ceph
[10:30] * drankis_off is now known as drankis
[10:32] * madkiss (~madkiss@212095007082.public.telering.at) Quit (Ping timeout: 480 seconds)
[10:33] * sleinen (~Adium@194.230.53.158) Quit (Read error: Operation timed out)
[10:34] * ccooke (~ccooke@spirit.gkhs.net) Quit (Remote host closed the connection)
[10:36] * vbellur (~vijay@209.132.188.8) has joined #ceph
[10:37] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[10:46] * koleosfuscus (~koleosfus@ws11-189.unine.ch) Quit (Quit: koleosfuscus)
[10:46] * davyjang (~oftc-webi@182.139.191.145) has joined #ceph
[10:46] * davyjang (~oftc-webi@182.139.191.145) Quit (Remote host closed the connection)
[10:47] * koleosfuscus (~koleosfus@ws11-189.unine.ch) has joined #ceph
[10:47] * davyjang (~oftc-webi@182.139.191.145) has joined #ceph
[10:49] * sleinen1 (~Adium@2001:620:0:26:b91d:52de:daf6:8524) Quit (Quit: Leaving.)
[10:57] <singler_> mo-: I guess you could set up pacemaker to move the IPs to the other network
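Only a sketch of what that might look like with a crmsh-style floating IP resource (the resource name and address are hypothetical; this is outside anything Ceph manages for you):

    crm configure primitive p_cluster_ip ocf:heartbeat:IPaddr2 \
        params ip="10.0.1.100" cidr_netmask="24" \
        op monitor interval="10s"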
[10:58] * m0e (~Moe@41.45.234.191) has joined #ceph
[10:58] * koleosfuscus (~koleosfus@ws11-189.unine.ch) Quit (Quit: koleosfuscus)
[10:58] * madkiss (~madkiss@zid-vpnn018.uibk.ac.at) has joined #ceph
[11:00] * yanzheng (~zhyan@jfdmzpr01-ext.jf.intel.com) Quit (Remote host closed the connection)
[11:00] <mo-> hm so only in a manual-ish way. thanks tho
[11:00] <singler_> np
[11:02] * m0e (~Moe@41.45.234.191) Quit (Read error: Connection reset by peer)
[11:03] * madkiss1 (~madkiss@zid-vpnn018.uibk.ac.at) Quit (Ping timeout: 480 seconds)
[11:05] * sleinen (~Adium@194.230.53.158) has joined #ceph
[11:06] * stewiem2000 (~stewiem20@195.10.250.233) has joined #ceph
[11:06] * m0e (~Moe@41.45.234.191) has joined #ceph
[11:08] * sleinen1 (~Adium@2001:620:0:26:5e6:627c:7b54:722f) has joined #ceph
[11:13] * sleinen (~Adium@194.230.53.158) Quit (Ping timeout: 480 seconds)
[11:14] * koleosfuscus (~koleosfus@ws11-189.unine.ch) has joined #ceph
[11:16] * rendar (~I@host19-176-dynamic.20-87-r.retail.telecomitalia.it) has joined #ceph
[11:22] * m0e (~Moe@41.45.234.191) Quit (Ping timeout: 480 seconds)
[11:26] * leseb (~leseb@185.21.174.206) Quit (Killed (NickServ (Too many failed password attempts.)))
[11:28] * leseb (~leseb@185.21.174.206) has joined #ceph
[11:28] * dereky_ (~derek@pool-71-114-104-38.washdc.fios.verizon.net) has joined #ceph
[11:31] * lucas1 (~Thunderbi@218.76.25.66) has joined #ceph
[11:31] * analbeard (~shw@support.memset.com) Quit (Read error: Operation timed out)
[11:31] * xdeller (~xdeller-t@95-31-29-125.broadband.corbina.ru) Quit (Quit: Leaving)
[11:32] * mongo (~gdahlman@voyage.voipnw.net) Quit (Remote host closed the connection)
[11:32] * dereky (~derek@proxy00.umiacs.umd.edu) Quit (Ping timeout: 480 seconds)
[11:32] * dereky_ is now known as dereky
[11:32] * mongo (~gdahlman@voyage.voipnw.net) has joined #ceph
[11:33] <davyjang> I am deploying a Ceph cluster following the official guide. In one step, when I run "ceph -s", the output is "2014-06-06 01:53:03.797265 b2792b40 0 -- :/1003930 >> 192.168.50.133:6789/0 pipe(0xb591a008 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0xb591a1d0).fault"
[11:33] <davyjang> I don't know what it means
[11:34] <davyjang> Can someone help me?
[11:34] <singler_> I think it means that it cannot contact the monitor
[11:34] <singler_> check your ceph.conf
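A sketch of what to check, assuming the monitor should be the 192.168.50.133 host shown in the error:

    # on the client/admin node: does ceph.conf point at the right monitor(s)?
    grep -E 'mon_initial_members|mon_host' /etc/ceph/ceph.conf
    # on the monitor node: is ceph-mon running and listening on 6789?
    ps aux | grep ceph-mon
    netstat -lnt | grep 6789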
[11:35] * lucas1 (~Thunderbi@218.76.25.66) Quit ()
[11:36] <davyjang> but according to the guide, this command is run on the monitor
[11:36] * davyjang (~oftc-webi@182.139.191.145) Quit (Remote host closed the connection)
[11:40] * sleinen1 (~Adium@2001:620:0:26:5e6:627c:7b54:722f) Quit (Quit: Leaving.)
[11:42] * lalatenduM (~lalatendu@nat-pool-blr-t.redhat.com) Quit (Quit: Leaving)
[11:43] * Ponyo (~fuzzy@c-98-232-38-159.hsd1.wa.comcast.net) Quit (Ping timeout: 480 seconds)
[11:44] * mongo (~gdahlman@voyage.voipnw.net) Quit (Ping timeout: 480 seconds)
[11:44] * sleinen (~Adium@194.230.53.158) has joined #ceph
[11:47] * analbeard (~shw@support.memset.com) has joined #ceph
[11:48] * madkiss1 (~madkiss@212095007082.public.telering.at) has joined #ceph
[11:48] * davyjang (~oftc-webi@182.139.191.145) has joined #ceph
[11:49] * madkiss (~madkiss@zid-vpnn018.uibk.ac.at) Quit (Ping timeout: 480 seconds)
[11:50] <ibuclaw> Hi, I'm trying to enable usage logging, but it doesn't seem to be working.
[11:50] <ibuclaw> Just returns: { "entries": [], "summary": []}
[11:50] <ibuclaw> I have [client.radosgw.gateway] rgw enable usage log = true
[11:51] <ibuclaw> Though I do note that the .usage pool has not been created by the radosgw.
[11:51] <ibuclaw> is this something I need to create manually to remedy? Or is it just something that I'm missing in the config?
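For reference, a sketch of the usual setup, assuming the gateway instance really is named client.radosgw.gateway as above (the .usage pool is normally created on demand once logged traffic arrives):

    # ceph.conf on the gateway host
    [client.radosgw.gateway]
    rgw enable usage log = true
    # restart radosgw, generate some requests, then:
    radosgw-admin usage show --show-log-entries=false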
[11:52] <singler_> davyjang: I think the "ceph" command reads ceph.conf and tries to contact the mon
[11:52] * sleinen (~Adium@194.230.53.158) Quit (Ping timeout: 480 seconds)
[11:53] <davyjang> but the host name and IP
[11:54] <davyjang> are all in the ceph.conf
[11:54] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[11:56] <singler_> is 192.168.50.133 a mon host and does ceph-mon run there?
[11:58] * davyjang (~oftc-webi@182.139.191.145) Quit (Remote host closed the connection)
[11:59] * davyjang (~oftc-webi@182.139.191.145) has joined #ceph
[11:59] <davyjang> yes
[12:00] <davyjang> and when I input "ceph-deploy gatherkeys node1", the output contains "Unable to find /var/lib/ceph/bootstrap-osd/ceph.keyring on ['node1']".
[12:02] * mongo (~gdahlman@voyage.voipnw.net) has joined #ceph
[12:02] <singler_> sorry, I am not able to help you with gatherkeys (and I guess it may be a problem). You could try searching for the keys manually
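If the monitor itself is healthy, the bootstrap keyring gatherkeys is looking for can usually be regenerated on node1 and placed at the default path; a sketch (the cap profile assumes a cuttlefish-or-later cluster):

    ceph auth get-or-create client.bootstrap-osd mon 'allow profile bootstrap-osd' \
        -o /var/lib/ceph/bootstrap-osd/ceph.keyring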
[12:02] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[12:03] <davyjang> ok, I will learn more, thanks for discussing
[12:03] * davyjang (~oftc-webi@182.139.191.145) has left #ceph
[12:04] <singler_> you can try following the manual install guide, maybe it will be clearer where the problem is
[12:05] * The_Bishop (~bishop@f055213012.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[12:06] * madkiss1 (~madkiss@212095007082.public.telering.at) Quit (Quit: Leaving.)
[12:07] * ghartz (~ghartz@ircad17.u-strasbg.fr) Quit (Remote host closed the connection)
[12:10] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[12:13] * lalatenduM (~lalatendu@209.132.188.8) has joined #ceph
[12:15] * ghartz (~ghartz@ircad17.u-strasbg.fr) has joined #ceph
[12:29] * Infitialis (~infitiali@194.30.182.18) has joined #ceph
[12:30] * eternaleye (~eternaley@50.245.141.73) Quit (Ping timeout: 480 seconds)
[12:32] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[12:35] * eternaleye (~eternaley@50.245.141.73) has joined #ceph
[12:42] * hijacker (~hijacker@bgva.sonic.taxback.ess.ie) has joined #ceph
[12:45] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Read error: Connection reset by peer)
[12:46] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[12:49] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[12:50] * ade (~abradshaw@193.202.255.218) Quit (Ping timeout: 480 seconds)
[12:57] * rwheeler (~rwheeler@173.48.207.57) has joined #ceph
[12:58] * sleinen (~Adium@194.230.53.158) has joined #ceph
[12:59] * sleinen1 (~Adium@2001:620:0:26:b894:11d:2990:d746) has joined #ceph
[13:02] * i_m (~ivan.miro@gbibp9ph1--blueice4n2.emea.ibm.com) has joined #ceph
[13:06] * sleinen (~Adium@194.230.53.158) Quit (Ping timeout: 480 seconds)
[13:08] * ikrstic (~ikrstic@c82-214-88-26.loc.akton.net) has joined #ceph
[13:09] * zoltan_ (~zoltan@pat1.zurich.ibm.com) has joined #ceph
[13:09] <zoltan_> hey guys
[13:09] <zoltan_> so I need a shared fs for just storing these bloody libvirt.xmls with OpenStack (<2kB files / VM)
[13:09] <zoltan_> for such a use-case, do you think I'm gonna run into any problems with cephfs?
[13:09] <zoltan_> I know it hasn't been declared stable...
[13:11] * sleinen (~Adium@194.230.53.158) has joined #ceph
[13:12] * sleinen (~Adium@194.230.53.158) Quit ()
[13:13] * koleosfuscus (~koleosfus@ws11-189.unine.ch) Quit (Quit: koleosfuscus)
[13:14] <classicsnail> I can't see why there would be, but as a paranoia step I'd still back them up into an rbd blob or something
[13:15] <zoltan_> and I should only run one mds, right?
[13:16] <zoltan_> can I do active/backup MDS instead of multi-master? is that more stable?
[13:17] * sleinen1 (~Adium@2001:620:0:26:b894:11d:2990:d746) Quit (Ping timeout: 480 seconds)
[13:17] <zoltan_> I don't want to keep a gluster cluster just for this shared nova secret
[13:21] * analbeard (~shw@support.memset.com) Quit (Remote host closed the connection)
[13:21] * capri (~capri@212.218.127.222) has joined #ceph
[13:21] * analbeard (~shw@support.memset.com) has joined #ceph
[13:23] * capri (~capri@212.218.127.222) Quit ()
[13:23] * sleinen (~Adium@194.230.53.158) has joined #ceph
[13:23] * capri (~capri@212.218.127.222) has joined #ceph
[13:23] * koleosfuscus (~koleosfus@ws11-189.unine.ch) has joined #ceph
[13:25] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[13:26] * sleinen1 (~Adium@2001:620:0:26:bc73:18c1:b771:9ea2) has joined #ceph
[13:26] * koleosfuscus (~koleosfus@ws11-189.unine.ch) Quit ()
[13:26] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit ()
[13:26] * ifur (~osm@hornbill.csc.warwick.ac.uk) Quit (Ping timeout: 480 seconds)
[13:27] * koleosfuscus (~koleosfus@ws11-189.unine.ch) has joined #ceph
[13:29] * sleinen1 (~Adium@2001:620:0:26:bc73:18c1:b771:9ea2) Quit ()
[13:29] * saurabh (~saurabh@nat-pool-blr-t.redhat.com) Quit (Quit: Leaving)
[13:31] * sleinen (~Adium@194.230.53.158) Quit (Ping timeout: 480 seconds)
[13:37] * bens_ (~ben@c-71-231-52-111.hsd1.wa.comcast.net) has joined #ceph
[13:37] * bens (~ben@c-71-231-52-111.hsd1.wa.comcast.net) Quit (Read error: Connection reset by peer)
[13:41] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) has joined #ceph
[13:46] * tserong (~tserong@203-57-208-132.dyn.iinet.net.au) Quit (Quit: Leaving)
[13:48] * bens_ (~ben@c-71-231-52-111.hsd1.wa.comcast.net) Quit (Ping timeout: 480 seconds)
[13:51] * thomnico (~thomnico@2a01:e35:8b41:120:295b:75af:854b:eac5) Quit (Quit: Ex-Chat)
[13:52] * jeremy___s (~jeremy__s@LReunion-151-3-51.w193-253.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[13:55] * sleinen (~Adium@194.230.53.158) has joined #ceph
[13:56] * thomnico (~thomnico@2a01:e35:8b41:120:295b:75af:854b:eac5) has joined #ceph
[13:59] * sleinen1 (~Adium@2001:620:0:26:50a4:cf5f:62da:cd81) has joined #ceph
[14:01] * haomaiwa_ (~haomaiwan@124.161.78.105) Quit (Remote host closed the connection)
[14:01] * haomaiwang (~haomaiwan@124.248.205.17) has joined #ceph
[14:02] * bens (~ben@c-71-231-52-111.hsd1.wa.comcast.net) has joined #ceph
[14:04] * sleinen (~Adium@194.230.53.158) Quit (Ping timeout: 480 seconds)
[14:07] * haomaiwa_ (~haomaiwan@124.161.78.105) has joined #ceph
[14:09] * The_Bishop (~bishop@f055213012.adsl.alicedsl.de) has joined #ceph
[14:14] * haomaiwang (~haomaiwan@124.248.205.17) Quit (Ping timeout: 480 seconds)
[14:15] * jpuellma (uid32064@id-32064.ealing.irccloud.com) has joined #ceph
[14:16] <jpuellma> http://ceph.com/?mdg=218-pharmacy+support+viagra
[14:17] <jpuellma> I'm not sure why there is Viagra gibberish on ceph.com but maybe someone in here wants to take a look at it.
[14:17] * scuttlemonkey (~scuttlemo@72.11.211.243) has joined #ceph
[14:17] * ChanServ sets mode +o scuttlemonkey
[14:17] * ganders (~root@200-127-158-54.net.prima.net.ar) has joined #ceph
[14:17] <bandrus> thank you jpuellma, I'll make sure the proper people see it
[14:18] * zhaochao (~zhaochao@124.205.245.26) has left #ceph
[14:23] * thomnico (~thomnico@2a01:e35:8b41:120:295b:75af:854b:eac5) Quit (Quit: Ex-Chat)
[14:23] * huangjun (~kvirc@111.173.98.164) Quit (Ping timeout: 480 seconds)
[14:24] * vbellur (~vijay@209.132.188.8) Quit (Quit: Leaving.)
[14:30] * sleinen1 (~Adium@2001:620:0:26:50a4:cf5f:62da:cd81) Quit (Quit: Leaving.)
[14:32] * scuttlemonkey (~scuttlemo@72.11.211.243) Quit (Ping timeout: 480 seconds)
[14:33] <Infitialis> lol how did you find that jpuellma
[14:34] <jpuellma> If you pull up ceph.com in Chrome on an Android phone, there are some Viagra links at the very top of the page that don't seem to appear for other use cases.
[14:34] <jpuellma> It's weird.
[14:35] * JayJ (~jayj@157.130.21.226) has joined #ceph
[14:35] <jpuellma> They don't appear for the "Internet" web browser on Android. Nor do they seem to appear on Firefox or Chromium on Linux.
[14:35] <Infitialis> That's indeed weird, I did not see it in Safari until you sent that link.
[14:36] <Infitialis> indeed
[14:36] <kraken> http://i.imgur.com/bQcbpki.gif
[14:36] <Infitialis> Kraken did it, I knew it.
[14:36] <jpuellma> Gimme a sec and I'll upload a screenshot from my phone to show you what I mean...
[14:38] * thomnico (~thomnico@2a01:e35:8b41:120:295b:75af:854b:eac5) has joined #ceph
[14:39] <jpuellma> Here. http://m.imgur.com/LsfXwpF
[14:39] * thomnico (~thomnico@2a01:e35:8b41:120:295b:75af:854b:eac5) Quit ()
[14:40] * madkiss (~madkiss@178.188.60.118) has joined #ceph
[14:40] <Infitialis> I guess Ceph really is scalable.
[14:40] <jpuellma> I'm using Chrome 34.0.1847.114 on Android 4.4.2
[14:43] <Infitialis> Chrome 35.0.1916.138 here with Android 4.2.2 jpuellma
[14:44] <zoltan_> interesting how viagra can generate traffic on the channel, unlike legitimate questions ;-)
[14:44] <jpuellma> Yeah, #rhel gets busiest when arguing about television.
[14:44] <Infitialis> Well some people might need support on those subjects.
[14:45] * bandrus1 (~Adium@75.5.249.229) has joined #ceph
[14:46] <jpuellma> Infitialis: are you seeing those links on the homepage too, or just when you click my original direct link to the gibberish page?
[14:46] <vhasi> Chrome 35.0.1916.138 on Android 4.4.2 - no viagra spam
[14:46] <Infitialis> jpuellma: I did not see the links on the homepage, only when clicking on your link.
[14:47] <jpuellma> Weird. I know one other guy who saw it last night when I tested him. He's on the latest nexus phone.
[14:48] <jpuellma> One of the guys at last night's meetup could not get them to show up on his Firefox or Chrome on his Linux laptop.
[14:49] <jpuellma> I haven't done any testing beyond that.
[14:50] <Infitialis> We need more people to test these cases. Viagra advertisements have high priority.
[14:50] * gregmark (~Adium@68.87.42.115) has joined #ceph
[14:50] * bandrus (~Adium@98.238.176.251) Quit (Read error: Operation timed out)
[14:51] * thomnico (~thomnico@2a01:e35:8b41:120:295b:75af:854b:eac5) has joined #ceph
[14:51] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit ()
[14:54] * thomnico (~thomnico@2a01:e35:8b41:120:295b:75af:854b:eac5) Quit ()
[14:55] <cronix> line 140 in the page HTML
[14:55] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[14:59] * lalatenduM (~lalatendu@209.132.188.8) Quit (Quit: Leaving)
[14:59] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[15:08] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[15:09] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit ()
[15:11] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[15:11] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) has joined #ceph
[15:13] * drankis (~drankis__@89.111.13.198) Quit (Remote host closed the connection)
[15:15] * hijacker (~hijacker@bgva.sonic.taxback.ess.ie) Quit (Quit: Leaving)
[15:16] * sleinen (~Adium@194.230.53.158) has joined #ceph
[15:19] * sleinen1 (~Adium@2001:620:0:26:1c7f:ecb3:499d:ee6b) has joined #ceph
[15:19] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Ping timeout: 480 seconds)
[15:22] * sleinen (~Adium@194.230.53.158) Quit (Read error: Operation timed out)
[15:26] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[15:34] * sleinen1 (~Adium@2001:620:0:26:1c7f:ecb3:499d:ee6b) Quit (Quit: Leaving.)
[15:37] * hijacker (~hijacker@bgva.sonic.taxback.ess.ie) has joined #ceph
[15:38] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Read error: No route to host)
[15:38] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[15:39] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) has joined #ceph
[15:40] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[15:40] * steki (~steki@91.195.39.5) Quit (Read error: Connection reset by peer)
[15:45] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[15:51] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) Quit (Remote host closed the connection)
[15:51] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) has joined #ceph
[15:51] <cronix> is there a way to check which version of ceph is "running" currently on a mon?
[15:52] <cronix> ceph --admin-daemon /var/run/ceph/ceph-mon.csliveeubap-u01mon02.asok version does not work on said mon
[15:52] * b0e (~aledermue@juniper1.netways.de) Quit (Ping timeout: 480 seconds)
[15:53] * zoltan_ (~zoltan@pat1.zurich.ibm.com) Quit (Ping timeout: 480 seconds)
[15:58] * vbellur (~vijay@122.167.205.178) has joined #ceph
[15:59] * rpowell (~rpowell@128.135.219.215) has joined #ceph
[16:00] * ikrstic (~ikrstic@c82-214-88-26.loc.akton.net) Quit (Quit: Konversation terminated!)
[16:04] * zoltan_ (~zoltan@pat1.zurich.ibm.com) has joined #ceph
[16:16] * sleinen (~Adium@194.230.53.158) has joined #ceph
[16:18] * sleinen1 (~Adium@user-28-16.vpn.switch.ch) has joined #ceph
[16:19] * sleinen1 (~Adium@user-28-16.vpn.switch.ch) Quit ()
[16:23] * erice (~erice@71-208-255-210.hlrn.qwest.net) has joined #ceph
[16:24] * sleinen (~Adium@194.230.53.158) Quit (Ping timeout: 480 seconds)
[16:25] * BManojlovic (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[16:26] * koleosfuscus (~koleosfus@ws11-189.unine.ch) Quit (Quit: koleosfuscus)
[16:26] * i_m (~ivan.miro@gbibp9ph1--blueice4n2.emea.ibm.com) Quit (Quit: Leaving.)
[16:26] * funnel (~funnel@0001c7d4.user.oftc.net) has joined #ceph
[16:27] * diegows (~diegows@190.190.5.238) has joined #ceph
[16:27] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[16:28] * mdjp (~mdjp@2001:41d0:52:100::343) Quit (Quit: mdjp has quit)
[16:31] * markbby (~Adium@168.94.245.3) has joined #ceph
[16:33] * koleosfuscus (~koleosfus@ws11-189.unine.ch) has joined #ceph
[16:33] * markbby (~Adium@168.94.245.3) Quit ()
[16:36] * markbby (~Adium@168.94.245.3) has joined #ceph
[16:42] * koleosfuscus (~koleosfus@ws11-189.unine.ch) Quit (Quit: koleosfuscus)
[16:43] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[16:44] * koleosfuscus (~koleosfus@ws11-189.unine.ch) has joined #ceph
[16:45] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[16:45] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Read error: Connection reset by peer)
[16:49] * huangjun (~kvirc@117.151.52.204) has joined #ceph
[16:50] * hitsumabushi_ (hitsumabus@b.clients.kiwiirc.com) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[16:51] * zoltan_ (~zoltan@pat1.zurich.ibm.com) Quit (Ping timeout: 480 seconds)
[16:53] * huangjun (~kvirc@117.151.52.204) Quit ()
[16:53] * huangjun (~kvirc@117.151.52.204) has joined #ceph
[16:58] * Ronald (~oftc-webi@vpn.mc.osso.nl) has joined #ceph
[16:59] * zoltan_ (~zoltan@pat1.zurich.ibm.com) has joined #ceph
[17:01] * koleosfuscus (~koleosfus@ws11-189.unine.ch) Quit (Quit: koleosfuscus)
[17:01] * rpowell (~rpowell@128.135.219.215) has left #ceph
[17:04] * ikrstic (~ikrstic@77-46-245-216.dynamic.isp.telekom.rs) has joined #ceph
[17:07] <jcsp1> cronix: what in particular is not working about it?
[17:07] <jcsp1> (I get a {"version": '???' } response)
[17:08] * koleosfuscus (~koleosfus@ws11-189.unine.ch) has joined #ceph
[17:10] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:12] * madkiss1 (~madkiss@zid-vpnn040.uibk.ac.at) has joined #ceph
[17:13] * Infitialis (~infitiali@194.30.182.18) Quit ()
[17:13] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:14] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[17:16] * KaZeR (~kazer@c-67-161-64-186.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:17] <cronix> well with the mon in question i get a mon.nodename: Error EINVAL: invalid command
[17:17] * madkiss (~madkiss@178.188.60.118) Quit (Ping timeout: 480 seconds)
[17:19] * erice (~erice@71-208-255-210.hlrn.qwest.net) Quit (Ping timeout: 480 seconds)
[17:23] * KaZeR (~kazer@64.201.252.132) has joined #ceph
[17:23] <cronix> the same command with quorum_status works fine
[17:24] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[17:24] * ChanServ sets mode +v andreask
[17:25] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has left #ceph
[17:25] <cronix> said nodes are 0.80.1
[17:26] <seapasulli> do any commands work for the monitor?
[17:27] <seapasulli> how about /var/log/ceph/mon* (any info?)
[17:28] * kwaegema (~kwaegema@daenerys.ugent.be) Quit (Ping timeout: 480 seconds)
[17:30] * narb (~Jeff@38.99.52.10) has joined #ceph
[17:31] <cronix> jup
[17:31] <cronix> its just the version command which is broken
[17:31] <cronix> i think its a firefly problem
[17:31] <cronix> we dont have this issue on 0.72
[17:32] <seapasulli> ah I am still running .72 that could be why
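For reference, a few ways to check which version a mon is actually running; the socket path and daemon id below are illustrative and differ per host:

    ceph --admin-daemon /var/run/ceph/ceph-mon.<hostname>.asok version
    ceph daemon mon.<hostname> version    # shorthand for the admin-socket call above
    ceph-mon --version                    # version of the installed binary, which may differ from the running one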
[17:32] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[17:34] * blSnoopy (~snoopy@miram.persei.mw.lg.virgo.supercluster.net) Quit (Remote host closed the connection)
[17:37] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[17:38] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[17:39] * ajazdzewski (~quassel@lpz-66.sprd.net) Quit (Remote host closed the connection)
[17:40] * scuttlemonkey (~scuttlemo@12.130.116.145) has joined #ceph
[17:40] * ChanServ sets mode +o scuttlemonkey
[17:41] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) Quit (Quit: Leaving)
[17:43] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[17:51] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Ping timeout: 480 seconds)
[17:51] * huangjun (~kvirc@117.151.52.204) Quit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
[17:51] * joef (~Adium@2620:79:0:131:6087:c937:957d:e3cb) has joined #ceph
[17:52] * xarses (~andreww@12.164.168.117) has joined #ceph
[17:54] * scuttlemonkey (~scuttlemo@12.130.116.145) Quit (Ping timeout: 480 seconds)
[17:55] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) has left #ceph
[17:59] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[18:01] * lupu (~lupu@86.107.101.246) Quit (Quit: Leaving.)
[18:02] * reed (~reed@75-101-54-131.dsl.static.sonic.net) has joined #ceph
[18:04] * The_Bishop (~bishop@f055213012.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[18:05] * lupu (~lupu@86.107.101.246) has joined #ceph
[18:08] * funnel (~funnel@0001c7d4.user.oftc.net) Quit (Quit: Lost terminal)
[18:09] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[18:09] * funnel (~funnel@0001c7d4.user.oftc.net) has joined #ceph
[18:10] * koleosfuscus (~koleosfus@ws11-189.unine.ch) Quit (Quit: koleosfuscus)
[18:10] * gregsfortytwo (~Adium@129.210.115.14) has joined #ceph
[18:11] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[18:11] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[18:14] * ibuclaw (~ibuclaw@rabbit.dbplc.com) Quit (Quit: Leaving)
[18:14] * madkiss (~madkiss@178.188.60.118) has joined #ceph
[18:15] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[18:15] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[18:15] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[18:16] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[18:19] * madkiss1 (~madkiss@zid-vpnn040.uibk.ac.at) Quit (Ping timeout: 480 seconds)
[18:22] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:27] * zoltan_ (~zoltan@pat1.zurich.ibm.com) Quit (Quit: Leaving)
[18:30] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[18:31] <KaZeR> some of my OSDs don't respond to commands :
[18:31] <KaZeR> osd.0: Error EINTR: problem getting command descriptions from osd.0
[18:31] <KaZeR> osd.0: problem getting command descriptions from osd.0
[18:31] <diegows> hi
[18:32] <KaZeR> it is shown as up in ceph -s. what can i do ?
[18:32] <diegows> AFAIK 0.80+ requires at least three osds, something defined in the crush rules
[18:32] <diegows> is there an easy way to change that? I have to do a proof of concept and I have only two physical servers
[18:34] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) has joined #ceph
[18:34] * The_Bishop (~bishop@f055213012.adsl.alicedsl.de) has joined #ceph
[18:34] * BManojlovic (~steki@cable-94-189-165-169.dynamic.sbb.rs) has joined #ceph
[18:37] <KaZeR> damn recovery is completely killing my cluster :(
[18:37] <KaZeR> client io 0 B/s rd, 36709 B/s wr, 5 op/s
[18:38] <KaZeR> it is completely unusable
[18:40] <Nacer_> KaZeR: what kind of network interface do you have ?
[18:41] <KaZeR> 1GbE on this cluster unfortunately
[18:41] * gregsfortytwo (~Adium@129.210.115.14) Quit (Quit: Leaving.)
[18:41] * gregsfortytwo (~Adium@129.210.115.14) has joined #ceph
[18:41] * markbby (~Adium@168.94.245.1) has joined #ceph
[18:42] * Sysadmin88 (~IceChat77@94.4.22.173) has joined #ceph
[18:42] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[18:42] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:43] * zack_dolby (~textual@pdf8519e7.tokynt01.ap.so-net.ne.jp) has joined #ceph
[18:45] <KaZeR> but i think that my bottleneck is more on the hard drive currently. that's why i moved the journal of some of my OSDs to SSDs, but despite lowering the recovery priority with ceph osd tell, my volumes are almost unusable
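A hedged sketch of the usual knobs for reducing recovery impact; the values are illustrative, not recommendations:

    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'

    # to make the same settings persist across OSD restarts, in ceph.conf:
    # [osd]
    #     osd max backfills = 1
    #     osd recovery max active = 1
    #     osd recovery op priority = 1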
[18:46] * xarses (~andreww@12.164.168.117) Quit (Read error: Operation timed out)
[18:48] * BManojlovic (~steki@cable-94-189-165-169.dynamic.sbb.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[18:50] * mourgaya (~kvirc@41.0.206.77.rev.sfr.net) has joined #ceph
[18:51] * BManojlovic (~steki@cable-94-189-165-169.dynamic.sbb.rs) has joined #ceph
[18:53] * nwat (~textual@eduroam-240-162.ucsc.edu) has joined #ceph
[18:54] <ingard> hi guys. does anyone have any sane docs to install calamari? :)
[18:55] <ingard> i puppetized the server with this manifest : https://review.openstack.org/#/c/97128/2/manifests/calamari/install.pp
[18:55] <ingard> which works fine
[18:55] <ingard> but i'm not a dev, and the "setting up your dev environment" github description from https://github.com/ceph/calamari
[18:56] <ingard> was not quite what i was looking for :)
[18:58] <ingard> i guess i'm looking for the "production mode" doc
[18:58] <ingard> from github :
[18:59] <ingard> This code is meant to be runnable in two ways: in production mode where it is installed systemwide from packages, or in development mode where it is running out of a git repo somewhere in your home directory. The rest of this readme will discuss setting up a git clone in development mode.
[18:59] <ingard> hehe
[19:03] * xarses (~andreww@12.164.168.117) has joined #ceph
[19:07] <KaZeR> ingard, i'm definitely not a python expert but i think that you might just need to replace "python setup.py develop" by "python setup.py install"
[19:07] <KaZeR> the difference is mostly that develop will install it in your own env, in your homedir, whereas install will install it system-wide
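In other words, roughly:

    python setup.py develop    # links the checkout into the active (virtual)env; edits to the source take effect immediately
    python setup.py install    # copies the package into site-packages for a system-wide style install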
[19:11] * davidzlap (~Adium@ip68-4-173-198.oc.oc.cox.net) has joined #ceph
[19:11] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) has joined #ceph
[19:14] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[19:15] <Karcaw> i had hoped that vanguard would have built me a production vm from the calamari sources, but i have had no luck there yet.
[19:16] <Karcaw> vagrant.. not vanguard
[19:16] <ingard> hehe yeah i can imagine :)
[19:17] <ingard> i found some docs but they only talk about the enterprise packages
[19:17] <ingard> so no luck there either
[19:17] * tracphil (~tracphil@130.14.71.217) has joined #ceph
[19:18] * tracphil (~tracphil@130.14.71.217) Quit ()
[19:18] * tracphil (~tracphil@130.14.71.217) has joined #ceph
[19:24] * steki (~steki@cable-94-189-165-169.dynamic.sbb.rs) has joined #ceph
[19:26] * BManojlovic (~steki@cable-94-189-165-169.dynamic.sbb.rs) Quit (Read error: Operation timed out)
[19:26] * Cube (~Cube@12.248.40.138) has joined #ceph
[19:29] <jcsp1> ingard: there are some sparse notes on building packages here http://calamari.readthedocs.org/en/latest/development/building_packages.html
[19:29] <jcsp1> however, this stuff is not yet in a "plug and play" state for non-developers
[19:30] <ingard> i did get the packages built actually
[19:30] <ingard> but it doesnt install anything to /opt/calamari
[19:31] <jcsp1> what distribution are you on?
[19:31] <ingard> ubuntu precise
[19:31] <jcsp1> and you were using the "precise-build" vagrant environment?
[19:31] * sputnik1_ (~sputnik13@207.8.121.241) has joined #ceph
[19:31] <ingard> https://review.openstack.org/#/c/97128/2/manifests/calamari/install.pp
[19:31] <ingard> i used that manifest to get everything installed
[19:31] <ingard> i had to manually install nodejs 0.10
[19:32] <jcsp1> ok, so you've invented your own build method, your mileage may vary ;-)
[19:32] <ingard> well i'm happy to do it which ever way :)
[19:33] <ingard> i've no experience with vagrant unfortunately
[19:33] <jcsp1> vagrant is pretty easy to use, I recommend it. The actual dependencies are set up using salt, using the states here: https://github.com/ceph/calamari/tree/master/vagrant/precise-build/salt/roots/
[19:33] <jcsp1> that would be your reference point for a working build environment
[19:34] <jcsp1> the build is very very sensitive to the environment, because it's building a virtualenv that gets built into a package, and virtualenvs include or exclude things depending on what's installed systemwide
[19:35] <jcsp1> i.e. it's not just what you have installed that matters, what you *don't* have installed is important too ... so using vagrant to get a super-clean build VM is a good idea
[19:35] * leseb (~leseb@185.21.174.206) Quit (Killed (NickServ (Too many failed password attempts.)))
[19:35] <jcsp1> it should go without saying that we'll welcome efforts to make this stuff less idiosyncratic
[19:36] <ingard> i'm assuming its only a matter of time until someone builds packages for the general public :)
[19:36] <ingard> it feels like i'm close tho, i've got everything built and compiled
[19:36] <ingard> i just cant get it through the last 5%
[19:37] <ingard> make install keeps referencing /opt/calamari but its not getting created for some reason
[19:39] * Vacum_ is now known as Vacum
[19:40] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) has joined #ceph
[19:41] * leseb (~leseb@185.21.174.206) has joined #ceph
[19:45] * talonisx (~talonisx@pool-108-18-97-131.washdc.fios.verizon.net) has joined #ceph
[19:49] <ingard> jcsp1: so anyway i'm happy to follow instructions to do it the vagrant way if you can explain it to me :)
[19:50] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[19:53] <jcsp1> ingard: so you need vagrant installed (naturally), and then you go into vagrant/precise-build and run "vagrant up".
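Roughly, assuming vagrant is installed from vagrantup.com rather than the distro package, and that the salt states in the repo drive the actual make target:

    git clone https://github.com/ceph/calamari.git
    cd calamari/vagrant/precise-build
    vagrant up    # boots a clean precise VM and provisions it with salt, which runs the package build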
[19:53] * pressureman (~daniel@f053035017.adsl.alicedsl.de) has joined #ceph
[19:53] * BManojlovic (~steki@cable-94-189-165-169.dynamic.sbb.rs) has joined #ceph
[19:54] * gregsfortytwo (~Adium@129.210.115.14) Quit (Quit: Leaving.)
[19:57] * steki (~steki@cable-94-189-165-169.dynamic.sbb.rs) Quit (Ping timeout: 480 seconds)
[19:58] <ingard> /usr/local/src/calamari/vagrant/precise-build/Vagrantfile:7: undefined method `configure' for Vagrant:Module (NoMethodError)
[19:58] <ingard> :s
[19:59] * bandrus1 (~Adium@75.5.249.229) Quit (Quit: Leaving.)
[20:00] * haomaiw__ (~haomaiwan@124.161.78.105) has joined #ceph
[20:00] * haomaiwa_ (~haomaiwan@124.161.78.105) Quit (Read error: Connection reset by peer)
[20:01] <loicd> mourgaya: \o
[20:01] <loicd> mourgaya: now is the ceph user committee monthly meeting right ?
[20:02] <mourgaya> hi, I will begin the ceph online meetup in a few minutes!
[20:02] <loicd> https://wiki.ceph.com/Community/Meetings
[20:02] <Vacum> Hi! :)
[20:03] <loicd> calamari has seen a lot of activity recently
[20:03] <mourgaya> Vacum: welcome !
[20:03] * dereky (~derek@pool-71-114-104-38.washdc.fios.verizon.net) Quit (Quit: dereky)
[20:03] * ch4os (~ch4os@153.19.13.10) has joined #ceph
[20:03] <loicd> but I don't know if anyone has actually been successful in installing it?
[20:03] <ch4os> hi
[20:03] <mourgaya> loicd: not yet
[20:03] <mourgaya> ?
[20:03] <ingard> loicd: i've tried for this whole day - but not successfully :)
[20:04] <loicd> dmsimard: is trying to write a puppet-ceph class for it
[20:04] <mourgaya> what about creating a docker image of the calamari server?
[20:04] <loicd> ingard: dam :-)
[20:04] <loicd> mourgaya: +1
[20:04] <ingard> would be nice :)
[20:04] <loicd> ingard: what problem did you run into ?
[20:05] <ingard> all sorts
[20:05] * loicd looks at the tracker for calamari
[20:05] <ingard> mostly that i'm not a developer and dont really understand all the stuff thats going on in the install instructions
[20:05] <ingard> https://github.com/Crapworks/ceph-dash
[20:05] <ch4os> I've tried to reweight cluster a little bit (due to uneven space usage) and now one of pg's is in state: 1 active+remapped+backfill_toofull, but all osd have more than 10% of space
[20:05] <loicd> http://tracker.ceph.com/projects/calamari/issues
[20:05] <ingard> considering how easy that thing was to get up and running, calamari has some way to go :)
[20:06] <loicd> ch4os: we're having a meeting right now and it will last one hour (we're 5 minutes in). do you mind waiting until it is complete ?
[20:06] <ch4os> sure, sorry ;-)
[20:07] <ingard> http://dachary.org/?p=2548
[20:07] <loicd> no worries
[20:07] <ingard> is that your blog btw?
[20:07] <loicd> yes
[20:07] <ingard> was just reading on there :)
[20:07] <loicd> ingard: your refering to https://github.com/Crapworks/ceph-dash which is not calamari, did you try that too ?
[20:08] <ingard> yeah the crapworks dash is running nicely
[20:08] <loicd> cool
[20:08] <mourgaya> ingard: I think that calamari provides more functionality!
[20:08] <ingard> yeah
[20:08] <ingard> indeed
[20:08] <kraken> http://i.imgur.com/bQcbpki.gif
[20:08] <ingard> which is why i wanted to set it up :)
[20:08] <ingard> https://review.openstack.org/#/c/97128/2/manifests/calamari/install.pp
[20:08] <loicd> mourgaya: do you know how far Alain went ?
[20:08] <ingard> this puppet manifest works-ish
[20:08] <ingard> i can compile both calamari and calamari-client
[20:09] <ingard> but after compiling i'm a bit in the dark as to how to get the app to actually work and point it to the cluster etc
[20:09] <mourgaya> Alain will slowly stop development and will contribute to calamari
[20:09] <diegows> what's the parameter that affects the number of osds required to activate the cluster?
[20:09] <diegows> I want to activate it with two osds only
[20:10] <diegows> I was told that firefly enforces three osds by default
[20:10] <diegows> but I have two :)
[20:10] <loicd> here is my selfish, passive user perspective: wait until it is packaged, because it will be, eventually. Wait a little more (say a month) after it is packaged so that people other than me run into the problems. Then install and try it and report bugs. The installation procedure is too difficult as it is.
[20:10] <mourgaya> but we can create pools with it
[20:10] <loicd> diegows: we're having a meeting right now and it will last one hour (we're 10 minutes in). do you mind waiting until it is complete ?
[20:11] <diegows> loicd, of couse! :)
[20:11] * yuriw (~Adium@AMontpellier-653-1-477-111.w92-143.abo.wanadoo.fr) has joined #ceph
[20:11] <diegows> thanks!
[20:11] <loicd> thanks you !
[20:11] <loicd> mourgaya: it's good news that Alain is going to contribute to calamari
[20:11] * loicd digging the URL of the mail thread
[20:11] <mourgaya> loicd: I hope, but I can't wait, so I will work on a docker solution
[20:12] <ingard> loicd: yeah i'm assuming it will be packaged eventually :)
[20:12] <ingard> but why not just release the packages that inktank has already built? :)
[20:12] <Vacum> ingard++
[20:12] <mourgaya> ingard: +
[20:13] <jcsp1> ingard: we haven't actually done a release of this software internally yet either.
[20:13] <ingard> https://download.inktank.com/docs/Calamari%201.1%20Installation%20Guide.pdf
[20:13] <loicd> ingard: did they build packages ?
[20:13] <Vacum> jcsp1: so what did you give enterprise customers?
[20:13] <ingard> this doc seems to reference packages in a password protected repo
[20:14] <jcsp1> that doc refers to calamari 1.1, which is rather old and very different to the state of master
[20:14] * bandrus (~Adium@66-87-118-238.pools.spcsdns.net) has joined #ceph
[20:14] * varde (~varde@95.68.84.133) has joined #ceph
[20:15] * yuriw (~Adium@AMontpellier-653-1-477-111.w92-143.abo.wanadoo.fr) has left #ceph
[20:15] <loicd> I guess https://github.com/ceph/calamari/tree/master/debian is a starting point for packaging
[20:15] * gregsfortytwo (~Adium@129.210.115.14) has joined #ceph
[20:16] <ingard> make dpkg should build i guess
[20:16] <varde> good day
[20:16] <loicd> jcsp1: is there any way passive users could help facilitate the installation process ? I mean other than passively waiting as I just said I would ;-)
[20:16] <Vacum> if someone in the community, not part of Inktank, were to package it, e.g. for debian or ubuntu - how would it become an "official" package on github.com/ceph?
[20:16] <jcsp1> let me repeat: we are using a vagrant environment for building packages, see the vagrant/ folder in calamari's source repo. Within that, it calls a make target which does lots of things to build a package.
[20:16] <ingard> jcsp1: agrantfile:7: undefined method `configure' for Vagrant:Module (NoMethodError)
[20:16] <loicd> Vacum: with a pull request ;-)
[20:16] <ingard> any hints ?
[20:16] <jcsp1> it needs to be simpler and more obvious, but we're in week 1 of the open source lifetime of the project here people :-)
[20:17] <jcsp1> ingard: if you google that error, you'll find that it's because you're using an ancient version of vagrant
[20:17] <ingard> yeah its only a matter of 1 person getting it right and writing a blog post
[20:17] <jcsp1> I should have said, you need to download vagrant from the vagrant site, not use the version in precise
[20:17] <ingard> everyone else will follow ;)
[20:17] <ingard> right
[20:17] <ingard> you should have ;)
[20:17] * rendar (~I@host19-176-dynamic.20-87-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[20:18] <loicd> I completely missed the fact that there is some vagrant related packaging...
[20:18] * loicd reading https://github.com/ceph/calamari/#setting-up-a-development-environment again
[20:19] <jcsp1> "setting up a developer environment" is kind of the opposite of building packages: it's about how to have a lightweight hackable instance running as an unprivileged user
[20:19] <Vacum> the Calamari/github/package/pull request thing brings me back to a question I had previously. and it boils down to: how can one determine which parts of github.com/ceph/... are "official" and commercially supported, and which not?
[20:19] <loicd> ok.
[20:19] <jcsp1> pull requests to http://calamari.readthedocs.org/en/latest/development/building_packages.html would be great for more detail on building packages
[20:20] <jcsp1> currently those docs are built from https://github.com/ceph/calamari/tree/wip-rtd, where I will be happy to take pull requests
[20:20] <loicd> jcsp1: nice
[20:20] * rturk|afk is now known as rturk
[20:21] <jcsp1> Vacum: you can tell whether it's officially supported by whether you were sent an invoice for it ;-)
[20:21] <loicd> as a user I would like to see that first in the README.md
[20:21] <Vacum> jcsp1: unfortunately it's the other way round. we paid, and then we heard what is NOT supported...
[20:21] <loicd> README.rst
[20:22] <loicd> my first reflex when reading https://github.com/ceph/calamari/ is to ask myself: where are the packages? where are the installation instructions if there are no packages?
[20:22] <jcsp1> loicd: right. it's not there yet. we will need a real "landing page" for calamari at some stage that would send developers to the git repo and users to somewhere else
[20:23] <mourgaya> loicd: +++
[20:23] * yuriw (~Adium@AMontpellier-653-1-477-111.w92-143.abo.wanadoo.fr) has joined #ceph
[20:23] * JayJ (~jayj@157.130.21.226) Quit (Quit: Computer has gone to sleep.)
[20:23] <loicd> jcsp1: disclaimer : this is a user meeting and I'm deliberately leaning on the passive consumer side of things, expressing frustrations and wishes as clearly as possible ;-)
[20:23] <jcsp1> er, this is a meeting?
[20:24] <mourgaya> jcsp1: yes
[20:24] <loicd> jcsp1: yes, it's the Ceph User Committee meeting and the first order of business is Calamari (of course ;-)
[20:24] <jcsp1> oh, right. sorry guys, I thought we were just on normal IRC (wondered why it was so busy)
[20:25] <loicd> we're happy to have you as a guest star ;-)
[20:25] <loicd> regarding calamari use cases, I'm not sure I have one yet. How about you mourgaya ?
[20:26] <mourgaya> no use case!
[20:26] * varde (~varde@95.68.84.133) has left #ceph
[20:26] <loicd> Vacum: ?
[20:26] <Vacum> I'd love to be able to "drill" down on defective OSDs further than its number
[20:27] <ingard> ah hehe i didnt realize it was a meeting either :) I was happy how active everyone suddenly was :)
[20:27] <ingard> hehe
[20:27] <loicd> Maybe I will use it to show to people. And tell them : this is ceph. So they get an image associated to the concept. And not just the logo ;-)
[20:27] <Vacum> i.e. have Calamari already show the respective /dev/sdX device
[20:27] <loicd> ingard: ahaha
[20:27] * scuttlemonkey (~scuttlemo@c-107-5-193-244.hsd1.mi.comcast.net) has joined #ceph
[20:27] * ChanServ sets mode +o scuttlemonkey
[20:27] * Cube (~Cube@12.248.40.138) Quit (Quit: Leaving.)
[20:28] <loicd> Vacum: for monitoring or diagnostic ?
[20:28] * Cube (~Cube@12.248.40.138) has joined #ceph
[20:28] <Vacum> loicd: this goes hand in hand I guess. once an osd goes "red" its time to diagnose
[20:29] <loicd> jcsp1: in Atlanta I talked with someone working on calamari (unfortunately his name escapes me at the moment) and he mentioned possible plans to provide a kind of SaaS based on calamari. Does that ring a bell?
[20:29] <loicd> Vacum: but if you're using nagios for monitoring (for instance), how can you mix this with Calamari ?
[20:29] <mourgaya> sure, but you can't just stay glued to this page, you need plugins for existing tools like shinken
[20:30] <loicd> mourgaya: +1
[20:30] <jcsp1> loicd: The idea of maybe doing SaaS one day influenced the design, yeah - it's one of the reasons all the connections are from ceph servers to calamari and not the other way around, so that it is possible for the calamari server to be in the cloud while the ceph servers are behind a NAT.
[20:30] <mourgaya> I work on this !
[20:30] <Vacum> sounds good
[20:30] <loicd> when I saw the Calamari architecture I thought that nagios/shinken plugins could use the REST API, which seems sensible
[20:30] <jcsp1> loicd: that is our thinking too.
[20:31] <loicd> mourgaya: you currently work on a shinken plugin ?
[20:31] <Vacum> loicd: would that be necessary?
[20:31] <loicd> Vacum: a nagios/shinken plugin you mean ?
[20:31] <mourgaya> yes, I will show you soon
[20:32] <Vacum> loicd: it seems my question has been obsoleted even before I asked it :)
[20:32] <mourgaya> but not directely with calamari, it will be the next stage
[20:32] <loicd> jcsp1: what would be the use case for a SaaS calamari ? I mean, now that it's Free Software and anyone can install it locally. If it was proprietary, I get it.
[20:32] * Cube (~Cube@12.248.40.138) Quit ()
[20:32] * Cube (~Cube@12.248.40.138) has joined #ceph
[20:32] <loicd> mourgaya: cool
[20:33] * yguang11 (~yguang11@180.78.225.41) has joined #ceph
[20:33] <mourgaya> Vacum: Vacum: loicd: loicd: loicd:
[20:33] <jcsp1> loicd: perhaps for someone who does not want an extra server in their rack for the monitoring machine, or the need to install anything. same reason we have free software SaaS like readthedocs.org and travis-ci
[20:33] <Vacum> mourgaya: yes? :)
[20:34] <loicd> mourgaya: it's great to have jcsp1 with us today. Last time scuttlemonkey was around also. It looks like this works best when someone involved is present to bounce ideas and answer questions. That was not the original plan for these kind of user meetings but it's an interesting development ;-)
[20:34] <iggy> there are lots of SaaS monitoring tools out there
[20:34] <iggy> because small companies often don't have the talent/manpower/time to set all that stuff up
[20:34] <loicd> iggy: I did not know that ! (I know nothing of the proprietary world ;-) do you have links of some of them ?
[20:34] <mourgaya> loicd: +
[20:34] <jcsp1> happy to help, let me also mention that folks interested in the calamari issues should definitely join us on the ceph-calamari mailing list too.
[20:35] <ingard> if you can find the money to buy loads of hw for running a big cluster then you'll have probably dedicated monitoring hw as well
[20:35] <ingard> my 2c
[20:35] <mourgaya> iggy: I'm not sure that ceph in a SaaS is a good idea right now
[20:35] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[20:36] <Vacum> as long as that SaaS software a) is read-only and b) only shows non-business-critical information it's fine. but imagine your PB cluster being remotely damaged through a security incident in the cloud based SaaS software
[20:36] <loicd> jcsp1: good point. Do you know of an "operation oriented saas" ? I mean readthedoc is publishing, travis-ci is dev.
[20:36] <iggy> I'm sure there are people that say ceph (period) is a good idea right now
[20:36] <mourgaya> jcsp1: is it #ceph-calarmai ?
[20:36] <jcsp1> well, I am currently wearing a t-shirt from newrelic.com, they are popular. various others exist aimed at cloud users.
[20:36] <loicd> ingard: jcsp1 has a point. travis-ci is used by people who have dev infrastructure but don't want to bother with yet another service to maintain.
[20:37] <iggy> *is not
[20:37] <jcsp1> mourgaya: the list is http://lists.ceph.com/listinfo.cgi/ceph-calamari-ceph.com
[20:37] <ingard> oh yeah for sure, ppl will probably use it :)
[20:37] <ingard> crazy as it sounds (to me)
[20:37] <loicd> :-P
[20:38] <loicd> time flies ! mourgaya should we move to the next topic ?
[20:38] <ingard> i'm super happy that there are diamond/graphite stuff in this project
[20:38] <ingard> it fits very nicely with what i've got already :)
[20:38] <loicd> ahahah
[20:38] <Vacum> same here :)
[20:39] <mourgaya> so next topics guys!
[20:39] <mourgaya> ceph-brag?
[20:39] <loicd> yes
[20:39] <loicd> this has been released with firefly
[20:40] <mourgaya> loicd: can you explain the idea of ceph brag?
[20:40] <scuttlemonkey> loicd: sorry, was hoping to make it but airports are stupid :P
[20:40] <loicd> but I've not heard much user stories
[20:40] <loicd> scuttlemonkey: here is the man of the hour ! welcome !
[20:40] <mourgaya> scuttlemonkey: welcome
[20:40] * bdonnahue (~James@24-148-64-18.c3-0.mart-ubr2.chi-mart.il.cable.rcn.com) has joined #ceph
[20:40] <scuttlemonkey> wont be on for long...but yeah, ceph-brag hasn't been publicized very well yet
[20:40] <Vacum> imo a tool that should _definitely_ be disabled by default. and enabling should need a --yes-i-really-really-mean-it switch.
[20:40] <scuttlemonkey> we should figure out how we want to do that
[20:40] <loicd> mourgaya: in a nutshell it's a command line tool that will push some stats about your ceph cluster to brag.ceph.com
[20:41] <scuttlemonkey> vacum: agreed
[20:41] <loicd> Vacum: it is *not* enabled in any way
[20:41] <mourgaya> Vacum: what a great option!
[20:41] <scuttlemonkey> but it's a "run this command to send stuff"
[20:41] <loicd> http://brag.ceph.com/
[20:41] <scuttlemonkey> not an automated thing
[20:41] <bdonnahue> hello everyone. im trying to mount cephFS but having trouble. it was working yesterday and then froze. i rebooted the client but no luck
[20:41] <loicd> as you can see there are tons of people using it...
[20:41] <Vacum> yes. still it should need some failsafe so that simply calling it once won't send out stuff
[20:41] <bdonnahue> my cluster has a ntp warning but that never broke things before
[20:41] <Vacum> loicd: :D
[20:41] <bdonnahue> the ceph-fuse keeps hanging while starting the client
[20:42] <loicd> from a user perspective this would be a way for me to see what kinds of deployments are out there
[20:42] <mourgaya> bdonnahue: there is a ceph meetup on this channel, can you wait for this?
[20:42] <Vacum> yes, it might give interested parties an insight into what is already out there, how many users have at least tested it
[20:42] <loicd> bdonnahue:: we're having a meeting right now and it will last one hour (we're 42 minutes in). do you mind waiting until it is complete ?
[20:43] <bdonnahue> sure sorry guys
[20:43] <bdonnahue> thanks
[20:43] * diegows is first in the line :)
[20:43] <loicd> bdonnahue: no worries ;-)
[20:43] <mourgaya> loicd: why collect these stats?
[20:44] * loicd reads https://wiki.ceph.com/Planning/Blueprints/Firefly/Ceph-Brag once more
[20:44] <janos_> mourgaya, i would imagine it helps with a few things - some sense of the size of deployments, performance, perf issues possibly, community-building
[20:44] <loicd> Ceph-brag is going to be an anonymized cluster reporting tool designed to collect a "registry" of Ceph clusters for community knowledge. This data will be displayed on a public web page using UUID by default, but users can claim their cluster and publish information about ownership if they so desire.
[20:44] <janos_> nothing like some friendly competition to drive improvement!
[20:44] * bandrus (~Adium@66-87-118-238.pools.spcsdns.net) Quit (Quit: Leaving.)
[20:44] <loicd> well, that does not answer your question mourgaya
[20:45] <iggy> "because people like stats"
[20:45] <janos_> ^^
[20:45] <loicd> I guess I'd be happy to have the biggest cluster....
[20:45] <loicd> it's unlikely though
[20:45] <loicd> iggy: maybe so :-)
[20:45] <mourgaya> iggy: sure, but with a history it will be a great tool!
[20:45] <janos_> will brag be capable of being a general perf/health tool for the cluster?
[20:45] <Vacum> me too, but my employer might not be happy if I shared that information to the world
[20:45] <loicd> bottom line is : ceph-brag, why not but ... why ?
[20:46] <iggy> I'm going to deploy a 1000000 node cluster on AWS for 5 minutes... wonder how much that'll cost me
[20:46] <loicd> leseb: are you around by any chance ?
[20:46] <loicd> iggy: ahahah
[20:47] * haomaiw__ (~haomaiwan@124.161.78.105) Quit (Remote host closed the connection)
[20:47] * haomaiwang (~haomaiwan@124.248.205.17) has joined #ceph
[20:47] <loicd> mourgaya: I guess it concludes the ceph-brag topic. Interesting development. For some reason I thought people would be interested in bragging about their cluster. But it turns out that although I'm interested in doing so in theory, in practice I don't have much incentive. My cluster is small (18 osds)
[20:47] <mourgaya> who is in charge of brag.ceph.com?
[20:47] * yguang11 (~yguang11@180.78.225.41) Quit ()
[20:47] <loicd> mourgaya: it's hosted by the Free Software Foundation France
[20:47] <Vacum> on the other hand, if I could show our C-Level that a competitor's cluster is larger, perhaps I get more machines? :)
[20:48] <loicd> development has been done by Babu Shanmugam mostly
[20:48] <loicd> sysadmin was done by leseb and myself
[20:48] <loicd> Vacum: :-)
[20:49] <mourgaya> for the next topic : can we speak about wiki?
[20:49] <loicd> note that brag.ceph.com runs on a virtual machine using a ceph disk (of course ;-)
[20:49] <loicd> mourgaya: ok, I think scuttlemonkey is all ears ;-)
[20:50] <mourgaya> loicd: so you can add calamari on this machine!
[20:50] <loicd> unless his plane had to take off
[20:50] <mourgaya> loicd: that is why I propose this topic!
[20:50] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[20:51] <loicd> I'm generally happy with wiki.ceph.com although its interface is unlike anything I'm used to
[20:51] <loicd> the only thing I can't quite figure out is links / URLs
[20:52] <loicd> which is the reason why https://wiki.ceph.com/Community/Meetings looks weird ;-)
[20:52] * steki (~steki@cable-94-189-165-169.dynamic.sbb.rs) has joined #ceph
[20:53] <mourgaya> :-)
[20:53] <loicd> we have 7 minutes left
[20:53] <mourgaya> I think we can add a job section in wiki
[20:53] <loicd> mourgaya: +1
[20:53] <mourgaya> with region of course
[20:54] <loicd> it's currently on http://ceph.com/community/careers/
[20:54] <loicd> but it can only be edited by you and a few other people, which is not convenient
[20:54] <mourgaya> we really need a proper main page for that
[20:54] * JayJ (~jayj@157.130.21.226) has joined #ceph
[20:55] <loicd> I wonder if the Cisco offer still stands
[20:55] <mourgaya> loicd: no idea!
[20:55] * BManojlovic (~steki@cable-94-189-165-169.dynamic.sbb.rs) Quit (Ping timeout: 480 seconds)
[20:56] <loicd> I should have asked when at the openstack summit in atlanta. They seemed to be recruiting so my gut feeling is that the position is still open.
[20:56] <loicd> mourgaya: would you like to create the job page and copy the job offers there ?
[20:57] <mourgaya> loicd: I'm not sure it is flexible
[20:57] <loicd> ok
[20:57] <iggy> is this going to be a general "work with ceph" list or a "get hired by inktank/RH" list?
[20:58] * loicd just sent a mail to Don Talton asking if the position is still open
[20:58] <loicd> iggy: very good point !
[20:58] <mourgaya> iggy: you mean redhat/RH?
[20:58] <loicd> ahahah
[20:59] <diegows> "work with ceph" +1 :P
[20:59] <loicd> Red Hat RH is RH RH
[20:59] <mourgaya> :-)
[20:59] * loicd apologizes for this french pun on word (Human Resources spells Ressources Humaines in french ;-)
[21:00] <mourgaya> private joke!
[21:00] <Vacum> I have a question to the last agenda topic "CDS G/H". Will it be possible to simply join (passively) the video streams as last time? and use IRC for questions on-the-fly?
[21:00] <Vacum> or is it a must to register first on https://www.eventbrite.com/e/ceph-developer-summit-gh-day-1-tickets-11808540663
[21:00] <Vacum> ?
[21:00] <loicd> scuttlemonkey: is working on a solution that will allow wider participation to the CDS (i.e. not hangout)
[21:00] <Vacum> sounds great!
[21:01] <loicd> and in any case, yes, it will always be possible to join passively, I'm sure
[21:01] <loicd> mourgaya: we're running out of time, unfortunately :-(
[21:01] <Vacum> I also have an idea for a blueprint, which you loicd might find interesting :)
[21:01] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[21:01] <iggy> if it's a "work with ceph" thing, maybe it should also have a link to a ceph_careers@RH page (just so it's obvious that there's a difference, and maybe to get qualified people to apply for the right positions)
[21:01] <loicd> Vacum: let's move this to #ceph-devel if you'd like ?
[21:01] <Vacum> loicd: sure!
[21:01] <loicd> cool
[21:01] <mourgaya> loicd: is there a proposal to have asynchronous replication at the rados level?
[21:01] * rturk is now known as rturk|afk
[21:01] * gregsfortytwo (~Adium@129.210.115.14) Quit (Quit: Leaving.)
[21:02] <loicd> mourgaya: not that I know
[21:02] <Vacum> mourgaya: yes, yes, we would definitely like this! cross data center :)
[21:02] <loicd> mourgaya: thanks for holding this meeting :-) And even more for digesting the logs into a readable executive summary. :-P
[21:02] <mourgaya> Vacum: ++++++++++
[21:03] <diegows> can we ask now? :)
[21:03] <mourgaya> diegows: yes I think
[21:03] <mourgaya> thanks everybody!
[21:03] <loicd> \o/
[21:04] <diegows> well, I would like to know if there is a way to have a cluster with 2 osds in firefly? It was possible in emperor
[21:04] * loicd steps outside wondering how one full hour can fly so quickly
[21:04] <diegows> I've been reading the docs but I can't find where that parameter is
[21:04] <janos_> diegows, i think the default replication size in firefly changed to 3. change it back to 2 ;)
[21:05] <janos_> it's in the crush map
[21:05] <diegows> osd pool default size in global?
[21:05] * aldavud (~aldavud@213.55.184.222) has joined #ceph
[21:05] <janos_> hrrmmm. not sure on that
[21:05] <diegows> I don't see a "3" in the crush map :)
[21:05] <janos_> but it sounds like that's the nature of the problem though - replication size
[21:06] <diegows> yes, but the parameter says "osd pool..." and this parameter is not related to a pool... I think... but trying
[21:08] <iggy> you'll also have to change the default pools replication size if you've already started the cluster
[21:09] <diegows> no, it's new
[21:09] <diegows> and I can destroy it if it's required :)
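A sketch of the ceph.conf side of this for a fresh two-OSD cluster; these defaults only apply to pools created after the change, and existing pools have to be changed per pool with ceph osd pool set, as comes up later in the discussion:

    # /etc/ceph/ceph.conf, [global] section, before creating the pools
    osd pool default size = 2
    osd pool default min_size = 1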
[21:09] * JeanMax (~oftc-webi@vsr56-1-82-246-44-153.fbx.proxad.net) has joined #ceph
[21:10] <ch4os> i've got an unbalanced cluster, 10 osds, some have 83% hdd usage, some 66%. it's probably due to a low number of pgs (http://wklej.org/hash/e498ecb72ed/) - 128.. it should be at least 512, am i right? (10 osds, replica 2). i wonder how safe "ceph osd pool set <poolname> pg_num <numpgs>" is with ceph 0.72
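The commonly cited heuristic (a rule of thumb, not a hard limit) is (OSDs x 100) / replicas, rounded up to the next power of two, so (10 x 100) / 2 = 500, rounded up to 512 here. pg_num can be increased (never decreased) on a live pool, and pgp_num has to follow before the data actually rebalances:

    ceph osd pool set <poolname> pg_num 512
    ceph osd pool set <poolname> pgp_num 512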
[21:12] <diegows> that parameter didn't help
[21:12] <diegows> :(
[21:13] <diegows> janos_, do you have how to check that in the crush map?
[21:14] <janos_> it's been a while. lemme see what i can dig up
[21:14] <iggy> bdonnahue: did you try fixing the time? maybe you've passed some threshold that went from warning to error
[21:14] <diegows> I've decompiled it but I not sure where it is
[21:14] <iggy> pastebin it?
[21:14] <diegows> iggy, me?
[21:14] <iggy> yeah
[21:15] * gregsfortytwo (~Adium@129.210.115.14) has joined #ceph
[21:18] * mourgaya (~kvirc@41.0.206.77.rev.sfr.net) Quit (Quit: KVIrc 4.1.3 Equilibrium http://www.kvirc.net/)
[21:19] <seapasulli> this is just an odd question, but uploading a 150GB file to my ceph cluster takes 6 minutes while deleting that file from the cluster takes 45 minutes. The cluster says it's healthy but I have no idea why it takes so long to delete
[21:23] * JayJ (~jayj@157.130.21.226) Quit (Quit: Computer has gone to sleep.)
[21:25] * JayJ (~jayj@157.130.21.226) has joined #ceph
[21:26] <diegows> http://paste.ubuntu.com/7603607/
[21:26] <alfredodeza> KB: ping
[21:26] <diegows> iggy, janos_ sorry... phone call... paste http://paste.ubuntu.com/7603607/
[21:27] <janos_> np, i've been busy with work
[21:28] <Vacum> diegows: already created pools won't change their replication size if you change the default value
[21:28] <janos_> ah, i was wondering. i'm not seeing anything off in that crushmap
[21:30] <diegows> Vacum, but I don't understand... the cluster is degraded because there are issues with the pools?
[21:30] <diegows> by that theory creating a new one would fix the issue
[21:30] <Vacum> diegows: did you paste the output of ceph -s already somewhere?
[21:30] <Vacum> diegows: you can change the replica count of an existing pool too. (although you shouldn't do that if the pool is large already :) )
[21:30] <diegows> http://paste.ubuntu.com/7603708/
[21:30] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[21:31] <Vacum> diegows: ceph osd pool set {pool-name} size 2
[21:31] <Vacum> diegows: and probably min_size too
[21:32] <Vacum> diegows: to see all pools, ie run ceph df
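Spelled out for the pools a fresh firefly install created by default at the time (data, metadata, rbd), with values matching a two-OSD, replica-2 setup:

    for pool in data metadata rbd; do
        ceph osd pool set $pool size 2
        ceph osd pool set $pool min_size 1
    done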
[21:32] <Vacum> diegows: are those 2 OSDs on the same machine?
[21:32] * gregsfortytwo (~Adium@129.210.115.14) Quit (Quit: Leaving.)
[21:32] <diegows> no
[21:32] <diegows> two boxes
[21:32] <Vacum> diegows: ok, good :) otherwise you would need to change the default crush rule :)
[21:32] <diegows> fresh installation, default pools only
[21:33] <diegows> I've created a new one to check if something changes but nothing
[21:33] <Vacum> diegows: ok. then you can just change their size
[21:33] <Vacum> diegows: you should at least see more pgs in the output of ceph -s ?
[21:34] <diegows> no
[21:34] <Vacum> diegows: are those OSDs on top of a raid?
[21:34] <diegows> yes
[21:34] <Vacum> why?
[21:34] <diegows> hw raid
[21:34] * bandrus (~Adium@66.87.119.184) has joined #ceph
[21:34] <diegows> because the client left the server configured that way, he's an asshole and I don't want to discuss it with them anymore :)
[21:34] <Vacum> I suggest you start over with that and get rid of those raid sets. use the HDDs as is
[21:34] <janos_> hahaha
[21:35] <diegows> BTW, this client helps me to learn about ceph, which is the most important thing
[21:35] <diegows> after that, good bye lol
[21:37] <diegows> well, changing min_size and size on all the default pools
[21:37] <diegows> changed something in the ceph -s output
[21:37] <diegows> :)
[21:37] <Vacum> good :)
[21:38] <diegows> HEALTH_WARN 192 pgs stuck unclean
[21:38] <diegows> but 192 active
[21:38] <Vacum> :)
[21:38] <diegows> I wanna see HEALTH_OK now :)
[21:39] <iggy> some people have mentioned having to restart the OSDs to get it to "click"
[21:39] <iggy> although I think just allowing some time to pass might do it
[21:40] <janos_> is there any activity in ceph -w?
[21:41] <diegows> hmm, no
[21:41] <diegows> nothing changes
[21:42] <diegows> is there a way to recover that?
[21:43] <iggy> seems like this default change has caused a lot of headaches and maybe should have been considered a little harder by the devs
[21:43] <diegows> my issue?
[21:43] <iggy> I've seen multiple people per day in here asking about it since the last release
[21:43] <iggy> yes
[21:44] <diegows> hey, restarted and now HEALTH_OK :)
[21:44] <janos_> especially since repl 2 is considered very safe
[21:44] <diegows> yes, and with raid... really safe :P
[21:44] <janos_> this change seems like a vote of no-confidence on that
[21:45] <diegows> cluster running... nice
[21:45] <diegows> I love ceph
[21:46] <diegows> can I use the default pools?
[21:46] <diegows> or they have something special
[21:46] <diegows> data for example
[21:46] <janos_> they should be fine
[21:46] * mdjp (~mdjp@2001:41d0:52:100::343) has joined #ceph
[21:46] <iggy> at the very least, it should be easier to diagnose and recover from this state (i.e. warning in the logs that there aren't enough OSDs to satisfy repl=3, bonus if it links to a page that tells how to fix it without having to restart parts of the cluster)
[21:46] <janos_> yeah
[21:50] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 29.0.1/20140506152807])
[21:50] * ch4os (~ch4os@153.19.13.10) Quit (Read error: Connection reset by peer)
[21:52] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[21:54] * allsystemsarego (~allsystem@188.27.188.69) Quit (Quit: Leaving)
[21:57] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[21:57] * markbby (~Adium@168.94.245.1) Quit (Quit: Leaving.)
[21:59] * Defcon_102KALI_LINUX (~root@94.41.80.135.dynamic.ufanet.ru) has joined #ceph
[22:02] * markbby (~Adium@168.94.245.3) has joined #ceph
[22:03] * Defcon_102KALI_LINUX (~root@94.41.80.135.dynamic.ufanet.ru) Quit (Remote host closed the connection)
[22:05] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) Quit (Quit: Quitte)
[22:06] * garibaldi (~oftc-webi@h216-165-139-220.mdsnwi.dedicated.static.tds.net) has joined #ceph
[22:07] * aldavud (~aldavud@213.55.184.222) Quit (Ping timeout: 480 seconds)
[22:09] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[22:10] * bcundiff_ (~oftc-webi@h216-165-139-220.mdsnwi.dedicated.static.tds.net) has joined #ceph
[22:14] <bcundiff_> Hey, I'm having some issues with rbd create on a Ubuntu 12.04 x86_64 server with Firefly. The process seems to hang indefinitely when trying to make an rbd device. I'm following the Storage Cluster Quick Start and Block Device Quick Start guides. I didn't get any errors when running the ceph-deploy commands. The ceph-osd-all service is listed as running. Any suggestions?
[22:15] * neurodrone (~neurodron@static-108-30-171-7.nycmny.fios.verizon.net) Quit (Quit: neurodrone)
[22:16] * ghartz_ (~ghartz@ip-68.net-80-236-84.joinville.rev.numericable.fr) has joined #ceph
[22:16] * mtanski (~mtanski@65.107.210.227) has joined #ceph
[22:16] <alfredodeza> Hi all, there is a new ceph-deploy out (1.5.4) that addresses the EPEL packaging issues we've had
[22:16] <alfredodeza> KB ^ ^
[22:18] <seapasulli> bcundiff_: how is your cluster health, and how much free space is on your /var/log disk? That happened to me too and it turned out my debug logging had filled my disks :)
[22:19] <seapasulli> i was of course having tons of other issues so i failed to notice the failed disks.
[22:19] <seapasulli> or full disks *
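For anyone who hits the same thing, the check is just space on the log partition, and debug levels can be lowered on running daemons without a restart; a sketch assuming the default /var/log/ceph location:

    df -h /var/log            # free space on the log partition
    du -sh /var/log/ceph      # how much of it is Ceph logs

    # quiet down noisy OSD debug logging at runtime
    ceph tell osd.* injectargs '--debug-osd 1 --debug-ms 0'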
[22:19] * bandrus (~Adium@66.87.119.184) Quit (Ping timeout: 480 seconds)
[22:20] * rotbeard (~redbeard@2a02:908:df11:9480:76f0:6dff:fe3b:994d) has joined #ceph
[22:21] * bandrus (~Adium@66-87-118-55.pools.spcsdns.net) has joined #ceph
[22:22] <bcundiff_> @seapasulli: I have plenty of disk space on the node's drive.
[22:22] <cephalobot`> bcundiff_: Error: "seapasulli:" is not a valid command.
[22:22] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[22:22] <seapasulli> bcundiff_: I should have asked. Do other rbd commands work? ie can you list objects or anything like that?
[22:22] <bcundiff_> When I run ceph-deploy admin ceph-admin, the command goes through fine (no errors) but doing ceph health fails
[22:23] <bcundiff_> Trying to run ceph on the ceph admin node gives
[22:23] <bcundiff_> 2014-06-06 15:22:54.184842 7f66a3e1e700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication 2014-06-06 15:22:54.184843 7f66a3e1e700 0 librados: client.admin initialization error (2) No such file or directory Error connecting to cluster: ObjectNotFound
[22:23] <bcundiff_> So, I'm obviously missing something
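That "missing keyring" error usually just means there is no readable /etc/ceph/ceph.client.admin.keyring on the node where ceph is being run. Following the quick start, the usual fix is along these lines, where ceph-admin stands for whatever hostname was passed to ceph-deploy:

    ceph-deploy admin ceph-admin                          # push ceph.conf and the admin keyring
    sudo chmod +r /etc/ceph/ceph.client.admin.keyring     # on that node, make the keyring readable
    ceph health                                           # should now reach the monitors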
[22:24] * gregsfortytwo (~Adium@129.210.115.14) has joined #ceph
[22:25] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[22:28] * bandrus1 (~Adium@66-87-118-138.pools.spcsdns.net) has joined #ceph
[22:28] * bandrus (~Adium@66-87-118-55.pools.spcsdns.net) Quit (Read error: Operation timed out)
[22:30] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[22:31] * bandrus (~Adium@66.87.118.235) has joined #ceph
[22:32] <bdonnahue> hey guys im trying to mount cephfs with ceph-fuse and having an issue
[22:33] <bdonnahue> the client has always worked in the past. today however, it seems to hang
[22:33] <bdonnahue> anyone know what logs to look at or maybe how to troubleshoot?
[22:33] <bdonnahue> the mds / osds are online, ceph health is warn from clock skew but thats it
[22:33] <iggy> bdonnahue: do you have the ability to try the kernel module instead of ceph-fuse?
[22:34] <bdonnahue> i do but i wanted to test out cephfs instead of rbd
[22:34] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[22:35] <bdonnahue> its been working for months until today :(
[22:35] <bdonnahue> all pgs are active and clean. i cant think of why it would fail. should i take down the whole cluster and bring it back online?
[22:35] * JeanMax (~oftc-webi@vsr56-1-82-246-44-153.fbx.proxad.net) Quit (Quit: Page closed)
[22:36] * bandrus1 (~Adium@66-87-118-138.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[22:38] * bandrus1 (~Adium@66-87-118-86.pools.spcsdns.net) has joined #ceph
[22:38] <bdonnahue> 50.4.50.103:6927/4672 pipe(0x7f46a405f370 sd=9 :0 s=1 pgs=0 cs=0 l=0 c=0x7f46a405eeb0).fault
[22:38] <bdonnahue> that shows in my log
[22:38] <iggy> I meant the cephfs kernel module
[22:38] <iggy> not rbd
[22:38] * dignus (~jkooijman@t-x.dignus.nl) Quit (Quit: Changing server)
[22:39] * bandrus (~Adium@66.87.118.235) Quit (Ping timeout: 480 seconds)
[22:41] <bdonnahue> hmm i dont have that module installed
[22:41] <bdonnahue> is it easy to install?
[22:42] <bdonnahue> i can telnet the mds on port 6789
[22:42] <bdonnahue> im not sure why the fuse client is failing
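One way to see where ceph-fuse is getting stuck is to run it in the foreground with client debugging turned up; a rough sketch, with the monitor address and mount point as placeholders and assuming ceph-fuse accepts the usual ceph debug options:

    sudo umount /mnt/cephfs 2>/dev/null      # clear any half-mounted state first
    sudo ceph-fuse -d -m <mon-host>:6789 /mnt/cephfs \
        --debug-client 20 --debug-ms 1       # -d keeps it in the foreground, logging to stderr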
[22:46] * bandrus1 (~Adium@66-87-118-86.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[22:46] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) Quit (Quit: Leaving.)
[22:47] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) has joined #ceph
[22:47] <saturnine> Have there been deb packages built for the calamari server yet?
[22:49] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[22:49] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) has joined #ceph
[22:53] <iggy> bdonnahue: it's been included in the mainline kernel since before rbd, so unless you are running some ancient kernel, you should have it
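For comparison, the kernel-client mount iggy is suggesting looks roughly like this; the monitor address and the secret file (containing the key from ceph.client.admin.keyring) are placeholders:

    sudo mkdir -p /mnt/cephfs
    sudo mount -t ceph <mon-host>:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret

If the kernel mount works while ceph-fuse hangs, the problem is on the fuse-client side; if both hang, it points back at the cluster or the MDS.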
[22:54] * ganders (~root@200-127-158-54.net.prima.net.ar) Quit (Quit: WeeChat 0.4.1)
[22:54] * allig8r (~allig8r@128.135.219.116) Quit (Read error: Connection reset by peer)
[22:56] * rendar (~I@host19-176-dynamic.20-87-r.retail.telecomitalia.it) has joined #ceph
[22:57] * allig8r (~allig8r@128.135.219.116) has joined #ceph
[23:00] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[23:02] * Ronald_ (~oftc-webi@5ED41764.cm-7-5a.dynamic.ziggo.nl) has joined #ceph
[23:04] * bandrus (~Adium@66-87-118-127.pools.spcsdns.net) has joined #ceph
[23:09] * b0e (~aledermue@p5481F849.dip0.t-ipconnect.de) has joined #ceph
[23:12] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) Quit (Remote host closed the connection)
[23:14] * m0e (~Moe@41.45.234.191) has joined #ceph
[23:21] * b0e (~aledermue@p5481F849.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[23:22] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[23:22] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) Quit (Quit: Leaving)
[23:23] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[23:23] * ChanServ sets mode +v andreask
[23:23] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has left #ceph
[23:28] * m0e (~Moe@41.45.234.191) Quit (Quit: This computer has gone to sleep)
[23:29] * nwat (~textual@eduroam-240-162.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[23:29] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[23:29] * bandrus (~Adium@66-87-118-127.pools.spcsdns.net) Quit (Quit: Leaving.)
[23:30] * rendar (~I@host19-176-dynamic.20-87-r.retail.telecomitalia.it) Quit ()
[23:32] * JCL1 (~JCL@c-24-23-166-139.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[23:32] * JCL (~JCL@2601:9:5980:39b:2d11:c6cb:bf1b:96e1) has joined #ceph
[23:33] * scuttlemonkey (~scuttlemo@c-107-5-193-244.hsd1.mi.comcast.net) Quit (Remote host closed the connection)
[23:34] * gregsfortytwo (~Adium@129.210.115.14) Quit (Quit: Leaving.)
[23:34] * scuttlemonkey (~scuttlemo@c-107-5-193-244.hsd1.mi.comcast.net) has joined #ceph
[23:34] * ChanServ sets mode +o scuttlemonkey
[23:34] * ikrstic (~ikrstic@77-46-245-216.dynamic.isp.telekom.rs) Quit (Quit: Konversation terminated!)
[23:43] * JCL (~JCL@2601:9:5980:39b:2d11:c6cb:bf1b:96e1) Quit (Quit: Leaving.)
[23:44] * JCL (~JCL@c-24-23-166-139.hsd1.ca.comcast.net) has joined #ceph
[23:47] * dmsimard is now known as dmsimard_away
[23:49] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[23:51] * sleinen1 (~Adium@2001:620:0:26:cda3:5a03:8565:45ea) has joined #ceph
[23:54] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Read error: Operation timed out)
[23:58] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.