#ceph IRC Log

IRC Log for 2014-07-11

Timestamps are in GMT/BST.

[0:00] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[0:02] * rudolfsteiner (~federicon@190.153.218.195) has joined #ceph
[0:02] * rudolfsteiner (~federicon@190.153.218.195) Quit ()
[0:03] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) has joined #ceph
[0:04] * rudolfsteiner (~federicon@190.153.218.195) has joined #ceph
[0:08] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) has joined #ceph
[0:08] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Remote host closed the connection)
[0:08] * adamcrume (~quassel@2601:9:6680:47:c996:ca64:72ad:eb72) has joined #ceph
[0:10] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) Quit (Read error: Operation timed out)
[0:17] * tdasilva (~quassel@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[0:23] * rudolfsteiner (~federicon@190.153.218.195) Quit (Quit: rudolfsteiner)
[0:23] * PureNZ (~paul@122-62-45-132.jetstream.xtra.co.nz) Quit (Ping timeout: 480 seconds)
[0:33] <angdraug> can anyone here help debug an OSD startup problem?
[0:34] <angdraug> some OSDs are stuck after "journal _open" and do not become active
[0:34] <angdraug> debug osd = 20/20,
[0:34] <angdraug> all we get is: http://pastebin.com/NH4qfPRU
[0:38] <joshd> angdraug: how about with debug filestore = 20 and debug journal = 20 too? I'd guess it's just replaying the journal if it's only been stuck a short while
[0:39] * kevinc (~kevinc__@client65-40.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[0:39] * kevinc (~kevinc__@client65-40.sdsc.edu) has joined #ceph
[0:40] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[0:41] <angdraug> no, it gets stuck for good
[0:41] <angdraug> I'll try with debug filestore and journal
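Collected as a rough ceph.conf sketch, the debug settings from the exchange above might look like this (the `[osd]` section placement is conventional; `20/20` means log level 20 for both the file log and the in-memory buffer):

```ini
# On the affected OSD host; restart the osd, or apply at runtime with
# something like: ceph tell osd.60 injectargs '--debug-osd 20'
[osd]
    debug osd = 20/20
    debug filestore = 20
    debug journal = 20
```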
[0:43] * sjm (~sjm@pool-72-76-115-220.nwrknj.fios.verizon.net) has left #ceph
[0:45] * rotbeard (~redbeard@aftr-37-24-147-15.unity-media.net) Quit (Quit: Leaving)
[0:45] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) Quit (Quit: valeech)
[0:46] <angdraug> http://pastebin.com/05mwkGF1
[0:52] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[0:52] <angdraug> looks like the difference between a good and bad osd starts with:
[0:52] <angdraug> osd.60 0 get_map 0 - return initial 0x23b0380
[0:53] <angdraug> good osd has instead:
[0:54] <angdraug> osd.58 0 get_map 182 - loading and decoding 0x3dba1c0
[0:54] <joshd> yeah, it looks like it's probably having trouble getting the monmap from the monitors
[0:55] <angdraug> anything I can check on the mon to confirm?
[0:55] <joshd> debug ms = 1 and debug monclient = 10 would show those details, and the monitor log with anything about that osd would help
[0:55] <joshd> and of course checking connectivity from that node to the mon
[0:55] <angdraug> connectivity is fine, I see established tcp session from osd to mon
[0:56] * Cube (~Cube@66.87.67.122) Quit (Quit: Leaving.)
[0:56] <angdraug> debug ms and monclient on osd? or both?
[0:56] <joshd> on the osd
[0:57] <joshd> on the mon you could do debug ms = 1 and debug mon = 20
[0:57] * rweeks (~rweeks@pat.hitachigst.com) Quit (Quit: Leaving)
[0:57] <joshd> oh, it's monc instead of monclient
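As a sketch, the messenger and monitor-client debug options suggested above (using the corrected `monc` name) would look like:

```ini
# osd side: log message traffic and monclient activity
[osd]
    debug ms = 1
    debug monc = 10

# mon side: log message traffic and verbose monitor internals
[mon]
    debug ms = 1
    debug mon = 20
```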
[1:04] * zack_dolby (~textual@p67f6b6.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:06] <angdraug> http://pastebin.com/5fQR9CHd
[1:06] <angdraug> all I get on mon about that osd is:
[1:06] <angdraug> mon.node-1@0(leader).osd e187 create-or-move crush item name 'osd.60' initial_weight 0.27 at location {host=node-9,root=default}
[1:11] <el_seano> weee, getting my first little cluster up and running :D
[1:12] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Read error: Connection reset by peer)
[1:12] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[1:12] <el_seano> so, one compromise I've made thus far in order to get Windows systems to be able to see the cluster is to serve it up over smb
[1:13] <el_seano> as it looks like the attempts at native Windows clients have more or less faltered
[1:13] <el_seano> bad idea?
[1:13] * xarses (~andreww@12.164.168.117) Quit (Read error: Operation timed out)
[1:16] * kevinc (~kevinc__@client65-40.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[1:17] * aldavud (~aldavud@213.55.176.137) has joined #ceph
[1:18] * eternaleye (~eternaley@50.245.141.73) Quit (Ping timeout: 480 seconds)
[1:18] * eternaleye (~eternaley@50.245.141.73) has joined #ceph
[1:19] <joshd> angdraug: odd that thread id 7ff57a791700 never logs anything past _maybe_boot mon has osdmaps 1..187
[1:19] <angdraug> found something interesting
[1:19] <joshd> can you attach with gdb and thread apply all bt?
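A sketch of the gdb invocation being asked for here. The pid (12345) is a stand-in for the real ceph-osd process id, so the command is only assembled and echoed rather than run:

```shell
# Stand-in pid for the stuck ceph-osd process (hypothetical value;
# find the real one with e.g. pgrep ceph-osd).
OSD_PID=12345
# Attach non-interactively and dump a backtrace of every thread,
# which shows where each one is blocked.
GDB_CMD="gdb -p $OSD_PID -batch -ex 'thread apply all bt'"
echo "$GDB_CMD"
```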
[1:19] <angdraug> 2014-07-10 22:58:10.411680 7ff57af92700 10 monclient: _check_auth_rotating have uptodate secrets (they expire after 2014-07-10 22:57:40.411677)
[1:19] <angdraug> note how the secrets expire timestamp is less than the current time?
[1:19] <angdraug> is that a problem?
[1:21] <angdraug> 7ff57a791700 has more entries, I just didn't paste everything
[1:21] <angdraug> 2014-07-10 23:01:10.414916 7ff57af92700 10 monclient: renew subs? (now: 2014-07-10 23:01:10.414916; renew after: 2014-07-10 23:03:10.414352) -- no
[1:21] <angdraug> the whole log is almost a megabyte
[1:22] <angdraug> too big for pastebin, I can email it to you if you want
[1:22] <joshd> the auth thing is fine - there's a few sets of keys valid at a time so the rotation works
[1:22] <joshd> sure
[1:23] <angdraug> sent
[1:25] * aldavud (~aldavud@213.55.176.137) Quit (Read error: Operation timed out)
[1:26] <joshd> I'm not seeing any other entries from 7ff57a791700
[1:26] <angdraug> don't have gdb on that system :(
[1:26] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[1:26] <joshd> strace?
[1:27] <angdraug> sure
[1:35] * allig8r (~allig8r@128.135.219.116) Quit (Quit: Leaving)
[1:39] * allig8r (~allig8r@128.135.219.116) has joined #ceph
[1:40] * oms101 (~oms101@p20030057EA048100EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:45] * eternaleye (~eternaley@50.245.141.73) Quit (Ping timeout: 480 seconds)
[1:48] * eternaleye (~eternaley@50.245.141.73) has joined #ceph
[1:49] * oms101 (~oms101@p20030057EA04E000EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[1:49] <angdraug> joshd: sent you strace logs for 2 osd's, one up and one down
[1:53] <joshd> angdraug: I don't see anything obvious in there unfortunately, probably need gdb to figure out why the thread is hanging
[1:56] * JC1 (~JC@AMontpellier-651-1-32-204.w90-57.abo.wanadoo.fr) has joined #ceph
[1:57] <angdraug> you mean monclient thread?
[1:57] <angdraug> I'll see if I can get gdb installed there
[1:58] * aldavud (~aldavud@213.55.176.137) has joined #ceph
[1:58] * shimo (~A13032@60.36.191.146) Quit (Quit: shimo)
[1:58] <joshd> the osd thread that calls the monclient, yeah
[1:59] <joshd> the mon logs with debug ms 1 and debug mon 20 might have something interesting too
[2:03] * JC (~JC@AMontpellier-651-1-32-204.w90-57.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[2:04] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[2:05] * zack_dolby (~textual@e0109-114-22-18-10.uqwimax.jp) has joined #ceph
[2:07] * aldavud (~aldavud@213.55.176.137) Quit (Ping timeout: 480 seconds)
[2:09] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[2:12] <angdraug> gdb backtrace:
[2:12] <angdraug> http://pastebin.com/aX9Jm1tK
[2:13] * shimo (~A13032@60.36.191.146) has joined #ceph
[2:13] * sarob (~sarob@ip-64-134-229-56.public.wayport.net) has joined #ceph
[2:18] * JC1 (~JC@AMontpellier-651-1-32-204.w90-57.abo.wanadoo.fr) Quit (Quit: Leaving.)
[2:19] <angdraug> emailed you mon log with debug ms 1 and debug mon 20
[2:20] * drankis (~drankis__@89.111.13.198) Quit (Read error: Connection reset by peer)
[2:21] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) Quit (Remote host closed the connection)
[2:23] * [fred] (fred@earthli.ng) Quit (Ping timeout: 480 seconds)
[2:26] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[2:27] * sarob (~sarob@ip-64-134-229-56.public.wayport.net) Quit (Remote host closed the connection)
[2:27] * sarob (~sarob@ip-64-134-229-56.public.wayport.net) has joined #ceph
[2:28] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[2:30] * [fred] (fred@earthli.ng) has joined #ceph
[2:32] <joshd> angdraug: the mon log shows no contact with the osd after its initial osdmap request, and the backtrace is mainly threads waiting, with one reading from the network
[2:33] <joshd> does netstat -ntp | grep <osd_pid> show anything odd, like long queues?
[2:35] * sarob (~sarob@ip-64-134-229-56.public.wayport.net) Quit (Read error: Operation timed out)
[2:35] <angdraug> nope, queues are empty
[2:38] * eternaleye (~eternaley@50.245.141.73) Quit (Ping timeout: 480 seconds)
[2:39] * eternaleye (~eternaley@50.245.141.73) has joined #ceph
[2:40] * aldavud (~aldavud@213.55.176.137) has joined #ceph
[2:43] <angdraug> hm, live strace on osd process shows a lot of send and recv calls on the fd of its tcp connection to mon
[2:43] <angdraug> how come mon is responding, but isn't writing anything to the log?
[2:45] * aldavud_ (~aldavud@213.55.176.137) has joined #ceph
[2:45] <joshd> is the mon sending things on its end?
[2:46] <joshd> perhaps a different mon?
[2:47] * kevinc (~kevinc__@client65-40.sdsc.edu) has joined #ceph
[2:47] <angdraug> weird
[2:47] <angdraug> mons are on node-1, node-2, node-6
[2:48] <angdraug> netstat on osd shows established connection to node-2
[2:48] <angdraug> netstat on node-2 shows no connections to osd
[2:48] <angdraug> netstat on node-1 shows established connection to osd!
[2:49] * Hell_Fire__ (~HellFire@123-243-155-184.static.tpgi.com.au) Quit (Read error: Connection reset by peer)
[2:49] * Hell_Fire__ (~HellFire@123-243-155-184.static.tpgi.com.au) has joined #ceph
[2:49] <joshd> that's quite strange
[2:49] <angdraug> netstat on node-6 shows 4 sessions with osd!
[2:50] * aldavud (~aldavud@213.55.176.137) Quit (Ping timeout: 480 seconds)
[2:50] <joshd> really weird that this reproduces too
[2:56] <angdraug> I think I'm being stupid
[2:56] <angdraug> some OSDs on node-9 are alive
[2:56] * kevinc (~kevinc__@client65-40.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[2:57] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[3:00] * yuriw1 (~Adium@c-76-126-35-111.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[3:00] <angdraug> I don't think mon is a problem at all
[3:01] * lucas1 (~Thunderbi@222.240.148.130) has joined #ceph
[3:01] <angdraug> 2014-07-10 22:58:07.410559 7ff58e4bf7c0 20 osd.60 0 get_map 0 - return initial 0x269e380
[3:01] <angdraug> ...
[3:01] <angdraug> 2014-07-10 22:58:07.411105 7ff58e4bf7c0 10 monclient(hunting): init
[3:02] <angdraug> there's a difference in the osd logs between the good and bad osd before monclient is even initialized
[3:02] <angdraug> what does it mean, get_map 0?
[3:03] <angdraug> if you didn't guess yet, the network connection inconsistencies were me being stupid
[3:03] <angdraug> I was looking at network sessions of different osd processes from the same node
[3:03] <joshd> get_map 0 means it's looking up osdmap from epoch 0 locally
[3:04] <joshd> then the (hunting) part is it trying to find a monitor to connect to
[3:04] <joshd> ah
[3:04] <joshd> so it does have a connection to the monitor then
[3:06] <angdraug> yup
[3:07] <joshd> the issue is it gets to the _maybe_boot stage, where it will request newer osdmap epochs from the mon if it needs them, then the thread never logs anything, and never gets to _send_boot(), where it tells the monitors it's ready
[3:08] * sage___ (~quassel@gw.sepia.ceph.com) Quit (Remote host closed the connection)
[3:08] <joshd> I'm not sure what would cause the hang at that point other than an odd network issue
[3:09] <angdraug> not sure
[3:09] <angdraug> here's another line:
[3:09] <angdraug> 2014-07-10 22:58:07.419443 7ff57f79b700 10 monclient: handle_get_version_reply finishing 0x26a72a0 version 187
[3:09] <angdraug> 2014-07-10 22:58:07.419493 7ff57a791700 10 osd.60 0 _maybe_boot mon has osdmaps 1..187
[3:09] * narb (~Jeff@38.99.52.10) Quit (Quit: narb)
[3:10] <angdraug> and before that:
[3:10] <angdraug> 2014-07-10 22:58:07.418566 7ff57f79b700 10 monclient: dump:
[3:10] <angdraug> epoch 3
[3:10] * sage__ (~quassel@gw.sepia.ceph.com) has joined #ceph
[3:10] <angdraug> looks like it is trying to get a more recent osdmap from mon
[3:11] <angdraug> or am I misinterpreting this again?
[3:12] <joshd> that's from dumping the monmap
[3:13] <joshd> I've got to get going, but I'll ask around tomorrow if you haven't figured it out
[3:13] <angdraug> thanks for your help so far, see you tomorrow!
[3:13] <joshd> yw, see you
[3:15] * markbby (~Adium@168.94.245.3) has joined #ceph
[3:18] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[3:18] * aldavud_ (~aldavud@213.55.176.137) Quit (Read error: Connection reset by peer)
[3:20] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[3:29] * houkouonchi-work (~linux@12.248.40.138) Quit (Quit: Client exiting)
[3:30] * yuriw (~Adium@c-76-126-35-111.hsd1.ca.comcast.net) has joined #ceph
[3:31] * yuriw (~Adium@c-76-126-35-111.hsd1.ca.comcast.net) Quit ()
[3:33] * huangjun (~kvirc@59.173.185.197) has joined #ceph
[3:33] <huangjun> we always get "common/HeartbeatMap.cc: 79: FAILED assert(0 == "hit suicide timeout")" error
[3:35] <huangjun> does this mean a read/write took longer than expected?
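The assert huangjun quotes comes from Ceph's internal heartbeat map. As a toy illustration of the mechanism (not Ceph's actual code, and the 150-second grace value is made up): each worker thread records a timestamp whenever it makes progress, and a watchdog trips the suicide timeout when a thread stalls past its grace period:

```python
import time

class HeartbeatMap:
    """Toy model of a per-thread heartbeat with a suicide grace period."""

    def __init__(self, suicide_grace):
        self.suicide_grace = suicide_grace  # seconds a thread may stall
        self.last_ping = {}

    def ping(self, thread_name, now=None):
        # Called by the worker whenever it makes progress.
        self.last_ping[thread_name] = now if now is not None else time.time()

    def is_healthy(self, thread_name, now=None):
        # Watchdog check: has the thread pinged within the grace period?
        now = now if now is not None else time.time()
        return now - self.last_ping[thread_name] <= self.suicide_grace

hb = HeartbeatMap(suicide_grace=150.0)
hb.ping("osd_op_thread", now=0.0)
print(hb.is_healthy("osd_op_thread", now=100.0))  # True: within grace
print(hb.is_healthy("osd_op_thread", now=200.0))  # False: stalled too long
```

In the real OSD a stall like this (a read/write that never returns, a deadlock, or a wedged disk) is what turns into the `hit suicide timeout` assert.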
[3:37] * diegows (~diegows@190.190.5.238) Quit (Read error: Operation timed out)
[3:39] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[3:40] * yuriw (~Adium@c-76-126-35-111.hsd1.ca.comcast.net) has joined #ceph
[3:40] * yuriw (~Adium@c-76-126-35-111.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[3:41] * yuriw (~Adium@c-76-126-35-111.hsd1.ca.comcast.net) has joined #ceph
[3:41] * zhaochao (~zhaochao@123.151.134.238) has joined #ceph
[3:43] * aarcane (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) Quit (Quit: Leaving)
[3:44] * dlan (~dennis@116.228.88.131) Quit (Quit: leaving)
[3:45] * dlan (~dennis@116.228.88.131) has joined #ceph
[3:47] * zhaochao (~zhaochao@123.151.134.238) has left #ceph
[3:48] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) has joined #ceph
[3:52] * carter (~carter@li98-136.members.linode.com) Quit (Quit: ZNC - http://znc.in)
[3:52] * carter (~carter@li98-136.members.linode.com) has joined #ceph
[3:53] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) Quit ()
[3:57] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[3:58] * haomaiwa_ (~haomaiwan@118.186.129.94) Quit (Remote host closed the connection)
[3:59] * haomaiwang (~haomaiwan@124.248.205.19) has joined #ceph
[4:02] * adamcrume (~quassel@2601:9:6680:47:c996:ca64:72ad:eb72) Quit (Remote host closed the connection)
[4:04] * zhaochao (~zhaochao@123.151.134.238) has joined #ceph
[4:04] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) has joined #ceph
[4:08] * markbby (~Adium@168.94.245.3) Quit (Ping timeout: 480 seconds)
[4:11] * shimo (~A13032@60.36.191.146) Quit (Quit: shimo)
[4:14] * shimo (~A13032@60.36.191.146) has joined #ceph
[4:17] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[4:20] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[4:30] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[4:35] * haomaiwa_ (~haomaiwan@118.186.129.94) has joined #ceph
[4:37] * haomaiwang (~haomaiwan@124.248.205.19) Quit (Read error: Connection reset by peer)
[4:37] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) Quit (Quit: valeech)
[4:40] * shang (~ShangWu@175.41.48.77) has joined #ceph
[4:47] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) Quit (Quit: Leaving.)
[4:52] * wenjunh (~wenjunh@corp-nat.peking.corp.yahoo.com) has joined #ceph
[4:53] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[5:01] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) has joined #ceph
[5:15] * michalefty (~micha@188-195-129-145-dynip.superkabel.de) has joined #ceph
[5:19] * Cube (~Cube@66-87-67-122.pools.spcsdns.net) has joined #ceph
[5:21] * dlan (~dennis@116.228.88.131) Quit (Quit: leaving)
[5:23] * dlan (~dennis@116.228.88.131) has joined #ceph
[5:23] * koleosfuscus (~koleosfus@adsl-84-226-68-69.adslplus.ch) has joined #ceph
[5:30] * michalefty (~micha@188-195-129-145-dynip.superkabel.de) has left #ceph
[5:30] * Cube (~Cube@66-87-67-122.pools.spcsdns.net) Quit (Quit: Leaving.)
[5:31] * Cube (~Cube@66.87.67.122) has joined #ceph
[5:33] * Cube (~Cube@66.87.67.122) Quit ()
[5:47] * haomaiwa_ (~haomaiwan@118.186.129.94) Quit (Remote host closed the connection)
[5:47] * haomaiwang (~haomaiwan@124.248.205.19) has joined #ceph
[5:50] * vbellur1 (~vijay@122.166.173.41) has joined #ceph
[5:51] * Vacum (~vovo@88.130.204.42) has joined #ceph
[5:54] * vbellur (~vijay@122.167.82.113) Quit (Ping timeout: 480 seconds)
[5:58] * Vacum_ (~vovo@88.130.222.18) Quit (Ping timeout: 480 seconds)
[6:12] * haomaiwa_ (~haomaiwan@118.186.129.94) has joined #ceph
[6:17] * aarcane (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[6:18] * haomaiwang (~haomaiwan@124.248.205.19) Quit (Ping timeout: 480 seconds)
[6:27] * theanalyst (~abhi@49.32.0.101) has joined #ceph
[6:29] * vbellur1 (~vijay@122.166.173.41) Quit (Ping timeout: 480 seconds)
[6:32] * koleosfuscus (~koleosfus@adsl-84-226-68-69.adslplus.ch) Quit (Quit: koleosfuscus)
[6:38] * lucas1 (~Thunderbi@222.240.148.130) Quit (Ping timeout: 480 seconds)
[6:53] * vbellur (~vijay@121.244.87.124) has joined #ceph
[7:03] * yguang11_ (~yguang11@2406:2000:ef96:e:499:b9d2:712a:8577) has joined #ceph
[7:03] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[7:21] * michalefty (~micha@p20030071CF556600889C41B5BC40C3E7.dip0.t-ipconnect.de) has joined #ceph
[7:22] * rdas (~rdas@121.244.87.115) has joined #ceph
[7:28] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[7:33] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Remote host closed the connection)
[7:34] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[7:42] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[7:44] * rendar (~I@host223-182-dynamic.19-79-r.retail.telecomitalia.it) has joined #ceph
[7:48] * runfromnowhere (~runfromno@pool-108-29-25-203.nycmny.fios.verizon.net) Quit (Quit: leaving)
[7:53] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[7:54] * madkiss (~madkiss@p5797A39B.dip0.t-ipconnect.de) has joined #ceph
[7:56] * zack_dol_ (~textual@e0109-114-22-18-10.uqwimax.jp) has joined #ceph
[7:56] * zack_dolby (~textual@e0109-114-22-18-10.uqwimax.jp) Quit (Read error: Connection reset by peer)
[7:57] * zack_dolby (~textual@e0109-114-22-18-10.uqwimax.jp) has joined #ceph
[7:57] * zack_dol_ (~textual@e0109-114-22-18-10.uqwimax.jp) Quit (Read error: Connection reset by peer)
[8:00] * dpippenger (~Adium@cpe-172-249-34-50.socal.res.rr.com) has joined #ceph
[8:00] * dpippenger (~Adium@cpe-172-249-34-50.socal.res.rr.com) has left #ceph
[8:01] * runfromnowhere (~runfromno@pool-108-29-25-203.nycmny.fios.verizon.net) has joined #ceph
[8:05] * madkiss (~madkiss@p5797A39B.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[8:06] * koleosfuscus (~koleosfus@adsl-84-226-68-69.adslplus.ch) has joined #ceph
[8:06] * saurabh (~saurabh@121.244.87.117) has joined #ceph
[8:09] * zidarsk8 (~zidar@2001:1470:fffd:101c:ea11:32ff:fe9a:870) has joined #ceph
[8:16] * koleosfuscus (~koleosfus@adsl-84-226-68-69.adslplus.ch) Quit (Quit: koleosfuscus)
[8:19] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[8:20] * el_seano (~el_seano@0001a13a.user.oftc.net) Quit (Quit: Changing server)
[8:24] * drankis (~drankis__@91.188.43.210) has joined #ceph
[8:27] * reed (~reed@75-101-54-131.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[8:28] * el_seano (~el_seano@0001a13a.user.oftc.net) has joined #ceph
[8:30] * koleosfuscus (~koleosfus@adsl-84-226-68-69.adslplus.ch) has joined #ceph
[8:30] * koleosfuscus (~koleosfus@adsl-84-226-68-69.adslplus.ch) Quit ()
[8:35] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) Quit (Quit: Leaving.)
[8:36] * drankis (~drankis__@91.188.43.210) Quit (Ping timeout: 480 seconds)
[8:37] * ikrstic (~ikrstic@109-93-240-204.dynamic.isp.telekom.rs) has joined #ceph
[8:42] * Hell_Fire__ (~HellFire@123-243-155-184.static.tpgi.com.au) Quit (Read error: Network is unreachable)
[8:42] * Hell_Fire__ (~HellFire@123-243-155-184.static.tpgi.com.au) has joined #ceph
[8:44] * zidarsk8 (~zidar@2001:1470:fffd:101c:ea11:32ff:fe9a:870) Quit (Remote host closed the connection)
[8:45] * pressureman (~pressurem@62.217.45.26) Quit (Ping timeout: 480 seconds)
[8:45] * stephan (~stephan@62.217.45.26) Quit (Ping timeout: 480 seconds)
[8:48] * Sysadmin88 (~IceChat77@94.4.20.0) Quit (Quit: Say What?)
[8:50] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[8:53] * drankis (~drankis__@89.111.13.198) has joined #ceph
[8:54] * stephan (~stephan@62.217.45.26) has joined #ceph
[9:00] * vbellur (~vijay@121.244.87.117) has joined #ceph
[9:06] * vilobhmm (~vilobhmm@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) Quit (Quit: vilobhmm)
[9:06] * stephan (~stephan@62.217.45.26) Quit (Ping timeout: 480 seconds)
[9:11] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[9:13] * oro (~oro@2001:620:20:222:6436:cb51:5a38:8386) has joined #ceph
[9:15] * stephan (~stephan@62.217.45.26) has joined #ceph
[9:17] * dignus (~jkooijman@t-x.dignus.nl) Quit (Quit: leaving)
[9:21] * ikrstic_ (~ikrstic@178-222-84-38.dynamic.isp.telekom.rs) has joined #ceph
[9:22] * vbellur (~vijay@121.244.87.124) has joined #ceph
[9:23] * hyperbaba__ (~hyperbaba@private.neobee.net) has joined #ceph
[9:26] * ikrstic (~ikrstic@109-93-240-204.dynamic.isp.telekom.rs) Quit (Ping timeout: 480 seconds)
[9:27] * ade (~abradshaw@193.202.255.218) has joined #ceph
[9:30] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[9:33] * mtl2 (~Adium@c-67-174-109-212.hsd1.co.comcast.net) has joined #ceph
[9:34] * aarcane (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) Quit (Read error: Connection reset by peer)
[9:34] * simulx (~simulx@66-194-114-178.static.twtelecom.net) Quit (Read error: Connection reset by peer)
[9:34] * mtl1 (~Adium@c-67-174-109-212.hsd1.co.comcast.net) Quit (Read error: Connection reset by peer)
[9:34] * aarcane (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[9:34] * simulx (~simulx@66-194-114-178.static.twtelecom.net) has joined #ceph
[9:43] * fsimonce (~simon@host245-0-dynamic.37-79-r.retail.telecomitalia.it) has joined #ceph
[9:44] * Hell_Fire (~HellFire@123-243-155-184.static.tpgi.com.au) has joined #ceph
[9:44] * Hell_Fire__ (~HellFire@123-243-155-184.static.tpgi.com.au) Quit (Read error: Connection reset by peer)
[9:46] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[9:48] * brunoleon (~quassel@199-109-190-109.dsl.ovh.fr) has joined #ceph
[9:49] * jordanP (~jordan@78.193.36.209) has joined #ceph
[9:51] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[9:53] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:53] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[9:53] * ChanServ sets mode +v andreask
[9:58] * dignus (~jkooijman@t-x.dignus.nl) has joined #ceph
[10:01] * vbellur (~vijay@121.244.87.124) Quit (Read error: Operation timed out)
[10:02] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Ping timeout: 480 seconds)
[10:06] * marrusl (~mark@faun.canonical.com) has joined #ceph
[10:13] * vbellur (~vijay@121.244.87.117) has joined #ceph
[10:16] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[10:25] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[10:28] * lcavassa (~lcavassa@78.25.240.221) has joined #ceph
[10:28] * ikrstic_ (~ikrstic@178-222-84-38.dynamic.isp.telekom.rs) Quit (Read error: Operation timed out)
[10:29] * ikrstic_ (~ikrstic@178-221-94-203.dynamic.isp.telekom.rs) has joined #ceph
[10:32] * ikrstic__ (~ikrstic@178-222-78-255.dynamic.isp.telekom.rs) has joined #ceph
[10:38] * ikrstic_ (~ikrstic@178-221-94-203.dynamic.isp.telekom.rs) Quit (Ping timeout: 480 seconds)
[10:41] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[10:49] * zack_dolby (~textual@e0109-114-22-18-10.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[10:49] * zack_dolby (~textual@e0109-114-22-18-10.uqwimax.jp) has joined #ceph
[10:51] * analbeard (~shw@support.memset.com) has joined #ceph
[10:57] * zack_dolby (~textual@e0109-114-22-18-10.uqwimax.jp) Quit (Ping timeout: 480 seconds)
[10:58] * bandrus (~Adium@c-4f66cf1e-74736162.cust.telenor.se) has joined #ceph
[11:01] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[11:02] * zerick (~eocrospom@190.187.21.53) Quit (Read error: Connection reset by peer)
[11:04] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[11:06] * theanalyst (~abhi@49.32.0.101) Quit (Ping timeout: 480 seconds)
[11:07] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Read error: Operation timed out)
[11:11] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[11:16] * salgeras (~salgeras@sw4i.wifi.b92.net) has joined #ceph
[11:21] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[11:24] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Ping timeout: 480 seconds)
[11:24] * haomaiwa_ (~haomaiwan@118.186.129.94) Quit (Remote host closed the connection)
[11:26] * haomaiwang (~haomaiwan@124.248.205.19) has joined #ceph
[11:26] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[11:30] * zz_hitsumabushi (~hitsumabu@175.184.30.148) Quit (Ping timeout: 480 seconds)
[11:30] * ifur (~osm@hornbill.csc.warwick.ac.uk) Quit (Read error: Connection reset by peer)
[11:31] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[11:31] <longguang> in ceph, what kinds of roles take part in the vote?
[11:31] * clayg (~clayg@ec2-54-235-44-253.compute-1.amazonaws.com) Quit (Ping timeout: 480 seconds)
[11:31] * rektide (~rektide@eldergods.com) Quit (Ping timeout: 480 seconds)
[11:32] * schmee (~quassel@phobos.isoho.st) Quit (Quit: No Ping reply in 180 seconds.)
[11:32] * zerick (~eocrospom@190.187.21.53) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * simulx (~simulx@66-194-114-178.static.twtelecom.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * aarcane (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * runfromnowhere (~runfromno@pool-108-29-25-203.nycmny.fios.verizon.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * lalatenduM (~lalatendu@121.244.87.117) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * rdas (~rdas@121.244.87.115) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * yguang11_ (~yguang11@2406:2000:ef96:e:499:b9d2:712a:8577) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * shang (~ShangWu@175.41.48.77) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * eternaleye (~eternaley@50.245.141.73) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * [fred] (fred@earthli.ng) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * allig8r (~allig8r@128.135.219.116) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * joshd (~jdurgin@2607:f298:a:607:fd80:ba24:7de3:bb37) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * bkopilov (~bkopilov@213.57.17.88) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * wayneeseguin (sid2139@id-2139.uxbridge.irccloud.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * lpabon (~lpabon@nat-pool-bos-t.redhat.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * \ask (~ask@oz.develooper.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * jlogan1 (~Thunderbi@2600:c00:3010:1:1::40) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * s3an2 (~root@korn.s3an.me.uk) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * nolan (~nolan@2001:470:1:41:a800:ff:fe3e:ad08) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * zigo (quasselcor@ipv6-ftp.gplhost.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * ismell (~ismell@host-24-56-188-10.beyondbb.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * ccourtaut_ (~ccourtaut@2001:41d0:1:eed3::1) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * gchristensen (~gchristen@li65-6.members.linode.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * gregsfortytwo (~Adium@2607:f298:a:607:45d3:b9eb:da6:248c) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * rturk|afk (~rturk@nat-pool-rdu-t.redhat.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * joef (~Adium@2620:79:0:131:5c76:4020:e7cd:4d09) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * nwf (~nwf@67.62.51.95) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * erice (~erice@50.245.231.209) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * kevincox (~kevincox@4.s.kevincox.ca) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * todin (tuxadero@kudu.in-berlin.de) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * hk135 (~root@home.hornerscomputer.co.uk) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * dmick (~dmick@2607:f298:a:607:cd20:130:9c34:bf4) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * houkouonchi-home (~linux@pool-71-189-160-82.lsanca.fios.verizon.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * alexbligh1 (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * tws_ (~traviss@rrcs-24-123-86-154.central.biz.rr.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * darkfader (~floh@88.79.251.60) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * classicsnail (~David@2600:3c01::f03c:91ff:fe96:d3c0) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * designated (~rroberts@host-177-39-52-24.midco.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * loicd (~loicd@cmd179.fsffrance.org) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * `10 (~10@69.169.91.14) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * jksM (~jks@3e6b5724.rev.stofanet.dk) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * beardo_ (~sma310@216-15-72-201.c3-0.drf-ubr1.atw-drf.pa.cable.rcn.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * maethor (~maethor@galactus.lahouze.org) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * Andreas-IPO (~andreas@2a01:2b0:2000:11::cafe) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * sverrest (~sverrest@cm-84.208.166.184.getinternet.no) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * mdjp (~mdjp@2001:41d0:52:100::343) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * grepory (uid29799@id-29799.uxbridge.irccloud.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * saaby (~as@mail.saaby.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * joerocklin_ (~joe@cpe-65-185-149-56.woh.res.rr.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * hflai (~hflai@alumni.cs.nctu.edu.tw) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * v2 (~venky@ov42.x.rootbsd.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * ponyofdeath (~vladi@cpe-66-27-98-26.san.res.rr.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * benner (~benner@162.243.49.163) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * Gugge-47527 (gugge@kriminel.dk) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * kwmiebach__ (sid16855@id-16855.charlton.irccloud.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * hybrid512 (~walid@195.200.167.70) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * FL1SK (~quassel@159.118.92.60) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * devicenull (sid4013@id-4013.ealing.irccloud.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * cce (~cce@50.56.54.167) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * sadbox (~jmcguire@sadbox.org) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * markl (~mark@knm.org) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * oblu (~o@62.109.134.112) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * shk (sid33582@id-33582.charlton.irccloud.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * jnq (~jnq@0001b7cc.user.oftc.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * philips (~philips@ec2-54-196-103-51.compute-1.amazonaws.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * Jakey (uid1475@id-1475.uxbridge.irccloud.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * kraken (~kraken@gw.sepia.ceph.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * iggy (~iggy@theiggy.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * funnel (~funnel@0001c7d4.user.oftc.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * brambles (~xymox@s0.barwen.ch) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * haomaiwang (~haomaiwan@124.248.205.19) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * ikrstic__ (~ikrstic@178-222-78-255.dynamic.isp.telekom.rs) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * lcavassa (~lcavassa@78.25.240.221) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * vbellur (~vijay@121.244.87.117) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * dignus (~jkooijman@t-x.dignus.nl) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * fsimonce (~simon@host245-0-dynamic.37-79-r.retail.telecomitalia.it) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * mtl2 (~Adium@c-67-174-109-212.hsd1.co.comcast.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * el_seano (~el_seano@0001a13a.user.oftc.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * saurabh (~saurabh@121.244.87.117) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * rendar (~I@host223-182-dynamic.19-79-r.retail.telecomitalia.it) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * carter (~carter@li98-136.members.linode.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * yuriw (~Adium@c-76-126-35-111.hsd1.ca.comcast.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * sage__ (~quassel@gw.sepia.ceph.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * imjustmatthew (~imjustmat@pool-74-110-226-158.rcmdva.fios.verizon.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * sage (~quassel@cpe-23-242-158-79.socal.res.rr.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * KaZeR (~kazer@c-67-161-64-186.hsd1.ca.comcast.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * dgarcia (~dgarcia@50-73-137-146-ip-static.hfc.comcastbusiness.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * lincolnb (~lincoln@c-67-165-142-226.hsd1.il.comcast.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * cronix1 (~cronix@5.199.139.166) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * Psi-Jack (~psi-jack@psi-jack.user.oftc.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * trond (~trond@evil-server.alseth.info) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * MACscr (~Adium@c-50-158-183-38.hsd1.il.comcast.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * KindOne (kindone@0001a7db.user.oftc.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * fretb (~fretb@drip.frederik.pw) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * terje (~joey@184-96-155-130.hlrn.qwest.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * wedge (lordsilenc@bigfoot.xh.se) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * magicrobotmonkey (~abassett@ec2-50-18-55-253.us-west-1.compute.amazonaws.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * saturnine (~saturnine@ashvm.saturne.in) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * winston-d (~ubuntu@ec2-54-244-213-72.us-west-2.compute.amazonaws.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * gleam (gleam@dolph.debacle.org) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * fouxm (~foucault@ks3363630.kimsufi.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * mondkalbantrieb (~quassel@mondkalbantrieb.de) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * mongo (~gdahlman@voyage.voipnw.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * tnt_ (~tnt@ec2-54-200-98-43.us-west-2.compute.amazonaws.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * jackhill (~jackhill@bog.hcoop.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * Fetch (fetch@gimel.cepheid.org) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * blynch (~blynch@vm-nat.msi.umn.edu) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * Guest625 (~coyo@209.148.95.237) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * jksM- (~jks@3e6b5724.rev.stofanet.dk) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * rturk-away (~rturk@ds2390.dreamservers.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * lkoranda (~lkoranda@nat-pool-brq-t.redhat.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * Karcaw (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * stj (~stj@tully.csail.mit.edu) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * kfei (~root@114-27-86-161.dynamic.hinet.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * erlkonig (~alex@cpe-68-203-11-82.austin.res.rr.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * beardo (~sma310@beardo.cc.lehigh.edu) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * joshwambua (~joshwambu@154.72.0.90) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * scuttlemonkey (~scuttle@nat-pool-rdu-t.redhat.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * nhm (~nhm@184-97-192-179.mpls.qwest.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * Meths (~meths@2.25.191.11) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * amospalla (~amospalla@amospalla.es) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * purpleidea (~james@199.180.99.171) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * athrift (~nz_monkey@203.86.205.13) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * tank100 (~tank@84.200.17.138) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * [caveman] (~quassel@boxacle.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * tchmnkyz (~jeremy@0001638b.user.oftc.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * pmatulis (~peter@ec2-23-23-42-7.compute-1.amazonaws.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * zackc (~zackc@0001ba60.user.oftc.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * tcatm (~quassel@mneme.draic.info) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * SpamapS (~clint@xencbyrum2.srihosting.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * Azrael (~azrael@terra.negativeblue.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * bens (~ben@c-71-231-52-111.hsd1.wa.comcast.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * ccourtaut (~ccourtaut@2001:41d0:2:4a25::1) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * odi (~quassel@2a00:12c0:1015:136::9) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * mancdaz (~mancdaz@2a00:1a48:7807:102:94f4:6b56:ff08:886c) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * Hazelesque (~hazel@2a03:9800:10:13::2) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * sep (~sep@2a04:2740:1:0:52e5:49ff:feeb:32) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * sileht (~sileht@gizmo.sileht.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * Svedrin (svedrin@ketos.funzt-halt.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * Georgyo (~georgyo@shamm.as) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * hughsaunders (~hughsaund@wherenow.org) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * guppy (~quassel@guppy.xxx) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * higebu (~higebu@www3347ue.sakura.ne.jp) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * portante (~portante@nat-pool-bos-t.redhat.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * leseb_ (~leseb@88-190-214-97.rev.dedibox.fr) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * phantomcircuit (~phantomci@blockchain.ceo) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * eightyeight (~atoponce@atoponce.user.oftc.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * Jahkeup (Jahkeup@gemini.ca.us.panicbnc.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * ivan` (~ivan`@000130ca.user.oftc.net) Quit (magnet.oftc.net resistance.oftc.net)
[11:32] * Daviey (~DavieyOFT@bootie.daviey.com) Quit (magnet.oftc.net resistance.oftc.net)
[11:33] * blue (~blue@irc.mmh.dk) has left #ceph
[11:33] * mfa298 (~mfa298@gateway.yapd.net) Quit (Read error: Connection reset by peer)
[11:34] * mfa298 (~mfa298@gateway.yapd.net) has joined #ceph
[11:34] * aldavud (~aldavud@213.55.176.169) has joined #ceph
[11:34] * hitsumabj (~hitsumabu@175.184.30.148) has joined #ceph
[11:34] * clayg_ (~clayg@ec2-54-235-44-253.compute-1.amazonaws.com) has joined #ceph
[11:34] * rektide_ (~rektide@eldergods.com) has joined #ceph
[11:34] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[11:34] * ifur_ (~osm@hornbill.csc.warwick.ac.uk) has joined #ceph
[11:34] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[11:34] * haomaiwang (~haomaiwan@124.248.205.19) has joined #ceph
[11:34] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[11:34] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[11:34] * ikrstic__ (~ikrstic@178-222-78-255.dynamic.isp.telekom.rs) has joined #ceph
[11:34] * lcavassa (~lcavassa@78.25.240.221) has joined #ceph
[11:34] * vbellur (~vijay@121.244.87.117) has joined #ceph
[11:34] * dignus (~jkooijman@t-x.dignus.nl) has joined #ceph
[11:34] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[11:34] * fsimonce (~simon@host245-0-dynamic.37-79-r.retail.telecomitalia.it) has joined #ceph
[11:34] * simulx (~simulx@66-194-114-178.static.twtelecom.net) has joined #ceph
[11:34] * aarcane (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[11:34] * mtl2 (~Adium@c-67-174-109-212.hsd1.co.comcast.net) has joined #ceph
[11:34] * el_seano (~el_seano@0001a13a.user.oftc.net) has joined #ceph
[11:34] * saurabh (~saurabh@121.244.87.117) has joined #ceph
[11:34] * runfromnowhere (~runfromno@pool-108-29-25-203.nycmny.fios.verizon.net) has joined #ceph
[11:34] * rendar (~I@host223-182-dynamic.19-79-r.retail.telecomitalia.it) has joined #ceph
[11:34] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[11:34] * rdas (~rdas@121.244.87.115) has joined #ceph
[11:34] * yguang11_ (~yguang11@2406:2000:ef96:e:499:b9d2:712a:8577) has joined #ceph
[11:34] * shang (~ShangWu@175.41.48.77) has joined #ceph
[11:34] * carter (~carter@li98-136.members.linode.com) has joined #ceph
[11:34] * yuriw (~Adium@c-76-126-35-111.hsd1.ca.comcast.net) has joined #ceph
[11:34] * sage__ (~quassel@gw.sepia.ceph.com) has joined #ceph
[11:34] * eternaleye (~eternaley@50.245.141.73) has joined #ceph
[11:34] * [fred] (fred@earthli.ng) has joined #ceph
[11:34] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[11:34] * allig8r (~allig8r@128.135.219.116) has joined #ceph
[11:34] * imjustmatthew (~imjustmat@pool-74-110-226-158.rcmdva.fios.verizon.net) has joined #ceph
[11:34] * sage (~quassel@cpe-23-242-158-79.socal.res.rr.com) has joined #ceph
[11:34] * joshd (~jdurgin@2607:f298:a:607:fd80:ba24:7de3:bb37) has joined #ceph
[11:34] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) has joined #ceph
[11:34] * bkopilov (~bkopilov@213.57.17.88) has joined #ceph
[11:34] * wayneeseguin (sid2139@id-2139.uxbridge.irccloud.com) has joined #ceph
[11:34] * lpabon (~lpabon@nat-pool-bos-t.redhat.com) has joined #ceph
[11:34] * \ask (~ask@oz.develooper.com) has joined #ceph
[11:34] * KaZeR (~kazer@c-67-161-64-186.hsd1.ca.comcast.net) has joined #ceph
[11:34] * jlogan1 (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph
[11:34] * dgarcia (~dgarcia@50-73-137-146-ip-static.hfc.comcastbusiness.net) has joined #ceph
[11:34] * lincolnb (~lincoln@c-67-165-142-226.hsd1.il.comcast.net) has joined #ceph
[11:34] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) has joined #ceph
[11:34] * cronix1 (~cronix@5.199.139.166) has joined #ceph
[11:34] * Psi-Jack (~psi-jack@psi-jack.user.oftc.net) has joined #ceph
[11:34] * mjeanson (~mjeanson@00012705.user.oftc.net) has joined #ceph
[11:34] * s3an2 (~root@korn.s3an.me.uk) has joined #ceph
[11:34] * nolan (~nolan@2001:470:1:41:a800:ff:fe3e:ad08) has joined #ceph
[11:34] * zigo (quasselcor@ipv6-ftp.gplhost.com) has joined #ceph
[11:34] * ismell (~ismell@host-24-56-188-10.beyondbb.com) has joined #ceph
[11:34] * trond (~trond@evil-server.alseth.info) has joined #ceph
[11:34] * MACscr (~Adium@c-50-158-183-38.hsd1.il.comcast.net) has joined #ceph
[11:34] * ccourtaut_ (~ccourtaut@2001:41d0:1:eed3::1) has joined #ceph
[11:34] * gchristensen (~gchristen@li65-6.members.linode.com) has joined #ceph
[11:34] * KindOne (kindone@0001a7db.user.oftc.net) has joined #ceph
[11:34] * gregsfortytwo (~Adium@2607:f298:a:607:45d3:b9eb:da6:248c) has joined #ceph
[11:34] * rturk|afk (~rturk@nat-pool-rdu-t.redhat.com) has joined #ceph
[11:34] * shk (sid33582@id-33582.charlton.irccloud.com) has joined #ceph
[11:34] * devicenull (sid4013@id-4013.ealing.irccloud.com) has joined #ceph
[11:34] * Gugge-47527 (gugge@kriminel.dk) has joined #ceph
[11:34] * kwmiebach__ (sid16855@id-16855.charlton.irccloud.com) has joined #ceph
[11:34] * benner (~benner@162.243.49.163) has joined #ceph
[11:34] * cce (~cce@50.56.54.167) has joined #ceph
[11:34] * philips (~philips@ec2-54-196-103-51.compute-1.amazonaws.com) has joined #ceph
[11:34] * ponyofdeath (~vladi@cpe-66-27-98-26.san.res.rr.com) has joined #ceph
[11:34] * v2 (~venky@ov42.x.rootbsd.net) has joined #ceph
[11:34] * hflai (~hflai@alumni.cs.nctu.edu.tw) has joined #ceph
[11:34] * joerocklin_ (~joe@cpe-65-185-149-56.woh.res.rr.com) has joined #ceph
[11:34] * saaby (~as@mail.saaby.com) has joined #ceph
[11:34] * grepory (uid29799@id-29799.uxbridge.irccloud.com) has joined #ceph
[11:34] * mdjp (~mdjp@2001:41d0:52:100::343) has joined #ceph
[11:34] * sverrest (~sverrest@cm-84.208.166.184.getinternet.no) has joined #ceph
[11:34] * Andreas-IPO (~andreas@2a01:2b0:2000:11::cafe) has joined #ceph
[11:34] * iggy (~iggy@theiggy.com) has joined #ceph
[11:34] * FL1SK (~quassel@159.118.92.60) has joined #ceph
[11:34] * maethor (~maethor@galactus.lahouze.org) has joined #ceph
[11:34] * kraken (~kraken@gw.sepia.ceph.com) has joined #ceph
[11:34] * oblu (~o@62.109.134.112) has joined #ceph
[11:34] * brambles (~xymox@s0.barwen.ch) has joined #ceph
[11:34] * funnel (~funnel@0001c7d4.user.oftc.net) has joined #ceph
[11:34] * beardo_ (~sma310@216-15-72-201.c3-0.drf-ubr1.atw-drf.pa.cable.rcn.com) has joined #ceph
[11:34] * Jakey (uid1475@id-1475.uxbridge.irccloud.com) has joined #ceph
[11:34] * jksM (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[11:34] * `10 (~10@69.169.91.14) has joined #ceph
[11:34] * sadbox (~jmcguire@sadbox.org) has joined #ceph
[11:34] * loicd (~loicd@cmd179.fsffrance.org) has joined #ceph
[11:34] * jnq (~jnq@0001b7cc.user.oftc.net) has joined #ceph
[11:34] * designated (~rroberts@host-177-39-52-24.midco.net) has joined #ceph
[11:34] * classicsnail (~David@2600:3c01::f03c:91ff:fe96:d3c0) has joined #ceph
[11:34] * darkfader (~floh@88.79.251.60) has joined #ceph
[11:34] * tws_ (~traviss@rrcs-24-123-86-154.central.biz.rr.com) has joined #ceph
[11:34] * alexbligh1 (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) has joined #ceph
[11:34] * houkouonchi-home (~linux@pool-71-189-160-82.lsanca.fios.verizon.net) has joined #ceph
[11:34] * dmick (~dmick@2607:f298:a:607:cd20:130:9c34:bf4) has joined #ceph
[11:34] * hk135 (~root@home.hornerscomputer.co.uk) has joined #ceph
[11:34] * markl (~mark@knm.org) has joined #ceph
[11:34] * todin (tuxadero@kudu.in-berlin.de) has joined #ceph
[11:34] * kevincox (~kevincox@4.s.kevincox.ca) has joined #ceph
[11:34] * erice (~erice@50.245.231.209) has joined #ceph
[11:34] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) has joined #ceph
[11:34] * nwf (~nwf@67.62.51.95) has joined #ceph
[11:34] * hybrid512 (~walid@195.200.167.70) has joined #ceph
[11:34] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[11:34] * joef (~Adium@2620:79:0:131:5c76:4020:e7cd:4d09) has joined #ceph
[11:34] * scuttlemonkey (~scuttle@nat-pool-rdu-t.redhat.com) has joined #ceph
[11:34] * joshwambua (~joshwambu@154.72.0.90) has joined #ceph
[11:34] * beardo (~sma310@beardo.cc.lehigh.edu) has joined #ceph
[11:34] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[11:34] * erlkonig (~alex@cpe-68-203-11-82.austin.res.rr.com) has joined #ceph
[11:34] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[11:34] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[11:34] * kfei (~root@114-27-86-161.dynamic.hinet.net) has joined #ceph
[11:34] * stj (~stj@tully.csail.mit.edu) has joined #ceph
[11:34] * Karcaw (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) has joined #ceph
[11:34] * lkoranda (~lkoranda@nat-pool-brq-t.redhat.com) has joined #ceph
[11:34] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) has joined #ceph
[11:34] * zackc (~zackc@0001ba60.user.oftc.net) has joined #ceph
[11:34] * nhm (~nhm@184-97-192-179.mpls.qwest.net) has joined #ceph
[11:34] * rturk-away (~rturk@ds2390.dreamservers.com) has joined #ceph
[11:34] * jksM- (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[11:34] * Hazelesque (~hazel@2a03:9800:10:13::2) has joined #ceph
[11:34] * Guest625 (~coyo@209.148.95.237) has joined #ceph
[11:34] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[11:34] * Azrael (~azrael@terra.negativeblue.com) has joined #ceph
[11:34] * blynch (~blynch@vm-nat.msi.umn.edu) has joined #ceph
[11:34] * Fetch (fetch@gimel.cepheid.org) has joined #ceph
[11:34] * tank100 (~tank@84.200.17.138) has joined #ceph
[11:34] * purpleidea (~james@199.180.99.171) has joined #ceph
[11:34] * jackhill (~jackhill@bog.hcoop.net) has joined #ceph
[11:34] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) has joined #ceph
[11:34] * tnt_ (~tnt@ec2-54-200-98-43.us-west-2.compute.amazonaws.com) has joined #ceph
[11:34] * mongo (~gdahlman@voyage.voipnw.net) has joined #ceph
[11:34] * bens (~ben@c-71-231-52-111.hsd1.wa.comcast.net) has joined #ceph
[11:34] * mondkalbantrieb (~quassel@mondkalbantrieb.de) has joined #ceph
[11:34] * tcatm (~quassel@mneme.draic.info) has joined #ceph
[11:34] * fouxm (~foucault@ks3363630.kimsufi.com) has joined #ceph
[11:34] * [caveman] (~quassel@boxacle.net) has joined #ceph
[11:34] * SpamapS (~clint@xencbyrum2.srihosting.com) has joined #ceph
[11:34] * gleam (gleam@dolph.debacle.org) has joined #ceph
[11:34] * tchmnkyz (~jeremy@0001638b.user.oftc.net) has joined #ceph
[11:34] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) has joined #ceph
[11:34] * winston-d (~ubuntu@ec2-54-244-213-72.us-west-2.compute.amazonaws.com) has joined #ceph
[11:34] * amospalla (~amospalla@amospalla.es) has joined #ceph
[11:34] * saturnine (~saturnine@ashvm.saturne.in) has joined #ceph
[11:34] * magicrobotmonkey (~abassett@ec2-50-18-55-253.us-west-1.compute.amazonaws.com) has joined #ceph
[11:34] * Meths (~meths@2.25.191.11) has joined #ceph
[11:34] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[11:34] * wedge (lordsilenc@bigfoot.xh.se) has joined #ceph
[11:34] * terje (~joey@184-96-155-130.hlrn.qwest.net) has joined #ceph
[11:34] * athrift (~nz_monkey@203.86.205.13) has joined #ceph
[11:34] * pmatulis (~peter@ec2-23-23-42-7.compute-1.amazonaws.com) has joined #ceph
[11:34] * fretb (~fretb@drip.frederik.pw) has joined #ceph
[11:34] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[11:34] * Svedrin (svedrin@ketos.funzt-halt.net) has joined #ceph
[11:34] * mancdaz (~mancdaz@2a00:1a48:7807:102:94f4:6b56:ff08:886c) has joined #ceph
[11:34] * odi (~quassel@2a00:12c0:1015:136::9) has joined #ceph
[11:34] * ccourtaut (~ccourtaut@2001:41d0:2:4a25::1) has joined #ceph
[11:34] * sep (~sep@2a04:2740:1:0:52e5:49ff:feeb:32) has joined #ceph
[11:34] * sileht (~sileht@gizmo.sileht.net) has joined #ceph
[11:34] * Jahkeup (Jahkeup@gemini.ca.us.panicbnc.net) has joined #ceph
[11:34] * Daviey (~DavieyOFT@bootie.daviey.com) has joined #ceph
[11:34] * higebu (~higebu@www3347ue.sakura.ne.jp) has joined #ceph
[11:34] * guppy (~quassel@guppy.xxx) has joined #ceph
[11:34] * hughsaunders (~hughsaund@wherenow.org) has joined #ceph
[11:34] * Georgyo (~georgyo@shamm.as) has joined #ceph
[11:34] * phantomcircuit (~phantomci@blockchain.ceo) has joined #ceph
[11:34] * eightyeight (~atoponce@atoponce.user.oftc.net) has joined #ceph
[11:34] * portante (~portante@nat-pool-bos-t.redhat.com) has joined #ceph
[11:34] * ivan` (~ivan`@000130ca.user.oftc.net) has joined #ceph
[11:34] * leseb_ (~leseb@88-190-214-97.rev.dedibox.fr) has joined #ceph
[11:36] * ChanServ sets mode +v sage
[11:36] * ChanServ sets mode +v elder
[11:36] * ChanServ sets mode +v scuttlemonkey
[11:36] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[11:36] * erlkonig (~alex@cpe-68-203-11-82.austin.res.rr.com) Quit (Remote host closed the connection)
[11:40] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[11:40] * schmee (~quassel@phobos.isoho.st) has joined #ceph
[11:40] * vmx (~vmx@dslb-084-056-023-132.pools.arcor-ip.net) has joined #ceph
[11:49] * Steki (~steki@198.199.65.141) has joined #ceph
[11:49] * fsimonce` (~simon@host89-28-dynamic.53-82-r.retail.telecomitalia.it) has joined #ceph
[11:51] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Read error: Operation timed out)
[11:52] * fsimonc`` (~simon@host239-195-dynamic.35-79-r.retail.telecomitalia.it) has joined #ceph
[11:53] * BManojlovic (~steki@cable-94-189-160-74.dynamic.sbb.rs) Quit (Ping timeout: 480 seconds)
[11:53] * fsimonce (~simon@host245-0-dynamic.37-79-r.retail.telecomitalia.it) Quit (Read error: Operation timed out)
[11:57] * vbellur (~vijay@121.244.87.124) has joined #ceph
[11:57] * fsimonce` (~simon@host89-28-dynamic.53-82-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[12:04] * yguang11_ (~yguang11@2406:2000:ef96:e:499:b9d2:712a:8577) Quit ()
[12:07] * i_m (~ivan.miro@gbibp9ph1--blueice4n1.emea.ibm.com) has joined #ceph
[12:07] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[12:07] * shimo (~A13032@60.36.191.146) Quit (Quit: shimo)
[12:09] * JC (~JC@AMontpellier-651-1-32-204.w90-57.abo.wanadoo.fr) has joined #ceph
[12:09] * ikrstic__ (~ikrstic@178-222-78-255.dynamic.isp.telekom.rs) Quit (Read error: Operation timed out)
[12:10] * JC (~JC@AMontpellier-651-1-32-204.w90-57.abo.wanadoo.fr) Quit ()
[12:11] * lightspeed (~lightspee@81.187.0.153) Quit (Ping timeout: 480 seconds)
[12:12] * ikrstic__ (~ikrstic@212.200.213.54) has joined #ceph
[12:12] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[12:15] * i_m (~ivan.miro@gbibp9ph1--blueice4n1.emea.ibm.com) Quit (Quit: Leaving.)
[12:15] * i_m (~ivan.miro@gbibp9ph1--blueice2n2.emea.ibm.com) has joined #ceph
[12:20] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[12:20] * jordanP (~jordan@78.193.36.209) Quit (Quit: Leaving)
[12:20] * zack_dolby (~textual@p67f6b6.tokynt01.ap.so-net.ne.jp) has joined #ceph
[12:22] * Zethrok (~martin@95.154.26.34) has joined #ceph
[12:23] * zhaochao (~zhaochao@123.151.134.238) Quit (Ping timeout: 480 seconds)
[12:25] * fsimonce (~simon@host142-0-dynamic.42-79-r.retail.telecomitalia.it) has joined #ceph
[12:26] * DV (~veillard@veillard.com) Quit (Ping timeout: 480 seconds)
[12:27] * Steki is now known as BManojlovic
[12:27] * fsimonc`` (~simon@host239-195-dynamic.35-79-r.retail.telecomitalia.it) Quit (Read error: Operation timed out)
[12:30] * aldavud (~aldavud@213.55.176.169) Quit (Ping timeout: 480 seconds)
[12:30] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[12:31] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[12:33] * zhaochao (~zhaochao@123.151.134.238) has joined #ceph
[12:33] * zhaochao (~zhaochao@123.151.134.238) has left #ceph
[12:40] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[12:41] * longguang (~chatzilla@123.126.33.253) Quit (Read error: Connection reset by peer)
[12:47] * DV (~veillard@2001:41d0:1:d478::1) Quit (Remote host closed the connection)
[12:47] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[12:55] * madkiss (~madkiss@2001:6f8:12c3:f00f:8d30:fbb1:a2be:1cd3) has joined #ceph
[12:55] * al (d@niel.cx) Quit (Remote host closed the connection)
[12:57] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[12:57] * al (quassel@niel.cx) has joined #ceph
[12:58] * huangjun (~kvirc@59.173.185.197) Quit (Ping timeout: 480 seconds)
[12:58] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[12:58] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[13:00] * diegows (~diegows@190.190.5.238) has joined #ceph
[13:02] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Ping timeout: 480 seconds)
[13:06] * sjm (~sjm@pool-72-76-115-220.nwrknj.fios.verizon.net) has joined #ceph
[13:08] * vbellur (~vijay@121.244.87.117) has joined #ceph
[13:14] * shang (~ShangWu@175.41.48.77) Quit (Remote host closed the connection)
[13:18] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[13:23] * saurabh (~saurabh@121.244.87.117) Quit (Quit: Leaving)
[13:30] * jordanP (~jordan@185.23.92.11) has joined #ceph
[13:36] * dneary (~dneary@87-231-145-225.rev.numericable.fr) has joined #ceph
[13:41] * garphy`aw is now known as garphy
[13:44] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[13:44] * ChanServ sets mode +v andreask
[13:45] * oomkiller (~AndChat39@ip-109-84-0-22.web.vodafone.de) has joined #ceph
[13:46] * huangjun (~kvirc@117.151.48.111) has joined #ceph
[13:48] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[13:48] <oomkiller> Hi, I have a test setup with 3 nodes with the same weight in the crushmap... if I shut down one OSD and wait 5 mins for the OSD to get marked out, the cluster recovers, but not all PGs: some PGs stay stuck unclean with active+degraded. What could be the problem?
[13:49] <tnt_> do you have any pools with replication set to 3 ?
[13:49] <oomkiller> All pools are set to 3
[13:50] <oomkiller> Wait, what are the default pools (data, metadata and rbd) set to by default?
[13:50] <tnt_> Then ... if you have only 3 nodes, and you take 1 away, it can't find 3 distinct hosts ...
[13:51] <tnt_> I think they're set to 2 by default.
[13:51] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Read error: Operation timed out)
[13:51] <tnt_> but not sure. just dump the pool list.
[13:51] <oomkiller> Shouldn't the remaining OSDs on this node be considered to store a copy?
[13:51] <oomkiller> I'll have a look
[13:52] <tnt_> oomkiller: no. It will only make copies in different failure domains.
[13:52] <tnt_> and in the default crush map this is at the 'host' level.
[13:53] <oomkiller> The size is set to 3 and the min_size to 2.
[13:54] <tnt_> yes, so you need at least 3 nodes to have a HEALTH_OK.
[13:54] <oomkiller> That's true, but a different OSD on the same host as the failing OSD is another failure domain...
[13:54] <tnt_> not in the default crushmap ...
[13:54] <tnt_> you can configure the crushmap to do that, but then you could have some PGs whose copies all end up on the same host, on different OSDs.
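The default rule tnt_ is describing separates replicas at the host level. A minimal sketch of such a rule, as it would appear in a decompiled crushmap of that era (rule name and size limits assumed), looks like:

```
rule replicated_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    # pick N distinct host buckets, then one leaf (osd) from each
    step chooseleaf firstn 0 type host
    step emit
}
```

Changing `type host` to `type osd` would allow replicas on different OSDs of the same host, with exactly the caveat tnt_ raises: some PGs could then keep all their copies on one machine.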
[13:58] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[14:00] * ganders (~root@200.0.230.235) has joined #ceph
[14:00] <oomkiller> Just to make it clear: I don't take away one osd. I took one osd on one node away. It doesn't make sense to me that I need to have all OSDs up and running with 3 nodes to get HEALTH_OK. If I remove the OSD from the crush map, then I get HEALTH_OK.
[14:00] * ikrstic (~ikrstic@93-86-144-62.dynamic.isp.telekom.rs) has joined #ceph
[14:01] <tnt_> Oh, I thought you took out the entire node.
[14:01] <oomkiller> Grr, I wanted to write node instead of osd in the first sentence...
[14:01] <oomkiller> Ok good :) just thought my understanding of crush was wrong:)
[14:02] <tnt_> Then yeah, it should recover. Are the tunables set to 'optimal'? I know that the legacy ones sometimes have issues finding placements in small clusters.
[14:03] <Vacum> iirc ceph can run into this if you have exactly as many instances of a failure domain as you have replicas defined (in your case, 3 hosts with replica count 3). if an osd goes down and out (vs removed), crush for some pgs will iterate down to always the same (down/out) osd. this only gets better with either:
[14:03] <Vacum> a) having more than <replica count> failure domains, or b) in firefly, setting the crush tunables to optimal
[14:03] <oomkiller> Ah ok... no, they are set to bobtail because Ubuntu has problems mounting CephFS with the tunables set to firefly
[14:04] <oomkiller> Ok, I will try to set it to firefly and see if it fixes it
[14:04] <Vacum> chooseleaf_vary_r
[14:04] <Vacum> was the option
[14:04] <Vacum> http://ceph.com/docs/master/rados/operations/crush-map/
[14:05] <tnt_> oomkiller: yeah, you need like a 3.15 kernel to use chooseleaf_vary_r with CephFS or RBD kernel client.
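For reference, the checks and the tunables change discussed above map to commands along these lines (output formats vary by release; these subcommands exist in firefly-era Ceph):

```
ceph osd dump | grep 'replicated size'   # per-pool size / min_size
ceph osd crush show-tunables             # current tunables values
ceph osd crush tunables firefly          # switch profile; expect data movement
```

Switching the tunables profile triggers rebalancing, and, as tnt_ notes, the firefly tunables (chooseleaf_vary_r) need a recent kernel (~3.15) on any CephFS or kernel RBD clients.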
[14:05] * vbellur (~vijay@121.244.87.117) Quit (Read error: Operation timed out)
[14:06] * ikrstic__ (~ikrstic@212.200.213.54) Quit (Ping timeout: 480 seconds)
[14:06] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[14:07] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[14:07] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[14:08] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[14:08] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[14:09] <oomkiller> tnt_ Vacum yep with tunables set to firefly it works. Thanks a lot
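For the record, the fix discussed above boils down to switching the cluster's CRUSH tunables profile. A rough sketch of the commands involved (switching profiles moves data, and older kernel clients reject newer tunables, as tnt_ noted above):

```
# inspect the current tunables (including chooseleaf_vary_r)
ceph osd crush show-tunables

# switch profiles; this triggers data movement, so pick a quiet window
ceph osd crush tunables firefly    # or: optimal, bobtail, legacy
```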
[14:12] * ikrstic (~ikrstic@93-86-144-62.dynamic.isp.telekom.rs) Quit (Read error: Operation timed out)
[14:13] * ikrstic (~ikrstic@109-93-179-169.dynamic.isp.telekom.rs) has joined #ceph
[14:15] * wenjunh (~wenjunh@corp-nat.peking.corp.yahoo.com) Quit (Quit: wenjunh)
[14:15] * stephan (~stephan@62.217.45.26) Quit (Quit: Ex-Chat)
[14:18] * oomkiller (~AndChat39@ip-109-84-0-22.web.vodafone.de) Quit (Quit: Bye)
[14:24] * vbellur (~vijay@121.244.87.124) has joined #ceph
[14:25] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[14:26] * andreask (~andreask@zid-vpnn113.uibk.ac.at) has joined #ceph
[14:26] * ChanServ sets mode +v andreask
[14:28] * ade (~abradshaw@193.202.255.218) Quit (Quit: Too sexy for his shirt)
[14:28] <brunoleon> hi there. on brand new cluster with 3 mon and 3 osd (each on different VMs on one host)
[14:29] <brunoleon> I can't get pg to active+clean status
[14:30] <brunoleon> I've been trying for 2 days but can't figure out where I went wrong. Is there any step after setting up the OSDs to get the pgs active+clean?
[14:34] * haomaiwa_ (~haomaiwan@118.186.129.94) has joined #ceph
[14:40] * haomaiwang (~haomaiwan@124.248.205.19) Quit (Ping timeout: 480 seconds)
[14:43] * oro (~oro@2001:620:20:222:6436:cb51:5a38:8386) Quit (Remote host closed the connection)
[14:47] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[14:49] * rdas (~rdas@121.244.87.115) Quit (Ping timeout: 480 seconds)
[14:49] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) has joined #ceph
[14:51] * michalefty (~micha@p20030071CF556600889C41B5BC40C3E7.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[14:55] * theanalyst (~abhi@49.32.0.21) has joined #ceph
[15:03] * hyperbaba__ (~hyperbaba@private.neobee.net) Quit (Ping timeout: 480 seconds)
[15:03] * jrankin (~jrankin@d47-69-66-231.try.wideopenwest.com) has joined #ceph
[15:05] * analbeard1 (~shw@185.28.167.198) has joined #ceph
[15:05] * analbeard (~shw@support.memset.com) Quit (Read error: No route to host)
[15:06] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[15:07] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[15:09] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[15:12] * pressureman (~pressurem@62.217.45.26) has joined #ceph
[15:13] * analbeard1 (~shw@185.28.167.198) Quit (Ping timeout: 480 seconds)
[15:13] * LeaChim (~LeaChim@host86-161-90-156.range86-161.btcentralplus.com) has joined #ceph
[15:14] * beardo (~sma310@beardo.cc.lehigh.edu) Quit (Remote host closed the connection)
[15:18] * beardo (~sma310@beardo.cc.lehigh.edu) has joined #ceph
[15:18] * vbellur (~vijay@121.244.87.117) has joined #ceph
[15:20] * markbby (~Adium@168.94.245.2) has joined #ceph
[15:20] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) has joined #ceph
[15:23] * analbeard (~shw@support.memset.com) has joined #ceph
[15:29] * brad_mssw (~brad@shop.monetra.com) has joined #ceph
[15:35] * jrankin (~jrankin@d47-69-66-231.try.wideopenwest.com) Quit (Quit: Leaving)
[15:37] * markbby (~Adium@168.94.245.2) Quit (Remote host closed the connection)
[15:39] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Quit: Leaving.)
[15:46] * ikrstic (~ikrstic@109-93-179-169.dynamic.isp.telekom.rs) Quit (Read error: Operation timed out)
[15:47] * ikrstic (~ikrstic@93-86-203-60.dynamic.isp.telekom.rs) has joined #ceph
[15:47] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[15:48] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[15:48] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[15:48] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[15:51] * nhm (~nhm@184-97-192-179.mpls.qwest.net) Quit (Read error: Operation timed out)
[15:52] * lcavassa (~lcavassa@78.25.240.221) Quit (Quit: Leaving)
[15:53] * nhm (~nhm@65-128-152-189.mpls.qwest.net) has joined #ceph
[15:53] * ChanServ sets mode +o nhm
[15:56] * brunoleon (~quassel@199-109-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[15:58] * tdasilva (~quassel@nat-pool-bos-t.redhat.com) has joined #ceph
[16:12] * zerick (~eocrospom@190.187.21.53) Quit (Ping timeout: 480 seconds)
[16:12] * salgeras (~salgeras@sw4i.wifi.b92.net) Quit (Quit: Leaving)
[16:12] * b0e (~aledermue@juniper1.netways.de) Quit (Quit: Leaving.)
[16:14] <Azrael> sage__: ping
[16:14] <Azrael> sage: ping
[16:14] <Azrael> :-)
[16:16] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[16:17] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Read error: Operation timed out)
[16:17] * fsimonce` (~simon@host180-71-dynamic.12-79-r.retail.telecomitalia.it) has joined #ceph
[16:18] * fsimonce (~simon@host142-0-dynamic.42-79-r.retail.telecomitalia.it) Quit (Read error: Operation timed out)
[16:20] * marrusl (~mark@faun.canonical.com) Quit (Quit: sync && halt)
[16:23] * gregmark (~Adium@68.87.42.115) has joined #ceph
[16:29] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[16:30] * i_m (~ivan.miro@gbibp9ph1--blueice2n2.emea.ibm.com) Quit (Quit: Leaving.)
[16:31] * ikrstic (~ikrstic@93-86-203-60.dynamic.isp.telekom.rs) Quit (Read error: Operation timed out)
[16:31] * ikrstic (~ikrstic@93-86-203-60.dynamic.isp.telekom.rs) has joined #ceph
[16:32] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[16:34] * tdasilva_ (~quassel@nat-pool-bos-t.redhat.com) has joined #ceph
[16:46] * brambles (~xymox@s0.barwen.ch) Quit (Remote host closed the connection)
[16:47] * JC (~JC@AMontpellier-651-1-32-204.w90-57.abo.wanadoo.fr) has joined #ceph
[16:48] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[16:50] * kitz (~kitz@admin163-7.hampshire.edu) has joined #ceph
[16:50] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[16:50] * clayb (~kvirc@69.191.241.34) has joined #ceph
[16:50] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[16:50] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Remote host closed the connection)
[16:58] * tdb (~tdb@willow.kent.ac.uk) Quit (Quit: bbl)
[16:59] * marrusl (~mark@faun.canonical.com) has joined #ceph
[16:59] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[17:01] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[17:01] * tracphil (~tracphil@130.14.71.217) has joined #ceph
[17:02] * theanalyst (~abhi@49.32.0.21) Quit (Remote host closed the connection)
[17:02] * drankis (~drankis__@89.111.13.198) Quit (Quit: Leaving)
[17:05] * rektide_ is now known as rektide
[17:07] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Read error: Operation timed out)
[17:08] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[17:09] * kevinc (~kevinc__@client65-40.sdsc.edu) has joined #ceph
[17:09] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[17:10] * markbby (~Adium@168.94.245.2) has joined #ceph
[17:23] * tdb (~tdb@willow.kent.ac.uk) has joined #ceph
[17:25] * lupu1 (~lupu@86.107.101.214) has left #ceph
[17:26] <kitz> I'm trying to figure out if my ssds are slow or if the backing drives are slow. which performance metrics should I be looking at?
[17:30] * narb (~Jeff@38.99.52.10) has joined #ceph
[17:31] * kapil (~ksharma@2620:113:80c0:5::2222) has joined #ceph
[17:33] * adamcrume (~quassel@50.247.81.99) has joined #ceph
[17:38] * wschulze1 (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[17:38] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Read error: Connection reset by peer)
[17:40] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[17:40] * kevinc (~kevinc__@client65-40.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[17:41] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[17:41] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[17:43] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) has joined #ceph
[17:45] * garphy is now known as garphy`aw
[17:46] * kevinc (~kevinc__@client65-40.sdsc.edu) has joined #ceph
[17:46] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[17:46] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[17:53] * ikrstic_ (~ikrstic@178-222-63-84.dynamic.isp.telekom.rs) has joined #ceph
[17:53] * ikrstic (~ikrstic@93-86-203-60.dynamic.isp.telekom.rs) Quit (Read error: Operation timed out)
[18:00] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[18:00] * kevinc (~kevinc__@client65-40.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[18:01] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Remote host closed the connection)
[18:04] * marrusl (~mark@faun.canonical.com) Quit (Quit: sync && halt)
[18:04] * kapil (~ksharma@2620:113:80c0:5::2222) Quit (Quit: Leaving)
[18:05] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) has joined #ceph
[18:07] * tsuraan (~tsuraan@c-71-195-10-137.hsd1.mn.comcast.net) has joined #ceph
[18:08] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[18:09] <tsuraan> Is there a way to configure CRUSH so that each block must be written to two chassis, and each chassis does RAIDn among its OSDs?
[18:09] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[18:09] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[18:11] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[18:12] <tsuraan> also, http://ceph.com/docs/master/rados/operations/crush-map/ lists rule/type options as replicated | raid4, but the actual documentation for type says that only replicated is supported. Ceph does actually support RAID at this point, right?
[18:12] <sage> tsuraan: yes. the raid part is not crush's responsibility.. you just deploy the osds on top of a raided block device
[18:12] * ikrstic__ (~ikrstic@79-101-241-96.dynamic.isp.telekom.rs) has joined #ceph
[18:12] <sage> Azrael: ping
[18:13] <tsuraan> sage: so in that case, each chassis would have a single OSD that lives on some OS or hardware provided RAID, right?
[18:13] * ikrstic_ (~ikrstic@178-222-63-84.dynamic.isp.telekom.rs) Quit (Read error: Operation timed out)
[18:14] <sage> tsuraan: yeah. or you could have multiple raid volumes on a single host.. depends on how many disks and how big you want the raid sets
[18:14] * wschulze1 (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Read error: Operation timed out)
[18:14] * ikrstic (~ikrstic@212-200-204-50.dynamic.isp.telekom.rs) has joined #ceph
[18:17] <tsuraan> sage: is the raid (Jerasure) support not actually released yet? I know I had found some branch that had a foreign ref to the bitbucket Jerasure project, but I'm not even finding it now
[18:18] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:18] <sage> tsuraan: oh.. the erasure coding support is in firefly.
[18:18] * ikrstic_ (~ikrstic@178-223-55-14.dynamic.isp.telekom.rs) has joined #ceph
[18:19] <sage> that's erasure coding across osds, though. i thought you meant traditional RAID on the block device underneath the OSDs
[18:19] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[18:19] * ikrstic (~ikrstic@212-200-204-50.dynamic.isp.telekom.rs) Quit (Read error: Operation timed out)
[18:19] * andreask (~andreask@zid-vpnn113.uibk.ac.at) Quit (Read error: Connection reset by peer)
[18:20] * ikrstic__ (~ikrstic@79-101-241-96.dynamic.isp.telekom.rs) Quit (Read error: Operation timed out)
[18:21] * ikrstic_ (~ikrstic@178-223-55-14.dynamic.isp.telekom.rs) Quit (Read error: Operation timed out)
[18:22] * ikrstic_ (~ikrstic@93-86-13-69.dynamic.isp.telekom.rs) has joined #ceph
[18:25] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[18:25] <tsuraan> yeah, I wasn't quite clear enough. Either way, there currently isn't any way to specify both replication and erasure coding within a single rule, right?
[18:26] <tsuraan> I guess what I want is to be able to specify a rule in "step take" instead of just being able to specify a bucket
[18:28] <tsuraan> so I'd have a single chassis that has a bunch of OSDs in it, and create a rule to do erasure coding among those OSDs. Do the same for another chassis (or a bunch more). then, create a replicating rule that picks from erasure code rules instead of picking OSDs. I don't think it can be done right now, but is it crazy?
[18:31] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Quit: Konversation terminated!)
[18:31] <tsuraan> also, I mentioned it above, but the crush map docs (http://ceph.com/docs/master/rados/operations/crush-map/) only document the replicated type. Is there some other doc that gives the config options for erasure coding (data blocks, parity blocks)?
[18:32] * jordanP (~jordan@185.23.92.11) Quit (Quit: Leaving)
[18:33] <sage> tsuraan: that is part of the 'ec profile' and handled by rados, not crush. not sure where the docs are tho :(
[18:34] <tsuraan> sage: ah, something else for me to learn. Just when I thought I was starting to understand things :)
[18:34] <tsuraan> http://ceph.com/docs/master/rados/operations/erasure-code-profile/
[18:35] * bandrus (~Adium@c-4f66cf1e-74736162.cust.telenor.se) Quit (Ping timeout: 480 seconds)
[18:36] <tsuraan> and http://ceph.com/docs/master/rados/operations/pools/#create-a-pool , I guess
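Tying the links above together: the data/coding chunk counts live in an erasure-code profile attached to the pool, not in CRUSH. A minimal firefly-era sketch (the profile and pool names here are made up for illustration):

```
# profile with 4 data chunks (k) and 2 coding chunks (m), host failure domain
ceph osd erasure-code-profile set myprofile k=4 m=2 ruleset-failure-domain=host

# create a pool using that profile (128 placement groups)
ceph osd pool create ecpool 128 128 erasure myprofile
```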
[18:36] * ikrstic_ (~ikrstic@93-86-13-69.dynamic.isp.telekom.rs) Quit (Read error: Operation timed out)
[18:36] <saturnine> The current release of Ceph is 0.80.2 correct?
[18:36] * ikrstic_ (~ikrstic@178-222-7-177.dynamic.isp.telekom.rs) has joined #ceph
[18:37] <sage> saturnine: we pulled the 0.80.2 packages due to an rgw issue; will have 0.80.3 out real soon, hopefully today
[18:37] <saturnine> ah, that explains why the upgrade wasn't working :)
[18:38] <saturnine> Got a customer that needs to update swift ACLs on existing buckets. Apparently that's fixed in 0.80.2+ :D
[18:41] <tsuraan> How tolerant is ceph of being partitioned? Suppose I have my laptop running two OSDs, and my desktop running two OSDs, a Ceph cluster including all four, and a replication factor of 2. If I take my laptop somewhere off my desktop's network, will Ceph continue to function on both my machines, and will it heal itself when they are both on the same network again?
[18:42] <saturnine> Is there a way to set ACLs on buckets in the meantime (make contents web accessible).
[18:44] * hyperbaba (~hyperbaba@80.74.175.250) has joined #ceph
[18:50] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[18:51] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) Quit (Ping timeout: 480 seconds)
[18:51] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[18:54] * vilobhmm (~vilobhmm@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) has joined #ceph
[18:54] * Cube (~Cube@12.248.40.138) has joined #ceph
[18:57] * adamcrume (~quassel@50.247.81.99) Quit (Remote host closed the connection)
[18:58] * rbuzzell (~root@li285-199.members.linode.com) has joined #ceph
[19:00] <rbuzzell> Hello, I was wondering if anyone has had any luck using ceph-deploy on centos 7.0.1406
[19:00] <alfredodeza> rbuzzell: what issues are you having?
[19:00] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[19:00] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[19:01] <iggy> tsuraan: no, ceph isn't designed to be used like that (and understandably it will function like hell if you try)
[19:02] <rbuzzell> It says it has no support for the OS. But I saw patch notes for rhel 7, which is binary compatible.
[19:02] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[19:02] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[19:02] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (Remote host closed the connection)
[19:02] * sarob (~sarob@ip-64-134-224-208.public.wayport.net) has joined #ceph
[19:03] <tsuraan> iggy: ok, thanks.
[19:03] <rbuzzell> alfredodeza: It says it has no support for the OS. But I saw patch notes for rhel 7, which is binary compatible.
[19:03] <alfredodeza> oh, good catch
[19:03] * mjeanson (~mjeanson@bell.multivax.ca) has joined #ceph
[19:03] <alfredodeza> let me check
[19:04] <alfredodeza> we might need to do a release for that to work though
[19:04] <tsuraan> has anything from https://wiki.ceph.com/Planning/Blueprints/Dumpling/RGW_Geo-Replication_and_Disaster_Recovery been implemented? I guess I'm just really interested in how async replication of a ceph cluster can work
[19:04] <alfredodeza> which is unfortunate, the better temporary solution would be a --force flag
[19:04] <alfredodeza> so that if we hit this again you don't need to wait on a new release
[19:04] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) Quit (Quit: valeech)
[19:05] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[19:06] <iggy> tsuraan: rgw geo replication isn't handled at the ceph layer really (yet)
[19:06] <iggy> it's done by another outside process
[19:06] <iggy> and it only does anything for rgw
[19:06] <tsuraan> that's just at the s3-type layer, right?
[19:06] <iggy> yes
[19:09] <rbuzzell> alfredodeza: Awesome, except ceph-deploy new node1 node2 node3 isn't taking --force as a flag.
[19:09] <alfredodeza> rbuzzell: I meant, I should add that :)
[19:10] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) has joined #ceph
[19:11] <rbuzzell> alfredodeza: Ah, so I have to wait for a release then? This was going to be my entertainment for the next few days. :(
[19:11] <alfredodeza> rbuzzell: issue 8816
[19:11] <kraken> alfredodeza might be talking about http://tracker.ceph.com/issues/8816 [ceph-deploy doesn't allow installation of CentOS7]
[19:12] * narb (~Jeff@38.99.52.10) Quit (Read error: Operation timed out)
[19:12] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) Quit (Read error: Connection reset by peer)
[19:12] <alfredodeza> rbuzzell: you could wait for it to hit master and just use that
[19:12] <alfredodeza> alternatively you could also just edit the source :)
[19:12] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) has joined #ceph
[19:13] * mlausch (~mlausch@2001:8d8:1fe:7:9577:39a:6dab:6f4a) Quit (Remote host closed the connection)
[19:13] * narb (~Jeff@38.99.52.10) has joined #ceph
[19:15] <rbuzzell> alfredodeza: Unfortunately I don't program very well. More ops than dev :( I was looking for that section of code so I could try it though.
[19:15] <alfredodeza> rbuzzell: I can walk you through
[19:15] <alfredodeza> it is just one line
[19:15] <alfredodeza> and you would need to comment it out
[19:17] <rbuzzell> alfredodeza: sure, that would be great actually
[19:17] <alfredodeza> rbuzzell: actually this looks like 5 lines
[19:17] <alfredodeza> :)
[19:18] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[19:19] <alfredodeza> in ceph_deploy/hosts/__init__.py in the get() function comment out the `if not codename` block so that it looks like this: http://fpaste.org/117346/09913014/
[19:19] <alfredodeza> rbuzzell: ^ ^
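For readers of the log: the edit being described is commenting out a platform-support guard. The snippet below is a hypothetical paraphrase of that kind of check, not ceph-deploy's actual source (the fpaste link above has the real lines); the `SUPPORTED` table and function name are made up for illustration.

```python
# Hypothetical sketch of a distro-support guard like the one being
# commented out; names and the SUPPORTED table are made up for illustration.
SUPPORTED = {
    ('centos', '6'),
    ('rhel', '6'),
    ('ubuntu', '12.04'),
    ('ubuntu', '14.04'),
}

def check_supported(distro, release):
    # The workaround is to skip the guard entirely, i.e. comment it out:
    # if (distro, release) not in SUPPORTED:
    #     raise RuntimeError('unsupported platform: %s %s' % (distro, release))
    return distro, release
```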
[19:20] * adamcrume (~quassel@2601:9:6680:47:c038:bea6:6d64:3289) has joined #ceph
[19:21] <rbuzzell> alfredodeza: All right, I'll see if that works, thank you very much
[19:24] * vbellur (~vijay@122.167.73.239) has joined #ceph
[19:25] * erice (~erice@50.245.231.209) Quit (Ping timeout: 480 seconds)
[19:33] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[19:34] * mrjack_ (mrjack@office.smart-weblications.net) Quit (Ping timeout: 480 seconds)
[19:34] * mrjack_ (mrjack@pD95F1A1D.dip0.t-ipconnect.de) has joined #ceph
[19:37] * ikrstic__ (~ikrstic@178-222-64-99.dynamic.isp.telekom.rs) has joined #ceph
[19:38] * rendar (~I@host223-182-dynamic.19-79-r.retail.telecomitalia.it) Quit (Read error: Operation timed out)
[19:39] * ganders (~root@200.0.230.235) Quit (Quit: WeeChat 0.4.1)
[19:40] * ganders (~root@200.0.230.235) has joined #ceph
[19:40] * ikrstic (~ikrstic@178-221-216-85.dynamic.isp.telekom.rs) has joined #ceph
[19:43] * ikrstic_ (~ikrstic@178-222-7-177.dynamic.isp.telekom.rs) Quit (Ping timeout: 480 seconds)
[19:44] * rendar (~I@host223-182-dynamic.19-79-r.retail.telecomitalia.it) has joined #ceph
[19:46] * ikrstic__ (~ikrstic@178-222-64-99.dynamic.isp.telekom.rs) Quit (Ping timeout: 480 seconds)
[19:46] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[19:46] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) has joined #ceph
[19:47] * ikrstic_ (~ikrstic@178-222-11-8.dynamic.isp.telekom.rs) has joined #ceph
[19:48] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) Quit (Quit: Leaving.)
[19:49] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) has joined #ceph
[19:49] * tsuraan (~tsuraan@c-71-195-10-137.hsd1.mn.comcast.net) Quit (Quit: leaving)
[19:50] * ikrstic (~ikrstic@178-221-216-85.dynamic.isp.telekom.rs) Quit (Ping timeout: 480 seconds)
[19:57] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[19:57] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) Quit (Read error: Connection reset by peer)
[19:57] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) has joined #ceph
[19:59] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[20:02] * baylight1 (~tbayly@69-195-66-4.unifiedlayer.com) has joined #ceph
[20:02] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) Quit (Read error: Connection reset by peer)
[20:07] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[20:08] * xarses (~andreww@12.164.168.117) has joined #ceph
[20:10] * scuttlemonkey is now known as scuttle|afk
[20:13] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Remote host closed the connection)
[20:13] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[20:15] * Nacer_ (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[20:15] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Read error: Connection reset by peer)
[20:16] * sarob (~sarob@ip-64-134-224-208.public.wayport.net) Quit (Remote host closed the connection)
[20:16] * sarob (~sarob@ip-64-134-224-208.public.wayport.net) has joined #ceph
[20:20] <el_seano> say you have a four node cluster with four drives each
[20:21] * sarob (~sarob@ip-64-134-224-208.public.wayport.net) Quit (Read error: Operation timed out)
[20:21] <el_seano> would it make sense to assign an OSD to each drive independently, or should you build the nodes with RAID and just use one OSD?
[20:25] <brad_mssw> I think ceph's official stance is that raid is dead (meaning you should use an osd per disk)... however, personally I find it a real pain to replace failed disks/osds with ceph, so I use raid 5 with auto-rebuild on drive replacement (using a real hw raid controller)
[20:25] <el_seano> interesting
[20:26] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) has joined #ceph
[20:31] <gchristensen> the attitude is your redundancy shouldn't be at the machine level, but at a much greater level, and if that is the case, why spend the money
[20:31] <el_seano> yeah, it makes sense
[20:32] <el_seano> it feels a little bit like a leap of faith, but that's probably just because I have nowhere near the confidence with OSDs yet that I do with md
[20:32] <el_seano> and maybe it's just my paranoid side, but I kind of like the dual layers of redundancy
[20:32] <el_seano> although it is obviously a compromise in efficent allocation of resources
[20:33] <gchristensen> might be better spent on backups :)
[20:34] <el_seano> heh
[20:34] <el_seano> I'm actually trying to implement ceph for our primary client backup storage >_>
[20:34] <el_seano> or rather, I'm testing it to see if it would be suitable
[20:34] <gchristensen> hrm
[20:35] <el_seano> the question is whether or not we can make something scalable with the resources we presently have, or if we should spring for a SAN
[20:36] <gchristensen> how are you planning on using ceph? object storage?
[20:36] <el_seano> filesystem
[20:36] <gchristensen> don't use ceph filesystem for your backups
[20:37] <gchristensen> "Ceph FS is currently not recommended for production data." -- avoid not production ready products for backups :P
[20:37] * vmx (~vmx@dslb-084-056-023-132.pools.arcor-ip.net) Quit (Quit: Leaving)
[20:37] <el_seano> deeerp
[20:38] * rturk|afk is now known as rturk
[20:41] <iggy> it's mostly not recommended because the MDS is still a single point of failure
[20:41] <iggy> as long as you can stand a little bit of downtime if you lose an MDS, you should be mostly okay
[20:42] <el_seano> I thought the MDS just helped with offsetting the processor load on nodes?
[20:42] <iggy> the MDS serves filesystem metadata to the cephfs clients
[20:43] <iggy> the data is actually stored in the cluster, the MDS just caches that info, handles locking, etc.
[20:51] * brad_mssw (~brad@shop.monetra.com) Quit (Remote host closed the connection)
[20:53] * brad_mssw (~brad@shop.monetra.com) has joined #ceph
[20:54] * reed (~reed@75-101-54-131.dsl.static.sonic.net) has joined #ceph
[20:56] * aldavud (~aldavud@213.55.184.208) has joined #ceph
[20:59] * zidarsk81 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[21:00] * zidarsk81 (~zidar@89-212-142-10.dynamic.t-2.net) Quit (Remote host closed the connection)
[21:05] * gregphone (~gregphone@38.122.20.226) has joined #ceph
[21:05] * gregphone (~gregphone@38.122.20.226) Quit ()
[21:05] * houkouonchi-home (~linux@pool-71-189-160-82.lsanca.fios.verizon.net) Quit (Read error: No route to host)
[21:10] * hyperbaba (~hyperbaba@80.74.175.250) Quit (Remote host closed the connection)
[21:10] <el_seano> is there any documentation that explains CRUSH in more detail?
[21:10] <el_seano> I'm mostly finding references to it, and pages on how to tweak it
[21:11] <gchristensen> el_seano: http://ceph.com/papers/weil-crush-sc06.pdf :)
[21:11] <el_seano> gchristensen: thanks!
[21:12] <gchristensen> that might be too much detail
[21:15] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[21:15] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[21:17] * jehb (~jehb@nat-pool-rdu-t.redhat.com) has joined #ceph
[21:22] * houkouonchi-home (~linux@pool-71-189-160-82.lsanca.fios.verizon.net) has joined #ceph
[21:24] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Read error: Operation timed out)
[21:25] * brad_mssw (~brad@shop.monetra.com) Quit (Quit: Leaving)
[21:29] <el_seano> it was actually pretty interesting
[21:29] <el_seano> I mean, granting that I skimmed :P
[21:30] <angdraug> joshd: hi, about that thing from yesterday
[21:30] <el_seano> but specifically, that it's attempting to approach a uniform distribution over the OSDs, given weights
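The weighted-uniform behavior el_seano mentions can be illustrated with a toy straw2-style draw (a simplified sketch, not Ceph's actual CRUSH code): each OSD gets a hash-derived pseudo-random draw scaled by its weight, and the highest draw wins, so placement is deterministic per PG yet proportional to weight across many PGs.

```python
import hashlib
import math

def straw2_choose(pg_id, osds):
    """Toy straw2-style selection. osds maps OSD name -> weight (> 0)."""
    best, best_draw = None, -math.inf
    for name, weight in osds.items():
        # deterministic per (pg, osd) pair: same inputs, same placement
        h = int(hashlib.sha256(f"{pg_id}:{name}".encode()).hexdigest(), 16)
        u = (h % (2**32) + 1) / (2**32 + 1)  # uniform in (0, 1]
        draw = math.log(u) / weight          # straw2: largest ln(u)/w wins
        if draw > best_draw:
            best, best_draw = name, draw
    return best
```

Over many PGs, an OSD with twice the weight collects roughly twice as many PGs, and changing one weight only remaps the PGs whose winning draw changes.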
[21:31] <angdraug> we're still looking for clues, so far eliminated a bunch of possible network related problems, but didn't come closer to root cause
[21:36] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[21:40] * mtl2 (~Adium@c-67-174-109-212.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[21:40] <angdraug> btw in case it's relevant we're having this with dumpling v0.67.9
[21:42] <angdraug> joshd: here's something new:
[21:42] <angdraug> root@node-9:~# /root/ceph-osd -i 60 --mkfs --mkjournal
[21:42] <angdraug> 2014-07-11 19:39:01.973111 7f788c3c97c0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
[21:43] <angdraug> 2014-07-11 19:39:01.982226 7f788c3c97c0 -1 ** ERROR: error creating empty object store in /var/lib/ceph/osd/ceph-60: (22) Invalid argument
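To get more context around a failure like this, the debug levels joshd suggested the day before can be set in ceph.conf before retrying, e.g.:

```
[osd]
    debug osd = 20
    debug filestore = 20
    debug journal = 20
```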
[21:45] * tracphil (~tracphil@130.14.71.217) Quit (Quit: leaving)
[21:57] * jehb (~jehb@nat-pool-rdu-t.redhat.com) Quit (Read error: Operation timed out)
[21:58] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[22:02] * kapil (~kapil@p54AFFC76.dip0.t-ipconnect.de) has joined #ceph
[22:03] * rweeks (~rweeks@pat.hitachigst.com) has joined #ceph
[22:04] <kapil> hi.. can we have the rbd kernel module on a virtual machine?
[22:05] * kapil1 (kapil@b.clients.kiwiirc.com) has joined #ceph
[22:06] <seapasulli> this is dumb and I have no way to test but have you tried moving /var/lib/ceph/osd/ceph-60 out of the way and making a new directory, then trying again?
[22:06] * aldavud (~aldavud@213.55.184.208) Quit (Ping timeout: 480 seconds)
[22:06] <seapasulli> angdraug:
[22:08] <iggy> kapil: yes, it's more efficient to do it in the host, but no reason you can't do it inside the guest
[22:08] <kapil> While trying to map rbd images to kernel module on SLE11-SP3 VM machine, I am
[22:08] <kapil> getting rbd module not found error:
[22:09] <kapil> teuthida-4-0:/home/jenkins/cephdeploy-cluster # sudo rbd map foo
[22:09] <kapil> ERROR: modinfo: could not find module rbd
[22:09] <kapil> FATAL: Module rbd not found.
[22:09] <kapil> rbd: modprobe rbd failed! (256)
[22:10] <iggy> the kernel is probably too old or the distro decided not to build it
[22:11] <kapil> ok, maybe it's the latter. The kernel version is 3.0.76-0.11-default
[22:11] <kapil> 3.0.76-0.11-default is not a problem I think
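A generic way to check whether the running kernel ships an rbd module at all (not SLES-specific; paths assume a standard module layout):

```
uname -r                                        # running kernel version
modinfo rbd                                     # errors out if no rbd module is known
find /lib/modules/$(uname -r) -name 'rbd.ko*'   # look for the module file itself
```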
[22:14] * kapil1 (kapil@b.clients.kiwiirc.com) has left #ceph
[22:15] * jehb (~jehb@nat-pool-rdu-u.redhat.com) has joined #ceph
[22:19] * tdasilva_ (~quassel@nat-pool-bos-t.redhat.com) Quit (Read error: Operation timed out)
[22:19] * tdasilva (~quassel@nat-pool-bos-t.redhat.com) Quit (Read error: Operation timed out)
[22:21] * rturk is now known as rturk|afk
[22:22] * rturk|afk is now known as rturk
[22:25] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[22:27] * vilobhmm (~vilobhmm@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[22:28] * ikrstic_ (~ikrstic@178-222-11-8.dynamic.isp.telekom.rs) Quit (Ping timeout: 480 seconds)
[22:30] * ganders (~root@200.0.230.235) Quit (Quit: WeeChat 0.4.1)
[22:31] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[22:35] * b0e (~aledermue@x2f25102.dyn.telefonica.de) has joined #ceph
[22:36] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[22:38] * mrjack_ (mrjack@pD95F1A1D.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[22:38] * jehb (~jehb@nat-pool-rdu-u.redhat.com) Quit (Ping timeout: 480 seconds)
[22:39] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[22:40] * rturk is now known as rturk|afk
[22:40] * mrjack_ (mrjack@office.smart-weblications.net) has joined #ceph
[22:41] * kapil (~kapil@p54AFFC76.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[22:44] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[22:47] * jehb (~jehb@nat-pool-rdu-t.redhat.com) has joined #ceph
[22:49] <iggy> that's pretty old
[22:49] <iggy> I think rbd was added in 3.2?
[22:55] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Quit: Leaving...)
[22:57] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[22:59] * lupu1 (~lupu@86.107.101.214) has joined #ceph
[23:00] * Cube (~Cube@12.248.40.138) Quit (Quit: Leaving.)
[23:00] * jehb (~jehb@nat-pool-rdu-t.redhat.com) Quit (Read error: Operation timed out)
[23:00] * [fred] (fred@earthli.ng) Quit (Quit: +++ATH0)
[23:06] <angdraug> seapasulli: haven't tried that one yet, thanks
[23:08] * JC1 (~JC@AMontpellier-651-1-32-204.w90-57.abo.wanadoo.fr) has joined #ceph
[23:10] * [fred] (fred@earthli.ng) has joined #ceph
[23:11] * kevinc (~kevinc__@client65-40.sdsc.edu) has joined #ceph
[23:12] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Read error: Operation timed out)
[23:12] * JC (~JC@AMontpellier-651-1-32-204.w90-57.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[23:13] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[23:15] <angdraug> at least I'll know if the problem is osd, mon, or network
[23:17] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) Quit (Quit: Leaving.)
[23:17] * scuttle|afk is now known as scuttlemonkey
[23:18] <joshd> angdraug: there was a bug in 0.67.8, fixed in 0.67.9 that could result in the symptoms you saw (and yesterday the log showed it was running 0.67.8)
[23:18] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[23:22] * kevinc (~kevinc__@client65-40.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[23:24] <angdraug> oh darn
[23:25] <angdraug> thanks!
[23:25] <angdraug> do you have a bug number?
[23:27] <angdraug> 8278?
[23:28] <angdraug> bd5d6f116416d1b410d57ce00cb3e2abf6de102b?
[23:29] <joshd> bd5d6f116416d1b410d57ce00cb3e2abf6de102b
[23:29] <joshd> yeah :)
[23:30] <angdraug> I'll be back :D
[23:30] * phantomcircuit (~phantomci@blockchain.ceo) Quit (Ping timeout: 480 seconds)
[23:33] * alop (~abelopez@128-107-239-234.cisco.com) has joined #ceph
[23:35] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[23:42] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) Quit (Quit: valeech)
[23:43] * Hell_Fire_ (~HellFire@123-243-155-184.static.tpgi.com.au) has joined #ceph
[23:44] * Hell_Fire (~HellFire@123-243-155-184.static.tpgi.com.au) Quit (Read error: Network is unreachable)
[23:46] * kfei (~root@114-27-86-161.dynamic.hinet.net) Quit (Read error: Operation timed out)
[23:56] <angdraug> joshd: problem solved, you were right
[23:58] <joshd> angdraug: awesome
[23:59] <angdraug> looks like our latest mirror sync messed something up and pulled an older ceph from somewhere
[23:59] <seapasulli> anyone know if I can upgrade radosgw without upgrading the rest of ceph at this time from 80.1 to 82.1 I think
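[Editor's note: for readers who hit the same symptom as angdraug above (a stale package pulled in by a mirror sync), a quick sanity check is to confirm which version is actually installed and which version each running daemon reports. This is a hedged sketch: `ceph --version` and the admin-socket `version` command exist in the dumpling-era (0.67.x) releases discussed here, but the socket path assumes the default `/var/run/ceph` location and an OSD id of 0.]

```shell
# Version of the locally installed ceph binaries
ceph --version

# Version reported by a *running* OSD daemon via its admin socket
# (default socket path; adjust the cluster name and osd id as needed)
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok version
```

If the two disagree, the daemon is still running an older binary (or, as in the mirror-sync case above, the package repository served a stale version), and a package reinstall plus daemon restart is the usual fix.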

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.