#ceph IRC Log

IRC Log for 2015-02-13

Timestamps are in GMT/BST.

[0:01] * erikmack (~user@2602:306:37ec:5bb0::43) Quit (Ping timeout: 480 seconds)
[0:02] * elder__ (~elder@210.177.145.249) has joined #ceph
[0:06] <cholcombe973> ceph: is there a way to monitor if all the PGs have been migrated off an OSD without running ceph pg dump over and over?
[0:08] * oms101 (~oms101@p20030057EA7F5D00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[0:10] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[0:17] * oms101 (~oms101@p20030057EA748900EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[0:17] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) Quit (Read error: Connection reset by peer)
[0:17] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) has joined #ceph
[0:22] <wkennington> is there anything i can do to speed up small reads / writes?
[0:22] <wkennington> i feel like traversing a git repo in cephfs is horrendously slow
[0:22] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[0:22] <wkennington> or is latency always going to be a problem here
[0:23] <Sysadmin88> limited by your hardware?
[0:23] <Sysadmin88> or network?
[0:23] <wkennington> network no
[0:23] <wkennington> i mean, they are spinning disks so commits will always be somewhat slow
[0:23] <JCLM> cholcombe973: If all PGs are active+clean and ceph osd pool stats {poolname} shows no recovery activity, everything has been moved. If recovery has stalled, not all PGs will be straight "active+clean". So ceph -s should be sufficient, I guess, complemented with the former pool stats
[0:24] <wkennington> it just seems like the fs metadata takes too long to deal with
[0:24] <JCLM> wkennington: Increasing the MDS cache size (mds_cache_size) could help
[0:24] <wkennington> how can i measure how big it is at the moment?
[0:24] <wkennington> or how utilized it is?
[0:26] <JCLM> ceph daemon mds.{id} config show | grep mds_cache_size
[0:26] <JCLM> Default is 100000 (<100KB)
[0:27] <wkennington> how bad would it be to say make it 1G
[0:27] <JCLM> Could be 100MB. Don't remember the default unit size
[0:27] <JCLM> Will grow your RAM usage
[0:27] <wkennington> sure
[0:27] <wkennington> but this isn't a problem
[0:28] <gregsfortytwo> the units are "number of dentries in the cache"
[0:28] <wkennington> oh
[0:28] <gregsfortytwo> they're...a couple kilobytes each?
[0:28] <wkennington> alright
[0:28] <VisBits> mdsmap e50: 1/1/1 up {0=ceph0-node1=up:creating}, 1 up:standby
[0:28] <VisBits> how long is it reasonable for 'creating' to take?
[0:28] <JCLM> Thanks gregsfortytwo
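
The knob discussed above, mds_cache_size, counts dentries rather than bytes. A minimal sketch of checking and raising it, assuming an MDS named mds.a and an arbitrary illustrative value of 500000 (roughly a few hundred MB of RAM at a couple of kilobytes per dentry):

    # show the current value via the admin socket, as suggested above
    ceph daemon mds.a config show | grep mds_cache_size

    # raise it at runtime (lasts until the MDS restarts)
    ceph tell mds.a injectargs '--mds-cache-size 500000'

    # persist it across restarts in ceph.conf on the MDS host
    [mds]
        mds cache size = 500000
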
[0:31] <gregsfortytwo> VisBits: not very long at all
[0:31] <VisBits> mines just sitting here... sigh lol
[0:31] <gregsfortytwo> what's the pgmap output say?
[0:31] * Andreas-IPO (~andreas@2a01:2b0:2000:11::cafe) Quit (Read error: No route to host)
[0:32] * Andreas-IPO (~andreas@2a01:2b0:2000:11::cafe) has joined #ceph
[0:32] <VisBits> pgmap v34554: 6208 pgs, 4 pools, 2075 kB data, 20 objects
[0:32] <VisBits> all my pgs are stuck it looks like
[0:32] <VisBits> is it possible to dump out all the dead pgs?
[0:33] <gregsfortytwo> "ceph pg dump"
[0:33] <gregsfortytwo> that's certainly why your MDS isn't moving out of creating state
[0:33] <VisBits> stale+down+peering lol
[0:33] <gregsfortytwo> probably your OSDs are dead or the CRUSH map is bad or something, but somebody else will have to help with that
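
A few queries narrower than a full ceph pg dump that help in this kind of situation (long-standing commands, though output details vary by release):

    # only the PGs stuck in a problem state
    ceph pg dump_stuck stale
    ceph pg dump_stuck inactive
    ceph pg dump_stuck unclean
    # per-problem summary, including the acting OSD set of each affected PG
    ceph health detail
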
[0:34] <VisBits> my osd are up, but i think i lost some data
[0:34] * debian112 (~bcolbert@24.126.201.64) Quit (Quit: Leaving.)
[0:35] <VisBits> crushmap has the right data in it
[0:35] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[0:36] <cholcombe973> JCLM: thanks. I was looking for a way to do that with bash though or python
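
A rough bash sketch of the check cholcombe973 is describing, built on JCLM's suggestion above; the grep patterns assume the usual "ceph pg stat" output ("vNNN: X pgs: Y active+clean; ...") and may need adjusting for other releases:

    #!/bin/bash
    # Poll until every PG reports active+clean (i.e. nothing left to migrate).
    while true; do
        stat=$(ceph pg stat)
        total=$(echo "$stat" | grep -oE '[0-9]+ pgs' | grep -oE '[0-9]+')
        clean=$(echo "$stat" | grep -oE '[0-9]+ active\+clean' | head -1 | grep -oE '[0-9]+')
        if [ -n "$total" ] && [ "$total" = "$clean" ]; then
            echo "all $total PGs are active+clean"
            break
        fi
        echo "waiting: $clean/$total PGs active+clean"
        sleep 10
    done
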
[0:37] * danieagle (~Daniel@201-95-103-54.dsl.telesp.net.br) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[0:39] * SPACESHIP (~chatzilla@d24-141-231-126.home.cgocable.net) has joined #ceph
[0:40] <SPACESHIP> Do ceph pg objects suffer from fragmentation issues?
[0:40] <SPACESHIP> not the underlying filesystem, but ceph itself
[0:40] <cholcombe973> SPACESHIP: i don't believe so
[0:42] <SPACESHIP> I ask because my iops on my test cluster went from being able to handle 1500-2000 consistently to barely able to hold 300 after 24 hours of straight testing
[0:42] <SPACESHIP> I thought it might be the filesystem, so I blew the osd's away one by one and allowed it to recover
[0:42] <SPACESHIP> after recovery, same issue
[0:43] <SPACESHIP> looking at iostat showed it was doing reads even under pure writes
[0:43] <VisBits> are you using a ssd for journal?
[0:43] <SPACESHIP> 80GB S3500
[0:43] <VisBits> make sure trim is working
[0:43] <SPACESHIP> 1 SSD for each OSD
[0:43] <cholcombe973> does your filesystem have atimes enabled?
[0:43] <VisBits> if its doing garbage collection on the fly its going to be slow
[0:43] <SPACESHIP> atimes disabled
[0:43] <cholcombe973> ok
[0:44] * ircolle (~Adium@2601:1:a580:145a:c803:d242:7966:7b0b) Quit (Quit: Leaving.)
[0:44] <SPACESHIP> OSD are backed by 10k SAS drives
[0:44] <SPACESHIP> 900GB seagate 10.5k
[0:45] <VisBits> sounds like 150iops to me
[0:45] * yanzheng (~zhyan@182.139.205.12) has joined #ceph
[0:45] <cholcombe973> what FS are you backing these with?
[0:45] <SPACESHIP> XFS
[0:45] <cholcombe973> hmm ok
[0:46] <cholcombe973> what was your create command?
[0:46] <VisBits> probably deploy?
[0:46] <VisBits> prepare?
[0:46] * xarses (~andreww@12.164.168.117) has joined #ceph
[0:46] <cholcombe973> oh right.. i forgot about the ceph prepare thingy
[0:46] <cholcombe973> SPACESHIP: could you check the xfs fragmentation level?
[0:47] <cholcombe973> it's part of the xfs admin commands *i think*
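
The check being asked for here is xfs_db's frag command, run read-only against the OSD's data device (the device path below is a placeholder):

    # prints "actual ..., ideal ..., fragmentation factor X%"
    xfs_db -r -c frag /dev/sdb1
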
[0:47] <SPACESHIP> I used ceph-disk prepare
[0:47] <cholcombe973> that's fine. i'm just curious what xfs says it's fragmented at
[0:47] <SPACESHIP> hold on
[0:49] <SPACESHIP> fragmentation factor 0.15%
[0:49] <cholcombe973> wow not bad
[0:49] <SPACESHIP> as I said
[0:49] <cholcombe973> i was guessing 50% or more
[0:49] <SPACESHIP> I blew away the OSDs
[0:49] <cholcombe973> right
[0:49] <SPACESHIP> and recreated
[0:49] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[0:50] <cholcombe973> what's your write size ?
[0:50] <SPACESHIP> let ceph recover
[0:50] <SPACESHIP> doing 8k fio tests
[0:50] <SPACESHIP> seq write is fine
[0:51] <SPACESHIP> right where I was getting before
[0:51] <SPACESHIP> but randwrite is still terrible compared to a fresh cluster
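
For context, an 8k random-write fio job of the sort described here might look like the following; this is a generic illustration, not the actual job used above (device, queue depth and runtime are made up):

    fio --name=randwrite-8k --rw=randwrite --bs=8k --direct=1 \
        --ioengine=libaio --iodepth=32 --runtime=60 --time_based \
        --filename=/dev/rbd0
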
[0:51] <cholcombe973> hmm
[0:52] <SPACESHIP> seq write did tank as well but after the destroy and recreate the seq write went back
[0:52] <SPACESHIP> of the OSD (not cluster)
[0:52] * JCLM (~JCLM@73.189.243.134) Quit (Quit: Leaving.)
[0:53] <cholcombe973> do you have trim enabled for your ssds? just curious
[0:53] <cholcombe973> i saw it was enabled in kernel 3.18 for ceph
[0:53] <VisBits> [ceph@ceph0-mon0 ~]$ ceph mds rm mds.0
[0:53] <VisBits> Invalid command: mds.0 doesn't represent an int
[0:53] <SPACESHIP> centos 7, so 3.10 :/
[0:54] <cholcombe973> ok
[0:54] * dgbaley27 (~matt@c-67-176-93-83.hsd1.co.comcast.net) has joined #ceph
[0:54] <cholcombe973> i'm just taking a guess because i've never built a cluster with ssd's backing it
[0:56] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[0:56] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) Quit ()
[0:57] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[0:57] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) Quit ()
[0:57] <SPACESHIP> VisBits: have you tried ceph mds rm 0
[0:57] <VisBits> ceph> ceph mds rm 0
[0:57] <VisBits> no valid command found; 10 closest matches:
[0:57] <VisBits> osd pool set-quota <poolname> max_objects|max_bytes <val>
[0:57] <VisBits> ;p;
[0:57] <VisBits> lol*
[0:57] <VisBits> this is about how my day is going
[0:58] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[0:58] <SPACESHIP> you're inside the ceph cli
[0:58] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) Quit (Read error: Connection reset by peer)
[0:58] * fghaas1 (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[0:58] <SPACESHIP> so try mds rm 0
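
Spelling out the distinction being made here: inside the interactive "ceph>" shell the leading "ceph" must be dropped, and the earlier error shows that the first argument has to be an integer rather than "mds.0". The exact arguments vary by release (some versions also expect the daemon name after the numeric GID), so treat this as a sketch:

    # from a normal shell prompt
    ceph mds rm 0
    # from inside the interactive "ceph>" shell, drop the leading "ceph"
    mds rm 0
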
[0:58] * fghaas1 (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) Quit ()
[0:58] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:58] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[0:59] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) Quit ()
[0:59] * JCLM (~JCLM@73.189.243.134) has joined #ceph
[0:59] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[0:59] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) Quit ()
[1:00] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[1:00] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) Quit ()
[1:00] <SPACESHIP> some of the ceph commands are inconsistent, sometimes they want the full name, others just the number value
[1:00] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[1:00] * dmsimard is now known as dmsimard_away
[1:00] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) Quit ()
[1:01] <SPACESHIP> i dont know about mds but removing osds is like that
[1:01] <SPACESHIP> half the commands expect just the numerical value
[1:01] <SPACESHIP> others the full name value
[1:01] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[1:01] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) Quit ()
[1:01] <VisBits> i feel like this software is alpha still
[1:01] <VisBits> lol
[1:02] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[1:03] * fghaas1 (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[1:03] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) Quit (Read error: Connection reset by peer)
[1:04] <SPACESHIP> nah, all software is like this :P
[1:04] <SPACESHIP> almost all anyways, but the simplest of tools
[1:04] <SPACESHIP> lol
[1:04] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[1:04] * fghaas1 (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) Quit ()
[1:05] * Edmond21 (~Edmond21@95.141.20.201) has joined #ceph
[1:05] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) Quit (Read error: Connection reset by peer)
[1:05] <Edmond21> High Quality photos and videos
[1:05] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[1:06] * fghaas1 (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[1:06] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) Quit (Read error: Connection reset by peer)
[1:06] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[1:06] * fghaas1 (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) Quit (Read error: Connection reset by peer)
[1:07] * Edmond21 (~Edmond21@95.141.20.201) Quit (autokilled: No spam. Do not contact support@oftc.net. (2015-02-13 00:07:12))
[1:07] * fghaas1 (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[1:08] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) Quit ()
[1:08] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[1:08] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[1:08] * fghaas1 (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) Quit (Read error: Connection reset by peer)
[1:09] * fghaas1 (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[1:09] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) Quit (Read error: Connection reset by peer)
[1:10] * joef1 (~Adium@2601:9:280:f2e:cd81:ffde:c3e7:3404) has joined #ceph
[1:11] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[1:13] * sudocat1 (~davidi@192.185.1.20) Quit (Ping timeout: 480 seconds)
[1:14] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[1:14] <burley> what's the best way to tell what osd's a pg is on
[1:17] <dmick> ceph pg dump will show them all, ceph pg <pgid> query will show a specific one
[1:18] <burley> ceph health detail, the data on the end, is that the list of osds?
[1:18] <dmick> you mean if there are unclean pgs in the list?
[1:19] <dmick> 'acting' means 'set of acting OSDs', yes
[1:19] <dmick> (ordered, first one is primary)
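
Concretely, for a single PG (the pgid 2.1f below is only an example):

    # quick mapping: prints the up and acting OSD sets for one PG
    ceph pg map 2.1f
    # full JSON detail for one PG (state, up/acting sets, recovery info)
    ceph pg 2.1f query
    # everything at once; the "up" and "acting" columns list the OSDs
    ceph pg dump
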
[1:20] <SPACESHIP> Oh question
[1:20] <SPACESHIP> does setting weight to 0 still allow disk to be used for reads?
[1:21] <SPACESHIP> I'm assuming existing objects will continue to read and write if the OSD is in and up, just new data won't be allocated to it
[1:22] <JCLM> SPACESHIP: No. With a weight set to 0 the OSD will not be assigned any PG
[1:23] <SPACESHIP> but what about existing pg's on said OSD?
[1:23] * fghaas1 (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) Quit (Ping timeout: 480 seconds)
[1:24] * dyasny (~dyasny@209-112-41-210.dedicated.allstream.net) has joined #ceph
[1:24] <JCLM> Their mapping will get updated so that they are protected by a new set of OSDs. This will generate data movement within your cluster
[1:25] <SPACESHIP> Even if I don't out the OSD (OSD is still in and up)?
[1:26] <burley> dmick: ty
[1:26] <SPACESHIP> what about leaving the weight value alone, but setting reweight on OSD to 0?
[1:28] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) Quit (Quit: Away)
[1:28] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) has joined #ceph
[1:32] <JCLM> Will have the same impact. No PG mapped to this OSD
[1:33] <SPACESHIP> so its the equivalent of setting the OSD to out then, in terms of PG allocation and balancing anyways
[1:33] <JCLM> The CRUSH reweight will keep the OSD UP and IN. OSD reweight will have the OSD UP and OUT
[1:34] * fsimonce (~simon@host217-37-dynamic.30-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[1:36] <JCLM> To be clear, the 2 commands are ceph osd crush reweight osd.n x.y / ceph osd reweight n x.y
[1:36] <JCLM> Yes
[1:36] <dmick> SPACESHIP: ISTR that reweight to 0 is essentially what "osd out" does
[1:37] <dmick> indeed, that's precisely how it's implemented
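
To summarize the two knobs being contrasted (OSD ids and weights below are placeholders):

    # CRUSH weight: the bucket weight stored in the CRUSH map itself.
    # Setting it to 0 stops PGs mapping to the OSD, but it stays UP and IN.
    ceph osd crush reweight osd.7 0

    # Override weight ("reweight"): a 0-1 factor applied on top of the CRUSH
    # weight. Setting it to 0 is essentially what marking the OSD "out" does.
    ceph osd reweight 7 0

    # Either way, the affected PGs remap and data moves to other OSDs.
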
[1:39] <SPACESHIP> Does changing CRUSH reweight to 0 still cause a rebalance of the cluster then?
[1:39] <SPACESHIP> I'm trying to prevent new writes to an OSD, but while keeping it in the cluster and still available for reads without setting cluster to no-out
[1:41] <SPACESHIP> a small amount of writes to existing primary PG would be okay
[1:43] <SPACESHIP> I have a harebrained idea for a scripted background maintenance method to keep file-system fragmentation to a minimum without disturbing cluster performance on spinning rust based OSDs
[1:48] * jwilkins (~jwilkins@c-67-180-123-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[1:48] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) has joined #ceph
[1:52] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) has left #ceph
[1:56] <JCLM> SPACESHIP: If a write hits the primary OSD, it will be applied to the secondary OSDs for consistency. On top of this, ALL IOs are issued against the primary OSD. Reads don't go to the secondary OSDs.
[1:59] * jwilkins (~jwilkins@c-67-180-123-48.hsd1.ca.comcast.net) has joined #ceph
[1:59] <SPACESHIP> And for my harebrained idea, that would be fine, I just want to limit writes as much as possible
[2:00] * bandrus (~brian@54.sub-70-211-78.myvzw.com) has joined #ceph
[2:01] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[2:01] <SPACESHIP> The idea is to modify the CRUSH map to prevent allocation of new PGs to the OSD during the maintenance window, but without causing the cluster to rebalance (which would cause performance degradation)
[2:03] <SPACESHIP> the idea I have is a bit harebrained but I want to try it out and see how it works, if it would work at all (well it would work, but I mean without affecting performance of the cluster too much)
[2:03] * LeaChim (~LeaChim@host86-159-114-39.range86-159.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:05] <JCLM> SPACESHIP: Assuming your HW is homogeneous and good quality, there is no reason new PGs would be mapped to an existing OSD, unless you create a new pool of course. And then you can influence the mapping of new PGs by modifying your CRUSH map so that no pool, new or existing, other than the one you care about gets mapped to those specific OSDs
[2:05] * oms101 (~oms101@p20030057EA748900EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[2:06] <JCLM> So I guess your best target here is the creation of a specific CRUSH ruleset that will reserve some OSDs for your pool.
[2:11] <JCLM> SPACESHIP: That's the page to look at. It has an example - http://ceph.com/docs/master/rados/operations/crush-map/
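
A hypothetical fragment of a decompiled CRUSH map (crushtool -d) along the lines JCLM describes and the linked page documents: a dedicated root holding the reserved hosts, a rule that only draws from it, and the pool pointed at that rule. All names, ids and weights here are invented for illustration:

    root reserved {
            id -10
            alg straw
            hash 0  # rjenkins1
            item node-a weight 3.000
            item node-b weight 3.000
    }

    rule reserved_only {
            ruleset 4
            type replicated
            min_size 1
            max_size 10
            step take reserved
            step chooseleaf firstn 0 type host
            step emit
    }

    # after recompiling (crushtool -c) and injecting the map:
    # ceph osd setcrushmap -i compiled-map
    # ceph osd pool set <pool> crush_ruleset 4
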
[2:13] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:13] * oms101 (~oms101@p20030057EA08E300EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[2:17] <SPACESHIP> thanks, yea what I'm visualizing (for the crush map portion anyways) is hard to put into words
[2:18] <JCLM> Indeed
[2:18] <kraken> http://i.imgur.com/bQcbpki.gif
[2:18] * sputnik13 (~sputnik13@74.202.214.170) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:18] * yanzheng (~zhyan@182.139.205.12) Quit (Quit: This computer has gone to sleep)
[2:20] <SPACESHIP> heh
[2:24] * rmoe (~quassel@173-228-89-134.dsl.static.fusionbroadband.com) has joined #ceph
[2:25] * saltlake (~saltlake@pool-71-244-62-208.dllstx.fios.verizon.net) has joined #ceph
[2:26] * ssejourne (~ssejourne@2001:41d0:52:300::d16) Quit (Quit: WeeChat 0.4.2)
[2:27] <cholcombe973> SPACESHIP: did you figure out what was going on?
[2:27] <cholcombe973> ceph: i believe i found a bug with ceph pg dump --format json
[2:27] <SPACESHIP> Nope, not yet
[2:27] <cholcombe973> last_fresh has both a 0.000000 and a date for the key in different places
[2:28] <cholcombe973> i suppose i should post this on the dev channel
[2:31] <SPACESHIP> oh
[2:31] <SPACESHIP> well
[2:31] * kefu (~kefu@114.92.101.83) has joined #ceph
[2:31] <SPACESHIP> I did discover that creating a new rbd and mapping it, performance is back to where it should be
[2:32] <SPACESHIP> but the old rbds are still performance damaged
[2:32] <cholcombe973> hmm odd
[2:32] * kefu (~kefu@114.92.101.83) Quit ()
[2:32] <cholcombe973> ok
[2:34] * saltlake (~saltlake@pool-71-244-62-208.dllstx.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[2:34] <SPACESHIP> new mapped rbd = 1600 iops
[2:34] <SPACESHIP> old rbd = 20 iops
[2:35] <cholcombe973> wow
[2:38] <SPACESHIP> that one had snapshots on it, testing on another old RBD without snapshots gives me 300 iops
[2:38] <SPACESHIP> but again, a new rbd after fixing the OSD file-system fragmentation gives me 1600 iops
[2:41] <SPACESHIP> can't make heads or tails of it yet
[2:43] <SPACESHIP> from iostat it looks like writing to a new RBD is clean
[2:43] <SPACESHIP> write only
[2:44] <SPACESHIP> but writing to an old RBD causes a read-modify-write
[2:46] * dgbaley27 (~matt@c-67-176-93-83.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[2:46] <cholcombe973> hmm ok
[2:46] <cholcombe973> you might have to go further and gdb it
[2:46] <cholcombe973> or perf-stat
[2:48] * dneary (~dneary@96.237.180.105) has joined #ceph
[2:54] * cholcombe973 (~chris@pool-108-42-144-175.snfcca.fios.verizon.net) Quit (Remote host closed the connection)
[2:57] * jdillaman (~jdillaman@pool-108-56-67-212.washdc.fios.verizon.net) has joined #ceph
[2:59] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) Quit (Quit: Away)
[3:00] * zhaochao (~zhaochao@111.161.77.232) has joined #ceph
[3:00] * elder__ (~elder@210.177.145.249) Quit (Quit: Leaving)
[3:01] * houkouonchi-work (~linux@2607:f298:b:635:225:90ff:fe39:38ce) Quit (Read error: Connection reset by peer)
[3:02] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[3:09] * Concubidated (~Adium@2607:f298:b:635:de3:a30b:7e10:708a) Quit (Ping timeout: 480 seconds)
[3:09] * macjack (~Thunderbi@123.51.160.200) has joined #ceph
[3:10] * smokedmeets (~smokedmee@34.sub-70-197-6.myvzw.com) Quit (Quit: smokedmeets)
[3:17] * angdraug (~angdraug@12.164.168.117) Quit (Ping timeout: 480 seconds)
[3:18] * vakulkar (~vakulkar@c-50-185-132-102.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:20] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[3:21] * shyu (~shyu@119.254.196.66) has joined #ceph
[3:22] * med (~medberry@71.74.177.250) Quit (Ping timeout: 480 seconds)
[3:22] * AlecTaylor (~AlecTaylo@0001b36e.user.oftc.net) has joined #ceph
[3:22] <AlecTaylor> hi
[3:27] * med (~medberry@71.74.177.250) has joined #ceph
[3:37] * nitti (~nitti@c-66-41-30-224.hsd1.mn.comcast.net) has joined #ceph
[3:40] * mookins (~mookins@induct3.lnk.telstra.net) Quit ()
[3:43] * OutOfNoWhere (~rpb@76.8.45.168) has joined #ceph
[3:54] * joef1 (~Adium@2601:9:280:f2e:cd81:ffde:c3e7:3404) has left #ceph
[3:59] * dneary (~dneary@96.237.180.105) Quit (Ping timeout: 480 seconds)
[4:00] * smokedmeets (~smokedmee@c-67-174-241-112.hsd1.ca.comcast.net) has joined #ceph
[4:01] * sudocat (~davidi@2601:e:2b80:9920:5091:266a:9795:e26) has joined #ceph
[4:07] * ohnomrbill (~ohnomrbil@c-67-174-241-112.hsd1.ca.comcast.net) has joined #ceph
[4:08] * badone (~brad@66.187.239.16) Quit (Ping timeout: 480 seconds)
[4:10] * bandrus (~brian@54.sub-70-211-78.myvzw.com) Quit (Read error: Connection reset by peer)
[4:10] * badone (~brad@66.187.239.11) has joined #ceph
[4:13] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[4:13] * bandrus (~brian@54.sub-70-211-78.myvzw.com) has joined #ceph
[4:16] * sudocat (~davidi@2601:e:2b80:9920:5091:266a:9795:e26) Quit (Quit: Leaving.)
[4:16] * sudocat (~davidi@2601:e:2b80:9920:5091:266a:9795:e26) has joined #ceph
[4:16] * nitti (~nitti@c-66-41-30-224.hsd1.mn.comcast.net) Quit (Remote host closed the connection)
[4:21] * yanzheng (~zhyan@182.139.205.12) has joined #ceph
[4:24] * dyasny (~dyasny@209-112-41-210.dedicated.allstream.net) Quit (Ping timeout: 480 seconds)
[4:27] * yanzheng (~zhyan@182.139.205.12) Quit (Quit: This computer has gone to sleep)
[4:41] * shang (~ShangWu@69.80.100.140) has joined #ceph
[4:49] * vakulkar (~vakulkar@c-50-185-132-102.hsd1.ca.comcast.net) has joined #ceph
[4:53] * kefu (~kefu@114.92.101.83) has joined #ceph
[4:58] * yanzheng (~zhyan@182.139.205.12) has joined #ceph
[4:58] * davidz (~davidz@cpe-23-242-189-171.socal.res.rr.com) Quit (Quit: Leaving.)
[5:01] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[5:01] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[5:02] * Steki (~steki@cable-89-216-233-159.dynamic.sbb.rs) has joined #ceph
[5:04] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[5:08] * shang (~ShangWu@69.80.100.140) Quit (Ping timeout: 480 seconds)
[5:08] * yanzheng (~zhyan@182.139.205.12) Quit (Quit: This computer has gone to sleep)
[5:09] * BManojlovic (~steki@cable-89-216-225-243.dynamic.sbb.rs) Quit (Ping timeout: 480 seconds)
[5:10] * OutOfNoWhere (~rpb@76.8.45.168) Quit (Ping timeout: 480 seconds)
[5:10] * Vacuum (~vovo@i59F79A48.versanet.de) has joined #ceph
[5:17] * Vacuum_ (~vovo@i59F79379.versanet.de) Quit (Ping timeout: 480 seconds)
[5:23] * vakulkar (~vakulkar@c-50-185-132-102.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:23] * yanzheng (~zhyan@182.139.205.12) has joined #ceph
[5:25] * yanzheng (~zhyan@182.139.205.12) Quit ()
[5:26] * yanzheng (~zhyan@182.139.205.12) has joined #ceph
[5:27] * yanzheng (~zhyan@182.139.205.12) Quit ()
[5:28] * yanzheng (~zhyan@182.139.205.12) has joined #ceph
[5:29] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) Quit (Quit: Leaving.)
[5:32] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) has joined #ceph
[5:39] * sudocat (~davidi@2601:e:2b80:9920:5091:266a:9795:e26) Quit (Quit: Leaving.)
[5:40] * sudocat (~davidi@73.166.99.97) has joined #ceph
[5:45] * sudocat (~davidi@73.166.99.97) Quit (Read error: Connection reset by peer)
[5:48] * infernix (nix@213.125.67.106) Quit (Ping timeout: 480 seconds)
[5:53] * vakulkar (~vakulkar@c-50-185-132-102.hsd1.ca.comcast.net) has joined #ceph
[5:57] * zack_dolby (~textual@nfmv001175125.uqw.ppp.infoweb.ne.jp) has joined #ceph
[6:05] * marrusl (~mark@cpe-24-90-46-248.nyc.res.rr.com) Quit (Remote host closed the connection)
[6:07] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[6:12] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[6:13] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[6:17] * segutier (~segutier@216-166-19-146.fwd.datafoundry.com) Quit (Quit: segutier)
[6:33] * vbellur (~vijay@121.244.87.117) has joined #ceph
[6:33] * vakulkar (~vakulkar@c-50-185-132-102.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[6:38] * AlecTaylor (~AlecTaylo@0001b36e.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:39] * overclk (~overclk@121.244.87.117) has joined #ceph
[6:46] * wschulze1 (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[6:47] * AlecTaylor (~AlecTaylo@CPE-124-187-135-220.lns15.ken.bigpond.net.au) has joined #ceph
[6:48] * nolan (~nolan@2001:470:1:41:a800:ff:fe3e:ad08) Quit (Remote host closed the connection)
[6:49] * nolan (~nolan@2001:470:1:41:a800:ff:fe3e:ad08) has joined #ceph
[6:50] * mattronix (~quassel@fw1.sdc.mattronix.nl) has joined #ceph
[6:50] * masterpe_ (~masterpe@2a01:670:400::43) has joined #ceph
[6:50] * seapasul1i (~seapasull@95.85.33.150) has joined #ceph
[6:50] * Bosse_ (~bosse@rifter2.klykken.com) has joined #ceph
[6:50] * kingcu (~kingcu@kona.ridewithgps.com) Quit (Remote host closed the connection)
[6:50] * kingcu (~kingcu@kona.ridewithgps.com) has joined #ceph
[6:51] * mondkalbantrieb (~quassel@sama32.de) has joined #ceph
[6:52] * masterpe (~masterpe@2a01:670:400::43) Quit (Ping timeout: 480 seconds)
[6:52] * seapasulli (~seapasull@95.85.33.150) Quit (Ping timeout: 480 seconds)
[6:52] * oblu (~o@62.109.134.112) Quit (Ping timeout: 480 seconds)
[6:52] * Bosse (~bosse@rifter2.klykken.com) Quit (Ping timeout: 480 seconds)
[6:52] * jnq (~jnq@95.85.22.50) Quit (Ping timeout: 480 seconds)
[6:52] * mattronix_ (~quassel@fw1.sdc.mattronix.nl) Quit (Ping timeout: 480 seconds)
[6:52] * asalor (~asalor@0001ef37.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:52] * marcan (marcan@marcansoft.com) Quit (Ping timeout: 480 seconds)
[6:52] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[6:52] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) Quit (Ping timeout: 480 seconds)
[6:52] * marcan (marcan@marcansoft.com) has joined #ceph
[6:52] * CAPSLOCK2000 (~oftc@2001:610:748:1::8) Quit (Ping timeout: 480 seconds)
[6:52] * Georgyo_ (~georgyo@shamm.as) Quit (Ping timeout: 480 seconds)
[6:52] * mondkalbantrieb_ (~quassel@sama32.de) Quit (Ping timeout: 480 seconds)
[6:53] * Georgyo (~georgyo@shamm.as) has joined #ceph
[6:53] * cronix1 (~cronix@5.199.139.166) Quit (Ping timeout: 480 seconds)
[6:53] * gsilvis_ (~andovan@c-75-69-162-72.hsd1.ma.comcast.net) Quit (Ping timeout: 480 seconds)
[6:53] * theanalyst (theanalyst@open.source.rocks.my.socks.firrre.com) Quit (Ping timeout: 480 seconds)
[6:53] * trociny (~mgolub@93.183.239.2) Quit (Ping timeout: 480 seconds)
[6:53] * chutz (~chutz@rygel.linuxfreak.ca) Quit (Ping timeout: 480 seconds)
[6:53] * classicsnail (~David@2600:3c01::f03c:91ff:fe96:d3c0) Quit (Ping timeout: 480 seconds)
[6:54] * asalor (~asalor@2a00:1028:96c1:4f6a:204:e2ff:fea1:64e6) has joined #ceph
[6:55] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) has joined #ceph
[6:55] * shang (~ShangWu@175.41.48.77) has joined #ceph
[6:55] * oblu (~o@62.109.134.112) has joined #ceph
[6:56] * theanalyst (theanalyst@open.source.rocks.my.socks.firrre.com) has joined #ceph
[6:57] * CAPSLOCK2000 (~oftc@2001:610:748:1::8) has joined #ceph
[6:57] * gsilvis (~andovan@c-75-69-162-72.hsd1.ma.comcast.net) has joined #ceph
[7:01] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit ()
[7:02] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[7:02] * classicsnail (~David@2600:3c01::f03c:91ff:fe96:d3c0) has joined #ceph
[7:03] * jnq (~jnq@95.85.22.50) has joined #ceph
[7:03] * cronix1 (~cronix@5.199.139.166) has joined #ceph
[7:04] * trociny (~mgolub@93.183.239.2) has joined #ceph
[7:10] * badone (~brad@66.187.239.11) Quit (Ping timeout: 480 seconds)
[7:13] * shang (~ShangWu@175.41.48.77) Quit (Ping timeout: 480 seconds)
[7:16] * badone (~brad@66.187.239.16) has joined #ceph
[7:17] * nitti (~nitti@c-66-41-30-224.hsd1.mn.comcast.net) has joined #ceph
[7:17] * shang (~ShangWu@175.41.48.77) has joined #ceph
[7:22] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[7:25] * nitti (~nitti@c-66-41-30-224.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[7:28] * wschulze1 (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:34] * mykola (~Mikolaj@91.225.201.136) has joined #ceph
[7:35] * AlecTaylor (~AlecTaylo@CPE-124-187-135-220.lns15.ken.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[7:40] * alaind (~dechorgna@ARennes-651-1-125-185.w2-2.abo.wanadoo.fr) has joined #ceph
[7:47] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[7:50] * kefu (~kefu@114.92.101.83) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[7:52] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[7:53] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[8:02] * yanzheng (~zhyan@182.139.205.12) Quit (Quit: This computer has gone to sleep)
[8:03] * yanzheng (~zhyan@182.139.205.12) has joined #ceph
[8:06] * derjohn_mob (~aj@tmo-109-190.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[8:06] * angdraug (~angdraug@63.142.161.2) has joined #ceph
[8:07] * angdraug (~angdraug@63.142.161.2) Quit ()
[8:10] * MACscr1 (~Adium@2601:d:c800:de3:fc3a:160e:1eb4:530c) Quit (Quit: Leaving.)
[8:20] * thb (~me@2a02:2028:17b:f9e1:74e7:1b95:7857:72a3) has joined #ceph
[8:23] * JCLM (~JCLM@73.189.243.134) Quit (Quit: Leaving.)
[8:26] * cookednoodles (~eoin@89-93-153-201.hfc.dyn.abo.bbox.fr) Quit (Quit: Ex-Chat)
[8:34] * cok (~chk@nat-cph5-sys.net.one.com) has joined #ceph
[8:34] * Sysadmin88 (~IceChat77@94.12.240.104) Quit (Quit: A fine is a tax for doing wrong. A tax is a fine for doing well)
[8:36] * Steki (~steki@cable-89-216-233-159.dynamic.sbb.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[8:40] * badone (~brad@66.187.239.16) Quit (Ping timeout: 480 seconds)
[8:45] * derjohn_mob (~aj@2001:6f8:1337:0:5948:fa30:6248:3a9b) has joined #ceph
[8:45] * cok (~chk@nat-cph5-sys.net.one.com) has left #ceph
[8:52] * alaind (~dechorgna@ARennes-651-1-125-185.w2-2.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[8:59] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Read error: Connection reset by peer)
[9:00] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[9:06] * linjan_ (~linjan@80.179.241.26) has joined #ceph
[9:06] * dgurtner (~dgurtner@178.197.231.49) has joined #ceph
[9:08] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) Quit (Quit: Leaving.)
[9:09] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[9:09] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[9:13] * oms101 (~oms101@p20030057EA08E300EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Quit: Leaving)
[9:13] * oms101 (~oms101@p20030057EA08E300EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[9:21] * BManojlovic (~steki@178-221-74-244.dynamic.isp.telekom.rs) has joined #ceph
[9:31] * analbeard (~shw@support.memset.com) has joined #ceph
[9:31] * shyu (~shyu@119.254.196.66) Quit (Remote host closed the connection)
[9:33] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[9:35] <Be-El> hi
[9:36] * shyu (~shyu@119.254.196.66) has joined #ceph
[9:47] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:51] * branto (~borix@ip-213-220-214-203.net.upcbroadband.cz) has joined #ceph
[9:53] * fsimonce (~simon@host217-37-dynamic.30-79-r.retail.telecomitalia.it) has joined #ceph
[9:54] * shyu (~shyu@119.254.196.66) Quit (Remote host closed the connection)
[9:56] * zack_dolby (~textual@nfmv001175125.uqw.ppp.infoweb.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[9:56] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) has joined #ceph
[10:00] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[10:00] * ScOut3R (~ScOut3R@catv-80-98-46-171.catv.broadband.hu) has joined #ceph
[10:03] * brutuscat (~brutuscat@20.Red-88-23-166.staticIP.rima-tde.net) has joined #ceph
[10:07] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[10:08] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[10:08] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[10:11] * karnan (~karnan@121.244.87.117) has joined #ceph
[10:14] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[10:15] * brutuscat (~brutuscat@20.Red-88-23-166.staticIP.rima-tde.net) Quit (Ping timeout: 480 seconds)
[10:18] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:20] * brutuscat (~brutuscat@20.Red-88-23-166.staticIP.rima-tde.net) has joined #ceph
[10:22] * bandrus (~brian@54.sub-70-211-78.myvzw.com) Quit (Quit: Leaving.)
[10:24] * shyu (~shyu@119.254.196.66) has joined #ceph
[10:28] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[10:28] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Read error: Connection reset by peer)
[10:30] * vbellur (~vijay@121.244.87.124) has joined #ceph
[10:38] * oro (~oro@2001:620:20:16:4d95:ce96:ea5:edf8) has joined #ceph
[10:48] * kefu (~kefu@114.92.101.83) has joined #ceph
[10:48] * brutuscat (~brutuscat@20.Red-88-23-166.staticIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[10:50] * kefu (~kefu@114.92.101.83) Quit (Max SendQ exceeded)
[10:50] * overclk (~overclk@121.244.87.117) Quit (Ping timeout: 480 seconds)
[10:54] * brutuscat (~brutuscat@20.Red-88-23-166.staticIP.rima-tde.net) has joined #ceph
[10:54] * overclk (~overclk@121.244.87.117) has joined #ceph
[10:55] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[10:55] * eJunky (~markus@2001:638:812:100:9d36:c47d:1b21:cebf) has joined #ceph
[11:02] * yanzheng (~zhyan@182.139.205.12) Quit (Quit: This computer has gone to sleep)
[11:04] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[11:13] * yanzheng (~zhyan@182.139.205.12) has joined #ceph
[11:13] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[11:14] * shyu (~shyu@119.254.196.66) Quit (Remote host closed the connection)
[11:20] * brutuscat (~brutuscat@20.Red-88-23-166.staticIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[11:23] * brutuscat (~brutuscat@20.Red-88-23-166.staticIP.rima-tde.net) has joined #ceph
[11:35] * fam is now known as fam_away
[11:35] * fam_away is now known as fam
[11:35] * fam is now known as fam_away
[11:40] * Concubidated (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[11:40] * zack_dolby (~textual@pw126255080211.9.panda-world.ne.jp) has joined #ceph
[11:49] * vbellur (~vijay@121.244.87.124) has joined #ceph
[11:50] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) has joined #ceph
[11:54] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[11:57] * yanzheng (~zhyan@182.139.205.12) Quit (Quit: This computer has gone to sleep)
[11:57] * brutuscat (~brutuscat@20.Red-88-23-166.staticIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[12:02] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[12:02] * oro (~oro@2001:620:20:16:4d95:ce96:ea5:edf8) Quit (Ping timeout: 480 seconds)
[12:03] * badone (~brad@203-121-198-226.e-wire.net.au) has joined #ceph
[12:05] * zhaochao (~zhaochao@111.161.77.232) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 31.4.0/20150113100542])
[12:08] * mgolub (~Mikolaj@91.225.202.161) has joined #ceph
[12:09] * brutuscat (~brutuscat@20.Red-88-23-166.staticIP.rima-tde.net) has joined #ceph
[12:11] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[12:12] * zack_dolby (~textual@pw126255080211.9.panda-world.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[12:13] * brutuscat (~brutuscat@20.Red-88-23-166.staticIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[12:14] * brutuscat (~brutuscat@20.Red-88-23-166.staticIP.rima-tde.net) has joined #ceph
[12:15] * mykola (~Mikolaj@91.225.201.136) Quit (Ping timeout: 480 seconds)
[12:17] * Concubidated (~Adium@66-87-130-170.pools.spcsdns.net) has joined #ceph
[12:17] <ZyTer> hi
[12:19] <ZyTer> i want to change ceph.conf to add private and public networks. when i restart ceph after the change, my mon won't start on the public network ... and the cluster segfaults... do you have any idea why ?
[12:19] <ZyTer> (its a test cluster, not a production... ;-) )
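
For reference, the change being described normally amounts to two lines in the [global] section of ceph.conf (the subnets below are placeholders). Note that monitors do not pick their address up from these settings; a mon's address is pinned in the monmap, which is usually where this kind of trouble comes from:

    [global]
        public network  = 192.168.1.0/24
        cluster network = 10.10.10.0/24
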
[12:20] * brutuscat (~brutuscat@20.Red-88-23-166.staticIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[12:26] * brutuscat (~brutuscat@20.Red-88-23-166.staticIP.rima-tde.net) has joined #ceph
[12:35] * Concubidated (~Adium@66-87-130-170.pools.spcsdns.net) Quit (Quit: Leaving.)
[12:41] * overclk (~overclk@121.244.87.117) Quit (Ping timeout: 480 seconds)
[12:48] * ngoswami (~ngoswami@1.39.13.119) has joined #ceph
[12:48] * ngoswami (~ngoswami@1.39.13.119) Quit ()
[12:50] * yanzheng (~zhyan@182.139.205.12) has joined #ceph
[12:51] * overclk (~overclk@121.244.87.124) has joined #ceph
[12:52] * badone (~brad@203-121-198-226.e-wire.net.au) Quit (Ping timeout: 480 seconds)
[12:53] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[12:58] * kefu (~kefu@114.92.101.83) has joined #ceph
[12:58] * brutuscat (~brutuscat@20.Red-88-23-166.staticIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[12:59] * Bosse_ is now known as Bosse
[12:59] * oro (~oro@2001:620:20:16:4d95:ce96:ea5:edf8) has joined #ceph
[13:00] * brutuscat (~brutuscat@20.Red-88-23-166.staticIP.rima-tde.net) has joined #ceph
[13:05] * vbellur (~vijay@121.244.87.117) has joined #ceph
[13:06] <Bosse> ZyTer: what do your logs say? one thought - if you previously provisioned your MONs on your private subnet, then they may be listening on the wrong interface, and you might need to modify your monmap. you should read http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-mon/
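
The monmap surgery that troubleshooting page walks through looks roughly like this; the monitor name "mon-a" and the address are placeholders, the monitors must be stopped before injecting, and keeping a backup of the original map is a good idea:

    # grab the current monmap from a cluster that still has quorum...
    ceph mon getmap -o /tmp/monmap
    # ...or, without quorum, extract it from a stopped monitor's store
    ceph-mon -i mon-a --extract-monmap /tmp/monmap

    monmaptool --print /tmp/monmap
    # replace the monitor's entry with its address on the public network
    monmaptool --rm mon-a /tmp/monmap
    monmaptool --add mon-a 192.168.1.10:6789 /tmp/monmap

    # inject the fixed map into each (stopped) monitor, then start them again
    ceph-mon -i mon-a --inject-monmap /tmp/monmap
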
[13:08] * DV (~veillard@2001:41d0:1:d478::1) Quit (Remote host closed the connection)
[13:08] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[13:16] * brutusca_ (~brutuscat@20.Red-88-23-166.staticIP.rima-tde.net) has joined #ceph
[13:16] * brutuscat (~brutuscat@20.Red-88-23-166.staticIP.rima-tde.net) Quit (Remote host closed the connection)
[13:21] * brutuscat (~brutuscat@20.Red-88-23-166.staticIP.rima-tde.net) has joined #ceph
[13:21] * brutusca_ (~brutuscat@20.Red-88-23-166.staticIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[13:24] * brutuscat (~brutuscat@20.Red-88-23-166.staticIP.rima-tde.net) Quit (Remote host closed the connection)
[13:26] <ZyTer> Bosse: ok, i check your URL
[13:29] * MrBy (~MrBy@85.115.23.42) Quit (Ping timeout: 480 seconds)
[13:30] * MrBy (~MrBy@85.115.23.2) has joined #ceph
[13:31] * mgolub (~Mikolaj@91.225.202.161) Quit (Remote host closed the connection)
[13:40] <ZyTer> yes, i checked with netstat, the mons are listening on the wrong interface...
[13:45] * mykola (~Mikolaj@91.225.202.161) has joined #ceph
[13:46] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[13:50] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[14:00] * kefu is now known as kefu|afk
[14:04] <ZyTer> but i dont understand how i can change my mon IP with the monmap
[14:07] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[14:09] <flaf> Hi, I have a book about ceph that says "It (cephfs) has a tight integration with SAMBA and support for CIFS and SMB".
[14:09] <flaf> I don't understand the "tight integration with SAMBA".
[14:10] <flaf> If I share a mounted Cephfs with samba, a client can mount only *one* samba share, and that's a SPOF, isn't it?
[14:12] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) has joined #ceph
[14:12] * yanzheng (~zhyan@182.139.205.12) Quit (Read error: Connection reset by peer)
[14:19] * yanzheng (~zhyan@182.139.205.12) has joined #ceph
[14:26] * yanzheng1 (~zhyan@182.139.205.12) has joined #ceph
[14:28] <kefu|afk> flaf: samba has a vfs module for ceph, it enables samba to talk to cephfs in a system where kernel space cephfs or ceph-fuse is not an option.
[14:29] * yanzheng (~zhyan@182.139.205.12) Quit (Ping timeout: 480 seconds)
[14:30] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:31] <flaf> kefu|afk: hum... I'm not sure to understand. What is monted in a node client? A cephfs or a Samba share?
[14:31] * kefu|afk is now known as kefu
[14:31] <flaf> s/monted/mounted/
[14:31] <kraken> flaf meant to say: kefu|afk: hum... I'm not sure to understand. What is mounted in a node client? A cephfs or a Samba share?
[14:31] * yanzheng2 (~zhyan@182.139.205.12) has joined #ceph
[14:31] <kefu> flaf: samba for sure.
[14:32] <flaf> Ok and where cephfs is mounted?
[14:32] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[14:33] * marrusl (~mark@cpe-24-90-46-248.nyc.res.rr.com) has joined #ceph
[14:33] <kefu> disclaimer: i have not used samba over cephfs. but i got these info by reading the code and doc.
[14:33] <kefu> flaf: cephfs is not mounted actually. IIUC.
[14:33] * yanzheng1 (~zhyan@182.139.205.12) Quit (Read error: Connection reset by peer)
[14:33] <kefu> flaf: assuming we setup a ceph cluster and the cephfs is ready to use.
[14:35] <kefu> and in the context of "tight integration where no kernel space cephfs or ceph-fuse is available", what we have is just a ceph cluster with cephfs enabled.
[14:36] <kefu> so, to talk with the cephfs, we need to use libcephfs. and the "tight integration" is a wrapper around libcephfs for samba to talk with cephfs.
[14:36] <flaf> If I understand correctly, the cephfs is not mounted on any node and I mount the cephfs directly as a samba share on a client node. Is that correct?
[14:36] <kefu> libcephfs is a library provided by ceph project to offer filesystem semantic operations.
[14:37] <kefu> correct.
[14:37] <kefu> by node client, it is a client of cephfs, but not the client of samba.
[14:37] <flaf> kefu: is there a doc/link etc. that explains how to do that?
[14:38] * kefu is searching ...
[14:38] <flaf> Oh, thx kefu ;)
[14:38] <flaf> There's no explanation about that in my book.
[14:40] * overclk (~overclk@121.244.87.124) Quit (Ping timeout: 480 seconds)
[14:40] <flaf> About cephfs again, will this request be implemented on Ceph Hammer? -> http://wiki.ceph.com/Planning/Sideboard/Client_Security_for_CephFS
[14:42] * yanzheng2 (~zhyan@182.139.205.12) Quit (Ping timeout: 480 seconds)
[14:42] * yanzheng (~zhyan@182.139.205.12) has joined #ceph
[14:44] <kefu> flaf: the ceph support for samba was merged into the samba project. but i failed to find it on samba's web page. just came across some mail threads... maybe because all the conf is living in the ceph.conf?
[14:44] <kefu> https://git.samba.org/?p=samba.git;a=commit;h=301a1f919202c90c629a4926ebdf054b9f2fe1e8
[14:44] <kefu> https://git.samba.org/?p=samba.git;a=blob;f=source3/modules/vfs_ceph.c;h=fdb7feb44fafa594dfb9f48cfe7c932879ef03d6;hb=301a1f919202c90c629a4926ebdf054b9f2fe1e8
[14:45] <kefu> and the source and commit log are very informative, IMO =)
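
Going by the vfs_ceph source linked above, the Samba side of the integration is an smb.conf share stanza roughly like the following (share name and options are illustrative, not tested here):

    [cephfs]
        path = /
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        # there is no local file to take kernel locks on
        kernel share modes = no
        read only = no
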
[14:47] <kefu> flaf: i am not sure if this will be going into hammer, please see http://wiki.ceph.com/Planning/Blueprints/Hammer .
[14:48] * geekky (c1317c6b@107.161.19.109) has joined #ceph
[14:48] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[14:48] <kefu> if you need a more definitive answer, you might want to ask the guys in #ceph-devel. =)
[14:49] * overclk (~overclk@121.244.87.117) has joined #ceph
[14:54] <flaf> kefu thx for your help.
[14:54] <kefu> flaf: with pleasure. =)
[14:54] <flaf> I'm a little lost with this http://ceph.com/docs/cuttlefish/faq/#can-ceph-export-a-filesystem-via-nfs-or-samba-cifs
[14:55] <kefu> flaf: but i think it's accurate.
[14:55] * geekky is now known as allaok
[14:55] <kefu> flaf: ceph does not natively support CIFS or NFS.
[14:56] <Be-El> flaf: that documentation is ancient
[14:57] <Be-El> flaf: use the current documentation version
[14:57] <kefu> but it exports its filesystem semantic API with libcephfs. so the "gateway" which understands cifs/nfs can use this library to utilize cephfs.
[14:58] <Be-El> flaf: and you want to read this thread: http://article.gmane.org/gmane.comp.file-systems.ceph.user/17119/match=samba+vfs
[14:59] * dyasny (~dyasny@198.251.52.196) has joined #ceph
[15:01] <flaf> Thx Be-El, i'm reading...
[15:03] * tupper (~tcole@2001:420:2280:1272:647f:846:62bd:6086) has joined #ceph
[15:05] * ircolle (~Adium@2601:1:a580:145a:6cbf:c76:539a:d6de) has joined #ceph
[15:07] <flaf> kefu: is the "gateway" a samba client that mounts the cephfs of the cluster, or a samba server that offers a share which can be mounted by other samba clients?
[15:08] <kefu> flaf: it's the samba server. sorry for the confusion.
[15:09] <kefu> the samba client is the one that just understands CIFS; it has no need to understand ceph or to use the ceph vfs module.
[15:10] <flaf> Ok, so the gateway is unique and is a spof, isn't it? Because a Samba client can request only one samba server (as with NFS).
[15:10] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:11] <kefu> flaf: in this sense, yes. not sure if the samba server is able to run as a cluster ...
[15:11] * brutuscat (~brutuscat@20.Red-88-23-166.staticIP.rima-tde.net) has joined #ceph
[15:11] * _ndevos (~ndevos@nat-pool-ams2-5.redhat.com) has joined #ceph
[15:12] * erikmack (~user@cpe-72-182-33-74.austin.res.rr.com) has joined #ceph
[15:12] <topro> hi there, does anyone know the status of multiple-active-mds, or when it is expected to be reliably usable?
[15:12] <flaf> kefu: ok I think it's more clear for me now. Thx.
[15:14] * _ndevos is now known as ndevos
[15:15] <flaf> (I knew what is a samba server and samba client but I did not understand who was the client and who was the server ;))
[15:19] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[15:21] * kefu (~kefu@114.92.101.83) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[15:21] * nitti (~nitti@162.222.47.218) has joined #ceph
[15:23] * kefu (~kefu@114.92.101.83) has joined #ceph
[15:23] <kefu> flaf: np =)
[15:24] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[15:29] * dyasny (~dyasny@198.251.52.196) Quit (Quit: Ex-Chat)
[15:29] * dyasny (~dyasny@198.251.52.196) has joined #ceph
[15:30] * kefu (~kefu@114.92.101.83) Quit (Max SendQ exceeded)
[15:39] * nitti (~nitti@162.222.47.218) Quit (Remote host closed the connection)
[15:40] * nitti (~nitti@162.222.47.218) has joined #ceph
[15:43] * yanzheng (~zhyan@182.139.205.12) Quit (Quit: This computer has gone to sleep)
[15:48] * brutuscat (~brutuscat@20.Red-88-23-166.staticIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[15:49] * brutuscat (~brutuscat@233.Red-83-34-47.dynamicIP.rima-tde.net) has joined #ceph
[15:52] * brutuscat (~brutuscat@233.Red-83-34-47.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[15:54] * amote (~amote@121.244.87.116) Quit (Quit: Leaving)
[15:55] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:58] * JCLM (~JCLM@73.189.243.134) has joined #ceph
[16:03] * JCLM (~JCLM@73.189.243.134) Quit ()
[16:03] * jclm (~jclm@73.189.243.134) has joined #ceph
[16:08] <flaf> topro: personally (but I'm not an expert), I have not heard anything like "multiple-active-mds is reliable" etc. Currently, I have just heard that "active/standby mds is ok".
[16:10] * sputnik13 (~sputnik13@74.202.214.170) has joined #ceph
[16:13] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[16:13] * Kioob (~Kioob@200.254.0.109.rev.sfr.net) has joined #ceph
[16:18] * dmsimard_away is now known as dmsimard
[16:19] * brutuscat (~brutuscat@233.Red-83-34-47.dynamicIP.rima-tde.net) has joined #ceph
[16:23] * saltlake (~saltlake@pool-71-244-62-208.dllstx.fios.verizon.net) has joined #ceph
[16:26] * karis (~karis@conf-nat.admin.grnet.gr) has joined #ceph
[16:31] * sprachgenerator (~sprachgen@12.1.126.253) has joined #ceph
[16:37] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[16:39] * hasues (~hazuez@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[16:40] * kapil (~ksharma@2620:113:80c0:5::2222) Quit (Quit: Leaving)
[16:41] <topro> flaf: thats what I could read from the docs, too. but i was hoping someone could tell me "it's coming, tomorrow!" :)
[16:41] <topro> I'm asking because the bottleneck I'm encountering seems to be my single MDS using up to 100% of its CPU core
[16:42] <topro> or is there a way to have one single MDS to use more than one CPU core (i.e. multiple threads) ?
[16:43] <flaf> topro: I understand. ;) Sorry for the cpu, I have no idea. It's a good question.
[16:43] <topro> I got a 8-core machine running some OSDs and the MDS with a lot of idle CPU cycles on 7 cores but MDS is eating one core completely
[16:44] <flaf> I'm interested by the answer too.
[16:44] <topro> from time to time that is, depending on workload of course
[16:44] * erikmack (~user@cpe-72-182-33-74.austin.res.rr.com) Quit (Quit: later!)
[16:44] * jrocha (~jrocha@vagabond.cern.ch) Quit (Read error: Connection reset by peer)
[16:45] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) has joined #ceph
[16:46] * togdon (~togdon@74.121.28.6) has joined #ceph
[16:47] * yguang11 (~yguang11@vpn-nat.hongkong.corp.yahoo.com) has joined #ceph
[16:51] * eJunky (~markus@2001:638:812:100:9d36:c47d:1b21:cebf) Quit (Quit: Ex-Chat)
[16:56] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[16:57] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[17:01] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[17:01] * hasues (~hazuez@kwfw01.scrippsnetworksinteractive.com) has left #ceph
[17:01] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) Quit (Quit: Leaving.)
[17:03] * sudocat (~davidi@192.185.1.20) has joined #ceph
[17:04] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[17:04] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:05] * andrewschoen (~andrewsch@50.56.86.195) Quit (Max SendQ exceeded)
[17:05] * andrewschoen (~andrewsch@50.56.86.195) has joined #ceph
[17:06] * rljohnsn (~rljohnsn@c-73-15-126-4.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:10] * saltlake2 (~saltlake@pool-71-244-62-208.dllstx.fios.verizon.net) has joined #ceph
[17:10] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) has joined #ceph
[17:10] * yguang11 (~yguang11@vpn-nat.hongkong.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[17:15] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[17:16] * saltlake (~saltlake@pool-71-244-62-208.dllstx.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[17:16] * clayb (~clayb@2604:2000:e1a9:ca00:b8c2:f130:8acf:8bbb) has joined #ceph
[17:17] * ralf_ (~oftc-webi@ip-84-118-159-80.unity-media.net) has joined #ceph
[17:17] * ralf_ (~oftc-webi@ip-84-118-159-80.unity-media.net) Quit ()
[17:19] * rzerres (~ralf@ip-84-118-159-80.unity-media.net) has joined #ceph
[17:21] * Kioob (~Kioob@200.254.0.109.rev.sfr.net) Quit (Quit: Leaving.)
[17:22] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[17:22] <rzerres> helo there
[17:23] <rzerres> i just encountered behaviour i hadn't seen before i upgraded the customer cluster to 0.87 (c51c8f9d80fa4e0168aa52685b8de40e42758578)
[17:24] <rzerres> i am able to list info for a given rbd inside the pool (e.g. rbd info pool/vdi)
[17:25] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[17:25] <rzerres> trying to list all rbd objects inside the pool returns empty output!
[17:25] <rzerres> listing another pool behaves as expected.
[17:26] <Be-El> maybe different permissions for the pools?
[17:26] <rzerres> What is going on here? Has anybody seen this before?
[17:26] <rzerres> @ Be-El: no, using same cephx auth for both pools
[17:28] <rzerres> if I'm not mistaken, the default is signing in as client.admin, if not explicitly redefined - right?
[17:28] <Be-El> right as far as i know
[17:31] * joshd1 (~jdurgin@24-205-54-236.dhcp.gldl.ca.charter.com) has joined #ceph
[17:31] <rzerres> if i create a new object inside the rbd-pool, ls is showing it correctly
[17:31] <rzerres> only the old entries are gone.
[17:32] <Be-El> hmm....sorry, no clue
[17:33] * allaok (c1317c6b@107.161.19.109) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[17:35] <rzerres> rados ls -p pool is listing all the objects, as i have cross-checked the block_name_prefix
[17:37] * segutier (~segutier@rrcs-97-79-140-195.sw.biz.rr.com) has joined #ceph
[17:37] * BManojlovic (~steki@178-221-74-244.dynamic.isp.telekom.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:39] * oro (~oro@2001:620:20:16:4d95:ce96:ea5:edf8) Quit (Remote host closed the connection)
[17:42] <rzerres> if somebody can help, I'm glad to pastebin the output of
[17:42] <rzerres> rbd ls -p <pool> --log-to-stderr --debug-ms 1 --debug-objecter 20 --debug-monc 20 2> /tmp/rbd_ls_debug.txt
[17:43] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:43] <rzerres> oh, i forgot to mention that I'm using a cache-tier (writeback) as a hot-pool over the given cold-pool.
[17:43] <rzerres> is there any relation to that?
[17:49] * derjohn_mob (~aj@2001:6f8:1337:0:5948:fa30:6248:3a9b) Quit (Ping timeout: 480 seconds)
[17:51] * vbellur (~vijay@122.166.144.112) has joined #ceph
[17:53] * rmoe (~quassel@173-228-89-134.dsl.static.fusionbroadband.com) Quit (Ping timeout: 480 seconds)
[17:54] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) Quit (Ping timeout: 480 seconds)
[17:55] * roehrich (~roehrich@146.174.238.100) has joined #ceph
[17:56] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) Quit (Quit: Leaving.)
[17:57] * segutier (~segutier@rrcs-97-79-140-195.sw.biz.rr.com) Quit (Quit: segutier)
[17:59] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) Quit (Quit: Leaving)
[18:02] * sprachgenerator (~sprachgen@12.1.126.253) Quit (Quit: sprachgenerator)
[18:04] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[18:05] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[18:05] * nitti (~nitti@162.222.47.218) Quit (Ping timeout: 480 seconds)
[18:05] * nitti (~nitti@162.222.47.218) has joined #ceph
[18:06] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) has joined #ceph
[18:07] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) has joined #ceph
[18:08] <seapasul1i> what happens when you look at rbd ls -p ${cold-tier} ?
[18:09] <seapasul1i> i.e. isn't the cache tier supposed to flush the objects? So any old objects will not show up in this pool any longer
[18:09] <seapasul1i> so the behavior is expected?
[18:10] <seapasul1i> I have yet to set up a cache tier though
[18:12] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[18:12] * ScOut3R (~ScOut3R@catv-80-98-46-171.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[18:15] * rzerres1 (~ralf@ip-84-118-159-80.unity-media.net) has joined #ceph
[18:16] <rzerres1> @seapasul1: same as on old-pool -> no output
[18:16] * rljohnsn (~rljohnsn@ns25.8x8.com) has joined #ceph
[18:19] * rzerres (~ralf@ip-84-118-159-80.unity-media.net) Quit (Ping timeout: 480 seconds)
[18:23] <seapasul1i> what? how about rados -p ${pool} ls | head -n20; do you see any output then?
[18:24] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:24] <rzerres1> yes, this is working out as expected
[18:28] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:29] * rohanm (~rohanm@c-67-168-194-197.hsd1.or.comcast.net) has joined #ceph
[18:29] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[18:32] * karis (~karis@conf-nat.admin.grnet.gr) Quit (Remote host closed the connection)
[18:33] * branto (~borix@ip-213-220-214-203.net.upcbroadband.cz) has left #ceph
[18:42] * rzerres1 (~ralf@ip-84-118-159-80.unity-media.net) Quit (Ping timeout: 480 seconds)
[18:43] * vakulkar (~vakulkar@c-50-185-132-102.hsd1.ca.comcast.net) has joined #ceph
[18:45] * rzerres (~ralf@ip-84-118-159-80.unity-media.net) has joined #ceph
[18:49] * rzerres1 (~ralf@ip-84-118-159-80.unity-media.net) has joined #ceph
[18:50] * rturk|afk is now known as rturk
[18:52] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[18:53] * rzerres (~ralf@ip-84-118-159-80.unity-media.net) Quit (Ping timeout: 480 seconds)
[18:55] * xarses (~andreww@12.164.168.117) has joined #ceph
[18:55] <smiley_> does anyone know if the rbd kernel client supports fstrim?
[18:56] <saltlake2> joshd: I finally got back to working on the incremental diff issue: the log shows a series of lines like this: 0000009495b [list-snaps] v0'0 uv0 ack = -2 ((2) No such file or directory)) v6 ==== 198+0+0 (287196501 0 0) 0x82400468 con 0x20a34718
[18:56] <saltlake2> 2015-02-13 17:12:40.085435 87b7b000 20 librbd: diff_iterate object rb.0.10ce.238e1f29.00000009495c
[18:56] <saltlake2> 2015-02-13 17:12:40.085482 87b7b000 1 -- 192.168.200.196:0/1024018 --> 192.168.200.205:6812/8159 -- osd_op(client.4566.0:608610 rb.0.10ce.238e1f29.00000009495c@snapdir [list-snaps] 2.289fa167 ack+read e158) v4 -- ?+0 0x20ad4a18 con 0x20a39b90"
[18:57] * rzerres1 (~ralf@ip-84-118-159-80.unity-media.net) Quit (Ping timeout: 480 seconds)
[18:58] * amospalla (~amospalla@amospalla.es) Quit (Ping timeout: 480 seconds)
[19:00] * nitti (~nitti@162.222.47.218) Quit (Remote host closed the connection)
[19:00] <jclm> smiley_: Kernel 3.18 supports trim/discard
[19:01] * rzerres (~ralf@ip-84-118-159-80.unity-media.net) has joined #ceph
[19:02] <smiley_> ok I'm running 3.18.1-031801-generic...so it looks like I should be all set
[19:02] * nitti (~nitti@162.222.47.218) has joined #ceph
[19:02] <smiley_> I'll test it on a test image first
[19:02] <smiley_> thanks
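A minimal way to test discard on a scratch image, assuming the krbd device comes up as /dev/rbd0 and the mountpoint exists (image name, device, and paths are placeholders):

    rbd create test-trim --size 10240      # 10 GB scratch image in the default rbd pool
    rbd map test-trim                      # assumed to map to /dev/rbd0
    mkfs.ext4 /dev/rbd0
    mount -o discard /dev/rbd0 /mnt/test
    fstrim -v /mnt/test                    # prints how many bytes were discarded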
[19:03] * nitti (~nitti@162.222.47.218) Quit (Remote host closed the connection)
[19:03] * nitti (~nitti@162.222.47.218) has joined #ceph
[19:04] * cholcombe973 (~chris@7208-76ef-ff1f-ed2f-329a-f002-3420-2062.6rd.ip6.sonic.net) has joined #ceph
[19:05] * cookednoodles (~eoin@89-93-153-201.hfc.dyn.abo.bbox.fr) has joined #ceph
[19:05] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[19:05] <jclm> smiley_: :-)
[19:11] <carmstrong> is it normal for a daemon to "journal close" immediately after doing a "journal open" on startup, and then to not write anything else in the logs? storage is btrfs
[19:11] <carmstrong> seems to run just fine
[19:17] * rturk is now known as rturk|afk
[19:17] * rzerres1 (~ralf@ip-84-118-159-80.unity-media.net) has joined #ceph
[19:19] * rzerres (~ralf@ip-84-118-159-80.unity-media.net) Quit (Ping timeout: 480 seconds)
[19:22] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[19:23] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[19:23] * lalatenduM (~lalatendu@122.167.232.62) has joined #ceph
[19:26] <rzerres1> when cleaning up benchmark objects from a cache-tier (writeback) hot/cold pool do i have to run
[19:26] <rzerres1> rados -p <hot-pool> cleanup --prefix benchmark ?
[19:31] <debian112> hello, I am building a custom server to use as a ceph OSD node.
[19:31] <debian112> here are the specs: http://paste.debian.net/147393/
[19:32] <debian112> would this box saturate 10Gbps?
[19:33] * rzerres1 (~ralf@ip-84-118-159-80.unity-media.net) Quit (Read error: Connection reset by peer)
[19:33] * rljohnsn (~rljohnsn@ns25.8x8.com) Quit (Read error: Connection reset by peer)
[19:33] <debian112> I assume it will, but looking for some feedback
[19:33] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[19:34] * rljohnsn (~rljohnsn@ns25.8x8.com) has joined #ceph
[19:34] * rzerres (~ralf@ip-84-118-159-80.unity-media.net) has joined #ceph
[19:35] <rzerres> because i can't remember the run-name.
[19:35] * rljohnsn1 (~rljohnsn@ns25.8x8.com) has joined #ceph
[19:35] <rzerres> is it possible to derive it from the object names seen in rados ls?
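For what it's worth, rados bench names its objects benchmark_data_<hostname>_<pid>_object<N>, so the run name (the host_pid part) can usually be read straight off rados ls, and the prefix form removes them without knowing it. A sketch with placeholder pool names:

    rados -p <hot-pool> ls | grep '^benchmark_data' | head -n 3   # shows benchmark_data_<host>_<pid>_object<N>
    rados -p <hot-pool> cleanup --prefix benchmark_data           # removes everything with that prefix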
[19:36] * rljohnsn (~rljohnsn@ns25.8x8.com) Quit (Read error: Connection reset by peer)
[19:41] * rljohnsn1 (~rljohnsn@ns25.8x8.com) Quit (Quit: Leaving.)
[19:43] * brutuscat (~brutuscat@233.Red-83-34-47.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[19:43] * brutuscat (~brutuscat@233.Red-83-34-47.dynamicIP.rima-tde.net) has joined #ceph
[19:44] * brutuscat (~brutuscat@233.Red-83-34-47.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[19:44] * rljohnsn (~rljohnsn@ns25.8x8.com) has joined #ceph
[19:45] <burley> debian112: In theory it would with sequential IO: I assume the SATA drives do ~150MB/s each, and if the journaling keeps up with the aggregate of those drives, that alone gets you to > 10Gb/s -- in practice I doubt it would
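Rough back-of-the-envelope numbers, assuming something like 10 data disks at ~150 MB/s sequential each (the actual drive count is in the paste above):

    echo $(( 10 * 150 ))       # ~1500 MB/s aggregate from the spinners
    echo $(( 10 * 150 * 8 ))   # ~12000 Mb/s, i.e. ~12 Gb/s raw, before journal/replication/seek overhead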
[19:45] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) has joined #ceph
[19:45] * brutuscat (~brutuscat@233.Red-83-34-47.dynamicIP.rima-tde.net) has joined #ceph
[19:46] <debian112> burley: I am planning on keeping the journals on the 300GB SSDs
[19:46] * brutuscat (~brutuscat@233.Red-83-34-47.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[19:47] <gsilvis> Should RGW return a 'content-type' header when the user GETs an object using the Swift API? It doesn't seem to, which breaks python-swiftclient
[19:50] * togdon (~togdon@74.121.28.6) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[19:52] * togdon (~togdon@74.121.28.6) has joined #ceph
[19:52] * lalatenduM (~lalatendu@122.167.232.62) Quit (Quit: Leaving)
[19:56] * puffy (~puffy@50.185.218.255) Quit (Quit: Leaving.)
[19:57] * barra204 (~shakamuny@209.66.74.34) has joined #ceph
[19:57] * hasues1 (~hazuez@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[19:58] * hasues1 (~hazuez@kwfw01.scrippsnetworksinteractive.com) has left #ceph
[19:59] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) Quit (Quit: Leaving.)
[20:06] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[20:07] * clayb (~clayb@2604:2000:e1a9:ca00:b8c2:f130:8acf:8bbb) Quit (Ping timeout: 480 seconds)
[20:12] * rohanm (~rohanm@c-67-168-194-197.hsd1.or.comcast.net) Quit (Ping timeout: 480 seconds)
[20:12] * ibravo (~ibravo@72.198.142.104) has joined #ceph
[20:12] <ibravo> Hello, do you have any guide on how to use ceph-deploy on an internal network (without Internet connectivity)?
[20:13] * saltlake2 (~saltlake@pool-71-244-62-208.dllstx.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[20:17] <burley> ibravo: If you just create a local package repository it should work the same as documented
[20:17] <kraken> http://i.imgur.com/wSvsV.gif
[20:18] <ibravo> burley: Yes, I was able to install the RPMs but when issuing a ceph-deploy command, it fails here: sudo rpm --import https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
[20:19] <ibravo> there should be a config file somewhere
[20:19] <burley> ibravo: I am sure you could copy that locally and then tweak whatever runs that to grab the local file
[20:19] <kraken> http://i.imgur.com/FwqHZ6Z.gif
[20:20] <ibravo> burley: That's what I'm trying to figure out. Where is this code? ;-)
[20:20] <burley> another option would be to manually install and not use ceph-deploy
[20:20] <burley> which is what we did
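One possible approach for the air-gapped case, assuming an internal mirror host (mirror.internal is a placeholder) and that the installed ceph-deploy supports the --repo-url/--gpg-url overrides:

    # on a machine with Internet access, fetch the release key, then copy it inside:
    curl -o release.asc 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
    # on the admin node:
    sudo rpm --import /path/to/release.asc
    ceph-deploy install --repo-url http://mirror.internal/ceph/rpm/el7/ \
                        --gpg-url http://mirror.internal/keys/release.asc <node1> <node2>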
[20:23] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) has joined #ceph
[20:23] <ibravo> I kinda like the ease of ceph-deploy. Was the manual install too complex?
[20:26] * rljohnsn1 (~rljohnsn@ns25.8x8.com) has joined #ceph
[20:27] * rljohnsn (~rljohnsn@ns25.8x8.com) Quit (Read error: Connection reset by peer)
[20:29] <VisBits> debian112 your specs are strange
[20:30] <VisBits> you should use a pci-e ssd for the journal drive and 2TB drives for storage
[20:31] * rljohnsn2 (~rljohnsn@ns25.8x8.com) has joined #ceph
[20:33] * MACscr (~Adium@2601:d:c800:de3:fd26:7ac9:dbc0:5a60) has joined #ceph
[20:35] <debian112> VisBits: what are you using for OSD nodes? I am trying to build a storage node that balances performance and storage capacity.
[20:36] <debian112> I have anything from 8, 16, 32 bay units to build from.
[20:36] * vbellur (~vijay@122.166.144.112) Quit (Ping timeout: 480 seconds)
[20:37] * rljohnsn1 (~rljohnsn@ns25.8x8.com) Quit (Ping timeout: 480 seconds)
[20:38] <debian112> VisBits: Good point on using pci-e
[20:38] <debian112> though
[20:38] <VisBits> we use OCZ revodrive 350
[20:38] <VisBits> 240gb
[20:38] <VisBits> the enterprise drivers support centos 7 and it works great
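A sketch of the usual layout with journals on a separate SSD, using ceph-deploy's HOST:DATA:JOURNAL form (device names are placeholders):

    # one spinner per OSD, journal on its own partition of the PCIe/SATA SSD
    ceph-deploy osd create <node>:/dev/sdb:/dev/sdk1
    ceph-deploy osd create <node>:/dev/sdc:/dev/sdk2
    # "osd journal size" in ceph.conf (in MB, e.g. 12288 for 12 GB) only matters when
    # ceph-disk has to create the journal partition or file itself; a pre-made partition is used as-is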
[20:39] * rzerres (~ralf@ip-84-118-159-80.unity-media.net) Quit (Quit: Leaving.)
[20:39] <debian112> VisBits: so if I swap out the 3x 300GB SSDs with 3x 300GB pci-e drives
[20:40] <debian112> does everything look cool
[20:40] <debian112> ?
[20:40] <debian112> yeah centos 7 is what I am planning on using
[20:41] * saltsa (~joonas@dsl-hkibrasgw1-58c01a-36.dhcp.inet.fi) Quit (Ping timeout: 480 seconds)
[20:42] * saltlake (~saltlake@pool-71-244-62-208.dllstx.fios.verizon.net) has joined #ceph
[20:44] * saltsa (~joonas@dsl-hkibrasgw1-58c01a-36.dhcp.inet.fi) has joined #ceph
[20:53] * ScOut3R (~ScOut3R@4E5CC061.dsl.pool.telekom.hu) has joined #ceph
[20:55] <saltlake> joshd: ping
[20:55] * clayb (~clayb@2604:2000:e1a9:ca00:3199:c0b2:35cc:bf0f) has joined #ceph
[20:56] * rohanm (~rohanm@mobile-166-171-250-247.mycingular.net) has joined #ceph
[21:00] * bandrus (~brian@54.sub-70-211-78.myvzw.com) has joined #ceph
[21:01] * barra204 (~shakamuny@209.66.74.34) Quit (Read error: Connection reset by peer)
[21:01] * barra204 (~shakamuny@209.66.74.34) has joined #ceph
[21:02] * ScOut3R (~ScOut3R@4E5CC061.dsl.pool.telekom.hu) Quit (Ping timeout: 480 seconds)
[21:16] * bandrus (~brian@54.sub-70-211-78.myvzw.com) Quit (Quit: Leaving.)
[21:18] * diegows (~diegows@190.190.5.238) has joined #ceph
[21:18] * barra204 (~shakamuny@209.66.74.34) Quit (Remote host closed the connection)
[21:32] <saltlake> jclm: ping
[21:33] <jclm> saltlake: pong
[21:33] <saltlake> jclm: How are you ? Good to see you (former rksvy)
[21:33] * barra204 (~shakamuny@209.66.74.34) has joined #ceph
[21:34] <jclm> I'm fine
[21:34] <saltlake> jclm: I didn't find joshd today.. but I am wondering if I might have stumbled onto a bug wrt rbd diff.. where a diff file cannot be > 2G.
[21:34] <jclm> Yes I think I saw something somewhere about this.
[21:35] * barra204 (~shakamuny@209.66.74.34) Quit (Remote host closed the connection)
[21:35] <jclm> What distro and word size? Can you pastebin a uname -a somewhere for me?
[21:36] <saltlake> Actually the minimum log I need that shows a hang is more than 500K lines and I am unable to paste it anywhere.. but this is the repetitive line
[21:37] <jclm> I just wanna start with a "uname -a"
[21:37] <jclm> In case I need to set something up on my side for testing
[21:38] <jclm> Looks like a furious 32bit limit you hit somewhere
[21:38] <saltlake> http://pastebin.com/Jbw3nVTk
[21:39] * dmsimard is now known as dmsimard_away
[21:39] * dgurtner (~dgurtner@178.197.231.49) Quit (Ping timeout: 480 seconds)
[21:40] * derjohn_mob (~aj@ip-95-223-126-17.hsi16.unitymediagroup.de) has joined #ceph
[21:40] <saltlake> jclm: I did see limitations on mongodb where there is a 2G limit on 32b systems.
[21:41] <saltlake> jclm: I looked at the code in diff_iterate() and don't spot anything obvious.. will look closer
[21:43] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[21:44] * kefu (~kefu@114.92.101.83) has joined #ceph
[21:44] <saltlake> jclm: wanted to check with you to see if there was anything wrong with the command itself or something sick before I chase something that might not exist
[21:48] <jclm> Your command shows rbd/<poolname>. Is this a typo? Should be <poolname>/rbdimagename
[21:48] * ircolle is now known as ircolle-afk
[21:49] <jclm> Have you tried export-diff pool/image --from-snap snapname to check if the behavior is different?
[21:51] <saltlake> jclm: Sorry it is rbd/<imagename> yes it was a mental blah!!
[21:52] <saltlake> jclm: Yes, I started with this: sudo rbd export-diff --from-snap snap1 rbd/<imagename>@snap2 ./y.diff
[21:52] <saltlake> jclm: That created a 2G y.diff and then hung. So joshd suggested I try the command in the pastebin..
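For reference, the usual two-snapshot export/import workflow looks roughly like this, with placeholder pool and image names:

    rbd export-diff --from-snap snap1 rbd/<image>@snap2 image.diff    # delta between snap1 and snap2
    rbd import-diff image.diff <backup-pool>/<image>                  # replay it onto another image
    # or stream it without an intermediate file:
    rbd export-diff --from-snap snap1 rbd/<image>@snap2 - | rbd import-diff - <backup-pool>/<image>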
[21:55] <jclm> You don't have any problem issuing a snap ls against this mage?
[21:55] * rljohnsn2 (~rljohnsn@ns25.8x8.com) Quit (Read error: Connection reset by peer)
[21:55] <jclm> s/mage/image/
[21:55] <kraken> jclm meant to say: You don't have any problem issuing a snap ls against this image?
[21:56] * rljohnsn (~rljohnsn@ns25.8x8.com) has joined #ceph
[21:56] <saltlake> jclm, kraken: no issues listing the snaps
[21:58] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[21:58] <saltlake> jclm, kraken: rbd snap ls rbd/<imagename> prints snap1 an snap2
[21:58] <jclm> Output of uname -a on the machine where you run the rbd command please?
[21:58] * roehrich (~roehrich@146.174.238.100) Quit (Quit: Leaving)
[21:59] <jclm> And how big is the RBD image?
[22:00] <saltlake> jclm: the rbd image is 14TB
[22:01] * rohanm (~rohanm@mobile-166-171-250-247.mycingular.net) Quit (Ping timeout: 480 seconds)
[22:01] <saltlake> jclm:http://pastebin.com/PQreq1a6
[22:01] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit ()
[22:02] * badone (~brad@203-121-198-226.e-wire.net.au) has joined #ceph
[22:03] * danieagle (~Daniel@201-95-103-54.dsl.telesp.net.br) has joined #ceph
[22:04] <jclm> Can you get the source code from http://pastebin.com/yrw4trSf
[22:05] <jclm> Compile it and then run a.out 1 << 33
[22:05] <jclm> This must be done on the machine where you run your command
[22:05] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[22:06] <jclm> Copy/pastebin the output of the command I gave you
[22:06] <jclm> Sorry command should be a.out 1 l 33
[22:06] <jclm> 1 "lowercase L" 33
[22:07] <jclm> This will show us the capabilities of the client side as far as 64 bit is concerned
[22:10] <saltlake> jclm: that was cool
[22:10] <saltlake> 1 << 33 = 0
[22:10] <saltlake> 1 << 33 = 0 = 0T
[22:10] * kefu (~kefu@114.92.101.83) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[22:10] <jclm> 1 l 33
[22:10] <jclm> do a.out 1 l 33
[22:10] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[22:11] <jclm> the character between 1 and 33 is a lowercase L
[22:11] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[22:11] <saltlake> jclm: yep. that is the o/p: ./a.out 1 l 33
[22:11] <saltlake> 1 << 33 = 0
[22:11] <saltlake> 1 << 33 = 0 = 0T
[22:12] <jclm> So your client is 32 bit only. That's where your problem comes from. I suspected it when I saw the ppc platform
[22:12] <jclm> I once chatted with someone from a PPC company that had the same problem
[22:12] <saltlake> jclm: Is it something that someone could fix, or is it un-doable?
[22:13] <jclm> Upgrade the client to a 64-bit kernel
[22:13] <jclm> The machine where you issue the rbd command
[22:13] <saltlake> jclm: What if the client was 64b and I did the diffs from a 64b client but left the cluster on the 32b machines?
[22:13] <jclm> Just to confirm can you do a.out 1 l 31
[22:14] <saltlake> Yeah, that works well: ./a.out 1 l 31
[22:14] <saltlake> 1 << 31 = 2147483648 = 0T
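A quicker cross-check of the client's word size, assuming standard tools are installed on the machine running rbd:

    getconf LONG_BIT        # 32 or 64 for the userspace ABI
    file "$(which rbd)"     # shows whether the rbd binary itself is 32-bit or 64-bit
    uname -m                # hardware/kernel architecture (ppc vs ppc64 in this case)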
[22:14] <dmick> the cluster won't run well on 32b either, I suspect
[22:14] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) Quit (Quit: Leaving.)
[22:15] <immesys> I am hitting "No space left on device" errors on my OSDs that I am having trouble debugging
[22:15] <jclm> The deal here is just to find a 64-bit client that can run your diff command to import it somewhere else, I guess
[22:15] <immesys> They are only 55% full
[22:15] <immesys> and I think they have free inodes
[22:15] <immesys> df -i shows only 5% use
[22:15] <immesys> They are XFS
[22:15] <immesys> does anyone familiar with XFS know where I should start looking?
[22:16] <saltlake> dmick: So far the cluster seems to look good on 32b. There are limitations though, like the inability to create rbd devices > 16TB.. for that I am planning to use a client that is 64b but stay with the 32b cluster..
[22:16] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) has joined #ceph
[22:17] <jclm> saltlake: So you're gonna have to set up a 64bit client just for your RBD export/import/diff operations I guess
[22:18] <saltlake> jclm: That was the plan: also use the 64b client to get rbds much larger than 16TB :-)
[22:19] <jclm> Is that you Ruchika?
[22:19] <saltlake> jclm: Of course !! Who else !! ?
[22:19] <saltlake> ;-)
[22:19] <jclm> You have so many nicknames young lady ;-)
[22:19] <saltlake> jclm: Yes I needed something cooler !! rksvy /rk was boring!!
[22:20] <dmick> I am pessimistic about using 32-bit hosts for cluster daemons. Even if they appear to work I would worry about how they respond to heavy rebalance loads.
[22:20] <dmick> but I have little experience there.
[22:20] * bandrus (~brian@54.sub-70-211-78.myvzw.com) has joined #ceph
[22:20] <saltlake> dmick: I am sure 32b platforms are an uncommon flavour right now .. but I will leave it at 'I have little choice'
[22:22] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[22:23] * bandrus (~brian@54.sub-70-211-78.myvzw.com) Quit ()
[22:23] * ibravo (~ibravo@72.198.142.104) Quit (Quit: This computer has gone to sleep)
[22:24] <immesys> Can anyone offer any advice on my XFS woes? I'll try anything...
[22:24] <immesys> I just lost my entire cache tier on production...
[22:25] * ibravo (~ibravo@72.198.142.104) has joined #ceph
[22:26] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[22:26] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) has joined #ceph
[22:28] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[22:29] * ibravo (~ibravo@72.198.142.104) Quit ()
[22:31] <saltlake> jclm: thanks, dmick: thanks!! y'all
[22:35] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[22:43] <jclm> np
[22:44] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[22:47] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) has joined #ceph
[22:51] * amospalla (~amospalla@0001a39c.user.oftc.net) has joined #ceph
[22:51] * tupper (~tcole@2001:420:2280:1272:647f:846:62bd:6086) Quit (Ping timeout: 480 seconds)
[22:55] * linjan_ (~linjan@80.179.241.26) Quit (Ping timeout: 480 seconds)
[22:58] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) Quit (Quit: Leaving.)
[22:59] * eightyeight (~atoponce@pinyin.ae7.st) has left #ceph
[23:00] * jclm1 (~jclm@73.189.243.134) has joined #ceph
[23:00] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) has joined #ceph
[23:03] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) Quit ()
[23:04] * jclm (~jclm@73.189.243.134) Quit (Ping timeout: 480 seconds)
[23:18] * rljohnsn1 (~rljohnsn@ns25.8x8.com) has joined #ceph
[23:18] * rljohnsn (~rljohnsn@ns25.8x8.com) Quit (Read error: Connection reset by peer)
[23:18] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[23:19] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[23:32] <VisBits> immesys more placement groups maybe?
[23:32] <VisBits> what size volumes are we talking about here?
[23:33] <immesys> So I solved the problem by chatting with Dave Chinner who was incredibly helpful
[23:33] <VisBits> debian112 if you're still around, we run a single 240gb with 12 3TB drives using 12GB journals... the hosts are considered expendable in ceph.
[23:33] <immesys> It turns out that with the default inode size of 2k, I could not allocate more inodes
[23:33] <immesys> due to free space fragmentation
[23:33] <VisBits> what did you do to fix that?
[23:34] <VisBits> how did you change them
[23:34] <immesys> the problem can be solved by using 512 byte inodes (which glusterfs recommends) or by preallocating all the inodes
[23:34] <VisBits> is this on your data or cache drives
[23:34] <immesys> this is the cache drives
[23:34] <immesys> 512G ssds
[23:34] <immesys> but I have a pool with millions and millions of 70KB files
[23:34] <immesys> so I think thats why I hit this
[23:35] <immesys> s/files/objects/
[23:35] <VisBits> interesting, but your data drives don't get impacted by this?
[23:35] <immesys> the data drives are bigger, so free space fragmentation will hit later
[23:35] <immesys> but I think I will be hit by that later, yes
[23:35] <immesys> but I operate my cache at 80% usage, which is why I hit this
[23:35] <VisBits> any downside to 512-byte inodes vs 2k? i guess the idea behind that is you're wasting space with 2k inodes?
[23:36] <immesys> According to Dave there are no significant downsides as long as you are not using massive amounts of xattrs
[23:36] <immesys> which I am not.
[23:36] <immesys> I am using rados directly
[23:36] <immesys> things might be different with cephfs
[23:36] <VisBits> storing objects would be rados, whereas cephfs storing actual flat files causes this then?
[23:36] <sage> rados is using xattrs extensively. go look at some actual objects to see how big they are before deciding to reduce inode size...
[23:37] <sage> (and by objects i mean the backing files on xfs)
[23:37] <sage> :)
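A way to follow sage's suggestion and eyeball the xattr sizes on a few backing files. Paths are placeholders; FileStore object files live under the OSD's current/ directory:

    find /var/lib/ceph/osd/ceph-<id>/current -type f | head -n 5    # pick a few backing files
    getfattr -d -e hex <object-file> | wc -c                        # rough size of the user.* xattrs FileStore sets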
[23:37] * badone (~brad@203-121-198-226.e-wire.net.au) Quit (Ping timeout: 480 seconds)
[23:37] <VisBits> I'm about to do a 500T cluster and it would suck running into this
[23:40] <immesys> If I can't reduce inode size then I will just preallocate them
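A sketch of the related XFS checks and the reformat option. Device and mount paths are placeholders, mkfs destroys the OSD's data, and sage's caveat above about ceph's xattr usage applies before shrinking inodes:

    xfs_info /var/lib/ceph/osd/ceph-<id> | grep isize    # current inode size of the mounted OSD filesystem
    xfs_db -r -c freesp /dev/<ssd-partition>             # histogram of free extents, i.e. free-space fragmentation
    mkfs.xfs -f -i size=512 /dev/<ssd-partition>         # reformat with 512-byte inodes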
[23:41] * jwilkins (~jwilkins@c-67-180-123-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[23:42] * puffy (~puffy@50.185.218.255) has joined #ceph
[23:50] * rohanm (~rohanm@c-67-168-194-197.hsd1.or.comcast.net) has joined #ceph
[23:54] * jclm1 (~jclm@73.189.243.134) Quit (Quit: Leaving.)
[23:56] * cookednoodles (~eoin@89-93-153-201.hfc.dyn.abo.bbox.fr) Quit (Quit: Ex-Chat)
[23:57] * rljohnsn1 (~rljohnsn@ns25.8x8.com) Quit (Ping timeout: 480 seconds)
[23:58] * nitti (~nitti@162.222.47.218) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.