#ceph IRC Log

IRC Log for 2014-01-08

Timestamps are in GMT/BST.

[0:00] <loicd> ip r kbader/32 via dmick
[0:00] <aarontc> loicd: :(... ip -6 r kbader/128 via dmick!
[0:00] <loicd> ouch
[0:01] <aarontc> xD
[0:01] <loicd> ahahah
[0:01] <aarontc> it's okay, I'm guilty of running IPv4 still
[0:02] <loicd> it's about the talk Mike Perez is going to deliver tomorrow in Portland. Wondering if he would be open to speaking about Ceph. Since he is from Dreamhost I figured maybe kbader_ knows :-)
[0:02] <aarontc> ahh. I'm going to that tomorrow :)
[0:02] <loicd> Oo ?
[0:02] <loicd> you are ?
[0:02] <aarontc> Indeed
[0:02] <kraken> http://i.imgur.com/bQcbpki.gif
[0:03] * mattt_ (~textual@cpc25-rdng20-2-0-cust162.15-3.cable.virginm.net) Quit (Quit: Computer has gone to sleep.)
[0:03] <aarontc> loicd: assuming you're talking about http://www.meetup.com/OpenStack-Northwest/events/151114422/?a=cr1_grp&rv=cr1&_af_eid=151114422&_af=event
[0:04] <loicd> absolutely. I wish I had asked a few weeks ago. They were looking for someone to talk about Ceph. I sent a message but I must admit I was not very persistent in my quest.
[0:05] <aarontc> I saw the message early this AM my time.. I'm not sure I'm qualified enough to be a speaker, sorry :)
[0:05] * BillK (~BillK-OFT@124-148-103-108.dyn.iinet.net.au) has joined #ceph
[0:05] * sagelap (~sage@2001:388:a098:120:cdd8:d6c2:69ba:5c1f) has joined #ceph
[0:05] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[0:05] * sagelap (~sage@2001:388:a098:120:cdd8:d6c2:69ba:5c1f) has left #ceph
[0:05] <loicd> :-)
[0:06] <aarontc> (And I can barely keep up with the volume of email I receive and I'm not even employed atm... so you're doing pretty well)
[0:06] <loicd> maybe you will find Ceph users there. Rather : I'm sure you will ;-)
[0:07] <aarontc> Yeah it should be very interesting. I'm bringing some friends, too. I've never been to one of these before so it'll be a new experience for all of us :)
[0:07] * Cube (~Cube@199.168.44.193) Quit (Quit: Leaving.)
[0:09] <loicd> Amazingly I've never associated Mike Perez with Ceph or Dreamhost. Just with Cinder.
[0:09] <loicd> I'm not a people person I guess.
[0:09] <aarontc> I can relate. Computers make sense... people not so much ;)
[0:10] * mtanski (~mtanski@69.193.178.202) Quit (Ping timeout: 480 seconds)
[0:11] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[0:13] * i_m (~ivan.miro@2001:470:1f0b:1a08:3e97:eff:febd:a80c) has joined #ceph
[0:14] * mattbenjamin (~matt@aa2.linuxbox.com) Quit (Quit: Leaving.)
[0:15] * bandrus1 (~Adium@63.192.141.3) has joined #ceph
[0:15] * bandrus (~Adium@63.192.141.3) Quit (Read error: Connection reset by peer)
[0:18] * bandrus1 (~Adium@63.192.141.3) Quit ()
[0:19] <sherry> does anybody know how to assign weight to OSDs on HDD and SSD in the CRUSH map? in a case that I have more capacity and less performance on HDD and less capacity and more performance on SSD?
[0:20] <aarontc> sherry: based on the chatter I've been seeing in the channel recently, I think your best route is to assign the different drives to different pools
[0:20] <aarontc> You can use different crush maps per poll
[0:20] <aarontc> *pool
[0:20] * bandrus (~Adium@63.192.141.3) has joined #ceph
[0:20] <aarontc> (not different maps, but different rule sets, sorry)
[0:25] * Cube (~Cube@66-87-77-220.pools.spcsdns.net) has joined #ceph
[0:26] <sherry> aarontc: yes, I understand that it would be possible to assign one pool to each drive (pool1 to ssd, pool2 to hdd1, pool2 to hdd2), what I would like to do is that I have 3 tiers (ssd, hdd1 and hdd2) So I want to write some rules in order to replicate hot data in ssd and cool and cold data in second and third tier in order. so I think that I should give OSDs in ssd less weight bt this conflicts with its performance. am I right?
[0:28] <aarontc> sherry: as I understand CRUSH, yes. the weight is a single dimension, and that doesn't easily accommodate the concepts of "performance" and "size" at the same time
[0:28] * AfC (~andrew@2001:388:a098:120:2ad2:44ff:fe08:a4c) has joined #ceph
[0:28] <aarontc> I have seen talk of using the RGW tools to do things like move infrequently used data to lower-tier pools in the background and such, if that would work for your needs
[0:29] <sherry> yes please
[0:30] <aarontc> I haven't played with any of those tools myself, but if you get stuck and ask the right questions I bet the people who talked about it before will come out of the woodwork again :)
[0:30] <sherry> ah okay
[0:31] <sherry> how cn I find it out in the archive? is there any clue that I can look for it?
[0:31] <aarontc> You're talking about searching the mailing list archives?
[0:31] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Quit: Leaving)
[0:31] * dmsimard (~Adium@108.163.152.2) Quit (Quit: Leaving.)
[0:32] <sherry> ah, I thought it was discussed in iirc
[0:32] <aarontc> It's been discussed in both places
[0:33] * xarses (~andreww@12.164.168.115) Quit (Remote host closed the connection)
[0:33] <aarontc> you might also find this useful: http://ceph.com/docs/master/dev/cache-pool/
[0:33] <sherry> well I am aware of that
[0:33] <sherry> bt thats different
[0:34] * xarses (~andreww@12.164.168.115) has joined #ceph
[0:34] <sherry> since it uses another tier for caching
[0:34] <sherry> and I dont want to use this feature at the moment
[0:35] * Nats (~Nats@2001:8000:200c:0:293a:c91:c8c9:f37d) has joined #ceph
[0:35] <sherry> thanks aarontc
[0:35] <aarontc> Well that exactly describes how I understood your original question.. I guess I don't understand what you're asking :)
[0:37] * xarses (~andreww@12.164.168.115) Quit (Read error: Operation timed out)
[0:37] <sherry> well, the first case that I talked about is that I would like to apply tiering without using the cache mode (only by writing CRUSH rules)
[0:37] <aarontc> Ah, sorry. I think that can only be done by using a different CRUSH rule per pool
[0:38] <aarontc> Then if you want to move data between pools, you'll need an application layer on top of that (possibly the RGW tools could be bent to that purpose)
[0:38] * Cube (~Cube@66-87-77-220.pools.spcsdns.net) Quit (Read error: No route to host)
[0:38] * Cube (~Cube@66-87-77-220.pools.spcsdns.net) has joined #ceph
[0:39] <sherry> hmm, so u think that it wont be possible by assigning different weights?
[0:40] <aarontc> Right, the weights primarily affect utilization... if you give lower weight to an SSD it'll hold less data than a HDD
[0:40] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[0:40] <Gugge-47527> a pool with a single osd does not use weight for anything :)
[0:41] <Gugge-47527> and a pool with a single osd can not replicate the data anywhere :)
[0:42] <sherry> Gugge-47527: I got 4 osds on the first and second pool, and 6 osds in third pool
[0:42] <Gugge-47527> and how do you place data in the different pools?
[0:43] <sherry> thats what I doubt about : by assigning less weights to osds in the ssd pool
[0:43] <Gugge-47527> nono
[0:43] <Gugge-47527> weight it only used inside the pool
[0:44] <Gugge-47527> when you put an object into ceph, you choose the pool yourself
[0:44] <sherry> hmm, then if I write a rule that it use the pool by default?
[0:45] <Gugge-47527> a rule where?
[0:45] <Gugge-47527> in what application?
[0:45] <sherry> in CRUSH map
[0:45] <sherry> I think that I need to pastebin my CRUSH map
[0:45] <Gugge-47527> the crush map dies not choose the pool
[0:45] <Gugge-47527> does
[0:46] <Gugge-47527> you choose the pool for each object
[0:46] <Gugge-47527> the crushmap then figures out where in that pool to place the data
[0:46] <Gugge-47527> not in what pool
[0:47] <Freeaqingme> Hi folks; I'm looking at using cephfs for our shared webhosting platform. that means many small files divided over 50 - 150 webservers. Sticky load balancing would be possible. Is there any form of client caching that we could use (similar to bcache for block devices) to improve performance?
[0:47] <Gugge-47527> cachefs
[0:48] <Gugge-47527> but like cephfs, its not exactly ready for production :)
[0:49] <Gugge-47527> http://ceph.com/community/first-impressions-through-fscache-and-ceph/
[0:49] <Freeaqingme> Gugge-47527, I haven't done any testing _yet_. I do read here and there that cephfs isn't ready for production, but in what respects would we notice? Gmail was in beta forever ;)
[0:49] <Freeaqingme> and I hadn't found cachefs yet, so thanks for that
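
A rough sketch of what fscache with the kernel CephFS client looks like, based on the post Gugge-47527 linked; this assumes a 3.13+ kernel built with CONFIG_CEPH_FSCACHE, a local cache disk for cachefilesd, and placeholder monitor address and secret file:

    # on the client: install and start the userspace cache daemon
    sudo apt-get install cachefilesd
    sudo sed -i 's/#RUN=yes/RUN=yes/' /etc/default/cachefilesd
    sudo service cachefilesd start
    # mount cephfs with the fsc option to enable the cache
    sudo mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret,fsc
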
[0:50] <aarontc> Freeaqingme: the big issue I've had with CephFS is that some clients will stop seeing parts of directory contents
[0:50] <aarontc> and it requires some combination of rebooting the clients/the MDS/the entire Ceph cluster to fix
[0:50] <Freeaqingme> aarontc, ouch. That could be a bit of an issue ;)
[0:51] <Freeaqingme> I'm expecting to scale up to 80 storage servers with 22 SSDs each, so rebooting that at once may be problematic at best :P
[0:51] <Freeaqingme> aarontc, got an url to a ticket describing that problem?
[0:51] <aarontc> Freeaqingme: 'ndo' helps :)
[0:52] <aarontc> and no, I have never figured out a solid way to reproduce the problem on demand
[0:52] <aarontc> There have been some patches gone in to try and fix it but so far the issue is still happening for me, at least
[0:53] <Freeaqingme> ndo? Searching for that all I find is IRC logs of this channel with one particular user ;)
[0:53] <aarontc> ndo is a Ruby app to execute commands on groups of hosts
[0:53] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:53] <Freeaqingme> aah
[0:53] <aarontc> like 'ndo ceph-all /etc/init.d/ceph restart' ;)
[0:53] <sherry> Gugge-47527: http://paste.ubuntu.com/6712007/
[0:53] * xarses (~andreww@12.164.168.115) has joined #ceph
[0:54] <aarontc> Freeaqingme: https://rubygems.org/gems/ndo
[0:54] <Gugge-47527> sherry: the crushmap does not choose the pool to use for an object, you do
[0:54] <Freeaqingme> I got ansible for that
[0:54] <Freeaqingme> all the same, I guess
[0:55] <aarontc> Freeaqingme: yeah, there are a dozen other ways, I just happen to know about ndo
[0:55] <sherry> Gugge-47257: bt I cn assign each pool to the specific rule which uses one pool for storing objects
[0:55] * mattbenjamin (~matt@76-206-42-105.lightspeed.livnmi.sbcglobal.net) has joined #ceph
[0:56] <Gugge-47527> sherry: whatever application you use to put the objects into a pool, you have to tell what pool to use
[0:56] <Gugge-47527> sherry: the pool is not chosen by the crushmap
[0:56] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[0:56] * ChanServ sets mode +o elder
[0:57] <Freeaqingme> aarontc, I don't like your answer though. Had hoped that realistically speaking it would be generally stable
[0:57] <Freeaqingme> ;)
[0:57] <sherry> u can set the pool to use specific rule, can't you? ceph osd pool set poolname crush_ruleset number
[0:57] <aarontc> Freeaqingme: sorry to disappoint. The docs do presently state "CephFS is not currently recommended for production use"...
[0:58] <Freeaqingme> yeah, I'm fully aware
[0:58] <aarontc> I'm in the same boat, though, I want it to be usable also :)
[0:58] <Gugge-47527> sherry: yes you can specify what rule a pool uses.
[0:59] <aarontc> I was running it for several months before having to admit defeat and move my data to large RBD images that are shared over NFS via several VMs :/
[0:59] <Gugge-47527> sherry: and that tells ceph what to do with data you put in that pool
[0:59] <Gugge-47527> im pretty happy with my current rbd->zfs->nfs shares :)
[0:59] <Pedras> aarontc: what sort of trouble did you run into?
[1:00] <Freeaqingme> aarontc, so you share the RBD thingie using NFS?
[1:01] <aarontc> Pedras: aside from the disappearing files problem, I also experienced a pretty serious issue with a missing object stopping the MDS from being able to replay the journal. Sage helped out with a commit that let me skip that part of the journal replay, but I have no idea what's gone missing since that happened :)
[1:02] <sherry> Gugge-47527: well, then you specify a rule to assign objects to one pool only.
[1:02] <Pedras> yeah… I have been trying to convince my crowd to use nfs/rbd and avoid cephfs
[1:02] <Gugge-47527> sherry: no
[1:04] <aarontc> I still have some stuff in CephFS and I'm still using it for less important things :) I haven't given up yet, Pedras
[1:04] * sarob_ (~sarob@2001:4998:effd:600:35f0:8be7:a373:e02c) Quit (Remote host closed the connection)
[1:05] <Gugge-47527> i dream about cephfs with stable snapshot support, and incremental snapshot send/receive support :)
[1:05] * sarob (~sarob@2001:4998:effd:600:35f0:8be7:a373:e02c) has joined #ceph
[1:05] <Freeaqingme> like ZFS, but then distributed ;)
[1:06] <Pedras> aarontc: I have not given up but I don't want stuff to vanish regardless of its importance
[1:06] <sherry> Gugge-47527: http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
[1:06] <Pedras> aarontc: how large was/is your tree (ie. depth, # of files)
[1:06] <Gugge-47527> sherry: as i said yes, the rule tells ceph where to put data you put in that pool
[1:06] <aarontc> Pedras: I had about 600 million files stored, but the deepest directories were probably 12 or 14 deep
[1:07] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Ping timeout: 480 seconds)
[1:07] <Gugge-47527> sherry: but you need to manually decide what pool yuu want to use with each object
[1:08] * AfC (~andrew@2001:388:a098:120:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[1:09] <sherry> Gugge-47527: so whats the point to set the pool with specific rule?
[1:10] <Gugge-47527> the point is that you can put some data in one pool, which uses some osds, and you can put other data on another pool, which uses other osds
[1:10] <Gugge-47527> but _you_ chosse what data to put where
[1:10] <Gugge-47527> or you use the new not finished cache pool
[1:10] <Gugge-47527> to let ceph put more used data on the fast pool
[1:12] <sherry> I dont understand, u r saying that the point is to put some data in one pool, bt again I have to choose where to save data?!!!
[1:13] <Gugge-47527> lets say you have 10 million images
[1:13] * sarob (~sarob@2001:4998:effd:600:35f0:8be7:a373:e02c) Quit (Ping timeout: 480 seconds)
[1:13] <Gugge-47527> 100k of them are used a lot, and the rest are never used
[1:13] * ircolle (~Adium@2601:1:8380:2d9:d4fb:b147:ca39:993f) Quit (Quit: Leaving.)
[1:13] <Gugge-47527> you could put the 100k in a fast pool, and the rest in a slow pool
[1:13] <sherry> ah, I not talking abt migration
[1:14] <Pedras> aarontc: your data is probably still there. sounds like the mds went for seppuku
[1:14] <sherry> Im talking abt saving data at the first place
[1:14] <Gugge-47527> sherry: so am i
[1:14] <Gugge-47527> sherry: you put the 100k most used images in your fast pool
[1:14] <aarontc> Pedras: well, there was at least one case where an object in the 'metadata' pool which was a directory entry simply didn't exist, so who knows :)
[1:15] <Pedras> hopefully this year..
[1:15] * xarses (~andreww@12.164.168.115) Quit (Quit: Leaving)
[1:16] * xarses (~andreww@12.164.168.115) has joined #ceph
[1:16] <aarontc> Yeah, definitely :) CephFS is way more appealing than NFS in my book
[1:18] <Freeaqingme> aarontc, you're way more involved in this ceph thing than I am. what would be your estimate for it becoming stable enough for you to use?
[1:18] <Pedras> the rbd bit is quite robust
[1:18] <sherry> Gugge-47527: So u would say that 100k would be saved on my fast pool if I specify to save 100k in my fast pool?
[1:20] <Freeaqingme> Pedras, sorry, I meant the cephfs part specifically
[1:20] <aarontc> Freeaqingme: I'm not a Ceph developer (and I have too many of my own projects to hack on it much unless they hire me ;)) so I really can't say. Sorry :(
[1:20] <Gugge-47527> sherry: im saying you can put data in whatever pool you want, for whatever reason you want. but _you_ decide
[1:20] * mattt_ (~textual@cpc25-rdng20-2-0-cust162.15-3.cable.virginm.net) has joined #ceph
[1:21] * mattt_ (~textual@cpc25-rdng20-2-0-cust162.15-3.cable.virginm.net) Quit ()
[1:21] <Pedras> I am under the impression this year cephfs may see more action
[1:21] <Pedras> but I leave to someone closer to development to comment on that
[1:22] <Pedras> I have abused cephfs not to the extent aarontc has
[1:22] <Pedras> and I don't think I have lost files or anything
[1:22] <sherry> Gugge-47527: you're saying setting a rule to a pool does not save data to the pool by default?
[1:23] <Pedras> but it is a test env
[1:23] <Gugge-47527> sherry: go read the documentation, setup a system, play with it. :)
[1:24] * xarses (~andreww@12.164.168.115) Quit (Ping timeout: 480 seconds)
[1:24] <sherry> alright, thanks
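
For reference, a minimal sketch of the approach described above and in the doc sherry linked: give the SSD and HDD OSDs separate CRUSH roots, write one rule per root, then point each pool at a rule by hand. The bucket names, ids and ruleset number below are invented for illustration:

    # fragment of a decompiled CRUSH map
    root ssd {
        id -5
        alg straw
        hash 0
        item host1-ssd weight 1.000
        item host2-ssd weight 1.000
    }
    rule ssd {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
    }
    # weights only balance data among the OSDs under that root;
    # the pool is tied to the rule explicitly:
    ceph osd pool set fast-pool crush_ruleset 3
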
[1:24] <aarontc> My gut feeling is that CephFS is very close, and I think Inktank will keep to their plan of stabilizing it w/ commercial support this year :)
[1:26] <Pedras> so this "cache pool" stuff… you guys played with it?
[1:27] * dmick (~dmick@2607:f298:a:607:3db1:e6ae:1ece:5c0b) has left #ceph
[1:27] <Freeaqingme> Pedras, in theory it should 'just work' :)
[1:28] <Pedras> eheh
[1:29] <Pedras> since it is listed under "Development" in the docs
[1:29] <Pedras> I was wondering about its state
[1:31] * Shmouel (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) Quit (Read error: Connection reset by peer)
[1:31] * mattt_ (~textual@cpc25-rdng20-2-0-cust162.15-3.cable.virginm.net) has joined #ceph
[1:32] * mattt_ (~textual@cpc25-rdng20-2-0-cust162.15-3.cable.virginm.net) Quit ()
[1:33] <Nats> its due for the feb release
[1:33] * Shmouel (~Sam@ns1.anotherservice.com) has joined #ceph
[1:33] * xarses (~andreww@12.164.168.115) has joined #ceph
[1:34] * dmick (~dmick@2607:f298:a:607:c426:e4bd:7e8d:8913) has joined #ceph
[1:39] <Psi-Jack> Ahh, release planned for Feb?
[1:39] <Pedras> firefly...
[1:39] <Psi-Jack> Oh, nice name. :)
[1:39] <Psi-Jack> When we get to the S's, you can call it serenity. :)
[1:40] <Pedras> I read/head somewhere about being the first "LTS"?
[1:40] <Pedras> is that true?
[1:40] <Pedras> head=heard
[1:40] <Psi-Jack> There's been many LTS's, including Dumpling, what I'm still on now.
[1:40] <Pedras> I stand corrected… will this be one?
[1:41] <Psi-Jack> All named versions are LTS, in its history.
[1:41] <Pedras> didn't realize that
[1:41] <Freeaqingme> have you guys always been able to upgrade rados without any problems?
[1:41] <Psi-Jack> But, I heard from a dev that emperor probably wouldn't have as long an LTS as dumpling, and firefly may be the next long term one.
[1:43] * clayb (~kvirc@proxy-nj1.bloomberg.com) Quit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
[1:45] * alram (~alram@cpe-76-167-50-51.socal.res.rr.com) Quit (Read error: Operation timed out)
[1:46] <Psi-Jack> Not always been flawless, Freeaqingme, but as far as ceph itself goes, I've usually had little to no issue, provided I ensure my ceph repos have a higher priority than epel. I'm specifically masking the epel ceph packages these days to prevent them ever conflicting, since they're so behind.
[1:46] * nerdtron (~oftc-webi@202.60.8.250) has joined #ceph
[1:47] <Freeaqingme> "not always flawless" ... "had little issue" - that seems contradictory Psi-Jack ?
[1:47] * JC (~JC@71-94-44-243.static.trlk.ca.charter.com) has joined #ceph
[1:48] <dmick> not at all. "little to no" != "no"
[1:53] <Psi-Jack> Freeaqingme: I've had one hiccup that resulted in annoyance, but no data loss, and no downtime.
[1:53] <Freeaqingme> sweet
[1:53] <Psi-Jack> Just a network storm for a bit while it was replicating to the other servers.
[1:54] <Freeaqingme> I'm looking to run it on debian btw
[1:54] <Psi-Jack> I run, personally, Ceph on CentOS 6.5.
[1:54] <Freeaqingme> I can handle anything on bonded 10Gbit links ;)
[1:54] <Psi-Jack> heh.
[1:54] <Psi-Jack> Yeaaaah.. Not bad. I have bonded 1Gbit. :)
[1:54] <Freeaqingme> heh
[1:54] <Psi-Jack> 2x1Gb
[1:54] <Freeaqingme> we run storage servers, each with 22 SSDs, that currently run ZFS
[1:55] <Freeaqingme> networking was a clear bottleneck with 1gbit
[1:55] <Psi-Jack> Ouch yeah, And memory with ZFS can be a serious PITA.
[1:57] <Freeaqingme> we have special devices that act as a ZIL, with a capacitor that acts like a battery, plus something like 100 GiB RAM in the storage machines
[1:58] <Freeaqingme> it's a nice setup. We're actually considering moving to iscsi so we can use bcache, but that's because I didn't know about cachefs ;) (thanks aarontc!). But the setup has some major SPOFs, wherein many things go down once a single storage server is down
[1:59] <Freeaqingme> and when an SSD or HDD breaks, we must replace it semi-immediately
[2:01] <Psi-Jack> yeah, ceph is purely awesome.
[2:01] <Psi-Jack> I've been running Ceph through several LTS versions so far, since.... Oh... What was it.. I started on a development version then upgraded to cuttlefish and then dumpling, I believe.
[2:02] <Psi-Jack> And now, running for just at a year now.
[2:02] <Psi-Jack> Nope. Actually I started with a devel version just before bobtail.
[2:03] <Psi-Jack> because ceph-deploy wasn't around yet. :)
[2:04] <Freeaqingme> heh
[2:04] <Freeaqingme> that's my primary reason for not choosing gluster. I'm reading that it's harder to maintain
[2:05] <Freeaqingme> even though in theory it should perform better
[2:05] <Psi-Jack> Not to mention, it's slow as dirt, and each client acts as the server.
[2:05] <Psi-Jack> Horrible model.
[2:05] <Freeaqingme> there's that ;)
[2:05] * zerick (~eocrospom@190.187.21.53) Quit (Remote host closed the connection)
[2:07] <Freeaqingme> anyhows, I gotta get up in 4 hours or so. ttyl
[2:07] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) Quit (Quit: Leaving.)
[2:08] * yanzheng (~zhyan@134.134.137.71) has joined #ceph
[2:09] <Psi-Jack> Still ... kinda sucks.. That ceph can't mount cephfs from itself.
[2:09] <Freeaqingme> what ya mean?
[2:09] <Psi-Jack> From the same server running the ceph-mon/osd
[2:09] <Psi-Jack> Err, mds even. ;)
[2:13] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[2:14] * AfC (~andrew@182.255.122.166) has joined #ceph
[2:16] <yanzheng> Psi-Jack, you can use ceph-fuse
[2:17] <Psi-Jack> Yeah.. I just don't want to use ceph-fuse.. Well, not sure how well it'd work with autofs anyway.
[2:17] <Psi-Jack> I want to have cephfs be used for all my autofs home-mounts so when I login to any server, I immediately have a shared home.
[2:18] <yanzheng> it's unlikely the issue will be fixed in near future
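
A minimal ceph-fuse invocation for the workaround yanzheng suggests (the kernel client on an OSD/MDS host risks deadlock under memory pressure, which is why FUSE is the usual answer); monitor address and mount point are examples:

    sudo apt-get install ceph-fuse
    sudo ceph-fuse -m 192.168.0.1:6789 /mnt/home
    # add -k /path/to/keyring if the client keyring is not in the default location
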
[2:21] * JoeGruher (~JoeGruher@134.134.137.71) Quit (Remote host closed the connection)
[2:22] * xarses (~andreww@12.164.168.115) Quit (Ping timeout: 480 seconds)
[2:22] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:23] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[2:23] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[2:23] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[2:25] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[2:31] * AfC (~andrew@182.255.122.166) Quit (Quit: Leaving.)
[2:32] * kaizh (~oftc-webi@128-107-239-235.cisco.com) Quit (Remote host closed the connection)
[2:34] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) has joined #ceph
[2:48] * i_m (~ivan.miro@2001:470:1f0b:1a08:3e97:eff:febd:a80c) Quit (Ping timeout: 480 seconds)
[2:48] * LeaChim (~LeaChim@host86-174-30-7.range86-174.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[3:10] * shang_ (~ShangWu@220-135-203-169.HINET-IP.hinet.net) has joined #ceph
[3:10] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) Quit (Read error: Connection reset by peer)
[3:19] * angdraug (~angdraug@12.164.168.115) Quit (Quit: Leaving)
[3:19] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) Quit (Quit: Leaving)
[3:20] * diegows (~diegows@190.190.17.57) Quit (Ping timeout: 480 seconds)
[3:26] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) has joined #ceph
[3:32] * Cube1 (~Cube@66-87-77-220.pools.spcsdns.net) has joined #ceph
[3:32] * Cube (~Cube@66-87-77-220.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[3:41] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) Quit (Quit: Leaving.)
[3:55] * cephalobot (~ceph@ds2390.dreamservers.com) Quit (Remote host closed the connection)
[3:57] * shang_ (~ShangWu@220-135-203-169.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[3:58] * kbader_ (~kyle@cerebrum.dreamservers.com) Quit (Remote host closed the connection)
[4:00] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) Quit (Quit: Leaving.)
[4:01] * rturk-away (~rturk@ds2390.dreamservers.com) Quit (Ping timeout: 480 seconds)
[4:01] * iaXe (~axe@223.223.202.195) has joined #ceph
[4:02] <iaXe> hi there.
[4:03] <iaXe> I used fio to test ceph performance. I have 1 mon 1 mds 4 osds in the cluster.
[4:04] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[4:05] * bandrus (~Adium@63.192.141.3) Quit (Quit: Leaving.)
[4:07] <iaXe> The test is on a ubuntu client with an rbd mounted. I compared pool minsize=1 and minsize=3 under bs=4K random write conditions. When minsize=1 the iops is 108; when minsize=3 the iops is 240. Why do the iops differ so much?
[4:08] * iaXe (~axe@223.223.202.195) Quit (Quit: Leaving)
[4:10] * iaXe (~axe@223.223.202.194) has joined #ceph
[4:12] <iaXe> anyone can help me?
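
For whoever picks this up later: size is the replica count, while min_size is only the number of replicas that must be up before I/O is allowed, so the two are easy to mix up when benchmarking. A sketch of inspecting both on the default rbd pool, assuming a release that exposes min_size through pool get/set:

    ceph osd pool get rbd size
    ceph osd pool get rbd min_size
    ceph osd pool set rbd size 3        # number of replicas written
    ceph osd pool set rbd min_size 2    # replicas required for I/O to proceed
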
[4:24] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[4:24] * KindOne (~KindOne@0001a7db.user.oftc.net) has joined #ceph
[4:26] * kraken (~kraken@gw.sepia.ceph.com) Quit (Ping timeout: 480 seconds)
[4:32] * AfC (~andrew@182.255.122.166) has joined #ceph
[4:33] * sagelap (~sage@182.255.123.109) has joined #ceph
[4:33] * sagelap (~sage@182.255.123.109) has left #ceph
[4:47] * shang_ (~ShangWu@175.41.48.77) has joined #ceph
[4:59] * dmick (~dmick@2607:f298:a:607:c426:e4bd:7e8d:8913) has left #ceph
[5:05] * fireD_ (~fireD@93-142-213-144.adsl.net.t-com.hr) has joined #ceph
[5:07] * fireD (~fireD@93-142-204-23.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[5:07] * AfC (~andrew@182.255.122.166) Quit (Quit: Leaving.)
[5:17] * bandrus (~Adium@12.111.91.2) has joined #ceph
[5:17] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[5:23] * sagelap (~sage@182.255.123.109) has joined #ceph
[5:28] * julian (~julianwa@125.70.133.219) has joined #ceph
[5:31] * sagelap (~sage@182.255.123.109) Quit (Ping timeout: 480 seconds)
[5:43] * Vacum_ (~vovo@i59F79F21.versanet.de) has joined #ceph
[5:46] * ScOut3R (~ScOut3R@54009895.dsl.pool.telekom.hu) has joined #ceph
[5:47] * sagelap (~sage@182.255.123.109) has joined #ceph
[5:47] * sagelap (~sage@182.255.123.109) has left #ceph
[5:47] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[5:50] * Vacum (~vovo@88.130.220.104) Quit (Ping timeout: 480 seconds)
[5:50] * bandrus (~Adium@12.111.91.2) Quit (Quit: Leaving.)
[5:55] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) has joined #ceph
[6:00] * The_Bishop (~bishop@2001:470:50b6:0:cc9c:19d1:5a7e:36c8) Quit (Ping timeout: 480 seconds)
[6:04] * kbader (~kyle@cerebrum.dreamservers.com) has joined #ceph
[6:04] * cephalobot (~ceph@ds2390.dreamservers.com) has joined #ceph
[6:04] * rturk-away (~rturk@ds2390.dreamservers.com) has joined #ceph
[6:05] * kraken (~kraken@gw.sepia.ceph.com) has joined #ceph
[6:06] * i_m (~ivan.miro@2001:470:1f0b:1a08:3e97:eff:febd:a80c) has joined #ceph
[6:09] * The_Bishop (~bishop@2001:470:50b6:0:c109:31f4:46cc:6700) has joined #ceph
[6:11] * julian (~julianwa@125.70.133.219) Quit (Quit: afk)
[6:12] * julian (~julianwa@125.70.133.219) has joined #ceph
[6:14] * The_Bishop (~bishop@2001:470:50b6:0:c109:31f4:46cc:6700) Quit ()
[6:20] * kaizh (~oftc-webi@128-107-239-233.cisco.com) has joined #ceph
[6:22] * ScOut3R (~ScOut3R@54009895.dsl.pool.telekom.hu) Quit (Ping timeout: 480 seconds)
[6:23] <Nats> ceph.com seems to be offline
[6:30] * cephalobot (~ceph@ds2390.dreamservers.com) Quit (Remote host closed the connection)
[6:30] * KindTwo (KindOne@50.96.80.201) has joined #ceph
[6:31] * kraken (~kraken@gw.sepia.ceph.com) Quit (Read error: Operation timed out)
[6:31] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:31] * KindTwo is now known as KindOne
[6:32] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[6:33] * kbader (~kyle@cerebrum.dreamservers.com) Quit (Remote host closed the connection)
[6:36] * rturk-away (~rturk@ds2390.dreamservers.com) Quit (Ping timeout: 480 seconds)
[6:36] * AfC (~andrew@182.255.122.166) has joined #ceph
[6:43] * cephalobot (~ceph@ds2390.dreamservers.com) has joined #ceph
[6:44] * kraken (~kraken@gw.sepia.ceph.com) has joined #ceph
[6:44] * rturk-away (~rturk@ds2390.dreamservers.com) has joined #ceph
[6:45] * kbader (~kyle@cerebrum.dreamservers.com) has joined #ceph
[6:47] * cofol1986 (~xwrj@110.90.119.113) has joined #ceph
[6:47] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[6:56] * AfC (~andrew@182.255.122.166) Quit (Quit: Leaving.)
[6:58] <cofol1986> Hello, I got a weird error. While deploying the cluster with ceph-deploy, the deploy output prints the right ip in "extra_probe_peers", but in the monmap the ip address is wrong.
[6:58] <cofol1986> does anybody meet this problem?
[6:58] <cofol1986> I use ceph emperor on centos 6.5 final
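
A common cause of this is the host resolving its own name to the wrong interface when the monitor is created; pinning the addresses in ceph.conf before running ceph-deploy mon create is one workaround. A sketch with placeholder addresses:

    [global]
    mon initial members = node1
    mon host = 192.168.10.11
    public network = 192.168.10.0/24
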
[7:04] * mattbenjamin (~matt@76-206-42-105.lightspeed.livnmi.sbcglobal.net) Quit (Quit: Leaving.)
[7:13] * codice_ is now known as codice
[7:17] <codice> ceph.com down, I guess?
[7:20] <blahnana> looks that way
[7:30] * Cube1 (~Cube@66-87-77-220.pools.spcsdns.net) Quit (Quit: Leaving.)
[7:48] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) has joined #ceph
[7:57] * leotreasure (~leotreasu@182.255.121.106) has joined #ceph
[8:00] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) has left #ceph
[8:02] * leotreasure (~leotreasu@182.255.121.106) Quit (Quit: leotreasure)
[8:16] * bandrus (~Adium@12.111.91.2) has joined #ceph
[8:19] * KindTwo (KindOne@50.96.225.35) has joined #ceph
[8:21] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[8:21] * KindTwo is now known as KindOne
[8:25] * Cube (~Cube@66-87-77-220.pools.spcsdns.net) has joined #ceph
[8:28] * leotreasure (~leotreasu@182.255.121.106) has joined #ceph
[8:28] * Cube (~Cube@66-87-77-220.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[8:29] * Sysadmin88 (~IceChat77@2.218.8.40) Quit (Quit: Give a man a fish and he will eat for a day. Teach him how to fish, and he will sit in a boat and drink beer all day)
[8:33] * leotreasure (~leotreasu@182.255.121.106) Quit (Quit: leotreasure)
[8:37] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[8:39] * AfC (~andrew@182.255.122.166) has joined #ceph
[8:46] * leotreasure (~leotreasu@182.255.121.106) has joined #ceph
[8:51] * mkoo (~mkoo@chello213047069005.1.13.vie.surfer.at) has joined #ceph
[8:56] * codice_ (~toodles@75-140-64-194.dhcp.lnbh.ca.charter.com) has joined #ceph
[8:56] * leotreasure (~leotreasu@182.255.121.106) Quit (Quit: leotreasure)
[8:57] * codice (~toodles@75-140-64-194.dhcp.lnbh.ca.charter.com) Quit (Ping timeout: 480 seconds)
[9:00] * kaizh (~oftc-webi@128-107-239-233.cisco.com) Quit (Remote host closed the connection)
[9:01] * ikla (~lbz@c-71-237-62-220.hsd1.co.comcast.net) has joined #ceph
[9:08] * bandrus (~Adium@12.111.91.2) Quit (Quit: Leaving.)
[9:08] * sagelap (~sage@182.255.123.109) has joined #ceph
[9:09] * leotreasure (~leotreasu@182.255.121.106) has joined #ceph
[9:17] * mattt_ (~textual@94.236.7.190) has joined #ceph
[9:18] * leotreasure (~leotreasu@182.255.121.106) Quit (Quit: leotreasure)
[9:23] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[9:24] * bandrus (~Adium@12.111.91.2) has joined #ceph
[9:24] * thomnico (~thomnico@host.234.54.23.62.rev.coltfrance.com) has joined #ceph
[9:25] <codice_> /nick codice
[9:25] * codice_ is now known as codice
[9:31] * ldurnez (~ldurnez@proxy.ovh.net) has joined #ceph
[9:31] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[9:32] * bandrus (~Adium@12.111.91.2) Quit (Quit: Leaving.)
[9:34] * leotreasure (~leotreasu@182.255.121.106) has joined #ceph
[9:34] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[9:35] * AfC (~andrew@182.255.122.166) Quit (Ping timeout: 480 seconds)
[9:38] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[9:41] * mancdaz_away is now known as mancdaz
[9:41] * mancdaz (~mancdaz@2a00:1a48:7807:102:94f4:6b56:ff08:7ca6) Quit (Killed (NickServ (Too many failed password attempts.)))
[9:41] * mancdaz (~mancdaz@2a00:1a48:7807:102:94f4:6b56:ff08:7ca6) has joined #ceph
[9:42] * garphy`aw is now known as garphy
[9:44] * leotreasure (~leotreasu@182.255.121.106) Quit (Quit: leotreasure)
[9:47] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[9:47] * ChanServ sets mode +v andreask
[9:47] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[9:47] * sagelap (~sage@182.255.123.109) Quit (Ping timeout: 480 seconds)
[9:50] * sagelap (~sage@182.255.123.109) has joined #ceph
[9:51] * Koma (~Koma@0001c112.user.oftc.net) Quit (Max SendQ exceeded)
[9:51] * Koma (~Koma@0001c112.user.oftc.net) has joined #ceph
[9:54] * LCF (ball8@193.231.broadband16.iol.cz) Quit (Ping timeout: 480 seconds)
[9:58] * sagelap (~sage@182.255.123.109) Quit (Ping timeout: 480 seconds)
[10:01] * julian_ (~julianwa@125.70.133.219) has joined #ceph
[10:02] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[10:03] * ksingh (~Adium@2001:708:10:10:a49a:45e1:fbfd:4edc) has joined #ceph
[10:04] <ksingh> how to check OSD disk device name
[10:04] <ksingh> any command for that ?
[10:05] <ksingh> i have osd.13 , i want to identify its device name ?
[10:05] <iaXe> ceph.com is alive now
[10:06] <iaXe> ssh osd.13 && mount
[10:07] * julian (~julianwa@125.70.133.219) Quit (Ping timeout: 480 seconds)
[10:10] <iaXe> ksingh, actually I think you should look into your ceph conf file
[10:11] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Ping timeout: 480 seconds)
[10:12] * thomnico (~thomnico@host.234.54.23.62.rev.coltfrance.com) Quit (Read error: No route to host)
[10:15] * Guest2916 is now known as joOnas
[10:17] * Cube1 (~Cube@66-87-77-220.pools.spcsdns.net) has joined #ceph
[10:23] * jhurlbert (~jhurlbert@216.57.209.252) Quit (Ping timeout: 480 seconds)
[10:28] * jhurlbert (~jhurlbert@216.57.209.252) has joined #ceph
[10:31] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[10:31] * AfC (~andrew@182.255.122.166) has joined #ceph
[10:31] * AfC (~andrew@182.255.122.166) Quit ()
[10:31] * BManojlovic (~steki@217-162-200-111.dynamic.hispeed.ch) has joined #ceph
[10:32] * thomnico (~thomnico@host.234.54.23.62.rev.coltfrance.com) has joined #ceph
[10:33] * AfC (~andrew@182.255.122.166) has joined #ceph
[10:33] * LCF (ball8@193.231.broadband16.iol.cz) has joined #ceph
[10:38] * tnt (~tnt@ec2-54-200-98-43.us-west-2.compute.amazonaws.com) has joined #ceph
[10:38] * LeaChim (~LeaChim@host86-174-30-7.range86-174.btcentralplus.com) has joined #ceph
[10:38] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[10:38] <tnt> is it just me or is the ceph blog empty ? ( http://ceph.com/community/blog/ )
[10:38] * haomaiwa_ (~haomaiwan@118.187.35.10) Quit (Read error: Connection reset by peer)
[10:40] * Cube1 (~Cube@66-87-77-220.pools.spcsdns.net) Quit (Quit: Leaving.)
[10:40] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[10:41] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:45] * haomaiwang (~haomaiwan@117.79.232.238) has joined #ceph
[10:47] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[10:48] * garphy is now known as garphy`aw
[10:52] * yanzheng (~zhyan@134.134.137.71) Quit (Remote host closed the connection)
[10:59] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[11:01] * sagelap (~sage@2001:388:a098:120:c685:8ff:fe59:d486) has joined #ceph
[11:05] * linuxkidd (~linuxkidd@cpe-066-057-019-145.nc.res.rr.com) Quit (Ping timeout: 480 seconds)
[11:06] * leotreasure (~leotreasu@203-59-219-57.perm.iinet.net.au) has joined #ceph
[11:08] * nick (~nick@digo.dischord.org) Quit (Quit: ZNC - http://znc.in)
[11:12] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[11:14] * codice (~toodles@75-140-64-194.dhcp.lnbh.ca.charter.com) Quit (Read error: Operation timed out)
[11:14] * leotreasure (~leotreasu@203-59-219-57.perm.iinet.net.au) Quit (Quit: leotreasure)
[11:16] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Read error: Operation timed out)
[11:17] * codice (~toodles@75-140-64-194.dhcp.lnbh.ca.charter.com) has joined #ceph
[11:28] * linuxkidd (~linuxkidd@cpe-066-057-019-145.nc.res.rr.com) has joined #ceph
[11:29] * AfC (~andrew@182.255.122.166) Quit (Quit: Leaving.)
[11:31] * BManojlovic (~steki@217-162-200-111.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[11:36] * nerdtron (~oftc-webi@202.60.8.250) Quit (Quit: Page closed)
[11:38] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[11:41] * mattbenjamin (~matt@76-206-42-105.lightspeed.livnmi.sbcglobal.net) has joined #ceph
[11:45] * pressureman (~pressurem@62.217.45.26) has joined #ceph
[11:46] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[11:47] * leotreasure (~leotreasu@124-148-97-102.dyn.iinet.net.au) has joined #ceph
[11:49] * codice (~toodles@75-140-64-194.dhcp.lnbh.ca.charter.com) Quit (Read error: Operation timed out)
[11:51] * codice (~toodles@75-140-64-194.dhcp.lnbh.ca.charter.com) has joined #ceph
[11:54] * nick (~nick@zarquon.dischord.org) has joined #ceph
[11:59] * SvenPHX (~Adium@wsip-174-79-34-244.ph.ph.cox.net) Quit (Quit: Leaving.)
[12:00] * SvenPHX (~Adium@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[12:01] * mattt_ (~textual@94.236.7.190) Quit (Remote host closed the connection)
[12:02] * mattt_ (~textual@92.52.76.140) has joined #ceph
[12:02] <pressureman> anybody here who can answer a crush map question?
[12:04] * shang_ (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[12:05] <andreask> pressureman: give it a try
[12:06] <pressureman> hi andreask, i have a very small cluster (two hosts, each with two OSDs, replication 2), and looking at my pg dump, i see that some PGs have both replicas on the same host
[12:07] <pressureman> this is obviously not optimal...
[12:07] <pressureman> i have not edited a crush ruleset before... but after a bit of reading, it seems i should be looking at chooseleaf, right?
[12:07] * BManojlovic (~steki@217-162-200-111.dynamic.hispeed.ch) has joined #ceph
[12:07] <pressureman> something like "step chooseleaf firstn 0 type host" ?
[12:08] <andreask> yes, that looks correct
[12:10] <pressureman> the pool (libvirt-pool) actually isn't shown in the crushmap at all... so i guess i should just copy and paste a rule from one of the default pools, and change the choose/chooseleaf bit
[12:12] <tnt> you can also just assign a any existing rule to the pool
[12:12] * lxo (~aoliva@lxo.user.oftc.net) Quit (Read error: Connection reset by peer)
[12:12] <tnt> there is no need to have a 1:1 mapping between rule and pool.
[12:13] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) Quit (Remote host closed the connection)
[12:14] <pressureman> tnt, is that specified in the decompiled crushmap somewhere? i just see three rules, for data, metadata and rbd. no mention whatsoever of my libvirt-pool
[12:14] <tnt> which rule is used by which pool is not in the crush map
[12:15] <pressureman> ah yes, just found the command in the crush docs
[12:15] <tnt> ceph osd pool set ${pool_name} crush_ruleset ${crush_ruleset}
[12:15] <pressureman> thanks
[12:16] <pressureman> so in this situation, i should ideally tweak all three existing default rules to use chooseleaf, otherwise (if / when i use the default pools), the placement could be suboptimal... right?
[12:17] <pressureman> with only two hosts containing two OSDs each, there is a reasonable probability that a PG will have both replicas on the same host
[12:18] <tnt> yes
[12:21] * mattbenjamin (~matt@76-206-42-105.lightspeed.livnmi.sbcglobal.net) Quit (Quit: Leaving.)
[12:22] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[12:25] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) has joined #ceph
[12:26] * thomnico (~thomnico@host.234.54.23.62.rev.coltfrance.com) Quit (Ping timeout: 480 seconds)
[12:33] <pressureman> tnt, andreask, thanks for your help - changed the crushmap to use chooseleaf host, and pg dump now confirms that no two replicas of the same pg are on the same host
[12:35] <andreask> yw
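
For anyone following along, the edit cycle for that change looks roughly like this (file names are arbitrary):

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # in crush.txt, change:  step choose firstn 0 type osd
    #                   to:  step chooseleaf firstn 0 type host
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new
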
[12:37] * garphy`aw is now known as garphy
[12:39] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[12:46] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[12:47] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[12:56] * codice (~toodles@75-140-64-194.dhcp.lnbh.ca.charter.com) Quit (Read error: Operation timed out)
[13:06] * pressureman (~pressurem@62.217.45.26) Quit (Quit: Ex-Chat)
[13:08] * allsystemsarego (~allsystem@188.26.167.66) has joined #ceph
[13:09] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[13:09] * ChanServ sets mode +v andreask
[13:15] * BManojlovic (~steki@217-162-200-111.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[13:37] * codice (~toodles@75-140-64-194.dhcp.lnbh.ca.charter.com) has joined #ceph
[13:39] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[13:41] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has joined #ceph
[13:42] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has left #ceph
[13:43] * iaXe (~axe@223.223.202.194) Quit (Ping timeout: 480 seconds)
[13:46] * garphy is now known as garphy`aw
[13:47] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[13:49] * zjf (~zjf@103.31.149.32) has joined #ceph
[13:51] <zjf> is there a document about deploying ceph on a single node?
[13:54] * i_m (~ivan.miro@2001:470:1f0b:1a08:3e97:eff:febd:a80c) Quit (Ping timeout: 480 seconds)
[13:54] * guppy (~quassel@guppy.xxx) Quit (Ping timeout: 480 seconds)
[14:00] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[14:11] * garphy`aw is now known as garphy
[14:14] * ksingh (~Adium@2001:708:10:10:a49a:45e1:fbfd:4edc) Quit (Quit: Leaving.)
[14:15] * guppy (~quassel@guppy.xxx) has joined #ceph
[14:18] * i_m (~ivan.miro@pool-109-191-69-34.is74.ru) has joined #ceph
[14:18] <andreask> zjf: don't think so
[14:18] <andreask> ... but is possible
[14:21] * diegows (~diegows@190.190.17.57) has joined #ceph
[14:21] <loicd> leseb: http://www.sebastien-han.fr/blog/2014/01/08/ceph-admin-api-init/ is 404 it seems
[14:22] * sleinen1 (~Adium@2001:620:0:25:cdab:63ba:91aa:30a1) has joined #ceph
[14:24] * thomnico (~thomnico@host.234.54.23.62.rev.coltfrance.com) has joined #ceph
[14:26] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[14:34] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[14:35] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[14:35] * psomas (~psomas@147.102.2.106) has joined #ceph
[14:39] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[14:40] * yanzheng (~zhyan@134.134.139.74) has joined #ceph
[14:41] * thomnico (~thomnico@host.234.54.23.62.rev.coltfrance.com) Quit (Read error: No route to host)
[14:45] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[14:46] <loicd> zjf: there is http://dachary.org/?p=2374 and the associated tiny script http://dachary.org/wp-uploads/2013/10/micro-osd.txt ;-)
[14:46] * neary (~neary@62.129.6.2) has joined #ceph
[14:47] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[14:48] <loicd> more seriously, if you just ceph-deploy with one mon on a given host + use the same host as an osd, you will end up with a one-node cluster. Reducing the required number of replicas to 1 will allow you to put data into it (that's what ceph osd pool set data size 1 is about)
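
A very rough single-node sketch along those lines; the host name and disk are placeholders, the ceph-deploy sub-commands follow the quick-start docs of the time, and loicd's micro-osd script above is the more complete reference:

    ceph-deploy new node1
    ceph-deploy install node1
    ceph-deploy mon create-initial
    ceph-deploy osd create node1:/dev/sdb
    # with a single host, drop the replica count so PGs can go active
    ceph osd pool set data size 1
    ceph osd pool set metadata size 1
    ceph osd pool set rbd size 1
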
[14:48] * psomas (~psomas@147.102.2.106) Quit (Ping timeout: 480 seconds)
[14:50] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) has joined #ceph
[14:50] * thomnico (~thomnico@host.234.54.23.62.rev.coltfrance.com) has joined #ceph
[14:51] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[14:56] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[14:58] <leseb> loicd: thanks! this is fixed now :)
[14:58] * psomas (~psomas@inferno.cc.ece.ntua.gr) has joined #ceph
[15:03] * BillK (~BillK-OFT@124-148-103-108.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[15:06] * clayb (~kvirc@69.191.241.59) has joined #ceph
[15:07] * psomas (~psomas@inferno.cc.ece.ntua.gr) Quit (Ping timeout: 480 seconds)
[15:12] * thomnico (~thomnico@host.234.54.23.62.rev.coltfrance.com) Quit (Ping timeout: 480 seconds)
[15:20] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[15:20] <zjf> andreask,loicd, thanks very much.
[15:23] * psomas (~psomas@147.102.2.106) has joined #ceph
[15:28] * BManojlovic (~steki@217-162-200-111.dynamic.hispeed.ch) has joined #ceph
[15:29] * ksingh (~Adium@85-76-71-174-nat.elisa-mobile.fi) has joined #ceph
[15:30] * zjf (~zjf@103.31.149.32) Quit (Remote host closed the connection)
[15:45] * i_m (~ivan.miro@pool-109-191-69-34.is74.ru) Quit (Quit: Leaving.)
[15:46] * i_m (~ivan.miro@pool-109-191-69-34.is74.ru) has joined #ceph
[15:47] * thomnico (~thomnico@host.234.54.23.62.rev.coltfrance.com) has joined #ceph
[15:57] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) has joined #ceph
[15:57] * AfC (~andrew@2001:388:a098:120:2ad2:44ff:fe08:a4c) has joined #ceph
[15:59] * markbby (~Adium@168.94.245.2) has joined #ceph
[16:00] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[16:12] * dvanders (~dvanders@dvanders-air.cern.ch) has joined #ceph
[16:12] * dvanders (~dvanders@dvanders-air.cern.ch) Quit ()
[16:19] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[16:21] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[16:24] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) has joined #ceph
[16:29] * bandrus (~Adium@12.111.91.2) has joined #ceph
[16:30] * yanzheng (~zhyan@134.134.139.74) Quit (Remote host closed the connection)
[16:35] * julian_ (~julianwa@125.70.133.219) Quit (Quit: afk)
[16:36] * AfC (~andrew@2001:388:a098:120:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[16:40] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[16:43] * ksingh1 (~Adium@85-76-74-117-nat.elisa-mobile.fi) has joined #ceph
[16:43] * Cube (~Cube@199.168.44.193) has joined #ceph
[16:47] * ksingh (~Adium@85-76-71-174-nat.elisa-mobile.fi) Quit (Ping timeout: 480 seconds)
[16:48] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:48] * chris38 (~chris38@193.49.124.64) Quit (Ping timeout: 480 seconds)
[16:52] * perfectsine (~perfectsi@if01-gn01.dal05.softlayer.com) Quit (Ping timeout: 480 seconds)
[16:52] * bandrus (~Adium@12.111.91.2) Quit (Quit: Leaving.)
[16:52] * thomnico (~thomnico@host.234.54.23.62.rev.coltfrance.com) Quit (Quit: Ex-Chat)
[16:55] * sjustlaptop (~sam@199.58.187.248) has joined #ceph
[16:58] * SvenPHX (~Adium@wsip-174-79-34-244.ph.ph.cox.net) Quit (Quit: Leaving.)
[16:58] * mattbenjamin (~matt@aa2.linuxbox.com) has joined #ceph
[17:03] * SvenPHX (~Adium@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[17:04] * bandrus (~Adium@12.111.91.2) has joined #ceph
[17:04] * ksingh1 (~Adium@85-76-74-117-nat.elisa-mobile.fi) Quit (Ping timeout: 480 seconds)
[17:07] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[17:11] * i_m (~ivan.miro@pool-109-191-69-34.is74.ru) Quit (Ping timeout: 480 seconds)
[17:14] * thomnico (~thomnico@host.234.54.23.62.rev.coltfrance.com) has joined #ceph
[17:17] * gregmark (~Adium@68.87.42.115) has joined #ceph
[17:17] * sjustlaptop (~sam@199.58.187.248) Quit (Ping timeout: 480 seconds)
[17:21] * bandrus (~Adium@12.111.91.2) Quit (Quit: Leaving.)
[17:23] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[17:28] * sjustlaptop (~sam@199.58.187.248) has joined #ceph
[17:34] * tryggvil (~tryggvil@178.19.53.254) Quit (Quit: tryggvil)
[17:34] * thomnico (~thomnico@host.234.54.23.62.rev.coltfrance.com) Quit (Remote host closed the connection)
[17:36] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[17:39] * ScOut3R (~scout3r@54009895.dsl.pool.telekom.hu) has joined #ceph
[17:40] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[17:49] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:51] * ksingh (~Adium@85-76-69-180-nat.elisa-mobile.fi) has joined #ceph
[18:00] * ircolle (~Adium@2601:1:8380:2d9:6d13:f8e3:aae9:9c8d) has joined #ceph
[18:00] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:02] * JoeGruher (~JoeGruher@jfdmzpr01-ext.jf.intel.com) has joined #ceph
[18:03] * bandrus (~Adium@63.192.141.3) has joined #ceph
[18:03] * sarob (~sarob@2601:9:7080:13a:18fb:34c1:a74c:d7ae) has joined #ceph
[18:04] * sarob (~sarob@2601:9:7080:13a:18fb:34c1:a74c:d7ae) Quit (Remote host closed the connection)
[18:04] * mattt_ (~textual@92.52.76.140) Quit (Read error: Connection reset by peer)
[18:04] * sarob (~sarob@2001:4998:effd:7801::116a) has joined #ceph
[18:08] * bcat (~bcat@64-79-127-98.static.wiline.com) has joined #ceph
[18:09] * bcat (~bcat@64-79-127-98.static.wiline.com) Quit ()
[18:09] * sarob (~sarob@2001:4998:effd:7801::116a) Quit (Remote host closed the connection)
[18:09] * sarob (~sarob@nat-dip6.cfw-a-gci.corp.yahoo.com) has joined #ceph
[18:10] <JoeGruher> is background scrub of PGs going to affect performance? i'm doing some performance testing and I can see scrub activity in the "ceph -w" output.
[18:10] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[18:11] * sarob (~sarob@nat-dip6.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[18:11] * sarob (~sarob@2001:4998:effd:7801::116a) has joined #ceph
[18:12] * sarob (~sarob@2001:4998:effd:7801::116a) Quit (Remote host closed the connection)
[18:14] * sarob (~sarob@2001:4998:effd:7801::116a) has joined #ceph
[18:15] * sarob (~sarob@2001:4998:effd:7801::116a) Quit (Remote host closed the connection)
[18:16] * Guest2545 (~coyo@thinks.outside.theb0x.org) Quit (Ping timeout: 480 seconds)
[18:16] * xmltok (~xmltok@216.103.134.250) has joined #ceph
[18:17] * sarob (~sarob@2001:4998:effd:7801::116a) has joined #ceph
[18:24] * bjornar (~bjornar@ti0099a430-0436.bb.online.no) has joined #ceph
[18:25] * sarob (~sarob@2001:4998:effd:7801::116a) Quit (Ping timeout: 480 seconds)
[18:26] * angdraug (~angdraug@12.164.168.115) has joined #ceph
[18:27] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[18:30] * alram (~alram@38.122.20.226) has joined #ceph
[18:30] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Quit: Konversation terminated!)
[18:33] * kaizh (~oftc-webi@128-107-239-234.cisco.com) has joined #ceph
[18:33] * sleinen1 (~Adium@2001:620:0:25:cdab:63ba:91aa:30a1) Quit (Quit: Leaving.)
[18:33] * sleinen (~Adium@130.59.94.146) has joined #ceph
[18:41] * sleinen (~Adium@130.59.94.146) Quit (Ping timeout: 480 seconds)
[18:42] * Sysadmin88 (~IceChat77@2.218.8.40) has joined #ceph
[18:42] * mancdaz is now known as mancdaz_away
[18:42] * garphy is now known as garphy`aw
[18:44] * xarses (~andreww@12.164.168.115) has joined #ceph
[18:46] * Coyo (~coyo@thinks.outside.theb0x.org) has joined #ceph
[18:46] * Shmouel (~Sam@ns1.anotherservice.com) Quit (Ping timeout: 480 seconds)
[18:46] * Coyo is now known as Guest3052
[18:48] * Cube (~Cube@199.168.44.193) Quit (Quit: Leaving.)
[18:54] * thomnico (~thomnico@195.101.107.85) has joined #ceph
[18:55] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[18:58] * Guest3052 (~coyo@thinks.outside.theb0x.org) Quit (Ping timeout: 480 seconds)
[19:01] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) has joined #ceph
[19:02] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[19:02] * chutz (~chutz@rygel.linuxfreak.ca) Quit (Quit: Leaving)
[19:03] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[19:04] * ldurnez (~ldurnez@proxy.ovh.net) Quit (Quit: Leaving.)
[19:06] * Codora (~coyo@thinks.outside.theb0x.org) has joined #ceph
[19:07] * BManojlovic (~steki@217-162-200-111.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:09] * sleinen (~Adium@2001:620:0:25:90d7:d95a:53d3:dc72) has joined #ceph
[19:10] * sleinen (~Adium@2001:620:0:25:90d7:d95a:53d3:dc72) Quit ()
[19:10] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[19:11] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[19:11] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[19:11] * Cube (~Cube@199.168.44.193) has joined #ceph
[19:11] * Sysadmin88 (~IceChat77@2.218.8.40) Quit (Quit: Never put off till tomorrow, what you can do the day after tomorrow)
[19:17] * sarob (~sarob@63.92.243.89) has joined #ceph
[19:17] * sarob (~sarob@63.92.243.89) Quit (Remote host closed the connection)
[19:17] * sarob (~sarob@nat-dip6.cfw-a-gci.corp.ne1.yahoo.com) has joined #ceph
[19:18] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:18] * sarob (~sarob@nat-dip6.cfw-a-gci.corp.ne1.yahoo.com) Quit (Remote host closed the connection)
[19:20] * sarob (~sarob@nat-dip6.cfw-a-gci.corp.ne1.yahoo.com) has joined #ceph
[19:25] * neary (~neary@62.129.6.2) Quit (Ping timeout: 480 seconds)
[19:25] * sarob (~sarob@nat-dip6.cfw-a-gci.corp.ne1.yahoo.com) Quit (Remote host closed the connection)
[19:26] * sarob (~sarob@nat-dip6.cfw-a-gci.corp.ne1.yahoo.com) has joined #ceph
[19:26] * mkoo (~mkoo@chello213047069005.1.13.vie.surfer.at) Quit (Quit: mkoo)
[19:31] * sarob (~sarob@nat-dip6.cfw-a-gci.corp.ne1.yahoo.com) Quit (Remote host closed the connection)
[19:31] * sarob (~sarob@nat-dip6.cfw-a-gci.corp.ne1.yahoo.com) has joined #ceph
[19:31] * Cube (~Cube@199.168.44.193) Quit (Quit: Leaving.)
[19:32] * sarob (~sarob@nat-dip6.cfw-a-gci.corp.ne1.yahoo.com) Quit (Remote host closed the connection)
[19:34] * Cube (~Cube@199.168.44.193) has joined #ceph
[19:35] * Pedras (~Adium@216.207.42.132) has joined #ceph
[19:36] * sarob (~sarob@nat-dip6.cfw-a-gci.corp.ne1.yahoo.com) has joined #ceph
[19:36] * sarob_ (~sarob@63.92.243.89) has joined #ceph
[19:37] * thomnico (~thomnico@195.101.107.85) Quit (Quit: Ex-Chat)
[19:40] * sarob_ (~sarob@63.92.243.89) Quit (Remote host closed the connection)
[19:40] * sarob_ (~sarob@63.92.243.89) has joined #ceph
[19:41] * sarob (~sarob@nat-dip6.cfw-a-gci.corp.ne1.yahoo.com) Quit (Read error: Connection reset by peer)
[19:42] * thomnico (~thomnico@195.101.107.85) has joined #ceph
[19:47] * Cube (~Cube@199.168.44.193) Quit (Quit: Leaving.)
[19:48] * sarob_ (~sarob@63.92.243.89) Quit (Ping timeout: 480 seconds)
[19:48] * Cube (~Cube@199.168.44.193) has joined #ceph
[19:53] * tryggvil (~tryggvil@178.19.53.254) Quit (Quit: tryggvil)
[19:59] * thomnico (~thomnico@195.101.107.85) Quit (Ping timeout: 480 seconds)
[20:00] * dmick (~dmick@2607:f298:a:607:b120:76f:b22:fec8) has joined #ceph
[20:00] * ScOut3R (~scout3r@54009895.dsl.pool.telekom.hu) Quit ()
[20:01] * ARichards (~textual@c-71-200-84-53.hsd1.md.comcast.net) has joined #ceph
[20:03] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[20:03] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) has joined #ceph
[20:05] * gregsfortytwo (~Adium@cpe-172-250-69-138.socal.res.rr.com) Quit (Read error: Operation timed out)
[20:07] * Tamil2 (~Adium@cpe-76-168-18-224.socal.res.rr.com) has joined #ceph
[20:09] * JC (~JC@71-94-44-243.static.trlk.ca.charter.com) Quit (Quit: Leaving.)
[20:09] * JC (~JC@71-94-44-243.static.trlk.ca.charter.com) has joined #ceph
[20:10] * ARichards (~textual@c-71-200-84-53.hsd1.md.comcast.net) Quit (Quit: Textual IRC Client: www.textualapp.com)
[20:11] * ARichards (~textual@50.240.86.181) has joined #ceph
[20:11] * ARichards (~textual@50.240.86.181) Quit ()
[20:13] <ponyofde1th> hi, i rebooted my secondary node in a 2 node cluster with 4 osd's each. now my health status is degraded with http://paste.kde.org/pd29cfc74 as the ceph -s output
[20:14] <ponyofde1th> any ideas how i can bring the secondary node back up? i have done /etc/init.d/ceph start and i see the mount points of the drives mounted under /var/lib/ceph/osd but i do not see a ceph process
[20:14] * Haksoldier (~isLamatta@88.234.32.103) has joined #ceph
[20:14] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[20:14] * Haksoldier (~isLamatta@88.234.32.103) has left #ceph
[20:14] <janos> I HEAR YAH, NUMBSKULL!
[20:14] <janos> dang
[20:15] * ChanServ sets mode +o dmick
[20:15] * neary (~neary@def92-9-82-243-243-185.fbx.proxad.net) has joined #ceph
[20:15] <dmsimard> that guy again
[20:16] <dmick> ponyofde1th: how did you install ceph?
[20:16] <ponyofde1th> dmick: ceph-deploy
[20:16] <dmick> distro?
[20:17] <ponyofde1th> dmick: ubuntu 12.04
[20:17] <ponyofde1th> there are actually some updates http://paste.kde.org/p912a8ea7
[20:18] <ponyofde1th> should I update?
[20:18] <dmick> can't hurt, but that's not the problem
[20:18] <dmick> init.d/ceph is not the startup script there; it's upstart
[20:18] <dmick> how about initctl -l | grep ceph?
[20:18] <ponyofde1th> yeah, the OSD daemons are not starting up and I don't see anything in the logs
[20:18] <dmick> sorry, wrong option
[20:19] <dmick> initctl list | grep ceph
[20:19] <ponyofde1th> http://paste.kde.org/p5f4a7239
[20:19] <dmick> maybe check /var/log/upstart/*ceph-osd*?
[20:20] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[20:21] * sleinen1 (~Adium@2001:620:0:26:8485:ef25:a5d4:6c3d) has joined #ceph
[20:25] <ponyofde1th> dmick: thanks! Figured it out: my journal mount failed, so the OSDs could not find their journals
[20:26] <ponyofde1th> dmick: how safe is it to just do a dist-upgrade then restart ceph?
[20:28] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[20:30] <dmick> should be ok
[20:31] <dmick> ceph isn't super OS-version-dependent
[20:31] <dmick> oh, dist-upgrade, sorry. uh...it should be fine
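For reference, here is a minimal sketch of the checks dmick walks through above, assuming Ubuntu 12.04 with upstart-managed daemons installed via ceph-deploy; the OSD id and journal path are illustrative, not taken from the pastes.

    # List the ceph upstart jobs and see which OSD instances are actually running
    initctl list | grep ceph

    # Upstart writes per-job output here; look for mount or journal errors
    sudo tail -n 50 /var/log/upstart/*ceph-osd*.log

    # If the journal lives on a separate device, confirm the symlink resolves
    # (osd.0 is just an example id)
    ls -l /var/lib/ceph/osd/ceph-0/journal

    # Once the journal device is mounted again, start the OSDs
    sudo start ceph-osd-all          # or a single one: sudo start ceph-osd id=0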
[20:34] * sarob (~sarob@63.92.243.89) has joined #ceph
[20:38] * ARichards (~textual@c-71-200-84-53.hsd1.md.comcast.net) has joined #ceph
[20:41] * ARichards (~textual@c-71-200-84-53.hsd1.md.comcast.net) Quit ()
[20:41] * ksingh (~Adium@85-76-69-180-nat.elisa-mobile.fi) Quit (Quit: Leaving.)
[20:47] <JoeGruher> is background scrub of PGs going to affect performance? i'm doing some performance testing and I can see scrub activity in the "ceph -w" output.
[20:50] <sjustlaptop> somewhat
[20:51] <sjustlaptop> let us know if you can quantify the effect
[20:53] * sarob (~sarob@63.92.243.89) Quit (Remote host closed the connection)
[20:54] * sarob (~sarob@63.92.243.89) has joined #ceph
[20:55] <JoeGruher> k
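For anyone who wants to take scrubbing out of the equation while benchmarking, a rough approach (assuming the running release supports the noscrub/nodeep-scrub flags) is to pause it cluster-wide and re-enable it afterwards:

    # Pause regular and deep scrubbing cluster-wide for the duration of the test
    ceph osd set noscrub
    ceph osd set nodeep-scrub

    # ... run the benchmark ...

    # Confirm no new scrub lines keep showing up in the event stream
    ceph -w | grep -i scrub

    # Re-enable scrubbing afterwards so PGs don't go unchecked
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub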
[21:02] * sarob (~sarob@63.92.243.89) Quit (Ping timeout: 480 seconds)
[21:11] * bandrus (~Adium@63.192.141.3) Quit (Quit: Leaving.)
[21:11] * ksingh (~Adium@85-76-67-117-nat.elisa-mobile.fi) has joined #ceph
[21:15] * sleinen1 (~Adium@2001:620:0:26:8485:ef25:a5d4:6c3d) Quit (Quit: Leaving.)
[21:24] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[21:26] * sleinen1 (~Adium@2001:620:0:25:c8fd:750:5562:d352) has joined #ceph
[21:32] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[21:33] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[21:34] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[21:34] * ChanServ sets mode +v andreask
[21:34] * sjustlaptop (~sam@199.58.187.248) Quit (Ping timeout: 480 seconds)
[21:40] * ScOut3R (~scout3r@54009895.dsl.pool.telekom.hu) has joined #ceph
[21:41] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) has joined #ceph
[21:42] * ircolle (~Adium@2601:1:8380:2d9:6d13:f8e3:aae9:9c8d) Quit (Quit: Leaving.)
[21:44] * JoeGruher (~JoeGruher@jfdmzpr01-ext.jf.intel.com) Quit (Ping timeout: 480 seconds)
[21:45] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) has joined #ceph
[21:46] * Sysadmin88 (~IceChat77@2.218.8.40) has joined #ceph
[21:49] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[21:51] * psomas (~psomas@147.102.2.106) Quit (Ping timeout: 480 seconds)
[21:53] * bandrus (~Adium@63.192.141.3) has joined #ceph
[21:57] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[21:58] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[21:59] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[22:00] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) has joined #ceph
[22:01] * sarob_ (~sarob@2001:4998:effd:600:3d04:9e3e:37f8:782e) has joined #ceph
[22:04] * kaizh (~oftc-webi@128-107-239-234.cisco.com) Quit (Remote host closed the connection)
[22:07] * sarob_ (~sarob@2001:4998:effd:600:3d04:9e3e:37f8:782e) Quit (Remote host closed the connection)
[22:07] * sarob_ (~sarob@2001:4998:effd:600:3d04:9e3e:37f8:782e) has joined #ceph
[22:08] * ksingh (~Adium@85-76-67-117-nat.elisa-mobile.fi) Quit (Quit: Leaving.)
[22:08] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[22:13] * bdonnahue (~tschneide@164.55.254.106) has joined #ceph
[22:14] <bdonnahue> a few days ago I noticed my ESXi host was giving me issues about space constraints and was preventing write operations by the VMs. I think it was running out of space. I deleted some stuff to free up about 20GB and restarted my hung VMs. I noticed, though, that my ceph cluster was not healthy. It was complaining that the monitor was slow or laggy
[22:14] <bdonnahue> does anyone know how this can be fixed and what the error means?
[22:15] * sarob_ (~sarob@2001:4998:effd:600:3d04:9e3e:37f8:782e) Quit (Ping timeout: 480 seconds)
[22:19] <bdonnahue> I'm sorry, it was a mount error 5 (laggy or crashed)
[22:19] <bdonnahue> I think
[22:20] <bdonnahue> it is the only monitor
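The question goes unanswered here, so purely as a hedged starting point: mount error 5 is an I/O error from the kernel client, and it typically shows up when the MDS, or the single monitor it depends on, is down or lagging; monitors are also known to misbehave when the disk holding their store fills up. Some first checks, with the daemon id below being illustrative:

    # Ask the cluster what it is actually unhappy about
    ceph health detail
    ceph -s

    # For a CephFS mount error 5, the MDS state is the usual suspect
    ceph mds stat

    # Monitors need free space for their store; check the mon host's disk
    df -h /var/lib/ceph/mon

    # If the mon (or mds) died during the space crunch, restart it;
    # on Ubuntu/upstart the id is the daemon name, e.g.:
    sudo start ceph-mon id=node1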
[22:21] * sjustlaptop (~sam@65-122-15-184.dia.static.qwest.net) has joined #ceph
[22:24] * mattt_ (~textual@cpc25-rdng20-2-0-cust162.15-3.cable.virginm.net) has joined #ceph
[22:30] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (Quit: No Ping reply in 180 seconds.)
[22:31] * mjeanson (~mjeanson@bell.multivax.ca) has joined #ceph
[22:32] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[22:33] * mattbenjamin (~matt@aa2.linuxbox.com) Quit (Quit: Leaving.)
[22:33] * mkoo (~mkoo@chello213047069005.1.13.vie.surfer.at) has joined #ceph
[22:33] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) has joined #ceph
[22:38] <jdmason> Is there a disadvantage to making an extremely large number of PGs?
[22:38] * sleinen1 (~Adium@2001:620:0:25:c8fd:750:5562:d352) Quit (Quit: Leaving.)
[22:38] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[22:39] <dmsimard> jdmason: The more PGs you have, the more computationally expensive your OSDs become
[22:39] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (Ping timeout: 480 seconds)
[22:39] * Tamil2 (~Adium@cpe-76-168-18-224.socal.res.rr.com) Quit (Read error: Operation timed out)
[22:40] <jdmason> dmsimard: Thanks.
[22:40] * mjeanson (~mjeanson@bell.multivax.ca) has joined #ceph
[22:41] <bdonnahue> isn't there an equation to use to determine the right number? (unless I'm remembering incorrectly)
[22:41] <dmsimard> The rule of thumb is something like 100 PGs per OSD
[22:42] <bandrus> (#osds * 100)/replicas
[22:43] * Cube (~Cube@199.168.44.193) Quit (Quit: Leaving.)
[22:44] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[22:47] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[22:49] <jdmason> seems like overkill, I would think it would be OSDs * replicas
[22:49] <jdmason> Is the reason performance based?
[22:49] <lurbs> jdmason: http://ceph.com/docs/master/rados/operations/placement-groups/
[22:50] <jdmason> thanks, I'm new and still finding my way :)
[22:55] <janos> jdmason: you want each file/object/whatever chunked up into enough reasonably small pieces to support a fine-grained enough distribution
[22:55] * zidarsk8 (~zidar@84-255-203-33.static.t-2.net) has joined #ceph
[22:55] <janos> especially as you start adding more disks
[22:56] * Cube (~Cube@66-87-78-47.pools.spcsdns.net) has joined #ceph
[22:58] <jdmason> Interesting, so this could also be thought of as how many pieces of size X you want this carved into
[22:58] * sjustlaptop (~sam@65-122-15-184.dia.static.qwest.net) Quit (Ping timeout: 480 seconds)
[22:59] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) Quit (Quit: Quitte)
[23:01] <jdmason> the graphics confused me, as it looks like 1 PG to 1 OSD
[23:01] * BillK (~BillK-OFT@124-148-103-108.dyn.iinet.net.au) has joined #ceph
[23:04] * bjornar (~bjornar@ti0099a430-0436.bb.online.no) Quit (Ping timeout: 480 seconds)
[23:05] * bjornar (~bjornar@ti0099a430-0436.bb.online.no) has joined #ceph
[23:05] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[23:08] * sarob (~sarob@2001:4998:effd:600:11e2:b70:9af7:883d) has joined #ceph
[23:10] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[23:10] <bdonnahue> jdmason which graphics are you looking at?
[23:12] <jdmason> bdonnahue: http://ceph.com/docs/master/_images/ditaa-c7fd5a4042a21364a7bef1c09e6b019deb4e4feb.png
[23:12] * psomas (~psomas@inferno.cc.ece.ntua.gr) has joined #ceph
[23:12] <jdmason> I understand now :)
[23:13] <bdonnahue> I'm still wrapping my head around it but it's starting to make sense
[23:13] * sjustlaptop (~sam@65-122-15-184.dia.static.qwest.net) has joined #ceph
[23:15] * bjornar (~bjornar@ti0099a430-0436.bb.online.no) Quit (Ping timeout: 480 seconds)
[23:19] * mattbenjamin (~matt@76-206-42-105.lightspeed.livnmi.sbcglobal.net) has joined #ceph
[23:21] <bandrus> just to clarify, your uploaded data won't be chunked into <number of pgs> parts
[23:22] <janos> right
[23:22] <janos> in a way you're setting a "resolution"
[23:22] * psomas (~psomas@inferno.cc.ece.ntua.gr) Quit (Ping timeout: 480 seconds)
[23:23] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[23:25] * perfectsine (~perfectsi@if01-gn01.dal05.softlayer.com) has joined #ceph
[23:26] * sjustlaptop (~sam@65-122-15-184.dia.static.qwest.net) Quit (Ping timeout: 480 seconds)
[23:28] <bandrus> in the case of object storage, your entire object will be stored on ONE PG (across <x replica> drives). With block, your data will be split into 4MB chunks (default), and each chunk will have a home on one PG.
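Putting bandrus's rule of thumb from above into a small worked example (the OSD count, replica count, and pool name are made up; the round-up-to-a-power-of-two step follows the placement-groups page lurbs linked):

    # (#osds * 100) / replicas, then round up to the next power of two
    osds=8
    replicas=2
    pg=$(( osds * 100 / replicas ))                    # 400
    pow2=1
    while [ "$pow2" -lt "$pg" ]; do pow2=$(( pow2 * 2 )); done
    echo "suggested pg_num: $pow2"                     # 512

    # Used when creating a pool ("rbdpool" is just an example name)
    ceph osd pool create rbdpool $pow2 $pow2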
[23:28] * sarob (~sarob@2001:4998:effd:600:11e2:b70:9af7:883d) Quit (Ping timeout: 480 seconds)
[23:28] * xmltok (~xmltok@216.103.134.250) Quit (Quit: Leaving...)
[23:31] * mattt_ (~textual@cpc25-rdng20-2-0-cust162.15-3.cable.virginm.net) Quit (Quit: Computer has gone to sleep.)
[23:33] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) has joined #ceph
[23:34] * Cube1 (~Cube@66-87-77-8.pools.spcsdns.net) has joined #ceph
[23:35] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) Quit (Quit: Leaving)
[23:36] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[23:38] * sarob_ (~sarob@2001:4998:effd:600:35a3:52e5:7731:f0ef) has joined #ceph
[23:38] * sarob_ (~sarob@2001:4998:effd:600:35a3:52e5:7731:f0ef) Quit (Remote host closed the connection)
[23:38] * sarob_ (~sarob@nat-dip6.cfw-a-gci.corp.yahoo.com) has joined #ceph
[23:40] * Cube (~Cube@66-87-78-47.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[23:42] * bdonnahue (~tschneide@164.55.254.106) has left #ceph
[23:50] * sarob_ (~sarob@nat-dip6.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[23:55] * sarob (~sarob@2001:4998:effd:600:582b:c8da:d90a:b1f5) has joined #ceph
[23:55] * JoeGruher (~JoeGruher@jfdmzpr01-ext.jf.intel.com) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.