#ceph IRC Log

IRC Log for 2013-08-29

Timestamps are in GMT/BST.

[0:00] <mozg> the slow requests have the following lines:
[0:00] <mozg> 2013-08-28 22:48:49.511335 osd.16 192.168.168.201:6805/22128 496 : [WRN] slow request 35.297928 seconds old, received at 2013-08-28 22:48:14.213265: osd_op(client.1857319.0:3880452 rb.0.1a5a6d.4bedc23.00000000193a [write 2646016~1024] 5.c47ce673 e15923) v4 currently commit sent
[0:00] <mozg> and these:
[0:00] <mozg> 2013-08-28 22:48:47.508904 osd.16 192.168.168.201:6805/22128 484 : [WRN] slow request 33.497300 seconds old, received at 2013-08-28 22:48:14.011442: osd_op(client.1857319.0:3879608 rb.0.1a5a6d.4bedc23.00000000167d [read 3437056~4096] 5.1fecd7fe e15923) v4 currently no flag points reached
[0:01] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) has joined #ceph
[0:01] <mozg> i do not see any other types at the moment
[0:03] <mozg> joshd, should I worry about these types of slow requests?
[0:04] <mozg> also, do you know if it would help if I migrate all 8 journals onto the same ssd disk? At the moment i've only got 4 journals on the ssd
[0:04] <gregaf1> the "commit sent" ones no, the "no flag points reached" yes; those haven't made any progress yet
[0:05] <gregaf1> but I'm with Josh, sounds like you're sending more IOPS than your cluster can handle
[0:05] <gregaf1> what's your config and what benchmark are you doing?
[0:05] <gregaf1> if you're not on Dumpling I bet it'll behave better for this, btw — it won't let the clients run quite so far ahead of what's actually possible
[0:07] * rudolfsteiner (~federicon@200.68.116.185) Quit (Quit: rudolfsteiner)
[0:07] * diegows (~diegows@200.68.116.185) Quit (Read error: Operation timed out)
[0:08] <mozg> gregaf1, i am on dumpling
[0:08] <mozg> and the trouble is i've not had the kernel panic issues when i was on 0.61.7
[0:08] <kraken> http://i.imgur.com/tpGQV.gif
[0:08] <mozg> i ran the benchmarks as soon as i upgraded ceph
[0:08] <mozg> nothing else has changed
[0:09] <mozg> same servers and same vms
[0:09] <mozg> gregaf1, I am using phoronix-test-suite to run pts/disk benchmark sets
[0:09] <mozg> it uses like around 24 different tools
[0:09] <gregaf1> huh
[0:09] <mozg> like fio/iozone/dbench, etc
[0:09] * ishkabob (~c7a82cc0@webuser.thegrebs.com) Quit (Quit: TheGrebs.com CGI:IRC (Ping timeout))
[0:11] <mozg> joshd, regarding your comment earlier regarding the overloaded osds
[0:11] <mozg> shouldn't there be a mechanism that protects the cluster from being overloaded?
[0:12] <mozg> and not to cause stability issues?
[0:15] <nhm> mozg: been thinking about playing with the phoronix suite.
[0:16] <joshd> mozg: yes, the best way to do that now is qemu's built-in io throttling
[0:17] <mozg> nhm, oh, it is very cool
[0:17] <mozg> lets you automate things
[0:17] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[0:17] <mozg> and it's doing proper repetitive testing and reporting
[0:17] <mozg> very nice indeed
[0:18] <mozg> joshd, could you please let me know a bit more about it? are there any examples how this could be implemented with ceph ?
[0:21] <joshd> mozg: if you're using libvirt you can configure it with the iotune element: http://libvirt.org/formatdomain.html#elementsDisks
[0:23] <mozg> joshd, okay, but from what i can see the iotune requires a block device
[0:23] <mozg> however, i am using rbd
[0:23] <mozg> i do not think there is a block device for that, is there?
[0:24] <joshd> <iotune> does not require a block device on the host, <blkiotune> does
[0:24] <joshd> the former is implemented in qemu itself, the latter via cgroups iirc
[0:27] <mozg> joshd, okay, i will read more into this
[0:27] * carif (~mcarifio@64.119.130.114) Quit (Ping timeout: 480 seconds)
[0:27] <mozg> so, the iotunning could be used to control the amount of io each vm generates?
[0:27] <mozg> is this the idea?
[0:28] * aliguori (~anthony@cpe-70-112-153-179.austin.res.rr.com) Quit (Remote host closed the connection)
[0:28] <mozg> or how does it work on a high level?
[0:29] * buck (~buck@c-24-6-91-4.hsd1.ca.comcast.net) has left #ceph
[0:29] <joshd> it places limits on throughput or iops a vm can generate to a rbd
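
A minimal sketch of what joshd describes, for a libvirt-managed guest with an rbd-backed disk; the pool/image name, monitor address and limit values here are purely illustrative and would need adjusting:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='rbd/vm1-disk'>
        <host name='192.168.168.201' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
      <!-- <iotune> is enforced by qemu itself, so no host block device is needed -->
      <iotune>
        <total_iops_sec>500</total_iops_sec>
        <total_bytes_sec>52428800</total_bytes_sec>
      </iotune>
    </disk>
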
[0:32] <mozg> joshd, how would i determine the io limit of my ceph cluster?
[0:33] <mozg> can I use this with rados benchmark tool?
[0:33] <mozg> or is there a different method I could employ?
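
One rough way to gauge that ceiling, as mozg suggests, is rados bench against a scratch pool; the pool name, 60-second duration and 16 concurrent ops below are only examples:

    # sustained write throughput with 16 ops in flight
    rados bench -p testpool 60 write -t 16

The aggregate MB/s and average latency it reports give a baseline for choosing per-VM iotune limits.
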
[0:36] * The_Bishop (~bishop@2001:470:50b6:0:9d4b:db1b:3883:df83) has joined #ceph
[0:37] * ScOut3R (~scout3r@54026B73.dsl.pool.telekom.hu) Quit (Remote host closed the connection)
[0:37] <wrencsok> mozg, i have phoronix running in a loop inside a qemu guest against a dumpling cluster. no panics.
[0:37] <kraken> http://i.imgur.com/rhNOy3I.gif
[0:40] <wrencsok> i had to do a few tricks to get it to run against the right volumes, like moving the entire package to the ceph rbd device and setting up symbolic links to the directories that phoronix wants to use, and forcing it to use the ceph volume instead of the vm disk from the parent.
[0:41] <wrencsok> my guest is an ubuntu 12.04 client.
[0:44] <ircolle> kraken forget panic
[0:44] <kraken> roger
[0:45] <gregaf1> I don't think that works for built-in commands; I tried it earlier
[0:46] <n1md4> I'm attempting ceph storage with xenserver. I have an error, and believe it's to do with ceph, but can't yet work it out .. this isn't my work, but the results I'm getting http://www.spinics.net/lists/ceph-users/msg02667.html any assistance would be appreciated.
[0:47] * BillK (~BillK-OFT@203-59-133-124.dyn.iinet.net.au) has joined #ceph
[0:49] <gregaf1> wido? ^
[0:49] <gregaf1> afraid we don't have too many Xen users in here right now
[0:53] <mozg> wrencsok, are you also using pts/disk set of tests?
[0:53] <mozg> wrencsok, yeah, same here
[0:53] <joao> err... 'rbd' doesn't relay error to user space?
[0:53] <joao> is this a known issue?
[0:54] <joao> or an issue even?
[0:54] <mozg> i also run it inside ubuntu 12.04
[0:54] <mozg> but my ubuntu panics every time
[0:54] <kraken> http://i.imgur.com/SNvM6CZ.gif
[0:54] <mozg> and it seems to have got worse after the upgrade to dumpling
[0:54] <mozg> in the previous release i've not had any panics
[0:54] <kraken> http://i.imgur.com/rhNOy3I.gif
[0:54] <mozg> just the hang tasks
[0:54] <mozg> now i have both
[0:54] <mozg> (((
[0:54] <ircolle> kraken forget panic
[0:54] <kraken> consider it done
[0:54] <joao> kraken, stand still
[0:54] <joao> lol
[0:55] <joao> the last one was pretty amusing though :p
[0:55] <loicd> :-)
[0:55] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[0:56] <mozg> wrencsok, what cache settings do you have?
[0:56] <mozg> also, do you use rbd caching on the host side?
[0:57] * buck (~buck@c-24-6-91-4.hsd1.ca.comcast.net) has joined #ceph
[1:02] * sprachgenerator (~sprachgen@130.202.135.215) Quit (Quit: sprachgenerator)
[1:03] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[1:11] <gregaf1> sagewk: pushed the change patches on top of wip-tier-interface if you want to take a look; otherwise I'll squash them down and give it to sjust for review
[1:11] <sagewk> gregaf1: k will look shortly
[1:12] <gregaf1> nothing terribly interesting but check we're on the same page
[1:12] * matt_ (~matt@220-245-1-152.static.tpgi.com.au) has joined #ceph
[1:15] * AfC (~andrew@2407:7800:200:1011:5910:8716:3d3c:bdb7) has joined #ceph
[1:15] <sagewk> gregaf1: looks good, 1 nit
[1:15] <joao> sagewk, gregaf1, any idea why we always return EXIT_FAILURE on rbd.cc, even if we have the proper error code?
[1:16] <sagewk> no good reason i'm sure
[1:16] <joao> do we have anything relying on that?
[1:16] * devoid (~devoid@130.202.135.221) Quit (Quit: Leaving.)
[1:16] <joao> tests maybe?
[1:16] <gregaf1> you sure, joao? I glanced at that briefly when Josh wanted me to use EXIT_FAILURE elsewhere and I think EXIT_FAILURE is only specified for failures (and there are lots more of them than success exits, which are all aggregated)
[1:16] <sagewk> shouldn't
[1:17] <gregaf1> sagewk: what's the nit?
[1:17] <sagewk> no need to cast -1 to uint64_t
[1:17] <gregaf1> EXIT_FAILURE and EXIT_SUCCESS are macros which generally map to 1 and 0, but are appropriate for the platform
[1:17] <gregaf1> oh, where did I leave that in?
[1:17] <sagewk> i'd also do >= 0 instead of != -1, but doesn't really matter
[1:17] <sagewk> ctor
[1:17] <gregaf1> I was thinking these were all uint64_t when I started writing and then realized it was int64_t here but I guess I missed one
[1:18] <joao> gregaf1, current rbd.cc's main() either returns EXIT_FAILURE or 0
[1:18] <joao> might have missed some cases, but I don't think so
[1:18] <gregaf1> hmm, I guess we could conceivably have other non-caching values for those, so the >= is a good plan too
[1:18] <gregaf1> well, poke joshd and tell him it's all weird
[1:19] <joao> joshd, ^
[1:20] <aciancaglini> hi, with the support of inktank we solved our problem... now we are thinking about how to fix a big mistake made during the design phase: we have the O.S. filesystem on an external USB drive (which frequently goes read-only). The server has 4 bays, 3 are for osds and one is for the SSD, so we can't install a new HD..... is it bad to put the O.S. on the SSD we have reserved for the journal?
[1:20] <sagewk> aciancaglini: seems ok, if you don't mind that you may have some performance variation because of work done by the osd (logging etc).
[1:22] <joshd> joao: because they used to all just be exit(1) calls, and the error is already in stderr at least. no real reason not to return it as well
[1:22] <joao> oh, cool
[1:23] <joao> well, I'll make it return whatever is on the error call if any
[1:24] <joao> not that I *need* it, but it's useful for a monitor workunit I meant to push to wip-6047 but was failing on rbd (expecting 22, getting 1)
[1:24] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[1:24] <joao> :)
[1:24] <joshd> sounds good to me
[1:24] * yasu` (~yasu`@99.23.160.231) has joined #ceph
[1:25] * yasu` (~yasu`@99.23.160.231) Quit (Remote host closed the connection)
[1:26] * haomaiwang (~haomaiwan@218.71.72.122) has joined #ceph
[1:26] * Cube (~Cube@12.248.40.138) Quit (Read error: Connection reset by peer)
[1:26] * aliguori (~anthony@cpe-70-112-153-179.austin.res.rr.com) has joined #ceph
[1:26] * Cube (~Cube@12.248.40.138) has joined #ceph
[1:29] * houkouonchi-work (~linux@12.248.40.138) Quit (Read error: Connection reset by peer)
[1:31] * haomaiwa_ (~haomaiwan@218.71.124.49) Quit (Ping timeout: 480 seconds)
[1:34] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[1:34] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[1:35] <aciancaglini> sagewk: thanks
[1:36] <sagewk> aciancaglini: very welcome!
[1:37] * rudolfsteiner (~federicon@190.244.11.181) has joined #ceph
[1:38] <sagewk> gregaf1 sjust: https://github.com/ceph/ceph/pull/556 <-- make radosmodel verify user_versions, + 2 bug fixes
[1:39] <gregaf1> oh, how embarrassing
[1:39] <sagewk> er, 3
[1:40] <sagewk> librados.hpp was broken too
[1:40] <gregaf1> huh?
[1:40] <sagewk> int get_version() .. pushed 1 more patch
[1:40] <gregaf1> huh, where did that conversion happen? implicit in the return?
[1:41] <gregaf1> I don't think that your version reasoning is correct on the first bugfix though
[1:42] <mozg> guys, i wanted to run something by you
[1:42] <mozg> i've got two osd servers with 8 osds
[1:42] <mozg> and 1 ssd disk in each osd server
[1:42] <gregaf1> sagewk: the user_at_version is not increased unless you did a modify, so it and oi.user_version should be the same there
[1:42] <mozg> can i run journals on all 8 osds
[1:42] <gregaf1> and at_version is the location the op goes if it needs to get replayed — on a write op it happened at the bumped location, and on a read op it is blank
[1:43] <mozg> i know that it is recommended to have 3-4 osds per ssd for the sake of speed
[1:43] <mozg> but would I kill the performance if i put all 8 osd journals
[1:44] <mozg> on one ssd?
[1:44] <gregaf1> you definitely don't want replays to believe they happen at the time of the last modify
[1:44] <sagewk> gregaf1: should the reply version actually be 0 in this case anyway? it's a read
[1:44] <sagewk> wasn't sure about that bit
[1:44] <gregaf1> mozg: depends on your drives, but you are forcing all 8 OSDs to write to that SSD before they can do anything else
[1:44] <sagewk> replay rather, not reply
[1:44] <mozg> or would it improve the small block size performance and decrease the sequential write speed?
[1:44] <gregaf1> sagewk: you're saying "reply_version" and we don't have one of those — we have user versions and replay versions :)
[1:45] <gregaf1> I think you mean replay_version, and 0 is fine for that because it's not something you replay
[1:45] <mozg> i just have 8 sas 7k disks
[1:45] <mozg> nothing that quick
[1:46] <gregaf1> sagewk: in particular, nobody looks at the replay_version now except for replay of ops, and in the case of a write oi.version is definitely not the position you want the replay to be happening at
[1:46] <sagewk> right. changing that to eversion_t..
[1:47] <gregaf1> huh?
[1:47] <gregaf1> this path is not a read-only path
[1:47] <sagewk> oh! right
[1:48] <gregaf1> mozg: probably lower-latency small block writes and slower streaming writes, yes, but that could vary depending on the specific characteristics of your drives and your workload
[1:49] <mozg> gregaf1, I guess for the average vm you would expect to have a greater number of small block reads and writes
[1:49] <gregaf1> I'm really pretty sure ctx->at_version holds exactly what we want to be returning there in both read and write cases — 0 for reads and the correct pg version for writes
[1:49] <mozg> unless you are working with large data sets
[1:49] <mozg> so, having all journals on one ssd would speed up the work load
[1:49] <joao> sagewk, repushed wip-6047
[1:50] <gregaf1> mozg: probably, but it's also covered by the page cache so you're not likely to care about latency in the normal case
[1:50] <sagewk> gregaf1: it was returning something bigger.. i assumed max(pg->user_version, oi.user_version)
[1:50] <joao> sagewk, added two new commits on top
[1:50] <gregaf1> sagewk: what did you observe being bigger?
[1:50] <sagewk> the version returned by librados
[1:51] <mozg> gregaf1, so, are you saying that because of the caching the journal writes are grouped anyway into large size sequential writes?
[1:51] <gregaf1> sagewk: and you compared it to what?
[1:51] <gregaf1> sorry, I'm surprised you observed any difference from classic behavior so I want the full scenario :)
[1:51] <sagewk> what it was after i last wrote to it
[1:52] <sagewk> let me reproduce it..
[1:52] <gregaf1> I'm saying I have no idea, mozg, and you would need to put both configs through a realistic benchmark
[1:52] <gregaf1> should I just run the rados model against vstart and look at it? :)
[1:53] <sagewk> i'm doing that now
[1:55] * rturk-away is now known as rturk
[1:57] <mozg> gregaf1, I will do that. but in terms of the cluster stability, does it make a difference?
[2:00] <sglwlb> hi, any developers? Is this a mistake in the kclient code: pgid.seed = ceph_stable_mod(pgid.seed, pool->pg_num, pool->pgp_num_mask);
[2:01] <gregaf1> mozg: only in that you're making 8 OSDs dependent on a single disk instead of 4
[2:01] <sglwlb> It uses pg_num but pgp_num_mask
[2:02] <gregaf1> and sagewk was right; I forgot we were initializing the user_at_version the way we were so you can't use it as the existing version on reads :(
[2:02] <sagewk> repushed.. look ok?
[2:02] * tnt_ (~tnt@109.130.110.3) Quit (Ping timeout: 480 seconds)
[2:02] <sagewk> oh, i'll add a release note and backport tag to the rados patch
[2:02] <sagewk> that will bite any librados user using versions once they hit 2 billion updates in the pg
[2:03] * alram (~alram@38.122.20.226) Quit (Ping timeout: 480 seconds)
[2:07] <gregaf1> I'm not very familiar with RadosModel but the rest looks good to me :)
[2:07] * DarkAceZ (~BillyMays@50.107.55.36) Quit (Ping timeout: 480 seconds)
[2:07] <sagewk> sjusthm: can you look at the radosmodel patch? not sure i kludged it in in the right way
[2:08] * DarkAce-Z (~BillyMays@50.107.55.36) has joined #ceph
[2:08] <gregaf1> sglwlb: I think using pgp_num_mask on pg_num is correct but sagewk would know for sure
[2:10] <sglwlb> gregaf1: What's the difference between pg_num and pgp_num?
[2:10] <gregaf1> just from the snippet you pasted I assume the intent there is to place the pg_num placement groups as if there are only pgp_num placement groups, and the way you do that is by ignoring some of the bits
[2:10] <sagewk> sglwlb: ooh, that does look suspicious
[2:10] <gregaf1> PlacementGroup_num and PlacementGroupPlacement_num
[2:10] <gregaf1> heh, good thing I asked
[2:11] <sagewk> oh, yeah gregaf1 is right, it's intentional.
[2:11] <gregaf1> sagewk: somebody else just sent an email to ceph-devel asking about mds locks, too :p
[2:11] <sagewk> oh.. no, it's wrong.
[2:11] <sagewk> should be pg_num_mask there.
[2:12] <sagewk> sglwlb: good catch!
[2:12] * diegows (~diegows@190.190.11.42) has joined #ceph
[2:12] <sagewk> which makes me think that the thrashosd task should play with pgp_num :)
[2:13] <gregaf1> sglwlb: in case that short explanation wasn't clear, I assume you know that each pool is sharded into Placement Groups; there are pg_num of those
[2:13] <gregaf1> however, you can place those pg_num shards as if there were only pgp_num of them
[2:14] <gregaf1> the intention is to make pg splitting and merging easier, although that's somewhat archaic now and sjust(aka sjusthm) found that we couldn't provide enough guarantees to really take advantage of them anyway when doing split (or the not-yet-planned merge)
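
For reference, the suspect line and the fix sagewk describes (pg_num folded with the pg_num mask rather than the pgp_num mask); the helper shown is roughly what the kernel client's ceph_stable_mod looks like:

    /* fold x onto b buckets using a power-of-two mask, so that growing b
     * only remaps a subset of inputs */
    static inline int ceph_stable_mod(int x, int b, int bmask)
    {
            if ((x & bmask) < b)
                    return x & bmask;
            else
                    return x & (bmask >> 1);
    }

    /* before: mixes pg_num with the pgp_num mask */
    pgid.seed = ceph_stable_mod(pgid.seed, pool->pg_num, pool->pgp_num_mask);
    /* after (per sagewk): pg_num goes with pg_num_mask */
    pgid.seed = ceph_stable_mod(pgid.seed, pool->pg_num, pool->pg_num_mask);
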
[2:15] * aciancaglini (~quassel@79.59.209.97) Quit (Remote host closed the connection)
[2:17] <sglwlb> gregaf1: Sorry, my english is not so good, but i got something from your words
[2:17] <gregaf1> cool
[2:19] <sagewk> sglwlb: sent a patch to the list if you'd like to review. thanks!
[2:21] <sglwlb> sagewk: Ok, i will try it!
[2:22] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (Quit: Leaving.)
[2:29] * sagelap1 (~sage@27.sub-70-197-72.myvzw.com) has joined #ceph
[2:31] * xarses (~andreww@204.11.231.50.static.etheric.net) Quit (Ping timeout: 480 seconds)
[2:32] * sagelap (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) Quit (Ping timeout: 480 seconds)
[2:32] * grepory (~Adium@50-115-70-146.static-ip.telepacific.net) Quit (Quit: Leaving.)
[2:39] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) Quit (Quit: Leaving.)
[2:43] * yy-nm (~Thunderbi@122.233.46.4) has joined #ceph
[2:43] * rudolfsteiner (~federicon@190.244.11.181) Quit (Quit: rudolfsteiner)
[2:44] * jaydee (~jeandanie@124x35x46x8.ap124.ftth.ucom.ne.jp) has joined #ceph
[2:51] * rudolfsteiner (~federicon@190.244.11.181) has joined #ceph
[2:52] * AfC (~andrew@2407:7800:200:1011:5910:8716:3d3c:bdb7) Quit (Quit: Leaving.)
[2:53] <yy-nm> hey, all. i want to know: are there items in ceph.conf to set ceph's facility levels and severity levels in syslog?
[2:54] * yanzheng (~zhyan@134.134.139.72) has joined #ceph
[2:56] <gregaf1> I don't remember what the knobs do, but in addition to the config options listed at http://ceph.com/docs/master/rados/troubleshooting/log-and-debug/?highlight=syslog (search for syslog), there's also "clog to syslog level" (default "info") and "mon cluster log to syslog level" (default "info") and a *to_syslog_facility for both (default "daemon")
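
A sketch of how those knobs could look in ceph.conf, using the defaults gregaf1 mentions; the exact option names should be double-checked against config_opts.h:

    [global]
        log to syslog = true
        err to syslog = true
        clog to syslog = true
        clog to syslog level = info
        clog to syslog facility = daemon
        mon cluster log to syslog = true
        mon cluster log to syslog level = info
        mon cluster log to syslog facility = daemon
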
[2:56] * houkouonchi-work (~linux@12.248.40.138) has joined #ceph
[2:57] * aliguori (~anthony@cpe-70-112-153-179.austin.res.rr.com) Quit (Remote host closed the connection)
[2:59] <gregaf1> joao and sjusthm, can you review the bits of https://github.com/ceph/ceph/pull/554 appropriate for your domains? :)
[3:00] * Cube1 (~Cube@12.248.40.138) has joined #ceph
[3:01] <yy-nm> gregaf1: you mean "mon cluster log to syslog level" is an item in ceph.conf?
[3:01] <gregaf1> yeah
[3:02] <yy-nm> gregaf1: what *to_syslog_facility means?
[3:02] <gregaf1> I'm afraid I don't know; I just went through the config options and grepped for syslog
[3:03] <yy-nm> gregaf1: from where?
[3:04] <gregaf1> the source code, https://github.com/ceph/ceph/blob/master/src/common/config_opts.h has it
[3:04] <gregaf1> anyway I'm off; good night/morning all
[3:05] <yy-nm> gregaf1: ok, thanks a lot
[3:06] <sagelap1> gregaf1: you have a 1 instead of -1 in that last push
[3:06] * silversurfer (~jeandanie@124x35x46x8.ap124.ftth.ucom.ne.jp) has joined #ceph
[3:06] * jaydee (~jeandanie@124x35x46x8.ap124.ftth.ucom.ne.jp) Quit (Quit: Konversation terminated!)
[3:07] * Cube (~Cube@12.248.40.138) Quit (Ping timeout: 480 seconds)
[3:07] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[3:08] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[3:08] * Cube1 (~Cube@12.248.40.138) Quit (Ping timeout: 480 seconds)
[3:09] * cfreak200 (~cfreak200@p4FF3E172.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[3:09] * haomaiwa_ (~haomaiwan@218.71.79.165) has joined #ceph
[3:10] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) has joined #ceph
[3:11] <sjusthm> sagewk: it does mess with pgp_num as well
[3:11] <gregaf1> I could never make such a mistake, sagelap1; check again ;)
[3:12] <gregaf1> I love not working on master
[3:13] * haomaiwang (~haomaiwan@218.71.72.122) Quit (Ping timeout: 480 seconds)
[3:13] * sagelap1 (~sage@27.sub-70-197-72.myvzw.com) Quit (Read error: Connection reset by peer)
[3:18] * rturk is now known as rturk-away
[3:19] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[3:20] <yy-nm> when i set 'clog to syslog = true' and 'clog to monitors = false', i type ceph -w on the command line and no information comes out. is it a bug?
[3:20] <sage> yy-nm: that's because ceph -w watches the cluster log as seen by the monitors, but you set clog to monitors = false, so there is nothing there
[3:22] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) Quit (Ping timeout: 480 seconds)
[3:25] <yy-nm> sage: oh.. i see. another question about syslog. i set 'log to syslog = true', 'err to syslog = true' and 'mon cluster log to syslog = true', then i restart ceph, and i found the host which has the leader mon shows '2013-08-29 09:24:18.627589 mon.0 [INF] pgmap v1967572: 1728 pgs: 1728 active+clean; ' info periodically.
[3:28] * sherry (~sherry@wireless-nat-10.auckland.ac.nz) has joined #ceph
[3:30] * xarses (~andreww@c-71-202-167-197.hsd1.ca.comcast.net) has joined #ceph
[3:31] <sherry> I'm a newbie and want to run ceph on my laptop; http://ceph.com/docs/master/install/hardware-recommendations/#data-storage says that "Running an OSD and a monitor or a metadata server on a single disk–irrespective of partitions–is NOT a good idea either." I have one laptop! what shall I do then?
[3:32] <yy-nm> sherry: ignore it ? i guess?
[3:34] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[3:34] <sherry> how dangerous would it be?
[3:34] <yanzheng> sherry, just very slow
[3:34] <sherry> okay thanks
[3:38] * jaydee (~jeandanie@124x35x46x8.ap124.ftth.ucom.ne.jp) has joined #ceph
[3:38] * silversurfer (~jeandanie@124x35x46x8.ap124.ftth.ucom.ne.jp) Quit (Read error: Connection reset by peer)
[3:40] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Read error: Operation timed out)
[3:40] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[3:42] * AfC (~andrew@2407:7800:200:1011:5910:8716:3d3c:bdb7) has joined #ceph
[3:45] <sherry> yanzheng : is there any hardware test to make sure that ceph is gonna work on my laptop?
[3:45] <yanzheng> lots of memory + SSD
[3:46] <yanzheng> ceph also eats lots of cpu
[3:46] <sherry> it doesnt work on 32bits, does it?
[3:49] <yanzheng> ceph works on 32-bit machines
[3:52] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[3:52] * alfredodeza (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) Quit (Remote host closed the connection)
[3:54] <joao> gregaf1, can the review wait for the morning?
[3:56] * rudolfsteiner (~federicon@190.244.11.181) Quit (Quit: rudolfsteiner)
[3:56] <joao> well, considering that I dozed off on the couch, I'm going to assume it can and head to bed :)
[3:56] <joao> 'night all
[3:59] * kraken (~kraken@c-24-131-46-23.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[4:00] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[4:02] <via> has anyone had luck with s3fs against ceph?
[4:02] <via> i can't even get it to actually connect to what i specify in -o url (either according to the server access log or strace)
[4:02] * smiley_ (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) Quit (Quit: smiley_)
[4:04] <via> meanwhile i can curl it just fine
[4:10] * xarses (~andreww@c-71-202-167-197.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:10] * rongze_ (~quassel@117.79.232.249) Quit (Remote host closed the connection)
[4:12] * rongze (~quassel@117.79.232.249) has joined #ceph
[4:15] * AfC (~andrew@2407:7800:200:1011:5910:8716:3d3c:bdb7) Quit (Read error: No route to host)
[4:15] * rongze_ (~rongze@117.79.232.217) has joined #ceph
[4:16] * AfC (~andrew@2407:7800:200:1011:5910:8716:3d3c:bdb7) has joined #ceph
[4:16] * rongze (~quassel@117.79.232.249) Quit (Remote host closed the connection)
[4:17] * rongze_ (~rongze@117.79.232.217) Quit (Remote host closed the connection)
[4:28] * carif (~mcarifio@pool-96-233-32-122.bstnma.fios.verizon.net) has joined #ceph
[4:32] * diegows (~diegows@190.190.11.42) Quit (Ping timeout: 480 seconds)
[4:32] * cfreak200 (~cfreak200@p4FF3F540.dip0.t-ipconnect.de) has joined #ceph
[4:33] * nerdtron (~kenneth@202.60.8.252) has joined #ceph
[4:36] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[4:39] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[4:39] * carif (~mcarifio@pool-96-233-32-122.bstnma.fios.verizon.net) Quit (Quit: Ex-Chat)
[4:44] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Leaving...)
[4:49] * jlhawn (~jlhawn@208-90-212-77.PUBLIC.monkeybrains.net) has joined #ceph
[4:49] <jlhawn> hi everyone
[4:50] <jlhawn> I'm trying to do some CephFS benchmarks. Does anyone know how to disable the client cache so I can better measure write performance?
[4:50] <jlhawn> I'd like the write to be committed on a call to close() but I can't quite figure out how to do it.
[4:51] <yanzheng> mount -o sync
[4:51] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[4:52] <jlhawn> thanks
[4:52] <jlhawn> let me try it out now
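
For the kernel client, yanzheng's suggestion would look roughly like the line below (monitor address, user name and secret file are placeholders); the sync option forces writes through instead of letting the client cache absorb them:

    mount -t ceph 192.168.168.201:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret,sync
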
[4:53] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[4:57] <nerdtron> jlhawn on my system, I just mount the cephFS and then write to it, then use ceph -w to see the read/write speeds
[4:58] <jlhawn> nerdtron: and 'ceph -w' is run from the monitor?
[4:58] <jlhawn> or the client?
[4:58] <nerdtron> on the client or on the monitor the same will be shown
[4:59] <nerdtron> or you can perform this, go inside the CephFS folder and run dd if=/dev/zero of=samplefile bs=1G count=1 oflag=direct
[5:00] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[5:00] <nerdtron> it will directly write a 1GB file to the cephFS folder without caching. and the results will be like this
[5:00] <nerdtron> 1073741824 bytes (1.1 GB) copied, 33.4544 s, 32.1 MB/s
[5:00] <nerdtron> that is my write speed...it's pretty slow
[5:06] * fireD_ (~fireD@93-142-249-127.adsl.net.t-com.hr) has joined #ceph
[5:06] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) has joined #ceph
[5:07] * fireD (~fireD@93-139-165-73.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[5:12] * BillK (~BillK-OFT@203-59-133-124.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[5:14] * BillK (~BillK-OFT@124-148-116-219.dyn.iinet.net.au) has joined #ceph
[5:14] * carif (~mcarifio@146-115-183-141.c3-0.wtr-ubr1.sbo-wtr.ma.cable.rcn.com) has joined #ceph
[5:15] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Leaving...)
[5:28] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (Quit: Leaving.)
[5:33] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[5:40] * yy-nm1 (~Thunderbi@218.74.34.80) has joined #ceph
[5:40] * janos (~janos@static-71-176-211-4.rcmdva.fios.verizon.net) has left #ceph
[5:43] * yy-nm (~Thunderbi@122.233.46.4) Quit (Ping timeout: 480 seconds)
[5:45] <sage> nerdtron: dd with odirect is doing 1 io at a time; sort of a worst-case benchmark
[5:46] * xarses (~andreww@c-71-202-167-197.hsd1.ca.comcast.net) has joined #ceph
[5:47] <nerdtron> oh sorry.. but still, it's accurate on my part, +-30MB/sec
[6:08] * jamespage (~jamespage@culvain.gromper.net) Quit (Quit: Coyote finally caught me)
[6:09] * jamespage (~jamespage@culvain.gromper.net) has joined #ceph
[6:18] * buck (~buck@c-24-6-91-4.hsd1.ca.comcast.net) has left #ceph
[6:20] * jlhawn (~jlhawn@208-90-212-77.PUBLIC.monkeybrains.net) Quit (Quit: jlhawn)
[6:37] <sage> nerdtron: try doing 10 of those in parallel to different offsets of the block device; i think you'll find that the total throughput will go up
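
A rough way to try sage's suggestion from a CephFS mount, writing ten 1GB files in parallel with O_DIRECT (separate files here rather than offsets into one device, which amounts to the same thing for a filesystem test):

    cd /mnt/cephfs
    for i in $(seq 1 10); do
        dd if=/dev/zero of=samplefile.$i bs=1M count=1024 oflag=direct &
    done
    wait    # then sum the per-process MB/s figures
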
[7:09] * bwesemann (~bwesemann@2001:1b30:0:6:6832:7482:7288:7e60) Quit (Remote host closed the connection)
[7:10] * bwesemann (~bwesemann@2001:1b30:0:6:4caf:a131:e03:fc2d) has joined #ceph
[7:11] * janos (~janos@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[7:15] * carif (~mcarifio@146-115-183-141.c3-0.wtr-ubr1.sbo-wtr.ma.cable.rcn.com) Quit (Quit: Ex-Chat)
[7:23] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Remote host closed the connection)
[7:24] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[7:26] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) has joined #ceph
[7:26] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) Quit ()
[7:28] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[7:35] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[7:38] * sjusthm (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[7:47] * themgt_ (~themgt@201-223-252-184.baf.movistar.cl) has joined #ceph
[7:52] * themgt (~themgt@201-223-239-26.baf.movistar.cl) Quit (Ping timeout: 480 seconds)
[7:52] * themgt_ is now known as themgt
[7:58] * tnt (~tnt@109.130.110.3) has joined #ceph
[8:05] * haomaiwa_ (~haomaiwan@218.71.79.165) Quit (Remote host closed the connection)
[8:19] * foosinn (~stefan@office.unitedcolo.de) has joined #ceph
[8:35] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[8:49] * ssejour (~sebastien@out-chantepie.fr.clara.net) has joined #ceph
[8:51] * yanzheng (~zhyan@134.134.139.72) Quit (Ping timeout: 480 seconds)
[8:52] * bwesemann (~bwesemann@2001:1b30:0:6:4caf:a131:e03:fc2d) Quit (Remote host closed the connection)
[8:57] * sherry (~sherry@wireless-nat-10.auckland.ac.nz) Quit (Quit: Konversation terminated!)
[9:04] * sleinen (~Adium@2001:620:0:2d:f93b:4a2c:1ff4:238c) has joined #ceph
[9:06] * vipr (~vipr@frederik.pw) has joined #ceph
[9:12] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[9:13] * KindOne (~KindOne@0001a7db.user.oftc.net) has joined #ceph
[9:16] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:23] * sleinen (~Adium@2001:620:0:2d:f93b:4a2c:1ff4:238c) Quit (Quit: Leaving.)
[9:29] * Bada (~Bada@195.65.225.142) has joined #ceph
[9:30] * tnt (~tnt@109.130.110.3) Quit (Ping timeout: 480 seconds)
[9:30] * thanasisk (~akostopou@p5DDB8F77.dip0.t-ipconnect.de) has joined #ceph
[9:36] * ShaunR- (~ShaunR@staff.ndchost.com) has joined #ceph
[9:36] * ShaunR (~ShaunR@staff.ndchost.com) Quit (Read error: Connection reset by peer)
[9:39] * LeaChim (~LeaChim@176.24.168.228) has joined #ceph
[9:39] <thanasisk> hi all
[9:43] * sleinen (~Adium@130.59.94.179) has joined #ceph
[9:47] * roald (~oftc-webi@87.209.150.214) has joined #ceph
[9:51] * sleinen (~Adium@130.59.94.179) Quit (Ping timeout: 480 seconds)
[9:52] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[9:53] * ScOut3R (~ScOut3R@catv-89-133-25-52.catv.broadband.hu) has joined #ceph
[9:53] * sleinen (~Adium@2001:620:0:2d:5c78:dc71:da0:4417) has joined #ceph
[9:53] * tnt (~tnt@212-166-48-236.win.be) has joined #ceph
[9:55] <nerdtron> thanasisk hello :)
[9:56] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[9:58] * YD (YD@d.clients.kiwiirc.com) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[9:58] * YD (YD@a.clients.kiwiirc.com) has joined #ceph
[10:01] <thanasisk> [ceph_deploy.gatherkeys][WARNIN] Unable to find /etc/ceph/ceph.client.admin.keyring on ['brizo.classmarkets.net']
[10:01] <thanasisk> how do i create admin.keyring ?
[10:01] * thanasisk ceph newbie
[10:03] * JustEra (~JustEra@89.234.148.11) has joined #ceph
[10:04] <JustEra> Hello
[10:05] <JustEra> I got the "INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'" error when creating a ceph cluster if anyone can help me
[10:05] * hug (~hug@nuke.abacus.ch) Quit (Quit: Changing server)
[10:06] * sleinen (~Adium@2001:620:0:2d:5c78:dc71:da0:4417) Quit (Ping timeout: 480 seconds)
[10:07] * tobru_ (~quassel@213.55.184.247) has joined #ceph
[10:08] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (Remote host closed the connection)
[10:08] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[10:09] * ssejour1 (~sebastien@out-chantepie.fr.clara.net) has joined #ceph
[10:12] * sleinen (~Adium@2001:620:0:2d:55c2:d8b4:4053:847f) has joined #ceph
[10:13] * ssejour (~sebastien@out-chantepie.fr.clara.net) Quit (Ping timeout: 480 seconds)
[10:14] * mschiff (~mschiff@p4FD7E1A0.dip0.t-ipconnect.de) has joined #ceph
[10:14] <ccourtaut> morning
[10:18] <thanasisk> Unable to find /etc/ceph/ceph.client.admin.keyring on <--- any ideas welcome :)
[10:19] <nerdtron> what directory are you in when you perform the ceph-deploy command?
[10:19] <nerdtron> JustEra how many monitors are active in your cluster?
[10:21] <JustEra> nerdtron, I'm following the docs for a 3 host clustering with ceph-deploy, and I'm stuck with that error
[10:21] <thanasisk> nerdtron, ~/ceph/mycluster
[10:22] * tobru_ (~quassel@213.55.184.247) Quit (Ping timeout: 480 seconds)
[10:22] * Cube (~Cube@66-87-64-191.pools.spcsdns.net) has joined #ceph
[10:23] <nerdtron> JustEra did you follow the preflight checklist? remember, you need passwordless ssh and sudo on all nodes
[10:25] <nerdtron> thanasisk: uninstall ceph on all nodes using ceph-deploy purge and purgedata... then install ceph again. this will create new keys..basically start again from the storage quick start http://ceph.com/docs/master/start/quick-ceph-deploy/
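
The reset nerdtron describes, roughly (hostnames are placeholders); this removes the packages and data on the listed nodes and drops any keyrings ceph-deploy is holding locally before starting over:

    ceph-deploy purge node1 node2 node3
    ceph-deploy purgedata node1 node2 node3
    ceph-deploy forgetkeys
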
[10:25] * niklas (~niklas@2001:7c0:409:8001::32:115) Quit (Remote host closed the connection)
[10:25] <thanasisk> gotcha thanks :)
[10:26] <thanasisk> all nodes including admin node?
[10:26] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[10:26] <nerdtron> the ceph package will only be installed on nodes participating in the cluster; the ceph-deploy package is only installed on the admin node
[10:27] <nerdtron> so if your admin node will also participate in the cluster, ceph-deploy install admin-node node1 node2 node....
[10:27] <thanasisk> thanks
[10:28] * artwork_lv (~artwork_l@adsl.office.mediapeers.com) has joined #ceph
[10:29] * artwork_lv (~artwork_l@adsl.office.mediapeers.com) Quit ()
[10:31] <tnt> ... wtf ... why would I need to install -dev packages when installing ceph-common.
[10:34] <thanasisk> nerdtron, i tried nuking and reinstalling still the same behaviour
[10:35] <thanasisk> im using the latest stable of ceph on debian wheezy
[10:35] * Cube (~Cube@66-87-64-191.pools.spcsdns.net) Quit (Quit: Leaving.)
[10:37] * artwork_lv (~artwork_l@adsl.office.mediapeers.com) has joined #ceph
[10:37] <thanasisk> [ceph_deploy.gatherkeys][WARNIN] Unable to find /etc/ceph/ceph.client.admin.keyring on
[10:38] <thanasisk> is there a way to manually create the file?
[10:39] <wogri_risc> thanasisk: just secure-copy from your existing /etc/ceph/ - if you already have a node deployed.
[10:39] <thanasisk> wogri_risc, i dont have a node deploy, any alternative approaches?
[10:40] <wogri_risc> you can follow the manual instructions on how to install a monitor node. that would probably describe how to create this file.
[10:42] <thanasisk> wogri_risc, i am following - it is just one command, it gets executed with no errors
[10:42] <nerdtron> ceph-deploy is the advisable tool to create a cluster
[10:43] <nerdtron> when you use the following commands, the keys SHOULD be automatically created, unless you delete them
[10:43] <nerdtron> ceph-deploy new node1 node2 node3
[10:43] <thanasisk> its not that simple
[10:43] <nerdtron> ceph-deploy install ceph-node1 node2 node3
[10:43] <thanasisk> the fix was I added a monitor and everything was fine and dandy :)
[10:44] <thanasisk> so mon create host1 fails
[10:44] <thanasisk> mon create host1 host2 worked
[10:44] <nerdtron> ceph-deploy mon create node1 node2 node3
[10:44] <nerdtron> of course, you should follow the guide step by step, create a mon first before gathering keys
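
Put together, the order being pointed at here (from the quick start; hostnames are examples, and gatherkeys is run against a host that already has a monitor):

    ceph-deploy new node1 node2 node3
    ceph-deploy install node1 node2 node3
    ceph-deploy mon create node1 node2 node3
    ceph-deploy gatherkeys node1
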
[10:46] <thanasisk> thats what i did and still got failures :)
[10:46] <thanasisk> can i make do with 2 monitors? or the minimum is 3?
[10:47] <wogri_risc> it works with two
[10:47] <wogri_risc> but if one fails, the cluster stops working
[10:47] <nerdtron> 2 mons will do, but if one comes down, there is no quorum and the cluster will be down also
[10:47] <thanasisk> thanks again
[10:48] * BillK (~BillK-OFT@124-148-116-219.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[10:50] * ScOut3R (~ScOut3R@catv-89-133-25-52.catv.broadband.hu) Quit (Remote host closed the connection)
[10:50] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[10:50] * ScOut3R (~ScOut3R@catv-89-133-25-52.catv.broadband.hu) has joined #ceph
[10:51] * BillK (~BillK-OFT@124-148-224-108.dyn.iinet.net.au) has joined #ceph
[10:57] <tnt> "ERROR: failed to initialize watch" when startin radosgw, does that ring a bell ?
[10:59] * ScOut3R (~ScOut3R@catv-89-133-25-52.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[11:01] * AfC (~andrew@2407:7800:200:1011:5910:8716:3d3c:bdb7) Quit (Ping timeout: 480 seconds)
[11:01] <JustEra> how do I add an osd to only a partition and not a whole disk?
[11:03] <nerdtron> JustEra the recommendation is one osd per disk
[11:03] <nerdtron> partition the disk, say into sdb1 and sdb2
[11:03] <thanasisk> [10:57] root@brizo:~ # modprobe rbd
[11:03] <thanasisk> libkmod: ERROR ../libkmod/libkmod.c:554 kmod_search_moddep: could not open moddep file '/lib/modules/3.8.13-xxxx-grs-ipv6-64/modules.dep.bin' <--- does that mean my kernel is NOT compatible?
[11:03] <nerdtron> then ceph-deploy prepare node1:/dev/sdb1
[11:04] <nerdtron> JustEra then ceph-deploy activate node1:/dev/sdb1
[11:04] <thanasisk> nerdtron, shouldnt that be ceph-deploy osd prepare
[11:04] <nerdtron> do it on both partitions and you should have 2 osds on a single disk
[11:04] <thanasisk> and osd activate respectively?
[11:04] * niklas (niklas@home.hadiko.de) has joined #ceph
[11:04] <JustEra> nerdtron, there is no prepare cmd
[11:04] <JustEra> thanasisk, yeah think so
[11:05] <thanasisk> JustEra, osd prepare / osd activate
[11:05] <nerdtron> sorry it's ceph-deploy osd prepare
[11:05] <nerdtron> haha
[11:05] <nerdtron> ceph-deploy osd activate
[11:05] <JustEra> ceph-disk: Error: Command '['mkfs', '-t', 'xfs', '-f', '-i', 'size=2048', '--', '/dev/sda2']' returned non-zero exit status 1
[11:05] <JustEra> hmmm any log file or verbose mode ?
[11:06] <thanasisk> [10:57] root@brizo:~ # modprobe rbd
[11:06] <thanasisk> libkmod: ERROR ../libkmod/libkmod.c:554 kmod_search_moddep: could not open moddep file '/lib/modules/3.8.13-xxxx-grs-ipv6-64/modules.dep.bin' <-- is my kernel incompatible with rbd?
[11:06] <nerdtron> verbose mode ceph-deploy -v osd prepare
[11:07] <nerdtron> be sure that after creating partitions, you also make a file system on it, like mkfs.xfs /dev/sda2
[11:07] <nerdtron> then try ceph-deploy again
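
So for a single partition, the sequence being pieced together above would look something like this (host and device names are examples; the mkfs step follows nerdtron's suggestion):

    mkfs.xfs -f /dev/sdb1
    ceph-deploy osd prepare node1:/dev/sdb1
    ceph-deploy osd activate node1:/dev/sdb1
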
[11:07] <nerdtron> thanasisk what is your kernel?
[11:07] <nerdtron> did you install ceph-common?
[11:08] <thanasisk> yes ceph-common is installed
[11:08] <thanasisk> [11:08] root@brizo:/lib # find / -name modules.dep.bin
[11:08] <thanasisk> [11:08] root@brizo:/lib #
[11:09] <thanasisk> nerdtron, [11:08] root@brizo:/lib # uname -r
[11:09] <thanasisk> 3.8.13-xxxx-grs-ipv6-64
[11:09] <thanasisk> as provided by the OVH hosting provider
[11:10] * allsystemsarego (~allsystem@5-12-37-127.residential.rdsnet.ro) has joined #ceph
[11:10] <nerdtron> ceph-common installed on the client where you want to mount rbd?
[11:10] <thanasisk> yes
[11:11] <tnt> thanasisk: IIRC I'm not even sure the OVH kernel supports modules at all.
[11:11] <tnt> does lsmod return anything ?
[11:11] <thanasisk> tnt: yes, the same error message :(
[11:11] <JustEra> by default the ovh kernel is not module-capable; you have to set another kernel
[11:11] <thanasisk> so i need to reformat my boxen with a stock kernel
[11:11] <thanasisk> thanks guys :)
[11:12] <tnt> you can just put another kernel ...
[11:12] <tnt> no need to _reformat_ everything ...
[11:12] <JustEra> thanasisk, Or just change the kernel
[11:12] <nerdtron> anybody using ceph rbd over gigabit network? how are your speeds?
[11:13] <tnt> nerdtron: I'm using it over GigE (a dual bonded link).
[11:13] <nerdtron> dual bonded? you use 2 nics for the client?
[11:13] <tnt> nerdtron: most of the time network is not the limitation, except for pure sequential reads ...
[11:14] * mozg (~andrei@host86-185-78-26.range86-185.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[11:14] <tnt> nerdtron: for client and the ceph machines. They're all connected via 2 NICs to 2 distinct switches of the same stack with LACP.
[11:14] <tnt> I'm also using jumbo frames on that network. Not sure if it changes much for ceph.
[11:15] <nerdtron> tnt and how's the speed?
[11:15] <nerdtron> read and write?
[11:17] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[11:17] <tnt> can't find the bench numbers anymore. But it wasn't limited by the network.
[11:17] * roald (~oftc-webi@87.209.150.214) Quit (Quit: Page closed)
[11:17] <tnt> except for pure sequential read where I could saturate the 200 MB/s link.
[11:18] <tnt> but for write the hdds were the bottleneck (no ssd journal). But mostly the xen/rbd interface was the limiting factor.
[11:19] <tnt> I think in the end, inside the VM it was like 60 MB/s read and 30 MB/s write or so.
[11:20] <thanasisk> kernel changed!
[11:23] <thanasisk> ok i have mounted the block device
[11:37] * tobru_ (~quassel@213.55.184.233) has joined #ceph
[11:40] * sleinen (~Adium@2001:620:0:2d:55c2:d8b4:4053:847f) Quit (Quit: Leaving.)
[11:40] <nerdtron> tnt yes that is also my performance, rbd, no separate ssd journal...so it seems my slow write speeds are just normal sigh
[11:40] * gillesMo (~gillesMo@00012912.user.oftc.net) has joined #ceph
[11:40] <nerdtron> thanasisk what will you use the rbd for? why not cephFS?
[11:41] <thanasisk> storing images
[11:41] <thanasisk> a *lot* of images
[11:41] <nerdtron> for kvm?
[11:42] <thanasisk> not VM images, a lot of small jpg images i meant
[11:43] <ofu_> how many? billions?
[11:43] <thanasisk> millions
[11:44] <loicd> thanasisk: why not store them as objects using the swift/S3 API ?
[11:45] <tnt> nerdtron: well, all data is getting written twice. An ssd journal would probably be the biggest improvement, rather than going to 10G
[11:45] <thanasisk> loicd, that is the next step
[11:45] <thanasisk> i only started working with ceph yesterday afternoon :)
[11:45] <loicd> thanasisk: when re-installing your OVH box you can require a vanilla kernel instead of the custom one. An option you don't have when you install the box immediately after renting it.
[11:46] <loicd> thanasisk: :-) if you make it your first step you won't have to worry about cephfs / rbd
[11:46] <thanasisk> loicd, care to point me to the documentation?
[11:47] <thanasisk> OVH is just for testing, the real environment will be Hetzner (yeah, I know, I know)
[11:47] <loicd> http://ceph.com/docs/master/radosgw/
[11:47] <nerdtron> swift/S3 API --->>> what is this?? I have no idea about this. we just use rbd and cephfs for backup
[11:47] <tnt> Has anybody had any issues upgrading a radosgw from 0.61 to 0.67?
[11:48] <loicd> thanasisk: I actually prefer Hetzner over OVH. Last time I looked Hetzner was cheaper than OVH.
[11:48] <tnt> here it just doesn't start ...
[11:48] <thanasisk> loicd, the hardware in hetzner is better too
[11:49] <loicd> nerdtron: http://ceph.com/docs/master/radosgw/ shortly explains what S3 and swift are about
[11:53] * tobru__ (~quassel@213.55.184.140) has joined #ceph
[11:54] <ccourtaut> nerdtron: basically, Radosgw provides an Object Store over Rados, and it implements the Amazon S3 API and the OpenStack Swift API
[11:54] <ccourtaut> instead of proposing another new API
[11:54] <ccourtaut> it relies on the "standards" to communicate with object store
[11:55] <ccourtaut> and by doing that, it allows migration from another provider to ceph
[11:56] * wiwengweng (~oftc-webi@183.62.249.162) has joined #ceph
[11:56] <nerdtron> ccourtaut ahhhmmmmm
[11:56] <nerdtron> still too much for our needs :)
[11:57] <wiwengweng> some one told me oftc
[11:57] <wiwengweng> is much more active
[11:57] * tobru_ (~quassel@213.55.184.233) Quit (Ping timeout: 480 seconds)
[11:57] <wiwengweng> :)
[11:57] <thanasisk> i just said more not much more
[11:58] <wiwengweng> :D
[11:58] <wiwengweng> but I see many more people here, and you
[12:00] * smiley_ (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) has joined #ceph
[12:01] <nerdtron> hhmm lately there are a lot of people in here... :D
[12:09] * yy-nm1 (~Thunderbi@218.74.34.80) Quit (Quit: yy-nm1)
[12:10] * sleinen (~Adium@2001:620:0:2d:48b2:af55:6eb:cca0) has joined #ceph
[12:11] * wiwengweng (~oftc-webi@183.62.249.162) Quit (Remote host closed the connection)
[12:20] * diegows (~diegows@190.190.11.42) has joined #ceph
[12:21] <thanasisk> is the metadata server a single point of failure?
[12:21] <thanasisk> and how can I provide high availability to it?
[12:23] * sleinen (~Adium@2001:620:0:2d:48b2:af55:6eb:cca0) Quit (Ping timeout: 480 seconds)
[12:24] <nerdtron> it is not recommended to add several mds but it is possible
[12:24] <nerdtron> but you can always declare a new mds when the old one fails.
[12:25] <thanasisk> with what loss?
[12:25] <thanasisk> and will my data still be there and usable?
[12:25] * tobru_ (~quassel@213.55.184.200) has joined #ceph
[12:28] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) has joined #ceph
[12:30] * tobru__ (~quassel@213.55.184.140) Quit (Ping timeout: 480 seconds)
[12:31] <thanasisk> so question: what happens if my metadata server goes down?
[12:33] * nerdtron (~kenneth@202.60.8.252) Quit (Ping timeout: 480 seconds)
[12:35] * carif (~mcarifio@146-115-183-141.c3-0.wtr-ubr1.sbo-wtr.ma.cable.rcn.com) has joined #ceph
[12:36] * jks (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[12:36] * tobru__ (~quassel@213.55.184.215) has joined #ceph
[12:36] * jksM (~jks@3e6b5724.rev.stofanet.dk) Quit (Read error: Connection reset by peer)
[12:40] * tobru_ (~quassel@213.55.184.200) Quit (Ping timeout: 480 seconds)
[12:45] <indego> I have a question regarding how people manage their rbd images with virtualization. Does one make 1 rbd per server or per partition. If per server do you run LVM on that to handle dynamic resizing?
[12:48] * sleinen (~Adium@2001:620:0:46:3061:914f:31a:209e) has joined #ceph
[12:49] <indego> Also, what are the best optimizations that you can make before adding SSD as a journal?
[12:53] <indego> I am running Debian 7 with ceph 0.67.2 from the ceph repository. I have installed the 3.10.2 kernel from 'testing' and have rebuilt qemu/qemu-kvm to enable rbd/rados support.
[12:54] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[12:55] * tobru__ (~quassel@213.55.184.215) Quit (Ping timeout: 480 seconds)
[12:56] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[13:00] * tobru_ (~quassel@213.55.184.248) has joined #ceph
[13:03] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[13:07] <wogri_risc> indego: I do create 1 rbd per virtual drive; that means, e.g., I create one rbd for /
[13:07] <wogri_risc> and one for /var/log
[13:07] <wogri_risc> so I can re-size however I want
[13:07] <wogri_risc> don't create partitions inside the virtual machine. it will only make things harder.
[13:08] <wogri_risc> if you don't happen to have an SSD as we do in our production cluster, use the writeback cache of your SAS controller.
[13:08] <wogri_risc> in other words: don't disable that one :)
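
A sketch of the layout wogri_risc describes, with example pool/image names and sizes (rbd sizes here are in MB): one image per guest filesystem, each resized independently:

    rbd create vmpool/web01-root --size 20480
    rbd create vmpool/web01-varlog --size 10240
    # later, grow only the log volume
    rbd resize vmpool/web01-varlog --size 20480
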
[13:10] <indego> wogri_risc, OK, thanks for the info.
[13:10] <indego> that is what I was thinking, but I just wanted to check what people are doing.
[13:13] <wogri_risc> I believe there's not much you can do if you don't have an SSD.
[13:13] <wogri_risc> in our case it really performs ok without the SSD.
[13:14] <joelio> yea, same here - no SSDs (yet!) and performance is good. RBD Flash based cache hypervisor side would be more than welcome of course!
[13:14] * joelio waits for an Emperor
[13:15] * sleinen (~Adium@2001:620:0:46:3061:914f:31a:209e) Quit (Quit: Leaving.)
[13:18] <wogri_risc> we tweaked a bit with caching for the vm, of course.
[13:18] * diegows (~diegows@190.190.11.42) Quit (Ping timeout: 480 seconds)
[13:19] <tnt> thanasisk: the mds doesn't store anything itself. So you can re-create one from scratch without loss.
[13:21] * sleinen (~Adium@130.59.94.179) has joined #ceph
[13:22] * sleinen1 (~Adium@2001:620:0:2d:14ed:df0c:a2ad:9d54) has joined #ceph
[13:23] * sleinen (~Adium@130.59.94.179) Quit (Read error: Connection reset by peer)
[13:24] * carif (~mcarifio@146-115-183-141.c3-0.wtr-ubr1.sbo-wtr.ma.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[13:27] * berant (~blemmenes@gw01.ussignalcom.com) has joined #ceph
[13:40] * gillesMo (~gillesMo@00012912.user.oftc.net) Quit (Quit: Konversation terminated!)
[13:46] * rudolfsteiner (~federicon@190.244.11.181) has joined #ceph
[13:47] * rudolfsteiner (~federicon@190.244.11.181) Quit ()
[13:49] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (Remote host closed the connection)
[13:49] * niklas (niklas@home.hadiko.de) Quit (Quit: Lost terminal)
[13:49] * rudolfsteiner (~federicon@190.244.11.181) has joined #ceph
[13:49] * markbby (~Adium@168.94.245.1) has joined #ceph
[13:50] <thorus> my ceph osd tree shows this: http://paste.ubuntu.com/6040024/. How to manipulate the reweight?
[13:51] <thorus> I want to set all to 1
[13:54] * rudolfsteiner (~federicon@190.244.11.181) Quit ()
[13:55] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) has joined #ceph
[13:58] <ofu_> ceph osd crush set 8 1.0
[13:58] * carif (~mcarifio@pool-96-233-32-122.bstnma.fios.verizon.net) has joined #ceph
[14:02] * tobru_ (~quassel@213.55.184.248) Quit (Ping timeout: 480 seconds)
[14:05] * alfredodeza (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[14:06] * kraken (~kraken@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[14:06] <thorus> ofu_ this sets the weight not the reweight
[14:08] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[14:15] * jluis (~joao@89.181.146.94) has joined #ceph
[14:15] * ChanServ sets mode +o jluis
[14:16] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[14:16] * yanzheng (~zhyan@101.82.235.121) has joined #ceph
[14:20] <thorus> ok got it ceph osd reweight 8 1, got it from mailing list
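
For the record, the two knobs that got mixed up above, shown as example commands: the CRUSH weight (the first weight column, usually sized to the disk) versus the 0-1 reweight override that thorus wanted to reset:

    ceph osd crush reweight osd.8 1.0   # CRUSH weight column
    ceph osd reweight 8 1               # reweight column (0-1 override)
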
[14:21] * joao (~joao@89.181.146.94) Quit (Ping timeout: 480 seconds)
[14:22] * imjustmatthew (~imjustmat@c-24-127-107-51.hsd1.va.comcast.net) has joined #ceph
[14:23] * kraken (~kraken@c-24-131-46-23.hsd1.ga.comcast.net) Quit (Remote host closed the connection)
[14:24] * kraken (~kraken@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[14:27] * smiley_ (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) Quit (Quit: smiley_)
[14:29] <indego> any idea why when I add a 3rd OSD node rados bench goes from ~85MB/s to about 17MB/s? OSD node has 2 4TB disks that is ~50% of the total capacity. Local disk tests show good speed to the disks...
[14:38] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (Read error: Connection reset by peer)
[14:39] * alexxy[home] (~alexxy@2001:470:1f14:106::2) has joined #ceph
[14:40] * rudolfsteiner (~federicon@190.244.11.181) has joined #ceph
[14:42] * BadaBoum (~Bada@195.65.225.142) has joined #ceph
[14:47] * rudolfsteiner (~federicon@190.244.11.181) Quit (Quit: rudolfsteiner)
[14:47] * Bada (~Bada@195.65.225.142) Quit (Ping timeout: 480 seconds)
[14:54] <matt_> indego, what version of Ceph?
[14:55] <joelio> indego: no replication happening to balance the OSDs and taking the rest of your b/w?
[14:55] * carif (~mcarifio@pool-96-233-32-122.bstnma.fios.verizon.net) Quit (Quit: Ex-Chat)
[15:06] * tobru_ (~quassel@213.55.184.223) has joined #ceph
[15:06] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) Quit (Remote host closed the connection)
[15:07] * sleinen (~Adium@2001:620:0:46:e509:3116:fd50:5546) has joined #ceph
[15:08] <indego> matt_, the latest dumpling (0.67.2) - joelio , no I did a 'ceph osd set noout' before and was checking with 'ceph -w'. I guess it is something on that host. Performing another test with the 3rd host and one of the 'fast' ones offlined gave worse results ~12.5MB/s
[15:10] * sleinen2 (~Adium@2001:620:0:46:e509:3116:fd50:5546) has joined #ceph
[15:10] * X3NQ (~X3NQ@195.191.107.205) Quit (Quit: Leaving)
[15:10] * mozg (~andrei@86.188.208.210) has joined #ceph
[15:10] <mozg> wrencsok, hello mate
[15:10] <mozg> how's it going?
[15:11] <mozg> i was wondering if you've managed to run any tests on the Dumpling release?
[15:11] <mozg> sorry, i had to drop off yesterday
[15:12] * vanham (~vanham@gateway.mav.com.br) has joined #ceph
[15:12] <indego> There is no special disk controller here, just straight SATA to the motherboard. Machine One is a HP with raid controller but 'single' disks exported. The second a supermicro with a 3ware and 'single' disks. I need to check the config of the 3ware but the tw_cli tool seg-faults on the new linux kernels (something to do with uname changes)
[15:12] <vanham> Good day folks
[15:13] <vanham> How do you get the real data usage of an RBD image?
[15:13] * sleinen1 (~Adium@2001:620:0:2d:14ed:df0c:a2ad:9d54) Quit (Ping timeout: 480 seconds)
[15:14] <vanham> rbd ls will give its full size
[15:14] <vanham> but, because of thin provisioning, it actually uses less
[15:14] <wogri_risc> rbd ls -l actually.
[15:15] <vanham> rbd ls -l is giving a total of 1228 GBs of images
[15:16] <vanham> But, because of how ceph provisions the data, it uses a lot less
[15:16] <wido> vanham: You would have to count all the rbd objects
[15:16] <vanham> Cloning, COW, snapshots, thin provisioning all influence that
[15:16] <vanham> Hi wido!
[15:16] <wido> there is currently no tool that does it for you. Since there is no central register of which objects exist
[15:16] * sleinen (~Adium@2001:620:0:46:e509:3116:fd50:5546) Quit (Ping timeout: 480 seconds)
[15:19] <vanham> rbd diff allows something like it
[15:19] <vanham> but it only count the changes
[15:19] <vanham> so, there is no way to get this information right now?
[15:19] <vanham> With any of the currently available ceph tools?
[15:20] * tobru__ (~quassel@213.55.184.196) has joined #ceph
[15:20] <wido> vanham: No, there is no way. Since RBD doesn't keep track of objects it doesn't know which one exists
[15:20] <vanham> K, thanks!
[15:20] <wido> The only way would be to count all raw RADOS objects and sum up their sizes
[15:20] * tobru_ (~quassel@213.55.184.223) Quit (Ping timeout: 480 seconds)
[15:20] <wido> vanham: Doing this will always be a heavy operation for the cluster
[15:20] <wido> will never be instant
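
A rough version of what wido describes, assuming a pool and image name, the block_name_prefix reported by rbd info, and the default 4MB object size; counting the image's objects gives an upper bound on the space actually allocated, and it scans the whole pool, so it is slow:

    # object name prefix for this image
    PREFIX=$(rbd info vmpool/web01-root | awk '/block_name_prefix/ {print $2}')
    # number of allocated objects * 4MB = upper bound on real usage
    rados -p vmpool ls | grep -c "^${PREFIX}" | awk '{print $1 * 4, "MB (upper bound)"}'
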
[15:22] <vanham> Oh, ok
[15:22] <vanham> I was looking into how to do that
[15:23] * sleinen2 (~Adium@2001:620:0:46:e509:3116:fd50:5546) Quit (Ping timeout: 480 seconds)
[15:25] <tnt> Query string authentication when using response-* params also seem broken in 0.67.x
[15:27] <matt_> indego, you might want to test using the wip-dumpling-perf2 branch. 0.67.2 has a pretty serious cpu usage bug that causes bad performance (well it did for me and a few others)
[15:27] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) has joined #ceph
[15:27] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) Quit (Remote host closed the connection)
[15:30] <indego> matt_, oh, OK. I thought that I read in the release notes that the high-cpu bug was fixed from .1 to .2
[15:31] * sleinen (~Adium@130.59.94.179) has joined #ceph
[15:32] * sleinen1 (~Adium@2001:620:0:2d:3827:2cd4:a9a3:9c49) has joined #ceph
[15:33] <matt_> indego, it got better but not entirely fixed. The full fix is in that branch which should be in .3 when it gets released
[15:36] <indego> matt_, ok, thanks. This cluster is still in testing. I guess I need to see if this is a hardware issue on this node.
[15:36] <matt_> Just check your CPU load during benching, if it's through the roof then you're probably being cpu limited. Otherwise it might be something else
[15:37] * haomaiwang (~haomaiwan@124.161.8.8) has joined #ceph
[15:39] * sleinen (~Adium@130.59.94.179) Quit (Ping timeout: 480 seconds)
[15:40] * Vjarjadian (~IceChat77@176.254.37.210) Quit (Quit: OUCH!!!)
[15:43] * jerker (jerker@82ee1319.test.dnsbl.oftc.net) has joined #ceph
[15:47] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) Quit (Quit: wogri_risc)
[15:50] * allsystemsarego (~allsystem@5-12-37-127.residential.rdsnet.ro) Quit (Quit: Leaving)
[15:51] * niklas (niklas@vm15.hadiko.de) has joined #ceph
[15:52] <loicd> anyone willing to review https://github.com/ceph/ceph/pull/538 ? It's a lot of fun, I promise :-)
[15:56] <indego> matt_, just ran another bench, system load is under 1. Looks like iowait is a little on the high side so I guess it is the lack of a caching disk controller. Will see if I can resolve that and test.
[15:57] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:58] * yanzheng (~zhyan@101.82.235.121) Quit (Ping timeout: 480 seconds)
[16:16] * mozg (~andrei@86.188.208.210) Quit (Ping timeout: 480 seconds)
[16:19] * yanzheng (~zhyan@101.82.235.121) has joined #ceph
[16:20] <thanasisk> how often does the ceph-admin server communicate with the rest of the ceph servers?
[16:21] <vanham> thanasisk, http://ceph.com/docs/master/rados/configuration/mon-osd-interaction/ for MON-OSD
[16:21] * markbby (~Adium@168.94.245.1) Quit (Remote host closed the connection)
[16:21] <thanasisk> im thinking of hosting the ceph-admin server on a cheap cloud provider whereas the rest of the ceph is going to be on dedicated commodity hardware
[16:21] * gaveen (~gaveen@175.157.90.40) has joined #ceph
[16:23] * janisg (~troll@85.254.50.23) Quit (Ping timeout: 480 seconds)
[16:23] * markbby (~Adium@168.94.245.1) has joined #ceph
[16:27] * tserong__ (~tserong@58-6-101-181.dyn.iinet.net.au) has joined #ceph
[16:27] * sagelap (~sage@2600:1012:b006:d202:f945:a530:9596:aa59) has joined #ceph
[16:28] <JustEra> How can I restart an osd ?
[16:28] * vata (~vata@2607:fad8:4:6:2dcd:32ef:bfab:c53c) Quit (Quit: Leaving.)
[16:28] * vata (~vata@2607:fad8:4:6:2dcd:32ef:bfab:c53c) has joined #ceph
[16:31] * wrale (~wrale@wrk-28-217.cs.wright.edu) has joined #ceph
[16:31] * janisg (~troll@85.254.50.23) has joined #ceph
[16:31] <wrale> is there any way to natively encrypt data at rest with ceph (especially its s3 api)? (separate from layers above ceph services)
[16:32] * tserong_ (~tserong@124-168-231-241.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[16:32] <jluis> don't think rgw does that, if that's what you mean
[16:32] * jluis is now known as joao
[16:33] <wrale> joao: okay.. thank you
[16:34] <tnt> wrale: you can encrypt the disks if you want ...
[16:35] <tnt> that's a layer below ceph :)
[16:35] <wrale> that's what i was thinking.. LUKS, passphrase on boot
[16:36] <wrale> makes for difficult scaling.. perhaps an orchestration layer like ansible could be used to get things off the ground
[16:36] <wrale> or maybe only the osd would need encrypting.. so boot wouldn't be an issue
[16:36] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[16:37] <wrale> (trying to build a system which complies with USA HIPAA laws)
[16:37] <tnt> wrale: yes, encrypting only the osd disks and having a 'key server' that serves the unlock key to a server only after you have manually 'unlocked' that server first, or something like that.
[16:38] <wrale> tnt: cool.. glad to hear that i'm on the right track
[16:38] <wrale> thank you
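A minimal sketch of the dm-crypt/LUKS-per-OSD idea tnt outlines, assuming a dedicated data disk /dev/sdb for osd.3 (device, id and mountpoint are placeholders); the passphrase or keyfile handling is where the 'key server' would come in:

    cryptsetup luksFormat /dev/sdb                  # set up the LUKS header; supply a passphrase or keyfile
    cryptsetup luksOpen /dev/sdb osd-3-crypt        # the key could be fetched from the key server at unlock time
    mkfs.xfs /dev/mapper/osd-3-crypt
    mount /dev/mapper/osd-3-crypt /var/lib/ceph/osd/ceph-3

If memory serves, recent ceph-disk releases also have a --dmcrypt flag that automates roughly this.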
[16:41] <berant> Speaking of Ansible, is anyone aware of any efforts to create ceph modules for Ansible?
[16:42] <tnt> now that python-ceph is mandatory on all the machines, that would make things easier I guess :)
[16:42] <wrale> berant: if i can't find one in the next few weeks, i'll probably be writing them
[16:43] <wrale> as i'll be using ceph, regardless of hipaa in a different manner
[16:43] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) Quit (Quit: Leaving.)
[16:43] <alfredodeza> berant: that doesn't look too hard, I wrote some basic stuff in Ansible to get a host ready for ceph-deploy usage
[16:44] * sleinen1 (~Adium@2001:620:0:2d:3827:2cd4:a9a3:9c49) Quit (Quit: Leaving.)
[16:44] * sleinen (~Adium@2001:620:0:2d:3827:2cd4:a9a3:9c49) has joined #ceph
[16:44] <berant> wrale: I'd be happy to assist I'm going to be spinning up a cluster next week and I'd prefer to expand my ansible usage rather than use another tool just for this
[16:45] <berant> alfredodeza: yeah, I haven't yet looked at what it would take from a module perspective, I've not written any of my own Ansible modules
[16:45] <wrale> berant: that sounds cool. i vaguely remember coming across something for ansible, but perhaps that was a playbook vs. module.. come to think of it, i may not be worthy of writing a module :)
[16:45] <alfredodeza> ansible modules are pretty easy, maybe I could help out with an hour or so every week to get something
[16:46] <berant> wrale: you looking to do HIPAA compliant compute or just storage?
[16:46] * sleinen1 (~Adium@2001:620:0:2d:45a5:c187:68a7:ed00) has joined #ceph
[16:46] <wrale> berant: a project i'm working on needs it all throughout the stack.. :) fun times
[16:46] <berant> wrale: yeah it might not be, could potentially leverage ceph-deploy via a cmd: module
[16:47] * diegows (~diegows@190.190.11.42) has joined #ceph
[16:47] <berant> wrale: however it seems like for idempotent function it would really need to tie in well to the ceph admin tools
[16:47] <dmsimard> Where exactly in the process of installing an OSD through ceph-deploy does one use —fs-type ? I'm having the same problem as this poor guy: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-May/001800.html I don't see where to use it.
[16:47] <wrale> agreed..
[16:48] <JustEra> berant: don't you have to format the partition manually ?
[16:49] * tryggvil (~tryggvil@217.28.181.130) has joined #ceph
[16:49] <alfredodeza> dmsimard: ceph-deploy does not support filetype flags
[16:49] <berant> wrale: yeah I'm going through a lot of those same meetings right now. The primary purpose of my cluster is to store the logs/compliance information from other systems (not storing ePHI) in the cluster
[16:50] <berant> wrale: also a good excuse to get ceph in the door and show people the errors of the Enterprise storage ways ;)
[16:50] <dmsimard> alfredodeza: Really ? The documentation seems to imply otherwise: http://ceph.com/docs/master/rados/deployment/ceph-deploy-osd/ "By default, ceph-deploy will create an OSD with the XFS filesystem. You may override the filesystem type by providing a --fs-type FS_TYPE argument, where FS_TYPE is an alternate filesystem such as ext4 or btrfs."
[16:50] <wrale> berant: i can relate.. phi over here.. developers writing to local filesystems for a long pipeline........
[16:50] <wrale> so they co-located the processes
[16:50] <wrale> :)
[16:50] <wrale> no scale for you
[16:51] <berant> JustEra: in what regards? using ceph-deploy? or in ansible?
[16:51] <wrale> someday, i would like to see ceph use docker containers, but i suppose that's way off
[16:51] <alfredodeza> dmsimard: wow, I had no idea the docs said that
[16:51] * alfredodeza looks
[16:51] <berant> yeah, and ridiculous pricing. especially for something simple that just requires a large amount of nearline
[16:52] * sleinen (~Adium@2001:620:0:2d:3827:2cd4:a9a3:9c49) Quit (Ping timeout: 480 seconds)
[16:52] <berant> wrale: docker containers? not familiar with that
[16:52] <dmsimard> alfredodeza: So we're both confused then :D
[16:52] <alfredodeza> indeed!
[16:52] <alfredodeza> flip some tables
[16:52] <kraken> (╯°□°)╯︵ ┻━┻
[16:52] <wrale> http://docker.io is the link, i think.. it's off the wall, in some ways
[16:53] <wrale> what i'd really like is a mesos cluster, using cephfs, running docker containers for everything..lol
[16:53] <berant> wrale: so sort of like vagrant? but more of from an app perspective vs. system/vm?
[16:53] <alfredodeza> I confirm that there are no such flags in ceph-deploy
[16:53] <wrale> berant: precisely.. and it's lxc based, so not a full vm
[16:53] * rudolfsteiner (~federicon@200.68.116.185) has joined #ceph
[16:54] * sprachgenerator (~sprachgen@130.202.135.204) has joined #ceph
[16:54] * sprachgenerator (~sprachgen@130.202.135.204) Quit ()
[16:54] <wrale> (for everything except what the berkley big data stack does, which runs on mesos)
[16:54] <dmsimard> alfredodeza: It was a long shot but even browsing through the code in the github repo for ceph-deploy, there's nothing as well. Huh.
[16:54] <berant> wrale: interesting, I hadn't heard of Mesos either. yay for new tools to read about!
[16:54] <alfredodeza> ah found it
[16:55] <alfredodeza> it is not in ceph-deplo9y
[16:55] <alfredodeza> *ceph-deploy
[16:55] <wrale> berant: right on... mesos blows my mind..lol
[16:55] <alfredodeza> it is in `ceph-disk prepare`
[16:55] <alfredodeza> `ceph-disk prepare --help` gives that option
[16:55] <alfredodeza> --fs-type FS_TYPE file system type to use (e.g. "ext4")
[16:55] <dmsimard> Ah, i'll look that up - it would be on the OSDs themselves then
[16:56] * sprachgenerator (~sprachgen@130.202.135.204) has joined #ceph
[16:56] <alfredodeza> hrmnnn, I guess ceph-deploy should just pass any extra args to the remote `ceph-disk {action}`
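In the meantime, a sketch of the workaround would be to run ceph-disk directly on the OSD host, since that is where the flag lives (device names are placeholders):

    ceph-disk prepare --fs-type ext4 /dev/sdb    # partitions the disk and creates the filesystem
    ceph-disk activate /dev/sdb1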
[16:57] <wrale> berant: i bet spark and spark streaming (parts of the berkley big data stack, both of which run on mesos -- load balanced) would probably work fantastically for your logs/compliance processing.. it's like hadoop + hive + storm on steroids.. not sure about the encryption bit though
[16:57] <dmsimard> Agreed, should we document this ?
[16:57] <thanasisk> i have 2 nodes running - how can I add a third one? can someone point me to the documentation?
[16:58] <wrale> berant: they are also working on graphx which should do things like google pregel (graph analytics)
[16:59] <berant> wrale: right now I'm likely going to be using elasticsearch and logstash for basic things that don't need much analytics. I have looked at storm before, but we just don't have much demand for streaming event processing.
[16:59] * BillK (~BillK-OFT@124-148-224-108.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[16:59] * Fetch_ (fetch@gimel.cepheid.org) Quit (Ping timeout: 480 seconds)
[16:59] <wrale> berant: cool. i look at logstash frequently and wish i had time to get going with it..lol.. sounds good.. greylog2 looks cool, but maybe a duplication of function
[17:00] <wrale> berant: i recently asked the logstash people if they could mesh with the berkley stuff.. no answer
[17:00] * gaveen (~gaveen@175.157.90.40) Quit (Read error: Connection reset by peer)
[17:01] <berant> yeah I hadn't heard of spark before so I'm not sure of the input/output compatibility with spark
[17:01] <wrale> berant: i would just like to have a unified cluster.. i read a paper about warehouse scale computers by the google engineers, which made me try to focus more on unified clusters
[17:01] * haomaiwang (~haomaiwan@124.161.8.8) Quit (Read error: Connection reset by peer)
[17:02] * sagelap (~sage@2600:1012:b006:d202:f945:a530:9596:aa59) Quit (Read error: No route to host)
[17:02] <wrale> unified clusters would do well for small biz deployments, too
[17:02] <wrale> which is what i'm working on in my spare time
[17:02] <wrale> ceph is a cornerstone of that
[17:02] <thanasisk> stupid question: what is defined as a "unified cluster"?
[17:03] <berant> wrale: I have a 3node ES/logstash cluster right now processing DNS/Cisco syslogs using legacy syslog > logstash frontend > zeromq > logstash filtering > ES
[17:03] <berant> wrale: yeah I've read that same paper
[17:03] <joao> thanasisk, you get multiple storage solutions without having to have multiple clusters, one for each solution for instance
[17:03] <wrale> thanasisk: a cluster which can do jobs like mpi, streaming, bid data batch, infrastructure nodes, etc
[17:03] <berant> wrale: fascinating thinking
[17:04] <wrale> berant: cool. thank you
[17:04] <dmsimard> alfredodeza: perhaps it could be an argument passed to "ceph-deploy osd prepare"
[17:04] <berant> wrale: I agree, I think ceph is going to be a key for a lot of those kind of solutions
[17:04] <alfredodeza> yeah something like that dmsimard
[17:05] <wrale> thanasisk: technically something like openstack could become a "unified cluster" i speak of.. i'm looking for something more intelligent and efficient at scheduling and h/a
[17:05] <absynth> hrrm, docker looks really interesting
[17:05] <berant> wrale: I'd love ES to be able to be aware of ceph and not have its own replication on top of ceph
[17:05] <absynth> thanks for that drive-by-pointer
[17:06] <wrale> absynth: no problem.. check out coreOS, too.. they're taking from ChromeOS to build an HTTP REST API for deploying docker containers to a bare kernel (and updating that kernel, by the same API)
[17:06] <wrale> berant: maybe with cephfs, it could do that?
[17:06] <dmsimard> alfredodeza: I'd like to document something for this, feel like there's an opportunity there but there is no issue reports on github - what is the proper way ? I'd do a pull request but I'm not comfortable enough with ceph just yet.
[17:07] <berant> wrale: yeah, though from what I see ES got rid of the shared storage backend (i.e. shared NFS/cephfs among nodes)
[17:08] <alfredodeza> dmsimard: issues are handled in the tracker.ceph.com site
[17:08] <alfredodeza> including ceph-deploy ones
[17:08] <alfredodeza> I wouldn't mind a pull request :)
[17:08] <dmsimard> Good to know, I'll see if I can do something on this later on - reporting it is a first good step.
[17:09] * gaveen (~gaveen@175.157.181.209) has joined #ceph
[17:09] <wrale> i hope someday ceph tackles the encryption at rest problem.. seems that could be done in this layer, separate from how the objects are exposed
[17:09] <wrale> i just read hekafs does it, but i'm nearly completely a stranger to hekafs
[17:10] * tobru_ (~quassel@213.55.184.138) has joined #ceph
[17:10] <wrale> xtreemfs has an article about their supercomputer 2012 outing in which they erroneously report they do encryption on disk.. i just asked them, and that is false.. boooo
[17:12] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[17:13] <wrale> alright.. sorry for the blabbing.. back to work.. i'll check in later .. thanks for the discussion and help
[17:14] <wrale> (and here i thought the ceph community was small, because the freenode channel is basically dead)..lol
[17:14] * tobru__ (~quassel@213.55.184.196) Quit (Ping timeout: 480 seconds)
[17:14] * grepory (~Adium@50-115-70-146.static-ip.telepacific.net) has joined #ceph
[17:15] <berant> wrale: what would the benefit be of doing it within ceph vs. with dm-crypt as you already mentioned?
[17:15] <berant> wrale: or is it just getting more things handled via ceph to keep it easy?
[17:16] <wrale> berant: easy factor, but also a higher ratio of keys randomness to PHI files, in my case
[17:16] <nhm> I'm not sure that hekafs exists anymore (since RH bought glusterfs, I think it's being rolled in)
[17:17] <berant> wrale: under the thinking that each object (may or may not be a single PHI object) could have its own key?
[17:17] <wrale> berant: perhaps, if configured that way.. performance would be dismal, i expect, though
[17:17] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:17] * zhyan_ (~zhyan@101.84.115.105) has joined #ceph
[17:17] <wrale> key generation alone would kill it
[17:17] * rudolfsteiner (~federicon@200.68.116.185) Quit (Quit: rudolfsteiner)
[17:18] <wrale> i think you make a good point.. perhaps it's best to be lower
[17:18] <berant> wrale: true, lots of PKI operations
[17:18] <wrale> berant: i think tripping over the unlocking of things could wipe the cluster, though.. since ceph wouldn't have any idea what was going on
[17:18] <wrale> i think that's my primary reason for wanting it to know
[17:19] <berant> wrale: would depend if you needed to be able to handle different keys for different groups or entities as clearly the dm-crypt would be ignorant to that
[17:19] <wrale> berant: something about keeping patients' health information together in the same bucket makes me nauseous
[17:20] <wrale> encrypted or no
[17:20] <berant> wrale: though dm-crypt doing base encryption could solve your "at rest" requirements and other entities or groups could be responsible for their own application/object level requirements to keep the dm-crypt key from being the only protection
[17:20] <berant> wrale: true
[17:20] * ChoppingBrocoli (~quassel@rrcs-74-218-204-10.central.biz.rr.com) has joined #ceph
[17:20] <berant> wrale: and to your point about the cluster, I think it would certainly trigger a lot of potentially needless rebalancing
[17:21] <wrale> berant: from an efficiency perspective.. i am beginning to think the only smart way to do this encryption at rest thing is to have the app do it...
[17:21] <wrale> drop encryption lower in the stack
[17:21] <wrale> except where normal security deems
[17:21] <ChoppingBrocoli> Quick question, what would be the better technology to use with ceph....Cloudstack or Openstack? I am a SMB with a team of 1 and 50 virt servers
[17:22] <wrale> ChoppingBrocoli: i can't speak to cloudstack, but i know when i spoke to mirantis about openstack + ceph, they were quite happy with the proposition (and comfortable).. inktank supports it
[17:22] <thanasisk> how much is an intank support contract? :)
[17:23] <wrale> :) however much they say it is ..lol
[17:23] <wrale> free 99
[17:23] * haomaiwang (~haomaiwan@112.193.130.53) has joined #ceph
[17:23] <berant> they can certainly speak to it but it's generally cluster size (TB) based
[17:23] <wrale> i really don't know, to be honest.. i'm awaiting my quote
[17:23] * yehudasa_ (~yehudasa@2602:306:330b:1410:ea03:9aff:fe98:e8ff) Quit (Ping timeout: 480 seconds)
[17:24] <berant> (from what we received)
[17:24] * alphe (~alphe@0001ac6f.user.oftc.net) has joined #ceph
[17:24] <wrale> (mirantis subs inktank, in my case)
[17:24] <ChoppingBrocoli> I heard something along the lines of $10K for a small cluster a year
[17:24] * yanzheng (~zhyan@101.82.235.121) Quit (Ping timeout: 480 seconds)
[17:24] <alphe> hello everyone
[17:24] * sleinen1 (~Adium@2001:620:0:2d:45a5:c187:68a7:ed00) Quit (Quit: Leaving.)
[17:24] <wrale> if it were my own, i'd just go IRC support contract :)
[17:24] * sleinen (~Adium@2001:620:0:2d:45a5:c187:68a7:ed00) has joined #ceph
[17:25] <ChoppingBrocoli> Ah found it...
[17:25] <ChoppingBrocoli> "Pricing is based on cluster capacity, annual support starts at clusters up to 64 TB, $9,600 for Silver and $13,440 for Gold. The cost per TB decreases as your volume increases. No license fees, just the cost of support."
[17:25] <alphe> I have a problem with radosgw when I update it then it shows cannot read region map
[17:25] <wrale> sounds expensive to me
[17:26] * matt_ (~matt@220-245-1-152.static.tpgi.com.au) Quit (Ping timeout: 480 seconds)
[17:26] <alphe> do I have to go through the keyring importation process ?
[17:26] * tobru_ (~quassel@213.55.184.138) Quit (Ping timeout: 480 seconds)
[17:26] <ChoppingBrocoli> Yea, that is why I am giving serious thought to what cloud I roll out. I have made a 1 node instance of openstack with ceph and it worked nicely, how hard is it to do multiple nodes?
[17:26] <wrale> (but i appreciate their presence, for entities needing such support)
[17:28] <wrale> ChoppingBrocoli: my impression of the industry is that it's centering on openstack.. i don't know if cloudstack has as many features, or is as complicated (as a tradeoff).. i don't know if quantum is in cloudstack, but it's a wow factor, if you are looking for network offloading to hardware
[17:29] <wrale> (quantum is in openstack)
[17:29] <ChoppingBrocoli> Yea that is what I hear. Do you have openstack running?
[17:30] <wrale> and i say this after banging my head for months on my desk trying to get openstack folsom running.. i do have folsom running now, but it's a vertical scale single node (64 cores + 512GB ram).. i used the rackspace alamo installer, because my timeline ran out ( i wouldn't choose that again)
[17:30] * berant_ (~blemmenes@gw01.ussignalcom.com) has joined #ceph
[17:31] <wrale> i'm excited to see red hat take a stab at openstack, though they seem to be off their good game at this point
[17:31] * artwork_lv (~artwork_l@adsl.office.mediapeers.com) Quit (Quit: artwork_lv)
[17:31] <wrale> ansible seems to be building recipes for it
[17:31] <wrale> overall, though.. i like openstack.. planning is key, though.. people complain to me about storage i/o
[17:32] <wrale> i've got cinder pointing to a 8-disk hardware raid 10 sata 7.2k array.. only like four users, and writes are cached out of the vm to hypervisor memory
[17:32] <ChoppingBrocoli> Interesting, is it true that all network traffic destined for the client network streams through 1 physical box? I am afraid that if I lose 1 box I will lose the whole cluster
[17:32] * sleinen (~Adium@2001:620:0:2d:45a5:c187:68a7:ed00) Quit (Ping timeout: 480 seconds)
[17:32] <wrale> tough sell
[17:33] * thorus (~jonas@212.114.160.100) has left #ceph
[17:33] <wrale> things seem to be changing all the time with that stuff.. i believe nova-network allows load balancing h/a
[17:33] <wrale> but quantum complicated matters.. i don't know if they've cleared it up yet
[17:34] <wrale> you can still choose nova-network, if you want
[17:34] <wrale> if you don't care about losing per-tenant private networks and the like
[17:34] <ChoppingBrocoli> Oh I thought that went away with grizzly?
[17:34] <wrale> oh.. maybe so.. eek
[17:34] <wrale> i read that it would stick around
[17:34] * scalability-junk (uid6422@id-6422.ealing.irccloud.com) has joined #ceph
[17:34] <wrale> grain of salt
[17:35] <wrale> i'm running away from classical IaaS, if i can manage to
[17:35] <wrale> lol
[17:35] <ChoppingBrocoli> I hope so, I would rather have each compute node managing its own traffic...otherwise think of the throughput requirements for the network controller
[17:36] * berant (~blemmenes@gw01.ussignalcom.com) Quit (Ping timeout: 480 seconds)
[17:36] * berant_ is now known as berant
[17:36] <wrale> i use ovirt, too.. that's good for long running power/disk/mem draw vm's
[17:36] <wrale> they can reboot a vm on a different host if a hypervisor fails, if things are on shared storage
[17:37] <wrale> openstack doesn't automate that, yet
[17:37] <wrale> (i think)
[17:37] <ChoppingBrocoli> Yea but storage is gluster or NFS..gluster is SO slow
[17:37] <wrale> yeah.. i hear that
[17:37] <wrale> it's somewhat refreshing to think ahead of industry, sometimes, though :)
[17:37] <ChoppingBrocoli> My hardware that is running ceph now was running gluster and I was getting about 1/10th the speed and there was no self healing
[17:38] <wrale> ouch.. that's good to know
[17:38] <wrale> technically, you could do a classical SAN with ovirt .. iscsi and the like
[17:39] <wrale> (could probably pull an ocfs2 or something, i suppose)
[17:39] <wrale> shrug
[17:39] <ChoppingBrocoli> I am not a fan of ocfs2's locking, really slows things down. That is why when I found out about ceph it was like finding gold
[17:39] <wrale> i want to eventually break cephfs to find out where people are having such problems
[17:40] <wrale> that's my holy grail right now
[17:40] <ChoppingBrocoli> Right now I have 12 hosts each running standard KVM with storage through RBD. It gets the job done but I am tired of remoting into 13 diff servers
[17:40] <wrale> brb
[17:40] <ChoppingBrocoli> 12*
[17:41] * JustEra (~JustEra@89.234.148.11) Quit (Read error: Operation timed out)
[17:43] <wrale> ChoppingBrocoli: if it's single tenant, you might look at something like salt-cloud
[17:43] <wrale> i'm not sure if it's mature yet
[17:43] <berant> wrale: how come you did RAID 10 with an OSD on top vs. 8 OSDs? From what I've seen in testing (and anecdotally heard) it won't perform as well (and you still have to deal with large RAID rebuild times)
[17:44] <wrale> berant: there's not ceph in my openstack at this time.. only cinder.. it sucks..lol
[17:44] <wrale> it's lvm
[17:44] <berant> gotcha
[17:44] <ChoppingBrocoli> how much maint do you go through with your openstack deployment?
[17:44] <wrale> what's worse is glance writes snapshots to os disk.. it's terrible
[17:45] <wrale> ChoppingBrocoli: very little, really, so long as you give files a place to fall
[17:45] <wrale> (or rotate them) (logs)
[17:45] <wrale> i don't like having no visibility into the lvm though.. that really bothers me
[17:45] <wrale> (from the web interface, specifically)
[17:45] <ChoppingBrocoli> Do you manage most of the time from the web or from cmd?
[17:46] * tnt (~tnt@212-166-48-236.win.be) Quit (Ping timeout: 480 seconds)
[17:47] * sleinen (~Adium@2001:620:0:2d:2486:b96a:c1cc:8b0b) has joined #ceph
[17:47] <wrale> ChoppingBrocoli: yeah.. i actually do about 50% for both.. i like to boot to cinder volume, so my vm isn't a 10GB disk nightmare.. in folsom, there's no way to do that from the gui
[17:50] <ChoppingBrocoli> And if you had to do it over again, you would still choose this over standard KVM?
[17:50] <ChoppingBrocoli> Using Virt-Manager and virsh*
[17:50] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[17:50] <wrale> ahh.. brilliant question.. i would choose virsh
[17:51] * xarses (~andreww@c-71-202-167-197.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:51] <wrale> of course, i'm only one hypervisor in
[17:51] <wrale> with many, i might change my mind...
[17:51] <wrale> however, now that i'm more in the devops mentality, i think ansible could help me manage
[17:51] <wrale> and even that is only because i use it for the most part as a single tenant system
[17:52] <wrale> multiple tenants wouldn't work without something like openstack, i think
[17:52] <ChoppingBrocoli> Yea I am starting to think openstack has come a long way...but still is too much for a 1 man IT and like you said not for single tenant
[17:52] <wrale> without openstack or the like, you also don't get to deploy stacks to fancy api's with things like cloudify
[17:53] * ggreg (~ggreg@int.0x80.net) Quit (Ping timeout: 480 seconds)
[17:53] <wrale> depends on what you're after.. i like my ovirt for my infrastructure, but i'm fairly certain a active active virsh cluster would work better and be less temperamental
[17:54] <ChoppingBrocoli> Yea I think I have found my answer for now, thanks!
[17:54] <wrale> sure thing.. glad to help..
[17:54] <wrale> thanks for the info about ceph and performance :)
[17:55] * tnt (~tnt@109.130.110.3) has joined #ceph
[17:55] <wrale> off to do some work
[17:56] * yehudasa_ (~yehudasa@mfb0536d0.tmodns.net) has joined #ceph
[17:57] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Read error: Operation timed out)
[17:58] * BadaBoum (~Bada@195.65.225.142) Quit (Ping timeout: 480 seconds)
[18:00] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) has joined #ceph
[18:00] * indego (~indego@91.232.88.10) Quit (Quit: Leaving)
[18:04] * grepory (~Adium@50-115-70-146.static-ip.telepacific.net) Quit (Quit: Leaving.)
[18:04] * devoid (~devoid@130.202.135.225) has joined #ceph
[18:05] * haomaiwa_ (~haomaiwan@183.220.20.213) has joined #ceph
[18:06] * haomaiwa_ (~haomaiwan@183.220.20.213) Quit (Remote host closed the connection)
[18:06] * haomaiwa_ (~haomaiwan@183.220.20.213) has joined #ceph
[18:06] * ggreg (~ggreg@int.0x80.net) has joined #ceph
[18:06] * foosinn (~stefan@office.unitedcolo.de) Quit (Quit: Leaving)
[18:09] <alphe> when I update radosgw from 61.8 to 67.2 I get a rgw port /rgw socket path error
[18:09] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[18:09] <alphe> I saw my /etc/ceph/ceph.conf and there is a rgw socket path defined there
[18:09] * xarses (~andreww@204.11.231.50.static.etheric.net) has joined #ceph
[18:11] <alphe> ... I think I will reset my ceph cluster and use ceph-fuse /samba, it is less of a pain to deal with ....
[18:12] * haomaiwang (~haomaiwan@112.193.130.53) Quit (Ping timeout: 480 seconds)
[18:13] * thanasisk (~akostopou@p5DDB8F77.dip0.t-ipconnect.de) Quit (Quit: Leaving)
[18:13] * tziOm (~bjornar@ti0099a340-dhcp0395.bb.online.no) has joined #ceph
[18:14] * janisg (~troll@85.254.50.23) Quit (Ping timeout: 480 seconds)
[18:16] * yehudasa_ (~yehudasa@mfb0536d0.tmodns.net) Quit (Ping timeout: 480 seconds)
[18:17] * haomaiwang (~haomaiwan@119.6.71.165) has joined #ceph
[18:17] <alphe> ok I found a regionmap update so now I don't have the regionmap prob anymore
[18:17] * sleinen1 (~Adium@2001:620:0:25:49da:4c95:f5e4:5c66) has joined #ceph
[18:18] <alphe> now I have a prob with empty user ...
[18:19] <alphe> if I try to create a new user "unable to store user info"
[18:21] * haomaiwang (~haomaiwan@119.6.71.165) Quit (Read error: Connection reset by peer)
[18:21] * haomaiwa_ (~haomaiwan@183.220.20.213) Quit (Ping timeout: 480 seconds)
[18:21] * sleinen2 (~Adium@2001:620:0:2d:45fe:c339:8317:37c0) has joined #ceph
[18:22] * sleinen (~Adium@2001:620:0:2d:2486:b96a:c1cc:8b0b) Quit (Ping timeout: 480 seconds)
[18:22] * sleinen (~Adium@2001:620:0:2d:6554:b9e7:15dc:bb72) has joined #ceph
[18:24] * yehudasa_ (~yehudasa@2602:306:330b:1410:ea03:9aff:fe98:e8ff) has joined #ceph
[18:24] * codice (~toodles@75-140-71-24.dhcp.lnbh.ca.charter.com) Quit (Quit: leaving)
[18:24] * sjusthm (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[18:25] * mschiff (~mschiff@p4FD7E1A0.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[18:25] * haomaiwang (~haomaiwan@183.220.22.254) has joined #ceph
[18:27] * sleinen1 (~Adium@2001:620:0:25:49da:4c95:f5e4:5c66) Quit (Ping timeout: 480 seconds)
[18:29] * sleinen2 (~Adium@2001:620:0:2d:45fe:c339:8317:37c0) Quit (Ping timeout: 480 seconds)
[18:32] * sleinen1 (~Adium@2001:620:0:25:d9e3:e08b:6aab:a8a7) has joined #ceph
[18:32] * hybrid5121 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Quit: Leaving.)
[18:32] * sleinen1 (~Adium@2001:620:0:25:d9e3:e08b:6aab:a8a7) Quit ()
[18:32] * haomaiwa_ (~haomaiwan@119.6.71.165) has joined #ceph
[18:33] <mtl1> Hi. I'm trying to rbd map a single 5TB rbd device. I hope the 5TB device isn't the problem. When I try to do the map I get this show up a few times: libceph: mon0 10.64.0.3:6789 feature set mismatch, my 40002 < server's 2040002, missing 2000000…. and then "rbd: add failed (5) Input/output error"
[18:33] * haomaiwa_ (~haomaiwan@119.6.71.165) Quit (Read error: Connection reset by peer)
[18:35] * tryggvil (~tryggvil@217.28.181.130) Quit (Quit: tryggvil)
[18:37] * devoid (~devoid@130.202.135.225) Quit (Quit: Leaving.)
[18:37] * devoid (~devoid@130.202.135.225) has joined #ceph
[18:37] * devoid (~devoid@130.202.135.225) Quit ()
[18:38] * devoid (~devoid@130.202.135.225) has joined #ceph
[18:38] * sleinen (~Adium@2001:620:0:2d:6554:b9e7:15dc:bb72) Quit (Ping timeout: 480 seconds)
[18:39] * haomaiwang (~haomaiwan@183.220.22.254) Quit (Ping timeout: 480 seconds)
[18:39] * sagelap (~sage@38.122.20.226) has joined #ceph
[18:41] * devoid (~devoid@130.202.135.225) has left #ceph
[18:46] * zynzel (zynzel@spof.pl) Quit (Read error: Connection reset by peer)
[18:51] * ishkabob (~c7a82cc0@webuser.thegrebs.com) has joined #ceph
[18:52] <ishkabob> hey guys, is there any way that i can tell what host has a particular rbd mounted?
[18:54] * buck (~buck@bender.soe.ucsc.edu) has joined #ceph
[18:57] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[18:58] * raso (~raso@deb-multimedia.org) Quit (Ping timeout: 480 seconds)
[18:59] * loicd (~loicd@bouncer.dachary.org) Quit (Remote host closed the connection)
[18:59] * sleinen1 (~Adium@2001:620:0:25:a05a:b9cf:ab9f:e9a9) has joined #ceph
[19:01] <Gugge-47527> ishkabob: with rados listwatchers
[19:01] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Remote host closed the connection)
[19:01] * aliguori (~anthony@cpe-70-112-153-179.austin.res.rr.com) has joined #ceph
[19:01] * ssejour1 (~sebastien@out-chantepie.fr.clara.net) Quit (Quit: Leaving.)
[19:03] * Tamil1 (~Adium@cpe-108-184-67-79.socal.res.rr.com) has joined #ceph
[19:04] <ishkabob> Gugge-47527: thanks. is the block_name_prefix: output from rbd the object that i want to look at?
[19:05] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[19:05] <Gugge-47527> no, something like rbdname.rbd if i remember correctly
[19:05] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:07] * sleinen (~Adium@2001:620:0:25:1565:c679:8cd0:bd12) has joined #ceph
[19:07] * alram (~alram@cpe-76-167-50-51.socal.res.rr.com) has joined #ceph
[19:10] <ishkabob> Gugge-47527: according to this (http://www.spinics.net/lists/ceph-users/msg03674.html), you need to get the rbd_header object out of rbd info
[19:10] <ishkabob> but that doesn't seem to work for me
[19:11] <alphe> why after updating radosgw the command radosgw-admin user info --uid=myuser returns me that there is no user
[19:11] <Gugge-47527> i dont see any header object listed in rbd info
[19:12] * sleinen1 (~Adium@2001:620:0:25:a05a:b9cf:ab9f:e9a9) Quit (Ping timeout: 480 seconds)
[19:12] <ishkabob> yeah, when i run rbd info on this guy, i get this
[19:12] <ishkabob> block_name_prefix: rb.0.1103.2ae8944a
[19:13] * YD (YD@a.clients.kiwiirc.com) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[19:13] * YD (YD@d.clients.kiwiirc.com) has joined #ceph
[19:13] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[19:14] * rturk-away is now known as rturk
[19:15] <Gugge-47527> ishkabob: im pretty sure the prefix has nothing to do with the header object
[19:15] <Gugge-47527> which is why i said i think i remember it beeing rbdname.rbd
[19:16] * raso (~raso@deb-multimedia.org) has joined #ceph
[19:18] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[19:19] <ishkabob> ah cool, i thought that didn't work, but in fact, it did
[19:19] <ishkabob> now i just need to convert "watcher=client.5554" into an IP :)
[19:20] * sleinen (~Adium@2001:620:0:25:1565:c679:8cd0:bd12) Quit (Quit: Leaving.)
[19:20] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[19:20] <Gugge-47527> upgrade to dumpling :)
[19:21] <ishkabob> heh, yup, that would do it :)
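Putting the exchange above together, a sketch of the lookup (pool and image names are placeholders; format 1 images keep their header in an object named <imagename>.rbd, while format 2 images use an rbd_header.<id> object instead):

    rados -p rbd listwatchers myimage.rbd
    # pre-dumpling this prints something like "watcher=client.5554 cookie=1";
    # on dumpling the watcher's address is shown as well, which identifies the host directly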
[19:21] <alphe> upgraded to dumpling and my radosgw stopped working ...
[19:22] <alphe> I have to format the drives reinstall all
[19:23] * YD (YD@d.clients.kiwiirc.com) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[19:23] * YD (YD@a.clients.kiwiirc.com) has joined #ceph
[19:28] * YD (YD@a.clients.kiwiirc.com) Quit ()
[19:28] * YD (YD@a.clients.kiwiirc.com) has joined #ceph
[19:28] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:29] <Tamil1> alphe: did you restart radosgw after upgrading?
[19:29] <alphe> yes
[19:29] * mschiff (~mschiff@85.182.236.82) has joined #ceph
[19:29] <alphe> I even updated the region map
[19:30] <alphe> it kept saying that there was no user info on the radosgw user info command
[19:31] <ishkabob> Gugge-47527: thanks for the help again!
[19:31] <alphe> well anyway the s3 solution was not satisfactory since there is no decent client for windows and since the configuration is so complicated ..
[19:32] * raso (~raso@deb-multimedia.org) Quit (Ping timeout: 480 seconds)
[19:32] <alphe> I mean, 2 weeks of work to get the s3 rados/ssl/100-continue setup working, and then comes the very next update and bang, everything has to be restarted from scratch
[19:33] <alphe> what is the best way to clean my hard drives in order to reinstall a brand new ceph-cluster?
[19:33] <alphe> i did ceph-deploy purgedata/purge/uninstall on all nodes
[19:34] <alphe> when I reinstall I need to create a new cluster then do a ceph-deploy disk prepare on all my disks nodes ?
[19:34] <alphe> or do I have to fdisk and remove the partitions ?
[19:34] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[19:35] * gaveen (~gaveen@175.157.181.209) Quit (Remote host closed the connection)
[19:37] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit ()
[19:39] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[19:40] * ishkabob (~c7a82cc0@webuser.thegrebs.com) Quit (Quit: TheGrebs.com CGI:IRC)
[19:41] * dpippenger (~riven@tenant.pas.idealab.com) has joined #ceph
[19:44] * YD (YD@a.clients.kiwiirc.com) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[19:44] * YD (YD@a.clients.kiwiirc.com) has joined #ceph
[19:55] * carif (~mcarifio@cpe-74-78-54-137.maine.res.rr.com) has joined #ceph
[19:57] * jlu (~chatzilla@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) has joined #ceph
[19:59] <jlu> hello - i am a newbie to Ceph. would someone out there please point me to a doc for redhat? i couldn't find it on the main ceph site.
[19:59] <jlu> sorry..for installation doc
[20:00] <dmick> http://ceph.com/docs/master/install/rpm/
[20:02] <jlu> i saw this page, i thought i have to perform "ceph-deploy" at the beginning
[20:02] <jlu> thanks
[20:02] <jlu> so, i don't need "ceph-deploy". I'd like to set up a 5 node cluster
[20:03] * jluis (~JL@89.181.146.94) has joined #ceph
[20:03] * ChanServ sets mode +o jluis
[20:03] <jlu> i am following this page: http://ceph.com/docs/master/start/quick-ceph-deploy/#install-ceph
[20:03] <xarses> you will also need to install redhat-lsb-core on your ceph nodes
[20:03] <dmick> jlu: you can install with ceph-deploy, yes. that would be different
[20:05] <dmick> ceph-deploy isolates you from caring about deb or rpm, Ubuntu or Redhat, or is supposed to
[20:06] <dmick> xarses: ceph-deploy is supposed to be fixed not to require lsb, although it's better
[20:07] * ccourtaut (~ccourtaut@2001:41d0:1:eed3::1) Quit (Ping timeout: 480 seconds)
[20:09] <jlu> i do have latest lsb-core installed on my nodes
[20:10] <jlu> dmick: can I follow the link you provided earlier with the RPM install instead of ceph-deploy?
[20:12] <sjustlaptop> sagewk, gregaf1: seems to me that read_tier/write_tier is less specific than read_cache_pool/write_cache_pool
[20:12] <sjustlaptop> and cache_mode
[20:12] <gregaf1> haha
[20:12] <sjustlaptop> I know, it started that way
[20:12] <sjustlaptop> how do you distinguish between A is a cache pool for B and B is a lower tier of A?
[20:13] <dmick> jlu: use ceph-deploy; it's easier
[20:13] <gregaf1> sjustlaptop: not a problem for me but sagewk asked for it, so let's hear from him?
[20:14] <jlu> dmick: i would love to but the instructions are for ubuntu or debian. i will give it a try
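A minimal sketch of that path on RHEL/CentOS nodes (hostnames are placeholders):

    yum install redhat-lsb-core                         # on each node, per xarses' note above
    ceph-deploy install node1 node2 node3 node4 node5   # from the admin box; uses yum/rpm under the hood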
[20:15] <sjustlaptop> gregaf1, sagewk: or is "B is a lower tier of A" something which doesn't happen in the OSDMap?
[20:15] <sagewk> in uds session, done in a bit
[20:15] <sjusthm> k
[20:16] <gregaf1> sjustlaptop: responded to your github comments
[20:16] <xarses> dmick: still needed for cuttlefish repos
[20:16] <gregaf1> at present we would specify B.tier_of = A and A.tiers includes B
[20:17] * jluis (~JL@89.181.146.94) Quit (Quit: Leaving)
[20:17] <dmick> xarses: ceph-deploy is not tied to cuttlefish, releases indepedently
[20:17] <dmick> unless there's something I'm missing, that ought to be fixed
[20:17] <gregaf1> but the other members would be unused and the tiering mechanics would happen with the not-yet-defined (first thing I do today, probably) redirect response from OSD to client
[20:17] * zhyan__ (~zhyan@101.83.125.91) has joined #ceph
[20:17] <gregaf1> when the client tries to access something in pool A that has been tiered into pool B
[20:20] <sjusthm> would A or B have a cache_mode/
[20:20] <sjusthm> ?
[20:21] <xarses> dmick: I'll try again, but the current version threw errors at me last time.
[20:22] <xarses> i just figured it wasn't backwards compat
[20:22] <gregaf1> sjusthm: we'd only have cache_mode for caching pools, so whichever one isn't root
[20:22] <gregaf1> but with your question I thought you were talking about erasure-coded tiering rather than cache tiering
[20:22] <sjusthm> didn't realize there was a sharp distinction
[20:22] <sjusthm> wouldn't EC tiering just have restricted modes?
[20:22] <sjusthm> like "promote on write"
[20:22] <sjusthm> or something
[20:23] <dmick> xarses: if you could that'd be great; we'll file a bug if so
[20:23] * zhyan_ (~zhyan@101.84.115.105) Quit (Read error: Operation timed out)
[20:24] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[20:25] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[20:26] <wrencsok> This use case may not make sense, but I made discovery on a bobtail (56.6) and dumpling (67.2) cluster using a 2G win2k8 qemu vm mapping a 150G rbd volume on the ceph clusters. Both clusters had a minor load on them and load averages around 4 to 5. Using iometer on the win2k8 guest and setting outstanding io's to 40 or 50, then running the default iometer test with 2 workers against the mounted rbd volume will kill my clusters. It somehow
[20:26] <xarses> dmick: I also use a closed pakage env, will that be a problem with the current ceph-deploy?
[20:26] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[20:26] <gregaf1> sjusthm: it's more like the "tiers" member specifies pools that are somehow related to or subordinate to this pool, and that tier_of means that this pool is a subordinate of the specified pool
[20:26] <sagewk> back
[20:26] <sjusthm> I suppose that seems a bit fuzzy to me
[20:26] <sjusthm> the goal is to try to unify the cacher/cachee and tier_of/tier_for relationships?
[20:27] <gregaf1> the read, write, and cache_mode members are only for caching, and would mean "proactively redirect reads/writes to this pool" and "OSDs hosting this pool, this is your caching behavior" respectively
[20:27] <sjusthm> I see
[20:27] <joshd> mtl1: your kernel doesn't support crush tunables v2 (http://ceph.com/docs/master/rados/operations/crush-map/#which-client-versions-support-crush-tunables2)
[20:27] <dmick> xarses: don't know what that means
[20:27] <sjusthm> for A is a read-only cache for B
[20:27] <sjusthm> would be A has tier_of = B
[20:27] <sagewk> the redirects (for cold/EC tier) will just redirect into a copy in one of the tiers; i don't htink any other pg_pool_t stuff will be needed to describe client behavior
[20:27] <sjusthm> B has read_tier = A, tiers={A}
[20:27] <sagewk> sjusthm: yeah
[20:28] <sjusthm> A has cache_mode = READONLY
[20:28] <sjusthm> ?
[20:28] <gregaf1> so clients care about the read and write members; OSDs care about the cache_mode (if specified) and might in the future care about the others; monitors use the tier_of and tiers members to verify that users are making valid configuration choices
[20:28] <sagewk> i think the main remaining unknown here is where the policy bits are described (when does the osd promote, etc.); that will be added as it gets better defined i think
[20:28] <sagewk> cache_mode captures the caching case at least
[20:28] <sjusthm> yeah
[20:29] <sjusthm> multiple read_tiers might be a thing though (multiple read-only cache pools)
[20:29] <sjusthm> or not
[20:29] <sagewk> yeah
[20:29] <mtl1> joshd: thanks, I'll try to get to updating that in a bit.
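If updating the client kernel is not an option right away, a sketch of the fallback described on that page (note that changing tunables triggers some data movement):

    ceph osd crush tunables legacy     # revert to pre-tunables2 values that old kernel clients understand
    # later, once all clients are new enough:
    ceph osd crush tunables optimal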
[20:29] * sprachgenerator (~sprachgen@130.202.135.204) Quit (Quit: sprachgenerator)
[20:30] <sjusthm> can tiers ever be anything other than {read_tier, write_tier}?
[20:30] <sagewk> that's what that customer wants. at that point we'll need to figure out how to describe the policy (e.g, use the crush hierarchy to choose the closest read tier)
[20:30] <sagewk> yes; the ec would be neither
[20:30] <sagewk> read/write_tier just tell the objecter where to go first
[20:30] <sjusthm> ok
[20:30] <sjusthm> makes sense
[20:30] <sagewk> could start_tier_{read,write} maybe
[20:31] <sjusthm> in the tiering case, wouldn't the fast pool be the "main" pool and the slow pool be the tier?
[20:31] * sprachgenerator (~sprachgen@130.202.135.204) has joined #ceph
[20:31] <sjusthm> seems like read_tier/write_tier would always be caches
[20:31] <sagewk> yeah
[20:32] <sjusthm> how would the tiers set be used?
[20:33] <sjusthm> it seems like the only part of tiering which would need to be in the OSDMap would be the policy information, everything else would be captured in redirect objects?
[20:36] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[20:37] <sagewk> sjusthm: right now it's just used for sanity checking on the cli. semantically i don't think it's necessary yet, but i like the idea of enforcing the relationship (i.e., not making a single pool a cache for 2 others)
[20:37] <sjusthm> ok
[20:39] <sjusthm> seems like read_tier/write_tier should be write_cache/read_cache unless they can ever refer to a pool without cache_mode set
[20:39] <sjusthm> looks good otherwise
[20:39] <sjusthm> and I don't feel strongly about that one
[20:40] * Tamil1 (~Adium@cpe-108-184-67-79.socal.res.rr.com) Quit (Quit: Leaving.)
[20:40] <sagewk> caching is what we're talking about now, but i'd like to keep it more general if we can in other possibilities comeup
[20:40] <sagewk> *if
[20:42] <sjusthm> ok
[20:44] * raso (~raso@deb-multimedia.org) has joined #ceph
[20:50] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[20:52] * sprachgenerator (~sprachgen@130.202.135.204) Quit (Quit: sprachgenerator)
[20:58] * Tamil1 (~Adium@cpe-108-184-67-79.socal.res.rr.com) has joined #ceph
[20:59] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[20:59] <xarses> dmick: we only use packages that are added to an internal repo after a version has been approved. I need ceph-deploy to not fetch from the Internet if the package manager can provide the package using the pre-configured repos. It looks like ceph-deploy (install) wgets everything from the ceph.com sites
[20:59] * devoid (~devoid@130.202.135.225) has joined #ceph
[20:59] * devoid (~devoid@130.202.135.225) has left #ceph
[21:00] <alfredodeza> xarses: have you seen the proxy config in ceph-deploy docs?
[21:00] <alfredodeza> that addresses this problem
[21:00] <alfredodeza> ceph-deploy docs?
[21:00] <kraken> https://github.com/ceph/ceph-deploy#ceph-deploy----deploy-ceph-with-minimal-infrastructure
[21:00] <alfredodeza> xarses: ^ ^
[21:01] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[21:05] <xarses> alfredodeza: I don't think it does, I must use the package manager repo if it can satisfy the deps, in some cases the nodes won't even have internet access.
[21:06] <alfredodeza> you mean, wget is not the problem, but the actual repos?
[21:06] <alfredodeza> and by repos I mean the repos location
[21:08] <xarses> wget is part of the problem, the packages must come from the local repository, and it should be using the native package manager for the distro
[21:08] <alfredodeza> ceph-deploy does use the native package manager for the distro
[21:09] <alfredodeza> when you say 'local repository' you mean *your* repository ?
[21:09] <xarses> yes
[21:13] <xarses> I'm guessing that i dont understand enough of what ceph-deploy install does
[21:13] <xarses> i thought it was wget'ing the packages
[21:13] <alfredodeza> it does not
[21:13] <alfredodeza> at least, not the packages
[21:13] <alfredodeza> it uses the distros' native package tools to install (yum, apt, rpm)
[21:14] <alfredodeza> it does use wget, but to fetch the repo keys
[21:14] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[21:15] <xarses> can /is that bypassed if the packages are in my repo?
[21:16] * sleinen1 (~Adium@2001:620:0:25:60b5:2380:9cf:3b53) has joined #ceph
[21:16] <alfredodeza> interesting question
[21:16] <alfredodeza> I don't think so, but let me double check
[21:18] <alfredodeza> xarses: what distro are you using?
[21:18] * allsystemsarego (~allsystem@5-12-37-127.residential.rdsnet.ro) has joined #ceph
[21:18] <xarses> centos/rhel
[21:19] <xarses> ubunto eventually
[21:19] <alfredodeza> it will first import the key from: 'su -c \'rpm --import "https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/{key}.asc"
[21:19] <xarses> ubuntu even
[21:19] <alfredodeza> ah
[21:19] <alfredodeza> well, similarly
[21:20] <alfredodeza> let me share a link to the actual process it follows
[21:20] <alfredodeza> I recently fixed a bit that and should be readable
[21:20] <alfredodeza> xarses: https://github.com/ceph/ceph-deploy/blob/master/ceph_deploy/hosts/debian/install.py#L6
[21:21] <alfredodeza> so I guess the thing here would be to allow a flag to specify the repo
[21:22] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[21:22] <xarses> we would already have the key installed for the repo
[21:22] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[21:23] <alfredodeza> ok
[21:24] <alfredodeza> so, something like `ceph-deploy install --local {node}` would work?
[21:24] <alfredodeza> where
[21:24] <alfredodeza> where `--local` means something where we would not fetch keys or alter sources.list ?
[21:25] <xarses> yes, that would be great, also expecting it to die if it can't complete its work
[21:25] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[21:26] <alfredodeza> or maybe a better flag name like `--install-pkgs-only`
[21:26] <alfredodeza> xarses: it currently does, doesn't it?
[21:30] * sleinen1 (~Adium@2001:620:0:25:60b5:2380:9cf:3b53) Quit (Quit: Leaving.)
[21:30] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[21:31] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[21:32] <xarses> yes, I think it raises an exception
[21:33] * athrift (~nz_monkey@203.86.205.13) Quit (Remote host closed the connection)
[21:35] * LPG (~LPG@c-76-104-197-224.hsd1.wa.comcast.net) Quit (Remote host closed the connection)
[21:37] <xarses> either way a flag would be tremendously helpful.
[21:37] <alfredodeza> an exception?
[21:37] * jskinner (~jskinner@199.127.136.233) has joined #ceph
[21:37] <alfredodeza> I am pretty sure ceph-deploy is handling all remote issues
[21:37] <alfredodeza> are you using the latest version?
[21:38] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[21:39] <xarses> no
[21:39] <xarses> 1.0.0
[21:39] <alfredodeza> aha!
[21:39] <xarses> i need to re-try 1.2.2
[21:39] <alfredodeza> you should!
[21:39] <alfredodeza> there is a *massive* amount of bugs that have been fixed
[21:39] <xarses> it threw an exception the first time i tried it so i didn't think it was compatible with cuttlefish
[21:39] * ssejour (~sebastien@lif35-1-78-232-187-11.fbx.proxad.net) has joined #ceph
[21:40] <xarses> my perception of the old version being placed with the cuttlefish repo was that it was the highest version that is supposed to work with cuttlefish
[21:40] * bandrus (~Adium@12.248.40.138) has joined #ceph
[21:40] <alfredodeza> issue 6160
[21:40] <kraken> alfredodeza might be talking about: http://tracker.ceph.com/issues/6160 [allow installation of packages only]
[21:40] <alfredodeza> xarses: ^ ^
[21:40] <xarses> ty
[21:41] <alfredodeza> xarses: oh not at all. ceph-deploy has a separate release cycle because of this
[21:41] <alfredodeza> we are concentrating on improving it *as much as we can* so we need to release as often as possible
[21:41] * Cube (~Cube@12.248.40.138) has joined #ceph
[21:41] <alfredodeza> in the past 4 weeks there have been about 30 bug fixes
[21:41] <alfredodeza> which you are missing out :)
[21:42] <xarses> yes, like i said, because there's an old version in the cuttlefish repo, i thought it was the only version i was supposed to use
[21:43] * sel (~sel@212.62.233.233) has joined #ceph
[21:45] <sel> I'm working on a ceph cluster to be a backend for an nfs server. Things are going quite well, but I've got a question about snapshots: is there a way to make the snapshot consistent with regard to the filesystem (I believe lvm can do filesystem-consistent snapshots)?
[21:46] <sagewk> alfredodeza: fwiw jamespage has a patch that does something similar for the ubuntu version of ceph-deploy so that it uses the distro packages.. i think it just skips the apt sources step(s)
[21:46] <alfredodeza> is that patch somewhere?
[21:46] <sagewk> alfredodeza: maybe --use-default-sources or --use-distro-sources would make more sense
[21:47] <sagewk> probably but it's not suitable as-is because it does that by default (for ubuntu), and we want ceph.com by default
[21:47] <sagewk> http://packages.ubuntu.com/saucy/ceph-deploy
[21:48] <sagewk> i think the patch lives in teh debian.tar.gz linked on the right?
[21:49] <joshd> sel: not automatically yet for kernel rbd, but you can do it manually with xfs_freeze/snapshot/xfs_thaw
[21:49] * alfredodeza looks
[21:50] <sel> Ok, so I need to use xfs, what about ext4, any similar feature there?
[21:50] <joshd> sel: it's actually a vfs feature, the binary to use it is just named after xfs for historical reasons
[21:51] <xarses> on a separate topic, is there a method for ceph-deploy to add monitors to the cluster? We are using puppet in some overly complicated way. The gist is that we won't know exactly which nodes will form the monitors until puppet evaluates them. I do know that the first node evaluated will always have a mon installed.
[21:51] <joshd> sel: so it'll work for any fs
[21:51] <sel> Ok, thanks, that'll solve my problem I think
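A minimal sketch of the manual sequence joshd describes, assuming the rbd-backed filesystem is mounted at /srv/nfs and the image is rbd/nfs-data (all placeholders):

    xfs_freeze -f /srv/nfs                   # flush and block writes; works on ext4 too since it's a VFS hook
    rbd snap create rbd/nfs-data@nightly
    xfs_freeze -u /srv/nfs                   # thaw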
[21:55] * allsystemsarego (~allsystem@5-12-37-127.residential.rdsnet.ro) Quit (Quit: Leaving)
[21:56] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has left #ceph
[22:00] <dmsimard> Is there some documentation around how to tweak the amount of placement groups/pools to use for optimal performance ?
[22:00] * rturk is now known as rturk-away
[22:00] <alfredodeza> sagewk: just gave that tar.gz a look and I see everything (for installation) exactly the same :/
[22:01] <sagewk> dmsimard: http://ceph.com/docs/master/rados/operations/placement-groups/
[22:01] <sagewk> alfredodeza: http://fpaste.org/35890/13778064/
[22:01] <sagewk> is the patch
[22:02] * ssejour1 (~sebastien@lif35-1-78-232-187-11.fbx.proxad.net) has joined #ceph
[22:02] <dmsimard> sagewk: Yeah, I came across that document too. That's a good rule of thumb to use ?
[22:02] <sagewk> (was in debian/patches)
[22:02] * alfredodeza looks
[22:02] <alfredodeza> ah, no wonder I did not see any difference
[22:02] <dmsimard> sagewk: Are there different calculations for various usage such as block storage, object storage or filesystem ?
[22:02] <alphe> i like the verbose new ceph-deploy
[22:03] <alphe> :)
[22:03] <sagewk> dunno if that's the right approach, but whatever you do will hopefully capture this use-case too (modulo the ubuntu package tweaking the default behavior)
[22:03] <alfredodeza> alphe: \o/
[22:03] <alfredodeza> high five
[22:03] <kraken> \o
[22:03] <sagewk> dmsimard: not really; same rules of thumb apply
[22:03] <alphe> exactly !
[22:03] <sagewk> * 30 or whatever it says, i forget exactly
[22:03] <alfredodeza> we are working really hard to make it even better :)
[22:03] <dmsimard> sagewk: Thanks.
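As a quick worked example of the rule of thumb on that page, treating the commonly cited ~100 PGs per OSD divided by the replica count (rounded up to a power of two) as the assumption here:

    # 20 OSDs, 3 replicas:  (20 * 100) / 3 = ~667  ->  round up to 1024
    ceph osd pool create volumes 1024 1024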
[22:04] <alphe> can someone tell me why my ceph.admin.client.keyring is nowhere to be found after creating the monitors ?
[22:04] <alphe> my guess is that I didn't clean the previous ceph install enough ...
[22:04] <sagewk> maybe; ceph daemon mon.`hostname` mon_status
[22:06] <sel> On the topic of ceph-deploy, it would be very nice to be able to set the osd id for each osd created...
[22:07] * Tamil1 (~Adium@cpe-108-184-67-79.socal.res.rr.com) Quit (Quit: Leaving.)
[22:07] * ssejour (~sebastien@lif35-1-78-232-187-11.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[22:07] <alphe> when I do the ceph-deploy new, does the node list have to be all my ceph cluster's nodes ?
[22:08] <alphe> or only the nodes that will receive a monitor ?
[22:08] * Tamil1 (~Adium@cpe-108-184-67-79.socal.res.rr.com) has joined #ceph
[22:08] <alphe> Tamil welcome back
[22:09] <alphe> when I do the ceph-deploy new, does the node list have to be all my ceph cluster's nodes ?
[22:09] <alphe> or only the nodes that will receive a monitor ?
[22:09] <sagewk> alphe: only the mon nodes
[22:09] <sagewk> that is probably the problem
[22:10] <alphe> ok thanks, and any clue why, after creating the mons on the 3 nodes I ran the new command with, I don't have the ceph.admin.client.keyring ?
[22:10] <alphe> I did the ceph-deploy new on my first 3 nodes
[22:11] <alphe> then did the mon create on the first node of those 3
[22:11] <alphe> then waited like a minute for ceph-create-keys to disappear from the active process list
[22:11] <alphe> the created the 2 other mons of those 3
[22:12] <alphe> then did the gatherkeys on all my nodes and ceph.admin.client.keyring was not found anywhere ...
[22:12] <alphe> which is odd
[22:12] <alphe> I'm doing the ceph-deploy from a 4th machine and don't have that file there either ...
[22:13] * loicd (~loic@brln-4dbab0de.pool.mediaWays.net) has joined #ceph
[22:13] <sagewk> the key appears on the mon nodes only, but should be generated shortly after ceph-mon has been created on a majority of them.
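A minimal sketch of the sequence sagewk is describing (hostnames are examples):

    ceph-deploy new mon1 mon2 mon3         # list only the intended monitor nodes
    ceph-deploy install mon1 mon2 mon3
    ceph-deploy mon create mon1 mon2 mon3  # keys are generated once a majority of mons is up
    ceph-deploy gatherkeys mon1            # collects ceph.client.admin.keyring and the bootstrap keys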
[22:13] <alphe> hey loicd
[22:13] <loicd> \o alphe
[22:14] <alphe> sagewk weird thing was i have the other keys ...
[22:14] <alphe> ls -al
[22:14] <Tamil1> alphe: it is ceph.client.admin.keyring
[22:14] <alphe> ceph.bootstrap-mds.keyring ceph.mon.keyring ceph.bootstrap-osd.keyring
[22:14] <alphe> Tamil ok :)
[22:15] * loicd_ (~loicd@bouncer.dachary.org) has joined #ceph
[22:16] <Tamil1> alphe: do you not see it in /etc/ceph?
[22:16] <alphe> on which machines ?
[22:16] <Tamil1> alphe: on the nodes, where ceph-mon is running
[22:17] <alphe> hum actually I rmed all
[22:17] <alphe> but there was a /etc/ceph
[22:18] <Tamil1> alphe: what do you mean, you removed all?
[22:19] <alphe> cleaned all to start again ...
[22:19] <alphe> purgedata purge uninstall ...
[22:19] <Tamil1> alphe: oh, just purge followed by purgedata will do
[22:19] * zynzel (zynzel@spof.pl) has joined #ceph
[22:20] <Tamil1> alphe: no need for uninstall as purge will take care of uninstalling ceph packages as well
[22:20] <Tamil1> alphe: also please make sure to delete the mon.keyring and ceph.conf on your admin node, when you clean start
[22:21] <alphe> yep
[22:22] <alphe> I rm the local dir too ...
[22:22] <sagewk> rm ceph.* inside the ceph-deploy dir will normally do the trick
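Putting Tamil1's and sagewk's advice together, a minimal clean-slate sketch (hostnames are examples):

    ceph-deploy purge node1 node2 node3      # removes the ceph packages; no separate uninstall needed
    ceph-deploy purgedata node1 node2 node3  # removes the ceph data and config on the nodes
    rm ceph.*                                # in the ceph-deploy working dir: ceph.conf and the keyrings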
[22:22] <loicd> loicd_: hi
[22:22] <loicd> /kick loicd_
[22:22] <loicd> ...
[22:24] <alphe> ...
[22:28] * berant (~blemmenes@gw01.ussignalcom.com) Quit (Quit: berant)
[22:28] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[22:33] * Kioob (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) Quit (Read error: Connection reset by peer)
[22:33] * mozg (~andrei@host86-185-78-26.range86-185.btcentralplus.com) has joined #ceph
[22:34] <mozg> wrencsok, hi there
[22:34] <mozg> are you online by any chance?
[22:34] * Kioob (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) has joined #ceph
[22:37] <mozg> is anyone else noticing higher server loads after upgrading to Dumpling?
[22:37] * ccourtaut (~ccourtaut@2001:41d0:1:eed3::1) has joined #ceph
[22:37] <mozg> i've noticed that my osd server's load is around 1.5-2 even when my cluster is pretty idle
[22:38] * rturk-away is now known as rturk
[22:38] <mozg> the load used to be around 0.5 on 0.61.7
[22:38] <mozg> i've just noticed that
[22:43] * jlhawn (~jlhawn@208-90-212-77.PUBLIC.monkeybrains.net) has joined #ceph
[22:45] * ccourtaut (~ccourtaut@2001:41d0:1:eed3::1) Quit (Ping timeout: 480 seconds)
[22:46] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[22:46] <sagewk> sjusthm: sent you something about temp objects
[22:47] * ScOut3R (~scout3r@54026B73.dsl.pool.telekom.hu) has joined #ceph
[22:49] <ircolle> mozg - what version?
[22:49] <sjusthm> sagewk: when is good for hangout?
[22:49] <sagewk> anytime
[22:49] <sagewk> now?
[22:51] <sjusthm> sure
[22:51] * LPG (~LPG@c-76-104-197-224.hsd1.wa.comcast.net) has joined #ceph
[22:54] * markbby (~Adium@168.94.245.1) Quit (Quit: Leaving.)
[22:55] * ccourtaut (~ccourtaut@2001:41d0:1:eed3::1) has joined #ceph
[22:57] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[22:57] * ChanServ sets mode +v andreask
[22:59] * loicd (~loic@brln-4dbab0de.pool.mediaWays.net) Quit (Ping timeout: 480 seconds)
[22:59] * loicd_ is now known as loicd
[23:03] * ccourtaut (~ccourtaut@2001:41d0:1:eed3::1) Quit (Ping timeout: 480 seconds)
[23:04] <ircolle> mozg - you can try wip-dumpling-perf2 or wait until 67.3 comes out - they resolve the CPU load increase
[23:04] * rudolfsteiner (~federicon@200.68.116.185) has joined #ceph
[23:11] <alphe> ok so I have reinstalled the whole thing; I am at the stage after the first ceph-deploy mon create
[23:11] <alphe> at this stage should it create the ceph.client.admin.keyring ?
[23:12] <sagewk> alphe: right; keys don't appear until a majority of the mons have been created
[23:12] <sagewk> need a quorum first
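A quick way to confirm that, using the admin-socket command sagewk gave earlier (run on a monitor node):

    ceph daemon mon.$(hostname) mon_status   # check "state" and the "quorum" list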
[23:12] * vata (~vata@2607:fad8:4:6:2dcd:32ef:bfab:c53c) Quit (Quit: Leaving.)
[23:14] <alphe> cd /sagewk ok
[23:15] <alphe> after 3 minutes the ceph-create-keys is still running, is that ok ?
[23:15] * ccourtaut (~ccourtaut@2001:41d0:1:eed3::1) has joined #ceph
[23:15] <alphe> do i have to wait until it stops to create the other mons ?
[23:15] <dmick> you need to create the other mons before it will succeed, and stop
[23:15] <sagewk> sjusthm: wip-dumpling-perf2 ok to merge then?
[23:16] <sjusthm> sagewk: it's in master
[23:16] <alphe> ok done
[23:16] <sjusthm> that's what caused 6151 :P
[23:16] <alphe> all my mons are created
[23:16] <sagewk> oh right, ok
[23:17] * DLange (~DLange@dlange.user.oftc.net) Quit (Quit: take the red pill)
[23:17] <alphe> ok now where should I have the {cluster}.client.admin.keyring ?
[23:17] <alphe> ok found it !
[23:17] <alphe> i think I got what happened the first time I tried ...
[23:18] <alphe> I waited until the ceph-create-keys stops ...
[23:18] * zhyan_ (~zhyan@101.83.160.172) has joined #ceph
[23:18] <alphe> then I created the mon ...
[23:19] <alphe> I will change my personal documentation to say I should wait for ceph-deploy to return and then create the other mons like 30 seconds after
[23:19] <alphe> the first mon is created
[23:19] * sagewk (~sage@2607:f298:a:607:219:b9ff:fe40:55fe) has left #ceph
[23:19] * sagewk (~sage@2607:f298:a:607:219:b9ff:fe40:55fe) has joined #ceph
[23:19] * ChanServ sets mode +o sagewk
[23:19] * DLange (~DLange@dlange.user.oftc.net) has joined #ceph
[23:20] <mozg> ircolle, this is on 0.67.2
[23:20] <mozg> ircolle, do you know when .3 is coming out?
[23:21] * tziOm (~bjornar@ti0099a340-dhcp0395.bb.online.no) Quit (Remote host closed the connection)
[23:21] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[23:22] <alphe> after I created the mons and did the gatherkeys, do I have to create the osds right away or do i have to prepare the disks on each node ?
[23:23] * ccourtaut (~ccourtaut@2001:41d0:1:eed3::1) Quit (Ping timeout: 480 seconds)
[23:23] * loicd (~loicd@bouncer.dachary.org) Quit (Ping timeout: 480 seconds)
[23:24] * zhyan__ (~zhyan@101.83.125.91) Quit (Read error: Operation timed out)
[23:24] * dmsimard (~Adium@108.163.152.2) Quit (Quit: Leaving.)
[23:24] * ccourtaut (~ccourtaut@2001:41d0:1:eed3::1) has joined #ceph
[23:25] <sagewk> alphe: you can create the osds whenever you want..
[23:26] <alphe> ok but will that prepare my disks if they have no partition on them ?
[23:26] * zynzel (zynzel@spof.pl) Quit (Ping timeout: 480 seconds)
[23:26] * loicd (~loicd@bouncer.dachary.org) has joined #ceph
[23:26] * rudolfsteiner (~federicon@200.68.116.185) Quit (Quit: rudolfsteiner)
[23:27] <alphe> will ceph-deploy osd create node1:/dev/sda create and initialise the disk to be used by the ceph cluster ?
[23:27] <alphe> or do I have to do an extra step ?
[23:27] * ScOut3R (~scout3r@54026B73.dsl.pool.telekom.hu) Quit (Remote host closed the connection)
[23:28] * ScOut3R (~scout3r@54026B73.dsl.pool.telekom.hu) has joined #ceph
[23:28] * loicd1 (~loic@brln-4db8015a.pool.mediaWays.net) has joined #ceph
[23:29] * ssejour1 (~sebastien@lif35-1-78-232-187-11.fbx.proxad.net) Quit (Quit: Leaving.)
[23:29] <alphe> ok, ceph prepare ran, but as there is no partition on my disk the process errors out
[23:29] <sagewk> that will do everything, including creating partitions. you probably need to add --zap-disk to make it clobber the existing data and partition table.
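A hedged sketch of the two options (host and device names are examples; zapping destroys any existing data on the disk):

    ceph-deploy disk zap node1:/dev/sda                # wipe the partition table first, then
    ceph-deploy osd create node1:/dev/sda
    # or in one step:
    ceph-deploy osd create --zap-disk node1:/dev/sda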
[23:29] <ircolle> mozg - soonish :-) it's in testing now
[23:30] <mozg> nice
[23:30] <mozg> hopefully it will address the kernel panics that I am having on my vms
[23:31] <sagewk> sjusthm: do you know what the log_keys_debug thing is?
[23:31] <sjusthm> not yet, trying to look at the core now
[23:35] * kraken (~kraken@c-24-131-46-23.hsd1.ga.comcast.net) Quit (Remote host closed the connection)
[23:35] * kraken (~kraken@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[23:37] * kraken (~kraken@c-24-131-46-23.hsd1.ga.comcast.net) Quit (Remote host closed the connection)
[23:37] * kraken (~kraken@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[23:39] * bandrus (~Adium@12.248.40.138) Quit (Quit: Leaving.)
[23:40] * zynzel (zynzel@spof.pl) has joined #ceph
[23:41] <jlhawn> how much space should I allocate for a journal partition on an OSD?
[23:44] <lurbs> jlhawn: It depends. See: http://ceph.com/docs/master/rados/configuration/osd-config-ref/#journal-settings
[23:45] <lurbs> dmick: Yeah, I think I *am* a bot.
[23:46] <mozg> jlhawn, I am allocating 20gb
[23:46] <mozg> but it all depends on your work load
[23:46] <jlhawn> I see, thanks
[23:46] <mozg> if you have a bunch of writes
[23:46] <mozg> a lot per second
[23:46] <mozg> you might want to allocate more
[23:47] <mozg> but 20gb should be enough
[23:47] <jlhawn> it says the default journal size is 0. does that mean no journaling?
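For what it's worth, my reading of the journal settings page lurbs linked is that a size of 0 with a block-device journal means the whole device (or partition) is used, and that a file-based journal should be sized from expected throughput. A hedged ceph.conf sketch (the numbers are examples):

    [osd]
    # docs' rule of thumb: osd journal size >= 2 * (expected throughput * filestore max sync interval)
    # e.g. 2 * (100 MB/s * 5 s) = 1000 MB, so 10240 MB (10 GB) leaves plenty of headroom
    osd journal size = 10240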
[23:47] * ccourtaut (~ccourtaut@2001:41d0:1:eed3::1) Quit (Ping timeout: 480 seconds)
[23:49] <alphe> ok I created my osds and they don't appear in the osdmap ...
[23:50] * vanham (~vanham@gateway.mav.com.br) Quit (Remote host closed the connection)
[23:52] <Tamil1> alphe: do you have the ceph-osd process running?
[23:52] <alphe> nope ...
[23:52] * ccourtaut (~ccourtaut@2001:41d0:1:eed3::1) has joined #ceph
[23:53] <Tamil1> alphe: what does "./ceph-deploy disk list <host> " show?
[23:53] <alphe> 2 disks in xfs
[23:53] <alphe> and 1 system
[23:53] <xarses> if you did ceph-deploy prepare, then you have to activate after
[23:54] <Tamil1> alphe: wanted to see if the disks are active or in prepared state
[23:54] <xarses> if they are active, they should have been mounted and entered into the osdmap
[23:54] * bandrus (~Adium@12.248.40.138) has joined #ceph
[23:54] <alphe> xarses i did a ceph-deploy osd create node
[23:55] <xarses> older versions of ceph-deploy osd create don't always activate
[23:55] <alphe> only that triggered the prepare of the disk and said it was all ok
[23:55] <alphe> xarses it is the latest version of ceph-deploy, 1.2.2
[23:55] <xarses> ok, you can try activate anyway
[23:56] <xarses> the most it will do is yell at you if it's already good
[23:56] <alphe> ok how i do that ?
[23:56] * jlu (~chatzilla@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 23.0.1/20130814063812])
[23:56] <xarses> replace create with activate in the ceph-deploy line
[23:56] <alphe> ok
[23:57] * jskinner (~jskinner@199.127.136.233) Quit (Remote host closed the connection)
[23:58] <alphe> i have ceph-mon but for a strange reason not ceph-osd ...
[23:58] <xarses> still?
[23:59] <alphe> yep
[23:59] <xarses> you can try service ceph -a start and see if it has any messages on the service start
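A minimal sketch of that activate-and-verify path (host and device names are examples):

    ceph-deploy osd activate node1:/dev/sda1   # activate the data partition that prepare created
    sudo service ceph -a start                 # or start the daemons via the init script, as suggested
    ceph osd tree                              # the new OSDs should now show up in the osdmap
    ps aux | grep ceph-osd                     # confirm the ceph-osd processes are running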

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.