#ceph IRC Log


IRC Log for 2011-11-04

Timestamps are in GMT/BST.

[0:00] <mgalkiewicz> the wiki only describes how to authenticate from the client's perspective
[0:00] <mgalkiewicz> I need separate credentials for each volume
[0:03] <joshd> mgalkiewicz: you can give each user access on a per-pool basis, so you can create a pool for each user, but this doesn't scale that well
[0:03] <mgalkiewicz> I have tens of clients
[0:03] <joshd> it's better handled by whatever management layer you have above rbd
[0:04] <mgalkiewicz> any example?
[0:04] <joshd> openstack
[0:05] <mgalkiewicz> well I am using eucalyptus
[0:05] <joshd> I'm not sure how far along authorization is though - whether it supports volumes well or not
[0:06] <mgalkiewicz> the second idea (besides ceph authentication) is to use iscsi
[0:06] <mgalkiewicz> what do you think about that?
[0:09] <todin> btw. is there an easy way to create an ext4 via mkcephfs, or do I have to make it by hand?
[0:09] <joshd> I'm guessing it's more work to implement an iscsi exporter than a simple user authorization tool, but I don't know much about eucalyptus internals
[0:10] <joshd> todin: by hand
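
Making it by hand at that point generally meant formatting and mounting the OSD data partition yourself before running mkcephfs. A minimal sketch, where the device, mount point, and osd id are purely placeholders:

    mkfs.ext4 /dev/sdb1
    mkdir -p /srv/osd.0
    mount -o user_xattr /dev/sdb1 /srv/osd.0
    # point "osd data" at the mount point in ceph.conf, then run mkcephfs as usual
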
[0:11] <mgalkiewicz> I don't want to implement any software, just use an existing one to serve my needs
[0:11] <mgalkiewicz> authentication in ceph is poorly documented
[0:12] <mgalkiewicz> I don't have any idea how to start
[0:12] <mgalkiewicz> I would like to share ceph storage among clients which cannot have access to each other's files
[0:14] <mgalkiewicz> I thought that rbd is a good way to achieve this. Just create a separate volume with a different login/pass for each client.
[0:15] <mgalkiewicz> is it possible?
[0:16] <joshd> mgalkiewicz: it is possible by using a different pool for each rbd volume, but it's not going to work well with very large numbers of pools
[0:17] <mgalkiewicz> ok, could you point me to some documentation describing how to do this?
[0:18] <joshd> mgalkiewicz: http://glance.openstack.org/configuring.html#configuring-the-rbd-storage-backend
[0:18] <joshd> that has the commands to make a pool and user with access to it - you'd just do that before creating a volume
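
A minimal sketch of that per-pool setup with the standard command-line tools, where the pool name, user name, and keyring path are illustrative:

    # create a dedicated pool for one client
    rados mkpool customer1
    # generate a key for that client, restricted to its own pool
    ceph-authtool --create-keyring customer1.keyring --gen-key --name client.customer1 \
        --cap mon 'allow r' --cap osd 'allow rwx pool=customer1'
    # register the key with the cluster
    ceph auth add client.customer1 -i customer1.keyring
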
[0:19] <mgalkiewicz> great and I understand that using openstack is not obligatory?
[0:19] <joshd> correct, that's just the clearest place it's documented right now
[0:20] <joshd> we definitely need better documentation about our authentication
[0:21] <mgalkiewicz> ok so I have to create a keyring, add a key for the client, and add information about the keyring to ceph.conf
[0:22] <mgalkiewicz> clients' keys should be in the same keyring?
[0:24] <joshd> the keyring contains the secret that lets a client access the image, so you'd want to separate them so that each client can only access their own keyring
[0:24] <mgalkiewicz> ok
[0:24] <mgalkiewicz> and how to associate keyring with rbd volume?
[0:25] <joshd> you associate it with the pool, and then create the volume in that pool
[0:25] <mgalkiewicz> right, the pool instead of the keyring, my mistake
[0:25] <joshd> the 'ceph auth add -i keyring' associates it with the pool
[0:26] <joshd> and you make the volume in a pool with 'rbd create -p poolname -s size imagename'
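
Continuing the illustrative names from the sketch above, the volume is created inside the client's pool and the client reaches it with its own keyring (rbd sizes here are in megabytes):

    # create a 10 GB image in the client's pool
    rbd create -p customer1 -s 10240 vol1
    # the client accesses it with its own identity and keyring
    rbd --id customer1 --keyring customer1.keyring -p customer1 ls
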
[0:27] <mgalkiewicz> ok I get it
[0:29] <mgalkiewicz> right now, without authentication, I can modify rbd images on the client
[0:29] <mgalkiewicz> like creating, removing etc.
[0:30] <mgalkiewicz> I assume that this authentication limits this?
[0:32] <joshd> it limits it to be within the pool
[0:32] <mgalkiewicz> ok
[0:32] <joshd> but if you're using these volumes inside vms, they can't do anything except access the volume(s) you give them
[0:33] <mgalkiewicz> and the last question: is it possible to build the rbd kernel module?
[0:33] <mgalkiewicz> I have debian squeeze with kernel 2.6.32 and I would like to stay with it
[0:34] <joshd> ah, I'm afraid not
[0:34] <mgalkiewicz> ok anyway thank you for your help
[0:34] <joshd> if you're using qemu, you don't need the kernel module though
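
Assuming a qemu built with rbd support, a guest disk can point straight at the image; the pool and image names below carry over from the earlier sketch and are illustrative:

    # inspect the image through librbd
    qemu-img info rbd:customer1/vol1
    # attach it to a guest as a virtio disk, e.g.
    #   -drive file=rbd:customer1/vol1,if=virtio
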
[0:35] <joshd> you're welcome :)
[0:35] <mgalkiewicz> no I am using xen but I will figure something out
[0:35] <mgalkiewicz> once again big thanks
[0:35] <mgalkiewicz> take care, cu
[0:36] * mgalkiewicz (~maciej.ga@85.89.186.247) Quit (Quit: Ex-Chat)
[0:40] * cp (~cp@c-98-234-218-251.hsd1.ca.comcast.net) Quit (Quit: cp)
[0:44] * tserong (~tserong@124-168-227-41.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[0:46] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Quit: Ex-Chat)
[0:48] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[0:54] * tserong (~tserong@124-168-226-185.dyn.iinet.net.au) has joined #ceph
[1:08] * Tv (~Tv|work@aon.hq.newdream.net) Quit (Ping timeout: 480 seconds)
[1:21] <todin> what kind of idleness does an osd need to start scrubbing?
[1:24] <joshd> load - configured by osd_scrub_load_threshold
[1:25] <todin> joshd: can I set it via the ceph.conf file?
[1:25] <joshd> yeah
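
A minimal ceph.conf sketch for that option; 0.5 is just the default restated, not a tuning recommendation:

    [osd]
        osd scrub load threshold = 0.5
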
[1:26] <todin> ok, btw, after a short test, I think it fixed it. 23:55 < gregaf> looks like it's commit 6d6a435190bdf2e04c9465cde5bdc3ac68cf11a4 in the ext4 git tree
[1:27] <joshd> cool, good to know
[1:30] <todin> does it mean if the load is lower than 0.5 the scrub will start? OPTION(osd_scrub_load_threshold, OPT_FLOAT, 0.5)
[1:31] <joshd> yeah, with some probability - looks like 1/3 of the time when the load is lower than that it'll schedule scrubbing for a pg
[1:33] <todin> that doesn't work, all of my nodes have load at around 0.22, but there is no scrub
[1:33] * nwatkins` (~user@kyoto.soe.ucsc.edu) Quit (Remote host closed the connection)
[1:35] <joshd> if you turn up osd debugging to 20 there should be something interesting in the logs
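
The config-file route for that would be something like the following; the injectargs line is a possible runtime alternative whose exact syntax varies between versions:

    [osd]
        debug osd = 20
    ; or, without restarting (syntax varies by version):
    ;   ceph tell osd.0 injectargs '--debug-osd 20'
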
[2:11] * bchrisman (~Adium@108.60.121.114) Quit (Quit: Leaving.)
[2:13] * nwatkins` (~user@c-50-131-197-21.hsd1.ca.comcast.net) has joined #ceph
[2:13] * nwatkins` (~user@c-50-131-197-21.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[2:30] * joshd (~joshd@aon.hq.newdream.net) Quit (Quit: Leaving.)
[3:30] * The_Bishop (~bishop@port-92-206-76-12.dynamic.qsc.de) Quit (Quit: Wer zum Teufel ist dieser Peer? Wenn ich den erwische dann werde ich ihm mal die Verbindung resetten!)
[4:59] * grape (~grape@c-76-17-80-143.hsd1.ga.comcast.net) Quit (Quit: leaving)
[6:16] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) has joined #ceph
[6:40] * tserong (~tserong@124-168-226-185.dyn.iinet.net.au) Quit (Read error: Connection reset by peer)
[6:53] * tserong (~tserong@124-168-231-155.dyn.iinet.net.au) has joined #ceph
[8:24] * gregaf1 (~Adium@aon.hq.newdream.net) has joined #ceph
[8:28] * gregaf (~Adium@aon.hq.newdream.net) Quit (Ping timeout: 480 seconds)
[10:04] * fronlius (~Adium@testing78.jimdo-server.com) has joined #ceph
[12:20] * mgalkiewicz (~mgalkiewi@194.28.49.118) has joined #ceph
[15:00] * mgalkiewicz (~mgalkiewi@194.28.49.118) Quit (Quit: Leaving)
[15:04] * Iribaar (~Iribaar@200.111.172.138) Quit (Quit: Leaving)
[16:16] * tserong (~tserong@124-168-231-155.dyn.iinet.net.au) Quit (Remote host closed the connection)
[16:27] * slang (~slang@chml01.drwholdings.com) has joined #ceph
[16:39] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:23] * Tv (~Tv|work@aon.hq.newdream.net) has joined #ceph
[17:27] * cp (~cp@c-98-234-218-251.hsd1.ca.comcast.net) has joined #ceph
[17:35] * bchrisman (~Adium@108.60.121.114) has joined #ceph
[17:41] * joshd (~joshd@aon.hq.newdream.net) has joined #ceph
[18:02] * fronlius (~Adium@testing78.jimdo-server.com) Quit (Quit: Leaving.)
[18:26] * cp (~cp@c-98-234-218-251.hsd1.ca.comcast.net) Quit (Quit: cp)
[18:51] * cp (~cp@206.15.24.21) has joined #ceph
[20:36] * verwilst (~verwilst@dD576F1A9.access.telenet.be) has joined #ceph
[21:03] * cp (~cp@206.15.24.21) Quit (Quit: cp)
[21:04] * cp (~cp@206.15.24.21) has joined #ceph
[22:06] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[22:11] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[22:22] <cp> Quick question: it seems that things like ceph -s contact a particular monitor. What happens if this monitor is down?
[22:22] <gregaf1> cp: right now, they fail
[22:22] <gregaf1> which is pretty stupid since they should just contact one of the other monitors
[22:22] <cp> do they hang or return?
[22:22] <cp> (ie is there a timeout)
[22:22] <gregaf1> I think they give up eventually, though I'd have to check
[22:22] <cp> right - contacting another would make sense.
[22:23] <gregaf1> you can also specify the monitor to connect to with -m, so if you're writing a management script maybe that's an option
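
For example, a specific monitor address (illustrative here) can be queried directly:

    ceph -m 192.168.0.2:6789 -s
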
[22:25] <gregaf1> I'd have to check on how hard it would be to actually fix this, and how much of it still being there is just inertia (because really, your monitors should be up, and who uses ceph -s that can't just hit ctrl-c?)
[22:25] <gregaf1> :)
[22:36] <joshd> right now the ceph tool doesn't time out because it doesn't pass a timeout to monclient->authenticate
[22:42] <cp> Right now I'm having issues adding my second monitor - it hangs
[22:45] <cp> I'm adding the new mon to the ceph.conf file, copying it around, then running 'ceph mon add', and then copying across the mon dir and actually starting the mon
[22:45] <cp> is that the right order of things?
[22:48] <cp> joshd: and is it possible to add that timeout (given how monclient->authenticate works) and then try another monitor when the first one fails?
[22:50] <joshd> the timeout is easy to add, I'm not familiar with the part that chooses which monitor to talk to
[22:50] <joshd> and I think that's the right order for adding a monitor
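
A rough sketch of that sequence, with the monitor id, address, and paths purely as placeholders:

    # 1. add the new [mon.b] section to ceph.conf and push the file to every node
    # 2. tell the cluster about the new monitor
    ceph mon add b 192.168.0.2:6789
    # 3. seed the new monitor's data directory, either by copying an existing mon dir
    #    or by building one from the current monmap:
    ceph mon getmap -o /tmp/monmap
    ceph-mon -i b --mkfs --monmap /tmp/monmap --keyring /path/to/mon.keyring
    # 4. start the new monitor
    /etc/init.d/ceph start mon.b
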
[22:52] <gregaf1> Tv would know better; he's been doing some work with that stuff for chef
[22:53] <Tv> cp: monitor addition is gnarly right now
[22:53] <Tv> cp: http://ceph.newdream.net/wiki/Monitor_cluster_expansion is correct as far as i know
[22:54] <Tv> cp: what does the new mon log?
[22:55] <cp> The new mon is never actually created.
[22:55] <cp> the ceph mon add step hangs
[22:55] <Tv> cp: sounds like you don't have a healthy cluster to begin with, then
[22:55] <Tv> cp: "ceph -s"?
[23:01] <cp> Tv: dragged away - hopefully back later (thanks for helping)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.