#ceph IRC Log

IRC Log for 2012-05-26

Timestamps are in GMT/BST.

[0:01] <CristianDM> Thanks.
[0:01] * jmlowe (~Adium@140-182-131-63.dhcp-bl.indiana.edu) Quit (Quit: Leaving.)
[0:02] <CristianDM> In the future, is it possible that rbd volumes can be mounted in many instances and not only in one?
[0:02] <CristianDM> Or this is impossible
[0:03] <gregaf> you can already do this, but you need to be careful with the options you use and how you access it if you want any sort of coherency
[0:04] <CristianDM> When I try this, openstack shows:
[0:04] <CristianDM> Error: Error attaching volume: Invalid volume: status must be available
[0:04] <CristianDM> When attached to one instance, I can't attach to another
[0:08] * stass (stas@ssh.deglitch.com) Quit (Read error: Connection reset by peer)
[0:08] * stass (stas@ssh.deglitch.com) has joined #ceph
[0:09] <gregaf> that's an openstack thing, not an rbd thing, you'll have to ask them :)
[0:13] <CristianDM> Ok. I will
[0:15] <iggy> you have received (and probably will again receive) this warning: you can't just mount a normal filesystem in more than one place at a time
[0:19] * s[X]_ (~sX]@ppp59-167-154-113.static.internode.on.net) has joined #ceph
[0:23] * jmlowe (~Adium@c-71-201-31-207.hsd1.in.comcast.net) has joined #ceph
[0:30] <elder> joshd, can you explain why an OSD's peer number (in the client) is not set until somewhat late during initialization?
[0:31] <elder> It looks like there's the notion of an "acting" osd or something.
[0:32] <joshd> elder: which code are you looking at? I'm not sure what you mean by peer number
[0:32] * jmlowe (~Adium@c-71-201-31-207.hsd1.in.comcast.net) Quit (Quit: Leaving.)
[0:32] <elder> In ceph-client, net/ceph/osd_client.c
[0:32] <elder> Each oconnectino has a "peer name" structure.
[0:32] * jmlowe (~Adium@c-71-201-31-207.hsd1.in.comcast.net) has joined #ceph
[0:32] * jmlowe (~Adium@c-71-201-31-207.hsd1.in.comcast.net) Quit ()
[0:33] <elder> And that structure defines a type and a number for the peer at the other end of the connectino.
[0:33] <elder> So for an OSD, its type is CEPH_ENTITY_TYPE_OSD.
[0:33] <elder> The number seems to be a small unique integer. For an MDS it's set immediately at initialization.
[0:34] <elder> For an OSD it seems to calculate it a bit later on, and I'm just trying to understand what's going on at a conceptual level.
[0:36] <elder> Note that whenever my fingers say "connectino" they really mean "connection."
[0:36] * sjust (~sam@aon.hq.newdream.net) has joined #ceph
[0:37] <elder> gregaf, I suppose I should be asking you. I gather you know more about the communication channels.
[0:37] <elder> But I don't know if the peer name is visible at the other end of the connection.
[0:37] <gregaf> yeah, I don't remember off-hand and I'm trying to see if I can dig it up in my brain
[0:37] <elder> You use an ice pick for that?
[0:38] <elder> Or a tiny shovel?
[0:38] <nhm> elder: shop vac maybe
[0:39] <joshd> are mds ids not reused for different addresses maybe?
[0:39] <gregaf> no, they are
[0:39] <elder> I think that's most likely it. That is, it would identify one uniquely despite the socket address that gets assigned (or reassigned) to it.
[0:39] <gregaf> I think this must be a kernel client thing, as the SimpleMessenger initializes type and ID at the beginning, unless I'm very confused
[0:40] <elder> What does it use the id for
[0:40] <elder> (question mark)
[0:40] <gregaf> it's the ID for the entity on the other end
[0:40] <gregaf> client.42, mds.0, osd.12
[0:41] <elder> Is it used though? Or just carried along for the benefit of puny humans?
[0:42] <gregaf> the type is used in userspace to determine the kind of connection (lossy/not, server/peer/client)
[0:42] <elder> Yes I see that.
[0:42] <elder> But I don't think the number is actually ever used.
[0:43] <elder> Just set.
[0:43] <elder> And shown.
[0:44] <gregaf> I'm actually not seeing a corresponding number that's individually stored in the userspace version, so yes
[0:44] <elder> It also makes me wonder whether the number is correlated between clients, or if, for example, OSD 0 on client A might be named OSD 2 on client B.
[0:44] <gregaf> it's stored as part of the entity_addr_t or one of its related types in userspace
[0:45] <gregaf> no, they're definitely the same between clients; it's part of the daemon's logical identity for the cluster!
[0:45] <elder> Well then, does the peer define its number?
[0:46] <gregaf> well, the client initiates all connections and it should know them before starting the connection
[0:47] <elder> That makes sense. It might be nice to have the server identify itself though. We might be able to use something like that to support automatic configuration someday.
[0:48] <elder> So I suppose an OSD's number will be something in the map from the monitor?
[0:49] <gregaf> yes
[0:49] <elder> OK.
[0:49] <gregaf> they also exchange those numbers as part of the connection handshake
[0:49] <gregaf> so it is identifying itself
[0:50] <gregaf> it's just the kernel client should know the identity of anybody it's connecting to because only clients and peers initiate connections, and nobody is a client or peer to the kernel client ;)
[0:50] <elder> OK. I'm just looking at the code and wanting to encapsulate some initialization a bit better. But before doing so I had to get a better idea of what was going on at a higher level.
[0:50] <elder> Now I think I have what I need, thank you for working through it with me.
[0:51] <gregaf> yep, np
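The entity-naming scheme discussed above (a type plus a small logical ID, rendered as osd.12, mds.0, client.42) can be sketched conceptually in Python. This is an illustrative model of the idea only, not the actual kernel `ceph_entity_name` struct; the `EntityName` class name is made up for this sketch:

```python
# Conceptual model of Ceph entity naming: a connection peer is
# identified by a type plus a small integer assigned by the cluster
# (e.g. from the OSD map), not by its socket address. The logical ID
# stays the same even if the daemon's address changes.

CEPH_ENTITY_TYPE_MON = "mon"
CEPH_ENTITY_TYPE_MDS = "mds"
CEPH_ENTITY_TYPE_OSD = "osd"
CEPH_ENTITY_TYPE_CLIENT = "client"


class EntityName:
    def __init__(self, type_, num):
        self.type = type_
        self.num = num  # logical ID, stable across reconnects

    def __str__(self):
        # Rendered the way the daemons are named cluster-wide
        return f"{self.type}.{self.num}"


print(EntityName(CEPH_ENTITY_TYPE_OSD, 12))     # osd.12
print(EntityName(CEPH_ENTITY_TYPE_MDS, 0))      # mds.0
print(EntityName(CEPH_ENTITY_TYPE_CLIENT, 42))  # client.42
```

As gregaf notes, these IDs are part of the daemon's logical identity for the cluster, so osd.0 names the same daemon from every client's point of view.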
[1:13] * Tv_ (~tv@aon.hq.newdream.net) Quit (Ping timeout: 480 seconds)
[1:35] * detaos (~quassel@c-50-131-106-101.hsd1.ca.comcast.net) has left #ceph
[2:10] * aa (~aa@r200-40-114-26.ae-static.anteldata.net.uy) Quit (Remote host closed the connection)
[2:14] * BManojlovic (~steki@212.200.243.232) Quit (Quit: I'm off, and you do whatever you want...)
[2:36] * gregorg_taf (~Greg@78.155.152.6) Quit (Ping timeout: 480 seconds)
[2:48] * bchrisman (~Adium@108.60.121.114) Quit (Quit: Leaving.)
[2:59] * lofejndif (~lsqavnbok@19NAAI2GV.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[3:41] * eternaleye_ (~eternaley@tchaikovsky.exherbo.org) has joined #ceph
[3:42] * eternaleye (~eternaley@tchaikovsky.exherbo.org) Quit (Remote host closed the connection)
[4:16] * jpieper (~josh@209-6-86-62.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com) Quit (Quit: Ex-Chat)
[4:18] * renzhi (~renzhi@180.169.73.90) has joined #ceph
[4:23] * chutzpah (~chutz@216.174.109.254) Quit (Quit: Leaving)
[4:25] <joshd> renzhi: object locators are described in librados.h, you can set them by calling rados_ioctx_locator_set_key() and then writing to objects
[4:26] <renzhi> joshd: :O
[4:27] <renzhi> I didn't notice that one
[4:28] <joshd> usually you don't need them
[4:29] <renzhi> so, this should be the pg id?
[4:32] <joshd> it's a special property you can set so that multiple objects map to the same pg
[4:33] <joshd> normally the pg an object is in is determined by its name
[4:34] <joshd> the key is listed as null if no locator is set
[4:34] <renzhi> So, it's not really used to determine where the object is placed; it's more that it determines which objects are grouped together if they share the same locator key.
[4:34] <renzhi> If I understand this correctly
[4:34] <joshd> right
[4:35] <dmick> well it does control placement in terms of the pg, when it's set, right?...
[4:35] <joshd> yeah, but it doesn't tell it to store the object on a specific osd, or a specific pg
[4:35] <dmick> oh ok
[4:35] <renzhi> can you describe a use case for this locator key?
[4:36] <joshd> the only one I know of is rados_clone_range, which requires the source and destination objects to be in the same place
[4:37] <renzhi> ok
[4:37] <renzhi> therefore, if I do a clone range, I should get the same key for the original and the cloned, by default?
[4:38] <joshd> I think so, I'm not sure about that
[4:38] <renzhi> ok, thx
[4:39] <joshd> you might need to have called rados_ioctx_locator_set_key to the same value as the source used
[4:39] <renzhi> ok
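The locator behavior joshd describes can be sketched with a toy model in Python: the PG is derived from a hash of the locator key when one is set, and of the object name otherwise, so objects sharing a locator land in the same PG. Real Ceph uses rjenkins hashing and then maps PGs to OSDs via CRUSH; the `pg_for` function name here is made up for illustration:

```python
import hashlib


def pg_for(object_name, pg_num, locator_key=None):
    """Toy model of object->PG mapping: hash the locator key if set,
    otherwise the object name, and reduce modulo the pool's PG count.
    This only illustrates the idea that the locator overrides the
    name for placement; it is not Ceph's actual hash."""
    key = locator_key if locator_key is not None else object_name
    h = int.from_bytes(hashlib.sha1(key.encode()).digest()[:4], "big")
    return h % pg_num


# With the same locator key, differently named objects map to the
# same PG -- which is what rados_clone_range needs.
assert pg_for("src_obj", 64, locator_key="shared") == \
       pg_for("dst_obj", 64, locator_key="shared")
```

Note the locator picks a PG indirectly through the hash; as joshd says, it does not let you target a specific PG or OSD.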
[4:40] <renzhi> Another question, not related to this one, if you know the answer.
[4:40] <renzhi> I'm building a backup storage system. I want the backup app to be able to write and read, but not delete. I.e., any object written to rados can be read, but no object shall be deleted.
[4:41] <renzhi> is there a way to set the policy?
[4:45] <joshd> there's no way right now, no
[4:45] <joshd> you might consider the backup app having only read permissions though
[4:45] <joshd> and having a separate user with write access for restoring
[4:46] <renzhi> I see
[4:47] <joshd> you could emulate any permission model by giving them only execute permissions, and making every operation go through an osd class, but that's a bunch of extra work
[4:47] <renzhi> actually, that's what I was thinking, put in a class to do that, but it seems like a lot of work.
[4:48] <renzhi> And I thought this policy settings should be a built-in feature instead :)
[4:49] <joshd> we've thought a bunch about the capabilities model, and it will definitely be more useful in the future, but I'm not sure what it'll look like yet
[4:49] <joshd> not being able to delete is one I hadn't thought of
[4:50] <joshd> time for me to go now though, have a good weekend
[4:50] <renzhi> thx, you too
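The gatekeeping idea joshd sketches (route every operation through one checkpoint so arbitrary policies like "no deletes" can be enforced) can be illustrated with a toy Python model. In real Ceph this would be an OSD class method invoked by a client holding execute-only caps; the `WriteOnceStore` class and its method names are invented for this sketch:

```python
# Toy illustration of enforcing a write-once, no-delete policy by
# funneling all operations through a single checkpoint. Not a real
# OSD class -- just the shape of the policy.


class WriteOnceStore:
    def __init__(self):
        self._objects = {}

    def write(self, name, data):
        # Writes (and overwrites) are allowed by this policy.
        self._objects[name] = data

    def read(self, name):
        return self._objects[name]

    def delete(self, name):
        # The checkpoint rejects deletes unconditionally.
        raise PermissionError("deletes are disallowed by policy")


store = WriteOnceStore()
store.write("backup-2012-05-26", b"archive bytes")
print(store.read("backup-2012-05-26"))
```

The simpler approach joshd suggests first (a write-capable user for backup, a separate user for restore) covers most of this without the extra machinery.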
[4:51] * joshd (~joshd@aon.hq.newdream.net) Quit (Quit: Leaving.)
[5:02] * Ryan_Lane (~Adium@216.38.130.168) Quit (Quit: Leaving.)
[5:58] * joao (~JL@aon.hq.newdream.net) Quit (Remote host closed the connection)
[6:11] * The_Bishop (~bishop@cable-86-56-102-91.cust.telecolumbus.net) has joined #ceph
[6:12] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[6:26] * aa (~aa@r190-135-11-50.dialup.adsl.anteldata.net.uy) has joined #ceph
[6:42] * s[X]_ (~sX]@ppp59-167-154-113.static.internode.on.net) Quit (Remote host closed the connection)
[6:42] * dmick (~dmick@aon.hq.newdream.net) has left #ceph
[7:09] * cattelan is now known as cattelan_away
[7:26] * Ryan_Lane (~Adium@c-98-210-205-93.hsd1.ca.comcast.net) has joined #ceph
[7:31] * andresambrois (~aa@r190-135-11-50.dialup.adsl.anteldata.net.uy) has joined #ceph
[7:31] * aa (~aa@r190-135-11-50.dialup.adsl.anteldata.net.uy) Quit (Read error: Connection reset by peer)
[8:15] * Theuni (~Theuni@89.204.153.54) has joined #ceph
[8:33] * Theuni (~Theuni@89.204.153.54) Quit (Quit: Leaving.)
[8:47] * Qten (~qgrasso@ip-121-0-1-110.static.dsl.onqcomms.net) Quit (Ping timeout: 480 seconds)
[8:49] * Qten (~qgrasso@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[9:06] * CristianDM (~CristianD@host217.190-230-240.telecom.net.ar) Quit ()
[9:33] * gregaf (~Adium@aon.hq.newdream.net) Quit (Read error: Connection reset by peer)
[9:33] * gregaf (~Adium@aon.hq.newdream.net) has joined #ceph
[9:38] * The_Bishop (~bishop@cable-86-56-102-91.cust.telecolumbus.net) Quit (Quit: Who the hell is this Peer? If I ever catch him I'll reset his connection!)
[10:05] * andresambrois (~aa@r190-135-11-50.dialup.adsl.anteldata.net.uy) Quit (Ping timeout: 480 seconds)
[11:37] * renzhi (~renzhi@180.169.73.90) Quit (Quit: Leaving)
[11:50] * Theuni (~Theuni@82.113.106.163) has joined #ceph
[12:21] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[12:40] * jpieper (~josh@209-6-86-62.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com) has joined #ceph
[12:53] * nhm (~nh@68.168.168.19) Quit (Ping timeout: 480 seconds)
[12:57] * nhm (~nh@68.168.168.19) has joined #ceph
[13:04] * Qten (~qgrasso@ip-121-0-1-110.static.dsl.onqcomms.net) Quit (Read error: Connection reset by peer)
[13:04] * Qten (~qgrasso@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[13:28] * BManojlovic (~steki@212.200.243.232) has joined #ceph
[13:57] * Theuni (~Theuni@82.113.106.163) Quit (Ping timeout: 480 seconds)
[14:06] * Theuni (~Theuni@dslb-088-066-111-066.pools.arcor-ip.net) has joined #ceph
[14:58] * pmdz (~pmdz@cmv29.neoplus.adsl.tpnet.pl) has joined #ceph
[15:20] * lofejndif (~lsqavnbok@83TAAGABL.tor-irc.dnsbl.oftc.net) has joined #ceph
[15:28] * lofejndif (~lsqavnbok@83TAAGABL.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[15:36] * Theuni (~Theuni@dslb-088-066-111-066.pools.arcor-ip.net) Quit (Quit: Leaving.)
[15:44] <pmdz> hello. Is it possible to use ceph on a machine with the grsecurity kernel patch?
[15:45] <pmdz> I'm constantly getting this: FATAL: Could not load /lib/modules/3.2.13-grsec-xxxx-grs-ipv6-64/modules.dep: No such file or directory
[16:16] * Theuni (~Theuni@dslb-088-066-111-066.pools.arcor-ip.net) has joined #ceph
[17:08] * lofejndif (~lsqavnbok@28IAAE1X2.tor-irc.dnsbl.oftc.net) has joined #ceph
[17:31] * lofejndif (~lsqavnbok@28IAAE1X2.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[18:03] * pmdz (~pmdz@cmv29.neoplus.adsl.tpnet.pl) Quit (Ping timeout: 480 seconds)
[18:03] * nhm (~nh@68.168.168.19) Quit (Read error: Connection reset by peer)
[18:32] * cattelan_away is now known as cattelan
[18:33] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:05] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[19:13] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:25] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[19:30] * xnox (~xnox@kursk.surgut.co.uk) Quit (Remote host closed the connection)
[19:33] * xnox (~xnox@kursk.surgut.co.uk) has joined #ceph
[19:33] * xnox is now known as Guest1559
[19:45] * andresambrois (~aa@r186-52-134-218.dialup.adsl.anteldata.net.uy) has joined #ceph
[21:28] * joao (~JL@aon.hq.newdream.net) has joined #ceph
[21:29] <joao> hello #ceph
[21:43] * raul (~raul@cpe-70-123-201-24.satx.res.rr.com) has joined #ceph
[21:43] * raul is now known as qwerty
[21:44] * qwerty is now known as unknow1212
[21:45] <unknow1212> Hey guys. I want to learn and test ceph. Can somebody tell me if I can build a cluster with it and, instead of exporting a mount point to the end clients, export a logical drive or something like that via iSCSI?
[21:55] * jmlowe (~Adium@c-71-201-31-207.hsd1.in.comcast.net) has joined #ceph
[22:17] <ajm-> unknow1212: you're looking for rbd rbd
[22:17] <ajm-> s/rbd rbd/rbd/
[22:17] <unknow1212> what do you mean ?
[22:17] <unknow1212> is there a tutorial somewhere that i can follow ?
[22:18] <ajm-> unknow1212: http://ceph.com/ceph-storage/block-storage/
[22:23] * aa_ (~aa@r190-135-19-17.dialup.adsl.anteldata.net.uy) has joined #ceph
[22:30] * andresambrois (~aa@r186-52-134-218.dialup.adsl.anteldata.net.uy) Quit (Ping timeout: 480 seconds)
[22:38] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[22:39] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has left #ceph
[22:39] * unknow1212 (~raul@cpe-70-123-201-24.satx.res.rr.com) Quit (Quit: unknow1212)
[22:41] * CristianDM (~CristianD@host217.190-230-240.telecom.net.ar) has joined #ceph
[22:42] <CristianDM> Is it possible to mount rbd outside a virtual machine?
[22:43] <jmlowe> sure
[22:43] <CristianDM> How?
[22:43] <jmlowe> rbd map pool/imagename
[22:44] <jmlowe> will get you /dev/rbd0
[22:44] <jmlowe> I do it all the time
[22:44] <CristianDM> Perfect :D
[22:44] <jmlowe> rbd showmapped
[22:45] <jmlowe> will show your existing mappings
[22:49] <CristianDM> Thanks
[22:49] <jmlowe> np
[23:04] <CristianDM> for small files, I need to mkfs an rbd image and export it via NFS for webserver and mail server files
[23:05] <CristianDM> Is xfs or ext4 best?
[23:05] * lofejndif (~lsqavnbok@28IAAE19M.tor-irc.dnsbl.oftc.net) has joined #ceph
[23:11] <jmlowe> http://serverfault.com/questions/190625/what-file-system-is-better-for-a-linux-mail-server
[23:14] <CristianDM> Thanks.
[23:16] * Theuni (~Theuni@dslb-088-066-111-066.pools.arcor-ip.net) Quit (Quit: Leaving.)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.