#ceph IRC Log

IRC Log for 2011-04-01

Timestamps are in GMT/BST.

[0:03] <sagewk> fwiw i'm happy without quotes, as long as \ still works.
[0:04] <sagewk> (for things like device lists for btrfs devs)
[0:04] <sagewk> the configs on cosd will need updating in that case.
[0:07] <cmccabe> k
[0:09] * stingray (~stingray@stingr.net) has left #ceph
[0:12] * eternaleye__ (~eternaley@195.215.30.181) has joined #ceph
[0:12] * eternaleye__ is now known as eternaleye
[0:28] <Dantman> sage, (sagewk?): Does the ceph wiki still need that help? What do you need? Just some tips on what to set up? Some consulting on how to deal with it? Or do you want hosting experienced with dealing with that kind of stuff and maintaining MW?
[0:28] <sagewk> dantman: i think we have the security fixed up now
[0:28] <sagewk> there are still a bunch of junk files that need removal tho at http://ceph.newdream.net/wiki/Special:ListFiles
[0:33] <Dantman> Still getting a bit of spam in the rc too... unless you haven't turned stuff on yet
[0:37] <cmccabe> I'm having trouble overwriting objects in rgw
[0:37] <cmccabe> I keep doing a put on this one object and it is not changing...
[0:38] * Meths_ (rift@customer381.pool1.unallocated-106-192.orangehomedsl.co.uk) has joined #ceph
[0:41] * allsystemsarego (~allsystem@188.27.164.67) Quit (Quit: Leaving)
[0:43] <cmccabe> I have looked at the pool via radostool and the object is unchanged
[0:45] * Meths (rift@customer5994.pool1.unallocated-106-192.orangehomedsl.co.uk) Quit (Ping timeout: 480 seconds)
[0:46] * votz (~votz@dhcp0020.grt.resnet.group.UPENN.EDU) has joined #ceph
[0:46] * votz_ (~votz@dhcp0020.grt.resnet.group.upenn.edu) has joined #ceph
[0:48] * votz_ (~votz@dhcp0020.grt.resnet.group.upenn.edu) Quit ()
[0:50] <sagewk> dantman: spam where?
[0:50] <Dantman> sagewk, http://ceph.newdream.net/wiki/Special:RecentChanges
[0:51] <Dantman> http://ceph.newdream.net/wiki/16_Day_Diet http://ceph.newdream.net/wiki/Reverse_Phone_Lookup_Services etc...
[0:51] <sagewk> argh
[0:51] <sagewk> k letting the admin know
[0:54] <Dantman> Pretty sure I've dealt with that format of spambot before
[1:11] <sagewk> dantman: ok should be gone.
[1:11] <sagewk> he added some additional filters
[2:21] * greglap (~Adium@166.205.138.71) has joined #ceph
[2:23] * Tv (~Tv|work@ip-66-33-206-8.dreamhost.com) Quit (Ping timeout: 480 seconds)
[2:26] * DJLee (82d8d198@ircip1.mibbit.com) has joined #ceph
[2:43] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) Quit (Quit: This computer has gone to sleep)
[2:48] * joshd (~joshd@ip-66-33-206-8.dreamhost.com) Quit (Quit: Leaving.)
[3:06] * bchrisman (~Adium@70-35-37-146.static.wiline.com) Quit (Quit: Leaving.)
[3:13] * greglap (~Adium@166.205.138.71) Quit (Read error: Connection reset by peer)
[3:20] * cmccabe (~cmccabe@c-24-23-254-199.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[3:25] * greglap (~Adium@cpe-76-170-84-245.socal.res.rr.com) has joined #ceph
[3:35] * DJLee (82d8d198@ircip1.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[4:34] * samsung (~samsung@111.160.209.226) has joined #ceph
[4:48] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) has joined #ceph
[5:02] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) has joined #ceph
[5:13] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) Quit (Quit: This computer has gone to sleep)
[7:44] * neurodrone (~neurodron@cpe-76-180-162-12.buffalo.res.rr.com) Quit (Ping timeout: 480 seconds)
[7:47] * neurodrone (~neurodron@cpe-76-180-162-12.buffalo.res.rr.com) has joined #ceph
[8:01] * jiqiren (~jiqiren@c-67-188-179-41.hsd1.ca.comcast.net) has joined #ceph
[8:01] * jiqiren (~jiqiren@c-67-188-179-41.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[8:52] * allsystemsarego (~allsystem@188.27.164.67) has joined #ceph
[9:08] * Dantman (~dantman@S0106001eec4a8147.vs.shawcable.net) Quit (Ping timeout: 480 seconds)
[9:35] * Dantman (~dantman@S0106001eec4a8147.vs.shawcable.net) has joined #ceph
[9:46] * hijacker_ (~hijacker@213.91.163.5) Quit (Remote host closed the connection)
[9:49] * Meths_ is now known as Meths
[9:56] * neurodrone (~neurodron@cpe-76-180-162-12.buffalo.res.rr.com) Quit (Quit: neurodrone)
[9:59] * Yoric (~David@213.144.210.93) has joined #ceph
[10:05] * Yoric (~David@213.144.210.93) Quit (Quit: Yoric)
[10:05] * Yoric (~David@213.144.210.93) has joined #ceph
[10:05] * Yoric (~David@213.144.210.93) has left #ceph
[11:49] * hijacker (~hijacker@213.91.163.5) has joined #ceph
[14:08] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) has joined #ceph
[14:13] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) Quit ()
[14:19] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) has joined #ceph
[14:27] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) Quit (Quit: This computer has gone to sleep)
[16:29] * samsung (~samsung@111.160.209.226) Quit (Quit: Leaving)
[16:35] * neurodrone (~neurodron@cpe-76-180-162-12.buffalo.res.rr.com) has joined #ceph
[17:04] * neurodrone (~neurodron@cpe-76-180-162-12.buffalo.res.rr.com) Quit (Quit: neurodrone)
[17:06] * neurodrone (~neurodron@cpe-76-180-162-12.buffalo.res.rr.com) has joined #ceph
[17:06] * neurodrone (~neurodron@cpe-76-180-162-12.buffalo.res.rr.com) Quit ()
[17:38] * greglap (~Adium@cpe-76-170-84-245.socal.res.rr.com) Quit (Quit: Leaving.)
[17:40] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:52] * greglap (~Adium@166.205.137.227) has joined #ceph
[17:54] * neurodrone (~neurodron@dhcp215-232.wireless.buffalo.edu) has joined #ceph
[18:01] * neurodrone (~neurodron@dhcp215-232.wireless.buffalo.edu) Quit (Quit: neurodrone)
[18:19] * Tv (~Tv|work@ip-66-33-206-8.dreamhost.com) has joined #ceph
[18:22] * joshd (~joshd@ip-66-33-206-8.dreamhost.com) has joined #ceph
[18:30] * bchrisman (~Adium@70-35-37-146.static.wiline.com) has joined #ceph
[18:37] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) has joined #ceph
[18:49] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[18:51] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) Quit (Quit: This computer has gone to sleep)
[18:52] * greglap (~Adium@166.205.137.227) Quit (Ping timeout: 480 seconds)
[19:04] * neurodrone (~neurodron@dhcp205-068.wireless.buffalo.edu) has joined #ceph
[19:04] * greglap (~Adium@ip-66-33-206-8.dreamhost.com) has joined #ceph
[19:22] * cmccabe (~cmccabe@208.80.64.121) has joined #ceph
[19:28] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[19:42] * breed (~breed@lo6.cfw-a-gci.greatamerica.corp.yahoo.com) has joined #ceph
[19:43] <breed> hey when we use the rados command, how do we specify authentication information?
[19:49] <Tv> breed: i'd expect it to read -c ceph.conf, find the keyring mentioned there, use the client.admin key
[19:49] <gregaf> or use the -k option to define a keyring location, and --user-name to define the user
[19:49] <Tv> breed: you can pass --name=client.foo to use another key
[19:49] <gregaf> (there's some single-letter shortcut for user but I don't remember what it is)
[19:50] <Tv> --user-name? that's new to me
[19:50] <gregaf> oh, no, I'm wrong
[19:50] <gregaf> --name it is
[19:50] <Tv> yeah grep agrees with me ;)
[19:50] <Tv> $ ./rados --name=foo lspools
[19:50] <Tv> You must pass a string of the form ID.TYPE to the --name option.
[19:51] <Tv> shouldn't that be TYPE.ID ?
[19:51] <breed> the name corresponds to the name in the keyring right?
[19:51] <gregaf> yeah, although I don't remember if it requires the prepended "client." or not...Tv?
[19:53] <cmccabe> --name requires <type>.<id>
[19:53] <cmccabe> you can also specify just id with --id
[19:53] <Tv> gregaf: one of the hardcoded types
[19:54] <Tv> gregaf: and some things force it to client even if you try to use something else
[19:54] <cmccabe> I'm pretty sure that --name overrides anything the program would do
[19:55] <Tv> cmccabe: feel free to be, i tried to use client.* for kclient and cfuse.* for fuse, and it insisted on client.*
[19:55] <cmccabe> "cfuse" is not a type
[19:56] <cmccabe> 5 is the number of entity types, and the number of entity types shall be 5
[19:56] <cmccabe> in include/msgr.h
[19:56] <cmccabe> mon, mds, osd, client, auth
[19:56] <Tv> hah
[19:56] <Tv> zee hoooly handgranade
[19:56] <cmccabe> I guess it should warn you rather than just silently defaulting to client
[19:56] <Tv> thy shall count to three, three shall be the number you count to, no more no less
[19:56] <cmccabe> but that was the traditional behavior and I kept it
[19:57] <cmccabe> 5 is right out
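Putting that exchange together: --name expects the TYPE.ID form (the error text quoted above has it backwards), and --id supplies just the ID, with the type defaulting to client. A minimal sketch, with the keyring path purely illustrative:

    $ rados -k /etc/ceph/keyring --name=client.admin lspools
    $ rados -k /etc/ceph/keyring --id admin lspools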
[19:57] <breed> we are trying to run rados bench 10 write and we are getting client.admin authentication error Operation not permitted
[19:57] <breed> is that because of a bad key or some acl or something?
[19:58] <Tv> breed: sounds like it
[19:58] <breed> actually i was hoping to narrow it down :)
[19:59] <Tv> breed: can you write to the pool at all?
[20:00] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) has joined #ceph
[20:00] <Tv> breed: as in, rados --pool=bar put foo /etc/motd
[20:00] <breed> no we are getting authentication errors
[20:01] <breed> is there a wiki page that describes how cephx works? we had it working at one time
[20:01] <Tv> breed: is this with vstart.sh or your own ceph.conf? does it point to the right keyring, can you read the keyring, does it contain a key for client.admin
[20:02] <Tv> breed: there's http://ceph.newdream.net/wiki/Cluster_configuration#Cephx_auth but it's quite brief
[20:02] <breed> our own ceph.conf
[20:02] <breed> yeah i saw that. it is very brief! it would be nice to have a bit of an overview of how it works and how to debug problems
[20:03] <Tv> breed: yeah and we need better error messages
[20:03] <Tv> breed: does your ceph.conf have "auth supported = cephx"?
[20:04] <gregaf> that really should be all you need to make it work if you're just setting up the cluster and then trying to access it
[20:04] <breed> yes
[20:04] <gregaf> did you previously run another cluster on this/these machines?
[20:04] <breed> but i think we might have messed up our keys
[20:05] <Tv> breed: the keyring files are nice & readable, you could eyeball them to make sure
[20:05] <gregaf> there are default search paths for the ceph.conf file and we've had a few people lately who had old conf files getting read by mistake
[20:05] <Tv> breed: also note that there's an in-memory copy, so if you edit the keys, you might want to restart the daemons just to be sure
[20:05] <breed> actually we just realized that we are running them out of the same nfs directory, so all osds and our client are sharing the same .ceph_keyring file
[20:05] <breed> is that a problem?
[20:06] <Tv> breed: that should actually make it easier, as long as the file hasn't been clobbered
[20:06] <Tv> breed: does it have a section [client.admin]?
[20:06] <breed> yes
[20:06] <breed> so is the key is a shared secret?
[20:06] <gregaf> Tv: I thought mkcephfs just gave each node its own key, did that change at some point?
[20:06] <Tv> breed: yes
[20:07] <Tv> breed: does the key there match "ceph auth list" output?
[20:07] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) has joined #ceph
[20:07] <breed> i'll check we are restarting everything
[20:07] <Tv> gregaf: osds etc get their own keys, but rados tool will use client.admin
[20:08] <gregaf> I know that, what I mean is that I thought each OSD got a keyring file containing its key and nobody else's
[20:08] <Tv> gregaf: yeah there's a chance the keyring file got clobbered because of multiple writers, and no longer contains all the necessary keys
[20:09] <Tv> actually, if the client.admin key is messed up, even "ceph health" won't work anymore
[20:09] <breed> ok so the .ceph_keyring should have an entry for each osd?
[20:09] <Tv> breed: can you run any ceph commands anywhere? if yes, you should be able to run the bench from there
[20:09] <gregaf> breed: yes
[20:10] <gregaf> or at least everybody using it should have an entry or things will break
[20:10] <Tv> well, there might be multiple keyrings that together hold all the keys
[20:10] <gregaf> I'm not sure what will happen over NFS as I think the config and setup has changed since I last spent time on it
[20:12] <breed> yeah, it looks like we have really messed up our keys. we will try running without authentication for now
[20:12] <Tv> e.g. vstart.sh writes osd-specific keyrings to dev/osd<id>/keyring, and puts "[osd<id>] keyring=dev/osd<id>/keyring" in ceph.conf
[20:12] <breed> is it sufficient to comment out the auth_supported line?
[20:12] <Tv> breed: that and a full restart of daemons is definitely enough
[20:13] <breed> great thanx!
[20:16] <breed> cool it works now thanx you guys!
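For reference, a minimal sketch of the pieces discussed in this exchange; the key values and paths are illustrative, and every daemon or client that authenticates needs an entry in some keyring it can read:

    ; ceph.conf
    [global]
            auth supported = cephx
            keyring = /path/to/.ceph_keyring

    ; .ceph_keyring
    [client.admin]
            key = AQAexample000000000000000000000000000000==
    [osd0]
            key = AQAexample111111111111111111111111111111==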
[20:18] <cmccabe> tv: do you have an rgw test instance up?
[20:18] <Tv> cmccabe: yup
[20:18] <cmccabe> tv: if so, can you try getting a key that doesn't exist from a bucket?
[20:18] <cmccabe> tv: does it also return success for you?
[20:18] <Tv> cmccabe: let me check
[20:19] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) Quit (Quit: This computer has gone to sleep)
[20:19] <Tv> cmccabe: i have a test for key.get_contents_as_string() to 404 on non-existent key
[20:19] <Tv> cmccabe: and it works right
[20:19] <cmccabe> hmm
[20:20] <cmccabe> somehow it isn't for me
[20:20] <cmccabe> must be a problem on my end, thanks
[20:25] <cmccabe> tv: it looks like the problem is in boto_tool.py (the little tool I wrote)
[20:26] <cmccabe> tv: I assumed that a bad get would cause an exception, but it looks like it doesn't
[20:27] <Tv> cmccabe: raises boto.exception.S3ResponseError for me
[20:27] <cmccabe> let me try with amazon
[20:27] <cmccabe> nope, no exception...
[20:28] <cmccabe> are you using connection::lookup to get the bucket object?
[20:29] <Tv> connection.create_bucket(name) or connection.get_bucket(name)
[20:29] <cmccabe> tv: looks like lookup returns None sometimes, but get_bucket throws
[20:29] <cmccabe> ok... mystery solved
[20:30] <Tv> cmccabe: seeing how bucket.lookup is deprecated, i guess connection.lookup is not exactly their favorite either
[20:30] <cmccabe> except it's still weird that downloading from bucket None throws no error, but whatever...
[20:31] <cmccabe> tv: I find connection.lookup useful when I don't know whether the bucket exists
[20:32] <bchrisman> is there a public full path lookup function in client/Client.cc ?
[20:32] <cmccabe> tv: I suppose the same result could be accomplished by catching whatever exception is thrown by get_bucket
[20:32] <bchrisman> something calling path_walk I think?
[20:32] <cmccabe> tv: but that exception is not clearly specified anywhere and I worried that it might change in the future
[20:34] <Tv> cmccabe: well both connection.lookup and .get_bucket are undocumented
[20:34] <gregaf> bchrisman: based on the inode, you mean?
[20:34] <Tv> cmccabe: but the exception makes very much sense, i'm happy relying on it for now
[20:34] <cmccabe> tv: yeah, I've been reading the source of boto a lot
[20:35] <bchrisman> gregaf: based on full path from root
[20:35] <cmccabe> tv: there were some snippets of code on the web, but I'm not sure if there's a javadoc type thing anywhere
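For reference, a small Python sketch of the boto behaviour being discussed; bucket and object names are made up, and it assumes boto is already configured with credentials for the S3/rgw endpoint:

    import boto
    import boto.exception

    conn = boto.connect_s3()                    # endpoint/credentials come from the boto config

    if conn.lookup('maybe-missing-bucket') is None:    # lookup() returns None for a missing bucket
        print('no such bucket')

    bucket = conn.get_bucket('existing-bucket')        # get_bucket() raises S3ResponseError instead

    if bucket.get_key('no-such-object') is None:       # get_key() returns None for a missing key
        print('no such key')

    try:
        bucket.new_key('no-such-object').get_contents_as_string()
    except boto.exception.S3ResponseError as e:
        print(e.status)                                # 404, rather than silently succeeding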
[20:35] <Tv> bchrisman: i don't know that area well yet but isn't path_walk exactly that?
[20:35] <bchrisman> gregaf: it looks like a lot of public methods like chown etc call path_walk
[20:35] <gregaf> but I mean, you want to pass in an ino and get back its full path from root?
[20:35] <bchrisman> Tv: path_walk is not public
[20:35] <Tv> ah
[20:35] * breed (~breed@lo6.cfw-a-gci.greatamerica.corp.yahoo.com) has left #ceph
[20:35] <bchrisman> gregaf: hold on.. I'll explain context on the question
[20:36] <bchrisman> gregaf: I want to implement libceph.cc/getxattr()… that should take a path to a file and other arguments
[20:37] <bchrisman> gregaf: the ll_* methods used by fuse track some internal inode.. but libceph doesn't need to reference inodes
[20:37] <Tv> bchrisman: wouldn't that be just like setattr in Client.h/.cc
[20:38] <Tv> hmm there's Client::_getxattr
[20:38] <bchrisman> Tv: yeah.. _getxattr is what I was going to call…effectively… after calling path_walk
[20:38] <gregaf> ah, there just isn't any kind of path-based getxattr
[20:38] <bchrisman> so.. path_walk -> get Inode * -> *xattr
[20:39] <gregaf> isn't libceph just a thin wrapper around Client?
[20:39] <bchrisman> I'm hesitant to add it to Client.cc just because… well.. that class is ginormous...
[20:39] <Tv> yeah it needs to be added to Client, that's it
[20:39] <bchrisman> heh… ok
[20:39] <gregaf> heh
[20:39] <bchrisman> I'll add that.
[20:39] * morse (~morse@supercomputing.univpm.it) Quit (Ping timeout: 480 seconds)
[20:39] <Tv> gregaf: my understanding is that libceph is C wrapper for C++ Client
[20:39] <gregaf> ah, I suppose it does that too
[20:39] <Tv> so if you need a new operation in libceph, that's not in Client either, first add it in Client, then in libceph
[20:40] <gregaf> that's what I was going for, but then I wondered if maybe there was some real functionality in libceph that I'd forgotten
[20:40] <bchrisman> good enough.. I'll implement in Client then wrap in libceph like the other methods
[20:41] <Tv> i wish there'd be a mock protocol implementation and you could do Client operations and check that they send the right messages, react the right way to incoming messages :(
[20:41] <bchrisman> Client is a good candidate for clean/split/refactor/whatever at some point.. :)
[20:41] * breed (~breed@lo6.cfw-a-gci.greatamerica.corp.yahoo.com) has joined #ceph
[20:42] * breed (~breed@lo6.cfw-a-gci.greatamerica.corp.yahoo.com) has left #ceph
[20:42] <gregaf> I suspect you just say that because it's the part of the code you've been working in…there's a long list of good candidates :/
[20:42] <bchrisman> absolutely! :)
[20:43] <bchrisman> well.. I wouldn't suggest it if I weren't willing to put it on our priority list at some point.. :)
[20:49] * jjchen (~jjchen@lo4.cfw-a-gci.greatamerica.corp.yahoo.com) has joined #ceph
[20:50] <jjchen> hi how do I set number of replicas using crushtool
[20:53] <Tv> jjchen: i may be wrong (i'm still new to that part of the code), but i think you need to edit the crushmap and repeat these lines:
[20:53] <Tv> step take root
[20:53] <Tv> step choose firstn 0 type device
[20:54] <Tv> step emit
[20:54] <Tv> or perhaps just edit the firstn number, not sure about that
[20:55] <Tv> hmm not so sure about that after all
[20:55] <Tv> clearly i need to play with this, myself ;)
[20:55] <jjchen> I saw crushtool has an option num_rep. I tried to set it using the tool directly and also dumped out the file and added a line num_rep 3 in the map; neither worked
[20:58] <Tv> sagewk: would you have a moment to clarify crush usage?
[20:58] <sagewk> sure
[20:59] <sagewk> the crush rules are normally independent of the specific replica count
[20:59] <sagewk> each rule has a min_rep and max_rep value (or similar) that set the bounds. if you specify 0, that means N (fed in by the osd mapping code in ceph)
[20:59] <sagewk> -1 is N-1, -2 is N-2, etc.
[20:59] <Tv> ok so how do you adjust the replication level?
[21:00] <sagewk> http://ceph.newdream.net/wiki/Adjusting_replication_level
[21:00] <Tv> oh hey, docs!
[21:00] <Tv> ah i was looking in rados..
[21:00] <sagewk> :)
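For reference, a hedged sketch of how those pieces fit together; the rule field names and the per-pool command are approximations of what the linked pages describe:

    # decompiled crush rule -- "firstn 0" means "take N, the pool's replica count"
    rule data {
            ruleset 0
            type replicated
            min_size 1
            max_size 10
            step take root
            step choose firstn 0 type device
            step emit
    }

    # the replication level itself is then changed per pool, e.g.
    ceph osd pool set data size 3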
[21:01] <Tv> oh yeah and http://ceph.newdream.net/wiki/Monitor_commands#pool
[21:01] <Tv> there's lots of hidden depth in the ceph command
[21:02] <Tv> my agenda has "make it more discoverable" on it.. ;)
[21:02] <sagewk> btw that particular page is more likely to be out of date
[21:02] <sagewk> yeah
[21:02] <sagewk> on the main wiki page there's a admin section with most of the common cluster stuff people want to do
[21:02] <sagewk> adding/removing osds, monitors, etc.
[21:03] <Tv> yeah i think it's more a question of organizing the information somehow better
[21:03] <Tv> because honestly even when i know it's there, i rarely stumble on it
[21:04] <sagewk> yep
[21:04] <Tv> i think a single, master, document table of contents will help
[21:04] <Tv> i have good tools for that, just need the time ;)
[21:05] <Tv> lunch time..
[21:08] * neurodrone (~neurodron@dhcp205-068.wireless.buffalo.edu) Quit (Quit: neurodrone)
[21:53] <Tv> i dislike it when tests start working all of a sudden.. :-/
[21:53] <Tv> i can always hope that some of the commits in the meanwhile actually fixed the bug, and didn't just hide it
[21:54] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) has joined #ceph
[22:08] * neurodrone (~neurodron@dhcp212-193.wireless.buffalo.edu) has joined #ceph
[22:11] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) Quit (Quit: This computer has gone to sleep)
[22:18] * greglap (~Adium@ip-66-33-206-8.dreamhost.com) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * cclien (~cclien@ec2-175-41-146-71.ap-southeast-1.compute.amazonaws.com) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * lidongyang (~lidongyan@222.126.194.154) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * sjust1 (~sam@ip-66-33-206-8.dreamhost.com) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * johnl (~johnl@109.107.34.14) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * neurodrone (~neurodron@dhcp212-193.wireless.buffalo.edu) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * eternaleye (~eternaley@195.215.30.181) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * ghaskins (~ghaskins@66-189-113-47.dhcp.oxfr.ma.charter.com) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * Guest576 (quasselcor@bas11-montreal02-1128535815.dsl.bell.ca) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * gregaf (~Adium@ip-66-33-206-8.dreamhost.com) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * jjchen (~jjchen@lo4.cfw-a-gci.greatamerica.corp.yahoo.com) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * cmccabe (~cmccabe@208.80.64.121) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * Tv (~Tv|work@ip-66-33-206-8.dreamhost.com) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * Meths (rift@customer381.pool1.unallocated-106-192.orangehomedsl.co.uk) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * lxo (~aoliva@201.82.32.113) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * todin (tuxadero@kudu.in-berlin.de) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * Jiaju (~jjzhang@222.126.194.154) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * iggy (~iggy@theiggy.com) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * nolan (~nolan@phong.sigbus.net) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * bchrisman (~Adium@70-35-37-146.static.wiline.com) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * Dantman (~dantman@S0106001eec4a8147.vs.shawcable.net) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * monrad-51468 (~mmk@domitian.tdx.dk) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * josef (~seven@nat-pool-rdu.redhat.com) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * [ack]_ (ANONYMOUS@208.89.50.168) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * atgeek (~atg@please.dont.hacktheinter.net) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * sage (~sage@dsl092-035-022.lax1.dsl.speakeasy.net) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * __jt__ (~james@jamestaylor.org) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * joshd (~joshd@ip-66-33-206-8.dreamhost.com) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * allsystemsarego (~allsystem@188.27.164.67) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * votz (~votz@dhcp0020.grt.resnet.group.UPENN.EDU) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * darkfaded (~floh@188.40.175.2) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * tjikkun (~tjikkun@2001:7b8:356:0:204:bff:fe80:8080) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * sagewk (~sage@ip-66-33-206-8.dreamhost.com) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * MK_FG (~MK_FG@188.226.51.71) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * Anticimex (anticimex@netforce.csbnet.se) Quit (charon.oftc.net synthon.oftc.net)
[22:18] * pruby (~tim@leibniz.catalyst.net.nz) Quit (charon.oftc.net synthon.oftc.net)
[22:19] * gregaf (~Adium@ip-66-33-206-8.dreamhost.com) has joined #ceph
[22:19] * Guest576 (quasselcor@bas11-montreal02-1128535815.dsl.bell.ca) has joined #ceph
[22:19] * ghaskins (~ghaskins@66-189-113-47.dhcp.oxfr.ma.charter.com) has joined #ceph
[22:19] * eternaleye (~eternaley@195.215.30.181) has joined #ceph
[22:19] * neurodrone (~neurodron@dhcp212-193.wireless.buffalo.edu) has joined #ceph
[22:19] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) has joined #ceph
[22:19] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) has joined #ceph
[22:19] * pruby (~tim@leibniz.catalyst.net.nz) has joined #ceph
[22:19] * Anticimex (anticimex@netforce.csbnet.se) has joined #ceph
[22:19] * MK_FG (~MK_FG@188.226.51.71) has joined #ceph
[22:19] * sagewk (~sage@ip-66-33-206-8.dreamhost.com) has joined #ceph
[22:19] * tjikkun (~tjikkun@2001:7b8:356:0:204:bff:fe80:8080) has joined #ceph
[22:19] * darkfaded (~floh@188.40.175.2) has joined #ceph
[22:19] * votz (~votz@dhcp0020.grt.resnet.group.UPENN.EDU) has joined #ceph
[22:19] * allsystemsarego (~allsystem@188.27.164.67) has joined #ceph
[22:19] * joshd (~joshd@ip-66-33-206-8.dreamhost.com) has joined #ceph
[22:19] * bchrisman (~Adium@70-35-37-146.static.wiline.com) has joined #ceph
[22:19] * Dantman (~dantman@S0106001eec4a8147.vs.shawcable.net) has joined #ceph
[22:19] * monrad-51468 (~mmk@domitian.tdx.dk) has joined #ceph
[22:19] * josef (~seven@nat-pool-rdu.redhat.com) has joined #ceph
[22:19] * [ack]_ (ANONYMOUS@208.89.50.168) has joined #ceph
[22:19] * atgeek (~atg@please.dont.hacktheinter.net) has joined #ceph
[22:19] * sage (~sage@dsl092-035-022.lax1.dsl.speakeasy.net) has joined #ceph
[22:19] * __jt__ (~james@jamestaylor.org) has joined #ceph
[22:20] * jjchen (~jjchen@lo4.cfw-a-gci.greatamerica.corp.yahoo.com) has joined #ceph
[22:20] * cmccabe (~cmccabe@208.80.64.121) has joined #ceph
[22:20] * Tv (~Tv|work@ip-66-33-206-8.dreamhost.com) has joined #ceph
[22:20] * Meths (rift@customer381.pool1.unallocated-106-192.orangehomedsl.co.uk) has joined #ceph
[22:20] * lxo (~aoliva@201.82.32.113) has joined #ceph
[22:20] * todin (tuxadero@kudu.in-berlin.de) has joined #ceph
[22:20] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) has joined #ceph
[22:20] * greglap (~Adium@ip-66-33-206-8.dreamhost.com) has joined #ceph
[22:20] * cclien (~cclien@ec2-175-41-146-71.ap-southeast-1.compute.amazonaws.com) has joined #ceph
[22:20] * lidongyang (~lidongyan@222.126.194.154) has joined #ceph
[22:20] * sjust1 (~sam@ip-66-33-206-8.dreamhost.com) has joined #ceph
[22:20] * johnl (~johnl@109.107.34.14) has joined #ceph
[22:20] * Jiaju (~jjzhang@222.126.194.154) has joined #ceph
[22:20] * iggy (~iggy@theiggy.com) has joined #ceph
[22:20] * nolan (~nolan@phong.sigbus.net) has joined #ceph
[22:34] * lxo (~aoliva@201.82.32.113) Quit (Read error: Connection reset by peer)
[22:35] * lxo (~aoliva@201.82.32.113) has joined #ceph
[22:42] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) has joined #ceph
[22:43] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) Quit ()
[22:54] * neurodrone (~neurodron@dhcp212-193.wireless.buffalo.edu) Quit (Quit: neurodrone)
[23:27] * bchrisman (~Adium@70-35-37-146.static.wiline.com) has left #ceph
[23:28] * bchrisman (~Adium@70-35-37-146.static.wiline.com) has joined #ceph
[23:37] <bchrisman> gregaf: so I should be able to do path_walk, get the Inode * out of that, and call vino() there to get a vinodeno_t, and pass that to ll_getxattr(…) or other xattr operators just like the cfuse client does?
[23:38] <gregaf> well at that point just use the internal _getxattr interface, right?
[23:39] <bchrisman> the ll_* routines implement locking that I'd need to pull into my Client::getxattr() stuff?
[23:39] <gregaf> no need to transform data types back and forth
[23:39] <bchrisman> was looking to reuse as much as possible..
[23:39] <gregaf> oh, yeah, I guess they do do that
[23:39] <gregaf> but the public methods all implement locking and that's probably a tradition to maintain
[23:39] <bchrisman> especially where locking comes in.. probably best to have as few points of entry as possible?
[23:40] <bchrisman> ok.. can do that...
[23:40] <bchrisman> less marshalling/unmarshalling of data..
[23:46] * allsystemsarego (~allsystem@188.27.164.67) Quit (Quit: Leaving)
[23:48] <bchrisman> as for locking, I'm guessing Mutex::Locker lock(client_lock); has the lock go away once scope is left? I see no unlock calls in these Client methods..
[23:49] <gregaf> bchrisman: yeah, it's just an object that takes the lock in constructor and unlocks it in destructor
[23:49] <bchrisman> ahh cool
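For reference, the path-based wrapper discussed above would look roughly like this; a sketch only, since the exact path_walk/_getxattr signatures in Client.h may differ slightly:

    int Client::getxattr(const char *path, const char *name, void *value, size_t size)
    {
      Mutex::Locker lock(client_lock);   // taken in the constructor, released when it leaves scope
      filepath fp(path);
      Inode *in;
      int r = path_walk(fp, &in);        // resolve the path to an Inode*
      if (r < 0)
        return r;
      return _getxattr(in, name, value, size);
    }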
[23:53] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) Quit (Quit: Yoric)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.