#ceph IRC Log

IRC Log for 2011-03-24

Timestamps are in GMT/BST.

[0:04] * cmccabe (~cmccabe@c-24-23-254-199.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[0:07] <Tv> well well well
[0:07] <Tv> it seems i can crash kclient easily
[0:07] <Tv> even the one in master
[0:11] * cmccabe (~cmccabe@c-24-23-254-199.hsd1.ca.comcast.net) has joined #ceph
[0:36] * bchrisman (~Adium@70-35-37-146.static.wiline.com) Quit (Quit: Leaving.)
[1:01] * samsung (~samsung@61.184.205.46) Quit (Ping timeout: 480 seconds)
[1:16] * Tv (~Tv|work@ip-66-33-206-8.dreamhost.com) Quit (Ping timeout: 480 seconds)
[1:31] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) has joined #ceph
[1:36] <darkfader> hey all, thanks for your FUSE advice. I had the great opportunity to flame fuse and introduce zfsonlinux the next day. we couldn't do ceph as the training was just 3 days, but i'll definitely edit the course script for the next time and kick out iscsi in favor of ceph.
[1:37] <cmccabe> darkfader: glad we were helpful. I don't think any of us would flame fuse per se... it's just great for doing certain things, not so good for others.
[1:38] <darkfader> hehehe
[1:39] <darkfader> like i said, this was for guys who have defined availability and throughput requirements
[1:39] <cmccabe> ok
[1:39] <darkfader> so giving them a hint at "zfs might be stable in a year" was great, i couldn't have said that via FUSE
[1:39] <darkfader> but I still showed a little about ceph
[1:40] <darkfader> i think the scope should include what things will look like 1-2 years in the future, and there i see ceph kicking ass :)
[1:40] <darkfader> i skipped like 15 sections about iscsiadm client handling
[1:40] <darkfader> they can read that in manuals
[1:41] <darkfader> cmccabe: the funniest thing was today we found out that one of the guys is in the team i led at $old_job
[1:41] <darkfader> i think normally a course doesn't include advice like "get the hell out of there"
[1:42] <cmccabe> depends on the course... :)
[1:42] <cmccabe> kind of interesting that oracle is working on both zfs and btrfs
[1:42] <darkfader> cmccabe: yeah i went into that too
[1:42] <darkfader> i think they can afford to run both things
[1:42] <darkfader> and btrfs is legally much safer
[1:43] <darkfader> zfs is (sorry, i like netapp) so definitely stolen :)
[1:43] <cmccabe> well, for oracle, they're both safe, since it owns the sun IP
[1:43] <darkfader> cmccabe: on that side, yeah
[1:43] <darkfader> and i think they keep running them both - like mysql + oracle
[1:44] <cmccabe> I guess netapp settled with oracle in 2010
[1:44] <cmccabe> over the supposed zfs patents
[1:44] <darkfader> but if zfs gets stable on linux it would be what i'd like to run under ceph (because of the L2ARC caching)
[1:44] <darkfader> cmccabe: did they? oops :(
[1:44] <cmccabe> I don't think either side admitted fault
[1:44] <darkfader> hehehe
[1:44] <cmccabe> there was a countersuit of course
[1:45] <darkfader> they worked together on pNFS and DAFS so maybe they pushed it aside
[1:45] <darkfader> pity i missed that :/
[1:46] <darkfader> sun always hated netapp because the netapp guys went to do their own thing... would be good for all of us if it's finally settled
[1:46] <darkfader> IP should not get in the way like that...
[1:46] <cmccabe> was chris mason at oracle when he started btrfs?
[1:47] <darkfader> i dunno.
[1:47] <darkfader> but they ran it for many years now...
[1:47] <cmccabe> I think btrfs began at oracle. One of the few open source projects it actually did well with.
[1:48] <darkfader> i think their oldest projects were raw devices, then aio, then ocfs then btrfs?
[1:48] <darkfader> i might be too trusting there, but i trust the oracle OSS projects a lot more than IBM or redhat
[1:49] <darkfader> i.e. redhat cashing in if you wanna use XFS instead of ext3 really pissed me off
[1:50] <cmccabe> that's a rare opinion these days
[1:50] <darkfader> hehe
[1:50] <cmccabe> most developers view oracle's attitude towards open source as pretty negative
[1:51] <darkfader> well i surely notice how they piss off the devs of any project they inherited
[1:51] <cmccabe> I think ever since they started trying to assert the Java patents, people have been pretty upset with them
[1:51] <cmccabe> "alleged Java patents"
[1:52] <darkfader> cmccabe: they have been assholes. i'll go with that any day. but i think there are "certain parties" who set them (and the OSS world) up for a lot of FUD
[1:52] <darkfader> and oracle have been idiots and fallen for it
[1:52] <cmccabe> I think both red hat and oracle have an incentive to sell you products and services you may not need
[1:52] <darkfader> hehe
[1:52] <cmccabe> but oracle has much more lock-in with their proprietary database so I would be more wary of them
[1:52] <darkfader> cmccabe: my new job is 100% linux and i get so much FUD right now that i'm going allergic
[1:52] <cmccabe> I mean, that database and the associated middleware is still pretty much all their revenue, right?
[1:53] <darkfader> he's really a genius in linux
[1:53] <darkfader> but he doesn't even know UNIX95
[1:53] <darkfader> so at that point I really wonder where some FUD comes from
[1:53] <cmccabe> FUD about what?
[1:53] <darkfader> i tend to say not redhat, but IBM
[1:53] <cmccabe> about oracle?
[1:53] <darkfader> the short version "unix evil, linux easy"
[1:54] <cmccabe> heh
[1:54] <darkfader> about oracle it was like no one can install it, why don't they use RPM
[1:54] <darkfader> and I was like .. er... oracle had native installers until 7.x, then they made that sucky universal installer to get rid of all portability issues
[1:54] <cmccabe> I think tarball vs. rpm is probably the least of your configuration worries when you're a DBA
[1:55] <darkfader> and (a linux person) he won't see portability issues since he just sees linux
[1:55] <darkfader> cmccabe: many many DBAs can't even configure response files for the installer
[1:55] <darkfader> but yeah, it's really just a side part of their job
[1:56] <darkfader> but e.g. try to find someone who can set up audit logging on mysql tables
[1:56] <darkfader> d'oh
[1:56] <darkfader> so imho... we still gotta crawl to where oracle shat.
[1:56] <gregaf> bchrisman: did you get those logs, or do you have a timeline for them?
[1:56] <gregaf> (not in a hurry, just trying to plan :))
[1:57] <darkfader> are there any current "slideshows" on ceph?
[1:57] <darkfader> for not-filesystem-fan audience?
[1:57] <gregaf> there's the talk sage gave at SCALE targeted towards admins, I think
[1:58] <cmccabe> darkfader: sage gave some talks
[1:59] <darkfader> cmccabe: i know i watched them :)
[2:00] <darkfader> but thats like... for people like us? :)
[2:00] <darkfader> or at least for people who attend SCALE
[2:00] <gregaf> not much of a target audience among people who aren't like that...
[2:00] <darkfader> gregaf: may i digress? :)
[2:01] <darkfader> like my old colleagues, they're so deep into vxvm/real lvm they make the lvm2 devs look like idiots
[2:01] <gregaf> well, I guess we could write bs white papers for your boss saying "Ceph is awesome! It is seamless and exploits the power of the cloud and will guarantee the safety of your data at 1/5 the cost of anybody else!"
[2:01] <gregaf> but we're not ready for that yet
[2:01] <darkfader> but they don't care about fs intrinsics the same way
[2:01] <darkfader> (sorry for the typos)
[2:02] <darkfader> gregaf: no, not like that... hmm
[2:02] <cmccabe> yeah, I think we have a little bit to go before we could put out a more non-technical whitepaper
[2:02] <cmccabe> at the very least, we probably would need to do a bunch of tuning
[2:02] <gregaf> there aren't too many slide talks
[2:02] <gregaf> if you just mean stuff like the block storage without the fs bits, I think there are a few older papers just on rados
[2:03] <darkfader> what i think of is for an audience of the kind "we have 1-2PB of storage in SAN luns and failover is chaotic and we don't scale out. tell us about why ceph would be different"
[2:03] <cmccabe> gregaf: that's still more of a tech report
[2:04] <darkfader> gregaf: i know the papers/abstracts. they're cool if you sit at a university and have spare time, but they're not a condensed summary for clueful people
[2:04] <gregaf> heh
[2:04] <cmccabe> as our previous president said "you have to keep repeating things to catapult the propaganda"
[2:04] <darkfader> maybe i can do something. just not sure. but right now there's a little gap, and i don't mean for execs ...
[2:04] <darkfader> cmccabe: yuck :)
[2:05] <gregaf> I begin to see what you're getting at
[2:05] <cmccabe> reminds me of the yogi berra sayings... so memorable
[2:05] <gregaf> I suspect we'll start producing those in the next couple months
[2:05] <gregaf> but not really sure; I'm not into the support planning really
[2:06] <darkfader> i'll keep it in mind, maybe i can do something more helpful than wiki updates there. i promise i'll train ceph as a "sneak peek" the next time
[2:06] <darkfader> most people don't expect to be able to do something like that without buying crappy hp lefthand boxes
[2:07] <gregaf> not familiar with those?
[2:07] <darkfader> (one sec) one guy was asking the right questions ... like "how would I build an iscsi target on top of drbd and make it active/active"
[2:08] <darkfader> i kinda said "you already figured, but thats boring and doesn't get you anywhere worth going"
[2:08] <darkfader> gregaf: lefthand was some startup doing scaleout block storage
[2:08] <darkfader> they got eaten by HP and now HP charges $15k per brick. thus companies buy only 2-4 bricks and find it's slow
[2:09] <darkfader> kinda stillborn by now
[2:09] <gregaf> ah
[2:09] <gregaf> hmm, looks like you could run just the software via VMWare too
[2:10] <cmccabe> "network raid"
[2:10] <darkfader> if they'd buy 40 bricks it would surely scale and be fast. but the HP "bonus" is just too high
[2:10] <cmccabe> I hope those stripes are pretty big!
[2:10] <darkfader> gregaf: i d/led the vmware thing once
[2:10] <darkfader> you need an eval license from their sales to try it, so i deleted it
[2:10] <darkfader> a friend said the vmware app sucks big time
[2:11] <cmccabe> do the lefthand boxes sit on their own network, or do you put them on the regular corporate network
[2:11] <cmccabe> I mean a lot of clustered systems like isilon's were designed kind of as a pre-packaged cluster inside a rack
[2:11] <darkfader> cmccabe: i honestly don't know. i suspect on the corporate network, as they can be accessed from e.g. vmware....
[2:11] <darkfader> should be unlike isilon who do it in the backend
[2:12] <gregaf> probably you can configure it as you like
[2:12] <darkfader> feel free to find out hehe
[2:12] <cmccabe> for 15,000 I hope they can configure it as I like!
[2:12] <cmccabe> and hopefully I don't have to worry about configuring it
[2:13] <cmccabe> seriously, though, I wouldn't be surprised if maintenance contracts were a big part of the total cost for a product like that.
[2:13] <darkfader> with HP that's always a given
[2:14] <darkfader> they love pricing on TB or FC port count
[2:14] <darkfader> any trap for the customer, they got it.
[2:14] <darkfader> anyway, you didn't even know about it, just forget it again... i'd be surprised if it still exists in 3-4 years
[2:15] * joshd (~joshd@ip-66-33-206-8.dreamhost.com) Quit (Quit: Leaving.)
[2:37] <bchrisman> gregaf: probably tomorrow… got distracted with other stuff dumped on my plate.
[2:37] <gregaf> k, no worries
[2:37] <gregaf> been dealing with the mds all day today anyway
[3:09] * greglap (~Adium@166.205.137.196) has joined #ceph
[3:21] * wungkun (~samsung@58.51.197.101) has joined #ceph
[4:03] * greglap (~Adium@166.205.137.196) Quit (Ping timeout: 480 seconds)
[4:21] * MK_FG (~MK_FG@219.91-157-90.telenet.ru) has joined #ceph
[4:48] * greglap (~Adium@cpe-76-170-84-245.socal.res.rr.com) has joined #ceph
[4:53] * MKFG (~MK_FG@188.226.51.71) has joined #ceph
[4:57] * MK_FG (~MK_FG@219.91-157-90.telenet.ru) Quit (Ping timeout: 480 seconds)
[4:57] * MKFG is now known as MK_FG
[5:11] * lxo (~aoliva@201.82.54.5) Quit (Ping timeout: 480 seconds)
[5:15] * lxo (~aoliva@201.82.54.5) has joined #ceph
[6:10] * sagewk (~sage@ip-66-33-206-8.dreamhost.com) Quit (Read error: Operation timed out)
[6:12] * sjust (~sam@ip-66-33-206-8.dreamhost.com) Quit (Ping timeout: 480 seconds)
[6:12] * gregaf (~Adium@ip-66-33-206-8.dreamhost.com) Quit (Ping timeout: 480 seconds)
[6:13] * yehuda_wk (~quassel@ip-66-33-206-8.dreamhost.com) Quit (Ping timeout: 480 seconds)
[6:28] * yehudasa (~quassel@ip-66-33-206-8.dreamhost.com) has joined #ceph
[6:29] * sjust (~sam@ip-66-33-206-8.dreamhost.com) has joined #ceph
[6:30] * gregaf (~Adium@ip-66-33-206-8.dreamhost.com) has joined #ceph
[6:30] * sagewk (~sage@ip-66-33-206-8.dreamhost.com) has joined #ceph
[7:07] * neurodrone (~neurodron@cpe-76-180-162-12.buffalo.res.rr.com) Quit (Quit: neurodrone)
[7:16] * lidongyang_ (~lidongyan@222.126.194.154) Quit (Read error: Connection reset by peer)
[7:18] * lidongyang (~lidongyan@222.126.194.154) has joined #ceph
[8:16] * allsystemsarego (~allsystem@188.25.132.91) has joined #ceph
[8:20] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) has joined #ceph
[8:47] * ChrRaible (5fd07168@ircip1.mibbit.com) has joined #ceph
[8:47] <ChrRaible> hi
[8:48] <ChrRaible> i got a problem / question...
[8:49] <ChrRaible> is it possible in ceph to set the location of a file to a specific "server / cluster" without setting up special "zones"?
[9:12] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) Quit (Quit: Yoric)
[9:28] <wungkun> ?
[9:29] <wungkun> ChrRaible: you mean you didn't use a btrfs partition?
[10:04] <ChrRaible> i want to define one big filesystem with 4 OSDs and want to say: save file test.txt on "osd1"...
[10:04] * Yoric (~David@213.144.210.93) has joined #ceph
[10:15] <wungkun> maybe we cannot do this
[10:16] <wungkun> an object is presumably mapped to a pg / osd
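
What wungkun is gesturing at: placement in Ceph is computed from the object name, so a file can't simply be pinned to "osd1". A conceptual C++ sketch of the object -> PG -> OSD path; the hash and the fake crush_map_pg() step are illustrative assumptions, not Ceph's actual code:

    // Conceptual sketch: an object name hashes to a placement group (PG),
    // and CRUSH -- not the admin -- maps the PG to OSDs.
    #include <cstdint>
    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    uint32_t pg_for_object(const std::string &name, uint32_t pg_num) {
      // Ceph uses rjenkins hashing; std::hash stands in for the sketch.
      return std::hash<std::string>{}(name) % pg_num;
    }

    std::vector<int> crush_map_pg(uint32_t pgid, int replicas) {
      // Real CRUSH walks the cluster map pseudo-randomly; this fake
      // deterministic pick only shows that placement is computed, not chosen.
      std::vector<int> osds;
      for (int r = 0; r < replicas; ++r)
        osds.push_back(static_cast<int>((pgid + r * 7) % 4));  // pretend 4 OSDs
      return osds;
    }

    int main() {
      uint32_t pg = pg_for_object("test.txt", 128);  // ChrRaible's example file
      for (int osd : crush_map_pg(pg, 2))
        std::cout << "replica on osd" << osd << "\n";
    }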
[10:19] * lxo (~aoliva@201.82.54.5) Quit (Read error: Connection reset by peer)
[10:20] * lxo (~aoliva@201.82.54.5) has joined #ceph
[10:28] * Yoric (~David@213.144.210.93) Quit (Quit: Yoric)
[10:30] * Yoric (~David@80.70.32.140) has joined #ceph
[10:42] * Administrator_ (~samsung@61.184.206.180) has joined #ceph
[10:49] * wungkun (~samsung@58.51.197.101) Quit (Ping timeout: 480 seconds)
[10:52] * Administrator_ is now known as wungkun
[11:06] * damoxc (~damien@94-23-154-182.kimsufi.com) has joined #ceph
[11:11] * Administrator_ (~samsung@61.184.206.180) has joined #ceph
[11:12] * agaran (~agaran@static-78-8-120-176.ssp.dialog.net.pl) has joined #ceph
[11:12] <agaran> hello,
[11:13] <agaran> i haven't tried ceph yet, but maybe somebody can tell me if it's possible to make (by coding i guess) fake nodes which represent read-only stored copies on dvd?
[11:14] * Administrator__ (~samsung@61.184.205.40) has joined #ceph
[11:18] * wungkun (~samsung@61.184.206.180) Quit (Ping timeout: 480 seconds)
[11:21] * Administrator_ (~samsung@61.184.206.180) Quit (Ping timeout: 480 seconds)
[12:36] <lxo> hey, folks, I got a question about some odd behavior I'm observing with standby-replay nodes becoming active
[12:37] <lxo> say the mds that took over after a full restart of the cluster is taking forever to replay the mds journal
[12:37] <lxo> I kill it, and the standby-replay node immediately takes over and becomes active
[12:38] <lxo> does the lengthy job that the killed mds was doing get done by the node that takes over?
[12:38] <lxo> and, if so, why couldn't the killed node have behaved that way so as to become active sooner?
[12:39] <lxo> it's not like the standby-replay node had been active for long
[12:41] * Hugh (~hughmacdo@soho-94-143-249-50.sohonet.co.uk) has joined #ceph
[12:48] * Yoric (~David@80.70.32.140) Quit (Quit: Yoric)
[14:50] * neurodrone (~neurodron@cpe-76-180-162-12.buffalo.res.rr.com) has joined #ceph
[15:01] * agaran (~agaran@static-78-8-120-176.ssp.dialog.net.pl) has left #ceph
[15:10] * Administrator__ (~samsung@61.184.205.40) Quit (Ping timeout: 480 seconds)
[16:01] * ghaskins (~ghaskins@66-189-113-47.dhcp.oxfr.ma.charter.com) Quit (Quit: Leaving)
[16:07] * ghaskins (~ghaskins@66-189-113-47.dhcp.oxfr.ma.charter.com) has joined #ceph
[16:16] * MK_FG (~MK_FG@188.226.51.71) Quit (Quit: o//)
[16:20] * MK_FG (~MK_FG@188.226.51.71) has joined #ceph
[16:21] * ChrRaible (5fd07168@ircip1.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[16:41] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[16:42] * greglap (~Adium@cpe-76-170-84-245.socal.res.rr.com) Quit (Quit: Leaving.)
[17:03] * Tv (~Tv|work@ip-66-33-206-8.dreamhost.com) has joined #ceph
[17:06] * neurodrone (~neurodron@cpe-76-180-162-12.buffalo.res.rr.com) Quit (Quit: neurodrone)
[17:13] * greglap (~Adium@static-72-67-79-74.lsanca.dsl-w.verizon.net) has joined #ceph
[17:15] * greglap1 (~Adium@166.205.138.189) has joined #ceph
[17:18] <greglap1> lxo: hmm, a node in standby-replay should behave the same upon replay as a freshly-started node
[17:18] <greglap1> they're going through the same code...
[17:18] <greglap1> client requests don't ever get lost
[17:19] <greglap1> lxo: can you describe your symptoms a little more?
[17:23] * greglap (~Adium@static-72-67-79-74.lsanca.dsl-w.verizon.net) Quit (Ping timeout: 480 seconds)
[17:38] <Tv> unf autotest server filled 200GB with logs from tests
[17:43] * bchrisman (~Adium@70-35-37-146.static.wiline.com) has joined #ceph
[17:52] * neurodrone (~neurodron@dhcp211-034.wireless.buffalo.edu) has joined #ceph
[17:52] * cmccabe1 (~cmccabe@208.80.64.121) has joined #ceph
[18:08] <lxo> greglap1, I didn't have any specific case in mind, just a worry. it might be that the mds was taking too long to recover because of a frozen btrfs osd, and then, upon restart, the other mds recovered quickly
[18:08] <lxo> thanks for easing my mind ;-)
[18:09] <greglap1> lxo: ah, if it was on a full cluster restart it's also possible that the OSDs were taking a while to peer and the first MDS got stuck waiting on them
[18:11] * joshd1 (~joshd@ip-66-33-206-8.dreamhost.com) has joined #ceph
[18:19] <bchrisman> greglap1: I wanted to check if these commands for expansion are still correct http://ceph.newdream.net/wiki/OSD_cluster_expansion/contraction
[18:20] <wido> bchrisman: I think so
[18:20] <wido> I'm using them all the time
[18:20] <greglap1> bchrisman: they look right, although I don't really use them
[18:20] <bchrisman> I wasn't sure whether the monmap was required, because it seems like if we're passing the ceph.conf location to cosd, then it would know where to grab the monmap from via the ceph.conf.
[18:20] <bchrisman> ok
[18:21] <greglap1> you're setting up the on-disk bits there
[18:21] <greglap1> usually they're done by mkcephfs
[18:21] <bchrisman> yeah.. ok cool
[18:29] <lxo> greglap1, speaking of peering... is it normal for one replicated pg or another to get stuck, like, forever, in crash+replay+peering or some other faulty state after a disk failure that didn't recover before the disk was kicked out for good?
[18:29] <lxo> I suppose it's casdata or rbd, since it doesn't seem to be affecting the distributed filesystem, but it does look odd
[18:29] <greglap1> lxo: we've seen some bugs with that recently
[18:30] <greglap1> I think there was one bug where it was only a notification bug and it actually got cleaned up
[18:30] <greglap1> sjust knows more about that
[18:30] <lxo> any reason for concern (say, better re-create the filesystem) or something that a future upgrade will probably clear up?
[18:31] <greglap1> well you're not going to lose any data that isn't already lost
[18:31] <greglap1> (though like I said I'm not sure you actually lost any data)
[18:31] <sjust> lxo: the crash/replay code has recently been changed
[18:32] <sjust> how recent a version of ceph are you using?
[18:32] <lxo> 0.25.1
[18:34] <lxo> I guess I might as well switch to some git version. not sure it helps to test 0.25.1 at this point, does it?
[18:34] <sjust> ok, 0.25.1 is before the replay/crash changes, you might have better luck with master
[18:36] <lxo> thanks
[18:47] <wido> the messages in logm, they can be safely removed from the monitor, can't they?
[18:48] <wido> I actually ran out of inodes on my 10G filesystem
[18:48] <wido> So I had to remove them in order to get the mon running again
[18:50] <greglap1> wido: hmm, not sure
[18:50] <greglap1> you'll have to ask sagewk but he's in a brief meeting right now
[18:52] <wido> removing them did not kill my cluster (for now)
[18:56] <wido> greglap1: there is actually an issue about this: http://tracker.newdream.net/issues/250
[18:56] <greglap1> wido: yes, and the logm is the safest to toss out but I'm not sure of its exact implementation details
[18:57] <greglap1> I suspect that since it's just log messages, the old "maps" are never referred to
[18:57] <greglap1> in which case it's safe
[18:57] <greglap1> but I don't know for certain
[18:57] <wido> It's not a killer in a test setup, but this cluster has been running for about 2 months now
[18:58] <wido> So it took some time to get to this point
[19:10] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) has joined #ceph
[19:11] * greglap1 (~Adium@166.205.138.189) Quit (Ping timeout: 480 seconds)
[19:17] * ghaskins (~ghaskins@66-189-113-47.dhcp.oxfr.ma.charter.com) Quit (Ping timeout: 480 seconds)
[19:18] <sagewk> wido: yeah, should be safe to throw out. if you do hit a problem, it should be easy to fix.
[19:19] * ghaskins (~ghaskins@66-189-113-47.dhcp.oxfr.ma.charter.com) has joined #ceph
[20:33] <bchrisman> does libceph expose getattr as well as setattr? I'm seeing only setattr exported in libceph.cc
[20:35] <cmccabe1> bchrisman: it looks like an oversight
[20:36] <bchrisman> ahhh ok...
[20:36] <bchrisman> want me to file a bug on that?
[20:37] <cmccabe1> yeah
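
A hypothetical sketch of the missing export bchrisman is describing, mirroring the existing setattr wrapper in libceph.cc; the Client declaration, the global client handle, and the signature here are assumptions for illustration, not the actual API:

    // Hypothetical pairing for the existing setattr export in libceph.cc.
    #include <sys/stat.h>

    class Client {
    public:
      int getattr(const char *relpath, struct stat *stbuf);  // signature assumed
    };
    extern Client *client;  // libceph's internal client instance (name assumed)

    extern "C" int ceph_getattr(const char *relpath, struct stat *stbuf)
    {
      // delegate to the internal client, as the other C exports do
      return client->getattr(relpath, stbuf);
    }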
[20:38] * Hugh (~hughmacdo@soho-94-143-249-50.sohonet.co.uk) Quit (Quit: Ex-Chat)
[20:59] <gregaf> bchrisman: sorry I'm not real familiar with xattrs, but what do you mean the security namespace needs to be opened up?
[21:00] <bchrisman> gregaf: right now Client.cc is rejecting any xattr which isn't prefixed with "user."
[21:01] <gregaf> hmmm
[21:01] <bchrisman> gregaf: to work with samba (or other projects), that really needs to allow 'security.' as well..
[21:01] <cmccabe1> bchrisman: I have a vague memory that the kernel uses a lot of the xattrs namespace for itself
[21:01] <bchrisman> I hacked it in my repo, so I figured I'd put the bug in… for tracking.
[21:02] <cmccabe1> brb, lunch
[21:02] <bchrisman> yeah… makes sense to limit it to the minimum namespaces possible..
[21:03] <bchrisman> but for samba acls, we'll need security.NTACLS … so that would be the minimum required to support samba without going all the way out to CTDB acl storage, which is a bit of a mess..
[21:03] <bchrisman> (for a distributed fs like this)
[21:04] <gregaf> bchrisman: ah, sage says he was just trying to follow the xattr rules about who can be in what zones
[21:05] <gregaf> the server daemons don't care
[21:05] <bchrisman> yeah...
[21:05] <bchrisman> once I bypassed the client restriction, it worked just fine.
[21:06] <bchrisman> the 1-line patch in my tree probably won't have any conflicts going forward.
[21:07] <gregaf> if you think it's a generally-allowed access case you should just submit the patch :)
[21:07] <gregaf> I don't think any of us are very familiar with the security rules about xattr access
[21:07] <sagewk> bchrisman: yeah, send the patch and i'll merge it
[21:07] * WesleyS (~WesleyS@12.248.40.138) has joined #ceph
[21:08] <bchrisman> ok
[21:09] * WesleyS (~WesleyS@12.248.40.138) Quit ()
[21:13] <gregaf> err, shouldn't that be an and in the patch?
[21:18] <Tv> gregaf: remember strcmp==0 means match, now it says "not user.* and not security.*"
[21:19] <Tv> err
[21:19] <Tv> i need more caffeine
[21:21] <Tv> "and" would make it return -EOPNOTSUPP iff it matches *both* user and security -- that's impossible
[21:21] <Tv> "or" makes it return -EOPNOTSUPP if it doesn't match either one -- that's *always*
[21:22] <Tv> erf
[21:22] <gregaf> yes, that's my concern :)
[21:23] <bchrisman> heh.. thought I tested that… sorry :)
[21:24] <Tv> the right logic is not user and not security aka u && s, yeah
[21:24] <Tv> i hate the strncmp returning 0 thing
[21:24] <Tv> in a context where you don't care about sort order
[21:24] <bchrisman> yeah
[21:24] <Tv> i just want a strneq
[21:24] <bchrisman> will resend
[21:25] <sagewk> bchrisman: can you include a changelog description and signed-off-by line?
[21:26] <sagewk> client: allow security.* attrs\n\nblah samba blah\n\nSigned-off-by: ...
[21:26] <bchrisman> ok
[21:28] <bchrisman> who do I put in for the Signed-off-by:?
[21:28] <Tv> you
[21:28] <bchrisman> okie :)
[21:28] <Tv> if you agree to the contributor agreement
[21:28] <bchrisman> ahh okay.. cool
[21:28] <Tv> http://ceph.newdream.net/git/?p=ceph.git;a=blob;f=SubmittingPatches;h=1c2f1e6932563dd8345a0ecc05b77cbabebaad8f;hb=HEAD#l21
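
Expanded, the format sagewk gives above (with the \n markers as line breaks) would look like:

    client: allow security.* attrs

    blah samba blah

    Signed-off-by: ...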
[21:34] * neurodrone (~neurodron@dhcp211-034.wireless.buffalo.edu) Quit (Quit: neurodrone)
[21:35] * verwilst (~verwilst@dD576FAAE.access.telenet.be) has joined #ceph
[21:37] <cmccabe1> I'm pretty used to strcmp
[21:37] <cmccabe1> and the more general pattern of returning 0 on success
[21:39] <cmccabe1> it probably is usually a good idea to spell out the == 0 though
[21:39] <cmccabe1> for strcmp at least
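
Pulling the thread together, a sketch of the strneq helper Tv wanted plus the corrected guard; only the guard logic comes from the discussion above, and the function shape around it is an assumption:

    #include <cerrno>
    #include <cstddef>
    #include <cstring>

    // The strneq Tv asked for: prefix equality as a boolean, hiding
    // strncmp's "0 means match" convention.
    static inline bool strneq(const char *a, const char *b, std::size_t n) {
      return std::strncmp(a, b, n) == 0;
    }

    // Corrected check per gregaf/Tv: reject an xattr name only when it is
    // in *neither* the user. nor the security. namespace ("u && s" in raw
    // strncmp terms).
    static int check_xattr_name(const char *name) {
      if (!strneq(name, "user.", 5) && !strneq(name, "security.", 9))
        return -EOPNOTSUPP;
      return 0;
    }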
[21:46] <bchrisman> Tv: thx for the link to the submission guide…that's what I was looking for.
[21:47] <Tv> bchrisman: very much the opposite, thanks for offering patches
[21:47] <gregaf> hiding patches in your company repo makes Ceph sad :(
[21:47] <gregaf> and look at that cute octopus, you know you want it to be happy! ;)
[21:54] * bchrisman (~Adium@70-35-37-146.static.wiline.com) Quit (Quit: Leaving.)
[22:22] * verwilst (~verwilst@dD576FAAE.access.telenet.be) Quit (Quit: Ex-Chat)
[22:36] * allsystemsarego (~allsystem@188.25.132.91) Quit (Quit: Leaving)
[23:36] * Meths_ (rift@customer5994.pool1.unallocated-106-192.orangehomedsl.co.uk) has joined #ceph
[23:43] * Meths (rift@91.106.232.128) Quit (Ping timeout: 480 seconds)
[23:43] * Meths_ is now known as Meths
[23:58] <johnl> lo

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.