#ceph IRC Log

IRC Log for 2011-12-12

Timestamps are in GMT/BST.

[0:00] * grape_ (~grape@216.24.166.226) has joined #ceph
[0:02] * grape (~grape@216.24.166.226) Quit (Ping timeout: 480 seconds)
[0:03] * aa (~aa@r190-135-22-231.dialup.adsl.anteldata.net.uy) Quit (Ping timeout: 480 seconds)
[0:30] * fronlius (~fronlius@f054181126.adsl.alicedsl.de) Quit (Quit: fronlius)
[1:37] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[1:40] * eryc (~eric@internetjanitor.com) has joined #ceph
[1:41] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Quit: later)
[1:41] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[2:04] * NightDog (~karl@52.84-48-58.nextgentel.com) Quit (Quit: Leaving)
[2:54] * adjohn (~adjohn@70-36-197-80.dsl.dynamic.sonic.net) has joined #ceph
[2:56] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[3:26] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[3:30] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[3:35] * lxo (~aoliva@lxo.user.oftc.net) Quit (Quit: later)
[3:36] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[4:11] * aa (~aa@r190-135-22-231.dialup.adsl.anteldata.net.uy) has joined #ceph
[4:45] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[4:48] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[4:50] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[4:54] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[5:15] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[5:20] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[5:26] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[5:42] * adjohn (~adjohn@70-36-197-80.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[5:52] * adjohn (~adjohn@70-36-197-80.dsl.dynamic.sonic.net) has joined #ceph
[5:58] * adjohn (~adjohn@70-36-197-80.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[6:40] * adjohn (~adjohn@70-36-197-80.dsl.dynamic.sonic.net) has joined #ceph
[6:47] * adjohn (~adjohn@70-36-197-80.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[7:00] * adjohn (~adjohn@70-36-197-80.dsl.dynamic.sonic.net) has joined #ceph
[7:00] * adjohn (~adjohn@70-36-197-80.dsl.dynamic.sonic.net) Quit ()
[7:09] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[8:48] * Lo-lan-do (~roland@mirenboite.placard.fr.eu.org) has joined #ceph
[8:52] * gregorg (~Greg@78.155.152.6) has joined #ceph
[8:52] * gregorg_taf (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[9:10] * gregorg_taf (~Greg@78.155.152.6) has joined #ceph
[9:10] * gregorg (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[9:24] * morse (~morse@supercomputing.univpm.it) Quit (Ping timeout: 480 seconds)
[9:37] * fronlius (~fronlius@testing78.jimdo-server.com) has joined #ceph
[9:47] * cclien (~cclien@ec2-175-41-146-71.ap-southeast-1.compute.amazonaws.com) Quit (Remote host closed the connection)
[9:53] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[10:58] * andresambrois (~aa@r190-64-70-252.dialup.adsl.anteldata.net.uy) has joined #ceph
[11:01] * aa (~aa@r190-135-22-231.dialup.adsl.anteldata.net.uy) Quit (Ping timeout: 480 seconds)
[11:14] * stxShadow (~jens@p4FD06FC8.dip.t-dialin.net) has joined #ceph
[13:34] * mtk (~mtk@ool-44c35967.dyn.optonline.net) has joined #ceph
[13:53] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[13:53] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[14:30] * andresambrois (~aa@r190-64-70-252.dialup.adsl.anteldata.net.uy) Quit (Read error: Operation timed out)
[14:31] * fronlius_ (~fronlius@testing78.jimdo-server.com) has joined #ceph
[14:31] * fronlius (~fronlius@testing78.jimdo-server.com) Quit (Read error: Connection reset by peer)
[14:31] * fronlius_ is now known as fronlius
[14:32] * andresambrois (~aa@r190-64-70-252.dialup.adsl.anteldata.net.uy) has joined #ceph
[15:08] * fghaas (~florian@85-127-155-32.dynamic.xdsl-line.inode.at) has joined #ceph
[15:12] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[15:13] * fghaas (~florian@85-127-155-32.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[15:13] * fghaas (~florian@85-127-155-32.dynamic.xdsl-line.inode.at) has joined #ceph
[16:35] * andresambrois (~aa@r190-64-70-252.dialup.adsl.anteldata.net.uy) Quit (Remote host closed the connection)
[16:38] * grape_ is now known as grape
[16:51] * stxShadow (~jens@p4FD06FC8.dip.t-dialin.net) Quit (Quit: Ex-Chat)
[17:01] * adjohn (~adjohn@70-36-197-80.dsl.dynamic.sonic.net) has joined #ceph
[17:36] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:52] * aa (~aa@r200-40-114-26.ae-static.anteldata.net.uy) has joined #ceph
[18:15] * Tv (~Tv|work@aon.hq.newdream.net) has joined #ceph
[18:23] * adjohn (~adjohn@70-36-197-80.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[18:37] * MK_FG (~MK_FG@188.226.51.71) Quit (Quit: o//)
[18:38] * MK_FG (~MK_FG@188.226.51.71) has joined #ceph
[18:40] * joshd (~joshd@aon.hq.newdream.net) has joined #ceph
[18:41] <gregaf> Lo-lan-do: there isn't any load balancing on the data right now; the assumption is that because files are striped across objects, you're unlikely to need something more intelligent
[18:52] <Lo-lan-do> Well, if I have one node accessing some data much more often than the other nodes, wouldn't it make sense if the data was stored locally?
[18:53] <gregaf> in that case it'll generally be cached in local memory
[18:54] <Lo-lan-do> I don't have that much memory :-)
[18:54] <gregaf> and while you definitely can co-locate the computation, Ceph isn't designed as a computation system and most systems like that will place the computation next to the data
[18:54] <Lo-lan-do> My intended use case is sharing my music collection across all my home computers + some dedicated servers outside.
[18:55] <Lo-lan-do> If I need to go through the ADSL to play music… not good.
[18:55] <gregaf> I think maybe you are looking at the wrong tool for that job
[18:55] <Lo-lan-do> And there will be some locality: the computer next to my drumkit is playing some music files much more often than the computer near the couch in the living-room.
[18:57] <Lo-lan-do> As for the tool… I dunno, I'd like to have something distributed and redundant, and Ceph seems to fit the bill.
[18:59] <gregaf> it's pretty complicated to set up and maintain for that kind of consumer-oriented use case; if I were looking for something like that I'd check out a NAS device...
[19:01] <Lo-lan-do> The current thing is a DAAP server running on a Sheevaplug with a (big) USB stick. Honestly, I'm trying to go distributed.
[19:02] <Lo-lan-do> And I envision more use cases later (for much bigger data), the music thing is just to get started.
[19:03] <gregaf> all right, then :)
[19:03] <gregaf> in any case, the data doesn't do any kind of adaptive balancing — that sort of placement actually goes against Ceph's design
[19:04] <gregaf> but I don't think you'll find it's a problem if you have the network capacity for the filesystem to actually run anyway
[19:05] <Lo-lan-do> Do you mean that it's unrealistic to hope to run Ceph if there's a slow link in the middle?
[19:06] <gregaf> depends on how you set it up and how much writing you're planning to do across it
[19:07] <gregaf> but it's a synchronous replication system, so be aware
[19:07] <gregaf> given what you're describing you might look at xtreemfs too
[19:07] <gregaf> be back later
[19:08] <Lo-lan-do> Yeah, I'm going for dinner too :-)
[19:30] * bchrisman (~Adium@108.60.121.114) has joined #ceph
[19:37] * fronlius (~fronlius@testing78.jimdo-server.com) Quit (Ping timeout: 480 seconds)
[19:48] * adjohn (~adjohn@208.90.214.43) has joined #ceph
[19:48] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[19:49] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[19:52] <Tv> yehudasa_: you said swift acls looked pretty much undocumented -- if there's anything out there, can you please send links?
[20:21] * verwilst (~verwilst@dD576F09C.access.telenet.be) has joined #ceph
[20:29] * fghaas (~florian@85-127-155-32.dynamic.xdsl-line.inode.at) Quit (Ping timeout: 480 seconds)
[20:32] <yehudasa_> Tv: looking for anything now
[20:32] <Tv> yehudasa_: greg just found something..
[20:32] <Tv> http://swift.openstack.org/misc.html#acls
[20:34] <Tv> eww referers
[20:35] <Tv> "bobs_account,sues_account:sue" -- does anything say what that colon means?
[20:35] <yehudasa_> Tv: in swift it's account:username
[20:36] <Tv> ah just hierarchical usernames
[20:36] <yehudasa_> since in rgw account equals user, we call it 'subuser'
[20:36] <gregaf> I'm more concerned by their passing reference to write and the .rlistings directive
[20:36] <Tv> http://swift.openstack.org/overview_auth.html http://programmerthoughts.com/openstack/swift-permissions/
[20:38] <yehudasa_> gregaf: in what sense?
[20:38] <gregaf> the fact that such things exist and they don't tell us how to use them
[20:39] <gregaf> presumably the format is ".rlisting: sues_account:sue"
[20:39] <gregaf> but then they're mixing up their language, so maybe not?
[20:39] <gregaf> and it doesn't even tell us what the write directive is...
[20:40] <yehudasa_> gregaf: the specific format is not that important I think
[20:40] <Tv> it seems like swift has separate acls
[20:40] <Tv> there's the "list of things that can read" and a "list of things that can write"
[20:40] <Tv> in the X-Container-Read: and X-Container-Write: headers
[20:40] <gregaf> or is this just the account specifier format and then you post to READ_ACL and WRITE_ACL?
[20:40] <gregaf> okay
[20:41] <yehudasa_> Tv: yes, that's how you specify the acls there
[20:41] <Tv> hahaha http://www.sports-injury-info.com/swifts-acl-reconstruction.html
[20:42] <Lo-lan-do> gregaf: I'm off for tonight, but I'll have a look at xtreemfs before coming back with more questions :-)
[20:42] <Lo-lan-do> I'm also looking at Tahoe-LAFS, to be honest.
[20:42] <gregaf> so the permissions you can specify are read objects in this container, list the objects in this container, and write to the objects in this container?
[20:42] <yehudasa_> gregaf: yes
[20:42] <gregaf> those all have very different replication systems and expectations, you should probably try and figure those out before picking one to deploy :)
[20:43] * fghaas (~florian@85-127-155-32.dynamic.xdsl-line.inode.at) has joined #ceph
[20:46] <Tv> http://docs.openstack.org/bexar/openstack-object-storage/admin/content/ch02s02.html
[20:47] <yehudasa_> Tv: yeah. It changed a bit in the latest docs version
[20:48] <yehudasa_> http://docs.openstack.org/diablo/openstack-object-storage/admin/content/authentication-and-access-permissions.html
[20:48] <yehudasa_> the ACLs probably didn't change
[20:48] <yehudasa_> they did tear out the api specification around 1.2
[20:49] <Tv> what's this: "The public container settings are used as the default authorization over access control lists. For example, using X-Container-Read: referer:any allows anyone to read from the container regardless of other authorization settings."
[20:49] <Tv> what's a "public container setting"?
[20:50] <Tv> or is that just a convoluted way of saying "if the ACL includes referer:any, nothing else matters"?
[20:50] <yehudasa_> Tv: I assume that's a container that anyone can read all the objects inside
[20:50] <Tv> it sounds like it's a "if you fall off the end of an ACL without finding a match, look at the public container settings"
[20:50] <Tv> but i don't see anything more
[20:51] <Tv> "For metadata, you should not exceed 90 individual key/value pairs for any one object and the total byte length of all key/value pairs should not exceed 4KB (4096 bytes)." hee-hee
[20:53] <Tv> okay it does sound like swift acls are really trivial
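
To make the ACL mechanics discussed above concrete, here is a minimal Python sketch of setting Swift container ACLs by POSTing the X-Container-Read and X-Container-Write headers; the storage URL, auth token, and container name are illustrative assumptions, not values from the conversation.

    import requests

    # Illustrative values: the storage URL and token are assumptions.
    STORAGE_URL = "https://swift.example.com/v1/AUTH_bobs_account"
    AUTH = {"X-Auth-Token": "AUTH_tk_example"}

    headers = {
        **AUTH,
        # ".r:*" (spelled "referer:any" in the older docs quoted above)
        # makes the container world-readable; ".rlistings" additionally
        # allows listing the objects in it.
        "X-Container-Read": ".r:*,.rlistings",
        # Write ACLs use the account:username ("subuser") form discussed above.
        "X-Container-Write": "sues_account:sue",
    }

    # POSTing the headers to an existing container applies the ACLs.
    resp = requests.post(STORAGE_URL + "/music", headers=headers)
    resp.raise_for_status()
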
[20:53] <fghaas> installing an RPM built from 0.39 (make distcheck from git clone, subsequent rpmbuild -tb) fails on opensuse 12.1 because it requires initscripts, which suse evidently considers obsolete on systemd-based releases -- am I doing something wrong, or is this a spec bug that needs fixing?
[20:54] <Tv> fghaas: i think we still have spec changes from the suse guys coming in, it's not yet completely integrated
[20:54] <fghaas> yep, that line is unchanged in HEAD too
[20:56] <fghaas> Tv, "the suse guys" in this case being the team around dr. hannes?
[20:56] <Tv> fghaas: build.opensuse.org isn't loading for me right now, or I'd try to point you to their work
[20:56] <Tv> fghaas: i do believe so, yes
[20:56] <fghaas> I had a look earlier, the build in Factory has been failing for some time and the filesystems repo is stuck on 0.20, afaict
[20:57] <fghaas> which is a bit disheartening :)
[20:57] <Tv> :(
[20:57] <Tv> i thought i overheard a conversation about this just now
[20:57] <Tv> sagewk: can you enlighten us?
[20:59] <Tv> fghaas: https://build.opensuse.org/package/show?package=ceph&project=home%3Aliewegas etc do look sad
[20:59] <fghaas> I've been looking at the "filesystems" project
[20:59] <Tv> i'm pointing to that one mostly because it at least is pulling the latest sources
[21:00] <fghaas> https://build.opensuse.org/package/show?package=ceph&project=filesystems <-- what I used
[21:00] <Tv> honestly sage is the only one here who looked much at the suse builder
[21:01] <fghaas> I hope I do understand correctly that the filesystems repo mentioned in http://ceph.newdream.net/wiki/Installing_on_SuSE corresponds to the OBS filesystems project
[21:01] <Tv> but that page says 0.20 a lot :(
[21:02] <fghaas> however, the offending line in the spec file is just "Requires(preun): initscripts", and that's not conditional on any distro ... so I guess this would also fail on recent fedoras or anything else that ships systemd, no?
[21:03] <Tv> fghaas: we haven't tried, yet
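
A sketch of how the dependency could be made conditional with standard RPM distro macros; the version cutoffs below are assumptions, and this is not necessarily what later landed in ceph.spec.

    # Sketch: only require initscripts on pre-systemd releases.
    %if 0%{?suse_version} >= 1210 || 0%{?fedora} >= 15
    Requires(preun): systemd
    %else
    Requires(preun): initscripts
    %endif
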
[21:51] * fghaas (~florian@85-127-155-32.dynamic.xdsl-line.inode.at) Quit (Ping timeout: 480 seconds)
[21:58] * sjustlaptop (~sam@aon.hq.newdream.net) has joined #ceph
[22:01] <todin> joshd: hi, after your patches for libvirt everything is working again, thanks.
[22:01] <joshd> todin: cool, good to know
[22:03] <todin> joshd: one question: in the xml, does the monitor address have to be an ip? because a hostname doesn't work.
[22:04] <joshd> hmm, I'm not sure
[22:16] <joshd> todin: looks like the code path isn't special there, it should resolve the names
[22:40] * votz (~votz@pool-108-52-122-97.phlapa.fios.verizon.net) has joined #ceph
[22:44] <todin> joshd: now the dns name works as well, i will test it further
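
For reference, the libvirt disk XML under discussion looks roughly like this; the pool/image name and the monitor host are placeholders. The point established above is that libvirt resolves the <host name=...> value, so a hostname works as well as an IP.

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='rbd/myimage'>
        <!-- a resolvable hostname works here, not only an IP -->
        <host name='mon1.example.com' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>
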
[22:45] * Lo-lan-do (~roland@mirenboite.placard.fr.eu.org) Quit (Ping timeout: 480 seconds)
[22:58] * sjustlaptop (~sam@aon.hq.newdream.net) Quit (Ping timeout: 480 seconds)
[23:03] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[23:12] * sjustlaptop (~sam@aon.hq.newdream.net) has joined #ceph
[23:13] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[23:18] <todin> Tv: hi, do you have more information about the guy who should run the benchmarks on ceph, whom you mentioned in your email?
[23:19] <Tv> todin: ideally, he'd live in LA and sit at one of the empty desks i'm looking at right now ;)
[23:19] <Tv> http://ceph.newdream.net/jobs/
[23:20] <todin> Tv: is that the qa position?
[23:20] <Tv> todin: i'm not sure if that's any of the ones listed ... :-/
[23:22] <todin> is the qa position in LA as well?
[23:22] <Tv> todin: ideally everything is in LA; we've looked at remote people when they're worth it
[23:24] <Tv> todin: hold on
[23:24] <Tv> todin: so here's the deal.. we're trying to hire the "Director of Professional Services & Support" person first, because a good candidate for that role will be able to handle a lot of the recruiting process
[23:24] <todin> Tv: you mean LA?
[23:24] <Tv> todin: but if you're interested, you should definitely send in a CV and say you talked about this with me
[23:25] <Tv> todin: even if the job is not really yet posted
[23:25] <todin> Tv: if I understand you right, there are two positions, qa and the benchmark thing?
[23:25] <Tv> todin: and yes, ideally LA, almost everyone is in LA, we'll consider remote people when they're good enough to make it worth it, but things will be easier if you're in LA (/willing to move)
[23:26] <Tv> todin: don't take the list you see as the only things we're looking for, it's just that we need to focus somewhere first
[23:26] <todin> ok, I have a strong qa background and like to break things ;-)
[23:27] <Tv> todin: that sounds very much like something we could use
[23:27] <Tv> todin: send in a CV, we'll adjust the position to fit later
[23:28] <todin> ok, do you guys have any other requirements?
[23:28] * verwilst (~verwilst@dD576F09C.access.telenet.be) Quit (Quit: Ex-Chat)
[23:29] <Tv> todin: be allowed to work in the US, live near LA or be willing to fly around, don't be an arsonist, etc
[23:29] <todin> damn, the arsonist got me
[23:32] <yehudasa_> todin: in this case then Client Relations is probably a better fit
[23:37] * cp (~cp@76-220-17-197.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[23:37] <cp> what's the latest version of ceph?
[23:37] <todin> cp: 0.39
[23:40] <cp> todin: thanks
[23:40] <cp> I'm having problems with monitors seg-faulting when I do ceph mon remove
[23:48] <todin> cp: if you have stack traces, you could send them to the ml, or into a pastebin.
[23:56] <yehudasa_> Tv: and s3tests.py, under task() we call create_users with clients and s3tests_conf
[23:56] <yehudasa_> Tv: clients are just the list of clients, but not the clients conf
[23:56] <yehudasa_> Tv: and s3tests_conf is the object that's supposed to get that conf
[23:57] <Tv> yehudasa_: after the meeting
[23:57] <yehudasa_> Tv: ok
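
A rough Python sketch of the structure being described, in which task() passes the plain client list and a separate s3tests_conf object to create_users; only the names task, create_users, clients, and s3tests_conf come from the conversation, and the signatures and bodies are assumed.

    def create_users(clients, s3tests_conf):
        """Create an S3 user per client, writing the generated
        credentials into that client's section of s3tests_conf."""
        for client in clients:
            s3tests_conf[client]["access_key"] = "generated"  # placeholder

    def task(ctx, config):
        clients = list(config.keys())            # just the client names
        s3tests_conf = {c: {} for c in clients}  # the conf ends up here
        create_users(clients, s3tests_conf)
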
[23:57] * sjustlaptop (~sam@aon.hq.newdream.net) Quit (Ping timeout: 480 seconds)
[23:59] <cp> If I want to go back to an old version, is there a way to get, say, 0.38 from the ceph repo?
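
Assuming ceph's usual v-prefixed release tags, checking out an older release from git would look like this; the repository URL and the v0.38 tag name are assumptions.

    git clone git://github.com/ceph/ceph.git
    cd ceph
    git checkout v0.38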

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.