#ceph IRC Log

IRC Log for 2011-08-16

Timestamps are in GMT/BST.

[0:11] * Dantman (~dantman@199-7-158-34.eng.wind.ca) Quit (Ping timeout: 480 seconds)
[0:31] <Tv> yehudasa: are there known bugs in rgw chunked mode transfers?
[0:32] <Tv> because this looks wonky
[0:32] <yehudasa> Tv: what do you see?
[0:32] <Tv> i think https://dev.newdream.net/issues/10771 is because of chunked mode brokenness
[0:32] <yehudasa> there was some short lived bug, but other than that I'm not aware of any
[0:32] <Tv> i see it now on dho
[0:32] <Tv> still confirming
[0:33] <Tv> like, reading wireshark
[0:33] <yehudasa> are you running it now on dho?
[0:33] <yehudasa> because I'm messing with the environment
[0:33] <Tv> i ran tcpdump locally
[0:33] <Tv> captured my browsers interaction
[0:33] <yehudasa> yeah, but dho is currently unstable
[0:34] <Tv> hrrmph
[0:34] <Tv> ok, writing down my notes & ignoring that bug again
[0:49] <cp> How do I create a new pool to access with rbd?
[0:49] <cp> I want to do something like "rbd create foo --size 1024 -p mypool"
[0:52] <Tv> cp: the usage "usage: rbd [-n <auth user>] [OPTIONS] <cmd> ..." makes me think "rbd --pool mypool create foo"
[0:52] <Tv> and yes, confirmed that that is what it parses
[0:53] <Tv> now, that doesn't answer the question about creating new pools..
[0:53] <cp> Tv: "error opening pool mypool (err=-2)" Yeah, still trying to figure out the pool creation.
[0:54] <Tv> so apparently that goes all the way back to osdmaps...
[0:54] <Tv> ahh http://ceph.newdream.net/wiki/Monitor_commands#pool
[0:55] <cp> :) Thanks
[0:55] <cp> was just scrolling down the page too
[0:55] <Tv> yup, that works
[0:56] * monrad-51468 (~mmk@domitian.tdx.dk) Quit (Ping timeout: 480 seconds)
[0:56] <Tv> ./rados mkpool seems like the more straightforward route
[0:56] * monrad-51468 (~mmk@domitian.tdx.dk) has joined #ceph
[0:58] <cp> Tv: thanks
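
For reference, a minimal sketch of the two steps discussed above (the pool and image names are examples only; the monitor command "ceph osd pool create" from the wiki page linked above is an alternative to "rados mkpool"):

    # create a new RADOS pool
    rados mkpool mypool

    # create a 1 GB rbd image in that pool
    rbd --pool mypool create foo --size 1024
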
[1:19] * greglap (~Adium@166.205.136.156) has joined #ceph
[1:32] * monrad-51468 (~mmk@domitian.tdx.dk) Quit (Read error: Connection reset by peer)
[1:32] * monrad (~mmk@domitian.tdx.dk) has joined #ceph
[1:53] * The_Bishop (~bishop@p4FCDEC9C.dip.t-dialin.net) Quit (Quit: Who the devil is this Peer? When I catch him I'm going to reset his connection!)
[2:04] * pruby (~tim@leibniz.catalyst.net.nz) has joined #ceph
[2:05] * Tv (~Tv|work@aon.hq.newdream.net) Quit (Ping timeout: 480 seconds)
[2:13] * greglap (~Adium@166.205.136.156) Quit (Read error: Connection reset by peer)
[2:36] * huangjun (~root@113.106.102.8) has joined #ceph
[2:36] <huangjun> hi, all
[2:55] * bchrisman (~Adium@70-35-37-146.static.wiline.com) Quit (Quit: Leaving.)
[2:56] * jojy (~jojyvargh@70-35-37-146.static.wiline.com) Quit (Quit: jojy)
[3:07] <cp> Trying to add a monitor to an existing ceph cell seems to cause problems: "ceph mon add beta 192.168.1.101:6789" 2011-08-15 18:03:12.912521 mon <- [mon,add,beta,192.168.1.101:6789]
[3:07] <cp> 2011-08-15 18:03:13.376895 mon0 -> 'added mon.beta at 192.168.1.101:6789/0' (0)
[3:08] <cp> But "ceph -s" now hangs, none of my mount points work and "df -h" hangs also.
[3:08] <cp> Any ideas?
[3:12] * yehuda_hm (~yehuda@99-48-179-68.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[3:45] * jojy (~jojyvargh@75-54-231-2.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[3:45] * jojy (~jojyvargh@75-54-231-2.lightspeed.sntcca.sbcglobal.net) Quit ()
[3:56] <huangjun> i think you should finish the add procedure, then you can use the cmd
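
huangjun's point, roughly: "ceph mon add" only registers the new monitor in the monmap, which raises the quorum requirement, so until the new daemon is actually initialized and started the monitors can lose quorum and every command hangs. A sketch of finishing the procedure, using later binary names (releases of this era shipped the monitor as "cmon", exact flags may differ, and the keyring path is a placeholder):

    # fetch the current monitor map
    ceph mon getmap -o /tmp/monmap
    # initialize the new monitor's data directory
    ceph-mon -i beta --mkfs --monmap /tmp/monmap --keyring /path/to/mon.keyring
    # start the new monitor so quorum can re-form
    ceph-mon -i beta
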
[3:57] <huangjun> we tried this before
[4:05] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) has joined #ceph
[4:13] <u3q> guy fawkes masks?
[4:13] <u3q> er ww
[4:13] <u3q> was re: http://www.mercurynews.com/top-stories/ci_18687074?nclick_check=1 the BART police protest in SF
[4:14] * macana (~ml.macana@159.226.41.129) has joined #ceph
[4:22] * cp (~cp@74.85.19.35) Quit (Quit: This computer has gone to sleep)
[5:55] * The_Bishop (~bishop@port-92-206-21-65.dynamic.qsc.de) has joined #ceph
[6:01] * macana (~ml.macana@159.226.41.129) Quit (Read error: Connection reset by peer)
[6:01] * macana (~ml.macana@159.226.41.129) has joined #ceph
[6:01] * huangjun (~root@113.106.102.8) Quit (Quit: Lost terminal)
[6:02] * huangjun (~root@113.106.102.8) has joined #ceph
[6:12] * cmccabe (~cmccabe@c-24-23-254-199.hsd1.ca.comcast.net) has left #ceph
[7:27] * amichel (~wircer@ip68-230-56-203.ph.ph.cox.net) has joined #ceph
[7:28] * Dantman (~dantman@S0106001731dfdb56.vs.shawcable.net) has joined #ceph
[7:39] <amichel> So, is ceph stable enough to take into a production environment? I'm looking to deploy a large distributed storage system for my campus and I like the look of Ceph over other alternatives I've investigated.
[8:30] * huangjun (~root@113.106.102.8) Quit (Read error: Connection reset by peer)
[8:50] <votz> amichel: Out of curiosity, which alternatives have you looked at?
[8:51] <amichel> gluster
[8:51] <amichel> Fraunhofer
[8:51] <amichel> gpfs, holy crap expensive
[8:52] <votz> haha
[8:56] <amichel> I like the cheap snapshots, the block export capability, the magic sauce autobalancing metadata and storage, the s3 interface... ceph has a lot going for it :D
[8:59] <votz> amichel: You might have to wait a bit to get input on your query. It's late night/early morning in US timezones
[9:00] <votz> Though it looks like you're in the US anyhow, too :)
[9:02] <amichel> oh yeah, I'm just a night owl
[9:37] * votz (~votz@pool-72-78-219-212.phlapa.fios.verizon.net) Quit (Quit: Leaving)
[9:44] * votz (~votz@pool-72-78-219-212.phlapa.fios.verizon.net) has joined #ceph
[10:44] * amichel (~wircer@ip68-230-56-203.ph.ph.cox.net) Quit (Ping timeout: 480 seconds)
[11:32] * gregorg (~Greg@78.155.152.6) has joined #ceph
[12:01] * hijacker (~hijacker@213.91.163.5) Quit (Quit: Leaving)
[12:16] * hijacker (~hijacker@213.91.163.5) has joined #ceph
[13:02] * mgalkiewicz (~mgalkiewi@84-10-109-25.dynamic.chello.pl) has joined #ceph
[13:02] <mgalkiewicz> hello guys
[13:03] <mgalkiewicz> I have a question about ceph authentication
[13:03] <mgalkiewicz> is it possible to mount different directories with different login/pass?
[13:04] <mgalkiewicz> My clients have data which I would like to store separately.
[13:05] <mgalkiewicz> It would be nice if they could not mount someone else's directory
[13:06] <mgalkiewicz> currently I am using glusterfs and store data on separate glusterfs volumes
[14:02] * Juul (~Juul@130.225.93.59) has joined #ceph
[15:15] * gregorg_taf (~Greg@78.155.152.6) has joined #ceph
[15:15] * gregorg (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[15:15] * gregorg_taf (~Greg@78.155.152.6) Quit ()
[15:15] * gregorg (~Greg@78.155.152.6) has joined #ceph
[15:48] * greglap (~Adium@166.205.136.156) has joined #ceph
[15:53] <greglap> mgalkiewicz: Ceph doesn't presently support multiple Ceph volumes in a single cluster, but you can give each client a different RADOS pool that only they have access permissions for and then set their home dir (or whatever) to reside in that pool so nobody else can read the data
[15:58] * Juul (~Juul@130.225.93.59) Quit (Ping timeout: 480 seconds)
[16:01] <mgalkiewicz> hmm any more info on wiki or sth?
[16:07] <greglap> let me check, not sure if we wrote that up properly anywhere or just in the mailing list
[16:08] * Juul (~Juul@gw1.imm.dtu.dk) has joined #ceph
[16:11] <greglap> mgalkiewicz: I don't think there's a good writeup anywhere, no :(
[16:12] <greglap> and it's not a trivial thing, but what you'd want to do is create a CRUSH pool for each user, then give them access perms to that pool, then set their home dir or whatever to use that pool
[16:13] <greglap> right now you create pools with rados or ceph tool, set up the caps with cauthtool (http://ceph.newdream.net/wiki/FAQ#How_to_allow.2Fdeny_clients_to_access_.28ro.2Crw.29_the_ceph.3F), and set the home dir using the cephfs tool with the admin client
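
A rough sketch of those three steps with the tools greglap mentions (names are illustrative; cauthtool was later renamed ceph-authtool, and the cap syntax and set_layout options vary between versions):

    # 1. create a dedicated pool for the client
    rados mkpool client1-pool

    # 2. generate a key whose OSD caps are limited to that pool, and register it
    cauthtool --create-keyring --gen-key -n client.client1 client1.keyring
    cauthtool -n client.client1 client1.keyring \
        --cap mon 'allow r' --cap mds 'allow' --cap osd 'allow rw pool=client1-pool'
    ceph auth add client.client1 -i client1.keyring

    # 3. as the admin client, point the client's directory at the new pool
    #    (set_layout takes the numeric pool id, not the name)
    cephfs /mnt/ceph/client1 set_layout --pool <pool-id>
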
[16:45] * greglap (~Adium@166.205.136.156) Quit (Ping timeout: 480 seconds)
[16:49] * pserik (~pserik@eduroam-55-179.uni-paderborn.de) has joined #ceph
[16:59] * greglap (~Adium@aon.hq.newdream.net) has joined #ceph
[17:00] * greglap (~Adium@aon.hq.newdream.net) Quit ()
[17:05] <pserik> hi folks, i'm trying to set up a test environment with ceph. i can successfully start ceph but i can't mount it. i always get the "mount error 5". link to the output: http://pastebin.com/nQ5MNZmt
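
For anyone hitting the same thing: "mount error 5" is EIO, which in this era usually meant no MDS was up:active yet or cephx authentication failed. A quick sanity check, assuming a monitor at 192.168.0.1 (the address and secret are placeholders):

    # verify the mons have quorum and the mds reports up:active
    ceph -s
    # mount, passing the client key explicitly if cephx auth is enabled
    mount -t ceph 192.168.0.1:6789:/ /mnt/ceph -o name=admin,secret=<key from keyring>
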
[17:12] <pserik> aim??? anybody here? :)
[17:30] * pserik (~pserik@eduroam-55-179.uni-paderborn.de) has left #ceph
[17:30] * pserik (~pserik@eduroam-55-179.uni-paderborn.de) has joined #ceph
[17:35] * pserik (~pserik@eduroam-55-179.uni-paderborn.de) has left #ceph
[17:50] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:55] * Juul (~Juul@gw1.imm.dtu.dk) Quit (Quit: Leaving)
[18:33] * jojy (~jojyvargh@70-35-37-146.static.wiline.com) has joined #ceph
[18:56] * bchrisman (~Adium@70-35-37-146.static.wiline.com) has joined #ceph
[19:21] * cmccabe (~cmccabe@69.170.166.146) has joined #ceph
[19:26] * Dantman (~dantman@S0106001731dfdb56.vs.shawcable.net) Quit (Remote host closed the connection)
[19:35] * Dantman (~dantman@S0106001731dfdb56.vs.shawcable.net) has joined #ceph
[20:09] * amichel (~amichel@salty.uits.arizona.edu) has joined #ceph
[20:10] <amichel> I swung through last night, but no one was around really. Is Ceph stable enough to put into production? I'm looking to deploy a distributed file service for my campus and I like the look of Ceph more than the alternatives I've been looking at. Wanted to get an idea of whether all the awesome feature checkboxes work :D
[20:20] <darkfaded> 0.3x and production? :)
[20:20] <darkfaded> it's well on its way
[20:20] <darkfaded> and most or all checkboxes are there
[20:22] <darkfaded> amichel: i think if you join in the beta support program it might work out
[20:23] <amichel> There's a beta support program? :D
[20:23] <amichel> Outstanding
[20:23] <darkfaded> http://ceph.newdream.net/support/
[20:23] <darkfaded> just noticed it became official
[20:24] <amichel> Added myself to the list
[20:24] <amichel> So, what showstoppers are still out there
[20:24] <darkfaded> wait for one of the devs :)
[20:25] <darkfaded> i mean
[20:25] <amichel> Can do. Thanks for pointing me to the support program!
[20:26] <darkfaded> production ready to me means "feature stable and under heavy testing for 1-2 years", so the showstopper is mostly time
[20:27] <amichel> Oh, well. I'm willing to live on the edge to some degree. I'm mostly concerned about potential "all my data disappeared" type of issues :D
[20:27] <darkfaded> i think if you use ext4 instead of btrfs you shouldn't have those a lot :)
[20:28] <darkfaded> and tbh whenever i tested it felt that ceph "prealpha" is more stable than gluster "production" anyway
[20:28] <amichel> But then I don't get the awesome snapshots and whatnot, right?
[20:28] <darkfaded> amichel: but btrfs is *experimental* :)
[20:28] <amichel> Oh yeah, I know
[20:28] <darkfaded> well... idk
[20:29] <amichel> But their on-disk format is crystallized now, right? No more "reformat to update the tools" situations
[20:29] <darkfaded> any data on experimental FS means you'll end up fired
[20:29] <amichel> Ha
[20:29] <amichel> Well, I wouldn't worry about that too much
[20:29] <darkfaded> mmkay
[20:29] <darkfaded> never been in a campus IT
[20:30] <darkfaded> well anyway, a daemon might die or get hung, then you'll need someone to look at it and fix the bug. but there aren't many of those either
[20:30] <amichel> It's really hard to get fired
[20:30] <amichel> We'll leave it at that :D
[20:30] <darkfaded> :))
[20:31] <darkfaded> also a good thing - if you should need to restore, ceph is very fast :)
[20:31] <amichel> Yeah, that was a question I had, what's the best way to back this kind of thing up?
[20:32] <amichel> Just mount the filesystems on a client and go, or is there a methodology that takes some advantage of magic sauce cephness?
[20:33] <darkfaded> theres no zfs send or such
[20:33] <darkfaded> so yup, mount and backup
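
In other words, one plain approach is to mount the filesystem on a backup host and copy it out with ordinary tools (the monitor host, paths, and secret below are illustrative):

    mount -t ceph mon1:6789:/ /mnt/ceph -o name=admin,secret=<key>
    rsync -a /mnt/ceph/ /backup/ceph-$(date +%F)/
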
[20:33] <darkfaded> what are you using for backup?
[20:34] <amichel> EMC Avamar for most of our stuff
[20:34] <amichel> Though I doubt I can afford it for the cluster size I'm planning
[20:34] <darkfaded> ok :)
[20:34] <darkfaded> i just know bacula seems to *not scale* well in the many-tb-on-one-client area
[20:34] <darkfaded> they just do one stream, still
[20:35] <amichel> ew
[20:35] <darkfaded> is avamar licensed by backup volume then?
[20:36] <amichel> Yeah, by capacity
[20:36] <amichel> But you also have to have enough nodes
[20:37] <amichel> It's not a cheap backup solution :D
[20:37] <amichel> I may potentially just create a second file cluster and replicate and call that good enough
[20:37] <darkfaded> as long as the restore is cheap :))
[20:39] <darkfaded> amichel: ask around when more people are there. not sure if anyone even lost data in the last few months
[20:39] <amichel> Nice
[21:51] * votz (~votz@pool-72-78-219-212.phlapa.fios.verizon.net) Quit (Quit: Leaving)
[21:51] * votz (~votz@pool-72-78-219-212.phlapa.fios.verizon.net) has joined #ceph
[22:17] * bchrisman (~Adium@70-35-37-146.static.wiline.com) Quit (Quit: Leaving.)
[22:40] * Juul (~Juul@3408ds2-vbr.4.fullrate.dk) has joined #ceph
[22:50] * gregorg (~Greg@78.155.152.6) Quit (Ping timeout: 480 seconds)
[23:26] * bchrisman (~Adium@64.164.138.146) has joined #ceph
[23:33] * bchrisman (~Adium@64.164.138.146) Quit (Quit: Leaving.)
[23:36] * bchrisman (~Adium@64.164.138.146) has joined #ceph
[23:46] * bchrisman (~Adium@64.164.138.146) Quit (Quit: Leaving.)
[23:48] * bchrisman (~Adium@64.164.138.146) has joined #ceph
[23:53] <slang> my mds just crashed: http://fpaste.org/cyGt/
[23:54] <slang> and when I try to restart it, it prints out a bunch of messages about missing files:
[23:54] <slang> 2011-08-16 16:46:49.194080 7f88566ea700 mds0.server missing 1000000313b
[23:54] <slang> and then crashes again
[23:55] <slang> after second crash: http://fpaste.org/eiNt/
[23:56] <slang> both standby mds servers crashed as well
[23:58] <cmccabe> slang: greg and sage are out of the office at the moment
[23:59] <cmccabe> slang: it's probably best to file a bug and maybe email them
[23:59] <slang> cmccabe: k

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.