#ceph IRC Log

IRC Log for 2010-09-14

Timestamps are in GMT/BST.

[1:43] * ghaskins_mobile (~ghaskins_@66-189-114-103.dhcp.oxfr.ma.charter.com) has joined #ceph
[1:57] <Led_Zeppelin> hi
[2:08] * greglap (~Adium@166.205.139.165) has joined #ceph
[2:10] <greglap> Led_Zeppelin: hi again
[2:11] <Led_Zeppelin> no need to apologize greglap
[2:11] <Led_Zeppelin> thanks for answering my question
[2:11] <greglap> np :)
[2:11] <Led_Zeppelin> Very excited about ceph. we want to start using it at our lab but still trying to figure out if it's the right solution for us
[2:11] <greglap> fyi the devs are all on the US' Pacific coast so about 9:30-5 Pacific time is when you'll see the most people who can talk :)
[2:12] <Led_Zeppelin> the least we can do is start using it and start finding "bugs"
[2:12] <Led_Zeppelin> Is it possible to use ceph without using fuse and a standard Linux distribution? Can I compile the module for it if I have my kernel-dev or kernel src?
[2:13] <greglap> there's an in-kernel client that comes in recent kernels, but you can always update it or build it as a module
[2:13] <greglap> instructions are available at ceph.newdream.net
[2:14] <greglap> you're going to need a reasonably recent kernel though; the backports branches go back to I think 2.6.28
[2:14] <Led_Zeppelin> hmm ok. Our kernel is pretty old.
[2:14] <greglap> that's you on the mailing list with RHEL 5.2?
[2:15] <Led_Zeppelin> yep
[2:15] <Led_Zeppelin> that's me
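
A rough sketch of what building and mounting the in-kernel client looks like on a mainline kernel recent enough to carry it (2.6.34 or later); the hostname and mount point below are placeholders, and the out-of-tree backport branches have their own build steps:

    make menuconfig                      # set CONFIG_CEPH_FS=m (File systems -> Network File Systems -> Ceph)
    make modules && sudo make modules_install
    sudo modprobe ceph                   # load the client module
    sudo mount -t ceph mon1:/ /mnt/ceph  # "mon1" stands in for a monitor host
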
[2:15] <Led_Zeppelin> and why don't you guys move to freenode?
[2:15] <greglap> freenode?
[2:15] <greglap> oh, for irc
[2:15] <greglap> no idea
[2:16] <Led_Zeppelin> weirdos :-)
[2:17] <darkfader> well RHEL5.2 or 5.5 doesn't really matter backport-wise
[2:18] <darkfader> always the same old song
[2:18] <greglap> :(
[2:18] <darkfader> greglap: it's ok
[2:19] <darkfader> us unix people can't live without some dependency hell :)))
[2:19] <greglap> if it matters, all the server stuff is userspace so that should run on anything and you could just use desktop boxes for client testing?
[2:19] <darkfader> fair enough
[2:20] <darkfader> anyway, gnite
[2:20] <greglap> night, darkfader
[2:50] * greglap (~Adium@166.205.139.165) Quit (Quit: Leaving.)
[3:05] * Brock (~berwin@66-189-196-132.dhcp.yakm.wa.charter.com) has joined #ceph
[3:05] <Brock> anyone here?
[3:11] * ghaskins_mobile (~ghaskins_@66-189-114-103.dhcp.oxfr.ma.charter.com) Quit (Quit: This computer has gone to sleep)
[4:14] * ghaskins_mobile (~ghaskins_@66-189-114-103.dhcp.oxfr.ma.charter.com) has joined #ceph
[5:27] * ghaskins_mobile (~ghaskins_@66-189-114-103.dhcp.oxfr.ma.charter.com) Quit (Quit: This computer has gone to sleep)
[6:00] * Osso (osso@AMontsouris-755-1-7-230.w86-212.abo.wanadoo.fr) Quit (Quit: Osso)
[6:07] * deksai (~deksai@96-35-100-192.dhcp.bycy.mi.charter.com) Quit (Ping timeout: 480 seconds)
[7:55] * allsystemsarego (~allsystem@188.27.166.252) has joined #ceph
[12:54] * ghaskins_mobile (~ghaskins_@66-189-114-103.dhcp.oxfr.ma.charter.com) has joined #ceph
[13:10] * hijacker (~hijacker@213.91.163.5) Quit (Remote host closed the connection)
[13:12] * hijacker (~hijacker@213.91.163.5) has joined #ceph
[13:36] * Led_Zeppelin (~user@ool-4573f43b.dyn.optonline.net) Quit (Remote host closed the connection)
[13:39] * hijacker (~hijacker@213.91.163.5) Quit (Remote host closed the connection)
[13:41] * hijacker (~hijacker@213.91.163.5) has joined #ceph
[15:10] * deksai (~deksai@96-35-100-192.dhcp.bycy.mi.charter.com) has joined #ceph
[15:12] * Yoric (~David@213.144.210.93) has joined #ceph
[15:24] * Osso (osso@AMontsouris-755-1-7-230.w86-212.abo.wanadoo.fr) has joined #ceph
[15:25] * hijacker (~hijacker@213.91.163.5) Quit (Quit: Leaving)
[16:08] * f4m8 is now known as f4m8_
[16:22] * deksai (~deksai@96-35-100-192.dhcp.bycy.mi.charter.com) Quit (Ping timeout: 480 seconds)
[16:46] * guacamole (87f50805@ircip3.mibbit.com) has joined #ceph
[16:47] <guacamole> hi there
[16:47] * greglap (~Adium@166.205.136.99) has joined #ceph
[16:48] <guacamole> is there a way to find out at which osds a given file is stored?
[16:48] <guacamole> i'm just trying to track where my file is physically stored at the object level, and wondering whether there's an easy way to do that
[16:52] <greglap> guacamole: not really
[16:53] <greglap> files are striped across many OSDs so in most cases it wouldn't make sense to ask which OSD holds the file
[16:54] <guacamole> i understand that, but it would be nice to be able to check on that, perhaps for debugging purposes?
[16:55] <greglap> I'm not sure I know what you mean
[16:55] <guacamole> to check whether files are distributed as expected according to placement rules
[16:55] <greglap> ah
[16:56] <guacamole> say you want files to be distributed to different racks; how can i tell whether that's really happening?
[16:57] <greglap> well I guess it would be possible to expose the placement group via some tool, but I don't think there's anything that does it right now
[16:57] <guacamole> ok
[16:58] <greglap> well you can look at which OSDs are in which placement groups, although I don't remember the command off-hand
[16:59] <greglap> and make sure there are OSDs from separate racks in each or whatever
[17:00] <guacamole> then can i know which placement groups my file is assigned to?
[17:00] <guacamole> perhaps not?
[17:00] <greglap> I don't think there's an explicit check for that available right now
[17:00] <guacamole> okay.
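
For what it's worth, later ceph releases grew commands for exactly this check; a hedged sketch, assuming the stock "data" pool and noting that CephFS names a file's objects <inode-in-hex>.<stripe-index> (the object name below is made up):

    ceph osd map data 10000000abc.00000000   # prints the placement group and its up/acting OSD set
    ceph pg dump                             # lists every placement group with the OSDs it maps to
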
[17:01] <guacamole> another thing is, i tried looking up xattr of my file
[17:01] <greglap> there wouldn't be much point since right now all the filesystem files go in one pool, which has to have the same distribution rules
[17:01] <greglap> although allowing multiple pools (so different distribution rules) linked to different directories is in the pipeline
[17:01] <guacamole> it gives me: ceph.layout: chunk_bytes=4194304 stripe_count=1 object_size=4194304 preferred_osd=-1
[17:02] <guacamole> what does the last bit mean? preferred_osd?
[17:02] <guacamole> okay
[17:02] <greglap> so you were looking at one of the object files on an OSD?
[17:03] <guacamole> i created one file in my ceph mount, and i looked up its xattr using the xattr command
[17:03] <greglap> you saw that exposed via the ceph fs?
[17:03] <guacamole> yes
[17:03] <greglap> well that's not right I don't think
[17:04] <greglap> Ceph should be hiding its own xattrs
[17:04] <guacamole> oh
[17:04] <greglap> the preferred_osd lets you set, on creation for a single file, the osd it would like to use as the primary
[17:04] <guacamole> -1 means it's not set?
[17:04] <greglap> yeah
[17:04] <greglap> that's the default
[17:04] <guacamole> hmm okay
[17:05] <greglap> sorry, it lets you set the primary for a single object, not a single file
[17:06] <guacamole> weird. it shows up on a single file, perhaps it doesn't make much sense then
[17:07] <greglap> well objects are stored as regular files on the local filesystem
[17:07] <greglap> and ceph files are striped across objects but those xattrs go on each object
[17:09] <guacamole> okay perhaps i'm looking at kinda default xattr for all those objs
[17:10] <greglap> yes, those are default settings for everything
[17:10] <greglap> they really shouldn't be appearing unless you're looking at the Ceph objects via a local fs
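
A small sketch of reading that layout from the client side with a stock xattr tool (the path is made up); the fields match what was pasted above:

    getfattr -n ceph.layout /mnt/ceph/somefile
    # object_size and stripe_count describe the striping; preferred_osd=-1 means no
    # preferred primary was requested, so CRUSH picks the placement on its own
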
[17:10] * deksai (~deksai@dsl093-003-018.det1.dsl.speakeasy.net) has joined #ceph
[17:13] <guacamole> anyway i wanted to play with different placement rules, and it appears ceph doesn't expose much info as you said :(
[17:13] <guacamole> thanks for helping me out though, greg
[17:14] <greglap> yep
[17:14] <greglap> what kind of placement rules did you want to play with?
[17:14] <greglap> you can still modify those via the CRUSH map
[17:14] <guacamole> yeah but it's not very flexible.
[17:15] <greglap> what were you hoping for?
[17:15] <guacamole> what we came up with is some kinda file placement strategies for users
[17:15] <guacamole> so we are hoping to use ceph to evaluate the strategies, but in the current setting, it's not possible to define placements for each file flexibly
[17:16] <greglap> ah, yeah
[17:16] <guacamole> do you think there could be some easy hack to do that?
[17:16] <greglap> no :(
[17:16] <greglap> we plan to eventually support placing different directories in different pools
[17:17] <guacamole> i suppose that's also not an easy hack :)
[17:17] <greglap> so you could have 4 or 5 different pools with different data storage strategies and put different users on different pools and mission-critical data on a pool with more replication or whatever
[17:18] <greglap> I don't recall exactly how Sage was thinking we could do it
[17:18] <greglap> there's been some discussion about it on the list, let me see if I can find the bug
[17:18] <guacamole> how about letting users have their own crush rule, and whenever they read/write files, they use that rule?
[17:19] <greglap> I can't think of any way to make that happen, unfortunately
[17:19] <guacamole> ok
[17:19] <greglap> the Ceph FS is built on top of RADOS and doesn't really know about CRUSH
[17:20] <guacamole> hmm ok
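
The usual CRUSH edit cycle, sketched with the standard tools (file names are arbitrary):

    ceph osd getcrushmap -o crush.bin     # fetch the compiled map from the monitors
    crushtool -d crush.bin -o crush.txt   # decompile it to editable text
    $EDITOR crush.txt                     # adjust buckets and rules (hosts, racks, ...)
    crushtool -c crush.txt -o crush.new   # recompile
    ceph osd setcrushmap -i crush.new     # inject the new map into the cluster
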
[17:21] <guacamole> the feature you mentioned (multiple pools for different users) will be available soon?
[17:22] <greglap> mmmm, not sure
[17:22] <guacamole> ok
[17:22] <greglap> I think it's slated for a version before the end of the year
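
The pool side of that would presumably look something like this with the ceph CLI (the pool name and PG count are made up; binding a directory to a pool is the part that was still in the pipeline here, and later arrived as a directory layout xattr):

    ceph osd pool create critical 128     # new pool with 128 placement groups
    ceph osd pool set critical size 3     # keep three replicas of everything stored in it
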
[17:23] <guacamole> ok i guess i'll check back with you later if i have more questions. i'm glad the ceph team is active on irc
[17:23] <guacamole> it's very helpful for us
[17:23] <greglap> :)
[17:23] <greglap> just try and keep it during business hours Pacific time and we should be around
[17:23] <guacamole> okie dokie
[17:24] * guacamole (87f50805@ircip3.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[17:25] <greglap> wido: are you back now?
[17:32] * greglap (~Adium@166.205.136.99) Quit (Quit: Leaving.)
[17:52] <gregaf> Brock: we are now
[18:27] * alexxy (~alexxy@79.173.82.178) has joined #ceph
[19:07] * Osso (osso@AMontsouris-755-1-7-230.w86-212.abo.wanadoo.fr) Quit (Quit: Osso)
[19:16] * Yoric_ (~David@213.144.210.93) has joined #ceph
[19:16] * Yoric (~David@213.144.210.93) Quit (Read error: Connection reset by peer)
[19:16] * Yoric_ is now known as Yoric
[19:32] * Osso (osso@AMontsouris-755-1-7-230.w86-212.abo.wanadoo.fr) has joined #ceph
[19:44] * Yoric (~David@213.144.210.93) Quit (Quit: Yoric)
[20:15] <wido> gregaf: i'm back
[20:15] <gregaf> hope you had a good trip!
[20:16] <gregaf> I was thinking of you for some reason but now I can't remember why ;)
[20:16] <wido> hehe, ok :)
[20:16] <wido> well, i just found a bug again, but i think it's btrfs
[20:20] <iggy> too bad you aren't getting paid per bug you find
[20:21] <gregaf> mmm, Dilbert bug loops
[20:21] <wido> situation: One of my OSDs was down due to #371, since sagewk couldn
[20:21] <wido> ah damn, pressed enter
[20:22] <wido> since sagewk couldn't find the cause, i formatted the OSD and brought it up. The cluster started to recover, but then my OSDs all started to go down with: http://pastebin.com/RfitcJfh
[20:23] <wido> all the OSDs were upgraded to this morning's unstable branch and running 2.6.35 (not sure which minor version)
[20:23] <wido> after a reboot I can start the OSD again, but it's hit and miss whether it hits this bug again.
[20:24] <wido> Should I open an issue? Right now it brought 8 of my 12 OSDs down
[20:27] <gregaf> I think Sage is looking at this; it's a btrfs bug though so don't put anything in the tracker
[20:29] <wido> I thought so, for now i'll wait and see what sagewk has to say about it
[20:29] <wido> Can't play with my cluster though :(
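
A hedged sketch of keeping an eye on recovery from the command line while the OSDs come back:

    ceph health    # one-line summary (HEALTH_OK / HEALTH_WARN ...)
    ceph -s        # point-in-time status, including degraded/peering PG counts
    ceph -w        # stream cluster events as placement groups peer and recover
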
[20:34] * Osso (osso@AMontsouris-755-1-7-230.w86-212.abo.wanadoo.fr) Quit (Quit: Osso)
[20:37] * Osso (osso@AMontsouris-755-1-7-230.w86-212.abo.wanadoo.fr) has joined #ceph
[21:00] * Osso (osso@AMontsouris-755-1-7-230.w86-212.abo.wanadoo.fr) Quit (Quit: Osso)
[21:03] * Osso (osso@AMontsouris-755-1-7-230.w86-212.abo.wanadoo.fr) has joined #ceph
[21:08] * Osso (osso@AMontsouris-755-1-7-230.w86-212.abo.wanadoo.fr) Quit (Quit: Osso)
[21:10] * Osso (osso@AMontsouris-755-1-7-230.w86-212.abo.wanadoo.fr) has joined #ceph
[21:29] * deksai (~deksai@dsl093-003-018.det1.dsl.speakeasy.net) Quit (Ping timeout: 480 seconds)
[21:43] * Osso (osso@AMontsouris-755-1-7-230.w86-212.abo.wanadoo.fr) Quit (Quit: Osso)
[21:45] * Osso (osso@AMontsouris-755-1-7-230.w86-212.abo.wanadoo.fr) has joined #ceph
[22:09] * allsystemsarego (~allsystem@188.27.166.252) Quit (Quit: Leaving)
[22:13] * deksai (~deksai@dsl093-003-112.det1.dsl.speakeasy.net) has joined #ceph
[22:21] * deksai (~deksai@dsl093-003-112.det1.dsl.speakeasy.net) Quit (Ping timeout: 480 seconds)
[22:21] <yehudasa> wido: are you there?
[22:50] * ghaskins_mobile (~ghaskins_@66-189-114-103.dhcp.oxfr.ma.charter.com) Quit (Quit: This computer has gone to sleep)
[23:01] * ghaskins_mobile (~ghaskins_@66-189-114-103.dhcp.oxfr.ma.charter.com) has joined #ceph
[23:04] * ghaskins_mobile (~ghaskins_@66-189-114-103.dhcp.oxfr.ma.charter.com) Quit ()
[23:13] * deksai (~deksai@dsl093-003-112.det1.dsl.speakeasy.net) has joined #ceph
[23:48] * sagelap (~sage@ip-66-33-206-8.dreamhost.com) has joined #ceph
[23:48] * sagelap (~sage@ip-66-33-206-8.dreamhost.com) has left #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.