#ceph IRC Log

IRC Log for 2012-09-04

Timestamps are in GMT/BST.

[23:02] -coulomb.oftc.net- *** Looking up your hostname...
[23:02] -coulomb.oftc.net- *** Checking Ident
[23:02] -coulomb.oftc.net- *** No Ident response
[23:02] -coulomb.oftc.net- *** Found your hostname
[23:02] * CephLogBot (~PircBot@rockbox.widodh.nl) has joined #ceph
[23:02] * Topic is 'ceph development, discussion'
[23:02] * Set by sage!~sage@cpe-76-94-40-34.socal.res.rr.com on Tue Jul 03 03:56:33 CEST 2012
[23:03] [mikeryan VERSION]
[23:03] <elder> sagewk, I'm going to update testing by adding those two commits. If you want me to use something other than current ceph-devel/testing please let me know.
[23:03] <sagewk> k
[23:04] <sagewk> going to smoke-test wip-btrfs2 before updating to that
[23:04] <sagewk> mikeryan: can i kill off the other jobs then? which one hit it?
[23:04] <mikeryan> sagewk: 15643
[23:05] <mikeryan> kill the rest, please!
[23:06] <sagewk> i did a kill -STOP on your teuth job.. can you clean it up when you're done with it?
[23:06] <mikeryan> yep, you got it
[23:06] <mikeryan> thanks
[23:07] <amatter> gregaf: trying to document it, but it's in a production environment so I'm trying to recreate the same issue in my lab
[23:08] * nhmhome (~nh@67-220-20-222.usiwireless.com) Quit (Ping timeout: 480 seconds)
[23:08] * nhm_ (~nh@67-220-20-222.usiwireless.com) Quit (Ping timeout: 480 seconds)
[23:10] <stan_theman> can you mount a specific pool with ceph-fuse or cephfs?
[23:10] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[23:10] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[23:11] <gregaf> amatter: what versions of the kernel and ceph userspace are you using?
[23:11] * Leseb_ (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[23:12] <amatter> hi stan_theman: yes, you can. you need to create a directory in the root cephfs that you want to map to the specific pool then
[23:12] <gregaf> and elder, does http://pastebin.com/GbxwdK5M look familiar to you? I'm thinking specifically that there was some parsing bug we ran into and fixed a few weeks (?) ago
[23:13] <gregaf> otherwise this is just very odd to me and I'm not involved enough in the kernel to know what that trace means
[23:13] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[23:14] <elder> Looks like a ceph fs bug.
[23:14] <elder> But I haven't been working with the file system in quite some time so the fact that it's not familiar doesn't surprise me.
[23:14] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has left #ceph
[23:14] <Burnie> weird
[23:14] <Burnie> file /usr/local/bin/ceph-osd
[23:14] <Burnie> /usr/local/bin/ceph-osd: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.18, not stripped
[23:15] <Burnie> compiled against kernel 3.3.3 :)
[23:15] <dmick> Burnie: yeah, not sure what that "for GNU/Linux..." really means
[23:16] <dmick> I've seen the same thing tho
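
For what it's worth, that "for GNU/Linux 2.6.18" string is the minimum kernel ABI the linker recorded in the binary's ELF note; it comes from the build toolchain (glibc), not from the kernel the build host was running. A small sketch of inspecting it, assuming readelf from binutils is available:

    # file(1) reads the version from the NT_GNU_ABI_TAG note
    # embedded in the executable at link time
    readelf -n /usr/local/bin/ceph-osd
    # look for a line like:  OS: Linux, ABI: 2.6.18
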
[23:16] <stan_theman> amatter: it got cut off after "specific pool then". reading a mailing list page on it now too
[23:16] <amatter> stan_theman: use "ceph mds add_data_pool xx" where xx is the pool id determined by examining "ceph osd dump" to make the pool available to mds
[23:16] <stan_theman> ah
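
A minimal sketch of that step, assuming the target pool already exists (the pool id 6 below is illustrative; take the real id from the dump output):

    # find the numeric id of the target pool in the osdmap
    ceph osd dump | grep '^pool'
    # allow the mds to place file data in that pool
    ceph mds add_data_pool 6
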
[23:16] * EmilienM (~EmilienM@ADijon-654-1-107-27.w90-39.abo.wanadoo.fr) Quit (Quit: kill -9 EmilienM)
[23:16] <amatter> stan_theman: then you need to map the actual directory to the pool which is a little more complex.
[23:17] <stan_theman> so i should stick with rbd where i can? :P
[23:17] <Burnie> ceph osd's must be identified by numbers and not names ? :)
[23:18] <amatter> Burnie: it's a bug at the moment, there is a tracker for the fix
[23:18] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[23:18] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Ping timeout: 480 seconds)
[23:18] * Leseb_ is now known as Leseb
[23:18] <Burnie> alright :) one thing less to worry about then
[23:18] <amatter> stan: "cephfs /mnt/ceph1-kernel/pool-hs-san-1-ha set_layout --pool 6 --stripe_unit 4194304 --object_size 4194304 --stripe_count 1" is what I used to map a folder to pool 6
[23:19] <stan_theman> i'd read a blog post about that single line amatter :)
[23:19] <stan_theman> thanks though, the real world example definitely helps
[23:19] <amatter> stan: apparently you should be able to omit the non-pool arguments, but there also seems to be a bug where it won't go if you do
[23:20] <amatter> I should add a page on the wiki because this seems to be a common request
[23:21] <amatter> stan: those numbers I used in the additional arguments are the defaults the utility should be filling in itself
[23:21] <stan_theman> oh! i was wondering where they were coming from
[23:24] <amatter> stan: btw, they are setting how the striping is configured for that folder. More info here if you're interested in tuning http://ceph.com/docs/master/dev/file-striping/
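
Putting amatter's steps together, a hedged end-to-end sketch using the same values from the log (pool id 6 and the directory path are illustrative; the layout arguments spell out the defaults because of the bug mentioned above where omitting them fails):

    # create the directory that will be bound to the pool
    mkdir /mnt/ceph1-kernel/pool-hs-san-1-ha
    # bind new files under that directory to pool 6
    cephfs /mnt/ceph1-kernel/pool-hs-san-1-ha set_layout --pool 6 \
        --stripe_unit 4194304 --object_size 4194304 --stripe_count 1
    # note: the layout applies to files created after this point;
    # existing files keep the layout they were written with
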
[23:26] <amatter> gregaf: Linux rmi-orem-ceph1-mds1.readymicro.local 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
[23:27] <amatter> gregaf: ceph version 0.48.1argonaut (commit:a7ad701b9bd479f20429f19e6fea7373ca6bba7c)
[23:30] <dmick> com.cloud.utils.exception.CloudRuntimeException: DB Exception on: null
[23:30] <dmick> yes. I would have an exception on null as well.
[23:30] <dmick> well played CloudStack.
[23:33] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[23:41] * adjohn is now known as Guest5758
[23:41] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[23:45] * Guest5758 (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[23:45] * adjohn is now known as Guest5759
[23:45] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[23:46] <Tobarja1> now i have an experiment for the weekend... i took a tar of my mounted cephfs filesystem and just untarred it to another folder. a folder that had about 40 .avi's of 200MB or more is completely hosed: none are over 25MB, most are <10MB.
[23:49] * Guest5759 (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.