#ceph IRC Log

IRC Log for 2012-11-16

Timestamps are in GMT/BST.

[0:04] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[0:06] <flesh> dmick after trying to launch mkcephfs so many times ... there might be some rubbish around, right?
[0:06] <flesh> is there a way of cleaning everything?
[0:07] * drokita (~drokita@199.255.228.10) Quit (Quit: Leaving.)
[0:10] * vjarjadian (~IceChat7@5ad6d001.bb.sky.com) has joined #ceph
[0:12] <vjarjadian> hi, just found ceph... planning to start testing once i've finished reading up a bit more... any suggestions/tips for someone new to Ceph?
[0:13] <joshd> vjarjadian: look at the docs (http://ceph.com/docs/master/), not the wiki
[0:14] <Psi-jack> And good luck. :)
[0:14] <vjarjadian> from the look of it... i'll need it...
[0:14] <Psi-jack> I /still/ can't find docs on everything the 'ceph' command does.
[0:14] <vjarjadian> looks absolutely brilliant but very complex
[0:16] <Psi-jack> I'm still also trying to determine if it's worth anything to me. ;)
[0:16] <vjarjadian> the price is certainly attractive...
[0:17] <Psi-jack> Definitely not faster than my other solution, but all I care about at this moment is high availability for my Guest OS disks, and I can CRM the shared volumes they bring it. ;)
[0:18] <vjarjadian> what is your other solution? if you dont mind me asking
[0:18] <joshd> Psi-jack: http://ceph.com/docs/master/rados/operations/control/
[0:18] * MikeMcClurg (~mike@cpc10-cmbg15-2-0-cust205.5-4.cable.virginmedia.com) Quit (Quit: Leaving.)
[0:19] <Psi-jack> joshd: Dude! What the heck, this is buried deep in. Thanks. :)
[0:20] <Psi-jack> heh, ceph.com's going to need to implement solr search indexing on the site for it to be searchable. stat!
[0:20] <joshd> Psi-jack: stuff was just rearranged, some things are deeper now
[0:20] <joshd> hmm, there used to be a search. I wonder where it went
[0:21] <Psi-jack> heh
[0:21] <joshd> ah, it's on the bottom of the sidebar now
[0:21] <Psi-jack> Oh sheesh.. Horrible spot. ;)
[0:21] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Quit: Leaving)
[0:23] <Psi-jack> Hmmm
[0:23] <Psi-jack> Looks like it is possibly hooked up to solr.
[0:24] <Psi-jack> And: http://ceph.com/docs/master/man/8/ceph - The URL in there needs to be updated.
[0:24] <Psi-jack> Which is why I couldn't find it. ;)
[0:25] <dmick> flesh: you can always blow away the daemons manually (ceph-*) and the data dirs (/var/lib/ceph dirs by default)
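(spelled out, a rough version of the cleanup dmick describes, assuming the stock /var/lib/ceph paths and that nothing in there needs keeping:)
    # stop any daemons that are still running (their names all start with ceph-)
    sudo pkill ceph-mon; sudo pkill ceph-osd; sudo pkill ceph-mds
    # then wipe the default data directories so the next mkcephfs starts clean
    sudo rm -rf /var/lib/ceph/mon/* /var/lib/ceph/osd/* /var/lib/ceph/mds/*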
[0:27] <joshd> Psi-jack: you found the one place something in the repository references a specific doc url
[0:27] <dmick> heh
[0:27] <Psi-jack> Anyways, between dmick and joshd, I have one question I think one of you two might know. Proxmox VE uses qemu-rbd for the ceph block device. Is it reasonably safe to say that the allocation factor of the disks are thin provisioned, not thick provisioned?
[0:27] <Psi-jack> joshd: heh, heck yeah I did. ;)
[0:28] <dmick> Psi-jack: yes. space is not used until written to
[0:28] <Psi-jack> That's what I thought. I wonder how well this would apply if I did a qemu-img to convert qcow2 disks to rbd-ceph.. If it would reduce, remain about the same, or grow in size during the conversion.
[0:29] <benpol> Psi-jack: I think you'd lose the sparseness in the qcow2 image in the process.
[0:30] <benpol> (but maybe I'm missing the context here)
[0:31] <Psi-jack> benpol: No, you got the context right.
[0:31] <Psi-jack> I'm of course hoping to /keep/ the sparseness (aka thin provision)
[0:31] <benpol> yeah, my thin-provisioned qcow2 images just ballooned up to the full size in the process of conversion.
[0:31] * flesh (547908cc@ircip1.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[0:32] <Psi-jack> Most of my qcow2's were actually converted from raw files or lv's and they shrunk pretty good.
[0:33] <dmick> did you see the -S switch to qemu-img?
[0:33] * benpol did not see the -S switch
[0:33] <dmick> I don't know if it works
[0:34] <Psi-jack> dmick: That's what I think I used to convert them TO sparse qcow2's, actually. ;)
[0:34] <Psi-jack> Miraculously saved hundreds of gigabytes in the process. :)
[0:34] * vjarjadian (~IceChat7@5ad6d001.bb.sky.com) Quit (Ping timeout: 480 seconds)
[0:34] <dmick> I guess what I'm asking is did you use it when converting from qcow2 to rbd
[0:35] <Psi-jack> dmick: Haven't yet gotten that far, yet. I'm still testing actual ceph out itself to see if it's reasonable to my needs I'm aiming for.
[0:35] <benpol> dmick: I'd be happy to try it sometime soon.
[0:35] <dmick> oh that was benpol that ballooned. I see.
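(for context, roughly what the qcow2-to-rbd conversion under discussion looks like; the pool and image names are made up, and -S is the zero-detection knob dmick brings up again later:)
    # convert a qcow2 image straight into an rbd image; qemu-img skips runs of
    # zeros it detects (tunable with -S), which is what keeps the result thin
    qemu-img convert -f qcow2 -O raw disk0.qcow2 rbd:rbd/vm-107-disk-1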
[0:36] <Psi-jack> Hmm speaking of which... How does qemu-rbd know what "disk" it's using for its pool? And how can I see what disks there are in the pool?
[0:36] * benpol someone stick a needle in me
[0:36] * Psi-jack sticks a leather-working needle in benpol.
[0:36] <dmick> uh...a pool is distributed across the cluster. Did you mean "which pool it's using for its disk image"?
[0:37] <dmick> (when you say "pool" are you talking about RADOS pools?)
[0:38] <Psi-jack> Hmmm. Not quite. Or are pools individualized per instance? And yes, RADOS pools.
[0:38] <dmick> so can you rephrase the question? I don't know what "what disk it's using for its pool" means
[0:39] <Psi-jack> Hmm, trying to figure out how to ask the question. ;)
[0:39] <joshd> I suspect this will help: http://ceph.com/docs/master/rados/operations/data-placement/
[0:39] <dmick> rbd images live in RADOS pools. By default, that's the pool named rbd, but it can be your own.
[0:39] <Psi-jack> basically, I provisioned one storage pool in my Proxmox VE storage.cfg, which is using 1 pool named rbd.
[0:39] <dmick> you specify pool name when you refer to them
[0:39] <Psi-jack> When I made the VM itself, I had it provision 20GB of that pool
[0:40] <dmick> oh, so "pool" is used in two contexts here. I'm not familiar with Proxmox terminology
[0:40] <Psi-jack> dmick: I'm not sure the terminology or contexts are different at all.
[0:41] <Psi-jack> Because when I setup the storage.cfg to use pool: rbd, it /created/ the pool when it initialized the new storage, and when I created a disk image, it created it within the rbd pool.
[0:41] <Psi-jack> And it still shows I have 88GB of 99GB available within the rbd pool.
[0:41] <dmick> well as far as RADOS is concerned
[0:42] <dmick> you can see which rbd images exist with rbd ls
[0:42] <dmick> (which takes -p pool, or -l for more info)
[0:42] <Psi-jack> Aha!
[0:42] <Psi-jack> That's what I was looking for! :)
[0:42] <dmick> I don't know what "created the pool" means from the Proxmox side
[0:42] <Psi-jack> dmick: Basically, osd pool create rbd
[0:43] <Psi-jack> Err, ceph osd pool create rbd :)
[0:43] <dmick> so when you say "it created the pool", you mean Proxmox issued that ceph command?
[0:43] <Psi-jack> dmick: Yes. The "rbd" didn't exist until after I configured Proxmox VE to have anything.
[0:44] <Psi-jack> But, rbd ls shows my vm-107-disk-1, which is nice. ;)
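(pulling the commands from this exchange together; the pg count is only an example value:)
    ceph osd pool create rbd 128   # roughly what Proxmox is said to have issued when the storage was added
    rbd ls -p rbd -l               # long listing of the images in that pool, e.g. vm-107-disk-1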
[0:44] <dmick> I am very surprised that Proxmox created a RADOS pool, but hey, good news?...
[0:44] <Psi-jack> dmick: They're working on really vamping up stuff for ceph. ;)
[0:44] <dmick> Search for "ceph" returns nothing :)
[0:44] <dmick> ah well
[0:44] <Psi-jack> Eventually, ceph support will be fully implemented in the WebUI. ;)
[0:45] <Psi-jack> For now, minimal support for it within the existing UI constraints are there. Just not the initial storage setup stuff for it.
[0:45] <dmick> search sucks, everywhere. google finds http://pve.proxmox.com/wiki/Storage:_Ceph which is very cool
[0:45] <Psi-jack> heh
[0:46] <Psi-jack> Yeah. It's definitely looking promising, this ceph. So far a LOT better than sheepdog was, holy crap was that a nightmare.
[0:46] <Psi-jack> Easy to use. Sure.. Easy for it to totally crap itself out for lunch, dinner, dessert, and everything, for no apparent reason. ;)
[0:47] <dmick> OK. I'd thought you were on your own here trying to glue them together, but yeah, ok, I see there is Ceph support in Proxmox now. Nice.
[0:47] <Psi-jack> I had sheepdog running on 6 systems, 4 of which were my actual hypervisor hosts.
[0:47] <Psi-jack> dmick: Yep. :)
[0:47] <dmick> so there may be a Proxmox way to list images, but you certainly can directly manipulate the cluster as well
[0:48] <Psi-jack> dmick: Which is why I'm trying to learn the ceph commands as well, so /I/ know what's going on, and how I can possibly tune things to run a little better. ;)
[0:48] <dmick> understood
[0:48] <Psi-jack> Promox VE was.. Amazingly easy to work with Ceph. :)
[0:48] <Psi-jack> After I upgraded to 2.2, which was its own nightmare. ;)
[0:49] <Psi-jack> Somehow, between 2.1 and 2.2, they made kvm memory balloon into a blackhole. OOM Kill kernel panicking at bootup, just because you set up a balloon on the VM. ;)
[0:50] <Psi-jack> memory: 2048, balloon: 2048, OOM Kill EVERYTHING, and panic. :)
[0:53] * PerlStalker (~PerlStalk@perlstalker-1-pt.tunnel.tserv8.dal1.ipv6.he.net) Quit (Quit: rcirc on GNU Emacs 24.2.1)
[0:55] <elder> joshd, now that I'm all set with refcounting rbd_dev structures, can you tell me a scenario for me to test the problem?
[0:56] * vjarjadian (~IceChat7@5ad6d001.bb.sky.com) has joined #ceph
[0:58] * BManojlovic (~steki@85.222.180.134) Quit (Quit: Ja odoh a vi sta 'ocete...)
[0:58] <dmick> benpol: fwiw, the default -S for qemu-img seems to be 8 (4kB)
[0:59] <benpol> dmick: thanks
[1:00] <benpol> Anyone know if qemu-img can create "format 2" images?
[1:00] <joshd> elder: mount an fs on top of an rbd device, and then try to unmap it. it should fail with EBUSY
[1:00] <joshd> benpol: no, it can't
[1:02] <elder> Ok.
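(a minimal way to reproduce what joshd describes, assuming a throwaway image in the default pool and the kernel rbd client; device numbering may differ:)
    rbd create test-img --size 1024   # 1 GB scratch image
    rbd map test-img                  # shows up as /dev/rbd0 (number may vary)
    mkfs.ext4 /dev/rbd0
    mount /dev/rbd0 /mnt
    rbd unmap /dev/rbd0               # expected to fail with EBUSY while mounted
    umount /mnt
    rbd unmap /dev/rbd0               # should succeed now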
[1:03] <benpol> another way to lose sparseness is in converting a "format 1" image to "format 2" via the export/import path.
[1:04] <joshd> benpol: yeah, import supports sparse files, but export doesn't yet
[1:04] <Psi-jack> Blasted!
[1:05] <benpol> joshd: that's worth a chuckle ;)
[1:05] <davidz> Should the "testing" kernel branch be compatible with 0.54 ceph?
[1:05] <dmick> benpol Psi-jack: http://tracker.newdream.net/issues/3499
[1:08] <benpol> dmick: excellent, I'll keep an eye on that
[1:09] <Psi-jack> Hehe, indeed.
[1:09] <dmick> I don't have the time to really figure out whether it will or won't work without that, but it does look relevant at least, and needs some further study
[1:11] * danieagle (~Daniel@177.99.134.146) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[1:11] * miroslav (~miroslav@wlan-clients-8094.sc12.org) Quit (Quit: Leaving.)
[1:12] * gaveen (~gaveen@112.134.113.249) Quit (Remote host closed the connection)
[1:17] <sagewk> opinions on http://tracker.newdream.net/issues/3052 ?
[1:18] <sagewk> i suppose we should really only try the ioctl if it is actually btrfs... :/
[1:18] * LarsFronius (~LarsFroni@2a02:8108:380:12:286c:83c3:8d20:7c6e) has joined #ceph
[1:30] <dmick> is there a way to sense btrfs that won't fail on some-other-old-filesystem-that-doesn't-do-ioctls-right?
[1:32] * buck (~buck@bender.soe.ucsc.edu) has left #ceph
[1:33] <sagewk> yeah, statfs(2).
[1:33] <sagewk> i will be less lazy.
[1:35] <dmick> well it's kind of a shame that there isn't a reliable ioctl() failure, but, then, ioctl() is always a crapshoot
[1:36] <dmick> (and amusingly statfs(2) doesn't enumerate btrfs yet :) )
[1:40] <sagewk> dmick: wip-3052
[1:40] <sagewk> we #define the btrfs magic ourselves
[1:47] <dmick> lgtm
[1:50] <Psi-jack> Well, so far, with 400 pgs, I'm not getting any kernel messages while trying to use the disk I/O.
[1:51] <Psi-jack> And.. I spoke just too soon.
[1:51] <Psi-jack> LOL
[1:51] <benpol> Psi-jack: out of curiosity, what fs are you using on your OSDs?
[1:51] <Psi-jack> benpol: ext4 on top of a ZFS zvol subpool.
[1:52] <benpol> Ah yes, you're the ZFS person. :)
[1:52] <Psi-jack> hehe yep.
[1:53] <dmick> wasn't paying close attention to the pre-400-pgs story Psi-jack; what kind of kernel complaints?
[1:54] <Psi-jack> dmick: It's on my VM's using the rbd-ceph storage. Disk I/O locks up enough to cause CPU stuck for 22s (and up), from kworker.
[1:54] <Psi-jack> Things as simple as apt-get update will trigger it off.
[2:01] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[2:01] <Psi-jack> It's happening seemingly less with the pgs up to 400, from 128 it was originally at.
[2:02] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) has joined #ceph
[2:08] <dmick> oh so you get "stuck process" messages?
[2:08] <dmick> "cause CPU stuck". yeah, ok.
[2:09] <Psi-jack> Yeah. :)
[2:12] * adjohn (~adjohn@69.170.166.146) Quit (Quit: adjohn)
[2:13] * LarsFronius (~LarsFroni@2a02:8108:380:12:286c:83c3:8d20:7c6e) Quit (Quit: LarsFronius)
[2:18] * yoshi (~yoshi@p11108-ipngn1401marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[2:18] * sagelap1 (~sage@104.sub-70-197-150.myvzw.com) has joined #ceph
[2:19] * sagelap (~sage@2607:f298:a:607:88df:e5ed:8448:a287) Quit (Ping timeout: 480 seconds)
[2:27] * aliguori (~anthony@cpe-70-123-145-75.austin.res.rr.com) Quit (Remote host closed the connection)
[2:28] * senner (~Wildcard@24-196-37-56.dhcp.stpt.wi.charter.com) has joined #ceph
[2:28] * senner (~Wildcard@24-196-37-56.dhcp.stpt.wi.charter.com) Quit ()
[2:30] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[2:30] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[2:50] * davidz1 (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[2:51] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Read error: Operation timed out)
[2:55] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) Quit (Read error: Operation timed out)
[2:59] * wilson (~wilson@CPE001c1025d510-CM001ac317ccea.cpe.net.cable.rogers.com) Quit ()
[3:01] * adjohn (~adjohn@69.170.166.146) has joined #ceph
[3:06] * sagelap (~sage@197.sub-70-197-139.myvzw.com) has joined #ceph
[3:06] * pedahzur (~jkugler@216-67-98-32.static.acsalaska.net) Quit ()
[3:11] * sagelap1 (~sage@104.sub-70-197-150.myvzw.com) Quit (Ping timeout: 480 seconds)
[3:12] * yoshi_ (~yoshi@2400:4030:d0:f200:cc1a:8a73:55e3:f423) has joined #ceph
[3:14] * sagelap (~sage@197.sub-70-197-139.myvzw.com) Quit (Ping timeout: 480 seconds)
[3:16] * yoshi (~yoshi@p11108-ipngn1401marunouchi.tokyo.ocn.ne.jp) Quit (Read error: Operation timed out)
[3:18] * maxiz (~pfliu@111.194.207.227) Quit (Quit: Ex-Chat)
[3:24] * winston-d (~zhiteng@pgdmzpr01-ext.png.intel.com) has joined #ceph
[3:25] <winston-d> hi, ceph gurus. I encountered some problem when using ceph.
[3:26] <winston-d> the / partition is filled up as ceph continue to run, but if i stop or restart ceph, the space will be freed and then as ceph runs, it slowly fills up again
[3:30] * yoshi (~yoshi@2400:4030:d0:f200:7472:907e:116d:a643) has joined #ceph
[3:34] * yoshi_ (~yoshi@2400:4030:d0:f200:cc1a:8a73:55e3:f423) Quit (Ping timeout: 480 seconds)
[3:36] * yoshi_ (~yoshi@2400:4030:d0:f200:74ed:2c79:1910:d16c) has joined #ceph
[3:41] <Psi-jack> Which filesystem would be most optimal for Ceph right now? XFS, ext4, or btrfs?
[3:42] * yoshi (~yoshi@2400:4030:d0:f200:7472:907e:116d:a643) Quit (Read error: Operation timed out)
[3:43] * yoshi_ (~yoshi@2400:4030:d0:f200:74ed:2c79:1910:d16c) Quit (Read error: Operation timed out)
[3:44] * yoshi (~yoshi@EM117-55-68-33.emobile.ad.jp) has joined #ceph
[3:52] * adjohn (~adjohn@69.170.166.146) has left #ceph
[3:57] * rweeks (~rweeks@64.55.78.101) has joined #ceph
[4:06] <Psi-jack> Wow..
[4:07] <Psi-jack> Okay, ceph runs MUUUUUCh faster, so far, on straight XFS than on a ZVol subvolume with ext4.
[4:09] <rweeks> how are you writing to ceph
[4:09] <rweeks> ?
[4:09] <rweeks> (object, rbd or cephfs)
[4:13] <Psi-jack> rbd
[4:13] <rweeks> interesting
[4:13] <rweeks> newer kernel?
[4:14] <Psi-jack> My storage servers all use ZFS right now, so I tossed a couple OSD's directly on 2 of my Proxmox VE 2.2 servers (Debian 6.0.6 with kernel 2.6.32), and I can see a huuuuuge difference.
[4:14] <dmick> winston-d: I assume you mean / on the host where the ceph daemons are running (and not inside the ceph filesystem)?
[4:14] <rweeks> Psi-jack: we have not done any testing or dev on ceph with zfs underneath.
[4:15] <Psi-jack> Yeah, it's slow. ;)
[4:15] <winston-d> dmick, yeah, / on ceph daemon.
[4:15] <dmick> Psi-jack: interesting. slightly depressing, but interesting
[4:15] <Psi-jack> I literally had to make a ZVol with ext4.
[4:15] <dmick> winston-d: so is it logs?
[4:15] <rweeks> and I can't speak for sage but I don't think we intend on doing much with zfs
[4:15] <Psi-jack> Else, mkcephfs itself would flat out fail.
[4:15] <dmick> rweeks: zfs keeps being interesting for its snapshotting
[4:15] <rweeks> agreed
[4:15] <rweeks> but it's not interesting for its licensing.
[4:16] <dmick> rweeks: Not My Problem :)
[4:16] <rweeks> I know...
[4:16] <Psi-jack> rweeks: heh, yeah. It'll never make it to mainline, but PPA's are out for ubuntu to utilize DKMS no problem.
[4:16] <rweeks> I think btrfs will get there
[4:16] <rweeks> and it's all unencumbered
[4:16] <Psi-jack> btrfs has some nice ideas, but, I dunno...
[4:16] <dmick> Psi-jack: how was mkcephfs failing again? was it just the direct IO for the journal? Did you try turning that off?
[4:16] <Psi-jack> Their idea of "fixing" corrupt files is to have another copy, without that, you can't fix anything.
[4:17] <Psi-jack> dmick: I believe it was direct io, yes, and no, didn't turn it off, didn't know how.
[4:17] <dmick> argh. I pasted the command but it got lost in the shuffle I guess; I thought you were trying that
[4:17] <Psi-jack> hehe
[4:17] * mdrnstm (~mdrnstm@206-169-78-213.static.twtelecom.net) Quit (Quit: Leaving.)
[4:17] <rweeks> dmick: how did your concert go?
[4:18] * yoshi_ (~yoshi@p30114-ipngn1501marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[4:18] <dmick> the LACDC gig is tomorrow night
[4:18] <rweeks> ohhhh
[4:18] <rweeks> I misread
[4:18] <rweeks> can't go, but I hope you rock out
[4:19] <winston-d> dmick, no, i don't think so. actually i checked every folder under /, the total space they consumed is less than 5G, but my / is ~40G
[4:20] <Psi-jack> Still seems a little slow now that it's write heavy, but it's on 2 single SATA disks.
[4:20] <dmick> Psi-jack: <joshd> you can set "journal dio = false" in the osd section of your ceph.conf, and it should work (using fsync or fdatasync instead of directio)
[4:20] <rweeks> DIO
[4:20] * rweeks laughs
[4:20] <Psi-jack> heh
[4:20] <dmick> like an OSD in the dark
[4:20] * rweeks sporfles
[4:21] <rweeks> /m\
[4:21] <dmick> winston-d: ? you mean there is unaccounted space used in /? That doesn't make much sense
[4:21] <dmick> and, rweeks: \m/. geez. :)
[4:21] <Psi-jack> dmick: I'll try that, as well as making sure omap isn't used.
[4:22] <rweeks> i'm typing upside down. deal with it.
[4:22] <winston-d> dmick, i know it doesn't make sense at all. but take a look at here: http://paste.openstack.org/show/25993/
[4:22] <dmick> omap? why?
[4:22] <Psi-jack> Since I know that's more specifically for ext4.
[4:22] <Psi-jack> filestore xattr use omap = true
[4:22] <Psi-jack> That.
[4:23] <dmick> well, it's for filesystems that aren't known to support large xattrs
[4:23] <dmick> don't know where lzfs is on that
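(for reference, the two settings mentioned here as they would sit in ceph.conf; a minimal sketch of just the relevant section:)
    [osd]
        ; use fsync/fdatasync for the journal instead of direct I/O
        journal dio = false
        ; keep xattrs in the object map for filesystems without large-xattr support (e.g. ext4)
        filestore xattr use omap = true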
[4:23] <dmick> winston-d: weird. Maybe there are unlinked-but-still-open files?...
[4:23] <dmick> I wonder if there's some other disk examination tool that could find them
[4:24] <dmick> nameless space-consuming inodes
[4:24] <winston-d> dmick, are you aware of any?
[4:24] <rweeks> Nameless Inodes is the name of my next band
[4:24] <winston-d> rweeks, :)
[4:24] * yoshi (~yoshi@EM117-55-68-33.emobile.ad.jp) Quit (Ping timeout: 480 seconds)
[4:25] <rweeks> I've been at a conference all week. I'm a bit loopy.
[4:25] <dmick> winston-d: don't know. maybe fsck? It certainly will try to put them in /lost+found, I think
[4:25] <winston-d> this has driven me crazy.
[4:26] * yehudasa_ (~yehudasa@static-66-14-234-139.bdsl.verizon.net) has joined #ceph
[4:26] <winston-d> dmick, fsck only works with an unmounted partition.
[4:26] <dmick> in actual repair mode, yes, but maybe it can scan-but-not-repair?...
[4:27] <dmick> -n maybe?
[4:27] <dmick> ext4 does that
[4:29] <winston-d> hmm, i'll just shut it down and plug the disk to another system and try fsck.
[4:29] <dmick> fsck -n failed?
[4:29] <dmick> also, may be easier to boot from CD/USB rather than move disk
[4:29] <dmick> (or PXE if you're so equipped)
[4:30] <dmick> also perhaps debugfs
[4:31] * nhmlap (~nhm@64.55.78.101) has joined #ceph
[4:31] <dmick> hey there's nhmlap, maybe he knows
[4:31] <Psi-jack> Hmm.
[4:31] <nhmlap> dmick: I know nothing!
[4:31] <dmick> nhmlap: you aware of a way to find nameless space-consuming inodes?
[4:32] <dmick> ISTR things like "list things by inode" on other Unices
[4:32] <dmick> Linux has namei, but I'm thinking, I dunno, checki?...
[4:32] <nhmlap> dmick: what filesystem?
[4:32] <dmick> probably ext4? winston-d?
[4:32] * rweeks waves at nhmlap from floor 19
[4:33] <nhmlap> rweeks: I'm on floor 19. :)
[4:33] <rweeks> I know!
[4:33] <nhmlap> lol
[4:33] <winston-d> fsck -n works, but it said 'skipping journal recovery blahblahblah'
[4:33] <dmick> right
[4:33] <dmick> which is fine
[4:34] <Psi-jack> What was that command to initiate bench?
[4:34] <elder> Do you have open files that might be holding a whole lot of space?
[4:34] <winston-d> and fsck -n said / is clean
[4:34] <elder> (Sorry, late to the discussion)
[4:35] <dmick> elder: that's what we're wondering
[4:35] <nhmlap> elder: yeah, I was just reading that.
[4:35] <nhmlap> can you reboot the box?
[4:35] <dmick> nhmlap: but I'm trying to find out if it could be ceph at fault
[4:35] <winston-d> no need to reboot, stopping ceph, free space comes up again.
[4:35] <rweeks> ohhhh
[4:35] <rweeks> weird.
[4:36] <winston-d> and actually, moving the disk isn't an option, since that will shut down ceph, which means the disk will be fine.
[4:36] <Psi-jack> ceph osd tell \* bench ?
[4:36] <rweeks> I _swear_ I had this issue many moons ago wtih the veritas filesystem
[4:36] <winston-d> so, i'm wondering about using 'dd' to dump all the data to somewhere else.
[4:37] <winston-d> rweeks, and how did you fix that?
[4:37] <Psi-jack> Heh, yeah, that was it. heh
[4:37] <rweeks> I am pretty sure it was invisible inodes
[4:37] <rweeks> or something like that
[4:37] <rweeks> sorry, this was something like 15 years ago
[4:38] <winston-d> am i time-travelling? :)
[4:38] <rweeks> could be
[4:39] <winston-d> which means i'm only a teenager now, hell yeah~
[4:39] <rweeks> go to bed, young man
[4:39] <Psi-jack> yeah, DEFINITELY getting slightly faster performance off ceph with just XFS. 30~40MB/s, while it's fricken active installing Ubuntu 12.04 in a VM.
[4:39] <dmick> http://www.adamcrume.com/blog/archive/2011/06/30/viewing-deleted-but-open-files-on-linux
[4:40] <Psi-jack> dmick: Fun stuff. I do that sometimes. ;)
[4:40] <Psi-jack> Not exactly "invisible" inodes, just a file descriptor still open with the original inode reference point.
[4:40] <dmick> so lsof with no args shows deleted files
[4:41] <dmick> "invisible" meaning "inaccessible by any pathname"
[4:41] <Psi-jack> VFS is still a path, and you can go to /proc/pid#/fd
[4:42] <dmick> :-P
[4:42] <Psi-jack> hehe
[4:42] <dmick> anyway winston-d, lsof | grep deleted might be interesting
[4:42] <dmick> or even that whole awk command
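(a minimal version of the check being suggested; lsof marks unlinked-but-open files with "(deleted)", and the /proc variant inspects a single process, with a made-up pid:)
    # files deleted on disk but still held open, and therefore still consuming space
    lsof -nP | grep '(deleted)'
    # or look at one ceph-osd directly
    ls -l /proc/12345/fd | grep deleted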
[4:43] <winston-d> dmick, thx. yes, those zombies are logs of ceph
[4:43] <winston-d> damn
[4:43] <dmick> hm. so why are they still open?
[4:44] <winston-d> no idea.
[4:44] <dmick> is it the ceph processes that have them open?
[4:45] <winston-d> dmick, let me paste some output.
[4:46] <winston-d> dmick, here: http://paste.openstack.org/show/25994/ sorry about the format
[4:46] <winston-d> dmick, i need to run for a quick lunch. will be back soon.
[4:46] <dmick> o
[4:46] <dmick> k
[4:51] * nhmlap (~nhm@64.55.78.101) Quit (Quit: Lost terminal)
[4:52] <Psi-jack> Maaaan.. ZFS has so many powerful features that I love about it.. Subvolumes, subvolumes where you can set volume-specific quotas (which is VERY nice), mount points for them. hehe
[4:53] <Psi-jack> Why's it gotta be a bit slow? LOL
[4:55] * yehudasa_ (~yehudasa@static-66-14-234-139.bdsl.verizon.net) Quit (Ping timeout: 480 seconds)
[4:55] <Psi-jack> Meh, anyway, time for sleep.
[4:55] <Psi-jack> Thanks for all the help guys. :)
[4:59] <dmick> yw
[5:02] <dmick> hmm
[5:03] <dmick> ceph daemon log rotation is apparently supposedly done with SIGHUP
[5:03] <dmick> but
[5:03] <dmick> it doesn't seem to restart the log
[5:03] <dmick> because it closes and reopens the log with O_CREAT|O_APPEND
[5:04] <dmick> does that not seem counterproductive?
[5:05] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[5:09] <rweeks> it does
[5:09] <dmick> I *think* that O_APPEND is wrong
[5:10] <dmick> I think instead it ought to be O_TRUNC
[5:11] <winston-d> dmick, so this is a bug or ?
[5:11] <dmick> still not sure if this is causing your space wastage, but it's not working how I expect at least
[5:15] * chutzpah (~chutz@199.21.234.7) Quit (Quit: Leaving)
[5:16] * sjustlaptop (~sam@68-119-138-53.dhcp.ahvl.nc.charter.com) Quit (Ping timeout: 480 seconds)
[5:18] <winston-d> so, i'm using the ceph package in Ubuntu 12.10, which is version 0.48. maybe it's time to upgrade?
[5:20] <dmick> 0.48 isn't superold, but it is some ways back
[5:20] <dmick> changing to O_TRUNC there certainly does do more like what I'd expect.
[5:20] <dmick> I'll file an issue and see if others agree, but that certainly seems righter to me
[5:21] <winston-d> thx
[5:21] <dmick> winston-d: I wonder if there's a way we can tell if your wasted space is because of that
[5:22] <dmick> I wouldn't think so, because it's not creating a new name, or unlinking anything
[5:23] <dmick> ah, but
[5:24] <dmick> logrotate renames the file, then calls HUP, or I would think so. your file is named .log.1 so I'm assuming it was rotated
[5:26] <dmick> but if it renamed first and then HUPped, I would expect the current log to be a new file and thus shortened anyway, and the old file would have been closed, so not held
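(roughly what a rename-then-HUP logrotate stanza looks like, as a sketch; the paths and daemon names are assumptions, not the packaged ceph logrotate file:)
    /var/log/ceph/*.log {
        daily
        rotate 7
        compress
        postrotate
            # ask the daemons to reopen their logs after the rename
            killall -q -HUP ceph-mon ceph-osd ceph-mds || true
        endscript
    }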
[5:26] <dmick> maybe 0.48 is that old
[5:26] <dmick> can you show ceph --version output?
[5:26] <dmick> sorry ceph -v
[5:26] <winston-d> sure. wait a sec
[5:27] <winston-d> ceph version (commit:)
[5:27] <dmick> er, that all got eaten
[5:28] <dmick> $ ceph -v
[5:28] <dmick> ceph version 0.53-457-gb668ee5 (b668ee503b64f8070fcf3fd5aeaaf70f3321b34b)
[5:29] <winston-d> that's everything i can see. http://paste.openstack.org/show/25998/
[5:30] <dmick> ??!!
[5:31] <dmick> ok how about dpkg -l ceph
[5:32] <dmick> or better yet dpkg -s ceph | grep Version
[5:32] <winston-d> Version: 0.48.2-0ubuntu2
[5:32] <dmick> ok
[5:32] <dmick> (I wonder why -v is broken? that's annoying)
[5:33] <winston-d> again, i have no idea. :(
[5:33] <dmick> rhetorical. clearly that one is our problem somehow :)
[5:42] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Quit: adjohn)
[5:48] * houkouonchi-work (~linux@12.248.40.138) Quit (Read error: Connection reset by peer)
[5:50] * houkouonchi-work (~linux@12.248.40.138) has joined #ceph
[5:54] * houkouonchi-work (~linux@12.248.40.138) Quit (Remote host closed the connection)
[6:03] * Cube (~Cube@12.248.40.138) Quit (Quit: Leaving.)
[6:05] <dmick> I guess using O_APPEND means that if something goes wrong with the log rotation, you don't lose logs
[6:05] <dmick> so I think maybe that's a red herring
[6:12] <dmick> winston-d: can you lsof -p on a particular ceph-osd process?
[6:22] * KindOne (KindOne@h210.25.131.174.dynamic.ip.windstream.net) Quit (Ping timeout: 480 seconds)
[6:33] * silversu_ (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[6:35] * vjarjadian (~IceChat7@5ad6d001.bb.sky.com) Quit (Ping timeout: 480 seconds)
[6:40] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Ping timeout: 480 seconds)
[6:43] * rweeks (~rweeks@64.55.78.101) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[6:59] * KindOne (~KindOne@h58.175.17.98.dynamic.ip.windstream.net) has joined #ceph
[7:13] <dmick> winston-d: I can confirm the broken -v too. :(
[7:14] * loicd (~loic@magenta.dachary.org) has joined #ceph
[7:41] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[7:52] * Kioob (~kioob@luuna.daevel.fr) Quit (Quit: Leaving.)
[7:54] * sjustlaptop (~sam@68-119-138-53.dhcp.ahvl.nc.charter.com) has joined #ceph
[7:59] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[8:04] * yoshi (~yoshi@p18200-ipngn3601marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[8:08] * yoshi__ (~yoshi@p35183-ipngn4301marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[8:10] * yoshi_ (~yoshi@p30114-ipngn1501marunouchi.tokyo.ocn.ne.jp) Quit (Ping timeout: 480 seconds)
[8:10] * yoshi__ (~yoshi@p35183-ipngn4301marunouchi.tokyo.ocn.ne.jp) Quit (Read error: Connection reset by peer)
[8:10] <dmick> winston-d: so, to summarize, if Ceph is just sitting there the only thing that should use space is logs. However, I can't think of a scenario where log rotation would cause you to lose space like this. Some more info from lsof would be useful; however, since we've lost contact here, if you could send email to ceph-devel@vger.kernel.org about this, that'd be a better place to continue, I think.
[8:10] * yoshi_ (~yoshi@p11251-ipngn4301marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[8:14] * joshd1 (~jdurgin@2602:306:c5db:310:d838:6f4a:b568:65fe) has joined #ceph
[8:14] * yoshi (~yoshi@p18200-ipngn3601marunouchi.tokyo.ocn.ne.jp) Quit (Ping timeout: 480 seconds)
[8:15] * sjustlaptop1 (~sam@68-119-138-53.dhcp.ahvl.nc.charter.com) has joined #ceph
[8:15] * LarsFronius (~LarsFroni@2a02:8108:380:12:89f0:14a5:1db7:41d6) has joined #ceph
[8:20] * sjustlaptop (~sam@68-119-138-53.dhcp.ahvl.nc.charter.com) Quit (Ping timeout: 480 seconds)
[8:25] * sjustlaptop1 (~sam@68-119-138-53.dhcp.ahvl.nc.charter.com) Quit (Ping timeout: 480 seconds)
[8:35] * tnt (~tnt@140.20-67-87.adsl-dyn.isp.belgacom.be) Quit (Read error: Operation timed out)
[8:45] * dmick (~dmick@2607:f298:a:607:5144:73e6:7ad0:5110) Quit (Quit: Leaving.)
[8:47] * LarsFronius (~LarsFroni@2a02:8108:380:12:89f0:14a5:1db7:41d6) Quit (Quit: LarsFronius)
[8:48] * silversu_ (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[8:48] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[8:53] * gaveen (~gaveen@112.134.112.195) has joined #ceph
[8:54] * Kioob`Taff1 (~plug-oliv@local.plusdinfo.com) has joined #ceph
[8:57] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) has joined #ceph
[8:59] * tnt (~tnt@212-166-48-236.win.be) has joined #ceph
[9:02] * loicd (~loic@207.209-31-46.rdns.acropolistelecom.net) has joined #ceph
[9:05] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) Quit (Ping timeout: 480 seconds)
[9:11] * joshd1 (~jdurgin@2602:306:c5db:310:d838:6f4a:b568:65fe) Quit (Quit: Leaving.)
[9:23] * ghbizness (~ghbizness@host-208-68-233-254.biznesshosting.net) Quit ()
[9:38] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[9:41] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:45] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[9:46] * fc (~fc@home.ploup.net) has joined #ceph
[9:50] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[9:54] * winston-d (~zhiteng@pgdmzpr01-ext.png.intel.com) Quit (Quit: Leaving)
[9:57] * fc (~fc@home.ploup.net) has left #ceph
[10:06] * verwilst (~verwilst@dD5769628.access.telenet.be) has joined #ceph
[10:08] * ScOut3R (~ScOut3R@catv-80-98-44-93.catv.broadband.hu) has joined #ceph
[10:08] * nosebleedkt (~kostas@kotama.dataways.gr) has joined #ceph
[10:11] <nosebleedkt> hello everyone
[10:23] <fmarchand> hello nosebleedkt
[10:23] <nosebleedkt> hi, fmarchand. Im new to ceph so I joined in case I have nooby troubles :)
[10:23] <fmarchand> I'm a newbie too ! So welcome !
[10:26] <nosebleedkt> hehe
[10:26] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[10:28] <nosebleedkt> my first question would be, what is a monitor?
[10:28] <nosebleedkt> :P
[10:28] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Remote host closed the connection)
[10:28] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[10:29] * tryggvil_ (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[10:31] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Read error: Operation timed out)
[10:31] * tryggvil_ is now known as tryggvil
[10:36] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Quit: tryggvil)
[10:41] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[10:43] * flesh (547908cc@ircip3.mibbit.com) has joined #ceph
[10:44] <flesh> Hi there, I just started a small cluster with a single OSD, and I was wondering where does that OSD actually store the data
[10:45] <tnt> by default /var/lib/ceph/osd/... IIRC
[11:02] <todin> morning
[11:04] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[11:04] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[11:23] * xiu (~xiu@81.93.247.141) Quit (Remote host closed the connection)
[11:28] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[11:31] * davidz1 (~Adium@ip68-96-75-123.oc.oc.cox.net) Quit (Ping timeout: 480 seconds)
[11:36] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[11:42] * styx-tdo (~styx@chello084113243057.3.14.vie.surfer.at) has joined #ceph
[11:43] * styx (~styx@chello084113243057.3.14.vie.surfer.at) Quit (Read error: Connection reset by peer)
[11:45] <flesh> tnt thanks :)
[11:53] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Ping timeout: 480 seconds)
[11:54] * MikeMcClurg (~mike@firewall.ctxuk.citrix.com) has joined #ceph
[12:06] * tnt (~tnt@212-166-48-236.win.be) Quit (Ping timeout: 480 seconds)
[12:11] <fmarchand> nosebleedkt: still there to hear my definition of a mon ?
[12:11] <nosebleedkt> yeah why not !
[12:12] <fmarchand> it's where you connect to ! without a monitor you don't have an ip and a port to connect to.
[12:14] <fmarchand> and because ceph is based on CRUSH which means "not centralized" then you can have many monitors (quorum)
[12:15] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[12:15] <fmarchand> I use ceph fs and in my fstab if I had several monitors (3 to have a quorum) I could mount my ceph partition with several IPs
[12:17] <fmarchand> e.g. 192.168.0.1, 192.168.0.2,192.168.0.3:/ ceph rw,blablabla 0 0
[12:17] <fmarchand> you see what I mean ?
[12:17] <fmarchand> nosebleedkt: not sure I'm very clear
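(a fuller version of that fstab line, as a sketch; the mount point, monitor port and auth options are assumptions and depend on whether cephx is enabled:)
    # three monitors listed so the mount survives any single one being down
    192.168.0.1:6789,192.168.0.2:6789,192.168.0.3:6789:/  /mnt/ceph  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime  0 0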
[12:18] <nosebleedkt> yes
[12:19] <nosebleedkt> i figured out the monitor ip thing, because without it in ceph.conf i couldnt create a rados device
[12:19] <nosebleedkt> but all the diagrams i see on internet show that the client is speaking with the MDS server rather than monitor server
[12:19] <fmarchand> oh ... my turn ! what is rados ?
[12:20] <nosebleedkt> rados is a block device driver. Instead of just mounting a filesystem (cephfs), you can actually manipulate a whole block device(format,mkfs)
[12:22] <fmarchand> I can't mkfs the cephfs ?
[12:22] <nosebleedkt> BRD must be on XFS or EXT4 only.
[12:23] <nosebleedkt> you can, i think. But you are one level higher.
[12:23] <nosebleedkt> with BRD you speak directly to block level.
[12:24] <fmarchand> so you can resize too ?
[12:24] <nosebleedkt> probably..
[12:24] <nosebleedkt> its like partition
[12:25] <nosebleedkt> but with cephfs you can only work inside its filesystem
[12:25] * Robe (robe@amd.co.at) has joined #ceph
[12:25] <ScOut3R> nosebleedkt: by BRD do you mean RBD? :)
[12:25] <Robe> hm
[12:25] <Robe> what's the largest rados cluster in production these days?
[12:25] <nosebleedkt> ScOut3R, yes
[12:25] <Robe> osd-count wise?
[12:26] <ScOut3R> nosebleedkt: then yes, you can resize it, though you have to unmap and remap it on the client to access the added space
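(roughly the resize-then-remap sequence ScOut3R describes; the image name and device node are assumptions, and rbd sizes are given in MB:)
    rbd resize --size 20480 vm-107-disk-1   # grow the image to 20 GB
    rbd unmap /dev/rbd0                     # the kernel client only sees the new size
    rbd map vm-107-disk-1                   # after an unmap/remap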
[12:26] * tnt (~tnt@140.20-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[12:26] <nosebleedkt> ScOut3R, thanks for the info.. im not sure at all about what I say here.
[12:27] <ScOut3R> nosebleedkt: no problem, i'm a newbie too :)
[12:27] <nosebleedkt> lol
[12:27] <nosebleedkt> noobpool
[12:27] <nosebleedkt> :D
[12:27] <ScOut3R> something like that :)
[12:28] <fmarchand> :)
[12:36] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[12:40] * yoshi_ (~yoshi@p11251-ipngn4301marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[12:41] * Leseb (~Leseb@193.172.124.196) has joined #ceph
[12:41] * MikeMcClurg (~mike@firewall.ctxuk.citrix.com) Quit (Ping timeout: 480 seconds)
[12:43] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[12:43] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[12:56] * kees_ (~kees@devvers.tweaknet.net) has joined #ceph
[12:57] <kees_> hm, one of my osd's is using 25G memory.. too bad that server only has 12G and some swap...
[12:59] <jamespage> gregaf, hey!
[12:59] <jamespage> gregaf, would it be OK if I used the upstart integration in ceph in a blog post of things you can do with upstart? its a nice example of doing something a bit more complicated....
[13:19] * tryggvil_ (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[13:23] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Ping timeout: 480 seconds)
[13:23] * tryggvil_ is now known as tryggvil
[13:36] <nosebleedkt> well I have created a cluster with a monitor server and two OSDs. On another host I have mounted an RBD device under /mnt/myrbd .
[13:37] <nosebleedkt> So now whatever I/O I do is written on the cluster ?
[13:37] <nosebleedkt> that's it ?
[13:37] <nosebleedkt> http://ceph.com/docs/master/start/quick-rbd/
[13:38] * loicd (~loic@207.209-31-46.rdns.acropolistelecom.net) Quit (Quit: Leaving.)
[13:39] * loicd (~loic@207.209-31-46.rdns.acropolistelecom.net) has joined #ceph
[13:41] <ScOut3R> nosebleedkt: yes, that's it
[13:42] <nosebleedkt> sounds easy then
[13:42] <nosebleedkt> i thought it would be harder
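(the quick-rbd steps being referenced, sketched out; the image name matches the /mnt/myrbd mount nosebleedkt mentions, while the size and filesystem are assumptions:)
    rbd create myrbd --size 4096   # 4 GB image in the default 'rbd' pool
    rbd map myrbd                  # appears as /dev/rbd0 (or similar)
    mkfs.ext4 /dev/rbd0
    mount /dev/rbd0 /mnt/myrbd     # from here on, writes land in the cluster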
[13:50] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[13:52] <Psi-jack> Hmmm. Impressive.
[13:52] <Psi-jack> Ceph, for the first time during a fresh install of a guest VM (post-install I always do apt-get dist-upgrade to pull in all the newest packages), it never had a single I/O wait or CPU stuck on kworkers the whole time. LOL
[13:54] <Psi-jack> And barely even stuttered during a ls -laR /
[13:56] * maxiz (~pfliu@114.245.254.71) has joined #ceph
[13:56] <jtang> hrm
[13:56] <jtang> its 6am here in utah and there is already activity on the channel
[14:01] * Psi-jack chuckles.
[14:01] <Psi-jack> 8am here. Already in the office. ;)
[14:06] <elder> Bam.
[14:06] <elder> (That's how I read that.)
[14:07] * loicd (~loic@207.209-31-46.rdns.acropolistelecom.net) Quit (Quit: Leaving.)
[14:07] <iltisanni> its 2 pm in germany and I was in office at 6:55 am today :-) Yeah !
[14:07] <iltisanni> tgif
[14:08] * loicd (~loic@207.209-31-46.rdns.acropolistelecom.net) has joined #ceph
[14:08] <joao> and I just realized I haven't had lunch yet
[14:08] <joao> thanks for pointing out the time :p
[14:10] <Psi-jack> Hmmm, and no, apparently it wasn't Direct IO causing mkcephfs to fail on ZFS.
[14:10] <jtang> heh
[14:10] <Psi-jack> 2012-11-16 08:09:56.433227 7f8fd2311780 -1 OSD::mkfs: FileStore::mkfs failed with error -22
[14:10] <Psi-jack> 2012-11-16 08:09:56.433292 7f8fd2311780 -1 ** ERROR: error creating empty object store in /ceph/osd/ceph-0: (22) Invalid argument
[14:10] <Psi-jack> That's my error. :)
[14:11] <jtang> after this week of sc12 and a few chats with rweeks, alex and miroslav, i think at our site we're going to do more with ceph
[14:11] <jtang> what became apparent to us was no one was doing end-to-end fixity checks (checksumming) of files at the filesystem level to mitigate against silent data corruption
[14:12] <jtang> the only crowd that was remotely doing it was IBM with gpfs and they dont even provide the update unless you pay them even more money for it
[14:12] <jtang> *sigh*, im never going to buy a power series machine for it
[14:14] * jtang is thinking about getting some interns and students to go and implement some changes to ceph
[14:15] * gaveen (~gaveen@112.134.112.195) Quit (Read error: Operation timed out)
[14:16] <jtang> must stay awake for as long as possible so i can sleep on the plane later
[14:16] <jtang> its gonna be a crappy flight back to ireland
[14:26] * gaveen (~gaveen@112.134.113.40) has joined #ceph
[14:28] * benner_ (~benner@193.200.124.63) has joined #ceph
[14:29] * benner (~benner@193.200.124.63) Quit (Read error: Connection reset by peer)
[14:35] * aliguori (~anthony@cpe-70-123-145-75.austin.res.rr.com) has joined #ceph
[14:40] * timmclaughlin (~timmclaug@69.170.148.179) has joined #ceph
[14:47] * MikeMcClurg (~mike@firewall.ctxuk.citrix.com) has joined #ceph
[14:49] * maxiz_ (~pfliu@222.128.156.222) has joined #ceph
[14:56] * maxiz (~pfliu@114.245.254.71) Quit (Ping timeout: 480 seconds)
[14:59] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[14:59] <Psi-jack> Hmmm
[14:59] <Psi-jack> I'm almost tempted to backup all my storage servers and replace ZFS with btrfs. ;)
[15:03] <jtang> btrfs seems nice, it's still missing a reliable fsck and repair component though
[15:04] <jtang> at least for people running rhel6 thats the case
[15:04] <Psi-jack> heh
[15:04] <Psi-jack> Yeah.
[15:04] <Psi-jack> That's my only drawback at the moment.
[15:05] <Psi-jack> I see in Linux 3.6 they put in subvolume-aware quotas.. Not sure if that's the same as zfs quotas though.
[15:05] <jtang> we just built a 65tb ceph filesystem across two backblaze pods recently (240tb raw)
[15:05] <Psi-jack> Where the "quota" sets the allowable total size of the subvolume.
[15:05] <jtang> and we used btrfs, while it works and seems fine, we havent tested too many failure cases yet
[15:05] <Psi-jack> heh
[15:06] <jtang> we're only able to use about a quarter of the space available
[15:06] <Psi-jack> But, as I understand it, ext4 should be able to upgrade to btrfs?
[15:06] <jtang> im kinda thinking that we might buy a third pod to experiment with
[15:06] <Psi-jack> heh
[15:06] <jtang> 3x135tb of space
[15:06] <jtang> :)
[15:06] <Psi-jack> Nice. ;)
[15:07] <jtang> still them pods are sucky
[15:07] <jtang> if anyone is thinking about getting them, thye kinda suck if you just have one or two
[15:08] <jtang> i'd expect getting at least 3 or more would make it work with ceph much better
[15:08] <jtang> and popping in a 10gb or ib card might help too
[15:09] <Psi-jack> Hmmm.
[15:09] <Psi-jack> ceph doesn't have packages for openSUSE I see. Just CentOS 6 and Fedora
[15:09] <jtang> we should probably write up our experiences with them pods and share it
[15:11] <Psi-jack> Hmm. Nick Couchman, is that name familiar with Ceph devs?
[15:12] <joao> not really, but then again I'm the worst when it comes to names
[15:12] <Psi-jack> hehe
[15:13] <Psi-jack> Yeah, I was looking up ceph repos for openSUSE, and that was the only name that came up.
[15:13] <joao> took me close to a year to remember on a daily basis my lab partner's name
[15:14] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[15:15] <jtang> joao: didnt get to meet nhm
[15:15] <jtang> went up to the booth twice and he wasnt around
[15:15] <joao> jtang, now that's just wrong
[15:15] <jtang> heh
[15:15] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[15:15] <joao> got to meet anyone from inktank? :)
[15:16] <jtang> yea, alex, miroslav, rweeks and another person whose name i cant remember
[15:16] <jtang> bounced a few ideas off them
[15:16] <joao> cool
[15:17] <elder> I haven't met any of those three...
[15:17] <joao> neither have I
[15:17] <jtang> and i'm gonna get peeps back at work to do stuff with ceph :)
[15:17] <joao> for a moment there I thought that the alex jtang was talking about was you, elder :p
[15:17] <elder> Nope.
[15:17] <elder> But I'm the original...
[15:17] <jtang> im gonna hound some vendors to give us some fusion-io cards to play with
[15:18] <jtang> im curious about them cards now after talking to one of the reps
[15:18] <jtang> i didnt know that there are 10tb fusion-io cards
[15:18] <joao> jtang, I did have the chance to meet your colleagues in amsterdam though
[15:18] <jtang> thats kinda cool, if i can get 3 of them into a set of c6100's then it would be interesting
[15:19] <jtang> joao: they will probably go to other ceph events if there will be more technical ones related to tuning and performance
[15:19] <Psi-jack> Hmmm
[15:20] <Psi-jack> Well, blasted.. So apparently Ubuntu 12.10 has Linux 3.5, at least.
[15:20] <jtang> i think there is a gap in best practices and performance tuning guides right now
[15:20] <joao> jtang, I'm sure that's one of the topics that will always be on demand ;)
[15:21] <jtang> joao: yea thats true
[15:21] <jtang> i'd still love to see some comparisons of a stock lustre, gpfs and ceph install
[15:22] <jtang> with some generic work loads of bulk streamed io (typical hpc workloads) and random io
[15:22] <jtang> even simple benchmarks would be nice to see
[15:22] <jtang> i guess the more important thing is to compare like with like on the same hardware
[15:22] <jtang> perhaps nhm can do that next ;)
[15:31] * drokita (~drokita@199.255.228.10) has joined #ceph
[15:55] * PerlStalker (~PerlStalk@perlstalker-1-pt.tunnel.tserv8.dal1.ipv6.he.net) has joined #ceph
[15:56] * nosebleedkt (~kostas@kotama.dataways.gr) Quit (Quit: Leaving)
[15:57] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[16:04] * senner (~Wildcard@68-113-232-90.dhcp.stpt.wi.charter.com) has joined #ceph
[16:05] * guigouz (~guigouz@177.33.216.27) Quit (Quit: Computer has gone to sleep.)
[16:09] * loicd (~loic@207.209-31-46.rdns.acropolistelecom.net) Quit (Quit: Leaving.)
[16:09] * maxiz__ (~pfliu@111.192.242.110) has joined #ceph
[16:16] * maxiz_ (~pfliu@222.128.156.222) Quit (Ping timeout: 480 seconds)
[16:18] <flesh> Hey, I set up a cluster with 2 mds and 2 osds
[16:18] <flesh> I read with a 2 MDS configuration, the recovery mode is activated, or something like that
[16:19] <flesh> so the metadata in my configuration is not being replicated, right?
[16:19] <flesh> or , essentially, only one MDS is really working
[16:21] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Quit: tryggvil)
[16:28] * danieagle (~Daniel@177.99.134.146) has joined #ceph
[16:30] * senner (~Wildcard@68-113-232-90.dhcp.stpt.wi.charter.com) Quit (Ping timeout: 480 seconds)
[16:34] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[16:40] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Read error: Connection reset by peer)
[16:43] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) has joined #ceph
[16:50] * LarsFronius (~LarsFroni@95-91-242-149-dynip.superkabel.de) has joined #ceph
[16:51] <flesh> is there an option to set all my MDS as active. I don't want any of them to be standby
[16:53] <kees_> when i did that i got some very strange results, apparently the mds is still quite a work in progress
[16:54] <flesh> ah ok
[16:54] <flesh> so the start service automatically
[16:55] <flesh> sets up which MDSs are going to be active, and which ones are not?
[16:56] <flesh> I would like to know how to specify which MDSs are active and standby
[16:57] <flesh> 'cause, what would happen with a 4-MDS configuration?
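(for the record, the number of active MDS daemons is governed by max_mds; a hedged example of the command as it existed around this time, keeping in mind kees_'s warning that multi-active MDS was still rough:)
    # allow two MDS daemons to be active at once; any extras become standby
    ceph mds set_max_mds 2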
[16:59] * kees_ (~kees@devvers.tweaknet.net) Quit (Remote host closed the connection)
[17:04] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Remote host closed the connection)
[17:06] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[17:06] * ChanServ sets mode +o elder
[17:06] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:08] * noob2 (a5a00214@ircip1.mibbit.com) has joined #ceph
[17:09] * loicd (~loic@2a01:e35:2eba:db10:1d57:d60a:e658:eb38) has joined #ceph
[17:12] * jlogan1 (~Thunderbi@2600:c00:3010:1:1ccf:467e:284:aea8) has joined #ceph
[17:15] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:20] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[17:24] * rlr219 (43c87e04@ircip3.mibbit.com) has joined #ceph
[17:27] * Leseb (~Leseb@193.172.124.196) Quit (Quit: Leseb)
[17:27] <rlr219> I have a question about the "rbd export" command. I am trying to back up my VM's by shutting them down, creating a snapshot, and then exporting the snap to a program like pbzip2 and SSHing it to an off site server, but it seems like 'rbd export' will only write to a file on the local machine. Will rbd export work with pipes?
[17:36] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:38] * guigouz (~guigouz@177.33.216.27) has joined #ceph
[17:54] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[17:54] <wer> so I notice that my rados gateway is responding with http/1.1 even though I am talking http/1.0. I think that may be part of my issue.
[17:58] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) has joined #ceph
[18:03] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[18:06] * danieagle (~Daniel@177.99.134.146) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[18:08] * ScOut3R (~ScOut3R@catv-80-98-44-93.catv.broadband.hu) Quit (Remote host closed the connection)
[18:18] * Kioob`Taff1 (~plug-oliv@local.plusdinfo.com) Quit (Quit: Leaving.)
[18:20] <joao> I'm curious: what was a Lamborghini doing on the SC12 expo floor?
[18:21] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[18:22] <elder> I'm sure they used someone's software or hardware to design it.
[18:22] <jtang> heh
[18:22] <jtang> probably to do with CFD's and them showing off
[18:22] <jtang> HLRS usually has a porsche
[18:22] <jtang> or a chrysler
[18:22] <jtang> i remember one year there was a formula one team at the event
[18:23] <jtang> i've a bunch of pictures on my camera that i need to sort out
[18:24] <joao> well, I gotta say that nevertheless it is a great way to bring people to the booth
[18:24] <joao> probably only topped off by a pillow fight
[18:24] <jtang> heh, there was some cool stuff alright, not as cool as previous years
[18:25] <jtang> sicortex must have been the coolest stuff i've seen in years past, but they went bankrupt
[18:26] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[18:27] <jtang> we almost bought one right before they went bankrupt
[18:27] <jtang> im kinda glad we didnt get one in hindsight, we would have been pretty screwed with support if we did
[18:27] * MikeMcClurg (~mike@firewall.ctxuk.citrix.com) Quit (Quit: Leaving.)
[18:28] * felixdcat (~Adium@248.sub-70-199-66.myvzw.com) has joined #ceph
[18:28] <felixdcat> hi guys - any advice on best install method for quantal?
[18:29] <jtang> follow the online docs at ceph.com ?
[18:29] <felixdcat> there's no quantal folder in the dists
[18:29] <jtang> they are quite good across distros the last time i checked
[18:30] <felixdcat> attempting to install on precise has a nonexistent package conflict
[18:30] <felixdcat> well, attempting to install precise's dist version on quantal, i should say
[18:30] * jtang shrugs
[18:31] <jtang> i've only used precise with ceph, i have little interest in releases that aren't LTS based ones :P
[18:32] <felixdcat> fair enough, it seemed wise to use quantal since it ships with 3.5 at least, and there's large red glowing warning labels all over the ceph site saying use latest kernel
[18:32] <felixdcat> i'll reinstall using precise then
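(roughly what pointing an Ubuntu box at the ceph.com packages looked like at the time, reusing the precise dist since no quantal build existed; treat the exact repo layout as an assumption and check the install docs:)
    # add the ceph.com release key first (per the docs), then:
    echo deb http://ceph.com/debian/ precise main | sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt-get update && sudo apt-get install ceph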
[18:36] * jtang doesnt work for inktank
[18:36] <jtang> i guess one of the ceph guys could answer the question
[18:37] <jtang> its just a personal taste thing, i've been around the block long enough to just be happy with LTS releases of distros
[18:37] <jtang> its just a pain to upgrade things every 18months in production systems
[18:38] * felixdcat1 (~Adium@248.sub-70-199-66.myvzw.com) has joined #ceph
[18:41] <benpol> jtang: I'd tend to agree, much easier to simply run a current kernel on an LTS (or Debian) system
[18:42] <jtang> right time to checkout and go to the airport to go home!
[18:42] * felixdcat (~Adium@248.sub-70-199-66.myvzw.com) Quit (Ping timeout: 480 seconds)
[18:43] <benpol> make-kpkg is a really handy tool for Debian systems (and derivatives). I'm running Debian Squeeze with self generated 3.6.6 linux kernel packages.
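(a minimal make-kpkg invocation of the sort benpol means, run from an unpacked and configured kernel source tree; the revision string is arbitrary:)
    # build installable kernel .deb packages from the current tree
    fakeroot make-kpkg --initrd --revision=1.custom kernel_image kernel_headers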
[18:45] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[18:49] <joao> jtang, have a safe trip :)
[18:55] * slang (~slang@ace.ops.newdream.net) has joined #ceph
[19:01] * chutzpah (~chutz@199.21.234.7) has joined #ceph
[19:04] * stass (stas@ssh.deglitch.com) Quit (Quit: leaving)
[19:04] * stass (stas@ssh.deglitch.com) has joined #ceph
[19:09] <noob2> has anyone else had 12.04 ubuntu clients kernel panic randomly when ceph storage is mounted? it could just be an ubuntu error that i'm not seeing cause i can't scroll up :)
[19:15] <drokita> Not when the storage is mounted, but when the cluster was in a severely degraded state it caused a kernel panic
[19:15] <drokita> Had to upgrade the libceph module
[19:15] * slang (~slang@ace.ops.newdream.net) Quit (Read error: Operation timed out)
[19:15] <drokita> I think an upgrade to 3.6 kernel will fix it as well
[19:15] <noob2> yeah i had a monitor go down once
[19:16] <noob2> and i cycled some boxes so it went into a degraded state
[19:16] <noob2> that'll kernel panic the boxes?
[19:16] <drokita> There apparently is a bug that was fixed in later kernels that addresses that issue
[19:16] <noob2> oh cool
[19:16] <noob2> what kernel were you seeing that on?
[19:17] <drokita> We saw it on 12.04 LTS (3.2)
[19:17] <noob2> uh oh
[19:17] <noob2> i just upgraded to 3.2 12.04LTS
[19:17] <noob2> so i take it 12.10 doesn't have this problem then?
[19:17] <drokita> The good news is that it is a known issue
[19:17] <noob2> true
[19:18] <drokita> I can't say for sure, but with an more updated kernel, the chances are good
[19:18] <noob2> ok
[19:18] <noob2> yeah 12.10 has 3.5.x i think
[19:18] <noob2> hopefully 3.6 soon
[19:19] <drokita> Actually, Inktank gave us a special libceph.ko with that particular bug fixed. You are welcome to try it.
[19:19] <noob2> i might have to give that a try as i move closer to our production setup
[19:19] <drokita> By that I mean, that bug was fixed for the LTS version of the kernel
[19:19] <noob2> right now i'm still in dev and it's not a big deal
[19:19] <noob2> ok
[19:19] <drokita> We were in prod, it was a pretty big deal.
[19:19] <noob2> yeah i can imagine
[19:19] <drokita> Let me know and I will send you the file.
[19:20] <noob2> sure
[19:20] <noob2> send it to cholcomb@cscinfo.com
[19:20] <noob2> i'll take a look at it
[19:29] * adjohn (~adjohn@69.170.166.146) has joined #ceph
[19:31] * adjohn (~adjohn@69.170.166.146) has left #ceph
[19:31] * adjohn (~adjohn@69.170.166.146) has joined #ceph
[19:32] <drokita> noob2: I'll have that to you in an hour or so. I think there is a minor kernel update required too. Looking through my notes.
[19:36] * gaveen (~gaveen@112.134.113.40) Quit (Remote host closed the connection)
[19:37] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[19:37] <noob2> cool thanks :)
[19:48] * miroslav (~miroslav@c-98-248-210-170.hsd1.ca.comcast.net) has joined #ceph
[19:49] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[19:51] <gregaf> flesh: sounds like you're interested in the standby-replay stuff; I think the most current docs on that are in the wiki still: http://ceph.com/deprecated/Standby-replay_modes
[19:51] <gregaf> jamespage: of course!
[19:51] <jamespage> gregaf, great - thanks!
[19:51] <gregaf> oh, and I guess I don't need to email you as well :)
[19:52] * miroslav1 (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) has joined #ceph
[19:56] <gregaf> jamespage: just make sure you send us a link when you publish it ;)
[19:56] * miroslav (~miroslav@c-98-248-210-170.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[19:57] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[19:57] * adjohn (~adjohn@69.170.166.146) Quit (Read error: Connection reset by peer)
[19:58] * adjohn (~adjohn@69.170.166.146) has joined #ceph
[19:58] * rlr219 (43c87e04@ircip3.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[19:59] * rlr219 (43c87e04@ircip4.mibbit.com) has joined #ceph
[20:02] <flesh> gregaf thanks!
[20:11] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[20:13] <jamespage> gregaf, will do - it's going to go on the official Ubuntu server blog
[20:13] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[20:13] <jamespage> (well once I've written it it will do)
[20:16] <noob2> can i do a rolling upgrade of ceph from 0.53 -> 0.54?
[20:17] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit ()
[20:19] <gregaf> noob2: should be able to; don't think it's been tested (that can take a while so we only do it on stable releases) but all our stuff is versioned appropriately
[20:20] * slang (~slang@ace.ops.newdream.net) has joined #ceph
[20:30] <noob2> ok cool
[20:30] <noob2> i might test that out
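A hedged sketch of the rolling-upgrade pattern being discussed, one node at a time (package and init-script usage assumed for the Ubuntu packages of that era; wait for health to settle before moving on):

    # on each node in turn, not all at once
    sudo apt-get update && sudo apt-get install ceph    # pulls 0.54 from the configured ceph repo
    sudo service ceph restart mon.a                     # restart whatever daemons this host runs (mon.a, osd.0, ...)
    ceph -s                                             # wait for HEALTH_OK before the next node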
[20:34] <flesh> Hey, I finally managed to set both MDSes as active
[20:35] * felixdcat (~Adium@248.sub-70-199-66.myvzw.com) has joined #ceph
[20:35] <flesh> cluster with 2 mds 2 osds 1 mon. I have a lot of clients creating files at the same time
[20:36] * Cube (~Cube@12.248.40.138) has joined #ceph
[20:36] <flesh> and one of the MDSes is consuming way too much RAM
[20:36] <flesh> any thoughts?
[20:37] <flesh> nearly 2GB
[20:37] <flesh> I wanted the two MDS active so they could share the load
[20:37] <flesh> but still, one of them is consuming a lot of RAM
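One knob worth checking for the memory issue described here: the MDS bounds its cache by inode count rather than bytes, so a create-heavy workload can push resident memory well past expectations. A hedged ceph.conf sketch (option name as used in releases of this era; the value shown is the default and is illustrative):

    [mds]
        mds cache size = 100000    ; lower this to bound memory, at the cost of more cache misses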
[20:39] * Cube1 (~Cube@12.248.40.138) has joined #ceph
[20:39] * felixdcat (~Adium@248.sub-70-199-66.myvzw.com) has left #ceph
[20:39] * dmick (~dmick@2607:f298:a:607:75cc:429e:ce3b:50cd) has joined #ceph
[20:40] * ChanServ sets mode +o dmick
[20:40] * felixdcat1 (~Adium@248.sub-70-199-66.myvzw.com) Quit (Ping timeout: 480 seconds)
[20:44] * Cube (~Cube@12.248.40.138) Quit (Ping timeout: 480 seconds)
[20:50] <sagewk> spamaps: ping
[20:52] <SpamapS> sagewk: pong, wassup?
[20:53] <dmick> SpamapS: we just noticed that the Ceph build for quantal doesn't output version from ceph -v; something must have gone weird with the build procedure
[20:54] <SpamapS> dmick: something in the back of my head is saying that's a bug that was fixed late in the cycle..
[20:54] <SpamapS> Hrm but I don't see anything in the changelog, so maybe not
[20:55] <dmick> I'm running quantal and can confirm it's that way with the current version; I'm sadly uneducated about the build handoff though
[20:58] <dmick> .git_version is FORCEd, and creates the src/.git_version file
[20:58] <dmick> ceph_ver.h depends on .git_version, and make_version creates it from .git_version
[20:58] <dmick> and then the tools include that when they build
[20:59] <dmick> but check_version doesn't create anything if .git doesn't exist at the top level
[20:59] <dmick> (because it uses git rev-parse to come up with it)
[20:59] <dmick> so perhaps there was a clean build from a copy of the tree not including .git?
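A rough shell sketch of the version-stamping flow dmick is describing (file and step names are paraphrased from the conversation, not the exact build rules):

    # only regenerate the version stamp when building from a real git checkout
    if [ -d .git ]; then
        git rev-parse HEAD > src/.git_version
    fi
    # a make_version-style step then bakes src/.git_version into a header
    # (ceph_ver.h) that ceph -v prints; if .git is absent and no stamp was
    # shipped with the tree, the binary ends up reporting an empty version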
[21:02] <rlr219> I have a question about the "rbd export" command. I am trying to back up my VMs by shutting them down, creating a snapshot, and then exporting the snap to a program like pbzip2 and SSHing it to an off-site server, but it seems like 'rbd export' will only write to a file on the local machine. Does or will rbd export work with pipes?
[21:03] <dmick> rlr219: it does not now
[21:03] <dmick> you might be able to trick it with /dev/stdout
[21:04] <dmick> nope, EEXIST. drat
[21:06] <rlr219> any chance that might be modified for bobtail?
[21:07] <dmick> rlr219: it's probably a bit late for bobtail, but it certainly could be added soon; it's probably not hard
[21:08] <rlr219> Ok. thanks dmick!
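The workflow rlr219 is after would look roughly like the first line below once export-to-stdout exists; the second form is a workaround that worked at the time (pool, image, snapshot, and host names are made up for illustration):

    # desired, once rbd export can write to stdout:
    rbd export mypool/vm1@backup - | pbzip2 | ssh backup-host 'cat > vm1-backup.rbd.bz2'
    # workaround: export to a local file first, then compress and ship it
    rbd export mypool/vm1@backup /tmp/vm1-backup.rbd
    pbzip2 /tmp/vm1-backup.rbd && scp /tmp/vm1-backup.rbd.bz2 backup-host: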
[21:10] <dmick> SpamapS: any thoughts on how to proceed?
[21:12] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[21:12] <noob2> i'm having a little trouble removing an osd from my tree. when i give it the osd id it removes the first osd for some reason
[21:12] <noob2> i see the ones i want to remove marked as DNE
[21:13] <gregaf> flesh: what's the output of ceph -s?
[21:14] <gregaf> and keep in mind we really don't recommend multi-MDS systems at this time for stability reasons
[21:14] <dmick> noob2: the way you name the osd id is confusing; what command did you use?
[21:14] <dmick> (and we've just made that handling better recently)
[21:15] <noob2> i did ceph osd down 12
[21:15] <noob2> and then ceph osd rm 12
[21:15] <noob2> and it removed 0
[21:15] <noob2> should i give it host=x also to narrow it down?
[21:15] <noob2> i then did ceph osd crush remove 13
[21:15] <noob2> sorry 12
[21:16] <noob2> removed item id 0 name '12' from crush map
[21:17] <noob2> the id's match up with the osd names so i don't see why it's removing id 0
[21:17] <dmick> yeah, that sounds like a bug to me
[21:17] <dmick> what version are you using?
[21:17] <noob2> uh oh
[21:18] <noob2> 0.53
[21:18] <noob2> 12.10 ubuntu
[21:18] <noob2> i can bump it up to 0.54 with an apt-get upgrade
[21:19] * mdrnstm (~mdrnstm@206.169.78.213) has joined #ceph
[21:19] <noob2> what might have caused issues is the new osd i added and then removed was 0.54's version
[21:19] <noob2> i didn't realize until after i added it
[21:19] <elder> dmick, BadHostKeyException: Host key for server plana08.front.sepia.ceph.com does not match!
[21:20] <elder> Do you know how I fix that?
[21:20] <dmick> elder: try teuthology-updatekeys (see help)
[21:20] <elder> Where is that command?
[21:20] <dmick> noob2: 12 ought to have worked, we would have thought
[21:20] <noob2> yeah
[21:20] <dmick> elder: in teuthology? :)
[21:20] <noob2> ok just making sure i'm not going crazy :)
[21:21] <dmick> (virtualenv/bin, as usual)
[21:21] <elder> Got it.
[21:21] * rlr219 (43c87e04@ircip4.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[21:22] <noob2> dmick: it's def confused. when i rebooted the first host only 1 osd came down and i have 2 osd's running on it
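For reference, the removal sequence that should have worked here, hedged against version differences (later releases accept the osd.N form in more places, which avoids exactly the id-vs-name confusion noob2 hit):

    ceph osd out 12                  # let data drain off it first if it still holds PGs
    # stop the daemon on its host, e.g.: sudo service ceph stop osd.12
    ceph osd crush remove osd.12     # remove it by name from the CRUSH map
    ceph auth del osd.12             # drop its key
    ceph osd rm 12                   # finally remove the id from the osd map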
[21:24] <elder> dmick, problem was in my hosts file. I had the wrong ip for plana08.
[21:26] * flesh (547908cc@ircip3.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[21:27] <dmick> elder: *hosts* file? :-P
[21:29] * chutzpah (~chutz@199.21.234.7) Quit (Quit: Leaving)
[21:31] * senner (~Wildcard@68-113-232-90.dhcp.stpt.wi.charter.com) has joined #ceph
[21:31] * senner (~Wildcard@68-113-232-90.dhcp.stpt.wi.charter.com) Quit ()
[21:33] * timmclau_ (~timmclaug@69.170.148.179) has joined #ceph
[21:34] <elder> I have the whole internet in there.
[21:36] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) Quit (Quit: Leaving.)
[21:40] * timmclaughlin (~timmclaug@69.170.148.179) Quit (Ping timeout: 480 seconds)
[21:46] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Quit: tryggvil)
[21:49] * adjohn (~adjohn@69.170.166.146) Quit (Quit: adjohn)
[21:50] <darkfader> i still have an '88 /etc/hosts somewhere
[21:50] <darkfader> i think we could add the ones from yours and submit to IANA?
[21:50] <noob2> can you put rbd devices in the fstab so they're automounted?
[21:52] <joshd> noob2: you could if they exist, but there's no built-in way to map them on startup
[21:53] <noob2> ok
[21:54] <noob2> that's what i was thinking but just wanted to confirm
[21:54] <dmick> maybe you could rig up some startup script to run before whatever mounts the 'rest of the filesystems'
[21:54] <dmick> that could parse fstab and map things that need it
[21:55] <noob2> yeah i could def rig it
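A minimal sketch of the kind of rig-up being suggested; nothing like this shipped with 0.53/0.54, so it is purely illustrative and the pool, image, and mount point are assumptions:

    #!/bin/sh
    # boot-time helper, run before local filesystems are mounted
    rbd map mypool/data1                        # creates /dev/rbd/mypool/data1 (via udev) or /dev/rbd0
    mount /dev/rbd/mypool/data1 /srv/data1
    # the matching fstab entry can carry 'noauto' so the normal boot-time
    # mount pass skips it and this script handles it instead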
[21:55] * ghbizness (~ghbizness@host-208-68-233-254.biznesshosting.net) has joined #ceph
[21:56] <ghbizness> hello all
[21:56] <ghbizness> the default replication of blocks for CEPH is currently 2X ... any recommendations on going higher ?
[21:56] <ghbizness> i see that amazon S3 replicates to 3X
[21:57] <ghbizness> we come from a RAID10 + DRBD world which gives us 4X
[21:57] <noob2> use the pool attributes to change it to 3x
[21:57] <noob2> pretty easy
[21:57] <ghbizness> pool attributes?
[21:58] <ghbizness> btw... i too am basically a noob in ceph
[21:58] <noob2> ok
[21:58] <noob2> lemme find it
[21:58] <dmick> joshd: wip-rbd-export-stdout. sagewk: any thoughts on including in bobtail? feature only; hasn't worked in the past (and import - is currently broken too)
[21:58] <ghbizness> no pun on your handle :-)
[21:59] <noob2> ceph osd pool set {pool-name} {key} {value}
[21:59] <noob2> haha
[21:59] * gucki (~smuxi@HSI-KBW-082-212-034-021.hsi.kabelbw.de) Quit (Remote host closed the connection)
[21:59] <noob2> http://ceph.com/docs/master/rados/operations/pools/
[21:59] <noob2> see the size attribute?
[21:59] <noob2> that's your replica number
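Concretely, for the replication question above (the pool name is an example), setting and then confirming the replica count looks like this:

    ceph osd pool set mypool size 3      # keep 3 copies of every object
    ceph osd dump | grep "rep size"      # confirm: the pool line should show 'rep size 3'

min_size, by contrast, sets the minimum number of replicas that must be available for I/O to proceed, which is why it is not the knob for total copies.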
[21:59] <joshd> sagewk: wip-oc-hang
[22:00] <ghbizness> looking
[22:00] <ghbizness> thanks, give me a few and ill respond further
[22:00] <dmick> min_size is probably what you want though
[22:00] <ghbizness> i think mick may be right
[22:00] <joshd> dmick: that's different
[22:01] <ghbizness> hmm.
[22:01] <joshd> ghbizness: size is the total number of copies
[22:01] <noob2> yeah min_size sorry
[22:01] <ghbizness> joshd, i have seen a few of your posts around
[22:01] <dmick> I'd believe joshd over me
[22:01] <ghbizness> can you confirm if we are looking at osd or crush
[22:02] <drokita> Is it normal for an 'rbd rm' of a 5GB image to not give any feedback for 5 minutes?
[22:02] <joshd> got a meeting, back in a bit
[22:02] <ghbizness> drokita, ceph status ??
[22:02] <drokita> one sec... need another session
[22:03] <ghbizness> min_size
[22:03] <ghbizness> Description: Sets the minimum number of replicas required for io. See Set the Number of Object Replicas for further details
[22:03] <ghbizness> Type: Integer
[22:03] <ghbizness> looks like min_size is the winner
[22:03] <drokita> Ceph status is good.
[22:04] <drokita> at this rate, it will take 2 hours for this 5 gig volume to delete
[22:04] <ghbizness> is it mounted ?
[22:04] <ghbizness> mapped ?
[22:04] <ghbizness> rbd ls ?
[22:05] <drokita> no... just created 1 minute before
[22:05] <drokita> I should have probably just resized it
[22:05] * jjgalvez (~jjgalvez@12.248.40.138) has joined #ceph
[22:06] * verwilst (~verwilst@dD5769628.access.telenet.be) Quit (Quit: Ex-Chat)
[22:12] * timmclau_ (~timmclaug@69.170.148.179) Quit (Remote host closed the connection)
[22:13] * timmclaughlin (~timmclaug@69.170.148.179) has joined #ceph
[22:15] <drokita> Seems to be getting faster
[22:17] <ghbizness> osd.2 [INF] 2.30b7 scrub ok
[22:18] <ghbizness> getting a lot of scrubs going on
[22:18] <ghbizness> now that i changed the size to 3
[22:20] <ghbizness> is this normal ?
[22:21] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) Quit (Ping timeout: 480 seconds)
[22:35] * verwilst (~verwilst@dD5769628.access.telenet.be) has joined #ceph
[22:36] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[22:39] * flesh (547908cc@ircip1.mibbit.com) has joined #ceph
[22:42] <ghbizness> when setting data pool size to 3 for replicas, i am still seeing only 2 replicas per block
[22:43] <ghbizness> 38668 MB data, 78399 MB used
[22:44] <ghbizness> pool 0 'data' rep size 3 crush_ruleset 0 object_hash rjenkins pg_num 12928 pgp_num 12928 last_change 160 owner 0 crash_replay_interval 45
[22:50] * adjohn (~adjohn@69.170.166.146) has joined #ceph
[22:55] <dmick> NOTICE: teuthology is down (its vm host is getting more RAM; intended to prenotify, but reracking its neighbor interrupted it so it's getting RAM now)
[22:55] * loicd (~loic@2a01:e35:2eba:db10:1d57:d60a:e658:eb38) Quit (Quit: Leaving.)
[22:55] * loicd (~loic@magenta.dachary.org) has joined #ceph
[23:03] <gregaf> ghbizness: you did set size rather than min_size then, right?
[23:03] <gregaf> for everybody following along, size is the parameter you want :)
[23:03] <gregaf> ghbizness: in any case, have you actually written any data to it yet?
[23:03] <gregaf> and how many OSDs do you have?
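A few hedged checks that go with gregaf's questions: whether the new size actually took, whether there are at least three OSDs to hold three copies, and whether re-replication of existing data is still in flight (raw "used" space only approaches 3x once backfill finishes):

    ceph osd dump | grep "rep size"    # the pool line should read 'rep size 3', as it does above
    ceph osd tree                      # need at least 3 OSDs up/in for 3 copies to land anywhere
    ceph -s                            # degraded/recovering counts drop as the third copies are written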
[23:04] * JoDarc (~Adium@c-98-234-186-68.hsd1.ca.comcast.net) has joined #ceph
[23:05] * vata (~vata@208.88.110.46) Quit (Quit: Leaving.)
[23:10] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[23:14] * noob2 (a5a00214@ircip1.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[23:18] * scalability-junk (~stp@188-193-202-99-dynip.superkabel.de) has joined #ceph
[23:20] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[23:20] * loicd (~loic@magenta.dachary.org) has joined #ceph
[23:28] * buck1 (~buck@bender.soe.ucsc.edu) has joined #ceph
[23:32] <dmick> teuthology back, I think workers are running
[23:46] <flesh> Hi, I am running version ceph-0.48.2 argonaut for the servers, and I mount clients with kernel 3.3.4. Are there any incompatibilities or dependencies, or should it work just fine?
[23:47] * timmclaughlin (~timmclaug@69.170.148.179) Quit (Quit: Leaving...)
[23:48] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[23:49] * rweeks (~rweeks@c-24-4-66-108.hsd1.ca.comcast.net) has joined #ceph
[23:49] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[23:49] <gregaf> there shouldn't be any version incompatibilities there flesh
[23:49] * drokita (~drokita@199.255.228.10) Quit (Read error: Operation timed out)
[23:53] * buck1 (~buck@bender.soe.ucsc.edu) Quit (Remote host closed the connection)
[23:57] * aliguori (~anthony@cpe-70-123-145-75.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[23:58] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Quit: Leaving)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.