#ceph IRC Log

IRC Log for 2013-04-17

Timestamps are in GMT/BST.

[0:00] <lurbs> As far as I'm aware the options as set via iotune or blkiotune in libvirt just leverage cgroups for their particular magic.
[0:00] <lurbs> So if you're not using libvirt you could use cgroups directly. But that can get a bit messy.
[0:00] <mrjack> i use libvirt
[0:00] <mrjack> will test this..
[0:06] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[0:06] * jskinner (~jskinner@69.170.148.179) Quit (Remote host closed the connection)
[0:06] * noahmehl (~noahmehl@67.23.204.150) has joined #ceph
[0:10] * noahmehl (~noahmehl@67.23.204.150) Quit ()
[0:12] * smeven (~diffuse@1.145.222.157) Quit (Ping timeout: 480 seconds)
[0:15] * madkiss1 (~madkiss@tmo-103-108.customers.d1-online.com) has joined #ceph
[0:15] * madkiss (~madkiss@tmo-102-111.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[0:22] * joshd1 (~joshd@67.23.204.150) has joined #ceph
[0:22] * leseb (~Adium@67.23.204.160) Quit (Quit: Leaving.)
[0:36] * rustam (~rustam@94.15.91.30) has joined #ceph
[0:37] * thelan (~thelan@paris.servme.fr) Quit (Ping timeout: 480 seconds)
[0:39] * tnt_ (~tnt@91.177.247.88) Quit (Ping timeout: 480 seconds)
[0:42] * BillK (~BillK@58-7-209-64.dyn.iinet.net.au) has joined #ceph
[0:45] * dosaboy_ (~dosaboy@67.23.204.150) has joined #ceph
[0:45] * dosaboy (~dosaboy@67.23.204.150) Quit (Read error: Connection reset by peer)
[0:46] * vata (~vata@2607:fad8:4:6:6c73:55ef:8faa:2314) Quit (Quit: Leaving.)
[0:52] * aliguori (~anthony@32.97.110.51) Quit (Remote host closed the connection)
[0:53] * kfox1111 (bob@leary.csoft.net) has joined #ceph
[0:53] <kfox1111> Is there any documentation or example code for implementing an osd plugin?
[0:55] * dosaboy_ (~dosaboy@67.23.204.150) Quit (Ping timeout: 480 seconds)
[0:56] * scuttlemonkey (~scuttlemo@67.23.204.150) has joined #ceph
[0:56] * ChanServ sets mode +o scuttlemonkey
[0:56] * yehudasa_ (~yehudasa@static-66-14-234-139.bdsl.verizon.net) has joined #ceph
[0:56] * madkiss1 (~madkiss@tmo-103-108.customers.d1-online.com) Quit (Quit: Leaving.)
[0:56] * thelan (~thelan@paris.servme.fr) has joined #ceph
[0:57] * PerlStalker (~PerlStalk@72.166.192.70) Quit (Quit: ...)
[0:58] * dosaboy (~dosaboy@67.23.204.150) has joined #ceph
[1:00] * Havre (~Havre@2a01:e35:8a2c:b230:8553:22eb:2cef:6ac7) has joined #ceph
[1:02] * drokita (~drokita@199.255.228.128) Quit (Ping timeout: 480 seconds)
[1:03] <dmick> kfox1111: no doc that I'm aware of
[1:03] <dmick> the source tree has a few working implementations and a few toy ones
[1:09] * drokita (~drokita@24-107-180-86.dhcp.stls.mo.charter.com) has joined #ceph
[1:10] * drokita (~drokita@24-107-180-86.dhcp.stls.mo.charter.com) Quit ()
[1:11] <kfox1111> dmick: I'm thinking of making one that does a few simple manipulations of stored json documents. Know any particular one that would be a good staring ground?
[1:12] <dmick> lol staring
[1:12] <dmick> good typo
[1:12] <dmick> um...not offhand, no
[1:12] <kfox1111> :)
[1:12] <kfox1111> bummer. Ok. thanks.
[1:12] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Quit: Leaving)
[1:12] <yehudasa_> there should be a term for having a compilation error that doesn't make sense just to find out that you looked at the wrong file, but the line in question aligned nicely to the compilation error
[1:14] <yehudasa_> cross-source-file-line-alignment-compilation-wtf
[1:14] * houkouonchi-work (~linux@12.248.40.138) Quit (Remote host closed the connection)
[1:15] <kfox1111> dmick: can you point me at a toy one then?
[1:15] <dmick> there are several in src/cls
[1:16] <dmick> and a few more in src/cls_*
[1:16] <kfox1111> ah. ok. Thanks.
[1:16] <dmick> (excluding src/cls_*client.*)
[1:16] <dmick> (which are the examples of how to load and call them)
[1:17] <yehudasa_> simple ones would be cls/version, cls/refcount
[1:17] <dmick> version?
[1:17] <yehudasa_> argh.. yeah, it's actually not upstream yet
[1:18] <dmick> :)
[1:18] <yehudasa_> it's been a while since we developed our bleeding edge stuff directly on the master branch
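
For readers hunting for a starting point: a toy object class ("OSD plugin") has roughly the shape below. This is a C++ sketch modeled loosely on the small classes under src/cls, not code from the tree; the exact macros, entry point and signatures vary between Ceph versions, so treat the names as assumptions and compare against objclass/objclass.h and src/cls in your checkout.

    // Rough sketch of a minimal object class, loosely modeled on the toy
    // classes in src/cls.  Names and signatures are assumptions -- check
    // objclass/objclass.h for the exact API in your version.
    #include "objclass/objclass.h"

    CLS_VER(1,0)
    CLS_NAME(toy)

    cls_handle_t h_class;
    cls_method_handle_t h_append_suffix;

    // Runs on the OSD, next to the data: read the whole object, append the
    // request payload, write the result back.  Everything inside a single
    // objclass call is applied atomically.
    static int append_suffix(cls_method_context_t hctx, bufferlist *in, bufferlist *out)
    {
      uint64_t size;
      int r = cls_cxx_stat(hctx, &size, NULL);
      if (r < 0)
        return r;
      bufferlist data;
      r = cls_cxx_read(hctx, 0, size, &data);
      if (r < 0)
        return r;
      data.append(*in);
      return cls_cxx_write_full(hctx, &data);   // note: a writing op cannot also return data
    }

    void __cls_init()   // entry point name is version-dependent (assumption)
    {
      cls_register("toy", &h_class);
      cls_register_cxx_method(h_class, "append_suffix",
                              CLS_METHOD_RD | CLS_METHOD_WR,
                              append_suffix, &h_append_suffix);
    }

A client would then invoke it on an object via IoCtx::exec(), much like the src/cls_*client.* files call their classes.
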
[1:19] <kfox1111> hmmm... lock looks interesting....
[1:20] <kfox1111> can you use that to ensure no two clients are editing/reading a file at the same time?
[1:20] <yehudasa_> it's an advisory lock
[1:20] <kfox1111> If it does, I could just take a lock, read the file, update the json, and write it back.
[1:20] <yehudasa_> clients can use it to coordinate
[1:21] <yehudasa_> why would you read the file though
[1:21] <yehudasa_> you can send a command that does the sequence in a single atomic operation
[1:21] <kfox1111> so, I can read the file, make a change, write the file, and ensure no one wrote to the file during the whole time?
[1:22] <yehudasa_> yeah, everything that runs within a single objclass call is atomic
[1:22] <yehudasa_> just note that an operation that writes data can not return data
[1:22] * mattm__ (~matt@108-95-148-196.lightspeed.austtx.sbcglobal.net) has left #ceph
[1:23] <kfox1111> Are you talking about plugins, or librados clients?
[1:23] <yehudasa_> plugins
[1:24] <kfox1111> ok. I'm wondering if the lock class would let me do everything I need to do from librados instead of writing a plugin.
[1:24] <yehudasa_> for librados clients there's a compound operation, which is a way to run multiple operations on a single request
[1:24] * LeaChim (~LeaChim@90.215.24.238) Quit (Ping timeout: 480 seconds)
[1:24] <kfox1111> It looks like it supports reader/writer locks...
[1:24] <yehudasa_> these are applied atomically too
[1:25] <yehudasa_> the lock objclass?
[1:25] <kfox1111> can a read/modify/write to an object happen atomically?
[1:25] <kfox1111> i.e., fail the write if the object changed in the meantime?
[1:26] <yehudasa_> depends how you do the read/modify/write
[1:26] <yehudasa_> if you're doing it in a single operation then yes
[1:26] <yehudasa_> if not then you need to use some trickery, like we do in the gateway
[1:27] <kfox1111> so if two clients are racing to write to the same file, and one gets his write in while the other is writing his, the second will fail... then it can simply retry again...
[1:28] <yehudasa_> basically you set some xattr on the object, and when you write it you do a compound operation that first checks that the xattr is still the same
[1:28] * joshd1 (~joshd@67.23.204.150) Quit (Quit: Leaving.)
[1:28] * joshd1 (~joshd@67.23.204.150) has joined #ceph
[1:28] <yehudasa_> you can use vanilla librados operations to do that
[1:28] <yehudasa_> without any needs for plugins
[1:28] <kfox1111> ah. interesting. that should work then. thank you.
[1:32] <dmick> but the plugin buys you "do it next to the data", which can be nice
[1:33] * malcolm (~malcolm@silico24.lnk.telstra.net) has joined #ceph
[1:33] <kfox1111> dmick: my only concern there is how much data I can push to the plugin. Are there any limits?
[1:33] * dosaboy (~dosaboy@67.23.204.150) Quit (Ping timeout: 480 seconds)
[1:33] <kfox1111> The other is, it may be simpler to do it from the client...
[1:34] <dmick> don't know.
[1:34] <kfox1111> I do like the idea of running it right next to the data though.
[1:34] <kfox1111> so long as replication and everything still works.
[1:35] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[1:36] <kfox1111> yehudasa_: Do you have an example of chaining an xattr check with a full write? I'm not seeing how to do that in the api docs.
[1:38] <yehudasa_> kfox1111: look at rgw/rgw_rados.cc, there's a bunch of compound operation calls there, specifically look at anything that uses cmpxattr
[1:38] <yehudasa_> the code path may not be trivial though
[1:38] <kfox1111> ok. thanks.
[1:39] <yehudasa_> basically you define ObjectWriteOperation op, then call op.cmpxattr(), op.write(), and ioctx.exec(op, ...)
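
As a concrete illustration of that description, here is a minimal sketch of the guarded write using the librados C++ API. The object and xattr names are made up, the error handling is minimal, and signatures may differ slightly between librados versions; the compound op is submitted with IoCtx::operate() here (IoCtx::exec() is the entry point for object class calls).

    // Sketch only: guard a write with a version xattr so it fails with
    // -ECANCELED if another client updated the object after we read it.
    // Object and xattr names are illustrative assumptions.
    #include <rados/librados.hpp>
    #include <string>

    int guarded_write(librados::IoCtx& ioctx,
                      const std::string& oid,
                      const std::string& expected_ver,   // xattr value seen when we read
                      const std::string& new_ver,
                      librados::bufferlist& new_data)
    {
      librados::bufferlist expected, updated;
      expected.append(expected_ver);
      updated.append(new_ver);

      librados::ObjectWriteOperation op;
      op.cmpxattr("version", LIBRADOS_CMPXATTR_OP_EQ, expected); // the guard
      op.setxattr("version", updated);                           // bump guard for the next writer
      op.write_full(new_data);                                   // the actual payload

      // The whole compound operation is applied atomically on the OSD.
      return ioctx.operate(oid, &op);   // on -ECANCELED: re-read, re-apply, retry
    }
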
[1:46] <malcolm> So I've got a question about read performance. I built a cluster that gets around 1.2GB/s write on the client, however my reads are more like 50-100mb/s. I'm using 0.56.4. Nothing too fancy in my other settings.. Where should I start looking/what info do you want to point me in the right direction?
[1:48] * houkouonchi-work (~linux@12.248.40.138) has joined #ceph
[1:48] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) Quit (Read error: Connection reset by peer)
[1:54] <kfox1111> yehudasa_: so, like RGWRados::set_attr looks to be a good example? Basically the same thing, with a write in between too?
[1:57] * madkiss (~madkiss@tmo-103-108.customers.d1-online.com) has joined #ceph
[2:01] * madkiss (~madkiss@tmo-103-108.customers.d1-online.com) Quit (Read error: Connection reset by peer)
[2:08] * scuttlemonkey (~scuttlemo@67.23.204.150) Quit (Ping timeout: 480 seconds)
[2:08] <yehudasa_> kfox1111: yeah, the append_atomic_test() call adds the guard
[2:09] <yehudasa_> kfox1111: note that at the point that's called, we've already read the object
[2:09] <kfox1111> Thanks again for the pointers. I think I see how this all works. You saved me lots of work. :)
[2:10] <kfox1111> so in that case you are checking to see if it changed while you were reading it?
[2:10] <yehudasa_> checking to see if it changed after reading it
[2:11] <kfox1111> ok.
[2:11] <yehudasa_> it cannot change while reading it .. the read operation that we do there is atomic, that is, we only read once from that object
[2:11] <yehudasa_> but once it's read we have no way to know whether the data we have is current
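
The read side of that pattern can grab the data and the guard xattr in one compound read, so both come from the same version of the object. Again a hedged sketch: the "version" xattr matches the write example above, and the zero-length read meaning "whole object" is an assumption worth verifying (otherwise stat the object first for its size).

    // Sketch only: fetch an object and its guard xattr in a single atomic read.
    #include <rados/librados.hpp>
    #include <string>

    int read_with_version(librados::IoCtx& ioctx,
                          const std::string& oid,
                          librados::bufferlist* data,
                          librados::bufferlist* version)
    {
      int data_rval = 0, xattr_rval = 0;
      librados::ObjectReadOperation op;
      op.read(0, 0, data, &data_rval);              // len 0: read the whole object (assumption)
      op.getxattr("version", version, &xattr_rval);
      int r = ioctx.operate(oid, &op, NULL);        // one atomic round trip
      if (r < 0)
        return r;
      return data_rval < 0 ? data_rval : xattr_rval;
    }
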
[2:12] <kfox1111> so.. if I do an atomic write while something else is reading, will it just block until the reader is done?
[2:12] <yehudasa_> yeah, operations on the object are sequential
[2:12] <kfox1111> awesome.
[2:12] <yehudasa_> as long as you do your operation in a single rados command
[2:13] <kfox1111> where a rados command can be compound?
[2:13] <yehudasa_> yes
[2:13] <kfox1111> k.
[2:13] <yehudasa_> the compound operation will be applied atomically
[2:13] * joshd1 (~joshd@67.23.204.150) Quit (Ping timeout: 480 seconds)
[2:15] <kfox1111> ok. Hopefully I can write up a quick test case tomorrow. :)
[2:15] <yehudasa_> good luck
[2:15] <kfox1111> Thanks. :)
[2:24] <malcolm> Sorry am I asking in the wrong place or not providing enough info? I've kinda exhausted every web based resource I could find..
[2:26] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[2:34] * alram (~alram@38.122.20.226) Quit (Ping timeout: 480 seconds)
[2:35] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[2:37] * dpippenger (~riven@216.103.134.250) Quit (Remote host closed the connection)
[2:42] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[2:45] <dmick> malcolm: this is a reasonable place; there just isn't anyone available right at the moment who can say much
[2:45] <dmick> how are you measuring write/read performance?
[2:48] * xiaoxi (~xiaoxi@shzdmzpr01-ext.sh.intel.com) has joined #ceph
[2:50] * yehudasa__ (~yehudasa@m9f2736d0.tmodns.net) has joined #ceph
[2:51] * rekby (~Adium@2.93.58.253) has joined #ceph
[2:51] * madkiss (~madkiss@tmo-103-108.customers.d1-online.com) has joined #ceph
[2:52] * kfox1111 (bob@leary.csoft.net) Quit (Quit: Lost terminal)
[2:53] <joao> Karcaw, still around?
[2:55] * yehudasa_ (~yehudasa@static-66-14-234-139.bdsl.verizon.net) Quit (Ping timeout: 480 seconds)
[2:56] <rekby> Hello.
[2:56] <rekby> I can mount CephFS as fuse:
[2:56] <rekby> ceph-fuse /ceph
[2:56] <rekby> But I can't mount it as kernel file system:
[2:56] <rekby> mount -t ceph x.x.x.x:/ /ceph/
[2:56] <rekby> mount: 213.239.212.103:/: can't read superblock
[2:56] <rekby> mount.ceph x.x.x.x:/ /ceph/
[2:56] <rekby> mount error 5 = Input/output error
[3:00] <dmick> rekby: which kernel?
[3:01] <rekby> 3.8 from elrepo:
[3:01] <rekby> Linux s.rekby.ru 3.8.7-1.el6.elrepo.x86_64 #1 SMP Sat Apr 13 04:51:25 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux
[3:01] <dmick> anything interesting in the logs when you attempt the mount?
[3:02] <malcolm> dmick: Thanks for your reply. We are currently doing stream reads. We were hoping to store some big flat data files. We have 'fallen' back to using dd to remove complexity from our reads.
[3:03] <rekby> I have a non-standard port: 2398 for the monitors
[3:03] <rekby> mount.ceph 213.239.212.103:2398:/ /ceph/
[3:03] <rekby> mount error 22 = Invalid argument
[3:03] <rekby> No, I don't see anything interesting in the log
[3:04] <yehudasa__> that's an interesting port
[3:05] <yehudasa__> usually the default is 6789
[3:05] <yehudasa__> ah.. you just mentioned that you have non standard.. should read beyond the top two lines
[3:06] <dmick> malcolm: I was sorta asking for context. there are many many ways to use RADOS. How are you using it specifically?
[3:06] <rekby> I have found the problem:
[3:06] <rekby> I have turned on auth, and ceph-fuse reads the auth info from /etc/ceph/ceph.conf
[3:06] <rekby> mount.ceph doesn't do that
[3:06] <rekby> This works for me:
[3:06] <rekby> mount.ceph 213.239.212.103:2398:/ /ceph/ -o name=admin,secretfile=/etc/ceph/admin.secret
[3:08] <rekby> But I didn't see any auth error on the console or in the log.
[3:08] <malcolm> dmick: Sorry, we are using RBD. We then map, then format with ext4. We dd'ed data in nice and quick. We were seeing up to 2GB/s on our backend storage and getting 800mb ~ 1.2GB/s on our clients. When we started testing reading data out we are only seeing 20mb/s ~ 70mb/s read on the back end and about the same or less on the client.
[3:08] <dmick> so not just rbd but kernel rbd
[3:08] <dmick> it certainly doesn't make much sense that read would be slower than write
[3:09] <malcolm> dmick: sorry yes. And that is what we figured.
[3:09] <dmick> I assume when you write "mb" you mean "MB" (as in megabyte)
[3:09] <malcolm> Yes
[3:09] <malcolm> Shift keys are so far away sometimes :P
[3:09] <dmick> and the tools you were using for read and write were both dd?
[3:10] <malcolm> at the moment. We were doing loads using our editing software. But they ran so badly that we decided to test with something a tad more.. simple
[3:12] <dmick> how about avoiding the filesystem and dd'ing from the block device itself (preferably with direct)
[3:12] <dmick> also bad?
[3:14] <rekby> dmick, hello
[3:14] <rekby> Is there any way to use rbd as a block device without the kernel module? For example, I want to use it in CentOS with the native 2.6.32 kernel
[3:15] <malcolm> dmick: just did a read test on the raw device. It got an average of 280MB/s this was with direct
[3:15] <dmick> rekby: no. there's an iscsi bridge, and you can use it from VMs without the kernel driver
[3:15] <dmick> well, that's certainly better, but still.
[3:15] <dmick> ^malcolm
[3:15] <rekby> dmick, thanks
[3:16] <dmick> but a block device needs a block driver
[3:17] <dmick> I guess there are conceivably such things as userland block drivers, but I don't know much about them
[3:18] <dmick> malcolm: does "without direct" make it better?
[3:19] <malcolm> dmick: worse actually
[3:20] <malcolm> dmick: average is about 120MB/s
[3:25] * tserong_ (~tserong@124-171-116-238.dyn.iinet.net.au) has joined #ceph
[3:30] * tserong (~tserong@124-168-229-104.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[3:34] * madkiss (~madkiss@tmo-103-108.customers.d1-online.com) Quit (Read error: Connection reset by peer)
[3:37] * yehudasa_ (~yehudasa@static-66-14-234-139.bdsl.verizon.net) has joined #ceph
[3:37] <dmick> I would post an email with the data you have above to ceph-devel and see if you get a wider audience. The guy I'd ask about this is Mark Nelson; he's definitely got the most "stack bottleneck" experience among us
[3:39] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) has joined #ceph
[3:42] * yehudasa__ (~yehudasa@m9f2736d0.tmodns.net) Quit (Ping timeout: 480 seconds)
[3:50] <malcolm> dmick: Thanks for that, I will do just that.
[3:54] * chutz (~chutz@2600:3c01::f03c:91ff:feae:3253) Quit (Quit: Leaving)
[3:54] * chutz (~chutz@2600:3c01::f03c:91ff:feae:3253) has joined #ceph
[3:54] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) Quit (Quit: Leaving.)
[3:58] * rekby (~Adium@2.93.58.253) Quit (Quit: Leaving.)
[4:02] * treaki_ (05145164c4@p4FDF76E3.dip.t-dialin.net) has joined #ceph
[4:06] * treaki (~treaki@p4FF4BA9B.dip.t-dialin.net) Quit (Ping timeout: 480 seconds)
[4:16] * madkiss (~madkiss@tmo-103-108.customers.d1-online.com) has joined #ceph
[4:20] * madkiss (~madkiss@tmo-103-108.customers.d1-online.com) Quit (Read error: Connection reset by peer)
[4:25] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) has joined #ceph
[4:26] * yehudasa_ (~yehudasa@static-66-14-234-139.bdsl.verizon.net) Quit (Ping timeout: 480 seconds)
[4:37] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) Quit (Ping timeout: 480 seconds)
[4:37] * diegows (~diegows@190.190.2.126) Quit (Ping timeout: 480 seconds)
[4:55] * chutz (~chutz@2600:3c01::f03c:91ff:feae:3253) Quit (Remote host closed the connection)
[4:56] * chutz (~chutz@2600:3c01::f03c:91ff:feae:3253) has joined #ceph
[5:10] * madkiss (~madkiss@tmo-103-108.customers.d1-online.com) has joined #ceph
[5:12] * davidzlap (~Adium@ip68-96-75-123.oc.oc.cox.net) Quit (Quit: Leaving.)
[5:16] * madkiss (~madkiss@tmo-103-108.customers.d1-online.com) Quit (Read error: Connection reset by peer)
[5:19] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) has joined #ceph
[5:20] * tserong_ is now known as tserong
[5:27] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) Quit (Ping timeout: 480 seconds)
[5:28] * alram (~alram@cpe-75-83-127-87.socal.res.rr.com) has joined #ceph
[5:44] * xiaoxi (~xiaoxi@shzdmzpr01-ext.sh.intel.com) Quit (Ping timeout: 480 seconds)
[5:47] * alram (~alram@cpe-75-83-127-87.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[5:59] * Havre (~Havre@2a01:e35:8a2c:b230:8553:22eb:2cef:6ac7) Quit (Ping timeout: 480 seconds)
[6:04] * madkiss (~madkiss@tmo-103-108.customers.d1-online.com) has joined #ceph
[6:05] * john_barbee_ (~jbarbee@c-98-226-73-253.hsd1.in.comcast.net) has joined #ceph
[6:05] * madkiss (~madkiss@tmo-103-108.customers.d1-online.com) Quit (Read error: Connection reset by peer)
[6:07] * dpippenger (~riven@cpe-76-166-221-185.socal.res.rr.com) has joined #ceph
[6:13] * Cube1 (~Cube@12.248.40.138) Quit (Ping timeout: 480 seconds)
[6:13] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) has joined #ceph
[6:22] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) Quit (Ping timeout: 480 seconds)
[6:26] * calebamiles (~caleb@c-50-138-218-203.hsd1.vt.comcast.net) Quit (Ping timeout: 480 seconds)
[6:37] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[6:58] * xiaoxi (~xiaoxi@shzdmzpr02-ext.sh.intel.com) has joined #ceph
[6:59] * madkiss (~madkiss@tmo-103-108.customers.d1-online.com) has joined #ceph
[6:59] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (Remote host closed the connection)
[7:07] * scuttlemonkey (~scuttlemo@173-12-167-177-oregon.hfc.comcastbusiness.net) has joined #ceph
[7:07] * ChanServ sets mode +o scuttlemonkey
[7:08] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) has joined #ceph
[7:09] * rekby (~Adium@2.93.58.253) has joined #ceph
[7:11] * drokita (~drokita@24-107-180-86.dhcp.stls.mo.charter.com) has joined #ceph
[7:14] * KindTwo (~KindOne@h74.235.22.98.dynamic.ip.windstream.net) has joined #ceph
[7:15] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[7:19] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) Quit (resistance.oftc.net graviton.oftc.net)
[7:19] * xiaoxi (~xiaoxi@shzdmzpr02-ext.sh.intel.com) Quit (resistance.oftc.net graviton.oftc.net)
[7:19] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) Quit (resistance.oftc.net graviton.oftc.net)
[7:19] * john_barbee_ (~jbarbee@c-98-226-73-253.hsd1.in.comcast.net) Quit (resistance.oftc.net graviton.oftc.net)
[7:19] * chutz (~chutz@2600:3c01::f03c:91ff:feae:3253) Quit (resistance.oftc.net graviton.oftc.net)
[7:19] * tserong (~tserong@124-171-116-238.dyn.iinet.net.au) Quit (resistance.oftc.net graviton.oftc.net)
[7:19] * thelan (~thelan@paris.servme.fr) Quit (resistance.oftc.net graviton.oftc.net)
[7:19] * yehudasa (~yehudasa@2607:f298:a:607:953a:9b8d:c1db:2b84) Quit (resistance.oftc.net graviton.oftc.net)
[7:19] * gregaf (~Adium@2607:f298:a:607:114a:6960:bfaa:e904) Quit (resistance.oftc.net graviton.oftc.net)
[7:19] * stiller (~Adium@2001:980:87b9:1:b0c7:b3ab:1726:52d5) Quit (resistance.oftc.net graviton.oftc.net)
[7:19] * __jt__ (~james@rhyolite.bx.mathcs.emory.edu) Quit (resistance.oftc.net graviton.oftc.net)
[7:19] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (resistance.oftc.net graviton.oftc.net)
[7:19] * dmick (~dmick@2607:f298:a:607:2195:1fb3:8d86:a2c4) Quit (resistance.oftc.net graviton.oftc.net)
[7:19] * psomas (~psomas@inferno.cc.ece.ntua.gr) Quit (resistance.oftc.net graviton.oftc.net)
[7:19] * dxd828 (~dxd828@195.191.107.205) Quit (resistance.oftc.net graviton.oftc.net)
[7:19] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) Quit (resistance.oftc.net graviton.oftc.net)
[7:19] * jpieper_ (~josh@209-6-205-161.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com) Quit (resistance.oftc.net graviton.oftc.net)
[7:19] * HauM1 (~HauM1@login.univie.ac.at) Quit (resistance.oftc.net graviton.oftc.net)
[7:19] * jackhill (jackhill@pilot.trilug.org) Quit (resistance.oftc.net graviton.oftc.net)
[7:19] * piti (~piti@82.246.190.142) Quit (resistance.oftc.net graviton.oftc.net)
[7:19] * doubleg (~doubleg@69.167.130.11) Quit (resistance.oftc.net graviton.oftc.net)
[7:19] * NuxRo (~nux@85.13.211.140) Quit (resistance.oftc.net graviton.oftc.net)
[7:19] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) Quit (resistance.oftc.net graviton.oftc.net)
[7:19] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) Quit (resistance.oftc.net graviton.oftc.net)
[7:19] * stefunel (~stefunel@static.38.162.46.78.clients.your-server.de) Quit (resistance.oftc.net graviton.oftc.net)
[7:19] * cclien (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) Quit (resistance.oftc.net graviton.oftc.net)
[7:19] * joelio (~Joel@88.198.107.214) Quit (resistance.oftc.net graviton.oftc.net)
[7:19] * sjust (~sam@2607:f298:a:607:baac:6fff:fe83:5a02) Quit (resistance.oftc.net graviton.oftc.net)
[7:19] * KindTwo is now known as KindOne
[7:19] * xiaoxi (~xiaoxi@shzdmzpr02-ext.sh.intel.com) has joined #ceph
[7:19] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[7:19] * john_barbee_ (~jbarbee@c-98-226-73-253.hsd1.in.comcast.net) has joined #ceph
[7:19] * chutz (~chutz@2600:3c01::f03c:91ff:feae:3253) has joined #ceph
[7:19] * tserong (~tserong@124-171-116-238.dyn.iinet.net.au) has joined #ceph
[7:19] * thelan (~thelan@paris.servme.fr) has joined #ceph
[7:19] * yehudasa (~yehudasa@2607:f298:a:607:953a:9b8d:c1db:2b84) has joined #ceph
[7:19] * gregaf (~Adium@2607:f298:a:607:114a:6960:bfaa:e904) has joined #ceph
[7:19] * stiller (~Adium@2001:980:87b9:1:b0c7:b3ab:1726:52d5) has joined #ceph
[7:19] * __jt__ (~james@rhyolite.bx.mathcs.emory.edu) has joined #ceph
[7:19] * dmick (~dmick@2607:f298:a:607:2195:1fb3:8d86:a2c4) has joined #ceph
[7:19] * psomas (~psomas@inferno.cc.ece.ntua.gr) has joined #ceph
[7:19] * dxd828 (~dxd828@195.191.107.205) has joined #ceph
[7:19] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) has joined #ceph
[7:19] * jpieper_ (~josh@209-6-205-161.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com) has joined #ceph
[7:19] * HauM1 (~HauM1@login.univie.ac.at) has joined #ceph
[7:19] * jackhill (jackhill@pilot.trilug.org) has joined #ceph
[7:19] * piti (~piti@82.246.190.142) has joined #ceph
[7:19] * doubleg (~doubleg@69.167.130.11) has joined #ceph
[7:19] * NuxRo (~nux@85.13.211.140) has joined #ceph
[7:19] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) has joined #ceph
[7:19] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) has joined #ceph
[7:19] * stefunel (~stefunel@static.38.162.46.78.clients.your-server.de) has joined #ceph
[7:19] * sjust (~sam@2607:f298:a:607:baac:6fff:fe83:5a02) has joined #ceph
[7:19] * joelio (~Joel@88.198.107.214) has joined #ceph
[7:19] * cclien (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) has joined #ceph
[7:19] * drokita (~drokita@24-107-180-86.dhcp.stls.mo.charter.com) Quit (Ping timeout: 480 seconds)
[7:22] * JohansGlock (~quassel@kantoor.transip.nl) has joined #ceph
[7:31] * Cube (~Cube@cpe-76-172-67-97.socal.res.rr.com) has joined #ceph
[7:37] * treaki_ is now known as treaki
[7:42] * madkiss (~madkiss@tmo-103-108.customers.d1-online.com) Quit (Quit: Leaving.)
[7:52] * tnt (~tnt@91.177.247.88) has joined #ceph
[7:53] * norbi (~nonline@buerogw01.ispgateway.de) has joined #ceph
[8:00] * Vjarjadian (~IceChat77@90.214.208.5) Quit (Quit: REALITY.SYS Corrupted: Re-boot universe? (Y/N/Q))
[8:02] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) has joined #ceph
[8:10] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) Quit (Ping timeout: 480 seconds)
[8:15] * rustam (~rustam@94.15.91.30) has joined #ceph
[8:17] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[8:39] * capri (~capri@212.218.127.222) has joined #ceph
[8:46] * joshd1 (~joshd@173-12-167-177-oregon.hfc.comcastbusiness.net) has joined #ceph
[8:48] * sleinen (~Adium@130.59.94.207) has joined #ceph
[8:51] * sleinen1 (~Adium@2001:620:0:25:3dcb:1710:a54d:19fb) has joined #ceph
[8:53] * scuttlemonkey (~scuttlemo@173-12-167-177-oregon.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[8:56] * sleinen (~Adium@130.59.94.207) Quit (Ping timeout: 480 seconds)
[9:04] * tnt (~tnt@91.177.247.88) Quit (Ping timeout: 480 seconds)
[9:04] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) has joined #ceph
[9:06] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) Quit (Read error: Connection reset by peer)
[9:07] * leseb1 (~Adium@ip-64-134-128-29.public.wayport.net) has joined #ceph
[9:08] * paravoid (~paravoid@scrooge.tty.gr) Quit (Read error: Connection reset by peer)
[9:08] * paravoid (~paravoid@scrooge.tty.gr) has joined #ceph
[9:09] * eschnou (~eschnou@85.234.217.115.static.edpnet.net) has joined #ceph
[9:10] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:15] * leseb1 (~Adium@ip-64-134-128-29.public.wayport.net) Quit (Ping timeout: 480 seconds)
[9:19] * tnt (~tnt@212-166-48-236.win.be) has joined #ceph
[9:23] * malcolm (~malcolm@silico24.lnk.telstra.net) Quit (Ping timeout: 480 seconds)
[9:33] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has left #ceph
[9:34] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[9:36] * l0nk (~alex@83.167.43.235) has joined #ceph
[9:39] * paravoid (~paravoid@scrooge.tty.gr) Quit (Ping timeout: 480 seconds)
[9:40] * paravoid (~paravoid@scrooge.tty.gr) has joined #ceph
[9:41] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) has joined #ceph
[9:49] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) Quit (Ping timeout: 480 seconds)
[9:51] * LeaChim (~LeaChim@90.215.24.238) has joined #ceph
[9:54] * joshd1 (~joshd@173-12-167-177-oregon.hfc.comcastbusiness.net) Quit (Quit: Leaving.)
[9:58] * mega_au (~chatzilla@84.244.1.200) Quit (Ping timeout: 480 seconds)
[9:59] * Yen_ (~Yen@ip-83-134-91-48.dsl.scarlet.be) has joined #ceph
[10:01] * Yen (~Yen@ip-83-134-112-127.dsl.scarlet.be) Quit (Ping timeout: 480 seconds)
[10:01] * Yen_ is now known as Yen
[10:03] * ScOut3R (~ScOut3R@212.96.47.215) has joined #ceph
[10:09] * rahmu (~rahmu@83.167.43.235) has joined #ceph
[10:14] * jgallard (~jgallard@gw-aql-129.aql.fr) has joined #ceph
[10:17] * Dieter_be (~Dieterbe@dieter2.plaetinck.be) Quit (Read error: Operation timed out)
[10:17] * Dieter_be (~Dieterbe@dieter2.plaetinck.be) has joined #ceph
[10:18] * jpieper_ (~josh@209-6-205-161.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com) Quit (Quit: Ex-Chat)
[10:18] * jpieper (~josh@209-6-205-161.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com) has joined #ceph
[10:35] * Forced (~Forced@205.132.255.75) Quit (Ping timeout: 480 seconds)
[10:35] * Forced (~Forced@205.132.255.75) has joined #ceph
[10:35] * smeven (~diffuse@110.151.97.93) has joined #ceph
[10:42] * Yen (~Yen@ip-83-134-91-48.dsl.scarlet.be) Quit (Ping timeout: 480 seconds)
[10:43] * Yen (~Yen@ip-81-11-239-131.dsl.scarlet.be) has joined #ceph
[10:45] * Cube (~Cube@cpe-76-172-67-97.socal.res.rr.com) Quit (Quit: Leaving.)
[10:46] <tnt> Anybody using xen with ceph rbd for vm images here?
[10:46] <tnt> (and with the osd as domU)
[10:49] * vo1d (~v0@193-83-48-168.adsl.highway.telekom.at) has joined #ceph
[10:56] * v0id (~v0@62-46-172-131.adsl.highway.telekom.at) Quit (Ping timeout: 480 seconds)
[10:56] * rustam (~rustam@94.15.91.30) has joined #ceph
[10:57] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[11:07] * KindTwo (~KindOne@h225.169.17.98.dynamic.ip.windstream.net) has joined #ceph
[11:09] * xiaoxi (~xiaoxi@shzdmzpr02-ext.sh.intel.com) Quit (Ping timeout: 480 seconds)
[11:10] * KindOne- (~KindOne@h119.3.40.162.dynamic.ip.windstream.net) has joined #ceph
[11:13] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[11:13] * KindOne- is now known as KindOne
[11:13] * mcclurmc_laptop (~mcclurmc@cpc10-cmbg15-2-0-cust205.5-4.cable.virginmedia.com) Quit (Ping timeout: 480 seconds)
[11:15] * Cube (~Cube@cpe-76-172-67-97.socal.res.rr.com) has joined #ceph
[11:15] * KindTwo (~KindOne@h225.169.17.98.dynamic.ip.windstream.net) Quit (Ping timeout: 480 seconds)
[11:27] * Cube (~Cube@cpe-76-172-67-97.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[11:29] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) has joined #ceph
[11:33] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has joined #ceph
[11:37] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) Quit (Ping timeout: 480 seconds)
[11:49] * Yen (~Yen@ip-81-11-239-131.dsl.scarlet.be) Quit (Ping timeout: 480 seconds)
[11:50] * Yen (~Yen@ip-81-11-235-40.dsl.scarlet.be) has joined #ceph
[11:52] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[11:52] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[11:57] * mcclurmc_laptop (~mcclurmc@firewall.ctxuk.citrix.com) has joined #ceph
[12:04] * Yen_ (~Yen@ip-81-11-199-77.dsl.scarlet.be) has joined #ceph
[12:06] * rustam (~rustam@94.15.91.30) has joined #ceph
[12:06] * Yen (~Yen@ip-81-11-235-40.dsl.scarlet.be) Quit (Ping timeout: 480 seconds)
[12:06] * Yen_ is now known as Yen
[12:07] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[12:23] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) has joined #ceph
[12:23] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[12:26] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[12:27] * rekby (~Adium@2.93.58.253) Quit (Quit: Leaving.)
[12:28] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[12:28] * diegows (~diegows@190.190.2.126) has joined #ceph
[12:30] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[12:31] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) Quit (Ping timeout: 480 seconds)
[12:35] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[12:35] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[12:43] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[12:44] * jerker (jerker@Psilocybe.Update.UU.SE) Quit (Read error: Connection reset by peer)
[12:44] * jerker (jerker@Psilocybe.Update.UU.SE) has joined #ceph
[12:48] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[12:48] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[12:58] * ScOut3R_ (~ScOut3R@212.96.47.215) has joined #ceph
[12:59] * Yen (~Yen@ip-81-11-199-77.dsl.scarlet.be) Quit (Ping timeout: 480 seconds)
[13:01] * Yen (~Yen@ip-81-11-209-89.dsl.scarlet.be) has joined #ceph
[13:05] * malcolm (~malcolm@101.165.48.42) has joined #ceph
[13:05] * ScOut3R (~ScOut3R@212.96.47.215) Quit (Ping timeout: 480 seconds)
[13:08] * Yen_ (~Yen@ip-81-11-201-180.dsl.scarlet.be) has joined #ceph
[13:09] * Yen (~Yen@ip-81-11-209-89.dsl.scarlet.be) Quit (Ping timeout: 480 seconds)
[13:09] * Yen_ is now known as Yen
[13:12] * hybrid5121 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[13:14] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[13:17] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) has joined #ceph
[13:20] * trond (~trond@trh.betradar.com) has joined #ceph
[13:22] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[13:24] <joao> wow, slow day on IRC...
[13:25] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) Quit (Ping timeout: 480 seconds)
[13:28] <trond> Hi, i am having problems removing an rbd image.
[13:29] <trond> root@zrh4-gnt01:~# rbd rm df62c719-b02f-4a78-8b0b-b6791b3003ee.rbd.disk0
[13:29] <trond> Removing image: 99% complete...failed.
[13:29] <trond> 2013-04-17 13:28:39.938789 7fc8a7032760 -1 rbd: error: image still has watchers
[13:29] <trond> This means the image is still open or the client using it crashed. Try again after closing/unmapping it or waiting 30s for the crashed client to timeout.
[13:29] <trond> librbd: error removing header: (16) Device or resource busy
[13:29] <trond> It is not mapped anywhere.
[13:29] * malcolm (~malcolm@101.165.48.42) Quit (Read error: Connection reset by peer)
[13:30] <trond> Running ceph 0.56.4 with Linux Kernel 3.9rc5
[13:30] <trond> Btw, when can we expect format 2 support for the kernel module?
[13:31] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has left #ceph
[13:32] <Azrael> any folks have suggestions for ceph server naming? what schemes do you follow? have rack name in the hostname, etc?
[13:35] <darkfaded> trond: may i ask what format 2 is?
[13:35] <darkfaded> rings no bells :/
[13:37] <trond> darkfaded: image format 2 allows striping over multiple OSDs, snapshots etc.. afaik
[13:38] <trond> http://ceph.com/docs/master/man/8/rbd/?highlight=format#cmdoption-rbd--image-format
[13:39] <darkfaded> thanks.
[13:46] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[13:48] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Quit: later)
[13:48] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[13:49] * lxo (~aoliva@lxo.user.oftc.net) Quit ()
[13:49] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[13:54] * Yen (~Yen@ip-81-11-201-180.dsl.scarlet.be) Quit (Remote host closed the connection)
[13:56] * Karcaw (~evan@68-186-68-219.dhcp.knwc.wa.charter.com) Quit (Remote host closed the connection)
[13:56] * Karcaw (~evan@68-186-68-219.dhcp.knwc.wa.charter.com) has joined #ceph
[13:56] * sivanov (~sivanov@gw2.maxtelecom.bg) has joined #ceph
[13:56] <sivanov> Hello, anyone with ceph + opennebula experience
[13:56] <sivanov> ?
[13:59] * john_barbee_ (~jbarbee@c-98-226-73-253.hsd1.in.comcast.net) Quit (Read error: Connection reset by peer)
[14:01] * dxd828 (~dxd828@195.191.107.205) Quit (Remote host closed the connection)
[14:01] * Yen (~Yen@ip-81-11-208-196.dsl.scarlet.be) has joined #ceph
[14:01] * mega_au (~chatzilla@94.137.199.2) has joined #ceph
[14:05] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[14:10] * dxd828 (~dxd828@195.191.107.205) has joined #ceph
[14:11] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) has joined #ceph
[14:13] * JohansGlock_ (~quassel@kantoor.transip.nl) has joined #ceph
[14:13] * capri_on (~capri@212.218.127.222) has joined #ceph
[14:14] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[14:15] * stefunel (~stefunel@static.38.162.46.78.clients.your-server.de) Quit (Quit: ZNC - http://znc.sourceforge.net)
[14:16] * stefunel (~stefunel@static.38.162.46.78.clients.your-server.de) has joined #ceph
[14:17] * joelio (~Joel@88.198.107.214) Quit (Remote host closed the connection)
[14:18] * joelio (~Joel@88.198.107.214) has joined #ceph
[14:19] <imjustmatthew_> sivanov: A lot of the devs are on US-West time, you might have better luck trying again in a few hours.
[14:19] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) Quit (Ping timeout: 480 seconds)
[14:19] <sivanov> Thank you :)
[14:20] * capri (~capri@212.218.127.222) Quit (Ping timeout: 480 seconds)
[14:20] * JohansGlock (~quassel@kantoor.transip.nl) Quit (Ping timeout: 480 seconds)
[14:21] * rekby (~Adium@2.93.58.253) has joined #ceph
[14:21] * Yen (~Yen@ip-81-11-208-196.dsl.scarlet.be) Quit (Ping timeout: 480 seconds)
[14:23] * Yen (~Yen@ip-83-134-66-117.dsl.scarlet.be) has joined #ceph
[14:26] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[14:26] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[14:39] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) has joined #ceph
[14:40] * fre (~fre@ip-188-118-13-113.reverse.destiny.be) has joined #ceph
[14:40] * fre (~fre@ip-188-118-13-113.reverse.destiny.be) Quit ()
[14:53] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) Quit (Quit: wogri_risc)
[15:05] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) Quit (Remote host closed the connection)
[15:05] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) has joined #ceph
[15:13] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) Quit (Ping timeout: 480 seconds)
[15:16] * xiaoxi (~xiaoxi@shzdmzpr01-ext.sh.intel.com) has joined #ceph
[15:17] * rustam (~rustam@94.15.91.30) has joined #ceph
[15:18] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[15:22] * rustam (~rustam@94.15.91.30) has joined #ceph
[15:23] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[15:33] * portante (~user@75-150-32-73-Oregon.hfc.comcastbusiness.net) has joined #ceph
[15:34] * verwilst (~verwilst@110.138-78-194.adsl-static.isp.belgacom.be) has joined #ceph
[15:35] * aliguori (~anthony@32.97.110.51) has joined #ceph
[15:41] * Vjarjadian (~IceChat77@90.214.208.5) has joined #ceph
[15:46] * slang1 (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Remote host closed the connection)
[15:51] * loicd (~loic@173-12-167-177-oregon.hfc.comcastbusiness.net) has joined #ceph
[15:52] * yehudasa_ (~yehudasa@2602:306:330b:1410:695d:9bd8:d757:d68a) has joined #ceph
[15:56] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) has joined #ceph
[15:58] * PerlStalker (~PerlStalk@72.166.192.70) has joined #ceph
[16:00] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) has joined #ceph
[16:00] * sleinen1 (~Adium@2001:620:0:25:3dcb:1710:a54d:19fb) Quit (Quit: Leaving.)
[16:00] * sleinen (~Adium@130.59.94.207) has joined #ceph
[16:07] <mattch> sivanov: I have a bit, but I'm no expert - you can always ask the question and see if it's a simple one :)
[16:07] <mattch> (or try #opennebula irc)
[16:08] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) Quit (Ping timeout: 480 seconds)
[16:08] * sleinen (~Adium@130.59.94.207) Quit (Ping timeout: 480 seconds)
[16:12] * tziOm (~bjornar@194.19.106.242) has joined #ceph
[16:12] * drokita (~drokita@199.255.228.128) has joined #ceph
[16:13] * slang1 (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[16:16] * loicd (~loic@173-12-167-177-oregon.hfc.comcastbusiness.net) Quit (Quit: Leaving.)
[16:17] * sleinen (~Adium@user-28-22.vpn.switch.ch) has joined #ceph
[16:23] * jskinner (~jskinner@69.170.148.179) has joined #ceph
[16:24] * dosaboy (~dosaboy@ip-64-134-229-164.public.wayport.net) has joined #ceph
[16:27] * norbi (~nonline@buerogw01.ispgateway.de) Quit (Quit: Miranda IM! Smaller, Faster, Easier. http://miranda-im.org)
[16:28] * loicd (~loic@67.23.204.150) has joined #ceph
[16:34] * sleinen (~Adium@user-28-22.vpn.switch.ch) Quit (Quit: Leaving.)
[16:39] * sleinen (~Adium@user-28-14.vpn.switch.ch) has joined #ceph
[16:42] * tziOm (~bjornar@194.19.106.242) Quit (Remote host closed the connection)
[16:51] * loicd1 (~loic@67.23.204.150) has joined #ceph
[16:51] * loicd (~loic@67.23.204.150) Quit (Read error: Connection reset by peer)
[16:51] * mattm__ (~matt@108-95-148-196.lightspeed.austtx.sbcglobal.net) has joined #ceph
[16:54] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) has joined #ceph
[17:02] <jerker> trond: striping is in format 1 too, isn't it? it feels like a fundamental feature of the ceph storage architecture.
[17:02] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) Quit (Ping timeout: 480 seconds)
[17:04] * vata (~vata@2607:fad8:4:6:6cfa:ef3e:586c:fa19) has joined #ceph
[17:19] * scuttlemonkey (~scuttlemo@173-12-167-177-oregon.hfc.comcastbusiness.net) has joined #ceph
[17:19] * ChanServ sets mode +o scuttlemonkey
[17:20] * verwilst (~verwilst@110.138-78-194.adsl-static.isp.belgacom.be) Quit (Quit: Ex-Chat)
[17:21] <jefferai> Ah, awesome, I'm glad to see someone is testing the OSD on ZFS
[17:21] <jefferai> I hope the problems get sorted out quickly, as ZFS has become my favorite Linux filesystem, by far
[17:21] <jefferai> and my XFS OSDs constantly screw themselves
[17:21] <jefferai> like, whenever I reboot (gracefully!) I lose 1-2 XFS filesystems
[17:23] <janos> uh wow
[17:23] <janos> my xfs has been fine
[17:26] <jefferai> yeah, not sure what kills mine
[17:26] <jefferai> but, it does
[17:27] * jgallard (~jgallard@gw-aql-129.aql.fr) Quit (Remote host closed the connection)
[17:28] * jgallard (~jgallard@gw-aql-129.aql.fr) has joined #ceph
[17:28] * eschnou (~eschnou@85.234.217.115.static.edpnet.net) Quit (Remote host closed the connection)
[17:32] * Cube (~Cube@cpe-76-172-67-97.socal.res.rr.com) has joined #ceph
[17:32] <sstan> data striping is a pool attribute, it has nothing to do with clients?
[17:36] * jgallard (~jgallard@gw-aql-129.aql.fr) Quit (Remote host closed the connection)
[17:37] * jgallard (~jgallard@gw-aql-129.aql.fr) has joined #ceph
[17:38] <sstan> http://ceph.com/docs/master/architecture/#how-ceph-clients-stripe-data
[17:42] * kfox1111 (bob@leary.csoft.net) has joined #ceph
[17:48] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) has joined #ceph
[17:51] * jgallard (~jgallard@gw-aql-129.aql.fr) Quit (Quit: Leaving)
[17:52] * dosaboy (~dosaboy@ip-64-134-229-164.public.wayport.net) Quit (Quit: Lost terminal)
[17:53] <Dieter_be> any plans of using erasure codes in ceph?
[17:54] <kfox1111> trying to run mkcephfs -a -c /etc/ceph/ceph.conf. It's failing... cat /tmp/mkcephfs.icJAHFg9mc/key.*: No such file or directory.
[17:54] * xiaoxi (~xiaoxi@shzdmzpr01-ext.sh.intel.com) Quit (Remote host closed the connection)
[17:54] <kfox1111> Any idea why?
[17:54] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:55] * scuttlemonkey (~scuttlemo@173-12-167-177-oregon.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[17:56] * leseb (~Adium@ip-64-134-128-29.public.wayport.net) Quit (Ping timeout: 480 seconds)
[17:57] <kfox1111> looks kind of like http://www.mail-archive.com/ceph-users@lists.ceph.com/msg00428.html
[17:57] * jamespage (~jamespage@culvain.gromper.net) Quit (Quit: Coyote finally caught me)
[17:58] * tnt (~tnt@212-166-48-236.win.be) Quit (Ping timeout: 480 seconds)
[17:58] * jamespage (~jamespage@culvain.gromper.net) has joined #ceph
[17:58] * sivanov (~sivanov@gw2.maxtelecom.bg) Quit (Ping timeout: 480 seconds)
[17:59] * rustam (~rustam@94.15.91.30) has joined #ceph
[17:59] <sstan> did you create a keyring?
[18:00] <kfox1111> I'm following http://ceph.com/docs/master/start/quick-start/
[18:00] <kfox1111> Do I need to create a keyring first?
[18:01] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[18:02] <sstan> you're using -a .. on what host did you receive that message?
[18:02] * rustam (~rustam@94.15.91.30) has joined #ceph
[18:02] <kfox1111> I'm trying to do a single-node standalone install for testing.
[18:02] <sstan> you don't need -a , then.
[18:03] <sstan> How many OSDs?
[18:03] <kfox1111> Oh. ok. 4.
[18:03] <sstan> -a uses SSH to configure every node, so since you have only one, it's probably not necessary :/
[18:04] <kfox1111> if I leave out the -a, it complains it's not there.
[18:04] <sstan> what does it say?
[18:04] * BillK (~BillK@58-7-209-64.dyn.iinet.net.au) Quit (Read error: Operation timed out)
[18:04] <imjustmatthew_> Does anyone know if it's fairly safe to switch to gitbuilder packages of ceph and then back to the main release after the next update?
[18:04] <kfox1111> usage: /usr/sbin/mkcephfs -a -c ceph.conf [-k adminkeyring] [--mkfs]
[18:04] <kfox1111> ...
[18:05] <kfox1111> hmm... the man page implies it is optional though, like you say.
[18:06] <kfox1111> do I need a --mkfs flag too?
[18:07] <sstan> you don't "need" it. But you need to create and mount filesystems that belong to the OSDs
[18:08] * tnt (~tnt@91.177.247.88) has joined #ceph
[18:08] <kfox1111> so: mkcephfs -c /etc/ceph/ceph.conf -k ceph.keyring --mkfs
[18:08] <kfox1111> You must specify an action. See man page.
[18:08] <kfox1111> then the usage again...
[18:08] <sstan> if everything fails, use ceph-authtool to create a key
[18:08] <sstan> I think that ..
[18:09] <sstan> you must switch parameters :
[18:09] <sstan> mkcephfs -c /etc/ceph/ceph.conf --mkfs -k ceph.keyring
[18:09] <kfox1111> same thing.
[18:10] * rustam (~rustam@94.15.91.30) Quit (Ping timeout: 480 seconds)
[18:11] <sstan> well ... I never used mkcephfs. Actually, you can create monitors with one command, and osds with yet another command. And that's all you need really
[18:11] * rekby (~Adium@2.93.58.253) Quit (Quit: Leaving.)
[18:11] <kfox1111> hmm.. ok.
[18:12] <sstan> when did you start reading about ceph?
[18:12] <kfox1111> Started playing with it the last couple of days.
[18:13] <kfox1111> been keeping an eye on it off and on for a few years.
[18:13] <kfox1111> mostly been using Lustre.
[18:13] * portante (~user@75-150-32-73-Oregon.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[18:13] <sstan> what does your ceph.conf look like?
[18:13] <sstan> could you link to a pastebin
[18:14] <sstan> ?
[18:14] <kfox1111> sure. just a sec...
[18:14] <kfox1111> can a mon addr be a hostname or must it be an IP?
[18:14] <sstan> I'm looking at this page: http://ceph.com/docs/master/start/quick-start/ ... and the only reason why your command might fail seems to be ceph.conf
[18:14] <sstan> IP
[18:15] <kfox1111> that may be related. just a sec...
[18:15] <mattch> kfox1111: Must be an ip
[18:16] * ScOut3R_ (~ScOut3R@212.96.47.215) Quit (Ping timeout: 480 seconds)
[18:16] <kfox1111> http://pastebin.com/NWNWPngp
[18:18] <kfox1111> I tried with and without the cephx auth stuff.
[18:20] <mattch> kfox1111: I assume you've done mkdir for /srv/ceph/*
[18:20] <kfox1111> yes.
[18:21] <kfox1111> here's some full output, if that helps: http://pastebin.com/GsVVUkud
[18:23] <sstan> try that without the -k
[18:23] <mattch> Any particular reason you're running 3 mons on the same host? I don't think it's related, but it's certainly not recommended practice
[18:24] <sstan> there must be at least 3 mons in a cluster ..
[18:24] <kfox1111> same effect without the keyring.
[18:24] <kfox1111> mattch: just trying to build a standalone box that I can poke and prod that might work similarly to a production box. With three, I can kill one and things should still work.
[18:24] <sstan> get this file if possible /tmp/mkcephfs.gA4D6w9iUt/keyring.admin, copy it to your current working directory, and use -k with it
[18:25] <kfox1111> its nuking the directory when it errors. :/
[18:25] <sstan> argh
[18:25] <kfox1111> I can remove the extra monitors if you think its an issue.
[18:25] <mattch> kfox1111: Again, probably not related, but it's not advisable to set the mon ip to a localhost ip
[18:26] <sstan> try control+z before it stop lol
[18:26] <kfox1111> yeah. if I ever tried to scale it out, it would probably explode. :)
[18:26] <mattch> kfox1111: Neither /should/ make a difference, but there's no harm in using 1 mon with a public ip and see if it fixes mkcephfs
[18:26] <kfox1111> it's really quick. I'll try to find the line in the script and put a pause in..
[18:26] <kfox1111> ok. commenting out b and c.
[18:27] <kfox1111> same thing.
[18:27] <sstan> it creates keyring.ceph ... but looking for key.*
[18:27] <sstan> weird ...
[18:27] <sstan> try modifying that
[18:27] <mattch> oh, also , I think you mean 'host=ceph0' not hostname=ceph0'
[18:27] <sstan> hmm true XD didn't see that
[18:28] <sstan> kfox1111: that might help
[18:28] <kfox1111> http://www.mail-archive.com/ceph-users@lists.ceph.com/msg00428.html
[18:29] <kfox1111> that fixed it. :/
[18:29] <kfox1111> thanks sstan. :)
[18:30] <mattch> kfox1111: If you look at the last message in that email thread it mentions the host != hostname problem
[18:30] <kfox1111> I misread one of the emails in that thread and changed host -> hostname instead of the other way around.
[18:30] <sstan> did you try mattch's suggestion also?
[18:30] <kfox1111> yeah. I can try and reenable the extra mons now.
[18:30] <mattch> kfox1111: looking at the script it creates a key.* for each 'host=' line in ceph.conf - since you had none, it wasn't creating any :)
[18:31] <kfox1111> that makes sense. :)
[18:31] <sstan> after you are done you can delete host = ... some lines in the conf file are used only by mkcephfs
[18:31] <mattch> got to run - hope the rest of your testing goes ok!
[18:31] <sstan> ttyl
[18:31] <kfox1111> mattch: thanks. :)
[18:32] <sstan> kfox : what are you going to use ceph for ? cephFS , RBD ?
[18:32] * loicd1 (~loic@67.23.204.150) Quit (Read error: Connection reset by peer)
[18:32] * loicd (~loic@67.23.204.150) has joined #ceph
[18:32] <kfox1111> librados actually.
[18:32] <sstan> ah
[18:32] <sstan> what software uses that?
[18:33] <kfox1111> I'm interested in a very scalable, huge hash table basically. :)
[18:33] <kfox1111> We have a multi PB scientific archive.
[18:33] <kfox1111> I need a scalable raw metadata database for storing extended metadata.
[18:34] <kfox1111> I'm considering using librados to store the file metadata as json documents.
[18:34] <kfox1111> basically every file in our HSM will have a corresponding json document in ceph.
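
A rough sketch of what that per-file metadata store could look like from librados. The object naming (file path used directly as the object name) and the size cap are illustrative assumptions; race-safe updates would use the cmpxattr-guarded write sketched earlier rather than a blind overwrite.

    // Sketch only: one JSON document per archived file, keyed by its path.
    #include <rados/librados.hpp>
    #include <string>

    int put_metadata(librados::IoCtx& ioctx,
                     const std::string& file_path,   // used directly as the object name
                     const std::string& json_doc)
    {
      librados::bufferlist bl;
      bl.append(json_doc);
      return ioctx.write_full(file_path, bl);        // whole-object overwrite
    }

    int get_metadata(librados::IoCtx& ioctx,
                     const std::string& file_path,
                     std::string* json_doc,
                     size_t max_len = 4 << 20)        // arbitrary cap for the example
    {
      librados::bufferlist bl;
      int r = ioctx.read(file_path, bl, max_len, 0);
      if (r < 0)
        return r;
      json_doc->assign(bl.c_str(), bl.length());
      return 0;
    }
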
[18:35] <kfox1111> OSD::mkfs: couldn't mount FileStore: error -95
[18:36] <sstan> http://ceph.com/community/summer-adventures-with-ceph-building-a-b-tree/
[18:37] <sstan> can you see if you can mount /dev/loopN manually ?
[18:38] * sleinen1 (~Adium@130.59.94.207) has joined #ceph
[18:39] * alram (~alram@38.122.20.226) has joined #ceph
[18:40] <kfox1111> it's mounting. in fact, it is leaving it mounted after erroring.
[18:40] <kfox1111> the previous line is:
[18:40] <kfox1111> enable filestore_xattr_use_omap
[18:40] <kfox1111> 2013-04-17 09:39:37.247078 7f56596e6780 -1 filestore(/srv/ceph/data/osd.0) limited size xattrs -- enable filestore_xattr_use_omap
[18:41] <sstan> osd data = /srv/ceph/data/$name
[18:41] <kfox1111> errno 95 is Operation not supported on transport endpoint....
[18:41] <sstan> shouldn't it be $id
[18:41] <sstan> please check that
[18:41] <kfox1111> k.
[18:42] <kfox1111> that changed where it was mounting it. but same error.
[18:42] <kfox1111> perhaps I don't have my ext4 options correct.
[18:42] <kfox1111> I was guessing on them.
[18:42] * diegows (~diegows@190.190.2.126) Quit (Ping timeout: 480 seconds)
[18:43] <sstan> ah maybe
[18:43] <sstan> actually, $name expands to $type.$id ... so it shouldn't be a problem
[18:43] <kfox1111> ah. it needed a "filestore xattr use omap = true"
[18:44] <sstan> ok
[18:44] <kfox1111> the mkcephfs command completed ok this time. yay. :)
[18:44] <sstan> cool
[18:45] <sstan> if you can have a SSD, storing your journals on it will make ceph 10x faster
[18:45] * sleinen (~Adium@user-28-14.vpn.switch.ch) Quit (Ping timeout: 480 seconds)
[18:45] <kfox1111> I can believe it.
[18:45] <kfox1111> we have a few SSDs for our postgresql db and it really helped.
[18:46] * sleinen1 (~Adium@130.59.94.207) Quit (Ping timeout: 480 seconds)
[18:46] <sstan> I tried RAM also. It's fast
[18:47] <kfox1111> machine's chugging along now.... :)
[18:47] <sstan> great!
[18:47] <kfox1111> looks like it is syncing things up...
[18:49] * rahmu (~rahmu@83.167.43.235) Quit (Remote host closed the connection)
[18:49] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) has joined #ceph
[18:49] <sstan> yeah it starts with a 192 PGs by default, I think
[18:49] <kfox1111> [root@ceph0 ~]# ceph health
[18:49] <kfox1111> HEALTH_OK
[18:49] <sstan> ceph osd tree
[18:49] <sstan> good !
[18:50] * l0nk (~alex@83.167.43.235) Quit (Quit: Leaving.)
[18:50] <kfox1111> all looks good. :)
[18:50] <kfox1111> now I can start writing code. :)
[18:50] <kfox1111> Thanks for all the help.
[18:50] <sstan> no problem :) stay connected, this channel is really active
[18:54] <kfox1111> hmm.....
[18:54] <kfox1111> the mons are writing to disk a lot. not much data, but fairly constantly.
[18:55] <kfox1111> will they settle down eventually?
[18:57] <kfox1111> there's a fairly constant ~1-2MB/s write out.
[18:57] <sstan> even though you're not writing to RADOS?
[18:58] <kfox1111> not doing anything yet. just started the cluster.
[18:58] <sstan> data read/writes, maintenance, etc. will make the monitors work
[18:58] <sstan> see what's going on with ceph -s
[18:58] <sstan> or -w
[18:59] <kfox1111> deep-scrub ok.....
[19:00] <kfox1111> it's still cleaning out stuff perhaps?
[19:00] <sstan> yeah it's gonna stop deep-scrubbing soon
[19:00] <kfox1111> ok. just curious what it was doing. :)
[19:00] <kfox1111> It's not a large write load, but it is noticeable in a vm on my laptop.
[19:01] <sstan> haha doing all that on a VM? must be slow
[19:01] <kfox1111> actually, not too bad.
[19:01] <pioto> hi, with rbd... if i create a 40GB image... is that going to always be taking up 40GB on my cluster? or, will it only use that 40GB once i've written to all the blocks in the device?
[19:01] <kfox1111> I've been doing more and more stuff in vm's.
[19:02] <pioto> and, if the latter, how can i tell how much of a given image is *actually* being stored ? rbd info doesn't seem to make that distinction
[19:04] <kfox1111> is there a command to say if it is deep-scrubbing or not? (So I can automate things a bit?)
[19:04] <pioto> i think i saw something somewhere that made me think it's "thin provisioned", but i can't find it now
[19:04] <sstan> thin provisioned: it is!
[19:04] <pioto> oh, this. duh: http://ceph.com/docs/master/rbd/rados-rbd-cmds/#resizing-a-block-device-image
[19:04] <pioto> "RBD images are thin provisioned."
[19:04] <pioto> sweet
[19:04] <sstan> kfox: idk but developers here might, re-ask the question later
[19:05] <kfox1111> ok. thanks. :)
[19:05] <kfox1111> looks like its done though. disk io is now flat. awesome.
[19:05] <pioto> so, you could have a cluster with, say, an 8GB capacity, and create 10 1GB rbd images on it, and that'd be okay, until they actually have to write > 8GB total?
[19:05] <sstan> yes
[19:06] * portante (~user@67.23.204.150) has joined #ceph
[19:06] <sstan> when that happens, all I/O will be blocked though
[19:06] <pioto> great.
[19:06] <pioto> yeah
[19:06] <pioto> before then even, i guess
[19:06] <pioto> when you hit the full ratio
[19:06] <pioto> so, 0.95*8GB or whatever
[19:07] <pioto> so, then part 2: how can i judge how much space a given image is *actually* using?
[19:07] <sstan> hmm I don't think the "rbd" command can tell you that. However I suspect "rados" might
[19:07] <pioto> hm
[19:08] <pioto> by checking the size of, what, every single block's object?
[19:08] <sstan> for the pool I think
[19:09] <pioto> well. if you have >1 image in a pool... that isn't really the same answer
[19:09] * Vjarjadian (~IceChat77@90.214.208.5) Quit (Quit: Give a man a fish and he will eat for a day. Teach him how to fish, and he will sit in a boat and drink beer all day)
[19:09] <sstan> no but does the RBD user know how much information it has written to the block device?
[19:09] <pioto> well. yes, you can i guess ask the logical filesystems what they think
[19:10] <pioto> but that's still a slightly different question
[19:10] * loicd (~loic@67.23.204.150) Quit (Quit: Leaving.)
[19:10] <pioto> one is what they currently reference
[19:10] * sjusthm (~sam@71-83-191-116.dhcp.gldl.ca.charter.com) has joined #ceph
[19:10] <sstan> yeah you want to ask the cluster ...
[19:10] <sstan> hmm idk
[19:10] <pioto> the other is how much space the cluster is using
[19:10] <pioto> if, say, you write 2GB, then delete it
[19:10] <pioto> i bet there'd be a 2 GB discrepancy here
[19:10] <pioto> unless the cluster is made aware that you did that
[19:11] <pioto> (TRIM? )
[19:11] <sstan> It's filesystem dependent
[19:11] <pioto> well, the kernel has to tell the block device that that space is no longer needed
[19:11] <pioto> but, yes, ok
[19:11] <pioto> great
[19:11] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[19:11] <pioto> i think that's all i need to work with for now
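
One way to answer pioto's "how much is actually used" question, assuming a librbd version that exposes Image::diff_iterate (the machinery behind the rbd diff command, which may be newer than the 0.56.x discussed above), is to walk the allocated extents and sum them. The names and callback wiring below are a sketch, not a tested tool.

    // Sketch only: estimate the allocated bytes of a thin-provisioned image.
    #include <rados/librados.hpp>
    #include <rbd/librbd.hpp>
    #include <string>

    static int add_extent(uint64_t /*offset*/, size_t len, int exists, void *arg)
    {
      if (exists)                                   // count only allocated extents
        *static_cast<uint64_t*>(arg) += len;
      return 0;
    }

    int allocated_bytes(librados::IoCtx& ioctx, const std::string& image_name,
                        uint64_t* used)
    {
      librbd::RBD rbd;
      librbd::Image image;
      int r = rbd.open(ioctx, image, image_name.c_str(), NULL);
      if (r < 0)
        return r;

      uint64_t size = 0;
      image.size(&size);
      *used = 0;
      // NULL "from snapshot" diffs against an empty image, i.e. everything allocated.
      r = image.diff_iterate(NULL, 0, size, add_extent, used);
      return r;   // the Image destructor closes the image
    }
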
[19:16] * loicd (~loic@67.23.204.150) has joined #ceph
[19:18] * rekby (~Adium@2.93.58.253) has joined #ceph
[19:23] * mcclurmc_laptop (~mcclurmc@firewall.ctxuk.citrix.com) Quit (Ping timeout: 480 seconds)
[19:23] * aliguori (~anthony@32.97.110.51) Quit (Quit: Ex-Chat)
[19:25] * leseb (~Adium@67.23.204.167) has joined #ceph
[19:28] * Cube (~Cube@cpe-76-172-67-97.socal.res.rr.com) Quit (Quit: Leaving.)
[19:33] <t0rn> can anyone confirm if Xen's qemu-dm in any version supports RBD? Or do you have to use the kernel driver when using Xen?
[19:37] * gmason (~gmason@12.139.57.253) has joined #ceph
[19:38] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Remote host closed the connection)
[19:39] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[19:41] * vata (~vata@2607:fad8:4:6:6cfa:ef3e:586c:fa19) Quit (Quit: Leaving.)
[19:45] <pioto> sstan: looks like that also depends upon qemu 1.1, and setting some special options? http://ceph.com/docs/master/rbd/qemu-rbd/#enabling-discard-trim
[19:45] <pioto> but, hopefully that won't be too much of an issue, since the filesystem will just recycle space on its own often enough
[19:45] <sstan> maybe. Tell us if it works
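For reference, the linked page shows the drive being attached through a device type that passes TRIM down to librbd. A rough sketch of that style of invocation, written from memory with pool and image names as placeholders, so check it against the doc before relying on it:

    qemu -drive format=raw,file=rbd:data/myimage,id=drive1,if=none \
         -device driver=ide-hd,drive=drive1,discard_granularity=512

virtio-blk did not pass discard through at the time, hence the IDE device.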
[19:47] * dpippenger (~riven@cpe-76-166-221-185.socal.res.rr.com) Quit (Remote host closed the connection)
[19:49] * Cube (~Cube@12.248.40.138) has joined #ceph
[19:51] * leseb (~Adium@67.23.204.167) Quit (Quit: Leaving.)
[19:53] * leseb (~Adium@67.23.204.167) has joined #ceph
[19:54] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) has joined #ceph
[19:55] <mrjack> the docs http://ceph.com/docs/master/rbd/libvirt/ should be updated
[19:55] <mrjack> client.libvirt.key is used without any reference to what its content is or how it is created
[20:01] * calebamiles (~caleb@c-50-138-218-203.hsd1.vt.comcast.net) has joined #ceph
[20:04] * dosaboy (~dosaboy@67.23.204.150) has joined #ceph
[20:07] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[20:09] <mrjack> and the search returns no result when searching "client.libvirt.key"
[20:13] * rustam (~rustam@94.15.91.30) has joined #ceph
[20:15] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[20:15] * rustam (~rustam@94.15.91.30) has joined #ceph
[20:17] * themgt (~themgt@24-177-232-181.dhcp.gnvl.sc.charter.com) has joined #ceph
[20:18] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[20:18] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Quit: tryggvil)
[20:20] <dmick> mrjack: step 6 is supposed to imply that you create client.libvirt.key from the output of auth list
[20:20] <dmick> it could be clearer
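In other words, something along these lines, assuming the client was created as client.libvirt; this is a paraphrase of the doc rather than a quote:

    ceph auth get-key client.libvirt | sudo tee client.libvirt.key

or, equivalently, copying the key= field shown by ceph auth list into the file by hand.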
[20:23] * rustam (~rustam@94.15.91.30) Quit (Ping timeout: 480 seconds)
[20:24] * diegows (~diegows@200.68.116.185) has joined #ceph
[20:28] * drokita1 (~drokita@199.255.228.128) has joined #ceph
[20:30] * dpippenger (~riven@216.103.134.250) has joined #ceph
[20:32] * sagewk (~sage@2607:f298:a:607:dcf3:d317:b771:4962) Quit (Ping timeout: 480 seconds)
[20:34] * drokita (~drokita@199.255.228.128) Quit (Ping timeout: 480 seconds)
[20:37] * rekby (~Adium@2.93.58.253) Quit (Quit: Leaving.)
[20:37] * LeaChim (~LeaChim@90.215.24.238) Quit (Ping timeout: 480 seconds)
[20:38] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has left #ceph
[20:40] * leseb (~Adium@67.23.204.167) Quit (Quit: Leaving.)
[20:41] <mrjack> dmick: yeah, and is --base64 really needed?
[20:42] <mrjack> seems to work without as well?
[20:42] <dmick> to virsh?
[20:42] <mrjack> yes, when setting value to secret
[20:42] <dmick> beats me
[20:43] * sagewk (~sage@2607:f298:a:607:3044:ec9d:bf7c:3ca1) has joined #ceph
[20:43] <mrjack> dmick?
[20:43] <dmick> current doc seems to state that secret-set-value takes two args, no switches
[20:43] <dmick> so maybe that changed
[20:43] <dmick> what?
[20:44] <mrjack> hm
[20:44] <mrjack> 1.0.4 says two options are possible
[20:45] <mrjack> secret-set-value <secret> <base64>
[20:45] <dmick> yes, that's what my manpage says too. I note the complete absence of --secret and --base64
[20:45] <mrjack> takes two args directly without --
[20:46] * LeaChim (~LeaChim@176.250.202.138) has joined #ceph
[20:48] * loicd (~loic@67.23.204.150) Quit (Quit: Leaving.)
[20:50] * rustam (~rustam@94.15.91.30) has joined #ceph
[20:56] <mrjack> dmick: it seems both ways work
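The two invocations being compared look roughly like this, with the secret UUID and the key file as placeholders; virsh accepts positional arguments either bare or spelled out as named options, which is why both forms behave the same:

    virsh secret-set-value --secret 2a5b08e4-0000-0000-0000-000000000000 --base64 $(cat client.libvirt.key)
    virsh secret-set-value 2a5b08e4-0000-0000-0000-000000000000 $(cat client.libvirt.key)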
[20:59] * jskinner_ (~jskinner@69.170.148.179) has joined #ceph
[20:59] * jskinner (~jskinner@69.170.148.179) Quit (Read error: Connection reset by peer)
[21:01] <kfox1111> librados question. rados_read, how do you know how much data to read? does it behave like posix read?
[21:02] * dosaboy (~dosaboy@67.23.204.150) Quit (Ping timeout: 480 seconds)
[21:03] * ctrl (~ctrl@83.149.9.227) has joined #ceph
[21:03] * eschnou (~eschnou@182.189-201-80.adsl-dyn.isp.belgacom.be) has joined #ceph
[21:05] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[21:07] * gmason (~gmason@12.139.57.253) Quit (Quit: Computer has gone to sleep.)
[21:08] * jskinner (~jskinner@69.170.148.179) has joined #ceph
[21:08] * jskinner_ (~jskinner@69.170.148.179) Quit (Read error: Connection reset by peer)
[21:22] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) Quit (Remote host closed the connection)
[21:26] * krebbit (5b09c9dd@ircip3.mibbit.com) has joined #ceph
[21:26] <krebbit> Does anybody know when the next bobtail release is planned?
[21:27] * eschnou (~eschnou@182.189-201-80.adsl-dyn.isp.belgacom.be) Quit (Quit: Leaving)
[21:27] * loicd (~loic@67.23.204.150) has joined #ceph
[21:27] * loicd (~loic@67.23.204.150) Quit ()
[21:29] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[21:33] <kfox1111> sounds like the next big release will be soon.
[21:40] <krebbit> yes, first weeks of May, but i hope to see another bobtail release fixing the rbd async issues
[21:41] * loicd (~loic@67.23.204.150) has joined #ceph
[21:44] * portante (~user@67.23.204.150) Quit (Ping timeout: 480 seconds)
[21:45] <mrjack> krebbit: rbd async issue?
[21:46] <krebbit> sorry async flush #3737
[21:46] <krebbit> in the tracker
[21:55] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) has joined #ceph
[21:57] * loicd (~loic@67.23.204.150) Quit (Read error: Connection reset by peer)
[21:57] * loicd (~loic@67.23.204.150) has joined #ceph
[21:59] * loicd (~loic@67.23.204.150) Quit ()
[22:01] <kfox1111> any librados developers around?
[22:03] * gmason (~gmason@12.139.57.253) has joined #ceph
[22:07] * benner (~benner@193.200.124.63) has joined #ceph
[22:07] <benner> hi
[22:07] <sjusthm> kfox1111: you can use the stat call to determine size
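In the C API that pair is rados_stat() and rados_read(); rados_stat() gives the object size, and rados_read() returns the number of bytes actually read, much like pread(), so you can size the buffer first or read in a loop. The same pattern sketched with the python-rados binding, pool and object names being placeholders:

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('data')          # placeholder pool name
    try:
        size, _mtime = ioctx.stat('myobject')   # object size in bytes
        data = ioctx.read('myobject', length=size, offset=0)
        print('read %d of %d bytes' % (len(data), size))
    finally:
        ioctx.close()
        cluster.shutdown()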
[22:15] * ctrl (~ctrl@83.149.9.227) Quit ()
[22:18] <dspano> Speaking of rbd cache=true. Is there anything else required for this to be active besides setting that flag under [client] in ceph.conf? I'm trying to get it working with Openstack Cinder.
[22:19] * imjustmatthew_ (~imjustmat@c-24-127-107-51.hsd1.va.comcast.net) Quit (Remote host closed the connection)
[22:20] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[22:21] <benner> how to specify monitors in ceph.conf on client side?
[22:23] * sivanov (~sivanov@94.72.154.228) has joined #ceph
[22:23] * loicd (~loic@67.23.204.150) has joined #ceph
[22:23] <dspano> benner: You define them the same way in ceph.conf throughout the cluster.
[22:25] <dmick> dspano: you need to make sure qemu knows about the cache setting
[22:25] <dmick> for correctness
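Concretely, besides rbd cache = true under [client] in ceph.conf, the disk that libvirt hands to qemu should be set to writeback caching so the two agree; in the domain XML that is the cache attribute on the driver element. A sketch, not Cinder-specific:

    <driver name='qemu' type='raw' cache='writeback'/>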
[22:26] <mrjack> i see this nearly every day: 2013-04-17 22:25:06.979692 osd.6 [WRN] map e2237 wrongly marked me down
[22:26] <mrjack> and a lot of slow requests before
[22:29] <mrjack> for some reason, io stops and some kvm guests restart...
[22:29] <benner> dspano: is it possible to have only one [mon] section with a list of all mon servers?
[22:30] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[22:31] * loicd (~loic@67.23.204.150) Quit (Ping timeout: 480 seconds)
[22:34] <dspano> benner: You've got to list them by id.
[22:37] <dspano> First the global mon section with your global settings, then a section defined per monitor, i.e. [mon.0] host=host1 mon addr=10.10.1.100 [mon.1] host=host2 mon addr=10.10.1.101, etc.
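Laid out as it would appear in ceph.conf, using the hostnames and addresses from that example:

    [mon.0]
        host = host1
        mon addr = 10.10.1.100
    [mon.1]
        host = host2
        mon addr = 10.10.1.101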
[22:38] <dspano> benner: How many monitors do you plan on adding?
[22:38] <dspano> dmick: Thanks.
[22:40] * loicd (~loic@67.23.204.150) has joined #ceph
[22:46] <sstan> is there a userland rbd client?
[22:47] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[22:49] <gregaf> that's librbd, sstan
[22:49] <gregaf> or the experimental rbd-fuse if you want something to mount into your fs or whatever
[22:50] <dmick> or stgt with the bs_rbd backend
[22:50] <dmick> or qemu
[22:50] <dmick> (all of which build on librbd)
[22:50] <sstan> thanks
[22:50] <dmick> I've been wondering if there are userland block device drivers that could retarget the backend
[22:50] <dmick> but haven't searched
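As a small illustration of the librbd route, the python-rbd binding drives the same library entirely from userland; the pool and image names below are placeholders:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')            # placeholder pool
    try:
        image = rbd.Image(ioctx, 'myimage')      # placeholder image name
        try:
            print('image size: %d bytes' % image.size())
            chunk = image.read(0, 4096)          # read 4 KB from offset 0
            print('read %d bytes' % len(chunk))
        finally:
            image.close()
    finally:
        ioctx.close()
        cluster.shutdown()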
[22:51] * Vjarjadian (~IceChat77@90.214.208.5) has joined #ceph
[23:05] * rustam (~rustam@94.15.91.30) has joined #ceph
[23:06] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[23:18] * sivanov (~sivanov@94.72.154.228) Quit (Ping timeout: 480 seconds)
[23:23] * slang1 (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Quit: Leaving.)
[23:23] * slang1 (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[23:25] * krebbit (5b09c9dd@ircip3.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[23:25] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) Quit (Remote host closed the connection)
[23:30] * Qten (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) Quit (Ping timeout: 480 seconds)
[23:32] * loicd (~loic@67.23.204.150) Quit (Quit: Leaving.)
[23:36] * Qten (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[23:40] * loicd (~loic@67.23.204.150) has joined #ceph
[23:41] * loicd (~loic@67.23.204.150) Quit ()
[23:47] * mcclurmc_laptop (~mcclurmc@cpc10-cmbg15-2-0-cust205.5-4.cable.virginmedia.com) has joined #ceph
[23:52] * dosaboy (~dosaboy@67.23.204.150) has joined #ceph
[23:53] * dosaboy (~dosaboy@67.23.204.150) Quit ()
[23:53] * dosaboy (~dosaboy@67.23.204.150) has joined #ceph
[23:54] * loicd (~loic@67.23.204.150) has joined #ceph
[23:54] * leseb (~Adium@67.23.204.150) has joined #ceph
[23:54] * jskinner (~jskinner@69.170.148.179) Quit (Remote host closed the connection)
[23:59] * Yen (~Yen@ip-83-134-66-117.dsl.scarlet.be) Quit (Ping timeout: 480 seconds)
[23:59] * rustam (~rustam@94.15.91.30) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.