#ceph IRC Log

IRC Log for 2012-12-19

Timestamps are in GMT/BST.

[0:13] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) Quit (Quit: Leaving.)
[0:15] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[0:15] * Cube (~Cube@12.248.40.138) Quit (Ping timeout: 480 seconds)
[0:21] * Steki (~steki@85.222.179.85) Quit (Quit: Ja odoh a vi sta 'ocete...)
[0:22] * Cube (~Cube@12.248.40.138) has joined #ceph
[0:25] <ron-slc> when rbd writes even a small amount (~100KB), is it normal to see ~16MB of physical disk writes (totaled across 4 disks)? My journals are not counted in this; they are on a separate SSD.
[0:25] * benpol (~benp@garage.reed.edu) has joined #ceph
[0:41] <jmlowe1> ron-slc: That doesn't sound right to me
[0:47] * jjgalvez (~jjgalvez@12.248.40.138) Quit (Quit: Leaving.)
[0:47] * LeaChim (~LeaChim@5ad684ae.bb.sky.com) Quit (Remote host closed the connection)
[0:50] <ron-slc> Yea, that's what I would think.. I have only a single VM running, with virt-top showing Block IO. I see 20K "WRBY". Then in separate windows, I sum the total of ceph OSD disk IO using iostat -k, and come to KB_wrtn totals up around 20-30MB!!
[0:56] <ron-slc> so pretty much, just three booting VM's are a complete performance killer, as every OSD disk is running at full 150MB/s, for relatively small 4k-block write requests.
[0:58] * ircolle (~ircolle@c-67-172-132-164.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[0:59] <andreask> ron-slc: you use rbd caching?
[1:01] <ron-slc> yes, indeed. I have write-back enabled (this did make things quite a lot faster), I have even set [client] rbd_cache_max_dirty_bytes to 72MiB
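For context, the librbd cache knobs live in the [client] section of ceph.conf; a minimal sketch (option names per the rbd cache docs of that era; ron-slc's exact spelling may differ, and the values are illustrative, not recommendations):
    [client]
        rbd cache = true                 ; enable writeback caching in librbd
        rbd cache size = 33554432        ; total cache size in bytes
        rbd cache max dirty = 75497472   ; ~72 MiB of dirty data allowed before writeback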
[1:01] <andreask> and the pool size for rbd?
[1:02] <ron-slc> pool 5 'kvm' rep size 2 crush_ruleset 0 object_hash rjenkins pg_num 1408 pgp_num 1408 last_change 665 owner 0
[1:04] <andreask> and the osd filesystem is xfs?
[1:04] <ron-slc> actually using btrfs on this one
[1:04] <ron-slc> Kinda testing based on the performance advantages I saw recently in the Ceph Blog.
[1:06] <andreask> quemu-rbd?
[1:07] <ron-slc> do you mean qemu?
[1:08] <andreask> yes, I mean you don't use xen with rbd kernel module
[1:08] <ron-slc> (yes, created the image with qemu-img create)
[1:08] <ron-slc> Correct, no xen
[1:08] <andreask> sorry, never tested with btrfs
[1:09] <andreask> last time I used btrfs it was not stable enough
[1:09] <andreask> I assume you have latest kernel with btrfs?
[1:13] <ron-slc> yea, seems stable.. But I'm wondering if there's a LOT happening... I see in the root of the OSD, btrfs (ceph?), has created 3 snaps (today.)
[1:14] <ron-slc> I'm using kernel 3.5.0 (ubuntu - Quantal/12.10) so pretty recent in BTRFS terms.
[1:15] <andreask> recent in BTRFS terms is 3.7 ;-)
[1:15] <ron-slc> lol true, maybe a little bleeding edge there too.
[1:16] <ron-slc> I'm not really sure how Ceph tries to take advantage of BTRFS snaps
[1:18] <yasu`> it seems they take advantages: http://ceph.com/docs/master/rados/configuration/filestore-config-ref/#b-tree-filesystem
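The filestore reference yasu` links covers the btrfs-specific knobs; a hedged sketch of the options behind the snapshots ron-slc noticed (names from that page; the defaults shown are what the docs of the time describe):
    [osd]
        filestore btrfs snap = true          ; take btrfs snapshots for consistent commit points
        filestore btrfs clone range = true   ; use the btrfs clone-range ioctl when copying objects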
[1:20] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[1:21] <ron-slc> I think I may wipe out my cluster, split all osd's in half... 1st partition BTRFS, 2nd partition XFS, running two clusters, and doing single-running VM 4k tests...
[1:27] * jlogan1 (~Thunderbi@2600:c00:3010:1:19e4:b73c:924b:79fd) Quit (Ping timeout: 480 seconds)
[1:29] * jlogan1 (~Thunderbi@72.5.59.176) has joined #ceph
[1:29] <benpol> ron-slc: I think a ceph osd really exercises btrfs, I've been having some serious issues with my small test cluster using btrfs.
[1:30] <benpol> seems like the osd makes a lot of use of btrfs transactions. Sometimes it becomes impossible to unmount the btrfs filesystem (for a reboot).
[1:31] <benpol> Also mounting said filesystem after a forced reboot can be troublesome (mounting the btrfs filesystems can take a long long time as the btrfs-transaction kernel thread thrashes)
[1:32] <benpol> perhaps I'm missing some magic btrfs mount options
[1:32] * benpol shrugs
[1:34] <ron-slc> benpol: yea, I think I may take a quick spin with XFS..
[1:34] <ron-slc> Luckily I haven't seen the unmounting issues yet. Though I only have 2-3 VMs pointing at the cluster for now... So it is easier for me to stop operations first.
[1:35] <ron-slc> But, I have seen that OSD mounting takes quite a LONG time, with quite a lot of DISK I/O
[1:38] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Quit: Leseb)
[1:42] * benpol wanders off to check on a machine that's in that very state
[1:42] <jmlowe1> ron-slc: how about fragmentation, what state are your filesystems in?
[1:43] <ron-slc> how can this be discovered on BTRFS? I have dedicated the entire disk as OSD.
[1:44] <jmlowe1> ron-slc: nevermind, I just skimmed the history, thought you were having excessive writes and slow performance using xfs
[1:44] <jmlowe1> ron-slc: that shouldn't be a problem with btrfs
[1:46] <jmlowe1> ron-slc: you should talk to nhm, he does lots of benchmarking and had some tweaks to make the osd's play nicer with xfs
[1:46] <ron-slc> No, just HUGE amounts of back-end OSD writing for very small KVM/QEMU writes to rbd. A 20-30kb VM Block-write, causing a sum of 20-30MB disk-writes
[1:47] <ron-slc> So ceph/rados/rbd will not re-write a 4MB object-block, on a simple small update to that block, correct?
[1:48] <jmlowe1> ron-slc: as I understand it no, it uses both sparse files and offset arguments to rados calls to avoid that
[1:48] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) Quit (Quit: Leaving.)
[1:49] <andreask> ron-slc: looking at the btrfs changes since 3.5 updating to 3.7 seems to be a good choice
[1:50] <jmlowe1> I pull the latest kernels from here and run them http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.7.1-raring/
[1:50] <jmlowe1> ymmv
[1:50] <ron-slc> kk, I can pull down the kernel.org OR even better, the PPA jmlowe1 suggested.. ;)
[1:50] <andreask> especially the fsync speedups ...
[2:05] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[2:11] * dpippenger (~riven@cpe-76-166-221-185.socal.res.rr.com) has joined #ceph
[2:11] * dpippenger (~riven@cpe-76-166-221-185.socal.res.rr.com) Quit (Remote host closed the connection)
[2:12] * dpippenger (~riven@cpe-76-166-221-185.socal.res.rr.com) has joined #ceph
[2:18] * yasu` (~yasu`@soenat3.cse.ucsc.edu) Quit (Remote host closed the connection)
[2:34] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[2:59] * maxiz (~pfliu@202.108.130.138) has joined #ceph
[3:00] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[3:00] * loicd (~loic@magenta.dachary.org) has joined #ceph
[3:02] * aliguori (~anthony@cpe-70-113-5-4.austin.res.rr.com) Quit (Remote host closed the connection)
[3:30] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) Quit (Quit: Leaving.)
[3:31] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) has joined #ceph
[3:38] * Ryan_Lane (~Adium@216.38.130.167) Quit (Quit: Leaving.)
[3:40] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) Quit (Quit: Leaving.)
[3:41] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) Quit (Read error: Operation timed out)
[3:47] * jlogan1 (~Thunderbi@72.5.59.176) Quit (Ping timeout: 480 seconds)
[3:50] * fzylogic (~fzylogic@69.170.166.146) Quit (Quit: fzylogic)
[3:56] <astalsi> So. I'm getting an error when I try to start ceph ("service ceph start") - "OSD::mkfs: FileStore::mkfs failed with error -22". Caveat: I have ZFS as an underlying storage layer rather than btrfs. Any pointers on what I've done wrong? I'm fairly sure its something simple....
[4:08] <dmick> first thing is to know what -22 is
[4:09] <dmick> amazingly, even this isn't good enough to do that: find /usr/include -name errno.h | xargs grep 22
[4:10] <dmick> -name '*errno*' | xargs grep 22 is though. that's EINVAL
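A quicker way to decode an errno, assuming a Python 2 interpreter is on the box (a hypothetical one-liner, not something dmick ran):
    python -c 'import errno, os; print errno.errorcode[22], "-", os.strerror(22)'
    # prints: EINVAL - Invalid argument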
[4:11] <dmick> ISTR problems with zfs not supporting O_DIRECT
[4:11] <dmick> if that's still the case, there's probably something in the OSD log about the underlying filesystem not supporting O_DIRECT
[4:14] <dmick> https://github.com/zfsonlinux/zfs/issues/224
[4:14] <dmick> if that's the problem
[4:14] <dmick> the OSD wants O_DIRECT for the journal by default
[4:14] <dmick> but that can be disabled
[4:14] <dmick> or you can put the journal on a non-ZFS FS
[4:14] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has left #ceph
[4:15] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[4:15] * loicd (~loic@2a01:e35:2eba:db10:120b:a9ff:feb7:cce0) has joined #ceph
[4:16] <dmick> set journal dio = false
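In ceph.conf terms, that would be something like the following (a sketch based on dmick's suggestion; 'journal dio' is the option he names):
    [osd]
        journal dio = false   ; don't open the journal with O_DIRECT, which ZFS lacks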
[4:23] <astalsi> dmick: cool, thanks. didnt realize that was a errno... And sorry about the delayed response - got called away...
[4:28] <dmick> astalsi: very interested to know how that setup works for you
[4:31] <astalsi> dmick: will let you know/post in here when/if I get it working
[4:42] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[4:44] * loicd (~loic@2a01:e35:2eba:db10:120b:a9ff:feb7:cce0) Quit (Quit: Leaving.)
[4:44] * loicd (~loic@magenta.dachary.org) has joined #ceph
[5:33] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) Quit (Quit: Leaving.)
[5:36] * chutzpah (~chutz@199.21.234.7) Quit (Quit: Leaving)
[5:42] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[5:48] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) Quit (Quit: Leaving.)
[5:53] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[6:09] * houkouonchi-work (~linux@12.248.40.138) Quit (Remote host closed the connection)
[6:22] * dmick (~dmick@2607:f298:a:607:55f2:b245:82d:3a20) Quit (Quit: Leaving.)
[6:54] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[7:03] * n-other (c32fff42@ircip3.mibbit.com) has joined #ceph
[7:05] <n-other> hello! anyone alive?
[7:14] <phantomcircuit> no
[7:15] <phantomcircuit> the world ended early in #ceph
[7:15] <n-other> come on... it's still 2 days before the end )
[7:18] <n-other> I am looking for help with stuck ceph clients; they usually get stuck on stat* operations
[7:19] <n-other> and ceph -s is healthy with no warnings
[7:22] <n-other> and my second issue - mons are getting dead without a reason. here is the link http://pastebin.com/Q2jej3iZ disk space is ok
[7:22] <n-other> the most fresh one. just couple of hours ago
[7:26] * deepsa (~deepsa@122.166.161.214) Quit (Ping timeout: 480 seconds)
[7:34] * deepsa (~deepsa@122.172.11.243) has joined #ceph
[7:45] * Cube (~Cube@12.248.40.138) Quit (Read error: Operation timed out)
[7:58] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[7:58] * deepsa (~deepsa@122.172.11.243) Quit (Ping timeout: 480 seconds)
[8:04] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[8:04] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[8:05] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[8:05] * Cube1 (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[8:15] * deepsa (~deepsa@122.172.37.29) has joined #ceph
[8:19] * sleinen (~7d1@2600:3c00::2:2424) has joined #ceph
[8:20] * deepsa_ (~deepsa@122.166.162.179) has joined #ceph
[8:23] * deepsa (~deepsa@122.172.37.29) Quit (Ping timeout: 480 seconds)
[8:23] * deepsa_ is now known as deepsa
[8:36] * The_Bishop (~bishop@2001:470:50b6:0:3532:6a57:4b2e:1ad2) Quit (Read error: Operation timed out)
[8:48] <davidz> n-other: try again in the morning or send e-mail to ceph-devel@vger.kernel.org
[8:51] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) Quit (Quit: Leaving.)
[8:52] * The_Bishop (~bishop@2001:470:50b6:0:5965:e8be:8ad:d440) has joined #ceph
[8:54] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[8:55] * low (~low@188.165.111.2) has joined #ceph
[8:57] * gaveen (~gaveen@112.135.19.188) has joined #ceph
[9:01] <n-other> what's your current time?
[9:01] <n-other> ok, I guess mail will be better
[9:01] <n-other> thanks
[9:07] * sleinen (~7d1@2600:3c00::2:2424) has left #ceph
[9:07] * sleinen1 (~Adium@2001:620:0:26:9069:eed:43d0:8489) has joined #ceph
[9:08] * ScOut3R (~ScOut3R@212.96.47.215) has joined #ceph
[9:16] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:17] * verwilst (~verwilst@d5152FEFB.static.telenet.be) has joined #ceph
[9:19] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) has joined #ceph
[9:23] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[9:23] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[9:25] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit ()
[9:28] <vjarjadian> anyone online know what the ceph's developers are planning for their geo-location feature?
[9:30] <vjarjadian> geo-replication
[9:41] * vjarjadian_ (~IceChat7@5ad6d005.bb.sky.com) has joined #ceph
[9:41] * vjarjadian (~IceChat7@5ad6d005.bb.sky.com) Quit (Read error: Connection reset by peer)
[9:42] * vjarjadian_ (~IceChat7@5ad6d005.bb.sky.com) Quit (Read error: Connection reset by peer)
[9:52] * LeaChim (~LeaChim@5ad684ae.bb.sky.com) has joined #ceph
[9:54] * Leseb (~Leseb@193.172.124.196) has joined #ceph
[10:02] * stp (~stp@188-193-211-236-dynip.superkabel.de) has joined #ceph
[10:04] * sileht (~sileht@sileht.net) Quit (Quit: WeeChat 0.3.9.2)
[10:04] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[10:04] * loicd (~loic@jem75-2-82-233-234-24.fbx.proxad.net) has joined #ceph
[10:09] * roald (~roaldvanl@139-63-21-176.nodes.tno.nl) has joined #ceph
[10:23] * HauM1 (~HauM1@login.univie.ac.at) has joined #ceph
[10:31] * sileht (~sileht@sileht.net) has joined #ceph
[10:33] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[10:49] * maxiz (~pfliu@202.108.130.138) Quit (Quit: Ex-Chat)
[10:59] <joao> <n-other> and my second issue - mons are getting dead without a reason. here is the link http://pastebin.com/Q2jej3iZ disk space is ok
[10:59] <joao> http://tracker.newdream.net/issues/3495
[10:59] <joao> fixed in latest versions
[11:34] * morse (~morse@supercomputing.univpm.it) Quit (Quit: Bye, see you soon)
[11:34] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[11:49] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Read error: Connection reset by peer)
[11:50] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[11:51] * gaveen (~gaveen@112.135.19.188) Quit (Remote host closed the connection)
[11:54] * renzhi (~renzhi@116.226.37.139) Quit (Quit: Leaving)
[11:57] * IceGuest_75 (~IceChat7@buerogw01.ispgateway.de) has joined #ceph
[11:57] <IceGuest_75> #ceph
[11:57] * IceGuest_75 is now known as norbi
[11:57] <norbi> hi #ceph :)
[11:59] <norbi> i want to check the ceph health. is there a command for a fake "host down"? e.g. "ceph osd host down HOSTNAME", with the result being "data loss" or "no data loss"
[11:59] <norbi> or how can i see if i will have data loss if OSDx crashes ?
[12:07] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[12:09] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) Quit (Ping timeout: 480 seconds)
[12:10] * stp (~stp@188-193-211-236-dynip.superkabel.de) Quit (Ping timeout: 480 seconds)
[12:13] * loicd (~loic@jem75-2-82-233-234-24.fbx.proxad.net) Quit (Quit: Leaving.)
[12:19] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[12:20] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[12:21] * deepsa_ (~deepsa@115.184.48.153) has joined #ceph
[12:24] * deepsa (~deepsa@122.166.162.179) Quit (Ping timeout: 480 seconds)
[12:24] * deepsa_ is now known as deepsa
[12:47] * loicd (~loic@178.20.50.225) has joined #ceph
[12:49] <norbi> no idea anybody ?
[12:51] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[12:56] * tezra (~rolson@116.226.37.139) Quit (Quit: Ex-Chat)
[13:13] <ScOut3R> norbi: as far as i know you can answer your last question by knowing your data layout, so your CRUSHMAP
[13:14] <ScOut3R> is there a way to "force" remapping pgs? so let's say an osd falls out which causes some degraded pgs, will ceph remap them to
[13:14] <ScOut3R> *temporarily to fulfill the size requirement?
[13:15] <ScOut3R> forget it, just found the answer for my own question :)
[13:18] <norbi> hm yes thats the theory
[13:18] <norbi> ;)
[13:18] <norbi> have 3 hosts and i know the crushmap, so it has do be no problem if one host goes down
[13:18] <norbi> i have tested it, and get dataloss :)
[13:19] <norbi> of 0.0004% :D
[13:32] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Quit: Leaving)
[13:39] <ScOut3R> what do you have in your crushmap?
[13:40] <Psi-Jack> Alrighty! Speaking of CRUSH maps... :D
[13:42] <norbi> host XXXXXX with OSD 1-4 and host YYYYY with OSD 5-10 and host ZZZZ with OSD 11-16
[13:42] <norbi> ceph was "HEALTH_OK"
[13:42] <Psi-Jack> I currently have the defaultly installed crush map, and I want to insure that my rbd group has at least 2, maybe 3 replicas accross the 3 physical servers the 9 OSD's are on.
[13:43] <ScOut3R> norbi: could you dump the decompiled map on pastebin? :)
[13:45] <Psi-Jack> Here's mine: http://pastebin.ca/2294891
[13:46] <ScOut3R> just a few minutes, i'll finish my lunch ;)
[13:49] <norbi> just wait
[13:49] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[13:49] * ChanServ sets mode +o elder
[13:52] <ScOut3R> Psi-Jack: with that crushmap (and with the default pool size) i believe you will have two replicas of each object distributed on two hosts
[14:02] <Psi-Jack> Okay. How would I re-define the rbd one to do 3 replicas accross the three physical hosts?
[14:02] <norbi> ScOut3R- here the map http://pastebin.com/9Dz0rQnf
[14:03] <norbi> but i have a little bit to modify because my new map doesnt show osd.0 and osd.2, but there are my lost pgs
[14:03] <ScOut3R> Psi-Jack: i think you just need to set the relevant pool size to 3
[14:04] <ScOut3R> norbi: your crushmap does not distribute replicas across hosts but just across osds, so there's the possibility that you have all of the replicas on the same host
[14:05] <Psi-Jack> hmm?
[14:05] <ScOut3R> Psi-Jack: http://ceph.com/docs/master/rados/operations/pools/#set-the-number-of-object-replicas
[14:06] <norbi> hm... where can i set this up ?
[14:06] <ScOut3R> i think that's enough because your crushmap already distributes objects by host, not by osd, so increasing the number of replicas to 3 will put the third object on the third host
[14:07] <Psi-Jack> Ahh.. IN this, ceph osd pool set {poolname} size {num-replicas}, is there a "get" command to see what it's currently set at?
[14:09] <ScOut3R> norbi: in the crushmap; you have these lines in your rules: step choose firstn 0 type osd which means the algorithm will choose osds to distribute data and will ignore host level separation; for type you can specify for example "host" so it will distribute the replica objects across hosts to achieve the redundancy you require
[14:09] <ScOut3R> Psi-Jack: ceph osd dump | grep 'rep size'
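Putting ScOut3R's two answers together, the set/get pair looks roughly like this (pool name 'rbd' is an example):
    ceph osd pool set rbd size 3       # raise the replica count for the pool to 3
    ceph osd dump | grep 'rep size'    # show the current rep size of every pool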
[14:10] <ScOut3R> norbi: http://pastebin.com/D5KxM8Bh
[14:10] <ScOut3R> that's a ruleset from my crushmap
[14:10] <ScOut3R> you can look at the differences
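For comparison, a minimal replicated ruleset that separates replicas by host rather than by osd might look like this (a generic sketch in crushmap syntax, not ScOut3R's actual paste):
    rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host   # pick N distinct hosts, then one osd under each
        step emit
    }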
[14:10] * BManojlovic (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[14:10] <ScOut3R> can you deduce what it does? :)
[14:11] <Psi-Jack> Okay. Yeah, currently set to rep size 2, crush_ruleset 2. What I'm aiming at is getting 3 replicas, but I don't want it to try to do 2 replicas on a single host, which I suspect would happen if I have 3 replicas set and 1 storage server goes down for any length of time: it would try to start replicating the 3rd replica to one or both of the other 2 storage servers.
[14:11] <norbi> ok "step choose firstn 0 type host" fix the problem? where can i find that in docu ? :)
[14:12] <ScOut3R> norbi: yes, correct; i've found it somewhere on the wiki but i cannot find it anymore
[14:13] <ScOut3R> Psi-Jack: if one of your hosts goes down with a size of 3 then yes, if the pool still has enough space the third replica will be "remade" on the remaining hosts
[14:13] <ScOut3R> but i assume in that case you will be working hard to bring back the third host ;)
[14:13] <Psi-Jack> Exactly. Which is what I'd rather avoid having happening. ;)
[14:14] <ScOut3R> and after it goes live the third replica will be moved there (at least i think that will happen)
[14:16] <ScOut3R> hm, Psi-Jack: i think it won't create a third replica on the remaining two hosts because you only specified host replication and with 2 hosts the 3 replicas cannot be created so your pool will be degraded
[14:17] <Psi-Jack> Hmmm.
[14:17] <ScOut3R> i've just started to test a situation like that
[14:17] * roald (~roaldvanl@139-63-21-176.nodes.tno.nl) Quit (Read error: Operation timed out)
[14:18] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[14:26] * Cube1 (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[14:30] * guigouz (~guigouz@177.33.243.196) has joined #ceph
[14:33] * roald (~roaldvanl@139-63-20-145.nodes.tno.nl) has joined #ceph
[14:38] <ScOut3R> Psi-Jack: yes, if there are not enough hosts to satisfy your replication size (using your current crushmap) then your cluster will stay in degraded mode
[14:38] <Psi-Jack> I see.
[14:45] * roald (~roaldvanl@139-63-20-145.nodes.tno.nl) Quit (Ping timeout: 480 seconds)
[14:57] * sleinen1 (~Adium@2001:620:0:26:9069:eed:43d0:8489) Quit (Quit: Leaving.)
[15:00] * sleinen (~Adium@130.59.94.86) has joined #ceph
[15:00] * sleinen1 (~Adium@2001:620:0:26:2c34:e7af:b673:eb5b) has joined #ceph
[15:01] * match (~mrichar1@pcw3047.see.ed.ac.uk) has joined #ceph
[15:01] * gaveen (~gaveen@112.135.15.211) has joined #ceph
[15:08] * sleinen (~Adium@130.59.94.86) Quit (Ping timeout: 480 seconds)
[15:18] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) has joined #ceph
[15:20] * roald (~roaldvanl@87.209.150.214) has joined #ceph
[15:26] * roald_ (~roaldvanl@87.209.150.214) has joined #ceph
[15:32] <norbi> ScOut3R?
[15:32] <norbi> i have changed the crushmap now. is it normal that there are no racks, hosts, pools in the crushmap ?
[15:33] * roald (~roaldvanl@87.209.150.214) Quit (Ping timeout: 480 seconds)
[15:33] <norbi> hm ok seems not normal :D
[15:34] <ScOut3R> as far as i remember there were those items when you pasted your crushmap :)
[15:36] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[15:37] <norbi> hm sounds like a problem in my crush map :)
[15:39] <norbi> that "step choose firstn 0 type hos" works
[15:40] <norbi> i had choosen "step chooseleaf 0 type host" because i have seen it here "http://ceph.com/docs/master/rados/operations/crush-map/"
[15:41] <norbi> and "step chooseleaf firstn 0 type host" this doenst seem to work
[15:43] <ScOut3R> i assume firstn does not go well with chooseleaf :)
[15:43] <ScOut3R> so your first option is good
[15:48] * noob2 (~noob2@ext.cscinfo.com) has joined #ceph
[15:48] <noob2> jmlowe1: i found something interesting with the ssd's. fedora 17 works, detects everything and has the 3.6.x kernel
[15:49] * francois-pl (c3dc640b@ircip3.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[15:57] <norbi> hm, must test this, ScOut3R. i now have poor performance
[15:58] <norbi> and now, time to go home :) bye
[15:58] * norbi (~IceChat7@buerogw01.ispgateway.de) Quit (Quit: Relax, its only ONES and ZEROS!)
[16:06] * dasher (~dasher@pb-d-128-141-156-91.cern.ch) has joined #ceph
[16:22] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[16:24] * jlogan (~Thunderbi@2600:c00:3010:1:942b:403e:be8f:1b6c) has joined #ceph
[16:30] <janos> maaan, i "mathed" wrong and made a petabyte-sized rbd image instead of a terabyte - this is taking a while to remove
[16:30] <janos> haha
[16:30] <Psi-Jack> Ooooh.. Fun..
[16:30] <janos> yeah
[16:30] <Psi-Jack> Can I get a couple petabytes? ;D
[16:30] <janos> it doesn't seem to be interrupting any other activities though
[16:30] <janos> made another image, formatted, moved a few gb into it
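janos's slip is easy to make because 'rbd create --size' takes megabytes; a sketch of the difference (pool and image names hypothetical):
    rbd create kvm/test --size 1048576      # 1048576 MB = 1 TiB
    rbd create kvm/test --size 1073741824   # 1073741824 MB = 1 PiB, off by a factor of 1024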
[16:43] * deepsa_ (~deepsa@122.172.167.63) has joined #ceph
[16:49] * deepsa (~deepsa@115.184.48.153) Quit (Ping timeout: 480 seconds)
[16:49] * deepsa_ is now known as deepsa
[16:50] * yoshi (~yoshi@80.30.51.242) has joined #ceph
[16:56] * low (~low@188.165.111.2) Quit (Quit: bbl)
[16:59] * dasher (~dasher@pb-d-128-141-156-91.cern.ch) Quit (Quit: dasher)
[17:03] * match (~mrichar1@pcw3047.see.ed.ac.uk) Quit (Quit: Leaving.)
[17:05] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) Quit (Quit: Leaving.)
[17:06] * dasher (~dasher@pb-d-128-141-156-91.cern.ch) has joined #ceph
[17:10] * ScOut3R (~ScOut3R@212.96.47.215) Quit (Remote host closed the connection)
[17:13] * mengesb (~bmenges@servepath-gw3.servepath.com) Quit (Quit: Leaving.)
[17:15] * mengesb (~bmenges@servepath-gw3.servepath.com) has joined #ceph
[17:16] * mengesb (~bmenges@servepath-gw3.servepath.com) has left #ceph
[17:22] * gaveen (~gaveen@112.135.15.211) Quit (Remote host closed the connection)
[17:25] * dasher_ (~dasher@pb-d-128-141-156-91.cern.ch) has joined #ceph
[17:25] * dasher (~dasher@pb-d-128-141-156-91.cern.ch) Quit (Read error: Connection reset by peer)
[17:25] * dasher_ is now known as dasher
[17:27] * gaveen (~gaveen@112.135.15.211) has joined #ceph
[17:32] * verwilst (~verwilst@d5152FEFB.static.telenet.be) Quit (Quit: Ex-Chat)
[17:36] * sagelap1 (~sage@c-67-180-202-32.hsd1.ca.comcast.net) has joined #ceph
[17:42] <mikedawson> Setting up OpenStack Folsom, Cinder, and Ceph
[17:42] <mikedawson> rbd -p volumes ls
[17:43] <mikedawson> volume-b47f4a38-0b98-4074-b4b6-fd02af17625f
[17:43] <mikedawson> volume-d4afad21-1092-4932-9ec5-8c4f81d94e0b
[17:43] <mikedawson> volume-0b8f58ca-1e39-4cd5-86c0-2baaa7d727ab
[17:43] <mikedawson> root@node1:~# rbd -p volumes info volume-b47f4a38-0b98-4074-b4b6-fd02af17625f
[17:43] <mikedawson> rbd image 'volume-b47f4a38-0b98-4074-b4b6-fd02af17625f':
[17:43] <mikedawson> size 20480 MB in 5120 objects
[17:43] <mikedawson> order 22 (4096 KB objects)
[17:43] <mikedawson> block_name_prefix: rb.0.1630.238e1f29
[17:43] <mikedawson> format: 1
[17:43] <mikedawson> Should that list a parent if COW is working?
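For reference: a format 1 image, as shown above, cannot have a parent; layering needs format 2 images cloned from a protected snapshot. A hedged sketch of that workflow (pool and image names hypothetical; the flag was '--format 2' in rbd of this era):
    rbd create images/base --size 20480 --format 2
    rbd snap create images/base@gold
    rbd snap protect images/base@gold
    rbd clone images/base@gold volumes/volume-test
    rbd info volumes/volume-test   # a clone reports a "parent:" line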
[17:44] * rweeks (~rweeks@c-98-234-186-68.hsd1.ca.comcast.net) has joined #ceph
[17:55] * gucki (~smuxi@46-126-114-222.dynamic.hispeed.ch) has joined #ceph
[17:55] * sagelap (~sage@204.sub-70-199-74.myvzw.com) has joined #ceph
[17:56] * sagelap1 (~sage@c-67-180-202-32.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:03] * sagelap (~sage@204.sub-70-199-74.myvzw.com) Quit (Ping timeout: 480 seconds)
[18:07] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Quit: tryggvil)
[18:10] * gregorg_taf (~Greg@78.155.152.6) has joined #ceph
[18:10] * gregorg (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[18:12] * yoshi_ (~yoshi@80.30.51.242) has joined #ceph
[18:12] * yoshi (~yoshi@80.30.51.242) Quit (Read error: Connection reset by peer)
[18:13] * xdccFrien (~maravilla@80.30.141.8) has joined #ceph
[18:13] <xdccFrien> http://www.carolinaherrera.com/212/es/areyouonthelist?share=Vb9UR_gNSOypVWCs4rq6jTOV5yr2vy28bBN8Zn1HTj3kz4rz3EUUdzs6j6FXsjB4447F-isvxjqkXd4Qey2GHw#episodio-3
[18:14] * `gregorg` (~Greg@78.155.152.6) has joined #ceph
[18:14] * gregorg_taf (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[18:14] * xdccFrien (~maravilla@80.30.141.8) Quit (autokilled: Spambot. Mail support@oftc.net with questions (2012-12-19 17:14:23))
[18:18] <rweeks> oh look, spam
[18:19] <Psi-Jack> Where? What kind? Turkey? Beef? Chicken?
[18:19] <rweeks> Looks like portugese sausage.
[18:19] <Psi-Jack> hmmmm
[18:20] <Psi-Jack> Can you ever be sure with Spam? ;)
[18:21] <wer> just put enough velveeta on it and you will not notice...
[18:21] <Psi-Jack> lol
[18:21] <rweeks> mmmm spam and velveeta
[18:22] <wer> I had to eat that stuff growing up.... :)
[18:22] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) has joined #ceph
[18:23] <rweeks> and now you have radioactive superpowers?
[18:23] * dubna`city (~kvirc@95.73.0.183) has joined #ceph
[18:24] <wer> yeah, I have an amazing superpower to not profit and be lazy.
[18:25] <rweeks> ah
[18:25] <rweeks> so you're a systems admin then
[18:25] * rweeks grins
[18:25] <wer> lol of sorts probably :)
[18:33] * gregorg_taf (~Greg@78.155.152.6) has joined #ceph
[18:33] * `gregorg` (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[18:44] * fzylogic (~fzylogic@69.170.166.146) has joined #ceph
[18:47] * dasher (~dasher@pb-d-128-141-156-91.cern.ch) Quit (Read error: Connection reset by peer)
[18:47] * dasher (~dasher@pb-d-128-141-156-91.cern.ch) has joined #ceph
[18:49] * Kioob (~kioob@luuna.daevel.fr) Quit (Remote host closed the connection)
[18:49] <Psi-Jack> Cool. I finally got my article archive/system up and online on my site now, so I can actually start using it to write weekly articles or some-such. Such as the one I'll be doing for Ceph. :)
[18:51] * Leseb (~Leseb@193.172.124.196) Quit (Quit: Leseb)
[18:52] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) has joined #ceph
[18:52] * roald_ (~roaldvanl@87.209.150.214) Quit (Ping timeout: 480 seconds)
[18:52] * sjustlaptop (~sam@68-119-138-53.dhcp.ahvl.nc.charter.com) has joined #ceph
[18:56] * Kioob (~kioob@luuna.daevel.fr) has joined #ceph
[18:57] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[18:58] * loicd (~loic@178.20.50.225) Quit (Read error: Operation timed out)
[18:59] * sleinen1 (~Adium@2001:620:0:26:2c34:e7af:b673:eb5b) Quit (Quit: Leaving.)
[18:59] * sleinen (~Adium@130.59.94.86) has joined #ceph
[19:02] <joao> rweeks, what kind of sausage is it?
[19:02] * dasher (~dasher@pb-d-128-141-156-91.cern.ch) Quit (Read error: Connection reset by peer)
[19:03] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[19:03] * dasher (~dasher@pb-d-128-141-156-91.cern.ch) has joined #ceph
[19:03] <rweeks> that link from above looked like an advertisement for some sort of boxing match in portugese
[19:05] * dasher (~dasher@pb-d-128-141-156-91.cern.ch) Quit ()
[19:06] <Psi-Jack> Hmmm.
[19:07] <Psi-Jack> When was Ceph first started, anyway?
[19:07] <noob2> i think back in like 2006 if i remember right
[19:07] * sleinen (~Adium@130.59.94.86) Quit (Ping timeout: 480 seconds)
[19:09] <rweeks> thereabouts
[19:09] <rweeks> Sage's phd dissertation was published in 2007
[19:10] <joao> rweeks, I was curious about what 'portuguese sausage' is, but wikipedia knows it all ;)
[19:11] <Psi-Jack> Hmm, interesting. I didn't realize it'd been in the works that far back. :)
[19:11] <rweeks> joao: LinguiƧa is the only one I'm familiar with personally
[19:11] <rweeks> it's pretty popular in california
[19:13] <joao> I had no idea it was even known
[19:13] <rweeks> well
[19:13] <rweeks> there are/were lots of portugese in california
[19:13] <rweeks> particularly in the fishing towns
[19:14] <joao> oh, yeah, that makes sense
[19:14] <joao> although afaik the biggest communities are on the east coast
[19:14] <rweeks> probably
[19:16] <rweeks> but yeah, in San Francisco you look at the names on restaurants in Fisherman's Wharf you see names like Silva
[19:17] <rweeks> mostly portugese and sicilian names
[19:19] <sstan> quick question : if the number of replicas is 2 and one OSD goes down, how does the cluster know whether it should wait for it to come back or replicate the data to some other osd?
[19:20] * loicd (~loic@magenta.dachary.org) has joined #ceph
[19:20] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[19:23] <iggy> sstan: it waits for a certain timeout for it to come back and then starts rebuilding
[19:24] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[19:24] <sstan> iggy: thanks : ) I'll try to find that option when reading the documentation
[19:24] <sstan> how much time does one typically need to master Ceph?
[19:25] <iggy> depends what you mean by master
[19:25] <wer> sstan: you can also set noout for an osd I believe.
[19:26] <sstan> ok I'll look into that too
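Concretely, the two mechanisms iggy and wer mention look roughly like this (a sketch; 300 seconds is the usual default for the interval):
    ceph osd set noout     # stop down osds from being marked out, so no re-replication starts
    ceph osd unset noout   # restore normal behaviour
and in ceph.conf:
    [mon]
        mon osd down out interval = 300   ; seconds to wait before marking a down osd out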
[19:27] <sstan> I don't know how vast Ceph really is either ..
[19:27] * Leseb_ (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[19:27] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[19:27] * Leseb_ is now known as Leseb
[19:28] * `gregorg` (~Greg@78.155.152.6) has joined #ceph
[19:28] <iggy> it's a tough question to answer even
[19:28] <wer> sstan: bazillions.....
[19:28] <iggy> you can have a simple 2 node setup all the way to hundreds or 1000s of nodes
[19:28] * gregorg_taf (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[19:29] <sstan> but is there a difference between 100 nodes and n*100 ....
[19:29] <rweeks> there are production Ceph installs today that are over 3PB, so it definitely scales
[19:29] <iggy> heh... it depends
[19:30] <sstan> I guess that what's difficult is to integrate Ceph in a non-perfect network (many switches, vlans, different bandwidths, peak hours, etc.)
[19:30] <iggy> you will likely have to scale mons and mdses (if using cephfs) depending on usage scenarios
[19:30] <rweeks> sstan: it's like any storage network
[19:31] <rweeks> you'll get better and more consistent performance from a network that is storage only, than you will a mixed network
[19:31] <rweeks> this is true of any storage protocol
[19:36] * stass (stas@ssh.deglitch.com) Quit (Read error: Connection reset by peer)
[19:39] * gregorg_taf (~Greg@78.155.152.6) has joined #ceph
[19:39] * `gregorg` (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[19:42] * roald (~roaldvanl@87.209.150.214) has joined #ceph
[19:43] * stass (stas@ssh.deglitch.com) has joined #ceph
[19:43] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[19:43] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[19:47] * `gregorg` (~Greg@78.155.152.6) has joined #ceph
[19:47] * gregorg_taf (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[19:51] * dubna`city (~kvirc@95.73.0.183) Quit (Quit: KVIrc 4.0.4 Insomnia http://www.kvirc.net/)
[19:54] <yehudasa> jamespage: I can't reproduce the second issue, the listing containers returns empty result. Do you have more info? e.g., rgw logs
[19:55] * gregorg_taf (~Greg@78.155.152.6) has joined #ceph
[19:55] * yasu` (~yasu`@dhcp-59-227.cse.ucsc.edu) has joined #ceph
[19:55] * `gregorg` (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[19:57] * stass (stas@ssh.deglitch.com) Quit (Read error: Connection reset by peer)
[20:00] * stass (stas@ssh.deglitch.com) has joined #ceph
[20:02] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[20:04] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has left #ceph
[20:05] * `gregorg` (~Greg@78.155.152.6) has joined #ceph
[20:05] * gregorg_taf (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[20:09] * renzhi (~xp@114.86.28.219) has joined #ceph
[20:14] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) Quit (Remote host closed the connection)
[20:15] * gregorg_taf (~Greg@78.155.152.6) has joined #ceph
[20:15] * `gregorg` (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[20:17] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[20:17] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has left #ceph
[20:19] * BManojlovic (~steki@85.222.183.165) has joined #ceph
[20:20] * stxShadow (~Jens@ip-178-201-147-146.unitymediagroup.de) has joined #ceph
[20:24] * glowell1 (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) has joined #ceph
[20:24] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[20:24] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) has joined #ceph
[20:24] * glowell1 (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[20:27] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[20:27] * loicd (~loic@magenta.dachary.org) has joined #ceph
[20:30] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) has joined #ceph
[20:30] * gregorg_taf (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[20:30] * gregorg_taf (~Greg@78.155.152.6) has joined #ceph
[20:34] * `gregorg` (~Greg@78.155.152.6) has joined #ceph
[20:34] * gregorg_taf (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[20:37] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[20:37] * loicd (~loic@magenta.dachary.org) has joined #ceph
[20:44] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) Quit (Remote host closed the connection)
[20:45] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) has joined #ceph
[20:48] * gregorg (~Greg@78.155.152.6) has joined #ceph
[20:48] * `gregorg` (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[20:50] * gregorg_taf (~Greg@78.155.152.6) has joined #ceph
[20:50] * gregorg (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[20:57] * sleinen (~Adium@user-28-10.vpn.switch.ch) has joined #ceph
[20:59] * `gregorg` (~Greg@78.155.152.6) has joined #ceph
[20:59] * gregorg_taf (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[21:00] * ircolle (~ircolle@c-67-172-132-164.hsd1.co.comcast.net) has joined #ceph
[21:04] * gregorg_taf (~Greg@78.155.152.6) has joined #ceph
[21:04] * `gregorg` (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[21:10] * `gregorg` (~Greg@78.155.152.6) has joined #ceph
[21:10] * gregorg_taf (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[21:14] * gregorg_taf (~Greg@78.155.152.6) has joined #ceph
[21:14] * `gregorg` (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[21:15] * sleinen1 (~Adium@2001:620:0:26:c17a:477a:5e87:e49e) has joined #ceph
[21:16] * gaveen (~gaveen@112.135.15.211) Quit (Remote host closed the connection)
[21:16] * sleinen (~Adium@user-28-10.vpn.switch.ch) Quit (Read error: Connection reset by peer)
[21:16] <mikedawson> I have an issue with OpenStack Nova/Cinder and RBD. Can anyone point me to a way to test that Cinder is set up properly to authenticate against Ceph?
[21:16] * benpol (~benp@garage.reed.edu) has left #ceph
[21:19] * sleinen1 (~Adium@2001:620:0:26:c17a:477a:5e87:e49e) Quit ()
[21:21] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[21:23] * `gregorg` (~Greg@78.155.152.6) has joined #ceph
[21:23] * gregorg_taf (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[21:24] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[21:33] * gregorg (~Greg@78.155.152.6) has joined #ceph
[21:33] * `gregorg` (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[21:38] * sleinen (~Adium@217-162-132-182.dynamic.hispeed.ch) has joined #ceph
[21:40] * sleinen1 (~Adium@2001:620:0:25:101e:3469:5904:45a3) has joined #ceph
[21:42] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[21:43] * loicd (~loic@magenta.dachary.org) has joined #ceph
[21:43] * sjustlaptop (~sam@68-119-138-53.dhcp.ahvl.nc.charter.com) Quit (Ping timeout: 480 seconds)
[21:43] * gregorg_taf (~Greg@78.155.152.6) has joined #ceph
[21:43] * gregorg (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[21:46] * sleinen (~Adium@217-162-132-182.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[21:48] * `gregorg` (~Greg@78.155.152.6) has joined #ceph
[21:48] * gregorg_taf (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[21:50] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) Quit (Remote host closed the connection)
[21:52] <jamespage> yehudasa, I can get some
[21:52] <wer> I can't seem to move an osd from one rack to another....
[21:52] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) Quit (Quit: Leaving.)
[21:52] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) Quit (Read error: Operation timed out)
[21:53] * stxShadow1 (~Jens@jump.filoo.de) has joined #ceph
[21:54] <janos> wer - you have to remove the rack screws ;)
[21:54] * `gregorg` (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[21:54] * janos ducks
[21:54] <wer> ceph osd crush create-or-move 0 1.0 root=default rack=c03-03-79 host=host2 is keeping the osd in unknownrack2
[21:54] * gregorg (~Greg@78.155.152.6) has joined #ceph
[21:54] <wer> hush
[21:54] <wer> :)
[21:55] <janos> i haven't tried it that way yet. i've exported the crushmap, decompiled, altered it, then added the new crushmap - successfully
[21:55] <janos> but my way was a bit heavy-handed
[21:56] <wer> I'll give that a shot.... in creating the crush I used 'ceph osd crush set 0 osd.0 1.0 root=default host=host2 rack=unknownrack2' and now I am just trying to clean it up with actual rack info....
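The heavier-handed route janos describes is the standard crushtool round trip (file names are examples):
    ceph osd getcrushmap -o crush.bin     # export the compiled map
    crushtool -d crush.bin -o crush.txt   # decompile to editable text
    vi crush.txt                          # move the osd under the right rack/host
    crushtool -c crush.txt -o crush.new   # recompile
    ceph osd setcrushmap -i crush.new     # inject the edited map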
[21:56] <yehudasa> jamespage: thanks
[21:57] * stxShadow (~Jens@ip-178-201-147-146.unitymediagroup.de) Quit (Read error: Operation timed out)
[21:57] <renzhi> an osd is running, but it is reported as down. The osd log shows something like this, what does it mean?
[21:57] <renzhi> 2012-12-20 04:54:48.239658 7fba70c7a700 0 -- 10.1.0.13:6800/12192 send_keepalive con 0x7efc6420, no pipe.
[21:57] <renzhi> 2012-12-20 04:54:58.239827 7fba70c7a700 0 -- 10.1.0.13:6800/12192 send_keepalive con 0x7efc6420, no pipe.
[21:57] <renzhi> 2012-12-20 04:55:08.239931 7fba70c7a700 0 -- 10.1.0.13:6800/12192 send_keepalive con 0x7efc6420, no pipe.
[21:59] <jamespage> yehudasa, http://paste.ubuntu.com/1450821/
[22:00] <jamespage> that is log for 'swift stat test, swift list, swift list test'
[22:00] * gregorg_taf (~Greg@78.155.152.6) has joined #ceph
[22:00] <jamespage> (ignore the decode warnings - I've not transferred the keystone signing CA in yet)
[22:00] * gregorg (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[22:00] * sjustlaptop (~sam@mf30536d0.tmodns.net) has joined #ceph
[22:04] <jamespage> mikedawson, what do you see in the cinder logs?
[22:09] * gregorg (~Greg@78.155.152.6) has joined #ceph
[22:09] * gregorg_taf (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[22:10] <mikedawson> tail /var/log/cinder/cinder-volume.log
[22:10] <mikedawson> 2012-12-19 16:09:23 22645 DEBUG cinder.manager [-] Running periodic task VolumeManager._publish_service_capabilities periodic_tasks /usr/lib/python2.7/dist-packages/cinder/manager.py:164
[22:10] <mikedawson> 2012-12-19 16:09:23 22645 DEBUG cinder.manager [-] Running periodic task VolumeManager._report_driver_status periodic_tasks /usr/lib/python2.7/dist-packages/cinder/manager.py:164
[22:10] <mikedawson> that just repeats, nothing else
[22:12] <jamespage> mikedawson, OK _ lemme just check on how I set it up
[22:13] <jamespage> mikedawson, are you using cephx?
[22:13] <yehudasa> jamespage: thanks, I'll look at it now
[22:14] <mikedawson> Yes, I'm using. Cinder and Glance are configured to be backed by RBD.
[22:14] * gregorg (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[22:14] * gregorg (~Greg@78.155.152.6) has joined #ceph
[22:14] <mikedawson> Glance seems to be working, but the Cinder integration is failing when I try to Boot from Volume
[22:15] <mikedawson> nova show lists this error "| fault | {u'message': u'libvirtError', u'code': 500, u'created': u'2012-12-19T21:09:15Z'} |"
[22:15] <jamespage> mikedawson, hmm that rings a bell
[22:15] <jamespage> mikedawson, is general volume provisioning working OK with cinder?
[22:16] <mikedawson> not sure. I can boot from an image, but I haven't tried to attach a volume
[22:16] <mikedawson> jamespage: Thanks for your help on the openvswitch-datapath-dkms bug earlier
[22:17] * jamespage makes the connection
[22:17] * gregorg (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[22:17] <jamespage> ah!
[22:17] * gregorg (~Greg@78.155.152.6) has joined #ceph
[22:17] <jamespage> mikedawson, no problem
[22:18] <jamespage> mikedawson, how have you setup your libvirt secrets for nova-compute? I remember that was particularly tricky
[22:18] <jamespage> (esp as I wanted to use different ceph keys between glance and nova-compute)
[22:19] <mikedawson> I tried to follow the instructions at http://ceph.com/docs/master/rbd/rbd-openstack/
[22:20] <mikedawson> I'm using different keys for Glance and Cinder as well
[22:20] <jamespage> mikedawson, sorry - I mean't cinder and nova-compute
[22:20] <jamespage> I use a different key for glance as well
[22:21] <mikedawson> I just don't know how to test Cinder token against Ceph.
[22:21] <jamespage> mikedawson, to test general cinder I would recommend just creating a volume using nova volume-create XX
[22:21] <mikedawson> in /etc/cinder/cinder.conf I have:
[22:22] <jamespage> if its creates OK then the cinder integration is working OK
[22:22] <mikedawson> volume_driver=cinder.volume.driver.RBDDriver
[22:22] <mikedawson> rbd_pool=volumes
[22:22] <mikedawson> rbd_user=volumes
[22:22] <mikedawson> rbd_secret_uuid=0989e4f1-181a-0e42-be59-0da77bf58f7b
[22:22] <jamespage> mikedawson, the uuid setup is the tricky bit; there are two ways of configuring it
[22:22] <jamespage> 1) the uuid must be consistent across cinder and all nova-compute nodes
[22:23] <jamespage> 2) you can override the cinder provided uuid in nova-compute by specifying rbd_secret_uuid and rbd_user in /etc/nova/nova.conf
[22:24] <mikedawson> 1) really? I thought it became unique to each nova-compute node
[22:24] <jamespage> the uuid you specify in cinder gets passed over in the messages sent to nova-compute
[22:24] <jamespage> yeah - thats what I thought as well - I wrote the patch to make 2) work
[22:24] <jamespage> (which is in the Ubuntu packaging for Folsom and upstream in Grizzly)
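So with option 2), each compute node's /etc/nova/nova.conf carries something like the following (a sketch; the uuid is a placeholder for the secret defined in that host's libvirt):
    rbd_user=volumes
    rbd_secret_uuid=<uuid-of-local-libvirt-secret>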
[22:25] <scuttlemonkey> now we just need an openstack patch so you don't still have to select an image when booting from volume
[22:26] <jamespage> scuttlemonkey, yeah - that does suck!
[22:26] <mikedawson> jamespage: could you point me at that patch so I can learn your method?
[22:26] <scuttlemonkey> move my metadata to cinder! :)
[22:26] <jamespage> mikedawson, its probably better if I point you at the bits of code in the charms that do this
[22:26] * sjustlaptop (~sam@mf30536d0.tmodns.net) Quit (Ping timeout: 480 seconds)
[22:27] <mikedawson> that'll work
[22:27] <scuttlemonkey> mikedawson: are you deploying ceph/openstack w/ juju?
[22:27] * gregorg_taf (~Greg@78.155.152.6) has joined #ceph
[22:27] <jamespage> mikedawson, nova-compute config - https://bazaar.launchpad.net/~charmers/charms/precise/nova-compute/trunk/view/head:/hooks/nova-compute-relations#L186
[22:28] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[22:28] * gregorg (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[22:28] * loicd (~loic@magenta.dachary.org) has joined #ceph
[22:28] <jamespage> mikedawson, cinder - https://bazaar.launchpad.net/~charmers/charms/precise/cinder/trunk/view/head:/hooks/cinder-hooks#L124
[22:28] <mikedawson> scuttlemonkey: we tried charms several months ago. didn't get it to work. had success with crowbar for a while. now we're building by hand
[22:28] <scuttlemonkey> ahh, cool
[22:29] <mikedawson> scuttlemonkey: want to get back to automation soon
[22:29] <scuttlemonkey> yeah
[22:29] <scuttlemonkey> you on your own hardware?
[22:29] <scuttlemonkey> or cloud/virtualized stuff?
[22:29] <mikedawson> our own gear
[22:29] <jamespage> mikedawson, I'd be interested to know what issues you had with the charms
[22:29] <mikedawson> jamespage: mostly MaaS issues
[22:30] <jamespage> mikedawson, maas on 12.04?
[22:30] <mikedawson> jamespage: yeah
[22:30] <jamespage> hmm
[22:31] <scuttlemonkey> jamespage: this just makes me want to follow a day in the life of one of your devops guys
[22:31] <scuttlemonkey> go from zero to MaaS->Juju->ceph->openstack
[22:31] <jamespage> mikedawson, a SRU for MaaS 1.2 is working through the system ATM; so its worth taking another look once that lands
[22:31] <scuttlemonkey> and document it for blog-fodder
[22:31] <mikedawson> jamespage: i'm willing to try again. Right now we're 12.10, Folsom with Cinder and Quantum, Ceph 0.55.1 backing Cinder and Glance
[22:32] * gregorg (~Greg@78.155.152.6) has joined #ceph
[22:32] * gregorg_taf (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[22:32] <jamespage> mikedawson, the 12.10 maas is much better - +6 months and a lot of refactoring
[22:32] <jamespage> mikedawson, that's 1.1 - 1.2 is the 'stabilization' release
[22:33] <mikedawson> jamespage: Cobbler was the biggest issue iirc
[22:33] <scuttlemonkey> mikedawson: alternately, if you give it another go please let me know. If you are amenable I'd love to get your notes/results and publish on ceph.com
[22:33] <jamespage> mikedawson, thats gone
[22:33] <scuttlemonkey> I'd be happy to write the prose or just publish as a guest blog
[22:33] <mikedawson> scuttlemonkey: just need to get the Cinder + RBD integration to work
[22:34] <scuttlemonkey> gotcha
[22:34] <scuttlemonkey> I actually just finished standing up a test box doing exactly that
[22:34] <scuttlemonkey> now, it's all on one machine, not multi-headed
[22:34] <scuttlemonkey> but I finally managed to get it working
[22:35] * sjustlaptop (~sam@68-119-138-53.dhcp.ahvl.nc.charter.com) has joined #ceph
[22:38] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[22:38] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) has joined #ceph
[22:39] * gregorg_taf (~Greg@78.155.152.6) has joined #ceph
[22:39] * gregorg (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[22:44] * gregorg (~Greg@78.155.152.6) has joined #ceph
[22:44] * gregorg_taf (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[22:47] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[22:49] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[22:49] * gregorg_taf (~Greg@78.155.152.6) has joined #ceph
[22:49] * gregorg (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[22:50] <mikedawson> scuttlemonkey: system just died. sorry about that
[22:51] <scuttlemonkey> hehe, no worries
[22:51] <scuttlemonkey> wanna try again?
[22:53] <mikedawson> going to try something first...
[22:53] <scuttlemonkey> cool
[22:53] <scuttlemonkey> I'll probably move away from irc here in a bit
[22:54] <scuttlemonkey> but if you still have issues poke me here tomorrow anytime
[22:54] <scuttlemonkey> (I'm EST)
[22:54] * gregorg_taf (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[22:54] * gregorg (~Greg@78.155.152.6) has joined #ceph
[22:54] <mikedawson> thx
[22:57] <yehudasa> jamespage: really strange, seems that it shortcuts the auth checking completely, ending up being a user without a uid
[22:58] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) Quit (Quit: Leaving)
[22:58] <ron-slc> Hello all. I'm checking back in on an issue discovered earlier, in which a 4Kb write to an RBD device causes ~30MB of ceph cluster writes across 4 btrfs OSDs (dedicated disks), with dedicated journals on a separate SSD
[22:59] <ron-slc> I've updated to kernel 3.7.1; with no change to the large disk writes
[23:01] <ron-slc> I've even validated this scenario in which librbd (via qemu/kvm) can cause this; and I have also done a 'rbd map' to directly map an rbd to a local disk. And wrote a controlled 4KB dataset
[23:02] <ron-slc> My concern is 4KB being magnified into ~30MB does not scale
[23:02] * gucki (~smuxi@46-126-114-222.dynamic.hispeed.ch) Quit (Remote host closed the connection)
[23:03] * gregorg_taf (~Greg@78.155.152.6) has joined #ceph
[23:03] * gregorg (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[23:04] <jamespage> yehudasa, the endpoint in keystone is configured as "http://10.55.60.67:80/swift/v1" - I think that is correct
[23:04] <ron-slc> Is there an underlying rbd or rados command to get statistics on bytes read/written to/from the pgs or osds, to further determine the source of the amplified write data? This would help in determining whether CEPH/RADOS is the cause, or btrfs internals.
[23:07] <yehudasa> jamespage: should be. what swift/keystone related configurables do you have in your ceph.conf?
[23:07] <joshd> ron-slc: 'ceph pg dump' aggregates them per pool
[23:07] <jamespage> yehudasa, http://paste.ubuntu.com/1450976/
[23:08] <ron-slc> joshd: ahh! so it does, forgot pg dump did that! Thanks.
[23:08] <joshd> ron-slc: I'd suggest using 'rbd map' and dd with oflag=direct to the device so there's no fs or anything in the way
[23:09] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) has joined #ceph
[23:10] <ron-slc> yea, I used dd with if=/dev/zero bs=4096 of=/dev/rbd1 (directly), and iostat on the client does show only 4KB written to the disk device, whereas running iostat on the cluster nodes sums to ~30MB written
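For anyone reproducing this, the test joshd suggests looks roughly like the following (device and image names assumed; oflag=direct bypasses the client page cache):
    rbd map kvm/test                # exposes the image as /dev/rbdN
    dd if=/dev/zero of=/dev/rbd1 bs=4096 count=1 oflag=direct
    iostat -k 1                     # run on each osd host and sum the KB_wrtn columns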
[23:11] <joshd> does that include log files? if you've got any kind of debug logging on, it could balloon quickly
[23:11] * stxShadow1 (~Jens@jump.filoo.de) Quit (Read error: Connection reset by peer)
[23:11] <yehudasa> jamespage: can you add 'rgw swift use keystone = true'?
[23:12] <yehudasa> jamespage: I see the bug now
[23:12] <jamespage> yehudasa, that made no difference
[23:12] <yehudasa> jamespage: did you restart the gateway?
[23:13] <ron-slc> no, all my debugging/logging are set to defaults. Also the OSD journals are on a separate SSD device, so journal writes are not being included in these sums.
[23:13] <jamespage> yehudasa, yes
[23:13] * gregorg (~Greg@78.155.152.6) has joined #ceph
[23:13] <yehudasa> can you upload a file again, then list?
[23:13] <mikedawson> joshd: I'm struggling to get Cinder setup with RBD using the instructions at http://ceph.com/docs/master/rbd/rbd-openstack/
[23:13] * gregorg_taf (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[23:14] <mikedawson> seems like an auth issue injecting the keys into libvirt maybe
[23:14] <yehudasa> jamespage: basically what happened was that since that wasn't set it used a different swift auth (v1)
[23:15] <yehudasa> jamespage: but then there is apparently a bug there, and it shortcuts through the authorization
[23:15] <joshd> mikedawson: so creating a volume (with e.g. cinder create) works, but attaching to a vm doesn't?
[23:15] <mikedawson> joshd: is there any way to test the rbd_secret_uuid outside of the rest of Cinder to make sure it is valid?
[23:16] <yehudasa> jamespage: ended up not authenticating on one hand, but also uid wasn't set
[23:16] <mikedawson> joshd: correct
[23:16] <yehudasa> I think we need to remove that configurable (rgw swift use keystone), and use keystone whenever keystone url is set
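The configurables under discussion sit in the radosgw section of ceph.conf; a hedged sketch (option names as used in this exchange and the rgw docs of the time; the url and token are placeholders):
    [client.radosgw.gateway]
        rgw keystone url = http://10.55.60.67:5000
        rgw keystone admin token = <token>
        rgw swift use keystone = true   ; the flag yehudasa proposes removing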
[23:17] <jamespage> yehudasa, would that explain why 'swift stat' also returns no stats for the top level
[23:17] * jamespage has to fess up that he also has a swift install that he's comparing RADOS gw with
[23:17] <joshd> mikedawson: on the compute host you can look at the value set with 'virsh secret-get-value --secret <uuid>'
[23:18] <mikedawson> joshd: if I have /etc/ceph/ceph.client.volumes.keyring does my rbd_user=volumes or rbd_user=client.volumes
[23:18] <yehudasa> jamespage: you'd get all sorts of strange behavior
[23:18] * gregorg_taf (~Greg@78.155.152.6) has joined #ceph
[23:18] <joshd> mikedawson: rbd_user=volumes
[23:18] * gregorg (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[23:19] <mikedawson> root@node10:/var/log/libvirt# virsh secret-get-value --secret 54acfe4b-e8c1-df66-f0be-db5ad380d1ef
[23:19] <mikedawson> AQDgUc9QICmMAhAAmUK2JxPYWRFRALO5QeQPLA==
[23:19] <mikedawson> and /etc/cinder/cinder.com has rbd_secret_uuid=54acfe4b-e8c1-df66-f0be-db5ad380d1ef
[23:19] <joshd> and that matches the client.volumes value in 'ceph auth list'?
[23:19] <mikedawson> client.volumes
[23:20] <mikedawson> key: AQDgUc9QICmMAhAAmUK2JxPYWRFRALO5QeQPLA==
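For the libvirt side, the rbd-openstack doc's secret setup is roughly this on each compute host (a sketch; secret.xml carries the uuid and a ceph usage element):
    ceph auth get-key client.volumes > client.volumes.key
    virsh secret-define --file secret.xml
    virsh secret-set-value --secret <uuid> --base64 $(cat client.volumes.key)
    virsh secret-get-value --secret <uuid>   # should echo back the same key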
[23:21] <jamespage> yehudasa, OK _ I'm going to raise another Ubuntu bug so I don't lose track on this one
[23:21] <joshd> mikedawson: which version of libvirt do you have?
[23:21] <mikedawson> joshd: the rbd_secret_uuid is different on each compute node -> jamespage thinks that may be wrong
[23:21] <yehudasa> jamespage: did you get it to work with that configurable?
[23:21] <jamespage> yehudasa, no
[23:22] <joshd> mikedawson: yeah, if you aren't using the overrides he mentioned in the nova.conf, you'd need to use the same uuid for each compute host
[23:22] * gregorg_taf (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[23:22] * gregorg (~Greg@78.155.152.6) has joined #ceph
[23:22] <mikedawson> ii libvirt-bin 0.9.13-0ubuntu12 amd64 programs for the libvirt library
[23:22] <yehudasa> jamespage: same behavior, or different issues?
[23:22] <jamespage> yehudasa, same behaviour
[23:22] <yehudasa> jamespage: can you provide logs?
[23:23] <sstan> does Ceph always require SSH ?
[23:23] <mikedawson> joshd: I have nothing in nova.conf about rbd
[23:23] <jamespage> yehudasa, http://paste.ubuntu.com/1451017/
[23:23] <rweeks> ssh for what, sstan
[23:24] <sstan> Exchanging information, I don't know ...
[23:24] <rweeks> daemons don't use ssh to communicate, no
[23:24] <ron-slc> sstan: SSH, as shown in the config / install guide, is just to ease in the scripted creation of the cluster
[23:24] <sjustlaptop> sstan: mkcephfs wants to use ssh
[23:24] <sjustlaptop> there are other install options that don't
[23:24] <yehudasa> jamespage: did you upload new stuff? old containers were tied into a bad user
[23:25] <sstan> ah I see, to copy config files, etc. That task can be done without SSH
[23:25] <jamespage> yehudasa, ah - no I did not
[23:25] <yehudasa> jamespage: upload to a different container
[23:25] <ron-slc> sstan: manually yes, you can also opt, not to copy ssh key-files around, and just enter the password quite a few times.. ;)
[23:25] <yehudasa> .. old ones are problematic
[23:25] * noob2 (~noob2@ext.cscinfo.com) Quit (Ping timeout: 480 seconds)
[23:25] <sjustlaptop> yeah, ceph-deploy is the new approach, though it's still very new and fairly limited
[23:25] <jamespage> yehudasa, right - that worked!
[23:26] <yehudasa> jamespage: awesome
[23:26] <yehudasa> jamespage: I'll remove that useless configurable, and fix the other issue
[23:26] <sstan> Since there is a lot of manipulation for adding OSDs, etc. .. Something like Puppet is necessary
[23:26] <jamespage> yehudasa, +1 - thanks for your help
[23:26] <yehudasa> jamespage: I've already pushed a fix for the crash (3648), will merge it later
[23:26] <yehudasa> jamespage: no, thank you
[23:26] <rweeks> Yes, you can use Chef, puppet, juju or crowbar depending on your flavor
[23:27] <sstan> thanks, rweeks
[23:27] <rweeks> someone in here was going to write something for Salt, but I don't know if that happened
[23:27] <sstan> Some simple Text user interface program could be an excellent tool for managing a cluster ..
[23:28] <sstan> It might come as an addon to the ceph package, perhaps. That could be a good project ?
[23:28] <rweeks> that's what ceph-deploy should be, but it's not fully baked
[23:29] * gregorg_taf (~Greg@78.155.152.6) has joined #ceph
[23:29] * gregorg (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[23:29] <sstan> is it because it's redundant? Since some tool like Puppet can do the work
[23:30] <rweeks> yes, but not everyone uses chef or puppet
[23:30] <rweeks> particularly when spinning up test or eval clusters
[23:30] <sstan> hmm true .. then I don't see why that feature isn't backed
[23:30] <joshd> mikedawson: like james suggested, you'd want to set rbd_secret_uuid in nova.conf if it's different on each compute host
[23:31] <sjustlaptop> I think he meant baked, as in not quite done yet
[23:31] <rweeks> yes
[23:31] <rweeks> it's not done.
[23:31] <sstan> aaah sorry , I'm more fluent in French actually
[23:31] <jamespage> yehudasa, actually while I have your attention
[23:31] <jamespage> yehudasa, mod_fastcgi!
[23:32] <mikedawson> joshd:, jamespage: giving it a try
[23:32] <rweeks> sorry, I tend to assume everyone in IRC speaks english, a failing of mine
[23:32] <jamespage> yehudasa, I had an action from the last Ubuntu Developer Summit to review the patches you guys had for mod_fastcgi and see if we should pull them into the Ubuntu packages
[23:32] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[23:33] <jamespage> yehudasa, I had a look at the commits; in summary I *think* they fixup chunked transfers and 100 continue processing - is that correct?
[23:33] <jamespage> (and a few log tidy bits and pieces)
[23:33] <yehudasa> jamespage: yeah
[23:34] <yehudasa> jamespage: the problem that I see is that they currently break compatibility
[23:35] <yehudasa> jamespage: basically need to make it so that if 'Expect: 100-continue' is specified, but the backend fastcgi server does not return 100, to behave correctly
[23:36] <jamespage> yehudasa, hmm - that makes me a little less comfortable
[23:37] <jamespage> yehudasa, what's the implication of running radosgw with the vanilla mod_fastcgi in Ubuntu?
[23:37] <yehudasa> jamespage: no chunked encoding, you lose the 100-continue optimization. Need to set a configurable in ceph.conf to get it working.
[23:38] <yehudasa> cd
[23:38] <yehudasa> argh
[23:38] * gregorg (~Greg@78.155.152.6) has joined #ceph
[23:38] * gregorg_taf (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[23:38] * roald (~roaldvanl@87.209.150.214) Quit (Ping timeout: 480 seconds)
[23:39] <yehudasa> jamespage: I can take a look at it, see if it's a trivial fix to get it compatible
[23:39] <yehudasa> might be a one line change
[23:40] <jamespage> yehudasa, that would be great - thanks!
[23:40] <jamespage> yehudasa, 'rgw print continue' right?
[23:41] <yehudasa> jamespage: right, lame, sorry
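In other words, with an unpatched mod_fastcgi the workaround is one line in the gateway's ceph.conf (a sketch):
    [client.radosgw.gateway]
        rgw print continue = false   ; don't rely on 100-continue from the fastcgi module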
[23:44] <jamespage> yehudasa, anyway - I'm done for the day (getting late) - thanks for your help today!
[23:45] <yehudasa> jamespage: sure thing, I'm looking at mod_fastcgi now, will let you know if came up with anything
[23:46] * vjarjadian (~IceChat7@5ad6d005.bb.sky.com) has joined #ceph
[23:46] <vjarjadian> hi
[23:53] * gregorg_taf (~Greg@78.155.152.6) has joined #ceph
[23:53] * gregorg (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[23:57] * sleinen1 (~Adium@2001:620:0:25:101e:3469:5904:45a3) Quit (Remote host closed the connection)
[23:58] <nhm> Hey, only 2 more days until the days start getting longer again.
[23:58] * nhm looks out his window at the dark sky
[23:59] <yehudasa> nhm: and it's like 4 months until you're not getting any more snow, right?

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.