#ceph IRC Log


IRC Log for 2012-11-28

Timestamps are in GMT/BST.

[0:01] * jtang1 (~jtang@79.97.135.214) has joined #ceph
[0:01] * jtang1 (~jtang@79.97.135.214) Quit ()
[0:01] * rz (~root@ns1.waib.com) Quit (Ping timeout: 480 seconds)
[0:03] * aliguori (~anthony@32.97.110.59) Quit (Remote host closed the connection)
[0:05] * chutzpah (~chutz@199.21.234.7) has joined #ceph
[0:15] * ScOut3R (~scout3r@54004264.dsl.pool.telekom.hu) has joined #ceph
[0:15] * rweeks (~rweeks@c-98-234-186-68.hsd1.ca.comcast.net) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[0:26] * mdawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[0:28] * illuminatis (~illuminat@0001adba.user.oftc.net) Quit (Quit: WeeChat 0.3.9.2)
[0:30] * miroslav (~miroslav@adsl-67-124-149-139.dsl.pltn13.pacbell.net) Quit (Quit: Leaving.)
[0:39] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) has joined #ceph
[0:43] * aliguori (~anthony@cpe-70-123-145-75.austin.res.rr.com) has joined #ceph
[0:46] <yehudasa> nhm: picking up a fight with shuttleworth?
[0:55] * PerlStalker (~PerlStalk@72.166.192.70) Quit (Quit: rcirc on GNU Emacs 24.2.1)
[0:56] * rturk` (~rturk@ds2390.dreamservers.com) has joined #ceph
[0:57] * rturk (~rturk@ps94005.dreamhost.com) Quit (Quit: Coyote finally caught me)
[0:57] * rturk` is now known as rturk
[1:01] * dmick is now known as fake_rturk
[1:01] * fake_rturk is now known as ok_ok
[1:01] * ok_ok is now known as dmick
[1:10] * cephalobot` (~ceph@ps94005.dreamhost.com) Quit (Remote host closed the connection)
[1:10] * ScOut3R (~scout3r@54004264.dsl.pool.telekom.hu) Quit (Quit: Lost terminal)
[1:10] * cephalobot (~ceph@ds2390.dreamservers.com) has joined #ceph
[1:11] <rturk> wb cephalobot :)
[1:11] <elder> nhm is trying to get shuttleworth to notice
[1:12] <dmick> is there tweeting going on?
[1:15] * cephalobot (~ceph@ds2390.dreamservers.com) Quit (Remote host closed the connection)
[1:16] * cephalobot (~ceph@ds2390.dreamservers.com) has joined #ceph
[1:20] <Robe> fight about what?
[1:22] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Quit: Leseb)
[1:22] <yehudasa> http://interviews.slashdot.org/comments.pl?sid=3274815&cid=42095137
[1:23] <elder> Yeah I saw that. I have mod points and wanted to vote it up but I didn't want to be a slashvertiser.
[1:23] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[1:23] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[1:24] <rturk> wow
[1:24] <rturk> really throwin down
[1:25] <Robe> lol
[1:27] * rturk (~rturk@ds2390.dreamservers.com) Quit (Quit: Coyote finally caught me)
[1:28] * rturk (~rturk@ds2390.dreamservers.com) has joined #ceph
[1:31] * jlogan (~Thunderbi@72.5.59.176) Quit (Read error: Connection reset by peer)
[1:32] * jlogan (~Thunderbi@2600:c00:3010:1:742b:7b42:4526:f0f2) has joined #ceph
[1:33] <rturk> if anyone's interested in chatlog stats, you can find them here (for a while anyway) http://brawny.inktank.com/irc/
[1:35] <tnt> yeah, I'm in the most talkative for 0h - 5h :)
[1:36] <tnt> I guess this is West Coast time ?
[1:39] <joao> tnt, yeah, that should be Pacific Time
[1:40] <dmick> wow. top 7. woot. now if you analyzed by "useful content"....
[1:46] <Robe> that needs time zone support!
[1:48] <Robe> ah
[1:48] <Robe> on the bottom
[1:48] * maxiz (~pfliu@222.128.152.28) Quit (Quit: Ex-Chat)
[1:48] <Robe> I like the word with n characters section
[1:48] <Robe> some deep shit there!
[1:50] * jjgalvez (~jjgalvez@12.248.40.138) Quit (Ping timeout: 480 seconds)
[1:52] * tnt (~tnt@55.188-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[1:57] * aliguori (~anthony@cpe-70-123-145-75.austin.res.rr.com) Quit (Remote host closed the connection)
[1:58] <dmick> lol hadn't scrolled
[1:59] <dmick> does that actually mean that elder has said Yip skip! 73 times??
[1:59] <dmick> :)
[2:03] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Quit: adjohn)
[2:08] <nhm> elder: lol, yeah. I was hoping it was such obviously blatant slashvertising that it would go all the way around and be modded up as funny, but apparently I didn't make it corny enough.
[2:09] <nhm> People actually thought I was flaming shuttleworth.
[2:10] <nhm> I should have known after the people on phoronix were bitching that shuttleworth invested $1m in ceph instead of fixing unity.
[2:11] <nhm> Oh well, lesson learned. :)
[2:19] * sagelap (~sage@2607:f298:a:607:7463:bf6a:b3fa:74f8) Quit (Ping timeout: 480 seconds)
[2:19] * sagelap (~sage@215.sub-70-197-128.myvzw.com) has joined #ceph
[2:22] <tore_> poor shuttleworth...
[2:24] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[2:25] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[2:27] <tore_> good article. I wonder what impact sticking a couple SAS switches between these controllers and the JBOD would have, if any
[2:29] <tore_> not a huge fan of supermicro's sas implementations though. I've seen quite a few bricked supermicro raid cards due to crappy firmware updates. pure LSI all the way for me
[2:30] <elder> dmick, I really don't think I've said that 73 times. Nor has Sage said "elder: nice!" 184 times. It's possible gregaf asked "and have you tried restarting the client nodes and seeing if that changes anything?" 1651 times though.
[2:31] <nhm> tore_: thanks! I wanted to skip expanders/switches/etc since I'm using SATA SSDs.
[2:33] <dmick> elder: :D
[2:34] <dmick> nhm: finally read the /. Cheeky monkey. :)
[2:35] <dmick> elder: sage raised the question of "shouldn't rbd just do the modprobe before the map"
[2:35] <dmick> (or conditionally if /sys/bus/rbd doesn't exist I suppose)
[2:35] <nhm> dmick: it got modded as flamebait. :)
[2:35] <dmick> and I couldn't think of a good reason why not
[2:35] <dmick> any thoughts?
[2:36] <tore_> the sas switches look extremely interesting from a maintenance perspective. I just haven't seen any perf testing from third parties
[2:37] <elder> dmick, I think it's probably a good idea.
[2:38] <elder> It's harmless to run modprobe for an already loaded module.
[2:38] <joshd> mount.ceph already does modprobe fwiw
[2:39] <elder> The issue arose in the rbd teuthology tests.
[2:40] <elder> I ran one and it said I didn't have the rbd module loaded. It just seemed like if the rbd CLI needs the module loaded, and it has the power to do so, it should do so.
[2:41] <joshd> yeah, currently it just asks you whether you've run modprobe rbd
[2:41] * Ryan_Lane (~Adium@216.38.130.164) Quit (Quit: Leaving.)
[2:41] <dmick> elder: harmless, but takes time, perhaps more than stat
[2:41] <elder> Yes lots more.
[2:41] <elder> So you're right, check for /sys/bus/rbd first.
[2:42] <dmick> seems like cost/benefit is low there
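A minimal sketch of the check-then-modprobe idea discussed above, written as a shell wrapper rather than the actual change to the rbd CLI (the image name argument is a placeholder):

    #!/bin/sh
    # Check for the module's sysfs interface first; a stat of /sys/bus/rbd is
    # much cheaper than an unconditional modprobe, which is harmless but slower.
    if [ ! -d /sys/bus/rbd ]; then
        sudo modprobe rbd
    fi
    rbd map "$1"    # image name passed as the first argument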
[2:48] <mdawson> Anyone have any experience with combining OpenStack Nova, Quantum, Cinder, Open vSwitch and Ceph Mon/OSDs on combined compute / storage nodes? My implementation is close.
[2:50] <joshd> what are you missing?
[2:53] <mdawson> 2 NICs available. eth0 is the public side. Trying to use openvswitch on eth1 for Nova communication per instructions at https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/VLAN/2NICs/OpenStack_Folsom_Install_Guide_WebVersion.rst
[2:54] * maxiz (~pfliu@202.108.130.138) has joined #ceph
[2:54] <mdawson> Ceph was configured and working on these nodes using eth1 as the Ceph replication network.
[2:55] <mdawson> When I run the commands to set up the openvswitch bridge, I lose the ability to send traffic, but I don't know enough about openvswitch to know what to do
[2:56] <mdawson> ovs-vsctl add-br br-int
[2:56] <mdawson> ovs-vsctl add-br br-eth1
[2:56] <mdawson> ovs-vsctl add-port br-eth1 eth1
[2:56] <mdawson> that sequence kills all communication over the network previously configured on eth1
[2:57] <mdawson> ovs-vsctl del-port br-eth1 eth1
[2:57] <mdawson> that restores the traffic
[3:03] <joshd> I don't know enough about openvswitch either
[3:04] <mdawson> joshd: thanks for looking
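A likely cause of the lost traffic, though not confirmed in this log: once eth1 is enslaved to an Open vSwitch bridge, its IP address has to live on the bridge interface rather than on eth1. A rough sketch, with a placeholder address standing in for whatever eth1 previously carried:

    ovs-vsctl add-br br-eth1
    ovs-vsctl add-port br-eth1 eth1
    # Move the replication-network address from the physical NIC to the bridge
    # (192.0.2.10/24 is a placeholder for the real address on eth1).
    ip addr del 192.0.2.10/24 dev eth1
    ip addr add 192.0.2.10/24 dev br-eth1
    ip link set br-eth1 up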
[3:06] * sagelap (~sage@215.sub-70-197-128.myvzw.com) Quit (Ping timeout: 480 seconds)
[3:10] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[3:10] * loicd (~loic@magenta.dachary.org) has joined #ceph
[3:14] <elder> profiling:/srv/autobuild-ceph/gitbuilder.git/build/src/.libs/libcommon_la-lockdep.gcda:Cannot open
[3:15] <elder> Anyone know how to avoid that in a teuthology script?
[3:16] <joshd> elder: you're missing /tmp/cephtest/binary/usr/local/bin/ceph-coverage /tmp/cephtest/archive/coverage at the beginning of your command
[3:16] <elder> Oh, I'm trying to run it manually.
[3:16] <elder> I'll try that.
[3:17] * buck (~buck@bender.soe.ucsc.edu) has left #ceph
[3:17] <sage> elder: it happens if you have coverage: true in there and it uses the gcov gitbuilder
[3:18] <elder> So that's what that line does... Any reason I shouldn't delete it from my normal yaml files?
[3:19] <sage> kill it
[3:19] <elder> Great, thanks.
[3:20] <nhm> elder: shouldn't you be done working by now?
[3:20] <nhm> ;)
[3:20] <elder> I'm quitting soon. I'm pretty fried and frustrated.
[3:21] <nhm> elder: Lets get together on Monday and we can complain about our respective problems. :)
[3:22] * dmick (~dmick@2607:f298:a:607:9971:8e07:6a30:2cb8) Quit (Quit: Leaving.)
[3:22] <elder> Sounds good to me. Hopefully all my problems will be solved by then.
[3:22] <elder> But I'll fill you in on all the new ones.
[3:22] <nhm> elder: I was going to say, if all of your problems can be solved by monday you are doing much better than the rest of us. ;)
[3:23] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Quit: adjohn)
[3:38] * jlogan (~Thunderbi@2600:c00:3010:1:742b:7b42:4526:f0f2) Quit (Ping timeout: 480 seconds)
[3:45] * deepsa (~deepsa@122.172.5.197) has joined #ceph
[3:58] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[4:32] * chutzpah (~chutz@199.21.234.7) Quit (Quit: Leaving)
[4:33] * KindTwo (KindOne@h161.22.131.174.dynamic.ip.windstream.net) has joined #ceph
[4:34] * KindOne (KindOne@h4.176.130.174.dynamic.ip.windstream.net) Quit (Ping timeout: 480 seconds)
[4:34] * KindTwo is now known as KindOne
[4:35] * wubo (80f42605@ircip1.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[4:43] <elder> I think I heard something about this recently. I'm seeing this in my log: auth method 'x' error -1
[4:43] <elder> Is there something I need to change somewhere in my teuthology scripts or something?
[4:57] <sage> elder: hmm, from teuth or something you're running manually?
[4:57] <elder> Well, at this point I'm not sure...
[4:57] <elder> I've been doing a little of both. But running teuthology to get things set up, then running my script manually on that machine in interactive mode.
[4:58] <elder> I also just got a crash with a null reference in __remove_osd
[4:58] <elder> But who knows.
[4:59] <nhm> elder: hah, you didn't stop working. :P
[4:59] <elder> I have rebooted everything, and am making sure I'm using the testing kernel.
[4:59] <elder> I wish.
[4:59] <elder> Should only be five more minutes. (I said that about 5 hours ago.)
[5:02] <nhm> elder: are you trying to run with auth?
[5:02] <elder> I'm just trying to run the same old yaml files I've always run.
[5:03] <elder> I'm not *trying* to do anything with auth.
[5:03] <nhm> elder: does the generated ceph.conf file have valid auth?
[5:03] <elder> You mean in /tmp/cephtest?
[5:04] <nhm> elder: found this: http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/4187
[5:06] <nhm> elder: Maybe see if you can disable auth. I don't use it for testing at all.
[5:07] <elder> Well, I had lots of connection messages in my log. I don't know what the problem was, and just thought I'd managed to get myself in a hole somehow. So I've rebooted everything and am hoping for the best.
[5:09] <sage> you're running the rbd map/unmap commands? i bet they're not finding the ceph keyring and setting up auth properly with the kernel rbd
[5:09] <sage> which would mean a bug.
[5:09] <sage> i'd call it quits for tonight, and i'll help investigate in the morning!
[5:09] <elder> Is this the CLI that's not finding the key ring?
[5:10] <elder> I'm doing "one last try" by the way. My last one didn't work because I lacked a "sudo" before the "modprobe" command I added.
[5:10] <elder> If it works, I'm committing it.
[5:11] <elder> And clean up whatever mess it makes another day.
[5:12] <sage> heh ok
[5:12] <sage> i think the rbd tool isn't doing the key properly
[5:12] <elder> OK, it didn't work, and I just don't know why. I have a loop and it's inexplicably ending early.
[5:13] <elder> while ! times_up "${END_TIME}"; do
[5:13] <elder> map_unmap "${IMAGE_NAME}"
[5:13] <elder> ((COUNT++))
[5:13] <elder> done
[5:13] <elder> The verbose shell output shows:
[5:13] <elder> + (( COUNT++ ))
[5:13] <elder> + cleanup
[5:14] <elder> which means that it went right from the ((COUNT++)) call to exiting the script.
[5:14] <elder> But I'm going to call it a night. Tomorrow all will seem clearer. I'll wait on this script until you're online Sage, and will be looking at my bisect results from earlier when I get up.
[5:14] <elder> Good night all.
[5:21] * deepsa (~deepsa@122.172.5.197) Quit (Quit: Computer has gone to sleep.)
[5:22] <sage> elder: 'night!
[5:40] * deepsa (~deepsa@122.172.169.188) has joined #ceph
[5:50] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Ping timeout: 480 seconds)
[6:29] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[6:47] <jefferai> sage: still around?
[6:54] <jefferai> sage: ping me back when you can -- I'm hitting http://www.mail-archive.com/ceph-devel@vger.kernel.org/msg10159.html as well and would like to help debug if possible
[6:54] <jefferai> it happened when I rebooted one of my storage nodes
[6:55] <jefferai> when it came back up for some reason the ceph daemons didn't start (I think buggy ubuntu upstart dependency handling with the network interfaces)
[6:55] <jefferai> so the two remaining (I have a replication factor of 3) synced for a while, then I realized the daemons weren't up and started them
[6:55] <jefferai> after that three things happened:
[6:55] <jefferai> 1) It eventually synced
[6:55] <jefferai> 2) One of the mons (I have 5) crashed...I have the backtrace, although on another computer
[6:56] <jefferai> 3) One of my VMs dies trying to do I/O, even if I stop it and start it again -- the dmesg is filled with notifications about osd.5 timing out and resetting, and if I check the osd.5 log file, I see e.g. 2012-11-28 00:56:36.116134 7fc747730700 0 log [WRN] : slow request 19183.261618 seconds old, received at 2012-11-27 19:36:52.854435: osd_op(client.6677.1:867814 rb.0.1a1a.4904a0c5.000000000316 [write 3190784~4096] 3.8ca007a7 RETRY) currently delayed
[6:56] <jefferai> one of those a second
[6:57] <jefferai> (this is via qemu-rbd)
[6:58] <jefferai> I'm not actually sure why this would cause the VM to not be able to do I/O, unless what's blocking is a write request and thus it's blocking further cluster operations on that OSD until it finishes? Or something? I really don't know
[6:58] <jefferai> I don't think anything is *actually* waiting on it, since the VM that had the problem has since been forcefully killed (but now cannot start up)
[6:59] <jefferai> The VM, before I killed it, was showing basically a timeout in its ext4 file system, which makes sense if the backing RBD is problematic
[6:59] <jefferai> dmesg trace from my compute node (RBD client) is here: http://paste.kde.org/615938/
[7:02] <jefferai> actually, looks like I have six slow requests, but all from the same client
[7:03] <jefferai> I gather from the mailing list that restarting the OSD might clear this up, but I can wait on that until tomorrow morning if it will help with debugging
[7:03] <jefferai> once tomorrow morning rolls around I'll have to try that as I need to get this machine back up and running
[7:03] <jefferai> VM, I mean
[7:03] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has left #ceph
[7:04] <jefferai> this is with the 0.54 Ubuntu packages
[7:04] <jefferai> storage nodes are running stock Precise with XFS as the OSD backing store; VM nodes are running your 3.6.3 kernel
[7:07] <jefferai> dump of ops in flight on the OSD: https://gist.github.com/4159331
[7:08] <jefferai> anything else I can provide, let me know
[7:23] * plut0 (~cory@pool-96-236-43-69.albyny.fios.verizon.net) has left #ceph
[7:31] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) Quit (Quit: Leaving.)
[7:45] * deepsa_ (~deepsa@115.241.71.12) has joined #ceph
[7:46] * deepsa (~deepsa@122.172.169.188) Quit (Ping timeout: 480 seconds)
[7:46] * deepsa_ is now known as deepsa
[7:58] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Quit: adjohn)
[7:59] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[8:11] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[8:21] * nosebleedkt (~kostas@kotama.dataways.gr) has joined #ceph
[8:22] <nosebleedkt> good morning all
[8:22] <nosebleedkt> :D
[8:36] * loicd (~loic@90.84.144.69) has joined #ceph
[8:40] * illuminatis (~illuminat@0001adba.user.oftc.net) has joined #ceph
[8:59] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[8:59] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[9:03] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[9:03] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[9:06] * tnt (~tnt@55.188-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[9:20] * Norman (53a31f10@ircip2.mibbit.com) has joined #ceph
[9:23] * verwilst (~verwilst@d5152FEFB.static.telenet.be) has joined #ceph
[9:28] * morse (~morse@supercomputing.univpm.it) Quit (Ping timeout: 480 seconds)
[9:31] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[9:33] * tziOm (~bjornar@194.19.106.242) has joined #ceph
[9:34] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit ()
[9:34] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[9:34] * loicd (~loic@90.84.144.69) Quit (Ping timeout: 480 seconds)
[9:36] * lxo (~aoliva@83TAACWHW.tor-irc.dnsbl.oftc.net) Quit (Ping timeout: 480 seconds)
[9:39] * loicd (~loic@90.84.144.148) has joined #ceph
[9:44] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[9:47] * loicd (~loic@90.84.144.148) Quit (Quit: Leaving.)
[9:50] * ScOut3R (~ScOut3R@212.96.47.215) has joined #ceph
[9:52] * tnt (~tnt@55.188-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[9:54] * Leseb (~Leseb@193.172.124.196) has joined #ceph
[10:01] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[10:02] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[10:03] * anon (~chatzilla@hippo2.bbaw.de) has joined #ceph
[10:04] * tnt (~tnt@212-166-48-236.win.be) has joined #ceph
[10:08] <anon> Hi there!
[10:08] <anon> Is this (from the wiki) still true?:
[10:08] <anon> In shared simultaneous writer situations, a write that crosses object boundaries is not necessarily atomic. This means that you could have writer A write “aa|aa” and writer B write “bb|bb” simultaneously (where | is the object boundary), and end up with “aa|bb” rather than the proper “aa|aa” or “bb|bb”.
[10:08] <anon> This reads like a show stopper for ceph to be used as a cluster fs.
[10:09] <anon> Does someone of you use ceph in a multi writer cluster situation?
[10:09] <tnt> maybe there are cluster fs that don't assume atomicity across block boundaries
[10:11] <anon> the cite is from http://ceph.com/docs/master/dev/differences-from-posix/
[10:11] <anon> i'd like to use a cluster fs
[10:11] <anon> my question is whether i can use ceph or have to use a different cluster fs on top of rbd
[10:21] <nosebleedkt> tnt, hello
[10:21] <nosebleedkt> tnt, in order to get snapshots the OSD filesystem must only be XFS or BTRFS?
[10:25] * tnt has no idea
[10:26] <nosebleedkt> :D
[10:32] * loicd (~loic@2a01:e35:2e9b:c420:20cb:4b08:6ee5:9eb4) has joined #ceph
[10:39] <joao> morning #ceph
[10:39] <tnt> morning @joao :)
[10:40] * joao sets mode -o joao
[10:40] <ScOut3R> morning
[10:40] <joao> anon, operations are guaranteed to be serialized only at the object level afaik
[10:42] <joao> I don't know what cephfs's consistency guarantees are though
[10:43] <joao> but given that cephfs keeps its data and metadata on osds, I'd say that the rule still applies to it on the object level
[10:43] * maxiz (~pfliu@202.108.130.138) Quit (Quit: Ex-Chat)
[10:43] <anon> joao: thanks, but »afaik« is too uncertain for me ;-)
[10:44] <anon> is there someone who knows for sure?
[10:44] <joao> anon, there should be later this afternoon
[10:44] <nosebleedkt> joao, morning :D...
[10:44] <nosebleedkt> joao, do you know in order to get snapshots the OSD filesystem must only be XFS or BTRFS?
[10:45] <joao> I'm basing my answer on meager work I did in the osd a couple months back and the assumption that I fully understood your questions; hence 'afaik' :p
[10:45] <joao> nosebleedkt, what kind of snapshots?
[10:45] <joao> btrfs is only required for journal checkpoints, iirc
[10:46] <joao> I mean, btrfs's snapshot capabilities
[10:46] <nosebleedkt> joao, snapshots. Why are there different kinds of snapshots in ceph?
[10:47] <joao> nosebleedkt, rbd snapshots come to mind, vs the snapshotting done by the osds as checkpointing mechanisms
[10:47] <nosebleedkt> I mean snapshots that can be saved in a backup server
[10:47] <nosebleedkt> so in case of failure i can bring those back
[10:48] <joao> I'm not sure I ever heard of that feature :x
[10:48] <nosebleedkt> lol
[10:48] <nosebleedkt> what are rbd snapshots about ?
[10:49] <joao> http://ceph.com/docs/master/rbd/rbd-snapshot/
[10:50] <nosebleedkt> joao, when I am using a cephfs client
[10:50] <nosebleedkt> can i have snapshot of that?
[10:50] <nosebleedkt> or can I only have snapshots when using ceph via RBD?
[10:51] <joao> not that I know of, *but* I'm far from knowing cephfs that well to know for sure
[10:52] <joao> nosebleedkt, I think that rbd is the only component offering snapshot capabilities to the user
[10:52] <nosebleedkt> yeah
[10:53] <nosebleedkt> i'm trying to understand on which machine we do the snapshots
[10:53] <nosebleedkt> do I make the snapshot as the admin of the cluster?
[10:53] <nosebleedkt> or from the client side
[10:55] <joao> from the docs I'd say either one, but never fooled around with rbd to know that for a fact
[10:56] <joao> also, from the docs:
[10:56] <joao> Ceph also supports snapshot layering, which allows you to clone images (e.g., a VM image) quickly and easily. Ceph supports block device snapshots using the rbd command and many higher level interfaces, including QEMU, libvirt, OpenStack and CloudStack.
[10:56] <joao> so it looks like you can do it from the client side; and a couple lines down you have an example of doing it manually with the 'client.admin' user id
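For reference, the basic rbd snapshot workflow from that page looks roughly like this (pool, image, and snapshot names are placeholders, and exact flags can vary by version):

    rbd snap create rbd/myimage@snap1      # take a snapshot of an image
    rbd snap ls rbd/myimage                # list the image's snapshots
    rbd snap rollback rbd/myimage@snap1    # roll the image back to the snapshot
    rbd snap rm rbd/myimage@snap1          # delete the snapshot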
[10:59] * `gregorg` (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[10:59] * `gregorg` (~Greg@78.155.152.6) has joined #ceph
[11:00] * yoshi (~yoshi@p11251-ipngn4301marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[11:08] * jtangwk (~Adium@2001:770:10:500:6482:6af9:bdc4:912c) Quit (Quit: Leaving.)
[11:11] * loicd (~loic@2a01:e35:2e9b:c420:20cb:4b08:6ee5:9eb4) Quit (Quit: Leaving.)
[11:11] * loicd (~loic@lib59-3-82-233-188-66.fbx.proxad.net) has joined #ceph
[11:12] * MikeMcClurg (~mike@firewall.ctxuk.citrix.com) has joined #ceph
[11:13] * sbadia (~seb@yasaw.net) has joined #ceph
[11:13] <Norman> after playing around with the MDS daemon, I'd like to remove it .... is there a command to remove this?
[11:14] <Norman> because just removing it influences the Health status :(
[11:18] <nosebleedkt> joao, what is the proper filesystem an OSD should be?
[11:19] <joao> any fs with extended attributes should work; right now xfs might be your best bet
[11:26] * jtangwk (~Adium@2001:770:10:500:4953:4af3:3fbc:9d8a) has joined #ceph
[11:51] <jtang> good morning!
[11:51] <nosebleedkt> joao, does it matter what FS the OSD is mounted on, so I can get snapshots?
[12:06] <tnt> but what snapshot are you talking about ...
[12:08] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) Quit (Ping timeout: 480 seconds)
[12:15] * loicd (~loic@lib59-3-82-233-188-66.fbx.proxad.net) Quit (Quit: Leaving.)
[12:15] * loicd (~loic@lib59-3-82-233-188-66.fbx.proxad.net) has joined #ceph
[12:15] * maxiz (~pfliu@114.245.253.77) has joined #ceph
[12:19] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[12:37] <nosebleedkt> tnt, RBD snapshot
[12:48] * loicd (~loic@lib59-3-82-233-188-66.fbx.proxad.net) Quit (Quit: Leaving.)
[12:48] * loicd (~loic@lib59-3-82-233-188-66.fbx.proxad.net) has joined #ceph
[12:51] * loicd1 (~loic@lib59-3-82-233-188-66.fbx.proxad.net) has joined #ceph
[12:53] * loicd1 (~loic@lib59-3-82-233-188-66.fbx.proxad.net) Quit (Read error: Connection reset by peer)
[12:53] * loicd2 (~loic@2a01:e35:2e9b:c420:b1be:c3ae:73d1:46bc) has joined #ceph
[12:53] * loicd (~loic@lib59-3-82-233-188-66.fbx.proxad.net) Quit (Read error: No route to host)
[12:54] * loicd (~loic@lib59-3-82-233-188-66.fbx.proxad.net) has joined #ceph
[12:54] * loicd2 (~loic@2a01:e35:2e9b:c420:b1be:c3ae:73d1:46bc) Quit (Read error: Connection reset by peer)
[13:04] * MikeMcClurg1 (~mike@firewall.ctxuk.citrix.com) has joined #ceph
[13:04] * MikeMcClurg (~mike@firewall.ctxuk.citrix.com) Quit (Read error: Connection reset by peer)
[13:16] * deepsa (~deepsa@115.241.71.12) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[13:21] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has joined #ceph
[13:22] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[13:25] * ramsay_za (~ramsay_za@41.223.32.4) has joined #ceph
[13:29] * gaveen (~gaveen@112.134.112.245) has joined #ceph
[13:34] * gaveen (~gaveen@112.134.112.245) Quit (Remote host closed the connection)
[13:37] <Leseb> hi guys
[13:42] <ScOut3R> hello Leseb
[13:45] * gaveen (~gaveen@175.157.142.169) has joined #ceph
[13:46] <ramsay_za> hey
[13:55] <nosebleedkt> yo :P
[14:09] * mdawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[14:20] * iltisanni (d4d3c928@ircip3.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[14:20] * iltisanni (d4d3c928@ircip3.mibbit.com) has joined #ceph
[14:29] * loicd (~loic@lib59-3-82-233-188-66.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[14:31] * aliguori (~anthony@cpe-70-123-145-75.austin.res.rr.com) has joined #ceph
[14:35] <joao> has anyone here ever used git rerere?
[14:37] * loicd (~loic@90.84.144.36) has joined #ceph
[14:42] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[14:43] <elder> I haven't used it directly but I've had it re-apply stuff for me when it recognized it could, automatically.
[14:45] <joao> have you enabled it then?
[14:45] <tnt> Mmmm, I have a fairly high variation in the number of pg per OSD, (up to 25% difference)
[14:47] <Norman> mmmm with version 0.54, when adding an OSD on a running local cluster, it crashes the existing OSD daemons .... what could be wrong?
[14:48] <joao> Norman, crashes as in they are terminated and a stack trace is dumped?
[14:48] <Norman> joao: crashes in, the daemon's stop running ....
[14:49] <ramsay_za> joao: any segfault in syslog?
[14:49] <joao> Norman, ^ or in the osds logs?
[14:51] <Norman> no segfault in ceph.log, it says -> reported failed by osd.2
[14:51] <Norman> for osd.0 and osd.1
[14:52] <joao> are osd.0 and osd.1 the ones that stopped running?
[14:52] <Norman> yea
[14:53] <joao> and you have no mention of any potential cause for that on either osd.0's or osd.1's logs?
[14:53] <joao> not ceph.log
[14:53] <ramsay_za> what's your add sequence look like?
[14:55] <Norman> yes the osd logs show strange stuff, doesn't make a lot of sense to me though
[14:55] <Norman> I made a new ceph-2 in /var/lib/ceph/osd
[14:56] <Norman> then I made a filesystem for them ceph-osd --mkfs ceph-2
[14:56] <Norman> started the daemon service ceph start osd.2
[14:56] <Norman> then let it join the crush map by doing
[14:56] <Norman> ceph osd crush set 2 osd.2 1.0 pool=default rack=unknownrack host=ceph
[14:57] <joao> did you 'ceph osd create' ?
[14:57] * gaveen (~gaveen@175.157.142.169) Quit (Remote host closed the connection)
[14:57] <joao> not that I see that as a reason for the other two osds to fail
[14:57] <Norman> ah yes I did :) ceph osd create 2
[14:58] <joao> yeah...
[14:58] <joao> it looks right to me
[14:58] <Norman> CPU spikes when doing this and things go nuts
[14:58] <joao> what does 'ceph osd tree' show?
[14:58] <ramsay_za> do you have other osds on other boxes?
[14:59] <Norman> the 3 OSDs with everything checked in, and then after several seconds osd 0 and 1 go down
[14:59] <joao> Norman, those spikes may be due to data shifting around once osd.2 comes up
[14:59] <Norman> no its a local test setup
[14:59] <Norman> well the logs show something like this I see now
[14:59] <Norman> 2012-11-28 10:06:13.553176 7f7391385700 0 -- 192.168.1.76:6802/8942 >> 192.168.1.76:6805/1246 pipe(0x329a480 sd=29 :60010 pgs=0 cs=0 l=0).connect claims to be 192.168.1.76:6805/9189 not 192.168.1.76:6805/1246 - wrong node!
[15:00] <Norman> few 100 times and then it crashes
[15:00] <nosebleedkt> joao, can i have many monitors in ceph.conf ?
[15:01] <joao> Norman, is it safe to assume that the osd on port 6805 is osd.2?
[15:01] <joao> nosebleedkt, afaik, you can have as many as you want
[15:01] <joao> there's probably a limit regarding the monitor's id data type
[15:01] <ramsay_za> yeah joao I'd say port conflict
[15:02] <joao> but that should be in the 32 or 64 bit order
[15:02] <ramsay_za> just make sure it's an odd number of mons
[15:02] <joao> ramsay_za, it looks like an issue with the nonce
[15:02] <joao> but that's a bit outside my comfort zone
[15:02] * anon (~chatzilla@hippo2.bbaw.de) Quit (Quit: ChatZilla 0.9.89 [Firefox 17.0/20121120062532])
[15:02] <Norman> hmm dunno it says this in the log file ---> -- 192.168.1.76:6807/3032 <== osd.2 192.168.1.76:6806/4234 183 ==== pg_query(1.12 epoch 72) v2 ==== 129+0+0 (889852661 0 0) 0x2405a40 con 0x18b3e00
[15:03] <ramsay_za> likewise, I'd say update the ceph.conf with hard port numbers and see what it does
[15:03] <joao> no, don't do that
[15:03] <joao> don't stick the ports on the conf file
[15:03] <joao> a user last week or so had a problem and it was solved by removing the osd ports from the conf file
[15:04] <joao> had something to do with the nonces and a bug gregaf had been chasing
[15:04] <joao> can't recall the details, but unless someone who actually understands what's going on tells you to, don't stick the port numbers on the conf file :p
[15:04] <joao> and that would probably be gregaf or sjust
[15:05] <joao> oh man, this coffee is delicious; brb
[15:07] * nhorman (~nhorman@2001:470:8:a08:7aac:c0ff:fec2:933b) has joined #ceph
[15:15] <tnt> Is it possible to simulate CRUSH ? i.e. see if I add a osd or change the crushmap, how the pg would be distributed ?
[15:17] <ramsay_za> well, as the CRUSH map is a pseudo-random placement algorithm, in theory yes you could; no idea how you'd go about doing it though
[15:17] * joao (~JL@89.181.144.216) Quit (Read error: Connection reset by peer)
[15:18] <tnt> yeah the question was more of a "what tool would I use to do it ? Or do I have to write my own ?" :)
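One existing tool for this is crushtool's test mode, which maps a set of synthetic inputs through a compiled crush map and reports where they land; a sketch, assuming the current map is exported first (available options vary by crushtool version):

    # Export the cluster's current crush map; decompile it if you want to edit it.
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # Simulate placement with 3 replicas and show how inputs map to devices.
    crushtool -i crushmap.bin --test --num-rep 3 --show-utilization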
[15:18] * joao (~JL@89.181.144.216) has joined #ceph
[15:18] * ChanServ sets mode +o joao
[15:18] <joao> crap
[15:18] <joao> disk failed again
[15:19] <joao> took the whole desktop with its dma errors
[15:20] <Norman> just wondering also guys, is it correct that every OSD takes up 4.9GB of space for itself ?
[15:20] <nosebleedkt> joao, ok i used ceph osd pool mksnap {pool-name} {snap-name}
[15:20] <nosebleedkt> to make a snapshot of my data pool
[15:20] <nhm> tnt: talk to caleb, he did some work on crush distribution modelling.
[15:20] <Norman> it says 14703 MB used, 70707 MB / 89981 MB avail, while having nothing in it yet
[15:21] <nhm> joao: doh
[15:21] <nosebleedkt> joao, now how do I use this snapshot?
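A pool snapshot made with ceph osd pool mksnap is read back through rados rather than through the filesystem client; a rough sketch, with hypothetical pool, snapshot, and object names:

    ceph osd pool mksnap data mysnap                  # take the pool snapshot
    rados -p data -s mysnap get myobject /tmp/old     # read an object as it was at the snapshot
    rados -p data rollback myobject mysnap            # roll one object back to the snapshot
    ceph osd pool rmsnap data mysnap                  # drop the snapshot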
[15:21] <tnt> nhm: thanks for the tip
[15:21] <tnt> \whosi calebamiles
[15:21] <tnt> damnit
[15:22] <tnt> calebamiles: ping :)
[15:22] <joao> well, looks like a run to the city for a couple of new hdds is in my near future
[15:23] <joao> Norman, is that a disk dedicated to the osd?
[15:23] <ramsay_za> well 2TB Western Digitals are cheap at the moment
[15:24] <joao> yeah, but I'd rather have two 500GB hdds to be honest
[15:24] <ramsay_za> well I'm out
[15:24] <Norman> joao: no it's just a folder, so it's probably 1x 4.9GB :)
[15:24] <joao> less space to stash my stuff in the desktop means I have to back it up more often
[15:24] * ramsay_za (~ramsay_za@41.223.32.4) Quit (Quit: Leaving)
[15:24] <nhm> tnt: I some day have a dream of trying to introduce some kind of van der Corput sequence, like a Halton sequence, where possible instead of pseudo-random generation, but honestly I have no idea how possible it would be and where.
[15:25] <Norman> joao: rados df show the RBD pool is 5GB
[15:25] <nhm> tnt: I think Sage said it wouldn't work for Crush, but maybe we could do something like that in other areas.
[15:25] <joao> Norman, the osds will report to the monitors whatever comes from statfs
[15:26] <joao> so those statistics are from the whole volume, instead of being the space used by the osds themselves
[15:26] <joao> Norman, not sure how rados df is accounted though
[15:28] <tnt> nhm: well, right now I'm just surprised that pseudo-random isn't as well distributed as I thought it would be. With 12800 pg I didn't expect as much as a 300 pg difference per osd ...
[15:31] * match (~mrichar1@pcw3047.see.ed.ac.uk) has joined #ceph
[15:32] * loicd1 (~loic@90.84.144.36) has joined #ceph
[15:33] <nhm> tnt: We actually just had someone report the same thing. Try using a power-of-two number of PGs. (IE 8192 or 16384)
[15:33] * loicd (~loic@90.84.144.36) Quit (Read error: Connection reset by peer)
[15:34] <nhm> tnt: you could try making a new pool just to test it.
[15:34] <jefferai> nhm: won't help existing pools :-)
[15:34] <nhm> jefferai: yes, that's unfortunately true.
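A low-risk way to try nhm's suggestion, using a throwaway pool (the name is a placeholder and the delete syntax differs between versions):

    # Create a test pool with a power-of-two pg_num and compare its PG spread
    # against the existing pool by tallying OSDs in the pg dump output.
    ceph osd pool create pgtest 8192
    ceph pg dump
    # Remove the test pool once done.
    ceph osd pool delete pgtest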
[15:40] * nyeates (~nyeates@pool-173-59-243-233.bltmmd.fios.verizon.net) has joined #ceph
[15:40] <jefferai> nhm: any idea what time sage usually comes around? I'd like to be able to debug for him if possible, but I have to weigh that against downtim
[15:40] <jefferai> downtime
[15:46] <tnt> nhm: trying now.
[15:47] <nhm> jefferai: he's sometimes around at ~7:30am PST, but that's pretty early for those guys on the coast. Does he know you want to do that for him?
[15:47] <jefferai> no, I left a series of messages in here about 7 hours ago, so he just has pings :-)
[15:47] <jefferai> basically I'm running into a condition that he was unable to replicate before, so I'd like to be able to give him debug info
[15:48] <jefferai> but it's also affecting a production VM :-)
[15:48] <tnt> nhm: doesn't really help. I still have an osd with 156 pg and another with 199.
[15:48] <nhm> jefferai: I'm sure he'd appreciate it! I don't see him online yet. If I see him I can try to ping him.
[15:48] <tnt> (I used pg_num=1024)
[15:49] <nhm> tnt: How many OSDs?
[15:50] <tnt> 12 OSDs distributed over 4 hosts, but with 2/2/4/4 and a rule distributing across hosts rather than across osds.
[15:51] <nhm> tnt: I might have been misremembering and the powers-of-two thing might have just been for how data gets mapped to PGs, not PG/OSD mappings.
[15:51] * jefferai hopes so :-)
[15:52] <tnt> number of objects per PG is pretty well distributed. So is the number of bytes per object.
[15:53] <nhm> jefferai: It still may be really important for balancing how data gets distributed.
[15:54] <jefferai> nhm: wish it was in the documentation then...I did the calculations they suggested in the documentation to pick the number
[15:54] <jefferai> if it's helpful/important to pick a power of two, it's not in there :-)
[15:54] <nhm> jefferai: I'm not sure it's generally well known yet, and it's something I want to test.
[15:55] * PerlStalker (~PerlStalk@72.166.192.70) has joined #ceph
[15:55] <jefferai> ah
[15:55] <nhm> jefferai: we just had the issue reported last week.
[16:03] <nhm> so it looks like that issue is definitely about data placement. I'm doing a quick browse through the code to see if I can tell if it affects PG/OSD mappings. I need to learn this sooner or later anyway.
[16:06] <nhm> ok, we use ceph_stable_mod to map raw pgs into actual pgs, and raw pgs into placement seeds. I'll probably need to go back and read Sage's paper.
[16:06] * Kioob`Taff1 (~plug-oliv@local.plusdinfo.com) Quit (Quit: Leaving.)
[16:09] * epitron_ (~epitron@bito.ponzo.net) has joined #ceph
[16:11] * MapspaM (~clint@xencbyrum2.srihosting.com) has joined #ceph
[16:12] * loicd1 (~loic@90.84.144.36) Quit (resistance.oftc.net graviton.oftc.net)
[16:12] * nhorman (~nhorman@2001:470:8:a08:7aac:c0ff:fec2:933b) Quit (resistance.oftc.net graviton.oftc.net)
[16:12] * jtangwk (~Adium@2001:770:10:500:4953:4af3:3fbc:9d8a) Quit (resistance.oftc.net graviton.oftc.net)
[16:12] * sbadia (~seb@yasaw.net) Quit (resistance.oftc.net graviton.oftc.net)
[16:12] * Norman (53a31f10@ircip2.mibbit.com) Quit (resistance.oftc.net graviton.oftc.net)
[16:12] * KindOne (KindOne@h161.22.131.174.dynamic.ip.windstream.net) Quit (resistance.oftc.net graviton.oftc.net)
[16:12] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (resistance.oftc.net graviton.oftc.net)
[16:12] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (resistance.oftc.net graviton.oftc.net)
[16:12] * nhm (~nh@184-97-251-146.mpls.qwest.net) Quit (resistance.oftc.net graviton.oftc.net)
[16:12] * gregaf (~Adium@2607:f298:a:607:fd99:359:95ec:8287) Quit (resistance.oftc.net graviton.oftc.net)
[16:12] * yehudasa (~yehudasa@2607:f298:a:607:45bd:9a9d:83a3:4164) Quit (resistance.oftc.net graviton.oftc.net)
[16:12] * todin (tuxadero@kudu.in-berlin.de) Quit (resistance.oftc.net graviton.oftc.net)
[16:12] * dok_ (~dok@static-50-53-68-158.bvtn.or.frontiernet.net) Quit (resistance.oftc.net graviton.oftc.net)
[16:12] * sagewk (~sage@2607:f298:a:607:e116:e786:b94f:5586) Quit (resistance.oftc.net graviton.oftc.net)
[16:12] * SpamapS (~clint@xencbyrum2.srihosting.com) Quit (resistance.oftc.net graviton.oftc.net)
[16:12] * AaronSchulz (~chatzilla@216.38.130.166) Quit (resistance.oftc.net graviton.oftc.net)
[16:12] * cclien_ (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) Quit (resistance.oftc.net graviton.oftc.net)
[16:12] * asadpanda (~asadpanda@67.231.236.80) Quit (resistance.oftc.net graviton.oftc.net)
[16:12] * epitron (~epitron@bito.ponzo.net) Quit (resistance.oftc.net graviton.oftc.net)
[16:12] * Psi-jack (~psi-jack@psi-jack.user.oftc.net) Quit (resistance.oftc.net graviton.oftc.net)
[16:12] * todin (tuxadero@kudu.in-berlin.de) has joined #ceph
[16:13] * gucki (~smuxi@80-218-125-247.dclient.hispeed.ch) has joined #ceph
[16:14] * mdawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[16:15] * Psi-jack (~psi-jack@psi-jack.user.oftc.net) has joined #ceph
[16:16] * nhm (~nh@184-97-251-146.mpls.qwest.net) has joined #ceph
[16:18] * dok (~dok@static-50-53-68-158.bvtn.or.frontiernet.net) has joined #ceph
[16:18] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[16:18] * ChanServ sets mode +o elder
[16:23] * loicd1 (~loic@90.84.144.36) has joined #ceph
[16:23] * nhorman (~nhorman@2001:470:8:a08:7aac:c0ff:fec2:933b) has joined #ceph
[16:23] * jtangwk (~Adium@2001:770:10:500:4953:4af3:3fbc:9d8a) has joined #ceph
[16:23] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[16:23] * gregaf (~Adium@2607:f298:a:607:fd99:359:95ec:8287) has joined #ceph
[16:23] * yehudasa (~yehudasa@2607:f298:a:607:45bd:9a9d:83a3:4164) has joined #ceph
[16:23] * sagewk (~sage@2607:f298:a:607:e116:e786:b94f:5586) has joined #ceph
[16:23] * AaronSchulz (~chatzilla@216.38.130.166) has joined #ceph
[16:23] * cclien_ (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) has joined #ceph
[16:23] * asadpanda (~asadpanda@67.231.236.80) has joined #ceph
[16:23] * AaronSchulz (~chatzilla@216.38.130.166) Quit (Read error: Connection timed out)
[16:23] * loicd1 (~loic@90.84.144.36) Quit (Read error: Connection timed out)
[16:24] * AaronSchulz (~chatzilla@216.38.130.166) has joined #ceph
[16:24] * nosebleedkt (~kostas@kotama.dataways.gr) Quit (Quit: Leaving)
[16:27] * asadpand- (~asadpanda@67.231.236.80) has joined #ceph
[16:29] * sbadia (~seb@yasaw.net) has joined #ceph
[16:29] * benner (~benner@193.200.124.63) Quit (Read error: Connection reset by peer)
[16:29] * loicd (~loic@90.84.144.36) has joined #ceph
[16:32] * cclien_ (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) Quit (resistance.oftc.net graviton.oftc.net)
[16:32] * sagewk (~sage@2607:f298:a:607:e116:e786:b94f:5586) Quit (resistance.oftc.net graviton.oftc.net)
[16:32] * yehudasa (~yehudasa@2607:f298:a:607:45bd:9a9d:83a3:4164) Quit (resistance.oftc.net graviton.oftc.net)
[16:32] * gregaf (~Adium@2607:f298:a:607:fd99:359:95ec:8287) Quit (resistance.oftc.net graviton.oftc.net)
[16:32] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (resistance.oftc.net graviton.oftc.net)
[16:32] * jtangwk (~Adium@2001:770:10:500:4953:4af3:3fbc:9d8a) Quit (resistance.oftc.net graviton.oftc.net)
[16:32] * nhorman (~nhorman@2001:470:8:a08:7aac:c0ff:fec2:933b) Quit (resistance.oftc.net graviton.oftc.net)
[16:32] * asadpanda (~asadpanda@67.231.236.80) Quit (resistance.oftc.net graviton.oftc.net)
[16:33] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[16:33] * illuminatis (~illuminat@0001adba.user.oftc.net) Quit (Quit: WeeChat 0.3.9.2)
[16:39] * benner (~benner@193.200.124.63) has joined #ceph
[16:42] * maxiz (~pfliu@114.245.253.77) Quit (Ping timeout: 480 seconds)
[16:52] * maxiz (~pfliu@111.194.202.4) has joined #ceph
[16:53] * nhorman (~nhorman@2001:470:8:a08:7aac:c0ff:fec2:933b) has joined #ceph
[16:53] * jtangwk (~Adium@2001:770:10:500:4953:4af3:3fbc:9d8a) has joined #ceph
[16:53] * gregaf (~Adium@2607:f298:a:607:fd99:359:95ec:8287) has joined #ceph
[16:53] * yehudasa (~yehudasa@2607:f298:a:607:45bd:9a9d:83a3:4164) has joined #ceph
[16:53] * sagewk (~sage@2607:f298:a:607:e116:e786:b94f:5586) has joined #ceph
[16:53] * cclien_ (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) has joined #ceph
[16:53] * KindOne (KindOne@h161.22.131.174.dynamic.ip.windstream.net) has joined #ceph
[16:54] * vata (~vata@208.88.110.46) has joined #ceph
[17:03] * tziOm (~bjornar@194.19.106.242) Quit (Remote host closed the connection)
[17:05] * sagelap (~sage@111.sub-70-197-150.myvzw.com) has joined #ceph
[17:10] * sagelap1 (~sage@88.sub-70-197-143.myvzw.com) has joined #ceph
[17:13] * sagelap (~sage@111.sub-70-197-150.myvzw.com) Quit (Ping timeout: 480 seconds)
[17:13] * joshd1 (~jdurgin@2602:306:c5db:310:c0b6:60ae:5292:f4c6) has joined #ceph
[17:15] * benner (~benner@193.200.124.63) Quit (Read error: Connection reset by peer)
[17:16] * loicd (~loic@90.84.144.36) Quit (Ping timeout: 480 seconds)
[17:17] * verwilst (~verwilst@d5152FEFB.static.telenet.be) Quit (Quit: Ex-Chat)
[17:19] * jlogan1 (~Thunderbi@2600:c00:3010:1:742b:7b42:4526:f0f2) has joined #ceph
[17:20] * benner (~benner@193.200.124.63) has joined #ceph
[17:22] <jefferai> sagelap1: sagewk: Sorry for the insistent pings, but my time to debug is running short before I have to simply try restarting the daemon...demoing some stuff in an hour and a half and I need to try to have this working by then
[17:26] * nyeates (~nyeates@pool-173-59-243-233.bltmmd.fios.verizon.net) Quit (Quit: Zzzzzz)
[17:29] <nhm> jefferai: I pinged sage earlier but didn't hear back. He might be in a meeting.
[17:30] <jefferai> yeah
[17:30] <jefferai> ah well
[17:31] <jefferai> I hadn't pinged those other nicks, so...
[17:31] <jefferai> I'd just really like to try to help you guys out with figuring out the problem, but I have to try to have this demo working
[17:31] <jefferai> I'll give it a little longer
[17:32] * loicd (~loic@90.84.144.36) has joined #ceph
[17:34] <joshd1> jefferai: I think having an osd log of the problem happening with debug ms = 1, debug osd = 20, debug filestore = 20 would help
[17:35] <jefferai> joshd1: can those values be changed on the fly?
[17:35] * sagelap1 (~sage@88.sub-70-197-143.myvzw.com) Quit (Ping timeout: 480 seconds)
[17:36] <joshd1> jefferai: yes, using injectargs will work for debug settings
[17:36] <jefferai> can you tell me how to do it?
[17:37] <joshd1> ceph osd tell \* '--debug-ms 1 --debug-osd 20 --debug-filestore 20'
[17:38] <joshd1> err, ceph osd tell \* injectargs '--debug-ms 1 --debug-osd 20 --debug-filestore 20'
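Those debug levels generate a lot of log output; once the incident is captured they can be turned back down the same way (to whatever the previous levels were), e.g.:

    ceph osd tell \* injectargs '--debug-ms 0 --debug-osd 0 --debug-filestore 0'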
[17:38] <jefferai> huh, I wonder why ceph -s says my health is okay but my mon is getting all these osd X reported failed by osd Y messages
[17:39] * ScOut3R (~ScOut3R@212.96.47.215) Quit (Remote host closed the connection)
[17:39] * loicd1 (~loic@90.84.144.20) has joined #ceph
[17:39] <joshd1> backend network issue that doesn't affect osd<->mon connectivity?
[17:39] <jefferai> not that I'm aware of
[17:40] * loicd (~loic@90.84.144.36) Quit (Ping timeout: 480 seconds)
[17:46] <jefferai> I'm getting flooded with these:
[17:46] <jefferai> 2012-11-28 11:46:27.509283 7f9a5c590700 0 -- 192.168.37.202:6816/32522 >> 192.168.37.204:6811/23258 pipe(0xb5dc900 sd=30 :49535 pgs=0 cs=0 l=0).connect claims to be 192.168.37.204:6811/37926 not 192.168.37.204:6811/23258 - wrong node!
[17:47] <jefferai> hm, got tons of those for a bit, but then they went away
[17:48] * sagelap (~sage@2607:f298:a:607:7463:bf6a:b3fa:74f8) has joined #ceph
[17:51] * ircolle (~ian@c-67-172-132-164.hsd1.co.comcast.net) has joined #ceph
[17:54] * tnt (~tnt@212-166-48-236.win.be) Quit (Ping timeout: 480 seconds)
[17:58] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) has joined #ceph
[17:58] * danieagle (~Daniel@177.99.132.31) has joined #ceph
[17:59] <jefferai> no, they come and go
[17:59] <jefferai> wonder what is causing that
[18:00] <jefferai> and they seem to be related to I/O hiccups I'm seeing
[18:00] <joshd1> yeah, that looks like a bug
[18:01] <joshd1> sagelap might be here now, is debug ms 1 enough to debug that 'wrong node' problem?
[18:06] * Leseb (~Leseb@193.172.124.196) Quit (Quit: Leseb)
[18:06] <jefferai> ok, for the moment the machine I need up and running for the demo is fine...later on I can try to debug the wrong node stuff with you guys
[18:06] <jefferai> bbl
[18:07] <joshd1> ok, thanks
[18:09] <sagewk> jefferai: thanks. ping me later when you have some time to collect logs etc
[18:09] <jefferai> sure
[18:10] <jefferai> will be in maybe 3 hours or so
[18:10] <jefferai> for now things are working although I/O is very slow (with these errors occurring when I make I/O requests, so I'm sure they're related)
[18:10] * tnt (~tnt@55.188-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[18:11] <jefferai> sagewk: if you scroll back a bunch I detailed as much as possible the conditions that led to the osd wonkiness
[18:11] <jefferai> which is a separate but possibly related issue to what I'm seeing now
[18:11] <jefferai> anyways, bbl
[18:15] <sagewk> k
[18:19] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[18:23] * loicd1 (~loic@90.84.144.20) Quit (Ping timeout: 480 seconds)
[18:24] * nyeates (~nyeates@pool-173-59-243-233.bltmmd.fios.verizon.net) has joined #ceph
[18:25] * loicd (~loic@90.84.144.20) has joined #ceph
[18:27] * yehuda_hm (~yehuda@2602:306:330b:a40:992c:171e:117b:ab90) has joined #ceph
[18:32] * nwatkins (~Adium@soenat3.cse.ucsc.edu) has joined #ceph
[18:33] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[18:34] * Leseb_ (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[18:34] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[18:34] * Leseb_ is now known as Leseb
[18:38] * joshd1 (~jdurgin@2602:306:c5db:310:c0b6:60ae:5292:f4c6) Quit (Quit: Leaving.)
[18:40] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has left #ceph
[18:50] * Leseb_ (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[18:50] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[18:50] * Leseb_ is now known as Leseb
[18:51] * rweeks (~rweeks@c-98-234-186-68.hsd1.ca.comcast.net) has joined #ceph
[18:53] * loicd (~loic@90.84.144.20) Quit (Ping timeout: 480 seconds)
[18:53] * joao sets mode -o joao
[18:59] * sjustlaptop (~sam@68-119-138-53.dhcp.ahvl.nc.charter.com) has joined #ceph
[19:01] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:09] * chutzpah (~chutz@199.21.234.7) has joined #ceph
[19:10] * loicd (~loic@178.20.50.225) has joined #ceph
[19:13] * The_Bishop (~bishop@2001:470:50b6:0:85ca:6278:8125:d9e9) Quit (Quit: Wer zum Teufel ist dieser Peer? Wenn ich den erwische dann werde ich ihm mal die Verbindung resetten!)
[19:16] * MikeMcClurg1 (~mike@firewall.ctxuk.citrix.com) Quit (Quit: Leaving.)
[19:17] * noob2 (a5a00214@ircip1.mibbit.com) has joined #ceph
[19:19] <noob2> i have a quick question about the radosgw. How can i show usage information for users? Also how do i set up a limit on how much a user can upload?
[19:24] * aliguori (~anthony@cpe-70-123-145-75.austin.res.rr.com) Quit (Remote host closed the connection)
[19:39] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[19:39] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[19:51] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) has joined #ceph
[20:03] * aliguori (~anthony@32.97.110.59) has joined #ceph
[20:12] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[20:15] * adjohn (~adjohn@69.170.166.146) has joined #ceph
[20:18] * loicd1 (~loic@178.20.50.225) has joined #ceph
[20:18] * loicd (~loic@178.20.50.225) Quit (Ping timeout: 480 seconds)
[20:23] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) has joined #ceph
[20:32] * denken (~denken@dione.pixelchaos.net) has joined #ceph
[20:32] * dmick (~dmick@2607:f298:a:607:9971:8e07:6a30:2cb8) has joined #ceph
[20:44] * loicd1 (~loic@178.20.50.225) Quit (Ping timeout: 480 seconds)
[20:46] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) Quit (Quit: Leaving.)
[20:49] * dmick is currently being amused by CEPH_AES_IV
[20:53] <jefferai> sagewk: back, for when you're around
[20:53] <jefferai> btw, this is the mon crash: http://paste.kde.org/616310/
[20:54] <joao> jefferai, can you grep the log for "FAILED" ?
[20:55] <joao> would be nice to know whether it was the 'have_pending' assertion
[20:56] <joao> or the other one
[20:59] <noob2> does the 0.9.11 libvirt support ceph? i think it does but i'm not sure
[20:59] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[20:59] <noob2> oh looks like 0.9.13. i found it
[21:01] <slang> I know there are ci builds for the ceph-client, as I see the packages on gitbuilder.ceph.com
[21:02] <slang> but I can't seem to find the equivalent of ceph.com/gitbuilder.cgi for the kernel
[21:02] <joshd> noob2: the storage pool was added in 0.9.13, but just attaching disks to vms has been there longer (originally in 0.8.7, authentication support added in 0.9.7)
[21:03] <slang> elder: do you know?
[21:03] <joshd> slang: gitbuilder.sepia.ceph.com/gitbuilder-precise-kernel-amd64/
[21:03] <joshd> there's only one
[21:03] <slang> ah I just missed it
[21:03] * BManojlovic (~steki@81.18.49.20) has joined #ceph
[21:03] <slang> joshd: thanks!
[21:04] <joshd> np
[21:06] <jefferai> joao: sorry, do you mean the mon? I think that log got overwritten
[21:07] <lurbs> noob2: From what I can tell Ubuntu's 0.9.13 (in 12.10, or 12.04 LTS Cloud Archive) doesn't support RBD pools, though. Wasn't compiled with support.
[21:07] <jefferai> joao: yep -- mon/PaxosService.cc: 110: FAILED assert(have_pending)
[21:07] <joao> jefferai, yeah, that's what I meant
[21:07] <joao> thanks
[21:07] <jefferai> sure
[21:08] <joao> jefferai, can you make that log available somewhere?
[21:08] <jefferai> I now have two of my storage boxes filling logs with the "X claims to be Y -- wrong node!" bits
[21:08] <elder> slang, sorry, wasn't paying attention.
[21:09] <elder> I see Josh answered your question.
[21:09] <slang> elder: no worries, yep
[21:10] <jefferai> joao: it's about 100k compressed...how do you want it?
[21:10] <jefferai> I could email it to you
[21:10] <jefferai> or uuencode it and post on a pastebin
[21:10] <joao> email is fine
[21:10] <joao> joao.luis@inktank.com
[21:11] <jefferai> sent'
[21:11] <joao> ty
[21:11] <jefferai> joao: might you know anything about the wrong node log entries?
[21:12] <joao> jefferai, I don't, but another user was seeing those earlier today as well
[21:12] <jefferai> might have been me :-)
[21:12] <joao> <Norman> 2012-11-28 10:06:13.553176 7f7391385700 0 -- 192.168.1.76:6802/8942 >> 192.168.1.76:6805/1246 pipe(0x329a480 sd=29 :60010 pgs=0 cs=0 l=0).connect claims to be 192.168.1.76:6805/9189 not 192.168.1.76:6805/1246 - wrong node!
[21:13] <jefferai> ah, okay
[21:13] <jefferai> yeah, it's killing my I/O too
[21:14] <joao> I'm not sure if it has anything to do with this, but the nonces seem off
[21:14] <joao> and last week (or so) another user ran into issues with nonces screwing up the osds
[21:15] <joao> may be unrelated though
[21:15] <sjustlaptop> joao: that bug occurred due to explicitly setting the port in the osd conf
[21:15] <sjustlaptop> fwiw
[21:15] <joao> oh yeah
[21:15] <joao> that's right
[21:15] <sjustlaptop> jefferai: did you explicitly set the osd port?
[21:16] <jefferai> sjustlaptop: only for the mons
[21:16] <jefferai> not the osds
[21:17] <jefferai> meh, getting some slow requests now too
[21:21] <noob2> ok thanks
[21:22] <sagewk> elder: there?
[21:22] <elder> Yes
[21:23] <sagewk> are you rebasing testing, or can i force push fixes to that osdtimeout patch?
[21:23] <elder> Send me the commit, I'll add it before I push.
[21:23] <sagewk> wip-osdtimeout
[21:24] <sagewk> thanks
[21:24] <elder> I have not been running iozone because it was failing long ago, by the way. It completed for me, now it hit a failure again...
[21:24] <elder> No problem.
[21:24] <elder> I'm going to push my fix though, because it fixes the direct I/O problem.
[21:24] <elder> I'll document the failure I see in a bug for iozone.
[21:25] <sagewk> k
[21:25] <jefferai> sagewk: FWIW I've started seeing some more slow requests, if you want to look at that
[21:25] <sagewk> jefferai: yeah!
[21:25] <jefferai> although the more pressing issue I'm having is this "wrong node!" issue
[21:26] <sagewk> are any osds going up or down?
[21:26] <jefferai> because it's just absolutely dumping the logs with those messages
[21:26] <jefferai> um
[21:26] <sagewk> when you see the wrong node?
[21:26] <jefferai> what's the best way to tell?
[21:26] <sagewk> ceph -w
[21:26] <sagewk> whose logs?
[21:26] <jefferai> no, nothing going up and down
[21:26] <jefferai> ceph -w is showing normal mon traffic and 1 slow request
[21:27] <jefferai> ok, so I have node 1, 2, and 4
[21:27] <jefferai> nodes 1 and 2 are complaining, when talking to node 4
[21:27] <jefferai> so, for instance, this is from node 1:
[21:27] <jefferai> 2012-11-28 15:27:28.137449 7fd320440700 0 -- 192.168.37.201:6800/13115 >> 192.168.37.204:6804/22552 pipe(0xc15db40 sd=15 :46294 pgs=0 cs=0 l=0).connect claims to be 192.168.37.204:6804/30624 not 192.168.37.204:6804/22552 - wrong node!
[21:27] <jefferai> .201 = node 1, .204 = node 4
[21:28] <jefferai> node 4's logs are pretty quiet
[21:29] <elder> sagewk (or anyone else), where does "suites/iozone.sh" actually run? Does it depend on what tasks are before it (e.g., rbd: or ceph: or ceph-fuse:)?
[21:30] <sagewk> in /tmp/cephtest/mnt.NNN for client.NNN
[21:30] <sagewk> kclinet, ceph-fuse, rbd tasks all mount there
[21:30] <elder> So whichever was the last of those prior to running it is where it runs.
[21:31] <elder> If I have rbd: before it, then it runs in a file system mounted on an rbd image.
[21:31] * illuminatis (~illuminat@0001adba.user.oftc.net) has joined #ceph
[21:31] <sagewk> whichever client the workunit is running on..
[21:31] <elder> OK.
[21:31] <elder> I got no problem with kclient, but see a weird thing with rbd. I'll file it as an rbd bug.
[21:34] <sagewk> k
[21:35] * adjohn (~adjohn@69.170.166.146) Quit (Quit: adjohn)
[21:38] <denken> any known memory leaks in argonaut? trying to figure out what this is all about https://dl.dropbox.com/u/49973161/argonaut.png
[21:39] <sagewk> denken: monitor or osd?
[21:39] <denken> thats 12 nodes over a 14 day period... ending with a restart of all ceph services
[21:39] <sagewk> there are known (and now fixed) leaks in the monitor
[21:39] <denken> especially the monitors, but we are seeing it with non-mons, too
[21:40] <denken> good to know on the monitors though. we were planning on bobtail anyway.
[21:43] * `gregorg` (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[21:43] * `gregorg` (~Greg@78.155.152.6) has joined #ceph
[21:57] <jefferai> sagewk: forget about me? :-)
[21:57] <sagewk> jefferai: can you send a 'ceph osd dump' ?
[21:57] <jefferai> sagewk: sorry, just saw your ping
[21:58] * KindTwo (KindOne@h209.39.28.71.dynamic.ip.windstream.net) has joined #ceph
[21:58] <jefferai> http://paste.kde.org/616358/
[22:01] * KindOne (KindOne@h161.22.131.174.dynamic.ip.windstream.net) Quit (Ping timeout: 480 seconds)
[22:01] * KindTwo is now known as KindOne
[22:06] <elder> sagewk, testing branch pushed, I'm done with your wip-osdtimeout branch.
[22:11] <sagewk> elder: thanks
[22:22] <elder> sagewk, got a minute to talk about the "resubmit linger ops" patch? 469c1d1
[22:22] <sagewk> in 15 min?
[22:22] <elder> Sure. Let me know when you're available.
[22:31] * nhorman (~nhorman@2001:470:8:a08:7aac:c0ff:fec2:933b) Quit (Quit: Leaving)
[22:54] * adjohn (~adjohn@69.170.166.146) has joined #ceph
[23:01] * aliguori (~anthony@32.97.110.59) Quit (Remote host closed the connection)
[23:09] * The_Bishop (~bishop@e179015093.adsl.alicedsl.de) has joined #ceph
[23:12] * plut0 (~cory@pool-96-236-43-69.albyny.fios.verizon.net) has joined #ceph
[23:22] <elder> sagewk, I think my test script problem is due to a bug in bash. I've bumped into this sort of thing in the (distant) past.
[23:23] <sagewk> lovely
[23:23] <elder> ((COUNT++)) caused my loop to quit.
[23:23] <elder> COUNT=$(expr $COUNT + 1) does not
[23:23] <sagewk> COUNT=$(($COUNT+1))
[23:23] <sagewk> yeah
[23:24] <elder> Is my 15 minutes up?
[23:27] <elder> Actually, now that I look at it, maybe it's not a bug after all, but then that begs the question why running my script manually did not suffer this problem.
[23:27] * mdawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[23:28] * sagelap (~sage@2607:f298:a:607:7463:bf6a:b3fa:74f8) Quit (Ping timeout: 480 seconds)
[23:28] <elder> ((COUNT++)) is a ((expression)) and its return status is 0 if the expression is non-zero. A post-decrement of COUNT which was initially zero should I guess have a value 0, meaning return status 1.
[23:28] <elder> Hmph.
[23:28] <elder> I'll use expr.
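A minimal reproduction of what elder describes, assuming errexit (set -e) was in effect in the test script, which would explain the early exit:

    #!/bin/bash
    set -e
    COUNT=0

    # (( COUNT++ )) expands to the pre-increment value, 0, so the arithmetic
    # command returns status 1 and errexit would abort the script right here:
    #   (( COUNT++ ))

    # Either of these increments always succeeds, so the script keeps running:
    COUNT=$((COUNT + 1))
    COUNT=$(expr "$COUNT" + 1)
    echo "COUNT=$COUNT"    # prints COUNT=2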
[23:30] <dmick> I'm trying to figure out how ++ is a post-decrement :)
[23:30] <dmick> maybe in Australia?
[23:31] <elder> Yes, sorry, I was playing by Australian rules. I meant --COUNT is pre-increment.
[23:31] <elder> Coralis effect, you know.
[23:33] * Steki (~steki@81.18.49.20) has joined #ceph
[23:34] * BManojlovic (~steki@81.18.49.20) Quit (Read error: Connection reset by peer)
[23:37] <rweeks> Coriolis
[23:37] <rweeks> <.<
[23:37] <rweeks> </pedant>
[23:38] <lurbs> No opening <pedant> tag? For shame.
[23:38] * sagelap (~sage@2607:f298:a:607:f51a:d51:bb88:da91) has joined #ceph
[23:39] <rweeks> I've never been an XML purist.
[23:39] * Steki (~steki@81.18.49.20) Quit (Read error: Connection reset by peer)
[23:39] * Steki (~steki@81.18.49.20) has joined #ceph
[23:42] * noob2 (a5a00214@ircip1.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[23:42] <elder> Sorry rweeks I'm usually a pedant as well, but I've given up on it for typo's in contexts like this.
[23:42] <rweeks> hehe
[23:42] <rweeks> no worries
[23:42] <rweeks> at least you didn't call it the "Corvallis effect"
[23:43] <elder> Those Orginers
[23:43] <elder> (That's how they like it pronounced, right? Two syllables?)
[23:44] <rweeks> something like that
[23:44] <rweeks> I prefer saying "or-e-gone" just to annoy them
[23:45] * danieagle (~Daniel@177.99.132.31) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[23:46] <sjustlaptop> rweeks: I find "californian" works better
[23:46] * rweeks chuckles
[23:46] <elder> Well, it's North California, right?
[23:47] * Steki (~steki@81.18.49.20) Quit (Read error: Connection reset by peer)
[23:47] <rweeks> it's where we sent all the hippies that were too loopy for SF or Humboldt
[23:47] * Steki (~steki@81.18.49.20) has joined #ceph
[23:47] * nyeates (~nyeates@pool-173-59-243-233.bltmmd.fios.verizon.net) Quit (Quit: nyeates)
[23:55] <plut0> rweeks: whats up?
[23:56] * rweeks checks
[23:56] <rweeks> the ceiling
[23:56] <plut0> when you going to email me? :)
[23:57] <rweeks> soon I hope
[23:57] <plut0> waiting on approval?
[23:58] <rweeks> yep
[23:58] <plut0> i see

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.