#ceph IRC Log

Index

IRC Log for 2012-03-14

Timestamps are in GMT/BST.

[0:02] <jefferai> Also, in terms of performance: RAID-10 would make a bunch of slow HDDs be much more performant; how does Ceph do in terms of performance, if I were to have e.g. 4 nodes with 2x replication?
[0:03] <jefferai> does RBD multipath amongst them pretty well?
[0:03] <dwm__> jefferai: RBD stripes over 4MB objects.
[0:03] <jefferai> yes
[0:03] <jefferai> er
[0:03] <dwm__> (And can be configured to keep an arbitrary number of replicas.)
[0:03] <jefferai> right, so if I went with 4 nodes for OSD storage, 2 replicas would probably be good -- I could lose two of them
[0:03] <dwm__> So you can get MD to do replication, or you can get Ceph to do replication, or both.
[0:04] <jefferai> dwm__: right...
[0:04] <dwm__> jefferai: Ah, I think the failure modes work slightly differently.
[0:04] <jefferai> I guess I'm wondering about that tradeoff. Fewer nodes, but replication locally via RAID; or more nodes, but replication via Ceph
[0:04] <jefferai> especially in terms of performance, since I'd be running VMs off of these
[0:04] <dwm__> If you only replicate Ceph data twice, and you have a non-trivial amount of data, then you will lose some of that data if you lose two OSDs.
[0:05] <jefferai> you will?
[0:05] <jefferai> what happened to the third copy?
[0:05] <dwm__> jefferai: Ah, apologies, difference in terminology.
[0:05] <jefferai> oh
[0:05] <jefferai> I see
[0:05] <jefferai> you mean, 2 copies total
[0:05] <jefferai> I'm thinking 3 copies total
[0:05] <dwm__> Precisely.
[0:05] <jefferai> I guess proper terminology would be 3 replicas, then
[0:06] <dwm__> Depends on semantics. So long as you're consistent, and other people know what you mean. :)
[0:06] <jefferai> heh
[0:06] <Tv|work> the 1+n naming is more common in the master-slave world
[0:06] <jefferai> so I could do local RAID and have replicas (plus 2 total copies via Ceph replication)
[0:06] <Tv|work> natively distributed systems just tend to say n
[0:06] <dwm__> Anyway, the reason I looked in: I see that (Sage?) might be stopping over in England on the way to Germany?
[0:07] <jefferai> or I could do Ceph repliation and have 3 total copies across 4 nodes
[0:07] <jefferai> but if I did that, I'm wondering about performance
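For reference, the tradeoff above is controlled per pool: Ceph keeps as many copies of each object as the pool's replication size. A minimal sketch using the CLI of this era (the pool name "rbd" and the value 3 are assumptions for illustration):

    # keep 3 total copies of every object in the pool backing the RBD images
    ceph osd pool set rbd size 3
    # the pool line in the osd map shows its replication size (output format varies by version)
    ceph osd dump | grep rbd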
[0:07] <elder> *)#)(&&@@%!!! "There was an error installing the bootloader. The system may not be bootable."
[0:08] <dwm__> elder: GRUB2 on Debian?
[0:08] <elder> sagewk, I'm about to update the testing branch. Do you have any desire to look at my wip-testing branch before I do that?
[0:08] <elder> dwm__, no, Fedora 16 upgrade over Fedora 15.
[0:08] <sagewk> elder: go for it!
[0:08] <elder> OK.
[0:09] <elder> Just for good measure I did a pull of linus/master and found only that one expected conflict.
[0:11] <sagewk> cool
[0:13] <elder> I'll wait until testing gets through at least one nightly set of tests before I push out master, which shares all but the last 5-6 commits with what is now the testing branch.
[0:13] <elder> Oh, dwm__ I believe it's grub2 on Fedora though.
[0:14] <dwm__> elder: `grub-install /dev/<root device>` is usually pretty good at telling you what the problem is.
[0:14] <elder> It's a machine intended for testing, so it's not as much of a loss. I'll try that if I can get it booted.
[0:14] <dwm__> I had a case recently where the amount of size used by the GRUB2 bootstrap image had grown to be too big to fit before my first partition.
[0:14] <dwm__> Given it was a fresh install, simply blew it away and made /dev/sda1 start a little later into the disk..
[0:15] <elder> I don't remember the partition size, but I've been burned by that before so I'm sure I have at least a GB, maybe several for /boot.
[0:15] <dwm__> elder: No, I mean the image that goes in the MBR.
[0:15] <elder> Ohhh.
[0:15] <elder> Wow.
[0:15] <elder> I hadn't thought about that.
[0:15] <jefferai> dwm__: I've found that starting at sector 2048 is a good option these days
[0:16] <elder> I'm brand new to grub2... Seems like they went wild.
[0:16] <jefferai> (and a lot of distros default to it now)
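For the curious: starting at sector 2048 leaves roughly 1MB of free space before the first partition, which is ample room for the GRUB2 core image dwm__ ran out of. A hedged sketch with parted (the device name is an assumption, and mklabel is destructive):

    parted /dev/sda mklabel msdos               # new MBR partition table (wipes the disk)
    parted /dev/sda mkpart primary 2048s 100%   # first partition starts at sector 2048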
[0:16] <dwm__> It dynamically includes all the modules it needs to find the kernel files -- and in my case that included MD's RAID10 code and LVM.
[0:16] <jefferai> elder: they sure did -- in theory it's much more capable, but it's still wonky in many ways
[0:16] <dwm__> I do like GRUB2, if only because it *can* cope with a rootfs on LVM inside MD..
[0:16] <jefferai> dwm__: yeah, if you really hate boot partitions, that's a nice benefit
[0:17] <elder> My needs aren't very demanding. It just means I have a new language to learn.
[0:17] <jefferai> and new bugs to find!
[0:17] <elder> (Or system. I know how to do shell scripting.)
[0:17] <dwm__> elder: TBH, the scripts shipped with distributions will probably Do What You Want out of the box.
[0:17] <elder> It Did Not.
[0:18] <dwm__> elder: I should probably not assume that's because the Fedora scripts are suckier than Debian's. :P
[0:18] <elder> But to be fair, it was an upgrade, on a dual-booted system with XP I think so I could upgrade firmware.
[0:19] <elder> I don't remember how I configured the thing, I just had the crazy idea that upgrading might magically get me past the problem I was facing with the machine. Instead, it gave me a new one without solving the old.
[0:19] <dwm__> But yes, the nice thing about LVM is that I can lazily allocate space to disk partitions -- and don't need to artificially split /boot.
[0:20] <elder> I think I might just scrub it at this point and start fresh.
[0:20] <elder> Maybe install Ubuntu so I'm only dealing with one type of system.
[0:22] * LarsFronius (~LarsFroni@f054106023.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[0:32] <jefferai> stupid question: btrfs is recommended for ceph; what happens if btrfs goes toast? The OSD becomes unavailable, and when btrfs is recreated, you'd bring the OSD back up and it would have its data written back from the replicas?
[0:33] <Tv|work> jefferai: in general, if you lose an fs, yes
[0:33] <Tv|work> jefferai: most of the time, we see a non-dataloss crash instead
[0:33] <jefferai> ah
[0:33] <jefferai> so when the crash is recovered, the node is brought back up to date
[0:40] <jefferai> so tell me if this idea is super crazy. Say I have a 12-disk system and will be using Ceph replication, so I don't need replication via e.g. RAID. Rather than specify a particular disk to host an OSD, what if I partitioned each disk into e.g. four partitions, created a btrfs volume across those (so that btrfs metadata would be replicated across the devices), and then put an OSD on top of each of those four btrfs volumes?
[0:41] <jefferai> that way losing a disk would still hose those volumes (but ceph would have two extra copies); I'd have replicated metadata, and the data would be striped (in addition to RBD striping) for faster access
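A rough sketch of one of the four volumes in that proposal, assuming the first partition of each of four disks forms the first volume (device names and mount point are assumptions; note that raid1 here mirrors metadata only, and losing a whole disk still degrades every volume built this way):

    # mirror metadata (-m raid1), stripe data (-d raid0) across one partition per disk
    mkfs.btrfs -m raid1 -d raid0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    mount /dev/sda1 /srv/osd.0    # hypothetical OSD data directory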
[0:42] * lofejndif (~lsqavnbok@659AAAKN1.tor-irc.dnsbl.oftc.net) Quit (Quit: Leaving)
[0:46] <dwm__> jefferai: I suspect the extra complexity may not be worth it.
[0:46] <jefferai> Ah
[0:46] <dwm__> jefferai: If it's something you can prototype quickly, might be worth comparing the relative performance..
[0:46] <jefferai> Right
[0:47] <dwm__> I suspect it'd be more important to ensure you have a fast journal device for each OSD.
[0:48] <jefferai> SSD should be good
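A minimal ceph.conf sketch for an SSD-backed journal, shown as a shell snippet ("osd journal" and "osd journal size" are the option names in use around this time; the partition path and size are assumptions):

    cat >> /etc/ceph/ceph.conf <<'EOF'
    [osd]
            osd journal size = 1024    ; in MB, used when the journal is a plain file
    [osd.0]
            osd journal = /dev/sdg1    ; assumed: a partition on the SSD
    EOF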
[0:56] * tnt_ (~tnt@87.67.189.55) Quit (Ping timeout: 480 seconds)
[0:59] <sagewk> stxshadow: still there?
[0:59] <sagewk> no :(
[0:59] * groovious (~Adium@64-126-49-62.dyn.everestkc.net) has joined #ceph
[1:01] * Jaykra (~Jamie@64-126-89-248.dyn.everestkc.net) has joined #ceph
[1:01] * Tv|work (~Tv_@aon.hq.newdream.net) Quit (Ping timeout: 480 seconds)
[1:10] * joao (~JL@ace.ops.newdream.net) Quit (Ping timeout: 480 seconds)
[1:16] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[1:22] * joao (~JL@89-181-145-13.net.novis.pt) has joined #ceph
[1:23] * mtk (~mtk@ool-44c35967.dyn.optonline.net) Quit (Remote host closed the connection)
[1:25] * Jaykra (~Jamie@64-126-89-248.dyn.everestkc.net) Quit (Quit: Leaving.)
[1:44] * bchrisman (~Adium@108.60.121.114) Quit (Quit: Leaving.)
[1:55] * joao (~JL@89-181-145-13.net.novis.pt) Quit (Ping timeout: 480 seconds)
[2:02] * groovious1 (~Adium@64-126-49-62.dyn.everestkc.net) has joined #ceph
[2:02] * dmick (~dmick@aon.hq.newdream.net) Quit (Quit: Leaving.)
[2:02] * joshd (~joshd@aon.hq.newdream.net) Quit (Quit: Leaving.)
[2:03] * groovious (~Adium@64-126-49-62.dyn.everestkc.net) Quit (Read error: Operation timed out)
[2:08] * Jaykra (~Jamie@64-126-89-248.dyn.everestkc.net) has joined #ceph
[2:57] * Jaykra (~Jamie@64-126-89-248.dyn.everestkc.net) Quit (Quit: Leaving.)
[3:12] <elder> 2012-03-13 16:59:40.695048 7f63f4994700 log [WRN] : old request osd_op(client.4115.1:5774 10000001809.00000000 [write 0~6722] 0.3a90daab snapc 1=[]) received at 2012-03-13 16:59:09.770256 currently waiting for sub ops
[3:12] <elder> What does it mean when I got lots of these in a log?
[3:22] * adjohn (~adjohn@rackspacesf.static.monkeybrains.net) Quit (Quit: adjohn)
[3:35] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[3:43] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[3:53] * Jaykra (~Jamie@64-126-89-248.dyn.everestkc.net) has joined #ceph
[3:54] * Jaykra (~Jamie@64-126-89-248.dyn.everestkc.net) Quit ()
[4:06] * chutzpah (~chutz@216.174.109.254) Quit (Quit: Leaving)
[5:16] <sage> elder: usually it means that one of your osds is going very slowly
[5:16] <elder> I got a LOT of them.
[5:16] <sage> elder: the 'waiting for sub_ops' mean one osd is waiting for another osd to ack a replicated op
[5:16] <elder> I think from just one OSD.
[5:17] <elder> The only reason I noticed is that it was busy copying my log file back here to MN over the wire and it took a while...
[5:17] <sage> elder: usually if you get any, you get a lot, because requests are piling up. maybe check dmesg to see if the underlying fs is wedged or something
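A quick triage sketch along those lines, run on the host of the suspect OSD (the osd id and log path are assumptions):

    dmesg | tail -n 50                 # btrfs/disk errors that could wedge the underlying fs?
    iostat -x 5 2                      # is the OSD's data disk saturated or stalled?
    tail -f /var/log/ceph/osd.2.log    # are 'old request ... waiting for sub ops' warnings still piling up?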
[5:17] <elder> OK. My tests seemed to pass regardless, just had a lot of output to share with me :)
[5:17] <sage> oh, yeah, that can take a while. if i were you i'd run teuthology from a box here in la
[5:18] <elder> Yeah I know, I should.
[5:18] <elder> But with all the chaos recently I have been trying to limit my variables until all is stable again.
[5:18] <elder> Things seem better now.
[5:18] <sage> you can also omit the --archive option, or just control-c once you see it's a success and get impatient... you just need to nuke after that to clean up any mess
[5:19] <elder> Meanwhile I've installed Ubuntu on my target box. It may be my crashdumps weren't working because I had a separate /boot partition. We'll see. I have to go to bed soon though.
[5:20] * adjohn (~adjohn@50-0-92-115.dsl.dynamic.sonic.net) has joined #ceph
[5:24] <sage> elder: btw last qa run was nearly clean.. just a lingering librgw linking issue, and rbd failing when osds are thrashing.
[5:24] <elder> Yip skip!
[5:25] <sage> need to check the console on those tomorrow to see what's going on, but everything else is passing (and the rbd/kclient + osd thrashing tests are new)
[5:25] <elder> Is that a "nightly" run?
[5:25] <sage> and kclient doesn't mind
[5:25] <sage> well i scheduled it midafternoon.
[5:25] <elder> But it's that group of tests?
[5:25] <sage> same suite tho
[5:25] <elder> OK.
[5:25] <sage> yeah
[5:25] <sage> ceph-qa-suite.git/suites/regression
[5:26] <elder> Great to hear. This exercises all my recent ceph-client commits as well, right?
[5:26] <elder> (testing branch)
[5:26] <sage> 147ad9e3a993733ed1adb91829dcb40f0431a3b4
[5:27] <elder> Yup.
[5:27] <sage> cool
[5:27] <elder> That's really great. I was pretty sure they were OK, but it's good to get that confirmed by testing. I'm always nervous until tests are run.
[5:28] <sage> yeah
[5:28] <elder> What time do they normally run at night?
[5:28] <elder> Will they go again?
[5:29] * sage (~sage@cpe-76-94-40-34.socal.res.rr.com) has left #ceph
[5:29] * sage (~sage@cpe-76-94-40-34.socal.res.rr.com) has joined #ceph
[5:29] <sage> noon and midnight
[5:37] <elder> I got a kernel core dump (sort of)
[5:37] <elder> Yay! It took a good few minutes though, and my display went dark, and I thought it was hung. I think I'll try it again and check in the morning.
[5:49] <sage> nice
[5:49] <sage> what does the receive end look like? does it copy it over ssh or something?
[5:53] <elder> Sorry. I'm only copying it locally so far. Once that works (once) I'll try it over the network.
[5:54] <elder> I just am not sure what's going on. It's taking like, minutes to do *something*. My disk light is pinned, and it sounds like it might be seeking. But it just doesn't seem like it should take this long to copy 2 GB of memory.
[5:55] <elder> Cursor doesn't move either. I think it's stuck, but I'll give it another 6-8 hours while I sleep to really finish all its hard work.
[5:55] <elder> Talk to you in the morning.
[6:01] <sage> 'night
[6:07] * gregaf (~Adium@aon.hq.newdream.net) Quit (Ping timeout: 480 seconds)
[6:09] * yehudasa_ (~yehudasa@aon.hq.newdream.net) Quit (Ping timeout: 480 seconds)
[6:09] * mkampe (~markk@aon.hq.newdream.net) Quit (Ping timeout: 480 seconds)
[6:09] * sjust1 (~sam@aon.hq.newdream.net) Quit (Ping timeout: 480 seconds)
[6:09] * sagewk (~sage@aon.hq.newdream.net) Quit (Ping timeout: 480 seconds)
[6:24] * yehudasa (~yehudasa@aon.hq.newdream.net) has joined #ceph
[6:24] * gregaf (~Adium@aon.hq.newdream.net) has joined #ceph
[6:24] * sjust (~sam@aon.hq.newdream.net) has joined #ceph
[6:26] * mkampe (~markk@aon.hq.newdream.net) has joined #ceph
[6:27] * sagewk (~sage@aon.hq.newdream.net) has joined #ceph
[6:39] * gregaf1 (~Adium@aon.hq.newdream.net) has joined #ceph
[6:44] * cattelan_away is now known as cattelan_away_away
[6:45] * gregaf (~Adium@aon.hq.newdream.net) Quit (Ping timeout: 480 seconds)
[7:10] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[7:37] * tnt_ (~tnt@55.189-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[7:45] * adjohn (~adjohn@50-0-92-115.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[7:52] * adjohn (~adjohn@50-0-92-115.dsl.dynamic.sonic.net) has joined #ceph
[7:56] * adjohn (~adjohn@50-0-92-115.dsl.dynamic.sonic.net) Quit ()
[7:59] * adjohn (~adjohn@50-0-92-115.dsl.dynamic.sonic.net) has joined #ceph
[8:00] * adjohn (~adjohn@50-0-92-115.dsl.dynamic.sonic.net) Quit ()
[8:24] * ivan\ (~ivan@108-213-76-179.lightspeed.frokca.sbcglobal.net) Quit (Read error: Operation timed out)
[8:30] * ivan\ (~ivan@108-213-76-179.lightspeed.frokca.sbcglobal.net) has joined #ceph
[8:45] * tnt_ (~tnt@55.189-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[8:58] * tnt_ (~tnt@212-166-48-236.win.be) has joined #ceph
[9:12] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[9:47] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[10:30] * guilhem1 (~spectrum@sd-20098.dedibox.fr) has left #ceph
[10:31] * guilhem1 (~spectrum@sd-20098.dedibox.fr) has joined #ceph
[10:40] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[11:39] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[11:44] * Eduard_Munteanu (~Eduard_Mu@188.25.92.188) has joined #ceph
[11:45] <Eduard_Munteanu> Hi.
[11:47] <Eduard_Munteanu> I was investigating a possible P2P-ish rsync to distribute data efficiently over regular internet connections to a few dozen clients.
[11:47] <Eduard_Munteanu> I had little success finding anything in that department. Would Ceph be a possible solution?
[11:48] <NaioN> No
[11:48] <NaioN> Ceph isn't built for that purpose
[11:48] <Eduard_Munteanu> I see. Any alternatives you could suggest? I can't really believe it hasn't been done yet :).
[11:49] <NaioN> http://en.wikipedia.org/wiki/List_of_file_systems
[11:49] <NaioN> see the section with peer-to-peer filesystems
[11:50] <NaioN> or something like PeerFS (under the distributed parallel fault-tolerant filesystems)
[11:50] <Eduard_Munteanu> I looked at those, but they didn't seem well-known or anything. They also seem to emphasize other aspects.
[11:50] <NaioN> The problem those fs's have to deal with is unreliable connections to each other
[11:51] <Eduard_Munteanu> I'm mainly interested in getting a more scalable rsync, the environment is pretty much trusted.
[11:51] <Eduard_Munteanu> e.g. more like BitTorrent.
[11:51] <Eduard_Munteanu> NaioN: yeah, fault tolerance would be nice but not a requirement here.
[11:52] <NaioN> well i think those are your best shot
[11:52] <Eduard_Munteanu> I just want to avoid subjecting the server to high network loads.
[11:54] <Eduard_Munteanu> I see, thanks. I'll investigate other stuff too, like xtreemfs, but I'm not really sure. So far only 'unison' came close to being a reasonable tool for that, but I'm unsure it does what I want.
[11:55] <Eduard_Munteanu> BTW, any general FS-related channel here on OFTC? (or Freenode)
[12:41] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[12:47] * Eduard_Munteanu (~Eduard_Mu@188.25.92.188) Quit (Ping timeout: 480 seconds)
[12:59] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[13:11] * cattelan_away_away (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[13:17] * cattelan_away_away (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) has joined #ceph
[13:23] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[13:25] * gregorg_taf (~Greg@78.155.152.6) has joined #ceph
[13:26] * gregorg (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[13:30] * stxShadow (~jens@p4FFFE7A7.dip.t-dialin.net) has joined #ceph
[13:59] <wonko_be> any idea what I did wrong here: log 2012-03-14 13:58:40.514712 osd.1 10.1.10.181:6800/26736 141 : [WRN] map e407 wrongly marked me down or wrong addr
[14:04] <stxShadow> was the osd restarted?
[14:12] <wonko_be> yes, all of them
[14:13] <stxShadow> then the messages should disappear after a few seconds
[14:18] <wonko_be> after a new restart of the osd, all is well again
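When that warning appears, the osd map's view of the daemon is the first thing to check; a minimal sketch (osd.1 is taken from the log line above):

    ceph osd stat                   # how many osds the map counts as up/in
    ceph osd dump | grep -w osd.1   # the address and up/down state the map holds for osd.1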
[14:19] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Quit: Ex-Chat)
[14:20] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[14:20] <wonko_be> anyone using chef here (not the developers)?
[14:33] <wonko_be> i have a chef cookbook for ceph that sets up and manages a cluster (expands the cluster as osd's are added, etc...)
[14:33] <wonko_be> a bit crude in some places, but if someone is interested, I'll share it
[14:35] * joao (~joao@di17.di.fct.unl.pt) has joined #ceph
[14:41] <guilhem1> wonko_be, I already work on it :) there is a pull request on github
[14:41] <guilhem1> maybe we can share between our 2 work
[14:41] <wonko_be> ah
[14:42] <wonko_be> i'll have a look at all the redundant work I did :)
[14:42] <wonko_be> i still need to extract the ceph cookbook from our general tree before i can put it online
[14:43] <guilhem1> Do you manage the ceph.conf ?
[14:43] <guilhem1> it's the part I don't do
[14:44] <wonko_be> guilhem1: yes, it does everything: authentication, crushmap reloading when adding osds, managing the keyrings
[14:44] <wonko_be> the ceph.conf is managed also
[14:44] <guilhem1> oh nice, your impl must be better than mine :)
[14:44] <wonko_be> just set a node to apply ceph::osd and it will pop up in the config files eventually on all nodes
[14:44] <guilhem1> I'm very interested
[14:45] <wonko_be> let me see to chop the tree from my main chef tree, and I'll push it on github
[14:45] <guilhem1> (And I will complete it for the radosgw part if you don't do it, that is the part I use the most)
[14:45] <wonko_be> i didn't do anything on the radosgw
[14:46] <guilhem1> nice :)
[14:46] <wonko_be> :)
[14:46] <wonko_be> it'll need some cleaning up and edge-case evasions also, but "It Works For Me"(tm)
[14:47] <guilhem1> I will work on it when you publish it
[14:49] <wonko_be> nice, I'll try to put it online either this evening, or tomorrow
[14:49] <wonko_be> lets get some other work done first
[14:51] * groovious1 (~Adium@64-126-49-62.dyn.everestkc.net) Quit (Quit: Leaving.)
[15:09] <joao> howdy all
[15:21] * oliver1 (~oliver@p4FFFE7A7.dip.t-dialin.net) has joined #ceph
[15:24] <oliver1> Hey, anybody out there who could explain the structure of an rbd header? After the last crash we have about 10 images with a:
[15:24] <oliver1> 2012-03-14 15:22:47.998790 7f45a61e3760 librbd: Error reading header: (2) No such file or directory
[15:24] <oliver1> error opening image vm-266-disk-1.rbd: (2) No such file or directory
[15:24] <oliver1> ... error?
[15:25] <oliver1> I understand the "rb.x.y" prefix, and the 16 (hex) as order 22, i.e. a 2^22-byte block size. But the size/count encoding is not intuitive ;)
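For anyone following along: the format-1 header lives in an <image>.rbd object in the pool, so on an intact image it can be pulled out and inspected directly. A hedged sketch (pool name "rbd" is an assumption):

    rados -p rbd get vm-266-disk-1.rbd /tmp/header.bin   # fetch the header object
    xxd /tmp/header.bin | head -20                       # the prefix, order, and size fields sit near the start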
[15:27] * cattelan_away_away (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[15:29] * cattelan_away_away (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) has joined #ceph
[15:44] * nhm waits for someone to build a ceph cluster out of these: http://www.pcconnectionexpress.com/IPA/Shop/Product/Detail.htm?sku=13371257&cac=BrandsItem&SourceID=k1971&cm_mmc=GAN-_-Slick%20Deals-_-120x60_Sony_HDV_Handy_Camcorder_Sm%26lightest-_-k1971&clickid=0004bb3500a20fcd0a2b760af3c44ad7
[15:52] <iggy> based on the atoms not being fast enough, I'd guess those are probably in the same boat
[15:52] <nhm> iggy: Sounds like the atoms are maybe starting to work better.
[15:53] <nhm> But yeah, those turion cores are probably not that much faster.
[15:54] <joao> nhm: that is way too mainstream to be a challenge :p
[15:55] <nhm> joao: fair enough. I think I have an old gateway "web computer" with a transmeta chip and integrated LCD display in it somewhere we could try.
[15:55] <iggy> lol, transmeta
[15:55] <joao> lol
[15:56] <nhm> I got it at a swapmeet nearly 10 years ago from some gateway guys for like $30.
[15:56] <joao> had you bought another two, we could have quorum in the cluster
[15:57] <nhm> hehe, they only had 1. I think it was some kind of preproduction model.
[15:57] <nhm> probably was supposed to be destroyed. :P
[15:58] <iggy> and they were selling it at a swap meet... nice
[15:59] <nhm> iggy: Yeah, looking back on it I'm guessing those guys were probably not supposed to be selling half the stuff they were selling. Was too young to really think about it at the time.
[15:59] <joao> those were the good times before leaked iphones and goons knocking down your door
[16:00] <elder> ARM V5 CPU 800Mhz and 128MB? How does that rate?
[16:00] <nhm> yeah, for all I know maybe Gateway just told them to get rid of the stuff they had.
[16:00] <nhm> joao/iggy: http://www.theinquirer.net/img/9545/gctp.jpg
[16:01] <nhm> I had debian running on mine.
[16:01] <iggy> elder: that's like a 4 year old pda
[16:01] <joao> that reminds me that I really need a coffee pot by the computer
[16:02] <joao> it would save me miles everyday
[16:26] * cattelan_away_away (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[16:28] * cattelan_away_away (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) has joined #ceph
[16:59] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:13] * cattelan_away_away (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[17:13] * Tv|work (~Tv_@aon.hq.newdream.net) has joined #ceph
[17:22] * tjikkun (~tjikkun@82-169-255-84.ip.telfort.nl) has joined #ceph
[17:27] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[17:29] * cattelan (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) has joined #ceph
[17:30] <sagewk> stxshadow: there?
[17:31] <stxShadow> yes
[17:32] <sagewk> is it possible that osd.2 is running a different version of ceph-osd?
[17:33] <stxShadow> hmmm .... no .... i dont think so .... i will have a look
[17:33] <stxShadow> just a moment
[17:35] <stxShadow> root@fcmsnode2:~# ceph -v
[17:35] <stxShadow> ceph version 0.43 (commit:9fa8781c0147d66fcef7c2dd0e09cd3c69747d37)
[17:35] <stxShadow> root@fcmsnode0:~# ceph -v
[17:35] <stxShadow> ceph version 0.43 (commit:9fa8781c0147d66fcef7c2dd0e09cd3c69747d37)
[17:35] <stxShadow> fcmsnode2 -> osd2
[17:36] <stxShadow> same version on nodes 3 and 4
[17:36] <stxShadow> we don't have node1 at the moment
[17:36] <sagewk> can you generate a log from osd.2 for us?
[17:37] <stxShadow> with debug 20 ?
[17:37] <sagewk> ceph tell osd.2 injectargs '--debug-osd 20 --debug-filestore 20 --debug-ms 1' and then restart osd.0 again?
[17:38] <stxShadow> yes .... i will do so .... but at the moment here in germany it's "prime time" .... the recovery would slow down the whole cluster ..... (yes .... i know .... it's not production ready ;))
[17:40] <stxShadow> at the moment we are fighting with corrupted rbd images ..... oliver wrote a mail to the mailing list a few hours ago
[17:40] <sagewk> stxshadow: can you open a bug and attach a copy of one of the header objects?
[17:40] <stxShadow> -> i will generate it later this evening ....
[17:41] <joao> hey sagewk, is there a sepia or a plana server where I can take a look at an osd directory hierarchy?
[17:41] <stxShadow> sagewk .... i will give your request to oliver .... he will do so
[17:41] <sagewk> stxshadow: the log you posted has 0.43-244-g98792e9 ... that's the reason you saw that missing assert failure.
[17:41] <oliver1> sagewk: there _is_ no header :-\
[17:42] <sjust> stxshadow: looks like osd0 and the primary were running different versions
[17:42] <sagewk> joao: just run vstart.sh on flak (or wherever) and look in dev/osd0/0.*
[17:42] * tnt_ (~tnt@212-166-48-236.win.be) Quit (Ping timeout: 480 seconds)
[17:42] <joao> kay
[17:42] <sagewk> oliver1: right...
[17:42] <stxShadow> sagewk ..... hmmmm ...... i'll ask oliver ...... he tried updating the failing osd in the hope it would recover
[17:42] <stxShadow> sagewk --> yes ... thats him
[17:43] <stxShadow> hmmm .... just for my understanding ..... one of the osds in the cluster is marked as "primary" ?
[17:43] <sagewk> stxshadow: for that particular pg (where you saw the failed assert), osd.2 is the primary and osd.0 is a replica
[17:44] <stxShadow> ah .... i see ....
[17:44] <sjust> stxShadow: each PG has a primary and some number of replicas at any one time (sorry for the confusion)
[17:44] <stxShadow> no problem .... i just have to understand :)
[17:44] <stxShadow> ok .... here is what happened:
[17:45] <stxShadow> osd.2 crashed yesterday evening ..... we could not kill the process and had to reboot
[17:45] <sagewk> stxshadow, oliver1: we haven't been seeing any lost objects. i wonder if the mixed versions have anything to do with it. did you see the lost headers before or after you tried running a new ceph-osd on osd.2?
[17:45] <stxShadow> then osd.0 crashed too .....
[17:45] * mgl_clown (~wircer@80stb15.codetel.net.do) has joined #ceph
[17:46] * mgl_clown (~wircer@80stb15.codetel.net.do) has left #ceph
[17:47] <sjust> stxShadow: do you have logs from those two crashes?
[17:48] <stxShadow> donna is on a telco with us in parallel .... oliver is taking part ..... so he will answer your questions in a few minutes
[17:50] <stxShadow> so i will continue with what happened
[17:51] <stxShadow> osd.0 crashed again and again at 0.79% --> then we tried to upgrade to the master branch (we hoped that this would fix the problem)
[17:52] <stxShadow> but the crash kept occurring ....
[17:52] * tnt_ (~tnt@55.189-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[17:52] * joshd (~joshd@aon.hq.newdream.net) has joined #ceph
[17:52] <stxShadow> this morning after getting up we hit : Bug #2160
[17:53] <stxShadow> this was solved by restarting all osds again
[17:54] <stxShadow> actually we are running our 300 vms on 3 ceph nodes
[17:54] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[17:54] <joshd> stxShadow: the osd memory usage you had yesterday seemed pretty high, are you using tcmalloc?
[17:55] <stxShadow> no .... don't think so .... the memory usage only occurred on osd.3
[17:56] <stxShadow> -> it rises every time scrubbing starts
[17:56] <joshd> yeah, that makes sense during scrubbing
[17:57] <joshd> there's some work to be done there to make it finer grained, so it uses less memory
[17:57] <joshd> but using tcmalloc should help a lot too
[17:57] <stxShadow> after scrubbing the memory drops down very slowly
[17:57] * gregaf1 (~Adium@aon.hq.newdream.net) Quit (Quit: Leaving.)
[17:57] <stxShadow> hmmm ..... i will try that .... osd.3 is our oldest node
[17:58] <stxShadow> maybe the newer nodes use tcmalloc
[17:58] <stxShadow> i will have a look for that too
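A one-liner to check whether a given ceph-osd build links against tcmalloc (the binary path is the usual install location, an assumption):

    ldd /usr/bin/ceph-osd | grep -i tcmalloc   # prints a libtcmalloc line if it's linked in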
[17:59] * gregaf (~Adium@aon.hq.newdream.net) has joined #ceph
[18:06] * dmick (~dmick@aon.hq.newdream.net) has joined #ceph
[18:10] * Theuni (~Theuni@12.43.173.22) has joined #ceph
[18:17] * Guest5293 (~chaos@hybris.inf.ug.edu.pl) Quit (Ping timeout: 480 seconds)
[18:23] * stxShadow (~jens@p4FFFE7A7.dip.t-dialin.net) Quit (Remote host closed the connection)
[18:23] * tnt_ (~tnt@55.189-67-87.adsl-dyn.isp.belgacom.be) Quit (Read error: Connection reset by peer)
[18:27] * tnt_ (~tnt@148.47-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[18:31] * oliver1 (~oliver@p4FFFE7A7.dip.t-dialin.net) has left #ceph
[18:31] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[18:46] * chaos_ (~chaos@hybris.inf.ug.edu.pl) has joined #ceph
[18:51] * joao (~joao@di17.di.fct.unl.pt) Quit (Quit: joao)
[18:52] * adjohn (~adjohn@rackspacesf.static.monkeybrains.net) has joined #ceph
[19:20] * cattelan is now known as cattelan_away
[19:28] <sagewk> i wonder if we should create a 'scratch' or 'tmp' pool by default, with the same pg count as data etc, just so we can rados bench without leaving cruft around.
[19:29] <sagewk> or make a rados bench cleanup operation that lists objects and removes its crap
[19:29] <nhm> sagewk: oh, it just leaves stuff laying around?
[19:30] <nhm> I confess I haven't looked yet.
[19:30] <sagewk> nhm: yeah
[19:30] * perplexed (~ncampbell@216.113.168.141) has joined #ceph
[19:30] <sagewk> but its a really useful tool to have people run in the field..
[19:31] <nhm> indeed
[19:31] <joshd> sagewk: why not make write clean up after itself, and read create the data to read first?
[19:32] <Tv|work> do its object names have some fixed prefix?
[19:32] <sagewk> joshd: it could, but invariably it'll get interrupted and leave stuff around
[19:32] <sagewk> the prefix is hostname right now, but we can stick more on the front easily enough
[19:33] <Tv|work> sagewk: it's a devil both ways.. without a prefix, you could even wreck customer data with radosbench; with one, its behavior is different from purely random keys
[19:33] <Tv|work> separate pool sounds safe ;)
[19:33] <sagewk> i guess we want (1) cleanup after write, and (2) slow search+cleanup in case it didn't complete and left crap around
[19:33] <sagewk> the prefix _shouldn't_ affect placement since it's all hashed
[19:34] <sagewk> brb
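Until such cleanup exists, an expendable pool works as a scratch area by hand; a minimal sketch (the pool name is arbitrary, and the exact rmpool confirmation syntax varies by version):

    rados mkpool benchtmp              # scratch pool just for benchmarking
    rados -p benchtmp bench 60 write   # 60-second write benchmark
    rados rmpool benchtmp              # drop the pool, and the bench objects with it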
[19:39] * chutzpah (~chutz@216.174.109.254) has joined #ceph
[19:40] <Tv|work> sagewk: i'm thinking inside the osd
[19:41] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:42] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[19:49] * guilhem1 (~spectrum@sd-20098.dedibox.fr) has left #ceph
[20:00] * Theuni (~Theuni@12.43.173.22) Quit (Quit: Leaving.)
[20:02] * Theuni (~Theuni@12.43.173.22) has joined #ceph
[20:02] * joao (~JL@89-181-145-13.net.novis.pt) has joined #ceph
[20:03] <sagewk> we could do a scratch/temp pool, but also cleanup, so you can test the performance of any given pool (which may have different layout etc).
[20:04] * nhm (~nh@68.168.168.19) Quit (Ping timeout: 480 seconds)
[20:17] <elder> sagewk, Tv|work, or someone... I have a C source file and I'm trying to figure out how to add it to what's built under ceph/src. It's a maze of twisty little AC stuff. Is there a simple way to add "cc $CFLAGS -o $file $file.c" somewhere?
[20:17] <elder> Or should I leave it to someone who has mastered this stuff?
[20:17] <Tv|work> vpn service blipping in & out as i change the config, sorry.. if you have trouble, restart openvpn
[20:17] <Tv|work> elder: you gotta play by automake rules.. perhaps sjust has time to help you?
[20:19] <elder> Perhaps sjust can.
[20:19] <elder> Perhaps sjust cannot.
[20:20] <elder> But only sjust can answer that.
[20:20] <elder> If I can get the attention of sjust.
[20:20] <Tv|work> lunch time..
[20:20] <elder> Oh.
[20:20] <elder> OK, well I can wait, it's just polish really, but I thought I'd try to get it built if I can.
[20:23] * adjohn (~adjohn@rackspacesf.static.monkeybrains.net) Quit (Quit: adjohn)
[20:29] * verwilst (~verwilst@dD5769628.access.telenet.be) has joined #ceph
[20:31] * ghaskins (~ghaskins@68-116-192-32.dhcp.oxfr.ma.charter.com) has left #ceph
[20:32] * BManojlovic (~steki@212.200.240.216) has joined #ceph
[20:57] * ghaskins (~ghaskins@68-116-192-32.dhcp.oxfr.ma.charter.com) has joined #ceph
[20:57] * ghaskins (~ghaskins@68-116-192-32.dhcp.oxfr.ma.charter.com) Quit (Remote host closed the connection)
[20:57] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[21:00] * Theuni (~Theuni@12.43.173.22) Quit (Ping timeout: 480 seconds)
[21:11] * Theuni (~Theuni@12.43.173.22) has joined #ceph
[21:20] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) Quit (Quit: Leaving)
[21:21] * nhm (~nh@68.168.168.19) has joined #ceph
[21:24] <sjust> elder: I can help
[21:24] <elder> OK, well all I have is a single source file, and my thought was I'd put it under ceph/src
[21:24] <elder> It's a C file, so a simple "cc" command builds it.
[21:25] <sjust> what sort of thing is it?
[21:25] <elder> It's a simple command that extracts some information identifying the content of a kernel core dump (kdump) file.
[21:26] <elder> # /tmp/kdump_info dump.201203132346
[21:26] <elder> DUMP_FILE="dump.201203132346"
[21:26] <elder> DUMP_TYPE="KDUMP V4"
[21:26] <elder> DUMP_SYSNAME="Linux"
[21:26] <elder> DUMP_NODENAME="testor"
[21:26] <elder> DUMP_RELEASE="3.0.0-16-generic"
[21:26] <elder> DUMP_VERSION="#29-Ubuntu SMP Tue Feb 14 12:48:51 UTC 2012"
[21:26] <elder> DUMP_MACHINE="x86_64"
[21:26] <elder> DUMP_DOMAINNAME="(none)"
[21:26] <elder> DUMP_TIMESTAMP="1331700359.000000"
[21:26] <elder> DUMP_TIME="Tue Mar 13 23:45:59 2012"
[21:26] <elder> #
[21:26] <sjust> name of file?
[21:26] <elder> # wc src/kdump_info.c
[21:26] <elder> 153 543 4615 src/kdump_info.c
[21:27] <elder> I.e., src/kdump_info.c
[21:27] <sjust> try adding this to src/Makefile.am:
[21:27] <sjust> diff --git a/src/Makefile.am b/src/Makefile.am
[21:27] <sjust> index 82154df..432e21d 100644
[21:27] <sjust> --- a/src/Makefile.am
[21:27] <sjust> +++ b/src/Makefile.am
[21:27] <sjust> @@ -96,6 +96,9 @@ gceph_CXXFLAGS = ${AM_CXXFLAGS} $(GTKMM_CFLAGS) \
[21:27] <sjust> bin_PROGRAMS += gceph
[21:27] <sjust> endif
[21:27] <sjust>
[21:27] <sjust> +kdump_info_SOURCES = kdump_info.c
[21:27] <sjust> +bin_PROGRAMS += kdump_info
[21:27] <sjust> +
[21:27] <sjust> ceph_conf_SOURCES = ceph_conf.cc
[21:27] <sjust> ceph_conf_LDADD = $(LIBGLOBAL_LDA)
[21:27] <sjust> ceph_authtool_SOURCES = ceph_authtool.cc
[21:27] <elder> OK.
[21:27] <Tv|work> sjust: -1 on the name! ceph-*
[21:27] <sjust> from there, make should build it with everything else
[21:27] <sagewk> (presumably sleeping) joao: i just triggered the btrfs bug!
[21:27] <sjust> ah, right, darn
[21:28] <sjust> one sec
[21:28] <sjust> diff --git a/src/Makefile.am b/src/Makefile.am
[21:28] <sjust> index 82154df..62fed10 100644
[21:28] <sjust> --- a/src/Makefile.am
[21:28] <sjust> +++ b/src/Makefile.am
[21:28] <sjust> @@ -96,6 +96,9 @@ gceph_CXXFLAGS = ${AM_CXXFLAGS} $(GTKMM_CFLAGS) \
[21:28] <sjust> bin_PROGRAMS += gceph
[21:28] <sjust> endif
[21:28] <sjust>
[21:28] <sjust> +ceph-kdump-info_SOURCES = kdump_info.c
[21:28] <sjust> +bin_PROGRAMS += ceph-kdump-info
[21:28] <sjust> +
[21:28] <sjust> ceph_conf_SOURCES = ceph_conf.cc
[21:28] <sjust> ceph_conf_LDADD = $(LIBGLOBAL_LDA)
[21:28] <sjust> ceph_authtool_SOURCES = ceph_authtool.cc
[21:29] <sjust> one sec
[21:30] <sjust> diff --git a/src/Makefile.am b/src/Makefile.am
[21:30] <sjust> index 82154df..6f73a66 100644
[21:30] <sjust> --- a/src/Makefile.am
[21:30] <sjust> +++ b/src/Makefile.am
[21:30] <sjust> @@ -96,6 +96,9 @@ gceph_CXXFLAGS = ${AM_CXXFLAGS} $(GTKMM_CFLAGS) \
[21:30] <sjust> bin_PROGRAMS += gceph
[21:30] <sjust> endif
[21:30] <sjust>
[21:30] <sjust> +ceph_kdump_info_SOURCES = kdump_info.c
[21:30] <sjust> +bin_PROGRAMS += ceph-kdump-info
[21:30] <sjust> +
[21:30] <sjust> ceph_conf_SOURCES = ceph_conf.cc
[21:30] <sjust> ceph_conf_LDADD = $(LIBGLOBAL_LDA)
[21:30] <sjust> ceph_authtool_SOURCES = ceph_authtool.cc
[21:30] <sjust> ok, that one oughta work
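With those Makefile.am lines in place, the binary should fall out of the normal autotools build; a sketch from the top of the tree (do_autogen.sh -d3 is the invocation elder uses below):

    ./do_autogen.sh -d3              # regenerate the build system, configure with debug flags
    cd src && make ceph-kdump-info   # automake emits a per-program target named after the binary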
[21:35] * The_Bishop (~bishop@178-17-163-220.static-host.net) Quit (Ping timeout: 480 seconds)
[21:36] * The_Bishop (~bishop@178-17-163-220.static-host.net) has joined #ceph
[21:37] * cattelan_away is now known as cattelan
[21:38] <elder> How do I make clean in the ceph repository?
[21:41] <elder> I'm getting errors when I run: ./do_autogen.sh -d3
[21:42] <elder> (I'm getting the errors even with a clean ceph/master tree)
[21:42] <elder> sjust, do you konw?
[21:48] * cp (~cp@209.49.63.97) has joined #ceph
[21:51] <sjust> make distclean, I think
[21:51] <sjust> or git clean -fdx
[21:52] * jluis (~JL@89.181.145.13) has joined #ceph
[21:53] <joao> sagewk, still there?
[21:53] <joao> I read something about you triggering the bug :D
[21:53] <joao> (I never thought I could be this excited when it comes to bugs)
[21:56] <elder> sjust, with a clean repository, up-to-date master branch, I run "./do_autogen -d3" and it ends up with a problem. Do you see that?
[21:56] <elder> Makefile.am:182: `lib/libgtest.a' is not a standard libtool library name
[21:56] <elder> Do I need to install something new? I'm sure I wasn't seeing this before.
[21:57] * adjohn (~adjohn@50.56.129.169) has joined #ceph
[21:58] <Tv|work> elder: what exactly did you run?
[21:58] <elder> ./do_autogen -d3
[21:58] <Tv|work> including the clean and everything after that
[21:58] <dmick> ./do_autogen.sh, I assume?...
[21:58] <elder> Yes.
[21:59] * lofejndif (~lsqavnbok@09GAADW91.tor-irc.dnsbl.oftc.net) has joined #ceph
[22:00] <dmick> 1) works for me 2) I don't have any libgtest.a (for what little it's worth)
[22:00] <sagewk> joao: yeah, plana10
[22:00] <sagewk> check dmesg
[22:01] <elder> Trying it on another machine.
[22:01] <elder> autoreconf: running: /usr/bin/autoheader --force
[22:01] <elder> autoreconf: running: automake --add-missing --copy --force-missing
[22:01] <elder> Makefile.am:182: `lib/libgtest.a' is not a standard libtool library name
[22:01] <elder> Makefile.am:182: did you mean `lib/libgtest.la'?
[22:01] <elder> Makefile.am:182: `lib/libgtest_main.a' is not a standard libtool library name
[22:01] <elder> Makefile.am:182: did you mean `lib/libgtest_main.la'?
[22:01] <elder> autoreconf: Leaving directory `.'
[22:01] <elder> autoreconf: `configure.ac' or `configure.in' is required
[22:01] <elder> autogen failed
[22:05] <elder> I suppose I could have installed something that conflicts with the existing build process. I've been tweaking things in order to get kdump to go.
[22:06] <sagewk> joao: /tmp/sdc.dump
[22:06] <Tv|work> elder: did you run make distclean or git clean?
[22:07] <joao> sagewk, doesn't seem I have access to plana10 (or any other plana for that matter)
[22:07] <elder> Well, both.
[22:07] <sagewk> joao: try from metropolis
[22:07] <sagewk> or restart your vpn
[22:07] <elder> Actually, I did a git status because I didn't want it to blow away my patches.
[22:07] <dmick> joao: restart vpn, probably
[22:07] <joao> I am able to connect to it
[22:07] <Tv|work> elder: sounds like you have junk files still in your tree
[22:08] <Tv|work> elder: "git clean -ndx" will list them
[22:08] <joao> jecluis@Magrathea:~/Code/dreamhost/ceph$ ssh joao@plana10.front.sepia.ceph.com -i ~/.ssh/ndn_dsa
[22:08] <joao> joao@plana10.front.sepia.ceph.com's password:
[22:08] <sagewk> ubuntu@
[22:08] <dmick> ubuntu
[22:08] <joao> oh
[22:08] <elder> That's a problem with putting too much in your .gitignore file...
[22:08] <joao> should I need a password?
[22:09] <sagewk> no
[22:09] <joao> then something has terribly gone wrong :p
[22:09] <Tv|work> elder: no it isn't; .a and .la *belong* in .gitignore, yet they demonstrably break a build if you merge automake changes with generated files in the tree
[22:10] <dmick> joao: I will help you privately
[22:10] <Tv|work> elder: automake is just broken in a way that it doesn't handle arbitrary changes to a partially built tree
[22:11] <elder> OK, I cleaned the hell out of my tree and still get the same thing.
[22:12] <Tv|work> elder: pastebin results of "git rev-parse HEAD; git status; git clean -ndx"
[22:12] <elder> {2802} elder@speedy-> git rev-parse HEAD
[22:12] <elder> 8c96fd26d6516571388a59a428016abd5a434005
[22:12] <elder> {2803} elder@speedy-> git status
[22:12] <elder> # On branch master
[22:12] <elder> nothing to commit (working directory clean)
[22:12] <elder> {2804} elder@speedy-> git clean -ndx
[22:12] <elder> {2805} elder@speedy->
[22:14] <elder> http://pastebin.com/SCiYbLZe
[22:16] <dmick> sage I'll take care of fixing joao's keys in the chef if you like
[22:16] <dmick> sagewk ^
[22:16] <Tv|work> elder: git submodule init && git submodule update && echo 'blame sjust ;)'
[22:16] <sjust> sounds right
[22:16] <joao> dmick, just curious, what is the chef?
[22:16] <elder> What is the git submodule stuff doing?
[22:16] <dmick> multiserver setup/config framework so your key gets distributed everywhere
[22:17] <sagewk> dmick: oops too late :)
[22:17] <dmick> takes you longer to read the IM than it does to type the 10 commands :)
[22:18] <elder> Tv|work, that's better. sjust, I blame you (as instructed)
[22:30] <elder> OK, so sjust your suggested change to the Makefile.am works, it produces "src/ceph-kdump-info"
[22:30] <elder> Thanks.
[22:30] <sjust> cool
[22:31] <sjust> is that what you were looking for?
[22:31] <elder> I think so.
[22:32] <elder> Yes, as far as I understand things. I'm not sure what more to do with this thing at this point, but I wanted to stash it away somewhere.
[22:33] * perplexed_ (~ncampbell@216.113.168.141) has joined #ceph
[22:40] * perplexed (~ncampbell@216.113.168.141) Quit (Ping timeout: 480 seconds)
[22:40] * perplexed_ is now known as perplexed
[22:42] <Tv|work> the dell out-of-band remote console reminds me of DESQview on the i386s ages ago.. if the modifier keys don't work, slam them all repeatedly for 5 seconds, then try again
[22:43] * lofejndif (~lsqavnbok@09GAADW91.tor-irc.dnsbl.oftc.net) Quit (Quit: Leaving)
[22:44] <nhm> Tv|work: oh, are you using their DRAC stuff?
[22:44] <nhm> Tv|work: we always just used ipmi.
[22:50] <darkfader> Tv|work: if you wanna get really depressed, read the protocol specs for "smash-sh", the stuff underneath DRAC
[22:50] <darkfader> otherwise do as nhm says ;p
[22:57] <Tv|work> nhm: drac has more power than ipmi; i'll use both
[22:57] * Theuni (~Theuni@12.43.173.22) Quit (Remote host closed the connection)
[22:57] * Theuni (~Theuni@12.43.173.22) has joined #ceph
[22:58] * lofejndif (~lsqavnbok@83TAAD31Y.tor-irc.dnsbl.oftc.net) has joined #ceph
[23:02] <elder> sagewk, I'm not acting on this: http://tracker.newdream.net/issues/2174
[23:03] <elder> because as I understand it we need to reproduce it with access to console to see what's going on.
[23:03] <elder> But also because of this, I'm not pushing out the master branch until we get that resolved/understood.
[23:04] <elder> Back in a abit.
[23:07] <Tv|work> ... aaand the password i left on this system, some 2 weeks ago, was "ubuntu"
[23:07] <Tv|work> *headdesk*
[23:07] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Quit: LarsFronius)
[23:08] <Tv|work> anyway, i have an automatically-reinstalled plana box.. now to fiddle with the details
[23:10] <dmick> Tv|work: sweet
[23:10] <dmick> RAID Level : Primary-5, Secondary-0, RAID Level Qualifier-3
[23:11] <dmick> wow. so it's RAID 503. <sigh>
[23:11] * perplexed (~ncampbell@216.113.168.141) Quit (Remote host closed the connection)
[23:11] <Tv|work> the sad truth is, i did this more than 2 weeks ago.. everything since then has been an interruption
[23:11] * perplexed (~ncampbell@216.113.168.141) has joined #ceph
[23:13] <Tv|work> RAID3? oh wow.
[23:13] <Tv|work> I might actually never have seen a RAID3.
[23:18] <dmick> I suppose this is probably "RAID5{+}0"
[23:19] <nhm> yeah, that's crazy
[23:19] <nhm> did Dell ship it that way?
[23:21] <darkfader> can i ask a ceph-on-old-kernels type question?
[23:21] * LarsFronius (~LarsFroni@f054109189.adsl.alicedsl.de) has joined #ceph
[23:21] <dmick> nhm: I believe so
[23:22] <nhm> silly dell
[23:23] <nhm> dmick: sometimes with those LSI controllers you have to install a different firmware to make them behave like normal SATA adapters.
[23:24] <dmick> yeah, I think the current plan is "almost-JBOD"
[23:24] <nhm> dmick: look for "Initiator Target" firmwares
[23:25] <dmick> I'm not going to update firmware, I don't think
[23:26] <dmick> darkfader: you can certainly ask
[23:27] <dmick> the issue is whether you get a useful answer or not
[23:27] <darkfader> dmick: hehe, i'm aware of that but i wasn't in a hurry :)
[23:28] <darkfader> i have just tried oraclevm3 for my xen boxes, and flashcache built in like a minute. so i'd be tempted to make rbd work; they have a 2.6.32-21 kernel
[23:29] <darkfader> so like squeeze roughly
[23:30] <darkfader> each host will run an osd or two i think
[23:30] <darkfader> alternatively i just wait another year till i move to something with 3.x kernel
[23:32] <joshd> darkfader: not really sure how much work is required; last time I asked yehudasa, he thought a 2.6.30+ kernel wouldn't be that bad to backport to
[23:36] * Theuni1 (~Theuni@12.43.172.10) has joined #ceph
[23:36] * Theuni (~Theuni@12.43.173.22) Quit (Ping timeout: 480 seconds)
[23:40] * Theuni1 (~Theuni@12.43.172.10) Quit ()
[23:41] <darkfader> joshd: thanks :)
[23:42] * jluis (~JL@89.181.145.13) Quit (Quit: Leaving)
[23:44] <sagewk> elder: http://linux.die.net/man/8/netdump
[23:45] <sagewk> elder: is that what you were using?
[23:46] <sagewk> elder: oh, no. netdump looks more attractive because you avoid the need for the crashed machine to reboot to recover the crash info
[23:48] * perplexed_ (~ncampbell@216.113.168.141) has joined #ceph
[23:53] <elder> I did not use netdump. I looked at it, but it was related to kdump and I figured it wouldn't be hard to do. But in the end (when I switched to Ubuntu) the instructions I had were about kdump.
[23:54] <elder> I think they use the same safety mechanism, namely kexec, to avoid having the broken kernel be the one that writes out data.
[23:55] * perplexed (~ncampbell@216.113.168.141) Quit (Ping timeout: 480 seconds)
[23:55] <elder> When you boot the machine, kexec loads up a separate kernel, ready to be jumped to, in some reserved memory. When a panic occurs (or whatever) control jumps to that kernel, whose only job it is to copy out memory to disk (or, via netdump to a remote netdump server)
[23:56] <elder> The other reason I didn't use netdump was that it wasn't available for Ubuntu. I had to get the source RPM, figure out how to get rpm tools installed on my Ubuntu system, and so on. I started down that path, but libraries were missing, header files were not there, and so on. So I set that aside for the time being.
[23:56] * perplexed_ (~ncampbell@216.113.168.141) Quit (Ping timeout: 480 seconds)
[23:56] <elder> I believe though that a netdump server is simply a service with an attached storage repository; it doesn't do much other than wait for and handle incoming dump requests.
[23:58] <elder> I believe that kdump with proper configuration can have its dump file(s) sent to a netdump server.
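A hedged sketch of the Ubuntu-side kdump setup elder describes (the package name and the 128M reservation are assumptions that fit this era):

    apt-get install linux-crashdump    # pulls in kexec-tools and kdump-tools
    # reserve memory for the capture kernel: add crashkernel=128M to
    # GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
    update-grub && reboot
    grep crashkernel /proc/cmdline     # confirm the reservation took effect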

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.