#ceph IRC Log


IRC Log for 2012-12-31

Timestamps are in GMT/BST.

[0:05] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[0:45] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) Quit (Remote host closed the connection)
[0:52] * themgt (~themgt@71-90-234-152.dhcp.gnvl.sc.charter.com) Quit (Quit: themgt)
[0:54] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[0:54] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[0:55] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) Quit (Remote host closed the connection)
[1:04] * MilesF (~chatzilla@pool-71-184-234-147.bstnma.fios.verizon.net) has joined #ceph
[1:08] * MilesF (~chatzilla@pool-71-184-234-147.bstnma.fios.verizon.net) Quit (Remote host closed the connection)
[1:40] * Kioob (~kioob@luuna.daevel.fr) Quit (Remote host closed the connection)
[1:41] * Kioob (~kioob@luuna.daevel.fr) has joined #ceph
[1:44] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[2:18] * themgt (~themgt@71-90-234-152.dhcp.gnvl.sc.charter.com) has joined #ceph
[3:24] * LeaChim (~LeaChim@5ad684ae.bb.sky.com) Quit (Ping timeout: 480 seconds)
[3:53] * themgt_ (~themgt@71-90-234-152.dhcp.gnvl.sc.charter.com) has joined #ceph
[4:00] * themgt (~themgt@71-90-234-152.dhcp.gnvl.sc.charter.com) Quit (Ping timeout: 480 seconds)
[4:00] * themgt_ is now known as themgt
[4:24] * astalsi (~astalsi@c-69-255-38-71.hsd1.md.comcast.net) Quit (Read error: Connection reset by peer)
[6:39] * themgt (~themgt@71-90-234-152.dhcp.gnvl.sc.charter.com) Quit (Quit: themgt)
[6:55] * maxia (~rolson@114.91.108.175) Quit (Quit: Leaving)
[8:10] * themgt (~themgt@71-90-234-152.dhcp.gnvl.sc.charter.com) has joined #ceph
[8:23] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[8:29] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[8:30] * themgt (~themgt@71-90-234-152.dhcp.gnvl.sc.charter.com) Quit (Quit: themgt)
[8:31] * loicd (~loic@magenta.dachary.org) has joined #ceph
[8:39] * lx0 is now known as lxo
[8:47] * sleinen (~Adium@2001:620:0:26:e5b0:70e:ab95:c597) Quit (Quit: Leaving.)
[9:03] * verwilst (~verwilst@d5152FEFB.static.telenet.be) has joined #ceph
[9:15] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[9:19] * nyeates (~nyeates@pool-173-59-239-231.bltmmd.fios.verizon.net) Quit (Quit: Zzzzzz)
[9:38] * Leseb (~Leseb@2001:980:759b:1:6996:33c8:857f:731d) has joined #ceph
[9:43] * Leseb_ (~Leseb@193.172.124.196) has joined #ceph
[9:50] * Leseb (~Leseb@2001:980:759b:1:6996:33c8:857f:731d) Quit (Ping timeout: 480 seconds)
[9:50] * Leseb_ is now known as Leseb
[9:51] * nyeates (~nyeates@pool-173-59-239-231.bltmmd.fios.verizon.net) has joined #ceph
[9:55] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[9:55] * loicd (~loic@magenta.dachary.org) has joined #ceph
[9:56] * stxShadow (~Jens@ip-178-201-147-146.unitymediagroup.de) has joined #ceph
[10:05] * verwilst (~verwilst@d5152FEFB.static.telenet.be) Quit (Read error: Connection reset by peer)
[10:06] * verwilst (~verwilst@d5152FEFB.static.telenet.be) has joined #ceph
[10:07] * stxShadow (~Jens@ip-178-201-147-146.unitymediagroup.de) has left #ceph
[10:18] * Leseb_ (~Leseb@193.172.124.196) has joined #ceph
[10:18] * Leseb (~Leseb@193.172.124.196) Quit (Read error: Connection reset by peer)
[10:18] * Leseb_ is now known as Leseb
[10:31] * sleinen (~Adium@217-162-132-182.dynamic.hispeed.ch) has joined #ceph
[10:33] * sleinen1 (~Adium@2001:620:0:26:50f5:6260:efb:3760) has joined #ceph
[10:39] * sleinen (~Adium@217-162-132-182.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[10:50] * ScOut3R (~ScOut3R@2E6BADF9.dsl.pool.telekom.hu) has joined #ceph
[11:00] <renzhi> what issue could we run into if we run 2 clusters side by side?
[11:00] <renzhi> will they mess with each other, in terms of messaging, etc?
[11:06] <joao> theoretically, as long as you have two different sets of everything, including monitors, and each set has the right configuration, everything should work, although it could obviously impose extra load on your network
[11:07] <joao> can't think of a practical issue off the top of my head
[11:08] <renzhi> joao: yes, each cluster with its own mons, config files, and osds, etc. But they just run on the same network.
[11:08] <renzhi> ceph seems to be quite chatty, would that be an issue?
[11:11] <joao> renzhi, afaik, all communication is made with stateful tcp connections; as long as you don't screw the config up, and everyone knows where their cluster counterparts are, everything should be fine; but just because I'm not aware of any obvious issues, it doesn't mean there are none ;)
[11:12] <joao> an email to ceph-devel might prove more fruitful
[11:12] <joao> maybe someone else tried that before?
[11:12] <renzhi> ok
[11:14] <joao> fwiw, we do run multiple clusters in qa; several test runs, and our own test deployments, are made on the same network, but always on different servers
[11:14] * ScOut3R (~ScOut3R@2E6BADF9.dsl.pool.telekom.hu) Quit (Remote host closed the connection)
[11:14] <renzhi> joao: ok, that sounds a bit reassuring
[11:15] <joao> teuthology does the heavy configuration lifting though :)
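
A minimal sketch of the setup joao describes, with hypothetical fsids and addresses: each cluster keeps its own ceph.conf, its own fsid, and its own monitors, and clients pick a cluster by pointing at the right file.

    # /etc/ceph/cluster-a.conf (hypothetical)
    [global]
        fsid = 11111111-1111-1111-1111-111111111111
        mon host = 192.0.2.11, 192.0.2.12, 192.0.2.13

    # /etc/ceph/cluster-b.conf (hypothetical)
    [global]
        fsid = 22222222-2222-2222-2222-222222222222
        mon host = 192.0.2.21, 192.0.2.22, 192.0.2.23

    # select a cluster explicitly on the client side
    ceph -c /etc/ceph/cluster-a.conf -s
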
[11:33] * andret (~andre@pcandre.nine.ch) Quit (Ping timeout: 480 seconds)
[11:42] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[11:42] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[11:45] * stxShadow (~Jens@ip-178-201-147-146.unitymediagroup.de) has joined #ceph
[12:13] * LeaChim (~LeaChim@5ad684ae.bb.sky.com) has joined #ceph
[12:13] * houkouonchi-home (~linux@fios.houkouonchi.jp) Quit (Ping timeout: 480 seconds)
[12:22] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[13:04] * Leseb (~Leseb@193.172.124.196) Quit (Quit: Leseb)
[13:06] * ScOut3R (~ScOut3R@2E6BADF9.dsl.pool.telekom.hu) has joined #ceph
[13:08] * houkouonchi-home (~linux@pool-108-38-63-38.lsanca.fios.verizon.net) has joined #ceph
[13:08] * ScOut3R (~ScOut3R@2E6BADF9.dsl.pool.telekom.hu) Quit (Remote host closed the connection)
[13:09] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[13:15] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[13:18] * nyeates (~nyeates@pool-173-59-239-231.bltmmd.fios.verizon.net) Quit (Quit: Zzzzzz)
[13:24] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) has joined #ceph
[13:32] * Leseb (~Leseb@193.172.124.196) has joined #ceph
[13:50] * sleinen1 (~Adium@2001:620:0:26:50f5:6260:efb:3760) Quit (Quit: Leaving.)
[13:50] * tezra (~rolson@116.226.64.176) has joined #ceph
[13:50] * sleinen (~Adium@217-162-132-182.dynamic.hispeed.ch) has joined #ceph
[13:50] <tezra> write performance to ceph during "repair" mode is awful, is there some way around it?
[13:51] <tezra> Like tweaking a variable or something... it is timing out a lot
[13:51] <tezra> like it just gives up
[13:52] <tezra> writing a 20MB file via rados command line bails a lot
[13:52] <tezra> it's hurting
[13:56] * stwind (~stwind@116.226.64.176) has joined #ceph
[13:58] * madkiss1 (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[13:58] * sleinen (~Adium@217-162-132-182.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[14:00] * verwilst (~verwilst@d5152FEFB.static.telenet.be) Quit (Ping timeout: 480 seconds)
[14:02] * houkouonchi-home (~linux@pool-108-38-63-38.lsanca.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[14:04] * madkiss2 (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[14:04] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[14:04] * madkiss1 (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Read error: No route to host)
[14:13] * verwilst (~verwilst@d5152FEFB.static.telenet.be) has joined #ceph
[14:20] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[14:20] * loicd (~loic@magenta.dachary.org) has joined #ceph
[14:23] <tezra> any ceph pros around?
[14:28] * Leseb (~Leseb@193.172.124.196) Quit (Quit: Leseb)
[14:41] * stwind (~stwind@116.226.64.176) Quit (Quit: stwind)
[14:51] * Leseb (~Leseb@193.172.124.196) has joined #ceph
[14:53] * glowell1 (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[14:53] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) has joined #ceph
[15:00] <iggy> tezra: most of them are US east coast which is 6am right now, so they will probably be asleep for a few more hours
[15:00] <madkiss2> and then they might even want to celebrate … :)
[15:01] <iggy> I don't know... most people I know have to work today and are off tomorrow
[15:01] <iggy> that's the only reason I'm awake at this ungodly hour
[15:01] <madkiss2> tezr
[15:02] <madkiss2> err
[15:02] <madkiss2> tezra: so what do you need?
[15:03] <iggy> I'm guessing better write performance during recovery
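
A sketch of the kind of tuning iggy is hinting at, assuming the slowdown comes from recovery/backfill traffic; option names and availability vary by Ceph version, and the values are only illustrative.

    [osd]
        osd recovery max active = 1    # fewer concurrent recovery ops per OSD
        osd max backfills = 1          # if the running version supports it

    # or injected at runtime without a restart (old-style syntax)
    ceph osd tell \* injectargs '--osd-recovery-max-active 1'
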
[15:18] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[15:18] * loicd (~loic@magenta.dachary.org) has joined #ceph
[15:21] <joao> I'm not sure how many of the guys have taken the day off today
[15:21] <joao> not even sure if we'll have the morning stand-up today
[15:35] * allsystemsarego (~allsystem@188.27.165.115) has joined #ceph
[15:37] * stxShadow (~Jens@ip-178-201-147-146.unitymediagroup.de) has left #ceph
[15:38] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[15:43] <nhm> joao: We may have the day off, I don't remember.
[15:44] <joao> I think that's the case
[15:45] <nhm> Yeah, I just looked at the calendar and there is a little green square.
[15:45] <joao> I'd go through the emails to check it, but I don't have anything better to do until 6pm, so I'll just make myself useful :p
[15:53] <elder> nhm, we do have the day off. But I think we're a hard working bunch, so I expect many people will be online parts of the day.
[16:01] <nhm> I think my wife would describe our behaviour as slightly OCD. :)
[16:07] <joao> I know my parents do describe it pretty much that way
[16:10] * noob2 (~noob2@ext.cscinfo.com) has joined #ceph
[16:12] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[16:13] <mikedawson> Is there any way to fix leveldb?
[16:13] <mikedawson> Error initializing leveldb: Corruption: CURRENT file does not end with newline
[16:16] <nhm> mikedawson: don't know, that sounds like this though: http://code.google.com/p/leveldb/issues/detail?id=69
[16:17] * verwilst (~verwilst@d5152FEFB.static.telenet.be) Quit (Quit: Ex-Chat)
[16:17] <mikedawson> yeah. I have a few servers that seem to have backplane stability issues .... drives fall offline randomly
[16:18] <mikedawson> seemed like a good test case for Ceph resiliency, but I'm in a state where too much failed at once
[16:18] <mikedawson> HEALTH_WARN 1 pgs backfill; 1 pgs degraded; 20 pgs down; 20 pgs peering; 1 pgs stale; 20 pgs stuck inactive; 1 pgs stuck stale; 21 pgs stuck unclean
[16:20] <mikedawson> If I could fix the leveldb issue, I'd be able to recover. Alternately, I have issues on a couple other OSDs with truncated logs
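
For narrowing down which PGs are stuck and why, something like the following can help (the pg id is just an example):

    ceph health detail
    ceph pg dump_stuck inactive
    ceph pg dump_stuck unclean
    ceph pg 4.604 query
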
[16:20] <noob2> anyone had trouble with the s3 gateway and deletions?
[16:20] <nhm> mikedawson: Not sure if you can repair leveldb, one of the other guys might know. If not, you could probably let ceph replicate to other servers and then clean the bad one and re-add it.
[16:21] <joao> assuming you have a good replica
[16:21] <joao> if too much failed at the same time, it would be wise to check that before attempting to let ceph replicate
[16:21] <nhm> true
[16:22] * jbarbee (17192e61@ircip3.mibbit.com) has joined #ceph
[16:22] <mikedawson> here's the type of issue on my other two misbehaving OSDs:
[16:22] <mikedawson> 2012-12-31 10:21:06.504658 7f44f9f0b780 0 osd.20 pg_epoch: 2704 pg[4.604( v 2276'152 lc 2276'127 (2276'152,2276'152] local-les=2702 n=12 ec=11 les/c 2702/2618 2680/2681/2553) [] r=0 lpr=0 pi=2617-2680/6 (info mismatch, log(2276'152,0'0]) (log bound mismatch, empty) lcod 0'0 mlcod 0'0 inactive] Got exception 'read_log_error: read_log got 0 bytes, expected 23016-0=23016' while reading log....
[16:22] <mikedawson> ...Moving corrupted log file to 'corrupt_log_2012-12-31_10:21_4.604' for later analysis.
[16:24] <mikedawson> is that recoverable?
[16:27] <nhm> mikedawson: trying to find info on it. There was a bug that got resolved about a month ago: http://tracker.newdream.net/issues/2649
[16:27] <nhm> a little different though
[16:28] <nhm> thread on the mailing list: http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/10597
[16:30] <mikedawson> nhm: I'd like to avoid Stefan's ultimate fix - wiping the cluster
[16:30] <nhm> See sam's reply. Try restarting one of the OSDs with debug osd = 20, debug filestore = 20, and debug ms = 1. I probably won't be of much help, but when someone who knows what they are doing shows up then you'll already have the info. ;)
[16:30] <nhm> mikedawson: yeah
[16:30] <mikedawson> nhm: yeah. I've done that and have the logs ready
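
For reference, the debug settings nhm mentions would go in the [osd] or [osd.N] section of ceph.conf before restarting the daemon, or can be pushed into a running OSD; the id here is just an example.

    [osd.20]
        debug osd = 20
        debug filestore = 20
        debug ms = 1

    # old-style runtime injection, only works while the daemon stays up
    ceph osd tell 20 injectargs '--debug-osd 20 --debug-filestore 20 --debug-ms 1'
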
[16:31] <nhm> mikedawson: ah, good. what version of ceph btw?
[16:31] <mikedawson> 0.55.1
[16:32] <nhm> Ok. So perhaps whatever bit Stefan also bit you.
[16:33] <nhm> Did this all happen at the same time?
[16:33] <mikedawson> yeah - reading his stuff, he appeared to be stress testing. I just have some flaky backplanes.
[16:34] <nhm> mikedawson: it's entirely possible he also has flaky backplanes. ;)
[16:35] <mikedawson> Yes. Power outage triggered a number of issues Sunday night. Recovered most of them, but then ran into trouble as Ceph came back up / rebalanced
[16:35] <mikedawson> drives dropping offline during recovery only makes it worse
[16:36] <mikedawson> I've been using this gear to convince myself it is safe to have nodes/drives die with 2x replication.
[16:37] <nhm> mmmm, alcoholic espresso drink, perfect thing to have on a bitterly cold non-work day.
[16:38] <mikedawson> I'm moving to 3x replication. Been bit by 2x a few times now
[16:39] <nhm> mikedawson: sounds like if the hardware is flaky enough that you have a moderate probability of drives dropping during replication, you are going to need more replication.
[16:39] <joao> nhm, great idea!
[16:39] <mikedawson> Yeah. This gear is going away, too
[16:39] <nhm> mikedawson: that's probably for the best. Honestly there's only so much something like Ceph can do if the hardware is regularly crapping out.
[16:40] * joao has to add 'irish whiskey' and 'cream' to the new year's eve shopping list
[16:40] <nhm> joao: kahlua and amaretto for me this morning
[16:55] <noob2> are there any other gateway caps besides usage and user?
[16:56] <nhm> noob2: sorry, I'm way behind on the gateway. Probably best to talk to Yehuda
[16:56] * themgt (~themgt@71-90-234-152.dhcp.gnvl.sc.charter.com) has joined #ceph
[16:57] * slang (~slang@c-71-239-8-58.hsd1.il.comcast.net) has joined #ceph
[16:57] <noob2> ok
[16:58] <noob2> i'm finding it odd that i can upload things to the s3 gateway but when i try to delete i get a 500
[16:58] <nhm> noob2: definitely sounds odd!
[16:58] <nhm> noob2: anything in the logs?
[16:59] <noob2> the only thing i see is this: not unsetting Content-Length in HEAD response (rgw changes) \n
[16:59] <noob2> and also this: "DELETE /test/test HTTP/1.1" 500 460 "-" "
[17:00] <nhm> noob2: you could try debug 20 on the rgw
[17:01] <noob2> sure
[17:01] <noob2> just throw that in the ceph.conf and restart apache?
[17:03] <noob2> ok now i see something interesting
[17:03] <noob2> (111)Connection refused: FastCGI: failed to connect to server "/var/www/s3gw.fcgi": connect() failed
[17:04] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) Quit (Remote host closed the connection)
[17:04] <janos> arg, i have a log filling up - two different hosts. each has one osd filling logs with "heartbeat_check: no reply from osd.201 ever"
[17:05] <janos> and the other doing the same
[17:05] <janos> 201 can't see 103 and 103 cant see 201
[17:05] <janos> anyone have any idea what causes this?
[17:05] <janos> or even better - what fixes this?
[17:05] <janos> ;)
[17:05] <janos> this is 0.55.1 compiled here
[17:05] <nhm> noob2: hrm, sounds like maybe there is something screwy with the apache conf
[17:05] <noob2> yeah
[17:06] <noob2> it allows me to connect and upload but smacks down deletes
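
The "failed to connect to server /var/www/s3gw.fcgi" error usually means Apache's FastCGI socket and the one radosgw listens on don't match (or radosgw isn't running at all). A sketch of the two pieces that have to agree, with hypothetical section and socket names:

    # ceph.conf
    [client.radosgw.gateway]
        rgw socket path = /tmp/radosgw.sock
        debug rgw = 20

    # Apache vhost
    FastCgiExternalServer /var/www/s3gw.fcgi -socket /tmp/radosgw.sock
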
[17:07] <nhm> janos: hrm, can the two machines talk to each other on the various ports being used?
[17:07] <janos> iptables should be open
[17:07] <janos> i can shut them down
[17:08] <janos> iptables, that is
[17:08] <nhm> janos: other osds on the same hosts are fine?
[17:09] <janos> seems so. i started two hosts, each with one osd. then added one more to each host
[17:09] <janos> these are the two new ones
[17:10] <nhm> noob2: and the OSDs in question are up right?
[17:11] <nhm> oops, janos that was for you
[17:11] <janos> yep, up
[17:11] <janos> hrmm. killing ip tables - they aren't spooling log messages out
[17:11] <janos> omg, did i seriously flub iptables
[17:11] * janos checks his rules
[17:12] <janos> 6789 and 6800-6805
[17:12] <noob2> nhm: yeah the osd's are up. i only have 2 monitors on this cluster so that could be causing problems
[17:12] <janos> that 6805 seems to be too low
[17:12] <nhm> noob2: sorry, that was for janos
[17:12] <noob2> oh :)
[17:12] <janos> how are port ranges decided?
[17:12] <janos> i can certainly grant a larger range - just not sure what to set
[17:13] <nhm> janos: the mon port at least is usually specified in ceph.conf. Other ports might just be a default range.
[17:13] <nhm> janos: https://github.com/ceph/ceph/blob/master/src/common/config_opts.h
[17:14] <janos> cool, looking
[17:14] <nhm> janos: search for "port"
[17:14] <janos> ah interesting
[17:14] <janos> OPTION(ms_bind_port_min, OPT_INT, 6800)
[17:14] <janos> OPTION(ms_bind_port_max, OPT_INT, 7100)
[17:14] <janos> i think that might answer it
[17:14] <nhm> yep
[17:14] <janos> thank you very much sir
[17:15] <nhm> np
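
For reference, opening the monitor port plus the default messenger range would look something like this; interface and policy details are site-specific.

    iptables -A INPUT -p tcp --dport 6789 -j ACCEPT        # mon
    iptables -A INPUT -p tcp --dport 6800:7100 -j ACCEPT   # osd/mds messenger range
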
[17:15] <janos> i'm one of those fedora-using people, so i ended up compiling myself to get more recent goodies
[17:15] <janos> and i was unsure if my efforts there were a problem
[17:15] <nhm> janos: ah! good to have people pushing the envelope. :)
[17:15] <janos> made a local repo as well
[17:15] <janos> for easier lan distribution
[17:16] <janos> if i felt like i knew what i was doing, i would contribute to that. but this is my first shot at making a repo and rpm's
[17:16] <janos> i do not want to mess anyone else up
[17:17] <nhm> janos: Nice. I confess I've been comfortably sticking with easy ubuntu deployments for most of the performance testing I've been doing, so I've got some tests on CentOS/RHEL coming up.
[17:17] <nhm> s/so/though
[17:17] <janos> i've been doing entirely fedora
[17:17] <nhm> janos: talk to Gary. He's been doing the packaging for Inktank. He might have some insights or even some questions for you!
[17:17] <janos> i've had mixed results, but i chalk that up to my own ignorance
[17:18] <janos> learning, though
[17:18] <janos> dang that reminds me
[17:18] <janos> need to file a bug
[17:18] <janos> line 280 (iirc) on /etc/init.d/ceph
[17:19] <janos> fs_type = "btrfs" --> should fs_type="btrfs"
[17:19] <janos> +be
[17:19] <janos> i've been hand-fixing that when i compile
[17:19] <nhm> ah yes, spaces kill
[17:19] <janos> i really like this project
[17:20] <nhm> janos: We try our best. :)
[17:20] <janos> it's cool. i'm dogfooding at home for a while before i consider work
[17:20] <mikedawson> Anyone know leveldb?
[17:20] <mikedawson> 2012-12-31 10:09:59.078587 7f3112a59780 -1 filestore(/var/lib/ceph/osd/ceph-8) Error initializing leveldb: Corruption: CURRENT file does not end with newline
[17:21] <mikedawson> the file /var/lib/ceph/osd/ceph-8/current/omap/CURRENT is in fact hosed
[17:21] <janos> thank you for the help, nhm. i need to get back to the kids, but i'll be lurking ;)
[17:22] <mikedawson> all others are 16 bytes of plain text that simply list something like MANIFEST-000048
[17:22] <nhm> janos: sounds good, enjoy the holiday. :)
[17:22] <janos> will do - you too!
[17:22] <mikedawson> When I manually fix the CURRENT file to point to the MANIFEST-0000xx file in the same directory, I get:
[17:22] <mikedawson> 2012-12-31 11:19:03.600650 7fb36c31c780 -1 filestore(/var/lib/ceph/osd/ceph-8) Error initializing leveldb: Corruption: bad record length
[17:22] * Leseb (~Leseb@193.172.124.196) Quit (Quit: Leseb)
[17:23] <mikedawson> If I copy the CURRENT file from another OSD (which points to that other OSD's manifest file), I get:
[17:23] <nhm> mikedawson: what is the content of the CURRENT file btw?
[17:23] <mikedawson> 2012-12-31 11:16:56.575424 7f538d0df780 -1 filestore(/var/lib/ceph/osd/ceph-8) Error initializing leveldb: IO error: /var/lib/ceph/osd/ceph-8/current/omap/MANIFEST-000074: No such file or directory
[17:23] <mikedawson> ©QS¬QF²DbµA>
[17:24] <nhm> mikedawson: ok, so not the 16 null bytes as described in issue 68 for leveldb.
[17:24] <mikedawson> seems like manually fixing CURRENT would work, but maybe there is also an issue on MANIFEST-000048 or associated files now
[17:26] <nhm> mikedawson: I wonder if this would be useful: https://github.com/ceph/leveldb/blob/master/db/corruption_test.cc
[17:26] <nhm> well, I suppose we already know it's corrupt.
[17:30] <mikedawson> nhm: the frustrating part is the data appears to be sound on the OSDs that are causing problems, but the Ceph metadata (leveldb on one OSD, and truncated log files on two others) is corrupt
[17:31] <nhm> mikedawson: Yeah, I'm guessing you are correct.
[17:32] <nhm> mikedawson: If you can hold out for Sam, I imagine he could give a quicker answer as to whether there's a way to recover/rebuild the metadata or whether it's best to just blow stuff away. Are the bad OSDs on the same host or on different hosts?
[17:33] <mikedawson> nhm: the leveldb issue is on node3 and the two truncated log files issues are on node7
[17:34] <mikedawson> so *most* of the PGs are fine, but...
[17:34] <mikedawson> nhm: Do you know Sam's ETA? Is he west coast?
[17:35] <nhm> mikedawson: he's west coast. I think we actually have today off, so he might not be around until later this week. :/
[17:35] <mikedawson> Understood. Thanks!
[17:36] <noob2> nhm: i found an error in the radosgw logs when i try to delete
[17:36] <noob2> WARNING: set_req_state_err err_no=5 resorting to 500
[17:37] <nhm> noob2: see: http://permalink.gmane.org/gmane.comp.file-systems.ceph.devel/8941
[17:37] <nhm> noob2: slightly different issue, but the same error number.
[17:37] <noob2> ok
[17:37] <nhm> noob2: looks like debug ms = 1 might help.
[17:38] <nhm> also, might want to just look at the OSDs and see if you have a similar problem.
[17:38] <noob2> how could i check my osd's?
[17:38] <noob2> with the rados tool?
[17:39] <nhm> noob2: oh, meant look at the logs for the OSDs and see if you see any problems loading the rgw library.
[17:39] <noob2> oh
[17:39] <noob2> lemme check
[17:39] <nhm> noob2: if not, you might want to try Yehuda's suggestion in that thread and enable debug ms on both the RGW and the OSDs.
[17:39] <noob2> 2012-12-31 11:15:25.971093 7f74420ba700 0 _load_class could not open class /usr/lib/rados-classes/libcls_lock.so (dlopen failed): /usr/lib/rados-classes/libcls_lock.so: cannot open shared object file: No such file or directory
[17:40] <noob2> yeah i see some errors
[17:41] <nhm> noob2: what OS/packages are you using?
[17:41] <noob2> the ubuntu packages straight from the default repos
[17:42] <noob2> lemme make that symlink
[17:43] <noob2> actually i spoke too soon. i don't see that so file anywhere
[17:43] <noob2> maybe ubuntu's packages have a bug
[17:43] <nhm> hrm, what version of ceph is that?
[17:43] <noob2> 0.48.2-0ubuntu2 on ubuntu 12.10
[17:43] <nhm> ah, ok
[17:44] <nhm> btw, I have no idea if that's actually causing the problem.
[17:44] <nhm> it could be
[17:44] <noob2> right
[17:45] <noob2> just something that is suspect
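
A quick way to check whether the installed packages actually ship the missing class (assuming the Ubuntu packaging noob2 is using):

    dpkg -S libcls_lock.so                 # which installed package, if any, owns the file
    dpkg -L ceph | grep rados-classes      # what the ceph package itself installs
    apt-cache policy ceph                  # installed version and where it came from
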
[17:45] <noob2> we wanted to use the stable packages on ubuntu 12.10
[17:45] <noob2> can i deploy argonaut on 12.10 yet?
[17:47] <nhm> noob2: Do you mean bobtail?
[17:47] <noob2> well either one haha
[17:48] <noob2> one that works :D
[17:48] <nhm> noob2: :) We are feverishly working on getting bobtail ready.
[17:48] <noob2> sweet
[17:48] <noob2> next week maybe?
[17:49] <nhm> noob2: there's a lot of good stuff in it, but we uncovered some gotchas during testing at scale such that we had to delay it. Not sure what the current roadmap is.
[17:49] <noob2> aww
[17:49] <nhm> noob2: might be next week for all I know. I've been too wrapped up in my performance testing.
[17:50] <noob2> gotcha
[17:50] <noob2> well as soon as it lands you can be sure i'll deploy it
[17:50] <noob2> brb going to grab some lunch
[17:51] <nhm> noob2: enjoy, probably going afk here too.
[17:51] * slang (~slang@c-71-239-8-58.hsd1.il.comcast.net) Quit (Quit: slang)
[18:01] * slang (~slang@c-71-239-8-58.hsd1.il.comcast.net) has joined #ceph
[18:02] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[18:10] * calebamiles1 (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) has left #ceph
[18:11] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[18:19] * fzylogic (~fzylogic@69.170.166.146) has joined #ceph
[18:30] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) Quit (Ping timeout: 480 seconds)
[18:34] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) has joined #ceph
[18:50] * themgt (~themgt@71-90-234-152.dhcp.gnvl.sc.charter.com) Quit (Quit: themgt)
[18:50] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[19:11] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[19:11] * loicd (~loic@magenta.dachary.org) has joined #ceph
[19:19] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) has joined #ceph
[19:35] <sage> wido: feeling better?
[19:36] * Leseb (~Leseb@5ED17881.cm-7-2b.dynamic.ziggo.nl) has joined #ceph
[19:42] * gaveen (~gaveen@112.135.151.56) has joined #ceph
[19:58] <noob2> is there a known bug with the ubuntu packages from ubuntu and a missing symlink to /usr/lib/rados-classes/libcls_lock.so ?
[20:29] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) Quit (Remote host closed the connection)
[20:40] * markl (~mark@tpsit.com) Quit (Ping timeout: 480 seconds)
[20:54] * joshd1 (~jdurgin@2602:306:c5db:310:a5e0:68a1:e60a:3db) has joined #ceph
[20:57] <mikedawson> noob2: jamespage may be the best resource for ubuntu packages
[21:06] * nyeates (~nyeates@pool-173-59-239-231.bltmmd.fios.verizon.net) has joined #ceph
[21:08] * The_Bishop (~bishop@2001:470:50b6:0:212f:f61b:4e74:a0a4) Quit (Quit: Who the hell is this peer? When I catch him I'm going to reset his connection!)
[21:11] <noob2> thanks, i'll ping him
[21:16] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[21:16] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit ()
[21:17] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) has joined #ceph
[21:23] * gaveen (~gaveen@112.135.151.56) Quit (Remote host closed the connection)
[21:32] * Leseb (~Leseb@5ED17881.cm-7-2b.dynamic.ziggo.nl) Quit (Quit: Leseb)
[21:37] * nyeates (~nyeates@pool-173-59-239-231.bltmmd.fios.verizon.net) Quit (Quit: Zzzzzz)
[21:45] * nyeates (~nyeates@pool-173-59-239-231.bltmmd.fios.verizon.net) has joined #ceph
[21:54] * samppah (hemuli@namibia.aviation.fi) Quit (Ping timeout: 480 seconds)
[21:58] * Leseb (~Leseb@5ED17881.cm-7-2b.dynamic.ziggo.nl) has joined #ceph
[22:01] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[22:02] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[22:11] <denken> am i correct to assume that once an osd is down/out, any previous weight (custom) it may have had is lost?
[22:26] * samppah (hemuli@namibia.aviation.fi) has joined #ceph
[22:26] * Vjarjadian (~IceChat77@5ad6d005.bb.sky.com) Quit (Quit: Hard work pays off in the future, laziness pays off now)
[22:26] * themgt (~themgt@24-177-232-181.dhcp.gnvl.sc.charter.com) has joined #ceph
[22:34] * samppah (hemuli@namibia.aviation.fi) Quit (Ping timeout: 480 seconds)
[22:40] * imjustmatthew (~imjustmat@pool-173-53-54-22.rcmdva.fios.verizon.net) has joined #ceph
[22:44] * nyeates (~nyeates@pool-173-59-239-231.bltmmd.fios.verizon.net) Quit (Quit: Zzzzzz)
[22:44] <dspano> This may be the cold medicine talking, but what happens if you're stupid enough to create two large osds, then add a smaller osd with a pool size of 3? Will CRUSH just try its best to replicate things in the pool to three OSDs in that scenario?
[22:44] * themgt (~themgt@24-177-232-181.dhcp.gnvl.sc.charter.com) Quit (Ping timeout: 480 seconds)
[22:45] <dspano> I haven't done this, I'm just curious.
[22:45] <imjustmatthew> I'm getting a weird message in my OSD logs: "cephx: verify_authorizer could not decrypt ticket info: error: NSS AES final round failed: -8023" and a mismatch between "ceph status" output, which says the OSDs are up, and the output of "service ceph status", which says the OSDs are dead; the processes seem to be running though. Any thoughts on where I should be looking for the problem?
[22:45] * jluis (~JL@89.181.148.232) has joined #ceph
[22:47] * joao (~JL@89.181.152.168) Quit (Read error: Operation timed out)
[22:49] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[22:49] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[22:53] <dspano> Nevermind. I just had to RTFM.
[22:57] <phantomcircuit> dspano, what was the answer
[22:58] * mikedawson_ (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[22:58] <imjustmatthew> Hmmm, after restarting an OSD the AES messages disappeared and only the OSD has a different status. The cluster seems healthy but is spamming the OSD logs with message like "[(IPv6 for OSD0)]:6801/1403 >> [(IPv6 for OSD1)]:6801/31166 pipe(0xf1bfd80 sd=31 :46219 pgs=2 cs=1 l=0).fault with nothing to send, going to standby"
[23:01] <dspano> phantomcircuit: I found this in the pool section.
[23:01] <dspano> Note, however, that pool size is more of a best-effort setting: an object might accept ios in degraded mode with fewer than size replicas. To set a minimum number of required replicas for io, you should use the min_size setting.
[23:01] <imjustmatthew> Maybe a hostname/DNS issue? "[(IPv6 for OSD0)]:6801/1403 >> [(IPv6 for OSD1)]:6801/31166 pipe(0xf1bfd80 sd=31 :46271 pgs=2 cs=2 l=0).connect claims to be [::]:6801/1375 not [(IPv6 for OSD1)]:6801/31166 - wrong node!"
[23:02] <dspano> http://ceph.com/docs/master/rados/operations/pools/#set-the-number-of-object-replicas
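
For reference, both settings from that doc page are applied per pool, e.g. (pool name is just an example):

    ceph osd pool set rbd size 3
    ceph osd pool set rbd min_size 2
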
[23:03] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[23:03] * mikedawson_ is now known as mikedawson
[23:05] * KindTwo (~KindOne@h195.0.40.162.dynamic.ip.windstream.net) has joined #ceph
[23:05] <imjustmatthew> After restarting both everything the cluster seems healthy except that "ceph status" and "service ceph status" disagree on OSD health.
[23:07] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[23:07] * loicd (~loic@magenta.dachary.org) has joined #ceph
[23:09] * KindOne (~KindOne@h53.49.186.173.dynamic.ip.windstream.net) Quit (Ping timeout: 480 seconds)
[23:09] * KindTwo is now known as KindOne
[23:10] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Quit: Leaving)
[23:11] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[23:15] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[23:16] * jbarbee (17192e61@ircip3.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[23:20] * loicd (~loic@magenta.dachary.org) has joined #ceph
[23:22] * KindTwo (~KindOne@50.96.84.155) has joined #ceph
[23:22] * KindOne (~KindOne@h195.0.40.162.dynamic.ip.windstream.net) Quit (Ping timeout: 480 seconds)
[23:22] * KindTwo is now known as KindOne
[23:28] * themgt_ (~themgt@24-177-232-181.dhcp.gnvl.sc.charter.com) has joined #ceph
[23:48] * houkouonchi-home (~linux@pool-108-38-63-48.lsanca.fios.verizon.net) has joined #ceph
[23:49] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (Quit: This computer has gone to sleep)
[23:57] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) Quit (Remote host closed the connection)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.