#ceph IRC Log


IRC Log for 2011-06-30

Timestamps are in GMT/BST.

[0:04] * alexxy (~alexxy@79.173.81.171) has joined #ceph
[0:29] <joshd> bchrisman: your rbd snapshot problem might be a build issue
[0:30] <bchrisman> hmm.. something missing?
[0:30] <bchrisman> didn't see any errors on the build… but could be… this is the first time I've been evaluating rbd.
[0:31] <joshd> bchrisman: are your libraries the same versions as the build machine?
[0:31] <joshd> it just looks like stack corruption
[0:32] <joshd> adding more debugging makes it work since it overwrites other parts of the stack
[0:34] <bchrisman> Odd… I assume I'd see that happening elsewhere… hmm
[0:35] <joshd> bchrisman: to confirm I could compile locally on the test system, but it doesn't have git or other dependencies
[0:35] <bchrisman> yeah.. our cluster nodes don't have build tools or such on them..
[0:36] <bchrisman> I'll check our library versions...
[0:48] <bchrisman> hmm. libstdc++ has same version but different build number from the compile environment...
[0:49] <bchrisman> can spin a new one of that.
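The mismatch bchrisman eventually found (same libstdc++ version string, different build) is invisible to version checks alone. One way to catch that class of problem is to fingerprint the library files on the build and deploy hosts and diff the output; a minimal sketch, with hypothetical paths (this is not anything from the Ceph tree):

```python
import hashlib

def fingerprint(path):
    """Return a short content digest of a shared library.

    Comparing digests across machines catches the case where the soname
    and version string match but the actual build differs.
    """
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(1 << 16), b''):
            h.update(chunk)
    return h.hexdigest()[:16]

# Usage (hypothetical paths): run on both hosts, then diff the lines.
# for lib in ('/usr/lib/libstdc++.so.6', '/usr/lib/librados.so.2'):
#     print(lib, fingerprint(lib))
```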
[0:51] * yoshi (~yoshi@KD027091032046.ppp-bb.dion.ne.jp) has joined #ceph
[0:54] <bchrisman> build going.. .will see if that fixes it.
[0:55] * yoshi (~yoshi@KD027091032046.ppp-bb.dion.ne.jp) Quit (Remote host closed the connection)
[1:09] * lxo (~aoliva@189.27.173.208.dynamic.adsl.gvt.net.br) has joined #ceph
[1:15] * aliguori (~anthony@32.97.110.65) Quit (Quit: Ex-Chat)
[1:29] * lxo (~aoliva@189.27.173.208.dynamic.adsl.gvt.net.br) Quit (Ping timeout: 480 seconds)
[1:46] * Tv (~Tv|work@ip-64-111-111-107.dreamhost.com) Quit (Ping timeout: 480 seconds)
[2:55] * yoshi (~yoshi@p15251-ipngn1601marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[3:12] * bchrisman (~Adium@70-35-37-146.static.wiline.com) Quit (Quit: Leaving.)
[3:44] * lxo (~aoliva@189.27.173.208.dynamic.adsl.gvt.net.br) has joined #ceph
[3:45] * joshd (~joshd@ip-64-111-111-107.dreamhost.com) Quit (Quit: Leaving.)
[4:06] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) has joined #ceph
[6:54] * DanielFriesen (~dantman@S0106001731dfdb56.vs.shawcable.net) has joined #ceph
[6:54] * Nadir_Seen_Fire (~dantman@S0106001731dfdb56.vs.shawcable.net) Quit (Ping timeout: 480 seconds)
[7:31] * stingray (~stingray@stingr.net) has joined #ceph
[8:58] * darktim (~andre@ticket1.nine.ch) has joined #ceph
[10:07] * votz_ (~votz@pool-72-78-219-167.phlapa.fios.verizon.net) has joined #ceph
[10:14] * votz (~votz@pool-72-78-219-167.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[11:30] * Jiaju (~jjzhang@222.126.194.154) Quit (Quit: ??????)
[11:45] <stingray> huh
[12:43] * yoshi (~yoshi@p15251-ipngn1601marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[12:57] * lxo (~aoliva@189.27.173.208.dynamic.adsl.gvt.net.br) Quit (Ping timeout: 480 seconds)
[13:46] * mtk (~mtk@ool-182c8e6c.dyn.optonline.net) has joined #ceph
[14:03] * mtk (~mtk@ool-182c8e6c.dyn.optonline.net) Quit (Remote host closed the connection)
[14:10] * mtk (~mtk@ool-182c8e6c.dyn.optonline.net) has joined #ceph
[14:58] * Juul (~Juul@3408ds2-vbr.4.fullrate.dk) has joined #ceph
[15:07] * jbd (~jbd@ks305592.kimsufi.com) has left #ceph
[15:08] * jbd (~jbd@ks305592.kimsufi.com) has joined #ceph
[15:14] * jbd (~jbd@ks305592.kimsufi.com) has left #ceph
[15:18] * aliguori (~anthony@32.97.110.64) has joined #ceph
[15:20] * jbd (~jbd@ks305592.kimsufi.com) has joined #ceph
[15:24] * damien1 (~damien@andromeda.digitalnetworks.co.uk) has joined #ceph
[15:24] * damien1 is now known as damoxc
[16:03] * yoshi (~yoshi@KD027091032046.ppp-bb.dion.ne.jp) has joined #ceph
[16:04] * aliguori (~anthony@32.97.110.64) Quit (Quit: Ex-Chat)
[16:05] * aliguori (~anthony@32.97.110.64) has joined #ceph
[16:07] * dilemma (~dan@69.167.130.11) has joined #ceph
[16:13] <dilemma> So the roadmap has v1.0 with a release date in 21 days, but there's some pretty old (big?) tickets there. Anyone know if this is still a realistic timeframe for release?
[16:14] <dilemma> I'd love to use it in a medium-sized (tens of nodes, initially) deployment, but that'll be a hard sell before it hits 1.0
[16:21] <dilemma> I do see a lot of tickets being moved from the 1.0 to the 1.1 release a couple weeks ago. That sort of tells me that the 1.0 release is being pared down a bit so the target can be hit. Looks promising.
[16:29] * greglap (~Adium@cpe-76-170-84-245.socal.res.rr.com) has joined #ceph
[16:30] <greglap> dilemma: yeah, 1.0 has a moving deadline, sorry :(
[16:30] <greglap> gotta run but there'll be a lot of people on in a few hours (pacific time work hours)
[16:30] <dilemma> cool, thanks
[16:31] * greglap (~Adium@cpe-76-170-84-245.socal.res.rr.com) Quit ()
[16:47] * greglap (~Adium@166.205.136.150) has joined #ceph
[16:59] * aliguori (~anthony@32.97.110.64) Quit (Ping timeout: 480 seconds)
[17:09] * aliguori (~anthony@32.97.110.65) has joined #ceph
[17:21] * jbd (~jbd@ks305592.kimsufi.com) has left #ceph
[17:24] <wido> dilemma: What are you aiming for?
[17:24] <wido> How many nodes? How many OSD's?
[17:26] <dilemma> I'm looking at between 60 and 100 nodes to start with. I'm interested in using RADOS directly, with the possibility of using Ceph on it at a later time.
[17:27] <wido> dilemma: 100 nodes, each with 2 to 4 disks? So about 400 OSD's at max?
[17:27] <wido> assuming one OSD per disk
[17:28] <dilemma> Right, initially. We may scale it up, or scale out into multiple clusters. Still investigating possible architectures.
[17:28] <wido> I currently have a 10 node cluster with 4 disks each, so 40 OSD's
[17:28] <wido> All nodes are Atoms with 4GB of Ram
[17:29] <wido> Running it has been challenging; had to format it again today. Most of the time it's OSD's crashing due to recovery issues
[17:29] <wido> but today I hit a bug where a lot of my btrfs got corrupted after a power outage
[17:29] <dilemma> I'll be avoiding btrfs and going with ext4
[17:29] <wido> I have a small cluster (6 nodes, 6 OSD's) running with RBD/RADOS only, that has been running well for months now
[17:30] <wido> But going bigtime with Ceph currently, I would not recommend yet unless you really know what you are doing
[17:30] <wido> it will crash some day and recovering might be slow and painful
[17:30] <dilemma> Understood. I figured the proximity to a 1.0 release would indicate otherwise.
[17:31] <wido> Yes, from what I understand that's why 1.0 is a bit of a moving target
[17:31] <wido> It's getting better and better, that's for sure
[17:31] <wido> But testing is needed! So if you have spare time and hardware, please, test and report
[17:32] <wido> I got to run, heading home
[17:35] * greglap (~Adium@166.205.136.150) Quit (Quit: Leaving.)
[17:36] * rsharpe1 (~Adium@70-35-37-146.static.wiline.com) Quit (Quit: Leaving.)
[17:37] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:50] * rsharpe (~Adium@70-35-37-146.static.wiline.com) has joined #ceph
[17:51] * rsharpe (~Adium@70-35-37-146.static.wiline.com) Quit ()
[17:51] * rsharpe (~Adium@70-35-37-146.static.wiline.com) has joined #ceph
[18:15] * Tv (~Tv|work@ip-64-111-111-107.dreamhost.com) has joined #ceph
[18:15] * aliguori (~anthony@32.97.110.65) Quit (Quit: Ex-Chat)
[18:26] * yoshi (~yoshi@KD027091032046.ppp-bb.dion.ne.jp) Quit (Remote host closed the connection)
[18:34] * bchrisman (~Adium@70-35-37-146.static.wiline.com) has joined #ceph
[18:40] * joshd (~joshd@ip-64-111-111-107.dreamhost.com) has joined #ceph
[18:45] <Tv> ok who stole sepia13 from me :(
[18:45] <Tv> i still have the lock in autotest, so you're trespassing
[18:46] <Tv> root 27364 0.0 55.4 2539808 2248604 ? Ssl Jun03 9:11 rados -p data -k /etc/ceph/keyring bench 3600 write
[18:46] <gregaf> sorry, that's me
[18:46] <gregaf> I got told we were using the wiki for locks now
[18:47] <Tv> whoa jun 03
[18:47] <Tv> what were when?
[18:47] <Tv> i haven't heard that
[18:47] <gregaf> really don't remember
[18:47] <gregaf> https://uebernet.dreamhost.com/wiki/Ceph#temp_sepia_node_reservations
[18:47] <gregaf> hell, you have one locked in there
[18:48] <Tv> i've never edited that page
[18:48] <Tv> but i'll take 66 then, as i seem to have it marked already
[18:48] <Tv> oh wait 66 is broken
[18:48] <Tv> and so is 72
[18:49] <gregaf> well, that explains why teuthology wasn't working for me
[18:49] <Tv> time to try the reinstall scripts sage packaged, i guess
[18:50] <gregaf> you want to try and salvage something off those?
[18:50] <Tv> i don't care about any data on sepia*
[18:50] <gregaf> okay
[18:53] <Tv> powercycling sepia66 seems to hang
[18:53] <Tv> the console servers are a mess again, i guess
[18:55] <Tv> hmm seems to work from isidore
[19:02] <Tv> same thing for 72, works from isidore
[19:07] <Tv> apart from that, reinstalls seem to be progressing ok
[19:08] * jbd (~jbd@ks305592.kimsufi.com) has joined #ceph
[19:14] <gregaf> do we have a specific place for teuthology bugs?
[19:15] <gregaf> it seems to die horribly if you don't have a mon on each node
[19:24] * jbd (~jbd@ks305592.kimsufi.com) has left #ceph
[19:29] <sagewk> tv: checking on the console server..
[19:29] <sagewk> they renumbered last week
[19:33] * cmccabe (~cmccabe@c-24-23-254-199.hsd1.ca.comcast.net) has joined #ceph
[19:39] * dilemma (~dan@69.167.130.11) Quit (Quit: Leaving)
[19:45] <gregaf> Tv: I've got a teuthology failure at the end of the test because it looks like the "archive" folder isn't getting cleaned up, any ideas?
[19:46] <Tv> gregaf: there has to be another error
[19:46] <Tv> log.info('Removing archived files...')
[19:46] <Tv> run.wait(
[19:46] <Tv> ctx.cluster.run(
[19:46] <Tv> args=[
[19:46] <Tv> 'rm',
[19:46] <Tv> '-rf',
[19:46] <Tv> '--',
[19:46] <Tv> '/tmp/cephtest/archive',
[19:46] <Tv> ],
[19:46] <Tv> wait=False,
[19:46] <Tv> ),
[19:46] <Tv> )
[19:46] <Tv> hard to imagine that not getting cleaned up
[19:46] <gregaf> INFO:teuthology.task.internal:Tidying up after the test...
[19:46] <gregaf> INFO:orchestra.run.err:rmdir: failed to remove `/tmp/cephtest': Directory not empty
[19:46] <gregaf> INFO:orchestra.run.err:rmdir: failed to remove `/tmp/cephtest': Directory not empty
[19:46] <gregaf> ERROR:teuthology.run_tasks:Manager failed: <contextlib.GeneratorContextManager object at 0x1191650>
[19:46] <gregaf> Traceback (most recent call last):
[19:46] <gregaf> if there was another error it shouldn't have gotten that far, right?
[19:46] <Tv> it still tries to clean up
[19:47] <Tv> find the first error
[19:47] <gregaf> well my actual tasks finished successfully...
[19:47] <gregaf> INFO:teuthology.task.radosbench.radosbench.0.out:Total time run: 371.032645
[19:47] <gregaf> INFO:teuthology.task.radosbench.radosbench.0.out:Total writes made: 1047
[19:47] <gregaf> INFO:teuthology.task.radosbench.radosbench.0.out:Write size: 4194304
[19:47] <gregaf> INFO:teuthology.task.radosbench.radosbench.0.out:Bandwidth (MB/sec): 11.287
[19:47] <gregaf> INFO:teuthology.task.radosbench.radosbench.0.out:
[19:47] <gregaf> INFO:teuthology.task.radosbench.radosbench.0.out:Average Latency: 5.66012
[19:47] <gregaf> INFO:teuthology.task.radosbench.radosbench.0.out:Max latency: 37.9301
[19:47] <gregaf> INFO:teuthology.task.radosbench.radosbench.0.out:Min latency: 0.197902
[19:47] <gregaf> INFO:teuthology.task.thrashosds.thrasher:in_osds: [0, 2, 4, 3] out_osds: [1]
[19:47] <Tv> put the whole log in a pastebin?
[19:47] <gregaf> INFO:teuthology.task.thrashosds.thrasher:Adding osd 1
[19:47] <gregaf> INFO:teuthology.task.thrashosds:joining thrashosds
[19:47] <gregaf> INFO:teuthology.task.ceph:Shutting down mds daemons...
[19:47] <gregaf> INFO:teuthology.task.ceph:Shutting down osd daemons...
[19:47] <gregaf> INFO:teuthology.task.ceph:Shutting down mon daemons...
[19:47] <gregaf> INFO:teuthology.task.ceph:Cleaning ceph cluster...
[19:47] <gregaf> INFO:teuthology.task.ceph:Removing ceph binaries...
[19:47] <gregaf> INFO:teuthology.task.ceph:Removing shipped files: daemon-helper enable-coredump...
[19:47] <gregaf> INFO:orchestra.run.out:kernel.core_pattern = core
[19:47] <gregaf> INFO:orchestra.run.out:kernel.core_pattern = core
[19:47] <gregaf> INFO:orchestra.run.out:kernel.core_pattern = core
[19:47] <gregaf> INFO:teuthology.task.internal:Tidying up after the test...
[19:48] <gregaf> does it get stored anywhere for me?
[19:48] <gregaf> or do I need to copy it off my screen...
[19:48] <joshd> if you used the archive option, it's in teuthology.log
[19:48] <gregaf> :(
[19:48] <bchrisman> joshd: indeed it was the libstdc++ version… going to work on making sure our build & deployment servers can't get out of sync like that… very odd issue though.
[19:49] <cmccabe> bchrisman: I was just reading about symbol versioning the other day
[19:49] <joshd> bchrisman: well, I'm glad it wasn't a stack corruption bug in Ceph :)
[19:49] <bchrisman> joshd: well.. not the version.. but the build… version was the same.. but the build must've been different.
[19:49] <cmccabe> bchrisman: symbol versioning in gcc is pretty hairy
[19:49] <gregaf> all right, I'll re-run it with that option to get a log
[19:50] <bchrisman> joshd: the snap create hangs right now.. but I'll track that down before I bug you guys about it again only to find it's some buildenv issue.. :)
[19:50] <Tv> gregaf: why did you say "archive" in the first place?
[19:50] <Tv> gregaf: Tv: I've got a teuthology failure at the end of the test because it looks like the "archive" folder isn't getting cleaned up, any ideas?
[19:51] <Tv> gregaf: why "looks like"
[19:51] <joshd> bchrisman: heh, thanks
[19:51] <Tv> gregaf: what did you observe?
[19:51] <gregaf> the only dir left in /tmp/cephtest is a folder called archive
[19:51] <Tv> gregaf: because if you didn't use --archive, /tmp/cephtest/archive shouldn't exist in the first place
[19:51] <Tv> gregaf: you're fighting over machines with someone
[19:51] <bchrisman> cmccabe: yeah.. sounds like a rat's nest...
[19:51] <gregaf> well then who the hell is using sepia 12-13-14, and why didn't one of our tests break before the end?
[19:51] <cmccabe> bchrisman: ian lance taylor had a good writeup
[19:52] <Tv> gregaf: is this perhaps from when i still thought i had 13?
[19:52] <gregaf> no, I ran teuthology-nuke and then started my test again
[19:53] <cmccabe> bchrisman: I get the impression that symbol versioning is mostly intended to make glibc work, there are several mailing list messages complaining that S.V. does not work for libstdc++
[19:54] <bchrisman> cmccabe: hmm.. will read up on that… makes sense I wouldn't've seen it before if libstdc++ is more problematic with it.
[19:59] <cmccabe> bchrisman: for you guys, you probably just want to match environments
[20:01] <bchrisman> cmccabe: yeah.. it's such an odd issue that I'll need to familiarize myself with it anyways… seems like there's something not-quite-right about symbol versioning… so I guess in perfect-world.. my shlib wouldn't've loaded?
[20:02] <cmccabe> bchrisman: the problem they describe on the ML has to do with a std::string being constructed with one version of the string functions, and destructed with another one
[20:03] <cmccabe> bchrisman: maybe you could rebuild your libstdc++ without symbol versioning in order to reach your "perfect world" :)
[20:04] <Tv> C++ ABI is notoriously bad in that respect.
[20:04] <bchrisman> yup :)
[20:04] <Tv> glibc etc symbol versioning is meant for C, C++ is a different beast.
[20:05] <Tv> i think there was something in the symbol mangling C++ does that makes it not behave the same
[20:05] <cmccabe> bchrisman, tv: the problem is pervasive inlining due to templates
[20:05] <Tv> but yes the details are confusing and mostly unknown
[20:06] <cmccabe> tv: one simple example is if the destructor was inlined as version 5, but the constructor was not, and the library only provides version 6
[20:06] <cmccabe> tv: that is why the gcc guys should have copied sun's design for S.V. and just had it match one version exactly, or no versions
[20:07] <cmccabe> tv: instead they came up with this mix and match system
[20:07] <cmccabe> http://www.airs.com/blog/archives/220
[20:10] * jbd (~jbd@ks305592.kimsufi.com) has joined #ceph
[20:10] <cmccabe> http://www.airs.com/blog/archives/442
[22:14] <gregaf> tv: with the --archive option there's no error on shutdown, without it there's an error because /tmp/cephtest is not empty
[20:15] <gregaf> and looking at the nodes there's an archive folder
[20:15] <gregaf> looks like an unconditional create and a conditional removal
[20:15] <Tv> gregaf: oh then it seems someone introduced a bug
[20:15] <gregaf> I assume
[20:15] <Tv> gregaf: what's inside the archive dir?
[20:15] <Tv> oh wait i see it, it really is unconditional create
[20:15] <Tv> easy to fix
[20:16] <gregaf> empty "coverage", "log" with the daemon logs, and empty "profiling-logger"
[20:16] <gregaf> :)
[20:18] <Tv> oh crap the logging in there is unconditional
[20:18] <Tv> ok so the create must be unconditional
[20:18] <Tv> ok easy fix
[20:19] <Tv> doing one run to verify
[20:26] <Tv> gregaf: fix pushed
[20:26] <gregaf> cool, thanks
[20:35] * aliguori (~anthony@32.97.110.65) has joined #ceph
[20:47] <bchrisman> cmccabe: I see the issue in the symbol versioning.. makes sense… thanks for the pointer
[20:47] <cmccabe> bchrisman: np
[20:48] * jbd (~jbd@ks305592.kimsufi.com) has left #ceph
[20:49] * jbd (~jbd@ks305592.kimsufi.com) has joined #ceph
[20:49] * Juul (~Juul@3408ds2-vbr.4.fullrate.dk) Quit (Ping timeout: 480 seconds)
[21:01] * `gregorg` (~Greg@78.155.152.6) has joined #ceph
[21:02] * gregorg_taf (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[21:22] * jbd (~jbd@ks305592.kimsufi.com) has left #ceph
[21:22] * jbd (~jbd@ks305592.kimsufi.com) has joined #ceph
[21:28] * Juul (~Juul@3408ds2-vbr.4.fullrate.dk) has joined #ceph
[21:48] * Juul (~Juul@3408ds2-vbr.4.fullrate.dk) Quit (Remote host closed the connection)
[22:08] * jbd (~jbd@ks305592.kimsufi.com) has left #ceph
[22:19] <sagewk> cmccabe: objecter fix looks good.. does that fix the crash?
[22:19] <cmccabe> I'm checking it now
[22:19] <cmccabe> I also needed a testrados fix for some cases where it wasn't checking return codes :P
[22:20] <cmccabe> no crash, but the messenger is going nuts
[22:20] <cmccabe> 2011-06-30 13:13:13.282270 7fdeae351700 -- 10.3.14.11:0/3010328 send_message dropped message osd_op(client4105.0:6 foo_object [read 0~128] 8.a12d) v1 because of no pipe on con 0x7fdea80029c0
[22:20] <cmccabe> 2011-06-30 13:13:13.646905 7fde9cef8700 -- 10.3.14.11:0/3010328 >> 10.3.14.11:6800/10021 pipe(0x7fdea001c2d0 sd=9 pgs=0 cs=0 l=0).fault first fault
[22:20] <cmccabe> ...
[22:21] <cmccabe> 2011-06-30 13:13:15.044469 7fdeaeb52700 -- 10.3.14.11:0/3010328 send_message dropped message osd_op(client4115.0:7 foo_object [setxattr b (2)] 8.a12d) v1 because of no pipe on con 0x7fdea8008a80
[22:21] <gregaf> heh
[22:21] <cmccabe> I think we might need to analyze the lifecycle of a rados request
[22:22] <gregaf> how are you hitting those
[22:22] <gregaf> ?
[22:22] <cmccabe> just running NUM_THREADS=3 ./testrados
[22:22] <gregaf> so that error message means that you're passing in a Connection that's since been closed for some reason
[22:23] <cmccabe> I'm going to do this blog thing, then I'm going to make sure testrados is checking every error path
[22:23] <cmccabe> then I'll diagram the lifecycle of a rados request and see whether it makes sense
[22:23] <gregaf> remind me what testrados does, and how the threads impact it?
[22:24] <cmccabe> just does a bunch of rados operations with librados
[22:24] <gregaf> are you maybe closing connections in one thread and then trying to use them in another?
[22:24] <cmccabe> no
[22:24] <gregaf> hrmm
[22:26] * jbd (~jbd@ks305592.kimsufi.com) has joined #ceph
[22:27] <gregaf> cmccabe: I'm just playing with teuthology here, gimme a few minutes to reproduce and look over it
[22:32] <gregaf> cmccabe: this is on current wip-objecter-error-handling?
[22:32] <cmccabe> yes
[22:32] <gregaf> NUM_THREADS=3 ./testrados
[22:32] <gregaf> is it consistent?
[22:32] <cmccabe> it succeeds sometimes
[22:33] <cmccabe> always with 1 thread
[22:33] <cmccabe> usually with 2
[22:33] <cmccabe> seldom with 3 :)
[22:33] <gregaf> you just running vstart?
[22:33] <gregaf> it's not failing for me
[22:33] <gregaf> I saw a message about a spurious mon_subscribe message but no errors
[22:35] <gregaf> hmm, okay, got one failure
[22:37] <gregaf> hmm, it doesn't respect out-file=? sadness
[22:37] <gregaf> wow, I really can't generate any failures… I think that one just took longer than normal so I cancelled it
[22:38] <cmccabe> it doesn't use dout, and doesn't parse the comment line I think?
[22:38] <gregaf> well, that'd explain that
[22:38] <gregaf> how're the messenger bugs getting out then?
[22:38] <cmccabe> I should probably change it to use rados->set_argv
[22:38] <cmccabe> so it at least respects your commandline options
[22:38] <cmccabe> the default logging must be like 1 or something
[22:38] <cmccabe> for messenger
[22:40] <cmccabe> one thing you can do is this
[22:40] <cmccabe> CEPH_CONF=foo ./testrados
[22:40] <gregaf> okay
[22:40] <cmccabe> librados will check the environment no matter what
[22:41] <gregaf> hmm, I seem to have hung it with 5 threads, but the only error output is
[22:41] <gregaf> fault with nothing to send, going to standby
[22:41] <gregaf> and that's not necessarily (ever, anymore?) an actual problem
[22:41] <cmccabe> first we need to ensure that testrados isn't doing anything illegal
[22:42] <cmccabe> it has to check all return codes
[22:42] <cmccabe> then I think we need to figure out how we want the system to work in this kind of situation
[22:44] * jbd (~jbd@ks305592.kimsufi.com) has left #ceph
[22:44] <gregaf> okay, well, I just got a segfault:
[22:44] <gregaf> #0 0x00007f2c75f4ce22 in Objecter::handle_osd_op_reply (this=0xd0c640, m=0xd19380) at osdc/Objecter.cc:760
[22:44] <gregaf> 760 if (op->session->con != m->get_connection()) {
[22:44] <gregaf> op->session is NULL
[22:46] <gregaf> hrm, doesn't look like adding a conf file with:
[22:46] <gregaf> [global]
[22:46] <gregaf> debug client = 10
[22:46] <gregaf> debug ms = 5
[22:46] <gregaf> log file = out/client.log
[22:46] <gregaf> is helping any
[22:47] <cmccabe> you're setting CEPH_CONF?
[22:47] <gregaf> CEPH_CONF=ceph.conf.logging NUM_THREADS=3 ./testrados
[22:47] <cmccabe> arg
[22:47] <cmccabe> I don't know where that getenv went...
[22:48] <cmccabe> oh... it went into the constructor of CephInitParameters, but that gets overridden later
[22:55] <gregaf> hmm
[22:55] <gregaf> gregf@kai:~/ceph/src$ CEPH_CONF=ceph.conf.logging NUM_THREADS=3 ./testrados
[22:55] <gregaf> 0 :rados_pool_create = 0
[22:55] <gregaf> 0 :rados_ioctx_create = 0, io_ctx = 0x7fe8e8010b80
[22:55] <gregaf> terminate called after throwing an instance of 'std::logic_error'
[22:55] <gregaf> what(): basic_string::_S_construct NULL not valid
[22:55] <gregaf> Aborted (core dumped)
[22:55] <gregaf> I think the issues are probably in testrados rather than the messenger, from what I'm picking up so far
[23:02] <gregaf> cmccabe: and it looks like maybe you're still using some hard-coded "foo" objects/pools without any thread distinguishers?
[23:03] <cmccabe> it should work
[23:03] <cmccabe> if that causes a crash, rgw is doomed
[23:03] <cmccabe> that's why I think we need to revisit some of the assumptions here
[23:06] <gregaf> maybe I'm just making assumptions about what this does, I'm not reading it too closesly
[23:06] <gregaf> *closely
[23:06] <cmccabe> no, you're right. It simultaneously operates on the same object from different threads, with no external locking
[23:06] <gregaf> but rgw can handle stuff disappearing or already existing whereas I assume this code is checking errors and stuff?
[23:07] <cmccabe> it probably needs some fixes. the earlier code didn't do much checking because in the single threaded-case, it can't fail.
[23:07] <gregaf> yeah
[23:07] <cmccabe> or shouldn't fail
[23:08] <gregaf> I mean this works for me most of the time with lowish thread counts, but I don't think it's thread-safe for the assumptions it's making
[23:09] <cmccabe> not sure I follow your reasoning
[23:09] <gregaf> and I have no idea how librados handles threading any more so I can't comment on that, but when I see attempts to deref null variables that's somebody violating a contract
[23:09] <gregaf> well it looks like they're doing stuff in the "foo" pool
[23:10] <gregaf> and each one tries to create it
[23:10] <gregaf> and then each one deletes it at the end
[23:10] <cmccabe> yep
[23:10] <cmccabe> it should give errors sometimes
[23:10] <cmccabe> but not segfault
[23:10] <gregaf> so if one of them finishes enough ahead of the others to delete the foo pool, kaplooey
[23:10] <cmccabe> sure. error should happen
[23:10] <cmccabe> that's why we have error codes
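The contract gregaf and cmccabe converge on here, where concurrent create/delete of a shared pool may surface error codes but must never segfault, can be modeled in a few lines. This is a toy stand-in for the librados calls, not librados itself; the class and error values are only illustrative (the codes mirror -EEXIST and -ENOENT):

```python
import threading

class ToyCluster:
    """Stand-in for a pool namespace with a C-style, error-code API:
    operations return codes instead of raising, like librados."""
    def __init__(self):
        self.lock = threading.Lock()
        self.pools = set()

    def create(self, name):
        with self.lock:
            if name in self.pools:
                return -17  # -EEXIST: someone else created it first
            self.pools.add(name)
            return 0

    def delete(self, name):
        with self.lock:
            if name not in self.pools:
                return -2   # -ENOENT: someone else deleted it first
            self.pools.remove(name)
            return 0

def worker(cluster, results, i):
    # every thread targets the same 'foo' pool with no external locking,
    # so races are expected; each return code is recorded, never ignored
    results[i] = (cluster.create('foo'), cluster.delete('foo'))
```

Running several workers at once, some threads see errors and that is fine; what would not be fine is a crash, which is the distinction cmccabe draws above.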
[23:12] <gregaf> *shrug*
[23:12] * jbd (~jbd@ks305592.kimsufi.com) has joined #ceph
[23:12] <gregaf> I can't seem to get any messenger bugs to appear, anyway, so unless you want some help I'm going to go back to losing my battle with teuthology
[23:12] <gregaf> :)
[23:14] <cmccabe> ok
[23:14] <cmccabe> I have to finish one other bugfix anyway
[23:16] <Tv> joshd: what work do you have going on with teuthology?
[23:16] <joshd> Tv: fixing up the kernel install, and locking
[23:26] <Tv> joshd: + # radosgw does not get shutdown with apache
[23:26] <Tv> that's probably an rgw bug
[23:26] <Tv> not using fcgi library right or something
[23:26] <Tv> care to file a ticket?
[23:26] <joshd> sure
[23:27] <Tv> it's harder because you use daemon-helper with SIGKILL
[23:27] <Tv> so apache doesn't do graceful shutdown at all
[23:27] <Tv> but still, that should be detected
[23:30] <Tv> if nothing else, we can make apache not start rgw, but start & control rgw separately, just let them use the unix domain socket to communicate
[23:30] <Tv> then we get to kill rgw explicitly, without relying on killall
[23:31] <Tv> + path='/tmp/cephtest/apache/htdocs/rgw.fcgi',
[23:31] <Tv> + data="""#!/bin/sh
[23:31] <Tv> +ulimit -c unlimited
[23:31] <Tv> +/tmp/cephtest/binary/usr/local/bin/radosgw -c /tmp/cephtest/ceph.conf
[23:31] <Tv> +"""
[23:31] <Tv> + )
[23:31] <Tv> that could use an exec, to avoid the shell staying in between
[23:31] <Tv> that may actually help it shut down properly
[23:31] <joshd> I think I tried that, and it didn't help
[23:32] <Tv> even if it doesn't change that aspect, it's still nicer not to have extra shells around
[23:32] <Tv> +email = test+foo@dreamhost.example.com.test
[23:32] <joshd> fair enough
[23:32] <Tv> please use .invalid, it's actually reserved for that
[23:33] <joshd> ok
[23:37] <Tv> the radosgw_admin user creation needs to happen only once, i think now it does it for every client
[23:37] <Tv> if you're using the same users for them all...
[23:38] <Tv> you know, i'd actually argue for generating random access keys and secrets
[23:38] <joshd> that's true, I'd rather use different ones for each client
[23:39] <wido> talking about user creation
[23:39] <Tv> joshd: because i'm afraid of nodes accidentally talking across cluster
[23:39] <wido> is it correct that new users are suspended by default?
[23:40] <joshd> yehudasa: ^
[23:40] <Tv> joshd: so if that'd do something like user_id and email contain the client id, access_key and secret_key are base64 of os.urandom(...)
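Tv's suggestion, with the client id embedded in user_id and email but access_key and secret_key drawn from the OS RNG, might look like the following sketch (field names follow his description, not necessarily what teuthology ended up with; the `.invalid` TLD is the reserved one he asked for earlier):

```python
import base64
import os

def make_credentials(client_id):
    """Random per-client S3-style credentials, so nodes in one test
    cluster can never accidentally authenticate against another."""
    return {
        'user_id': 'test-user-{0}'.format(client_id),
        'email': 'test-{0}@test.invalid'.format(client_id),
        'access_key': base64.b64encode(os.urandom(15)).decode(),
        'secret_key': base64.b64encode(os.urandom(30)).decode(),
    }
```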
[23:40] <wido> noticed it today
[23:40] <yehudasa> joshd: <
[23:40] <Tv> joshd: summary: if it works, feel free to merge, i'd like to see the above cleanups/improvements
[23:41] <yehudasa> wido: are you on the latest?
[23:41] <wido> on 0.30
[23:41] <joshd> Tv: thanks, I'd like to clean up the user creation first
[23:41] <Tv> joshd: oh fyi the conversation wido and yehudasa are having has impact for this too, there's an automated radosgw_admin user enable there that might be unnecessary
[23:41] <yehudasa> wido: well, that's not the latest.. there was a short lived bug, might have gone into 0.30
[23:41] <Tv> joshd: yeah just please keep track of what i listed ;)
[23:42] <joshd> Tv: sure
[23:43] <wido> yehudasa: ah, ok. I'll try a newer version tomorrow

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.