#ceph IRC Log

IRC Log for 2012-10-06

Timestamps are in GMT/BST.

[0:00] <rweeks> I knew I was channeling something good. :)
[0:03] <tren> Tv_: lol!
[0:07] * gregaf1 (~Adium@2607:f298:a:607:b805:59b1:a7d9:a820) has joined #ceph
[0:07] * gregaf (~Adium@38.122.20.226) Quit (Quit: Leaving.)
[0:21] * gregaf (~Adium@2607:f298:a:607:f059:6fdb:5366:6f3f) has joined #ceph
[0:27] * gregaf1 (~Adium@2607:f298:a:607:b805:59b1:a7d9:a820) Quit (Quit: Leaving.)
[0:28] * gminks_ (~ginaminks@108-210-41-138.lightspeed.austtx.sbcglobal.net) has joined #ceph
[0:40] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Quit: adjohn)
[0:41] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[0:44] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[0:46] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[0:49] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Quit: adjohn)
[0:51] * cornchips (~nate@85.114.54.27) has joined #ceph
[0:53] * cblack101 (c037362a@ircip3.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[0:56] <sagewk> slang: first wip-client-stale patch looks good
[0:56] <sagewk> slang: on the second, it may be better for the stale logic to skip sessions in the opening state and avoid the possibility entirely
[0:58] <sagewk> slang: hmm, in fact it looks like find_idle_sessions() should only be picking off OPEN sessions, not OPENING ones...
[0:59] <slang> sagewk: I thought it did that
[0:59] <slang> get_oldest_session(Session::STATE_OPEN)
[0:59] <slang> grr smiley
[1:00] <sagewk> in master you mean? that's what i'm looking at
[1:01] <slang> yes, it looks like it only picks off sessions in STATE_OPEN
[1:01] <sagewk> in that case, i think the 54ab1de6bc3207bb62ac5b8b4db6094702c088c0 (top of wip-client-stale) commit is unnecessary/pointless.. because we won't send stale messages on OPENING sessions
[1:02] <sagewk> oooh, i see.
[1:02] <sagewk> nevermind :)
[1:02] <slang> oh, but STATE_OPEN gets set before ...
[1:02] <sagewk> in that case, i'd just move the touch inside _session_logged()
[1:02] <sagewk> right where it sets the state to OPEN
[1:03] <slang> ok
[1:03] <sagewk> we try to keep the Context finish() methods just calling out to the method that does all the work, since we don't normally look there
[1:03] <slang> *nods*
[1:03] <slang> yeah that makes sense
[1:03] <sagewk> thanks, with that change let's pull it into master
[1:03] <slang> k
[1:04] <slang> just that one or the client side changes too?
[1:06] * chutzpah (~chutz@199.21.234.7) has joined #ceph
[1:07] <slang> sagewk: it looks like a similar issue might exist with renewcaps
[1:07] <sagewk> k
[1:07] <sagewk> both
[1:07] * tren (~Adium@184.69.73.122) Quit (Quit: Leaving.)
[1:07] <sagewk> client patch looks good
[1:08] * ninkotech (~duplo@89.177.137.231) Quit (Remote host closed the connection)
[1:08] <slang> i.e. if its stale, we probably want to touch again before sending
[1:10] * sagelap1 (~sage@38.122.20.226) has joined #ceph
[1:16] <Tv_> sagewk: oh and fyi the safety guards you asked are bugs #3259 #3256 etc
[1:28] * Tv_ (~tv@2607:f298:a:607:5c1e:e9a0:aa30:35e7) Quit (Ping timeout: 480 seconds)
[1:31] * jlogan1 (~Thunderbi@2600:c00:3010:1:4d70:2bbd:6949:8d94) Quit (Ping timeout: 480 seconds)
[1:34] * miroslavk (~miroslavk@c-98-234-186-68.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[1:36] <cornchips> got this issue: i just installed two osds on two separate nodes... i fire off a 600 mb transfer and it takes about 30 seconds to complete on the server side.. the client returns immediately.. it seems as soon as a user makes a copy, some buffer fills up on the server. However, the problem is once this buffer is full, copies during the sync are terribly slow. so, when i fire another transfer, the server seems to get overloaded... i got a dstat dump on both, show
[1:37] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:38] <cornchips> running fedora 17.. 0.52... an ambitious project to sit on this fs.. the nodes themselves are commodity (not in the gluster sense ;o)
[1:38] <gregaf> cornchips: what interface, clients, etc are you using?
[1:40] <cornchips> realtek 8111C , took some tweaking to make sure the drivers were good.. they arent that high class (but im not sure if it could be a fifo thing).. the client is one machine mounted... using dd, iozone and cp with timestamps to test.
[1:40] <gregaf> sorry, I meant are you using the posix filesystem, raw RADOS, RBD, and with the userspace or kernel versions
[1:41] <gregaf> sounds like CephFS, but using ceph-fuse or the kernel client?
[1:41] <cornchips> have ceph installed and doing a mount -t ceph storage1(2):/ /mnt/ceph
[1:41] <gregaf> okay
[1:42] <gregaf> how much memory is on the servers and the client?
[1:43] <cornchips> 8 on one server, 4 on the other, and 4 on the client
[1:43] <gregaf> and what's the exact commands you're using for the test?
[1:43] <cornchips> ecc ddr3.. let me load em up
[1:44] <gregaf> can you walk me through your test scenario a bit more explicitly?
[1:46] <gregaf> if you're not forcing those tools to do syncs it sounds to me like maybe you're just writing 600MB into local RAM, but then when you start up the second client it starts flushing out all that data and has to wait for that to complete before doing anything else
[1:46] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[1:46] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) has joined #ceph
[1:59] <cornchips> sry, the dd i cant seem to find.. one of them is /opt/iozone/bin/iozone -c -e -r 64k -s 1g -l 2 -i 0 -i 1 -F /mnt/ceph/iozone.tmp, the other is a date && cp bigiso.iso /mnt/ceph/iozone.tmp && date
[1:59] <cornchips> the test is simple: two nodes, each has a separate root and data disks.. of varying size
[2:00] <cornchips> exactly what you said... it seems it is stuck syncing
[2:00] <cornchips> however, its sync rate seems a bit slow
[2:01] <gregaf> what's your ceph.conf look like, and what's the output of ceph -s?
[2:01] <gregaf> (pastebin, please :) )
[2:01] <cornchips> yep
[2:02] <cornchips> http://pastebin.com/9BLpQEJN
[2:03] <cornchips> http://pastebin.com/GSCVW0YH
[2:05] <gregaf> heh, if you move the "debug ms = 1" line out of your global section you won't get that output in your user tools
[2:05] <cornchips> thanks! i was wondering about that
[2:06] <cornchips> it was a collab so a mix of stuff went in there
[2:06] <gregaf> so you've got a disk mounted at /srv/eph/osd/<osd_name> on each node?
[2:06] <gregaf> nothing here looks obviously wrong
[2:07] <gregaf> what filesystem is on the OSD disks?
[2:07] <gregaf> and what kind of throughput are you seeing that seems slow?
[2:08] <cornchips> xfs... throughput is awesome.. only when the buffer gets full then it seems like there is heavy contention
[2:08] * rweeks (~rweeks@c-98-234-186-68.hsd1.ca.comcast.net) has left #ceph
[2:08] <gregaf> you are running these tests from a third computer, right?
[2:09] <gregaf> can you run "ceph -w" and then in another terminal run "ceph osd tell \* bench"?
[2:09] <cornchips> yep. i get like 30mbs-50mbs to the disk on each servers dstat...
[2:09] <cornchips> ok ceph -w on a server and the bench local?
[2:09] * lofejndif (~lsqavnbok@9YYAAJLM4.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[2:10] <gregaf> just two separate terminals
[2:10] <gregaf> it'll tell each OSD to run a basic benchmark on its store and the results are reported to the "central log" which you can watch with ceph -w
[2:10] <cornchips> right on it
[2:10] <cornchips> ceph -w to a log ?
[2:11] <gregaf> nah, just leave it open; it doesn't get much traffic
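A minimal sketch of the two-terminal workflow gregaf is describing, using only the commands already quoted in this conversation:

    # terminal 1: watch the cluster's central log
    ceph -w

    # terminal 2: tell every OSD to run its built-in write benchmark;
    # each OSD's result line shows up later in the ceph -w output
    ceph osd tell \* bench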
[2:12] <cornchips> k, done
[2:15] <cornchips> during the bench im getting heavy writes on both servers
[2:15] <gregaf> you should see something like "bench: wrote 1024 MB in blocks of 4MB in x sec at y/sec" come out of the log at some point
[2:15] <gregaf> you should be seeing writes; it's going to write 1GB through the full pipeline in 4MB chunks
[2:15] <cornchips> http://pastebin.com/aVYxKFDk
[2:15] <cornchips> ends right there
[2:16] <cornchips> writes die down after about 2 min
[2:16] <cornchips> rather a min
[2:16] <gregaf> that'll show up in the ceph -w window; the command to tell it to bench just kicks off the process
[2:16] <gregaf> doesn't wait for it to finish the benchmark
[2:16] <cornchips> bench: wrote 1024 MB in blocks of 4096 KB in 64.935760 sec at 16147 KB/sec
[2:16] <cornchips> bench: wrote 1024 MB in blocks of 4096 KB in 30.975039 sec at 33852 KB/sec
[2:16] <cornchips> bench: wrote 1024 MB in blocks of 4096 KB in 62.892138 sec at 16672 KB/sec
[2:16] <cornchips> the third
[2:17] <gregaf> okay, so one of your OSDs under essentially ideal conditions can only write at 16MB/s
[2:17] <gregaf> given you have two OSDs, that would explain why it's slow
[2:18] <cornchips> one of them is behaving improperly? how can i dig more where the hangup is?
[2:18] <cornchips> some queue?
[2:18] <gregaf> it might just be that you have a slow disk — did you say it gets 30-60MB in disk benchmarks?
[2:19] <cornchips> i can eval each one's xfs if you'd like
[2:19] <gregaf> the OSD journal takes all writes before they go to the permanent store, so when you co-locate them on one drive that means the drive takes every write twice
[2:19] <gregaf> and since all writes are by default replicated to two nodes, that means every write needs to go on that OSD before it's considered stable
[2:19] <gregaf> so yeah, you should run some disk benchmarks (including syncs to flush all the data out to disk!) and see what you can get out of the raw disk
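For context, a hedged ceph.conf sketch of the journal separation gregaf is alluding to; the section name, host, and device path below are illustrative assumptions, not cornchips' actual configuration:

    [osd.0]
        host = storage1
        # putting the journal on a separate partition or device means the
        # data disk no longer has to absorb every client write twice
        osd journal = /dev/sdb1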
[2:19] <cornchips> back in a min with them
[2:20] * nhm (~nhm@174-20-35-45.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[2:22] * sagelap (~sage@248.sub-70-197-144.myvzw.com) has joined #ceph
[2:22] <cornchips> http://pastebin.com/uxBRu2Av
[2:26] * sagelap1 (~sage@38.122.20.226) Quit (Ping timeout: 480 seconds)
[2:27] * BManojlovic (~steki@212.200.240.160) has joined #ceph
[2:30] <gregaf> I'm not super-familiar with iozone — that's claiming that it had two simultaneous writers and both saw a minimum of ~34MB/s?
[2:30] * Cube1 (~Adium@12.248.40.138) Quit (Quit: Leaving.)
[2:30] * sagelap (~sage@248.sub-70-197-144.myvzw.com) Quit (Ping timeout: 480 seconds)
[2:33] <joao> my experience with iozone is that it must be used carefully if one wants results that actually mean something
[2:33] <cornchips> i think the throughput per process avg is the most important... showing that a disk on one is getting a write of 70mbs a sec and a disk on the other is getting a 67mbs sec
[2:34] <joao> cornchips, how much ram do you have on that machine?
[2:34] <cornchips> is that command look right: /opt/iozone/bin/iozone -c -e -r 64k -s 1g -l 2 -i 0 -i 1 -F /mnt/ceph/iozone.tmp?
[2:34] <cornchips> 8 on one, 4 on the other
[2:35] <joao> from my experience with iozone, if you want proper results on disk throughput, you should create a big, big file that does not fit in memory, and you should use a read workload after the creation of the file
[2:35] <joao> otherwise, the page cache will do wonders for your write throughput
[2:36] <joao> but then again, you're using a whole lot of options, and I don't remember what each and every one of them does ;)
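A sketch of the kind of run joao is describing, given the larger node's 8 GB of RAM: write a file bigger than memory, include close and flush in the timing, then re-read it. The 16 GB size is an assumption chosen only to exceed RAM:

    # -i 0 = sequential write, -i 1 = sequential read,
    # -e/-c include flush and close in the timing
    /opt/iozone/bin/iozone -c -e -r 64k -s 16g -i 0 -i 1 -f /mnt/ceph/iozone.tmp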
[2:37] <gregaf> what we see fairly often is that a disk which can sustain 60-90 MB/s on a single writer will drop precipitously with two writers, but that doesn't appear to be what happened here, assuming iozone isn't lying
[2:37] <gregaf> joao: the -e option is supposed to include an fsync in the timing calculations
[2:37] <joao> oh, alright then
[2:38] <cornchips> iozone not my favorite option, but other tests have confirmed that the rates are nearly the same... not sure what else i could use... testing the gamut here
[2:38] <cornchips> wish there were config files for it
[2:38] <gregaf> although given my limited experiences I'd be more comfortable with a "time dd <appropriate options> && sync" (however you would need to escape those so time counts the whole thing)
[2:38] <gregaf> but I don't actually expect that to turn up anything different; I'm just very surprised
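One way to phrase the check gregaf is suggesting so that time covers the sync as well; the block size, count, and target path are illustrative assumptions:

    # 1 GB sequential write; conv=fdatasync makes dd wait for the data to reach
    # the device, and the trailing sync flushes anything still in the page cache
    time sh -c 'dd if=/dev/zero of=/mnt/osd-disk/ddtest.bin bs=4M count=256 conv=fdatasync && sync'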
[2:40] <gregaf> unfortunately we've now reached the limit of my debugging options; I've never seen a separate utility and osd bench disagree so dramatically
[2:41] <gregaf> nhm would be my choice if he were around, but he's off :(
[2:41] <gregaf> perhaps put together what we've found here and email ceph-devel asking for advice?
[2:41] * sagelap (~sage@98.sub-70-197-153.myvzw.com) has joined #ceph
[2:42] * sjustlaptop (~sam@m980436d0.tmodns.net) has joined #ceph
[2:43] <cornchips> thanks gregaf! how should i start?
[2:43] <cornchips> pastebins and all..?
[2:43] <gregaf> sure
[2:44] <gregaf> try and organize it but basically dump what you've collected and mention the highlights (osd bench 16MB/s [aka 32MB/s disk], iozone 2*35MB/s)
[2:44] <cornchips> on it... wondering if any ceph devs would be into google groups? kind of makes things usery, no?
[2:44] <cornchips> search and stuff
[2:44] <cornchips> (i know google site:)
[2:45] <gregaf> I've used Google Groups and I hate it :(
[2:46] <sjustlaptop> gregaf: why?
[2:46] <cornchips> i found the labeling, notifications prefs and all that quite soothing. i think some of the models of q/a and issues forms can be expressed/modeled better.
[2:47] <cornchips> (than mailing lists)
[2:47] <gregaf> dunno, maybe just didn't spend enough time setting it up properly
[2:47] <cornchips> i know that "users" love it.. geeks can always get by :-)
[2:47] <cornchips> just my two cents
[2:48] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Quit: Leseb)
[2:50] <gregaf> that's odd because I really just thought it was hard to browse or find things
[2:50] <gregaf> but I don't know that I spent that long at it
[2:52] <cornchips> search is a users biggest help, if they can search existing questions without too much fuss, i think it eases everyone's life, user and devel... i can tell you that some of the fed guys werent happy when i mentioned that.. i still dont understand why
[2:53] <cornchips> might be the masochism of running mailservers pleases them
[2:53] <gregaf> email is pretty ingrained into open source projects, too
[2:53] <gregaf> but yeah, I dunno
[2:54] <joao> I would say that the biggest disadvantage for ceph-devel to run on google groups would be leaving vger behind
[2:55] <joao> maybe a ceph-users@ list, if we happen to need one in the future, could take advantage of a more user-friendly approach like google groups
[2:56] <cornchips> like gluster did the stack* type q/a, im sure they got a devel, but something in that direction would benefit what i see as a promising design
[2:56] <cornchips> not sure how it pans in the long run, but it wouldnt hurt
[2:57] * sjustlaptop (~sam@m980436d0.tmodns.net) Quit (Ping timeout: 480 seconds)
[2:58] <cornchips> unless im off base, and ceph was more of a big dc thing than a commodity small businesses trying to grow built on open sources kinds of thing.
[2:58] <cornchips> but im sure thats not the case, after reading (and listening) in
[2:59] * maelfius (~mdrnstm@66.209.104.107) Quit (Quit: Leaving.)
[3:00] <gregaf> it's sort of got a split personality ;)
[3:01] <cornchips> hahaha, im hoping i can get a little more of homegrown
[3:04] * miroslavk (~miroslavk@c-98-248-210-170.hsd1.ca.comcast.net) has joined #ceph
[3:04] * BManojlovic (~steki@212.200.240.160) Quit (Quit: Ja odoh a vi sta 'ocete...)
[3:04] <cornchips> what kind of workloads would you say its best... could this be something people "could" mount their homedirs on (if the applications were tuned for it)? what about data intensive, crawling/processing/indexing?
[3:05] <gregaf> it depends a lot
[3:05] <cornchips> is this the split thing, controlled by crush algos?
[3:05] <gregaf> with the right hardware, Ceph should be able to handle most workloads
[3:05] <gregaf> rsync is just about pessimal for it atm, although we should be able to improve that
[3:05] <gregaf> but normal use of homedirs it actually ought to handle pretty well
[3:06] <gregaf> and of course streaming large reads and writes are just dandy
[3:07] <cornchips> different pools for different types of load i would assume.. if it becomes necessary? as of right now a big disk in our sky would be nice.... thinking to move a mongo cluster on top of it, any thoughts?
[3:07] <gregaf> depends on how different your loads are, but potentially
[3:07] <gregaf> mongodb on Ceph? no idea about that particular one, sorry
[3:09] <cornchips> i always love pioneering when a lot of things are riding on it.. once and if i get this moving, you can be sure i will make it known, i hope the same for the others here. thanks again gregaf. im out for the night!
[3:12] <gregaf> thanks for testing! make sure to write that email ;)
[3:12] * miroslavk (~miroslavk@c-98-248-210-170.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:17] * cornchips (~nate@85.114.54.27) has left #ceph
[3:17] * sagelap (~sage@98.sub-70-197-153.myvzw.com) Quit (Ping timeout: 480 seconds)
[3:40] * chutzpah (~chutz@199.21.234.7) Quit (Quit: Leaving)
[3:59] * deepsa (~deepsa@122.172.212.203) Quit (Quit: Computer has gone to sleep.)
[4:06] * Ryan_Lane (~Adium@216.38.130.165) Quit (Quit: Leaving.)
[4:18] * tren (~Adium@216-19-187-18.dyn.novuscom.net) has joined #ceph
[4:20] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) Quit (Quit: Leaving.)
[4:23] * Cube1 (~Adium@184.251.50.8) has joined #ceph
[4:24] * miroslavk (~miroslavk@173-228-38-131.dsl.dynamic.sonic.net) has joined #ceph
[4:41] * davidz1 (~Adium@2607:f298:a:607:64ee:4859:aab8:92e8) Quit (Quit: Leaving.)
[5:01] * Cube1 (~Adium@184.251.50.8) Quit (Quit: Leaving.)
[5:07] * miroslavk (~miroslavk@173-228-38-131.dsl.dynamic.sonic.net) Quit (Quit: Leaving.)
[5:12] * sjustlaptop (~sam@66-214-139-112.dhcp.gldl.ca.charter.com) has joined #ceph
[5:30] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) Quit (Quit: tryggvil)
[6:02] * gminks_ (~ginaminks@108-210-41-138.lightspeed.austtx.sbcglobal.net) Quit (Quit: gminks_)
[6:11] * sjustlaptop (~sam@66-214-139-112.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[6:22] * gaveen (~gaveen@112.134.112.234) has joined #ceph
[6:51] * grant (~grant@60-240-78-43.static.tpgi.com.au) has joined #ceph
[6:51] * grant (~grant@60-240-78-43.static.tpgi.com.au) Quit ()
[6:51] * grant (~grant@60-240-78-43.static.tpgi.com.au) has joined #ceph
[6:53] <grant> Hi guys, I just rebuilt my cluster with 0.48.2 and kernel 3.5.0.16 for new btrfs backend. Whilst "stress testing" by dd'ing lots of data in to my cephfs mount point, I am seeing lots of [WRN] Slow request for various OSDs.
[6:53] <grant> Is there something I should be looking at specifically to troubleshoot these?
[6:54] <grant> The slow requests are always around 33+ seconds old.
[6:55] <grant> 2012-10-06 14:54:33.917619 osd.13 [WRN] slow request 60.205080 seconds old, received at 2012-10-06 14:53:33.712476: osd_op(mds.0.1:49917 200.00000012 [write 776061~11142] 1.9b9a1338) v4 currently waiting for sub ops
[6:55] <grant> 2012-10-06 14:54:29.625590 osd.16 [WRN] slow request 30.046442 seconds old, received at 2012-10-06 14:53:59.579065: osd_op(client.6058.1:1286296 10000000004.0003a9dd [write 0~4194304] 0.c0c2651d snapc 1=[]) currently waiting for sub ops
[7:03] * gregaf (~Adium@2607:f298:a:607:f059:6fdb:5366:6f3f) Quit (Read error: Connection reset by peer)
[7:03] * gregaf (~Adium@2607:f298:a:607:f059:6fdb:5366:6f3f) has joined #ceph
[7:05] * hijacker_ (~hijacker@213.91.163.5) has joined #ceph
[7:05] * hijacker (~hijacker@213.91.163.5) Quit (Read error: Connection reset by peer)
[7:26] * Cube1 (~Adium@184.251.50.8) has joined #ceph
[7:28] * Cube1 (~Adium@184.251.50.8) Quit ()
[7:53] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[8:17] * grant (~grant@60-240-78-43.static.tpgi.com.au) Quit (Ping timeout: 480 seconds)
[8:36] * Cube1 (~Adium@184.251.50.8) has joined #ceph
[8:37] * loicd (~loic@82.235.173.177) Quit (Quit: Leaving.)
[8:39] * loicd (~loic@magenta.dachary.org) has joined #ceph
[8:42] * grant (~grant@60-240-78-43.static.tpgi.com.au) has joined #ceph
[8:45] * Cube1 (~Adium@184.251.50.8) Quit (Quit: Leaving.)
[8:46] * Cube1 (~Adium@184.251.50.8) has joined #ceph
[8:49] * tziOm (~bjornar@ti0099a340-dhcp0358.bb.online.no) has joined #ceph
[8:49] <tziOm> How would one back up a ceph cluster to external location? Is that even something considered?
[8:53] * gaveen (~gaveen@112.134.112.234) Quit (Remote host closed the connection)
[8:55] * hijacker_ (~hijacker@213.91.163.5) Quit (Ping timeout: 480 seconds)
[8:55] * grant (~grant@60-240-78-43.static.tpgi.com.au) Quit (Ping timeout: 480 seconds)
[9:04] * gaveen (~gaveen@112.134.112.179) has joined #ceph
[9:07] * dmick (~dmick@2607:f298:a:607:1a03:73ff:fedd:c856) Quit (Quit: Leaving.)
[9:18] * hijacker (~hijacker@213.91.163.5) has joined #ceph
[9:22] * gregaf1 (~Adium@2607:f298:a:607:f059:6fdb:5366:6f3f) has joined #ceph
[9:24] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[9:28] * gregaf (~Adium@2607:f298:a:607:f059:6fdb:5366:6f3f) Quit (Ping timeout: 480 seconds)
[9:29] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[9:52] * Cube1 (~Adium@184.251.50.8) Quit (Quit: Leaving.)
[10:01] * grant (~grant@60-240-78-43.static.tpgi.com.au) has joined #ceph
[10:20] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[10:23] * loicd (~loic@90.84.146.244) has joined #ceph
[10:31] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) has joined #ceph
[10:31] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Quit: adjohn)
[10:50] * grant (~grant@60-240-78-43.static.tpgi.com.au) Quit (Ping timeout: 480 seconds)
[10:50] * grant (~grant@60-240-78-43.static.tpgi.com.au) has joined #ceph
[11:20] * tren (~Adium@216-19-187-18.dyn.novuscom.net) Quit (Quit: Leaving.)
[11:23] * BManojlovic (~steki@212.200.240.160) has joined #ceph
[11:36] * tziOm (~bjornar@ti0099a340-dhcp0358.bb.online.no) Quit (Remote host closed the connection)
[11:48] * loicd (~loic@90.84.146.244) Quit (Ping timeout: 480 seconds)
[11:50] * Cube1 (~Adium@184.251.50.8) has joined #ceph
[11:54] * Cube1 (~Adium@184.251.50.8) Quit ()
[11:56] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[12:26] * grant (~grant@60-240-78-43.static.tpgi.com.au) Quit (Ping timeout: 480 seconds)
[13:02] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[13:11] * grant (~grant@60-240-78-43.static.tpgi.com.au) has joined #ceph
[13:18] * jamespage (~jamespage@tobermory.gromper.net) Quit (Quit: Coyote finally caught me)
[13:18] * jamespage (~jamespage@tobermory.gromper.net) has joined #ceph
[13:46] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) has joined #ceph
[13:57] * grant (~grant@60-240-78-43.static.tpgi.com.au) Quit (Ping timeout: 480 seconds)
[13:58] * Cube (~cube@12.248.40.138) Quit (Quit: Leaving.)
[13:58] * Cube (~cube@12.248.40.138) has joined #ceph
[14:53] * loicd (~loic@jem75-2-82-233-234-24.fbx.proxad.net) has joined #ceph
[15:14] * lofejndif (~lsqavnbok@04ZAAABOC.tor-irc.dnsbl.oftc.net) has joined #ceph
[15:14] * mdxi (~mdxi@74-95-29-182-Atlanta.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[15:20] * gminks_ (~ginaminks@108-210-41-138.lightspeed.austtx.sbcglobal.net) has joined #ceph
[15:23] * SvenDowideit (~SvenDowid@203-206-171-38.perm.iinet.net.au) Quit (Ping timeout: 480 seconds)
[15:23] * mdxi (~mdxi@74-95-29-182-Atlanta.hfc.comcastbusiness.net) has joined #ceph
[15:29] * gaveen (~gaveen@112.134.112.179) Quit (Ping timeout: 480 seconds)
[15:37] * gaveen (~gaveen@112.134.112.212) has joined #ceph
[15:46] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[15:49] * gminks_ (~ginaminks@108-210-41-138.lightspeed.austtx.sbcglobal.net) Quit (Quit: gminks_)
[15:56] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[16:51] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) Quit (Quit: tryggvil)
[16:56] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) has joined #ceph
[16:56] * nhm (~nhm@174-20-35-45.mpls.qwest.net) has joined #ceph
[16:57] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[17:00] * ogelbukh (~weechat@nat3.4c.ru) Quit (Ping timeout: 480 seconds)
[17:03] * ogelbukh (~weechat@nat3.4c.ru) has joined #ceph
[17:07] * lofejndif (~lsqavnbok@04ZAAABOC.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[17:31] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) Quit (Quit: tryggvil)
[17:31] * gminks_ (~ginaminks@108-210-41-138.lightspeed.austtx.sbcglobal.net) has joined #ceph
[17:33] * gminks_ (~ginaminks@108-210-41-138.lightspeed.austtx.sbcglobal.net) Quit ()
[17:34] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) has joined #ceph
[17:36] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) Quit ()
[17:37] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) has joined #ceph
[17:58] * yehudasa (~yehudasa@2607:f298:a:607:a441:8a7d:2bd6:4130) Quit (Ping timeout: 480 seconds)
[18:07] * yehudasa (~yehudasa@2607:f298:a:607:d6be:d9ff:fe8e:174c) has joined #ceph
[18:12] * cornchips (~nate@85.114.54.27) has joined #ceph
[18:17] <cornchips> howdy again... carefully crafted my email. either i must be functionally incapable or old school mailing lists are fucking stupid ... three attempts at sending and subscribing to ceph-devel with    ceph-devel@vger.kernel.org. tried sending the commands as specified..
[18:18] <cornchips> returned : Delivery to the following recipient failed permanently:
[18:19] <joao> that's odd
[18:20] <joao> the list appears to be working just fine; received an email roughly 1h ago
[18:21] <cornchips> what do i do, send the auth line in a message, right?
[18:21] <joao> is that the confirmation email?
[18:22] <cornchips> y, You can interact with the Majordomo software by sending it commands
[18:22] <cornchips> in the body of mail messages addressed to "Majordomo@vger.kernel.org". i sent him auth *** subscribe ceph-devel maizechips@gmail.com
[18:24] <joao> yeah, that sounds about right
[18:24] <joao> are you sure it's a plain text email?
[18:24] <joao> vger is known for refusing non-plain text emails
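For reference, a plain-text subscription request can be sent from a shell along these lines; the use of mail(1) here is an assumption, and the follow-up auth confirmation that Majordomo sends back gets replied to in the same plain-text way, as cornchips did above:

    # vger's Majordomo reads commands from the body of a plain-text message
    echo "subscribe ceph-devel" | mail -s "subscribe ceph-devel" Majordomo@vger.kernel.org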
[18:25] <cornchips> ugh. hahahaha.. this is retarded.
[18:25] <joao> no it's not
[18:25] <joao> patches usually come in emails
[18:26] <cornchips> you cant build great open software if people have to jump in to this depth just to report a "bug"... i understand the software development process, thoroughly... i prefer code review systems
[18:26] <joao> you can use the tracker as well
[18:26] <cornchips> no codereview going on in ceph?
[18:26] <joao> the mailing list is just awesome to share more stuff with the community
[18:27] <joao> we do code reviewing, yes
[18:27] <joao> on github mainly, or any patches that come to the mailing list
[18:27] <cornchips> so, someone has to transfer of a patch from a mailing list to git?
[18:27] <cornchips> *of
[18:28] <cornchips> or direct on git is accepted too?
[18:29] <joao> well, traditionally, patches are taken from the mailing list; I believe git itself has features for that
[18:29] <joao> I've seen pull requests on github too though
[18:29] <cornchips> when you say "the mailing list is just awesome to share more stuff with the community"... what community?
[18:29] <joao> the ceph community
[18:29] <joao> whoever is subscribed to the list; and then all archives are public -- gmane, for instance, takes care of that
[18:30] <joao> so, technically, to share with anyone that either follows the list or checks the list's archives
[18:32] <joao> besides, the list may be the best medium to share infos, behaviors, or stuff that might be target of participation of other interested parties
[18:32] <cornchips> community includes users too, of which i dont see many.... just people "playing".. granted the only commercial installation i know of is dreamhost storage service. if ceph keeps going this way, gluster is going to just chomp it out... need some user/marketing folk, bad
[18:34] <joao> inktank has marketing people putting the ceph word out in the world; I'm sure others do some of it too
[18:34] <cornchips> anyways, enough ranting, back to work.. what do i do with message
[18:34] <joao> but even those that are just "playing" with it are users or potential users
[18:36] <joao> with the message? well, sending the auth email in plain-text might be the solution
[18:37] <joao> I'm not sure what kind of error or why you are having troubles with it
[18:37] <cornchips> ok.. i promise you it was easier to do this (10 sec) vs the mailing list (several minutes, including pissing about it here)... https://groups.google.com/forum/#!forum/ceph-devel
[18:39] <cornchips> i dont care to be admin. please someone take it from me.
[18:40] * gminks_ (~ginaminks@108-210-41-138.lightspeed.austtx.sbcglobal.net) has joined #ceph
[18:41] <joao> cornchips, I, for one, would suggest you could tell that on the mailing list :p
[18:42] <cornchips> hahaha
[18:42] <cornchips> i hope sage is listening somewhere!
[18:48] <cornchips> hahaha, 4th attempt... **** Address already subscribed to ceph-devel
[18:49] <joao> there you go then ;)
[18:49] <cornchips> 5th attempt
[18:51] <cornchips> nuts
[18:51] <cornchips> not the healthy kind either
[18:53] * gaveen (~gaveen@112.134.112.212) Quit (Remote host closed the connection)
[18:59] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (Quit: This computer has gone to sleep)
[18:59] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[19:07] * nhm (~nhm@174-20-35-45.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[19:08] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[19:09] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit ()
[19:12] * loicd (~loic@jem75-2-82-233-234-24.fbx.proxad.net) Quit (Quit: Leaving.)
[19:14] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) Quit (Quit: tryggvil)
[19:30] * gaveen (~gaveen@112.134.112.148) has joined #ceph
[19:33] * loicd (~loic@90.84.144.183) has joined #ceph
[19:40] * cornchips (~nate@85.114.54.27) has left #ceph
[19:50] * loicd (~loic@90.84.144.183) Quit (Read error: No route to host)
[19:50] * loicd (~loic@90.84.144.183) has joined #ceph
[20:05] * gminks_ (~ginaminks@108-210-41-138.lightspeed.austtx.sbcglobal.net) Quit (Quit: gminks_)
[20:07] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[20:10] * loicd (~loic@90.84.144.183) Quit (Ping timeout: 480 seconds)
[20:17] * nhm (~nhm@174-20-35-45.mpls.qwest.net) has joined #ceph
[20:22] * tren (~Adium@2001:470:b:2e8:e1ba:84f7:15ed:4fa7) has joined #ceph
[20:38] * nhm (~nhm@174-20-35-45.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[20:45] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[21:00] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[21:30] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) has joined #ceph
[21:38] * tziOm (~bjornar@ti0099a340-dhcp0358.bb.online.no) has joined #ceph
[21:39] * BManojlovic (~steki@212.200.240.160) Quit (Quit: Ja odoh a vi sta 'ocete...)
[21:41] * BManojlovic (~steki@212.200.240.160) has joined #ceph
[21:56] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[21:57] * loicd (~loic@magenta.dachary.org) has joined #ceph
[21:59] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) Quit (Quit: tryggvil)
[22:17] * miroslavk (~miroslavk@173-228-38-131.dsl.dynamic.sonic.net) has joined #ceph
[22:20] * gaveen (~gaveen@112.134.112.148) Quit (Remote host closed the connection)
[22:38] * danieagle (~Daniel@177.99.134.150) has joined #ceph
[23:01] * gminks_ (~ginaminks@108-210-41-138.lightspeed.austtx.sbcglobal.net) has joined #ceph
[23:14] * miroslavk (~miroslavk@173-228-38-131.dsl.dynamic.sonic.net) Quit (Quit: Leaving.)
[23:21] * tziOm (~bjornar@ti0099a340-dhcp0358.bb.online.no) Quit (Remote host closed the connection)
[23:25] * danieagle (~Daniel@177.99.134.150) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[23:35] * tren (~Adium@2001:470:b:2e8:e1ba:84f7:15ed:4fa7) Quit (Quit: Leaving.)
[23:39] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[23:58] * grant (~grant@60-240-78-43.static.tpgi.com.au) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.