#ceph IRC Log

IRC Log for 2012-08-29

Timestamps are in GMT/BST.

[0:01] * jlogan (~chatzilla@2600:c00:3010:1:787f:2aee:1e81:7627) has joined #ceph
[0:01] * jlogan (~chatzilla@2600:c00:3010:1:787f:2aee:1e81:7627) Quit ()
[0:03] * jlogan (~Thunderbi@2600:c00:3010:1:787f:2aee:1e81:7627) has joined #ceph
[0:05] * alexxy[home] (~alexxy@2001:470:1f14:106::2) has joined #ceph
[0:06] * wido__ (~wido@2a00:f10:104:206:9afd:45af:ae52:80) has joined #ceph
[0:07] * masterpe_ (~masterpe@87.233.7.43) has joined #ceph
[0:08] * wido (~wido@2a00:f10:104:206:9afd:45af:ae52:80) Quit (Ping timeout: 480 seconds)
[0:08] * masterpe (~masterpe@2001:990:0:1674::1:82) Quit (Ping timeout: 480 seconds)
[0:09] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (Ping timeout: 480 seconds)
[0:16] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Quit: Leaving)
[0:34] * The_Bishop (~bishop@2a01:198:2ee:0:70ca:a7e5:e493:f4df) has joined #ceph
[0:39] * sjust (~sam@2607:f298:a:607:baac:6fff:fe83:5a02) has joined #ceph
[0:41] * bchrisman (~Adium@108.60.121.114) Quit (Quit: Leaving.)
[0:43] * JT (~john@astound-64-85-239-164.ca.astound.net) has joined #ceph
[0:52] <elder> dmick, I am at the moment.
[0:53] <elder> dmick, I was on the call this morning but my mute button was not working, so I sat silent. But I heard you mention something about an RBD kernel client bug.
[0:53] <dmick> hey, yeah, that
[0:54] <dmick> so I'm not sure whether it's ours or btrfs's, but I know it's easy to reproduce
[0:54] <dmick> http://tracker.newdream.net/issues/2937
[0:54] <dmick> I was trying to collect some more info about it, but
[0:55] <dmick> basically, I think it's that bio_split is unhappy with what rbd_rq_fn is feeding it
[0:55] <elder> OK. Let me look at the code to see what's going on. Actually, give me about 5 minutes first to go grab a plate of food.
[0:56] <sjust> 'osd auto upgrade tmap' and 'osd tmapput sets users tmap' should probably be in a different section if they are included at all
[0:56] <sjust> they are only relevant to older versions of ceph (pre-argonaut)
[0:56] <JT> Ok. I added those at josh's suggestion, but agree they are probably lower on the priority list. In your opinion, what are the most important OSD settings in ordinal rank?
[0:58] <JT> I see there is an 'osd uuid' setting, but haven't seen it used in a system or in an example. However, I've seen fsid, and generating a uuid for the cluster. How would we use uuid for OSDs?
[0:59] <sjust> actually, that one is probably important for chef deployments, we'll need to consult TV
[1:00] <JT> Ok. For now, I'll assume uuid is similar to what we do in Chef deployment.
[1:00] <sjust> so, the most important ones are probably the set of settings referenced in the install sequence
[1:01] <sjust> so 'osd journal', 'osd data'
[1:01] <sjust> 'keyring'
[1:01] <sjust> after that, we are talking more about performance, right?
[1:01] <joshd> also pretty important are public addr/network and cluster addr/network
[1:01] <sjust> good catch
[1:01] <joshd> don't think those are in the docs at all yet
[1:01] <JT> Ok. I have that in a separate file now. I can send that along, but let me pull it up and tell you what I have...
[1:02] <sjust> those settings allow the user to specify which network interfaces are used for what kind of traffic
[1:02] <sjust> and critically to separate the osd-osd traffic from the osd-client traffic
[1:02] <JT> Ok... for 'public network' I have "The IP address and netmask of the public (front-side) network (e.g., ``10.20.30.40/24``). "
[1:03] <JT> For 'public addr' I have "The IP address for the public (front-side) network."
[1:03] <JT> So this is the front-side network. Is that what Ceph uses by default?
[1:04] <JT> In other words, if you do not specify either the "public network" or "cluster network" settings, Ceph works. How is it working?
[1:04] <sjust> it will bind to some address
[1:04] <sjust> it just may not be the one you want
[1:04] <Cube> So yeah, thats an important setting.
[1:04] <sjust> specifying the address or network allows you to restrict the interface used
[1:05] <JT> For 'cluster network' I have "The IP address and netmask of the cluster (back-side) network (e.g., ``10.20.30.41/24``). "
[1:05] <Cube> Thats for all the osd-osd traffic, correct?
[1:05] <JT> For 'cluster addr' I have "The IP address for the cluster (back-side) network."
[1:05] <sjust> Yeah, that's for osd-osd traffic (and osd-mon traffic for that matter)
[1:06] <JT> Cube. I believe so. The reason is that when you have 3+ replicas, OSD to OSD traffic is substantially higher than client-to-OSD traffic.
[1:06] <sjust> right
[1:07] <JT> These settings, namely 'cluster network' 'cluster addr' 'public network' and 'public addr' should be set under [global], correct?
[1:08] <gregaf> cluster network and public network should be; public addr and cluster addr are per-daemon settings
[1:08] <sjust> no, addr would definitely be a property of a particular osd (you wouldn't want all of the osds to bind to the same network!) and network could be either a property of a particular osd or of osds as a whole
[1:08] <gregaf> since those are specific addresses ;)
[1:08] <sjust> *****bind to the same address
[1:08] <sjust> you probably would want them to bind to the same network...
[1:09] <Cube> network in global, addr per osd
[1:09] <JT> Got it.
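To pull the above together, a minimal ceph.conf sketch might look like the following; the networks, paths, and OSD id are hypothetical examples, not values taken from this conversation:

    [global]
        ; cluster-wide settings: the networks apply to all daemons
        public network  = 10.20.30.0/24    ; front-side (client) network
        cluster network = 10.20.40.0/24    ; back-side (osd-osd, osd-mon) network
        keyring = /etc/ceph/keyring

    [osd]
        osd data    = /var/lib/ceph/osd/ceph-$id
        osd journal = /var/lib/ceph/osd/ceph-$id/journal

    [osd.0]
        ; addr settings are per-daemon: each daemon binds its own address
        public addr  = 10.20.30.41
        cluster addr = 10.20.40.41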
[1:09] * tnt (~tnt@11.164-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[1:11] <JT> What about IP tables for these then? I know that the processes listen on ports starting at 6800. For port settings, I have: iptables -A INPUT -m multiport -p tcp -s 192.168.1.0/24 --dports 6789,6800:6803 -j ACCEPT. Of course, I'm running a small cluster. But I'd need to have ports open on both the front and back-side networks, right?
[1:12] <JT> So I'd have to specify my network, my monitor ports, and a range of ports for my OSDs and MDSs, correct?
[1:14] <elder> dmick, back.
[1:17] <sjust> yeah, you would
[1:17] * jlogan (~Thunderbi@2600:c00:3010:1:787f:2aee:1e81:7627) Quit (Ping timeout: 480 seconds)
[1:17] <sjust> there is a good reason why you can't specify the osd port number, I'm afraid
[1:19] <JT> That's because it binds to the first open port within a range, correct?
[1:19] <sjust> and avoids ports it's used before, but yeah
[1:19] <sjust> when it's marked down incorrectly, it picks a new port, I think
[1:20] <sjust> so the other osds can ignore the old port
[1:20] <Cube> ahh, okay
[1:22] <JT> So if you are deploying a cluster with 200 OSDs, does it just increment higher? For example, if you've used port 6899, it will try 6900 and up? Also, since the OSD may pick up a new port, do you need to have some open/free ports in addition to the number of hosts/daemons you have running?
[1:22] <sjust> well, it's only on a per machine basis
[1:23] * Cube (~Adium@12.248.40.138) has left #ceph
[1:24] * Cube (~Adium@12.248.40.138) has joined #ceph
[1:24] <JT> Ok, but same question... 200 hosts, and multiple OSDs per host, it will continue to increment past the default 6800 port range, correct?
[1:24] <sjust> eventually
[1:24] <sjust> but you would only have a few osds on a single machine
[1:26] * nhmhome (~nh@67-220-20-222.usiwireless.com) has joined #ceph
[1:26] * nhm_ (~nh@67-220-20-222.usiwireless.com) has joined #ceph
[1:26] <sjust> so you would need to allow through probably 6800-6900 I think
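As a sketch of the resulting firewall rules (networks and port range are hypothetical; adapt them to your own front- and back-side networks):

    # front-side (public) network: monitors on 6789, osd/mds daemons in the 6800 range
    iptables -A INPUT -m multiport -p tcp -s 10.20.30.0/24 --dports 6789,6800:6900 -j ACCEPT
    # back-side (cluster) network: osd-osd and osd-mon traffic uses the same daemon port range
    iptables -A INPUT -m multiport -p tcp -s 10.20.40.0/24 --dports 6800:6900 -j ACCEPT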
[1:26] <JT> What does 'max open files' do? It's disabled by default--i.e., it's set to 0
[1:28] <sjust> it limits the number of file descriptors the osd will leave open
[1:30] <JT> So 'max open files' is better thought of as an OSD setting? Currently, it's not prepended with osd in config_opts.h
[1:30] <sjust> whoa
[1:30] <sjust> nvm
[1:30] <sjust> hang on
[1:31] <sjust> interesting, it appears to be obsolete
[1:32] <JT> If you can verify, that'd be great. Less is more. :)
[1:32] <sjust> nope, wrong again
[1:32] <sjust> ugh
[1:32] <sjust> ok
[1:33] <sjust> if you set it, the ceph service start script will also set the OS-level max open fds
[1:33] <sjust> so it's useful for preventing the ceph-osd daemon from running out of fds
[1:33] <sjust> sorry about that
[1:33] * BManojlovic (~steki@212.200.243.134) Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:36] <JT> It's 0 (disabled) by default. Is there a better setting, or is it optimal to leave it disabled?
[1:37] <JT> Also, regarding 'osd uuid', we have a general 'fsid' setting too. I think Tommi had mentioned something about this... that we should be consistent about using `fsid` or `uuid`.
[1:38] <sjust> it's best to ignore unless you hit problems
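If you do run into fd exhaustion, a hedged example of setting it (the value here is an arbitrary illustration):

    [global]
        ; 0 (the default) leaves the OS limit alone; a non-zero value makes the
        ; service start script raise the max-open-fds limit for the daemons
        max open files = 131072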
[1:38] <sjust> fsid and uuid here are distinct, I think
[1:38] <JT> back to 'osd journal size'. Now I have "Begin with 10GB. It should approximate twice the product of the expected speed and ``filestore_min_sync_interval``."
[1:39] <JT> I'm not completely satisfied with "expected speed." Speed of the disk? The CPU? The network? All three? Read time, write time?
[1:39] <sjust> Oh, sorry
[1:39] <sjust> speed is the expected throughput of the osd backing disk
[1:39] <sjust> so if it's a 7200rpm disk, probably 100MB/s
[1:39] <sjust> more if it's an ssd or something crazy
[1:40] <sjust> 10GB would be quite a bit
[1:40] <sjust> it's unlikely that you'd need more
[1:40] <sjust> (anyone want to interject with better information?)
[1:41] <Leseb> network + disk speed
[1:41] <Leseb> if you store the journal within the same disk as the backend filesystem
[1:41] <sjust> actually, that should be filestore_max_sync_interval, not min_sync_interval
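Putting that correction into a rough worked example (assuming a 100 MB/s spinning disk and the default filestore_max_sync_interval of 5 seconds):

    journal size >= 2 * expected disk throughput * filestore_max_sync_interval
                  = 2 * 100 MB/s * 5 s
                  = 1000 MB  (about 1 GB, i.e. roughly 'osd journal size = 1000', since the setting is in MB)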
[1:42] <JT> Most of the filestore stuff isn't commented, and I'm only about halfway through that. For 'filestore min sync interval', I have: The minimum interval in seconds for synchronizing the filestore.
[1:42] <sjust> you mean MIN(network, disk speed)?
[1:42] <sjust> that's right
[1:42] <JT> Ok... as I said, I have very little on that at this point. For 'filestore max sync interval', I have: "The maximum interval in seconds for synchronizing the filestore."
[1:43] <Leseb> JT: 5 right?
[1:43] <JT> I have 5 for default max, and .01 for default min.
[1:44] <JT> Would anyone want to change these defaults? Should they?
[1:44] <Leseb> it depends: how long do you want your data to remain in the journal?
[1:45] <Leseb> 5 is ok
[1:46] <Leseb> if your network writes at 100MB/sec and your disks at 100MB/sec, and the journal is stored on the same hdd as the backend fs, that splits your writes, because you write to the journal and then to the backend fs
[1:46] <Leseb> so ~50MB/sec for the journal and ~50MB/sec for the backend fs
[1:46] <sjust> Actually, 5 is probably a bit on the low side, you might want to adjust the max_sync_interval up to reduce the sync frequency
[1:47] <sjust> so filestore_max_sync_interval should probably be pointed out as a good place to look when tuning the osd throughput
[1:48] <sjust> especially on small IOs
[1:48] <JT> So reading between the lines, I'm getting that it's a good practice to store the journal on a separate disk from the OSD data?
[1:48] <sjust> a larger sync_interval might allow the filesystem to do a better job of coalescing writes
[1:48] <Leseb> sjust: it depends on whether you're battery-backed or not, I guess
[1:49] <sjust> Leseb: what do you mean?
[1:49] <sjust> JT: separate disk maximizes throughput, but there are tradeoffs
[1:49] <sjust> one option is to use one fast ssd for 3-4 osds
[1:49] <Leseb> sjust: you don't really want to wait too long for your sync, right?
[1:49] <sjust> for the journals
[1:50] <sjust> Leseb: if the filesystem is doing a decent job of flushing data anyway (which actually isn't the case) then a larger sync interval just gives it the freedom to put off flushing things like metadata
[1:50] <sjust> if the interval contains several writes to the same object, then you might be able to update the metadata fewer times
[1:51] <sjust> or coalesce the smaller writes better
[1:51] <Leseb> sjust: ok, I got your point
[1:51] <sjust> unfortunately we don't generally see benefits from larger sync intervals at the moment
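For reference, a sketch of that tuning knob with an arbitrary example value; whether it actually helps depends on the workload, as noted above:

    [osd]
        ; default is 5 seconds; raising it gives the filesystem more room to
        ; coalesce small writes before each sync, at the cost of more journal use
        filestore max sync interval = 10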
[1:52] <sjust> which brings us to the filestore_flusher
[1:52] <JT> Not to change the subject too drastically, but for ``filestore`` I have no description, and the default is "false" ...
[1:52] <sjust> wow
[1:52] <sjust> hang on
[1:54] <sjust> I don't see it used anywhere, we should probably ignore that one for now
[1:54] <sjust> the filestore_flusher option enables the filestore flusher which for large writes calls sync_file_range on the written range
[1:54] <sjust> it's an effort to encourage the filesystem to write out data before the full sync hopefully shortening the sync operation
[1:55] <sjust> it defaults to true. turning it off usually improves performance
[1:55] <Cube> Why is it defaulted to true?
[1:56] <sjust> not really sure, arguably we should change the default
[1:56] * Leseb_ (~Leseb@79.142.65.25) has joined #ceph
[1:56] <Cube> okay, just making sure I'm not missing anything
[1:56] <sjust> we just haven't done the legwork to say conclusively which way is better
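A hedged example of turning the flusher off when benchmarking, per the discussion above (test on your own workload before changing it in production):

    [osd]
        ; defaults to true; disabling it often improves throughput in practice
        filestore flusher = false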
[1:57] <JT> What is 'filestore fiemap'? It defaults to "false"; the comment says // (try to) use fiemap
[1:58] <sjust> joshd: I think you know a bit more about that one
[1:59] * yoshi (~yoshi@p28146-ipngn1701marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[2:01] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Ping timeout: 480 seconds)
[2:01] * Leseb_ is now known as Leseb
[2:01] <JT> There are still a bunch of osd settings I don't have any descriptions for:
[2:01] <JT> osd command max records
[2:02] <JT> osd kill backfill at
[2:03] <Cube> osd backfill scan min/max
[2:03] <JT> Yes. Don't have descriptions for those either.
[2:04] <sjust> filestore fiemap is disabled by default because it hardly ever works correctly
[2:04] <sjust> it allows the osd to determine which bits of a file have actually been written and do efficient sparse reads
[2:04] <sjust> but the underlying fs support is very spotty so we don't use it currently
[2:05] <sjust> osd kill backfill at is a debug option, it's for a specific test
[2:06] <sjust> osd command max records restricts the max output from an osd command
[2:07] <sjust> it appears to be used in particular to restrict how many missing objects we return
[2:07] <JT> You mean the size of a json or text dump?
[2:07] <sjust> in this case, it really just limits how many missing objects we return
[2:09] * Leseb (~Leseb@79.142.65.25) Quit (Quit: Leseb)
[2:10] <sjust> it doesn't appear to be used anywhere else
[2:10] <sjust> so not important
[2:10] * lofejndif (~lsqavnbok@09GAAHY8H.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[2:11] <yehudasa> gregaf: wip-2923, wip-swift-manifest can use some spare eyeballs
[2:12] * Cube (~Adium@12.248.40.138) Quit (Quit: Leaving.)
[2:12] <yehudasa> gregaf: oh, forget about wip-2923 for now, I want to work on it a bit more
[2:19] * maelfius (~mdrnstm@66.209.104.107) Quit (Quit: Leaving.)
[2:26] * JT (~john@astound-64-85-239-164.ca.astound.net) Quit (Quit: Leaving)
[2:28] * chutzpah (~chutz@100.42.98.5) Quit (Quit: Leaving)
[2:44] * mrjack_ (mrjack@office.smart-weblications.net) Quit ()
[2:53] * The_Bishop (~bishop@2a01:198:2ee:0:70ca:a7e5:e493:f4df) Quit (Ping timeout: 480 seconds)
[3:02] * The_Bishop (~bishop@2a01:198:2ee:0:98b0:e899:6285:5c44) has joined #ceph
[3:19] * The_Bishop (~bishop@2a01:198:2ee:0:98b0:e899:6285:5c44) Quit (Ping timeout: 480 seconds)
[3:27] * The_Bishop (~bishop@2a01:198:2ee:0:70ca:a7e5:e493:f4df) has joined #ceph
[3:55] * adjohn (~adjohn@0127ahost2.starwoodbroadband.com) has joined #ceph
[3:55] * adjohn (~adjohn@0127ahost2.starwoodbroadband.com) Quit ()
[3:58] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) Quit (Quit: Leaving.)
[4:11] * The_Bishop (~bishop@2a01:198:2ee:0:70ca:a7e5:e493:f4df) Quit (Quit: Wer zum Teufel ist dieser Peer? Wenn ich den erwische dann werde ich ihm mal die Verbindung resetten!)
[5:03] * Ryan_Lane (~Adium@216.38.130.164) Quit (Quit: Leaving.)
[5:35] * deepsa (~deepsa@115.242.55.199) has joined #ceph
[5:41] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[5:43] * dmick (~dmick@2607:f298:a:607:44ac:37a3:2aad:d0eb) Quit (Quit: Leaving.)
[5:46] * eightyeight (~atoponce@pinyin.ae7.st) Quit (Read error: Operation timed out)
[5:47] * eightyeight (~atoponce@pinyin.ae7.st) has joined #ceph
[6:08] * glowell (~Adium@c-98-210-226-131.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[6:08] * glowell (~Adium@c-98-210-226-131.hsd1.ca.comcast.net) has joined #ceph
[6:27] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:27] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[6:31] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[6:36] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) has joined #ceph
[6:36] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) has left #ceph
[6:39] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) has joined #ceph
[7:17] * renzhi (~renzhi@180.169.73.90) Quit (Ping timeout: 480 seconds)
[7:53] * renzhi (~renzhi@180.169.73.90) has joined #ceph
[8:23] * tnt (~tnt@11.164-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[8:28] * mikeryan (mikeryan@lacklustre.net) Quit (Ping timeout: 480 seconds)
[8:48] * verwilst (~verwilst@d5152D6B9.static.telenet.be) has joined #ceph
[8:56] * Ryan_Lane1 (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[8:56] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[8:58] * mikeryan (mikeryan@2600:3c00::f03c:91ff:fe96:571) has joined #ceph
[9:01] * loicd (~loic@brln-4dbc3b23.pool.mediaWays.net) has joined #ceph
[9:02] * masterpe_ is now known as masterpe
[9:11] * deepsa (~deepsa@115.242.55.199) Quit (Remote host closed the connection)
[9:12] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:13] * Ryan_Lane1 (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[9:17] * deepsa (~deepsa@117.203.5.60) has joined #ceph
[9:23] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[9:25] * tnt (~tnt@11.164-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[9:31] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[9:31] * Leseb (~Leseb@193.172.124.196) has joined #ceph
[9:37] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[9:42] * tnt (~tnt@212-166-48-236.win.be) has joined #ceph
[9:43] * andret (~andre@pcandre.nine.ch) Quit (Remote host closed the connection)
[9:44] * andret (~andre@pcandre.nine.ch) has joined #ceph
[9:47] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[9:58] * mikeryan (mikeryan@2600:3c00::f03c:91ff:fe96:571) Quit (Remote host closed the connection)
[9:59] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[10:06] * mikeryan (mikeryan@lacklustre.net) has joined #ceph
[10:06] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[10:07] * steki-BLAH (~steki@85.222.178.138) has joined #ceph
[10:07] * BManojlovic (~steki@91.195.39.5) Quit (Read error: Operation timed out)
[10:10] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has joined #ceph
[10:15] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * nhmhome (~nh@67-220-20-222.usiwireless.com) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * MK_FG (~MK_FG@188.226.51.71) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * mkampe (~markk@38.122.20.226) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * ajm (~ajm@adam.gs) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * rturk (~rturk@ps94005.dreamhost.com) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * kblin (~kai@kblin.org) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * dpemmons (~dpemmons@204.11.135.146) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * rosco (~r.nap@188.205.52.204) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * gregaf (~Adium@2607:f298:a:607:4990:c1e3:3fe9:b77f) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * eternaleye (~eternaley@tchaikovsky.exherbo.org) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * steki-BLAH (~steki@85.222.178.138) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * tnt (~tnt@212-166-48-236.win.be) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * renzhi (~renzhi@180.169.73.90) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * SpamapS (~clint@xencbyrum2.srihosting.com) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * markl (~mark@tpsit.com) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * liiwi (liiwi@idle.fi) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * thingee_zz (~thingee@ps91741.dreamhost.com) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * jantje_ (~jan@paranoid.nl) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * stass (stas@ssh.deglitch.com) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * rz_ (~root@ns1.waib.com) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * dabeowulf (dabeowulf@free.blinkenshell.org) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * sjust (~sam@2607:f298:a:607:baac:6fff:fe83:5a02) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * vhasi (vhasi@vha.si) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * scheuk (~scheuk@67.110.32.249.ptr.us.xo.net) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * epitron_ (~epitron@bito.ponzo.net) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * asadpanda (~asadpanda@67.231.236.80) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * Enigmagic (enigmo@c-24-6-51-229.hsd1.ca.comcast.net) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * iggy (~iggy@theiggy.com) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * Leseb (~Leseb@193.172.124.196) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * glowell (~Adium@c-98-210-226-131.hsd1.ca.comcast.net) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * nhm_ (~nh@67-220-20-222.usiwireless.com) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * sage (~sage@cpe-76-94-40-34.socal.res.rr.com) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * johnl (~johnl@2a02:1348:14c:1720:edc7:d5b3:c28d:eaf) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * Ludo_ (~Ludo@falbala.zoxx.net) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * ninkotech_ (~duplo@89.177.137.231) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * Meths (rift@2.25.193.120) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * trhoden (~trhoden@pool-108-28-184-160.washdc.fios.verizon.net) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * jeffhung_ (~jeffhung@60-250-103-120.HINET-IP.hinet.net) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * __jt__ (~james@jamestaylor.org) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * Azrael (~azrael@terra.negativeblue.com) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * brambles (xymox@grip.espace-win.org) Quit (charon.oftc.net resistance.oftc.net)
[10:15] * cephalobot` (~ceph@ps94005.dreamhost.com) Quit (charon.oftc.net resistance.oftc.net)
[10:17] * Leseb (~Leseb@193.172.124.196) has joined #ceph
[10:17] * glowell (~Adium@c-98-210-226-131.hsd1.ca.comcast.net) has joined #ceph
[10:17] * nhm_ (~nh@67-220-20-222.usiwireless.com) has joined #ceph
[10:17] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) has joined #ceph
[10:17] * sage (~sage@cpe-76-94-40-34.socal.res.rr.com) has joined #ceph
[10:17] * johnl (~johnl@2a02:1348:14c:1720:edc7:d5b3:c28d:eaf) has joined #ceph
[10:17] * Ludo_ (~Ludo@falbala.zoxx.net) has joined #ceph
[10:17] * ninkotech_ (~duplo@89.177.137.231) has joined #ceph
[10:17] * Meths (rift@2.25.193.120) has joined #ceph
[10:17] * trhoden (~trhoden@pool-108-28-184-160.washdc.fios.verizon.net) has joined #ceph
[10:17] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) has joined #ceph
[10:17] * jeffhung_ (~jeffhung@60-250-103-120.HINET-IP.hinet.net) has joined #ceph
[10:17] * __jt__ (~james@jamestaylor.org) has joined #ceph
[10:17] * cephalobot` (~ceph@ps94005.dreamhost.com) has joined #ceph
[10:17] * brambles (xymox@grip.espace-win.org) has joined #ceph
[10:17] * Azrael (~azrael@terra.negativeblue.com) has joined #ceph
[10:18] * verwilst (~verwilst@d5152D6B9.static.telenet.be) Quit (Killed (synthon.oftc.net (Nick collision (new))))
[10:18] * verwilst (~verwilst@d5152D6B9.static.telenet.be) has joined #ceph
[10:18] * steki-BLAH (~steki@85.222.178.138) has joined #ceph
[10:18] * tnt (~tnt@212-166-48-236.win.be) has joined #ceph
[10:18] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[10:18] * renzhi (~renzhi@180.169.73.90) has joined #ceph
[10:18] * sjust (~sam@2607:f298:a:607:baac:6fff:fe83:5a02) has joined #ceph
[10:18] * SpamapS (~clint@xencbyrum2.srihosting.com) has joined #ceph
[10:18] * markl (~mark@tpsit.com) has joined #ceph
[10:18] * liiwi (liiwi@idle.fi) has joined #ceph
[10:18] * thingee_zz (~thingee@ps91741.dreamhost.com) has joined #ceph
[10:18] * jantje_ (~jan@paranoid.nl) has joined #ceph
[10:18] * stass (stas@ssh.deglitch.com) has joined #ceph
[10:18] * rz_ (~root@ns1.waib.com) has joined #ceph
[10:18] * dabeowulf (dabeowulf@free.blinkenshell.org) has joined #ceph
[10:18] * vhasi (vhasi@vha.si) has joined #ceph
[10:18] * scheuk (~scheuk@67.110.32.249.ptr.us.xo.net) has joined #ceph
[10:18] * epitron_ (~epitron@bito.ponzo.net) has joined #ceph
[10:18] * asadpanda (~asadpanda@67.231.236.80) has joined #ceph
[10:18] * Enigmagic (enigmo@c-24-6-51-229.hsd1.ca.comcast.net) has joined #ceph
[10:18] * iggy (~iggy@theiggy.com) has joined #ceph
[10:19] * eternaleye (~eternaley@tchaikovsky.exherbo.org) has joined #ceph
[10:19] * mkampe (~markk@38.122.20.226) has joined #ceph
[10:19] * ajm (~ajm@adam.gs) has joined #ceph
[10:19] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) has joined #ceph
[10:19] * rturk (~rturk@ps94005.dreamhost.com) has joined #ceph
[10:19] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) has joined #ceph
[10:19] * kblin (~kai@kblin.org) has joined #ceph
[10:19] * dpemmons (~dpemmons@204.11.135.146) has joined #ceph
[10:19] * rosco (~r.nap@188.205.52.204) has joined #ceph
[10:19] * gregaf (~Adium@2607:f298:a:607:4990:c1e3:3fe9:b77f) has joined #ceph
[10:19] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[10:19] * MK_FG (~MK_FG@188.226.51.71) has joined #ceph
[10:19] * nhmhome (~nh@67-220-20-222.usiwireless.com) has joined #ceph
[10:19] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) has joined #ceph
[10:28] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) Quit (synthon.oftc.net resistance.oftc.net)
[10:28] * nhmhome (~nh@67-220-20-222.usiwireless.com) Quit (synthon.oftc.net resistance.oftc.net)
[10:28] * MK_FG (~MK_FG@188.226.51.71) Quit (synthon.oftc.net resistance.oftc.net)
[10:28] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (synthon.oftc.net resistance.oftc.net)
[10:28] * gregaf (~Adium@2607:f298:a:607:4990:c1e3:3fe9:b77f) Quit (synthon.oftc.net resistance.oftc.net)
[10:28] * rosco (~r.nap@188.205.52.204) Quit (synthon.oftc.net resistance.oftc.net)
[10:28] * dpemmons (~dpemmons@204.11.135.146) Quit (synthon.oftc.net resistance.oftc.net)
[10:28] * rturk (~rturk@ps94005.dreamhost.com) Quit (synthon.oftc.net resistance.oftc.net)
[10:28] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) Quit (synthon.oftc.net resistance.oftc.net)
[10:28] * ajm (~ajm@adam.gs) Quit (synthon.oftc.net resistance.oftc.net)
[10:28] * mkampe (~markk@38.122.20.226) Quit (synthon.oftc.net resistance.oftc.net)
[10:28] * eternaleye (~eternaley@tchaikovsky.exherbo.org) Quit (synthon.oftc.net resistance.oftc.net)
[10:28] * kblin (~kai@kblin.org) Quit (synthon.oftc.net resistance.oftc.net)
[10:28] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) Quit (synthon.oftc.net resistance.oftc.net)
[10:29] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) has joined #ceph
[10:29] * nhmhome (~nh@67-220-20-222.usiwireless.com) has joined #ceph
[10:29] * MK_FG (~MK_FG@188.226.51.71) has joined #ceph
[10:29] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[10:29] * gregaf (~Adium@2607:f298:a:607:4990:c1e3:3fe9:b77f) has joined #ceph
[10:29] * rosco (~r.nap@188.205.52.204) has joined #ceph
[10:29] * dpemmons (~dpemmons@204.11.135.146) has joined #ceph
[10:29] * kblin (~kai@kblin.org) has joined #ceph
[10:29] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) has joined #ceph
[10:29] * rturk (~rturk@ps94005.dreamhost.com) has joined #ceph
[10:29] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) has joined #ceph
[10:29] * ajm (~ajm@adam.gs) has joined #ceph
[10:29] * mkampe (~markk@38.122.20.226) has joined #ceph
[10:29] * eternaleye (~eternaley@tchaikovsky.exherbo.org) has joined #ceph
[10:40] * mkampe (~markk@38.122.20.226) Quit (resistance.oftc.net oxygen.oftc.net)
[10:40] * ajm (~ajm@adam.gs) Quit (resistance.oftc.net oxygen.oftc.net)
[10:40] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) Quit (resistance.oftc.net oxygen.oftc.net)
[10:40] * rturk (~rturk@ps94005.dreamhost.com) Quit (resistance.oftc.net oxygen.oftc.net)
[10:40] * dpemmons (~dpemmons@204.11.135.146) Quit (resistance.oftc.net oxygen.oftc.net)
[10:40] * rosco (~r.nap@188.205.52.204) Quit (resistance.oftc.net oxygen.oftc.net)
[10:40] * gregaf (~Adium@2607:f298:a:607:4990:c1e3:3fe9:b77f) Quit (resistance.oftc.net oxygen.oftc.net)
[10:40] * nhmhome (~nh@67-220-20-222.usiwireless.com) Quit (resistance.oftc.net oxygen.oftc.net)
[10:40] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (resistance.oftc.net oxygen.oftc.net)
[10:40] * kblin (~kai@kblin.org) Quit (resistance.oftc.net oxygen.oftc.net)
[10:40] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) Quit (resistance.oftc.net oxygen.oftc.net)
[10:40] * MK_FG (~MK_FG@188.226.51.71) Quit (resistance.oftc.net oxygen.oftc.net)
[10:40] * eternaleye (~eternaley@tchaikovsky.exherbo.org) Quit (resistance.oftc.net oxygen.oftc.net)
[10:40] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) Quit (resistance.oftc.net oxygen.oftc.net)
[10:40] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) has joined #ceph
[10:40] * nhmhome (~nh@67-220-20-222.usiwireless.com) has joined #ceph
[10:40] * MK_FG (~MK_FG@188.226.51.71) has joined #ceph
[10:40] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[10:40] * gregaf (~Adium@2607:f298:a:607:4990:c1e3:3fe9:b77f) has joined #ceph
[10:40] * rosco (~r.nap@188.205.52.204) has joined #ceph
[10:40] * dpemmons (~dpemmons@204.11.135.146) has joined #ceph
[10:40] * kblin (~kai@kblin.org) has joined #ceph
[10:40] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) has joined #ceph
[10:40] * rturk (~rturk@ps94005.dreamhost.com) has joined #ceph
[10:40] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) has joined #ceph
[10:40] * ajm (~ajm@adam.gs) has joined #ceph
[10:40] * mkampe (~markk@38.122.20.226) has joined #ceph
[10:40] * eternaleye (~eternaley@tchaikovsky.exherbo.org) has joined #ceph
[10:40] * steki-BLAH (~steki@85.222.178.138) Quit (Ping timeout: 480 seconds)
[10:44] * mkampe (~markk@38.122.20.226) Quit (resistance.oftc.net oxygen.oftc.net)
[10:44] * ajm (~ajm@adam.gs) Quit (resistance.oftc.net oxygen.oftc.net)
[10:44] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) Quit (resistance.oftc.net oxygen.oftc.net)
[10:44] * rturk (~rturk@ps94005.dreamhost.com) Quit (resistance.oftc.net oxygen.oftc.net)
[10:44] * dpemmons (~dpemmons@204.11.135.146) Quit (resistance.oftc.net oxygen.oftc.net)
[10:44] * rosco (~r.nap@188.205.52.204) Quit (resistance.oftc.net oxygen.oftc.net)
[10:44] * gregaf (~Adium@2607:f298:a:607:4990:c1e3:3fe9:b77f) Quit (resistance.oftc.net oxygen.oftc.net)
[10:44] * nhmhome (~nh@67-220-20-222.usiwireless.com) Quit (resistance.oftc.net oxygen.oftc.net)
[10:44] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (resistance.oftc.net oxygen.oftc.net)
[10:44] * kblin (~kai@kblin.org) Quit (resistance.oftc.net oxygen.oftc.net)
[10:44] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) Quit (resistance.oftc.net oxygen.oftc.net)
[10:44] * MK_FG (~MK_FG@188.226.51.71) Quit (resistance.oftc.net oxygen.oftc.net)
[10:44] * eternaleye (~eternaley@tchaikovsky.exherbo.org) Quit (resistance.oftc.net oxygen.oftc.net)
[10:44] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) Quit (resistance.oftc.net oxygen.oftc.net)
[10:45] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) has joined #ceph
[10:45] * nhmhome (~nh@67-220-20-222.usiwireless.com) has joined #ceph
[10:45] * MK_FG (~MK_FG@188.226.51.71) has joined #ceph
[10:45] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[10:45] * gregaf (~Adium@2607:f298:a:607:4990:c1e3:3fe9:b77f) has joined #ceph
[10:45] * rosco (~r.nap@188.205.52.204) has joined #ceph
[10:45] * dpemmons (~dpemmons@204.11.135.146) has joined #ceph
[10:45] * kblin (~kai@kblin.org) has joined #ceph
[10:45] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) has joined #ceph
[10:45] * rturk (~rturk@ps94005.dreamhost.com) has joined #ceph
[10:45] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) has joined #ceph
[10:45] * ajm (~ajm@adam.gs) has joined #ceph
[10:45] * mkampe (~markk@38.122.20.226) has joined #ceph
[10:45] * eternaleye (~eternaley@tchaikovsky.exherbo.org) has joined #ceph
[10:47] * jamespage (~jamespage@tobermory.gromper.net) has joined #ceph
[10:55] * steki-BLAH (~steki@85.222.178.138) has joined #ceph
[11:01] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[11:04] * steki-BLAH (~steki@85.222.178.138) Quit (Read error: Operation timed out)
[11:23] * ihwtl (~ihwtl@odm-mucoffice-02.odmedia.net) has joined #ceph
[12:20] * mikeryan (mikeryan@lacklustre.net) Quit (Ping timeout: 480 seconds)
[12:32] * renzhi (~renzhi@180.169.73.90) Quit (Quit: Leaving)
[12:39] * yoshi (~yoshi@p28146-ipngn1701marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[12:44] * qubemarker (~qubemarke@182.19.51.2) has joined #ceph
[12:45] <qubemarker> is there any compatibility issue with a 64-bit ceph cluster and a 32-bit client? I am a newbie to ceph.
[12:55] * gregaf1 (~Adium@2607:f298:a:607:f071:cf6f:2842:fbb4) has joined #ceph
[12:56] * gregaf (~Adium@2607:f298:a:607:4990:c1e3:3fe9:b77f) Quit (Read error: Operation timed out)
[13:15] * mikeryan (mikeryan@lacklustre.net) has joined #ceph
[13:29] <NaioN> that shouldn't be a problem
[13:31] * nhorman (~nhorman@2001:470:8:a08:7aac:c0ff:fec2:933b) has joined #ceph
[13:39] <tnt> Mmm, seems that my weird messages in dmesg were fixed by installing ntp ...
[13:39] <tnt> you need proper time sync between nodes for cephx I guess ?
[13:40] <NaioN> well for ceph in general
[13:43] <tnt> Oh really ? I missed that requirement ...
[13:45] <NaioN> yeah you also get warnings if the time drifts more than 2 seconds if I'm correct
[13:46] <NaioN> i saw them when I was experimenting with a couple of VM's and no ntp
[13:47] <joao> we had a couple of issues on our nightly runs due to clock drifting as well
[13:47] <joao> shockingly, ntp was to blame :p
[13:48] <tnt> mmm, even with ntp running we have > 10 sec clock difference.
[13:49] <joao> are you using external ntp servers?
[13:49] <NaioN> physical or virtual?
[13:50] <pmjdebruijn> tnt: more importantly you need to make sure you're syncing against the SAME ntp server
[13:50] <NaioN> are you sure ntp is functional?
[13:58] <tnt> checking that now ... obviously something is not working as it should.
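A quick sketch of how to check that (these commands assume ntpd is installed; the monitors will also flag large skews):

    # verify ntpd has actually synced to a peer (look for a '*' in the first column)
    ntpq -p
    # and see whether ceph itself is warning about clock skew
    ceph health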
[14:03] <qubemarker> Is there any compatibility issue with a 64-bit Ubuntu ceph cluster and a 32-bit Ubuntu kernel client?
[14:03] <NaioN> qubemarker: not that I know
[14:03] <NaioN> It shouldn't be a problem
[14:05] <qubemarker> I am facing some problems with a 64-bit cluster and a 32-bit kernel client. A 64-bit kernel client works fine.
[14:05] <NaioN> which problems?
[14:06] <qubemarker> we are archiving big video files in our test cluster. When reading video files from a 64-bit kernel client there is no issue, but when I read from a 32-bit kernel client it shows an error
[14:07] <pmjdebruijn> with cephfs? or plain filesystem on an rbd
[14:07] <pmjdebruijn> and what particular error?
[14:07] <pmjdebruijn> do you see anything in dmesg?
[14:07] <pmjdebruijn> etc
[14:08] <qubemarker> i have mounted with mount -t ceph command
[14:08] <qubemarker> Nothing dmesg
[14:08] <qubemarker> my video-reading utility shows a "frame out of range" error.
[14:10] <qubemarker> the same utility works fine with a locally mounted USB drive containing the video
[14:10] <qubemarker> the only problem is reading from the ceph cluster.
[14:11] <qubemarker> 32 bit ubuntu OS client is working fine with HDFS cluster.
[14:12] <pmjdebruijn> what do you get when you access it another way
[14:12] <pmjdebruijn> for example
[14:12] <pmjdebruijn> dd if=./fileonceph of=./filetolocaldrive bs=1M
[14:13] <qubemarker> I got this from a forum: "There, your biggest problem tends to be the 32-bit
[14:13] <qubemarker> limitation; ceph inodes are 64-bit, and ceph-fuse really wants to run
[14:13] <qubemarker> on a 64-bit machine." is it true?
[14:13] <qubemarker> i can copy file from 32 bit kernel client.
[14:14] <pmjdebruijn> qubemarker: what does the dd produce
[14:15] <qubemarker> currently i am offline from my cluster.
[14:19] <joao> what forum is that?
[14:20] <qubemarker> ceph DFS development, discussion
[15:19] * aliguori (~anthony@cpe-70-123-140-180.austin.res.rr.com) has joined #ceph
[15:20] <NaioN> qubemarker: could you paste the url?
[15:23] <NaioN> qubemarker: hmmm are you using the fuse module or the in kernel?
[15:43] * deepsa (~deepsa@117.203.5.60) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[15:48] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[15:59] <tnt> When using the Async IO option of librbd, how / when will the callbacks be called ? I mean if I call rbd_aio_write and then do a while(1) in my app, how could the library call my callback ???
[16:02] * kblin (~kai@kblin.org) Quit (Server closed connection)
[16:02] * kblin (~kai@kblin.org) has joined #ceph
[16:15] <qubemarker> NaioN, I am using the kernel client
[16:17] * eternaleye (~eternaley@tchaikovsky.exherbo.org) Quit (Server closed connection)
[16:17] * eternaleye (~eternaley@tchaikovsky.exherbo.org) has joined #ceph
[16:29] * mtk (~mtk@ool-44c35bb4.dyn.optonline.net) has joined #ceph
[16:35] <tnt> Mmm, turns out my weird messages were not due to time difference, because they still happen despite a now-working ntp sync.
[16:35] <tnt> http://pastebin.com/tRDrnRgv
[16:36] <tnt> "cephx: verify_authorizer could not get service secret for service osd secret_id=0"
[16:48] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[16:59] * deepsa (~deepsa@117.203.5.60) has joined #ceph
[17:12] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:12] * allsystemsarego (~allsystem@188.25.135.235) has joined #ceph
[17:17] * andret (~andre@pcandre.nine.ch) Quit (Remote host closed the connection)
[17:18] * andret (~andre@pcandre.nine.ch) has joined #ceph
[17:31] * verwilst (~verwilst@d5152D6B9.static.telenet.be) Quit (Ping timeout: 480 seconds)
[17:33] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[17:41] * Leseb (~Leseb@193.172.124.196) Quit (Read error: Connection reset by peer)
[17:42] * Leseb (~Leseb@193.172.124.196) has joined #ceph
[17:53] * joshd (~joshd@38.122.20.226) has joined #ceph
[17:57] * tnt (~tnt@212-166-48-236.win.be) Quit (Ping timeout: 480 seconds)
[18:04] * Leseb (~Leseb@193.172.124.196) Quit (Quit: Leseb)
[18:10] * mkampe (~markk@38.122.20.226) Quit (Server closed connection)
[18:11] * mkampe (~markk@2607:f298:a:607:222:19ff:fe31:b5d3) has joined #ceph
[18:14] * tnt (~tnt@11.164-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[18:20] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[18:23] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[18:33] * fzylogic (~fzylogic@69.170.166.146) has joined #ceph
[18:35] * jlogan (~Thunderbi@2600:c00:3010:1:e035:f831:b035:e863) has joined #ceph
[18:49] <mikeryan> tnt: take my mailing list response re: aio callbacks with a grain of salt
[18:49] <mikeryan> i've only touched code that uses aio completions a handful of times
[18:57] * nhm (~nhm@67-220-20-222.usiwireless.com) Quit (Ping timeout: 480 seconds)
[18:57] * nhm_ (~nh@67-220-20-222.usiwireless.com) Quit (Ping timeout: 480 seconds)
[18:57] * nhmhome (~nh@67-220-20-222.usiwireless.com) Quit (Ping timeout: 480 seconds)
[18:58] * The_Bishop (~bishop@e179001132.adsl.alicedsl.de) has joined #ceph
[19:01] * nhm (~nhm@67-220-20-222.usiwireless.com) has joined #ceph
[19:01] * nhm_ (~nh@67-220-20-222.usiwireless.com) has joined #ceph
[19:01] * nhmhome (~nh@67-220-20-222.usiwireless.com) has joined #ceph
[19:03] <tnt> mikeryan: ok thanks :) I think I'll copy what the QEMU people have done and use a pipe for interthread comm
[19:06] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Quit: Leseb)
[19:09] * dmick (~dmick@2607:f298:a:607:6472:67d5:d6fe:8557) has joined #ceph
[19:12] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[19:13] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit ()
[19:15] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has left #ceph
[19:18] * The_Bishop (~bishop@e179001132.adsl.alicedsl.de) Quit (Quit: Wer zum Teufel ist dieser Peer? Wenn ich den erwische dann werde ich ihm mal die Verbindung resetten!)
[19:27] <nhm> stand-up is messed up today...
[19:27] <dmick> what's up?
[19:28] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[19:28] <mikeryan> dunno what was happening there, we used the mute on the physical mic
[19:28] <mikeryan> apparently that's insufficient
[19:28] <mikeryan> so we muted it from go to meeting
[19:28] <dmick> weird
[19:28] <sjust> it might be using a different mike
[19:28] <nhm> dmick: pretty bad audio in general, and that crazy thing a min ago
[19:28] <mikeryan> our machine was stuttering badly, perhaps the USB soundcard driver has a bug
[19:28] <mikeryan> it was definitely coming from us, whatever it was
[19:35] * ajm (~ajm@adam.gs) Quit (Server closed connection)
[19:35] * ajm (~ajm@adam.gs) has joined #ceph
[19:41] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[19:56] <nhm> very interesting, adding a second controller and doubling the number of disks and SSDs yielded about a 5-10% increase in performance. For whatever reason, it seems like 650MB-700MB/s is where we top out per node a lot of the time.
[19:59] <nhm> I should try running concurrent workload generators.
[20:00] * ihwtl (~ihwtl@odm-mucoffice-02.odmedia.net) Quit (Ping timeout: 480 seconds)
[20:01] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[20:01] <sjust> nhm: how are you generating the load?
[20:02] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) Quit (Server closed connection)
[20:02] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) has joined #ceph
[20:03] <mikeryan> nhm: requesting pics of your basement
[20:03] <dmick> bow chicka wow wow
[20:06] <nhm> sjust: rados bench to localhost on the osd node
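For reference, a hedged sketch of that kind of load generation (the pool name and parameters are arbitrary):

    # 30-second write benchmark against pool 'test' with 16 concurrent ops,
    # run directly on the osd node to take the client network out of the picture
    rados -p test bench 30 write -t 16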
[20:07] <nhm> mikeryan: I took pics
[20:07] * The_Bishop (~bishop@2a01:198:2ee:0:7577:f786:513e:4122) has joined #ceph
[20:07] <nhm> mikeryan: need to get a blog account
[20:10] * rturk (~rturk@ps94005.dreamhost.com) Quit (Server closed connection)
[20:10] * rturk (~rturk@ps94005.dreamhost.com) has joined #ceph
[20:11] * deepsa (~deepsa@117.203.5.60) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[20:11] <nhm> vidyo changes looked great until I saw that 12.04 isn't supported. :/
[20:12] <gregaf1> seriously?
[20:12] <sjust> NOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
[20:13] <nhm> that's what the email says.
[20:13] <nhm> BTRFS
[20:13] <nhm> XFS
[20:13] <nhm> doh, wrong paste
[20:13] <nhm> 32/64 bit version Ubuntu 10.04, 10.10, 11.04, 11.10
[20:14] <nhm> who knows, maybe it will work anyway.
[20:14] * Ryan_Lane (~Adium@216.38.130.164) has joined #ceph
[20:17] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) Quit (Server closed connection)
[20:17] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) has joined #ceph
[20:32] * houkouonchi-work (~linux@38.122.20.226) has joined #ceph
[20:33] * houkouonchi-work is now known as Sandon
[20:39] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[20:50] * dpemmons (~dpemmons@204.11.135.146) Quit (Server closed connection)
[20:50] * dpemmons (~dpemmons@204.11.135.146) has joined #ceph
[20:51] * jlogan (~Thunderbi@2600:c00:3010:1:e035:f831:b035:e863) Quit (Quit: jlogan)
[20:51] * jlogan (~Thunderbi@2600:c00:3010:1:e035:f831:b035:e863) has joined #ceph
[20:55] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has left #ceph
[21:19] * jlogan (~Thunderbi@2600:c00:3010:1:e035:f831:b035:e863) Quit (Quit: jlogan)
[21:22] * stxShadow (~Jens@ip-178-203-169-190.unitymediagroup.de) has joined #ceph
[21:24] * rosco (~r.nap@188.205.52.204) Quit (Server closed connection)
[21:24] * rosco (~r.nap@188.205.52.204) has joined #ceph
[21:30] * jlogan (~Thunderbi@2600:c00:3010:1:e035:f831:b035:e863) has joined #ceph
[21:32] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Server closed connection)
[21:33] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[21:34] * maelfius (~mdrnstm@pool-71-160-33-115.lsanca.fios.verizon.net) has joined #ceph
[21:48] * MK_FG (~MK_FG@188.226.51.71) Quit (Server closed connection)
[21:49] * MK_FG (~MK_FG@188.226.51.71) has joined #ceph
[21:55] * stxShadow (~Jens@ip-178-203-169-190.unitymediagroup.de) has left #ceph
[21:56] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) Quit (Server closed connection)
[21:56] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) has joined #ceph
[21:57] * Disconnected.
[21:57] -solenoid.oftc.net- *** Looking up your hostname...
[21:57] -solenoid.oftc.net- *** Checking Ident
[21:57] -solenoid.oftc.net- *** No Ident response
[21:57] -solenoid.oftc.net- *** Found your hostname

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.