#ceph IRC Log


IRC Log for 2012-05-24

Timestamps are in GMT/BST.

[0:02] * s[X]_ (~sX]@eth589.qld.adsl.internode.on.net) has joined #ceph
[0:32] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[0:42] * andreask (~andreas@ has joined #ceph
[0:48] * lxo (~aoliva@lxo.user.oftc.net) Quit (Read error: No route to host)
[0:50] * danieagle (~Daniel@ Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[0:58] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[1:13] * sbohrer (~sbohrer@ Quit (Quit: Leaving)
[1:15] * aa (~aa@r200-40-114-26.ae-static.anteldata.net.uy) Quit (Remote host closed the connection)
[1:36] * BManojlovic (~steki@ Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:36] * mgalkiewicz (~mgalkiewi@toya.hederanetworks.net) Quit (Quit: Leaving)
[1:40] * rturk1 (~rturk@aon.hq.newdream.net) has joined #ceph
[1:40] <Qten> Hi All, just a thought, what kind of protection can we use to cover ourselves against malicious intent, eg someone breaking the password to an account and deleting all the vm's and customer backups?
[1:41] <gregaf> Qten: not sure what you're asking about
[1:41] <Qten> well if someone were to hack for example an openstack horizon password, and they deleted the customer vms
[1:41] <elder> sagewk, if the server responds BADAUTHORIZER it looks like we retry a connection attempt once.
[1:41] <sagewk> yeah.
[1:41] <elder> If a second attempt fails, we should just close down the connection completely, right?
[1:41] * detaos_ (~quassel@c-50-131-106-101.hsd1.ca.comcast.net) has joined #ceph
[1:42] <sagewk> yeah, i think so.
[1:42] <elder> OK.
[1:42] <Qten> it'd be great, like zfs, if there was some way to snapshot the whole system :\
[1:42] <sagewk> it's not a beautiful approach, but ok i think.
[1:42] <elder> That isn't (explicitly) happening.
[1:42] <gregaf> oh, I see
[1:43] <sagewk> qten: there is some infrastructure to support that, but it's incomplete.
[1:43] <elder> So I'll make it close the connection in that case. That seems like what is intended, it just isn't really done.
[1:43] <sagewk> elder: sounds good
[1:43] * Damian_ (~Damian@mountainmorningband.com) has joined #ceph
[1:44] * zykes_ (~zykes@184.79-161-107.customer.lyse.net) has joined #ceph
[1:44] <Qten> sagewk: without trying to sound like a panic attack, any guesses as to how far along the list of things to do that might be :)
[1:44] <sagewk> very low priority at the moment.
[1:45] <sagewk> but voicing your desires e.g. on ceph-devel will help us shape our priorities
[1:45] <Qten> as we're looking at being a public cloud i'm a little bit paranoid as i'm sure you can imagine
[1:45] <sagewk> yeah :)
[1:46] * andreask (~andreas@ Quit (resistance.oftc.net weber.oftc.net)
[1:46] * Ryan_Lane (~Adium@ Quit (resistance.oftc.net weber.oftc.net)
[1:46] * rturk (~rturk@aon.hq.newdream.net) Quit (resistance.oftc.net weber.oftc.net)
[1:46] * joao (~JL@aon.hq.newdream.net) Quit (resistance.oftc.net weber.oftc.net)
[1:46] * gohko_ (~gohko@natter.interq.or.jp) Quit (resistance.oftc.net weber.oftc.net)
[1:46] * MK_FG (~MK_FG@ Quit (resistance.oftc.net weber.oftc.net)
[1:46] * detaos (~quassel@c-50-131-106-101.hsd1.ca.comcast.net) Quit (resistance.oftc.net weber.oftc.net)
[1:46] * zykes (~zykes@184.79-161-107.customer.lyse.net) Quit (resistance.oftc.net weber.oftc.net)
[1:46] * Qu310 (~qgrasso@ppp59-167-157-24.static.internode.on.net) Quit (resistance.oftc.net weber.oftc.net)
[1:46] * dpejesh (~dholden@ Quit (resistance.oftc.net weber.oftc.net)
[1:46] * iggy (~iggy@theiggy.com) Quit (resistance.oftc.net weber.oftc.net)
[1:46] * Damian (~Damian@mountainmorningband.com) Quit (resistance.oftc.net weber.oftc.net)
[1:46] * ottodestrukt (~ANONYMOUS@9YYAAELTK.tor-irc.dnsbl.oftc.net) Quit (resistance.oftc.net weber.oftc.net)
[1:46] * Qu310 (~qgrasso@ppp59-167-157-24.static.internode.on.net) has joined #ceph
[1:47] * iggy (~iggy@theiggy.com) has joined #ceph
[1:49] * ivan` (~ivan`@li125-242.members.linode.com) Quit (Quit: ERC Version 5.3 (IRC client for Emacs))
[1:51] * gohko (~gohko@natter.interq.or.jp) has joined #ceph
[1:51] * LarsFronius (~LarsFroni@95-91-243-252-dynip.superkabel.de) Quit (Quit: LarsFronius)
[1:51] * MK_FG (~MK_FG@ has joined #ceph
[1:52] * aliguori (~anthony@cpe-70-123-145-39.austin.res.rr.com) Quit (Remote host closed the connection)
[1:52] * andreask (~andreas@ has joined #ceph
[1:55] * detaos_ is now known as detaos
[1:56] * joao (~JL@aon.hq.newdream.net) has joined #ceph
[1:59] * brambles (brambles@ Quit (Ping timeout: 480 seconds)
[2:07] * Ryan_Lane (~Adium@ has joined #ceph
[2:08] * ivan` (~ivan`@li125-242.members.linode.com) has joined #ceph
[2:08] * ottodestrukt (~ANONYMOUS@9KCAAFK7U.tor-irc.dnsbl.oftc.net) has joined #ceph
[2:09] * rturk1 (~rturk@aon.hq.newdream.net) has left #ceph
[2:09] * LarsFronius (~LarsFroni@95-91-243-252-dynip.superkabel.de) has joined #ceph
[2:11] * bchrisman (~Adium@ Quit (Quit: Leaving.)
[2:13] * ottodestrukt (~ANONYMOUS@9KCAAFK7U.tor-irc.dnsbl.oftc.net) Quit (Remote host closed the connection)
[2:13] * Ryan_Lane (~Adium@ Quit (Quit: Leaving.)
[2:13] * LarsFronius (~LarsFroni@95-91-243-252-dynip.superkabel.de) Quit ()
[2:18] * andreask (~andreas@ Quit (Ping timeout: 480 seconds)
[2:21] * brambles (brambles@ has joined #ceph
[2:22] <CristianDM> Hi. What tool can I use for speed testing? I need to test small-file reads. This is for an email and web server
[2:29] <ajm-> iozone is nice
[2:31] <CristianDM> Thanks. I will test. For the moment the speed is amazing
[2:49] * ottodestrukt (~ANONYMOUS@9YYAAGBXF.tor-irc.dnsbl.oftc.net) has joined #ceph
[2:55] * ottodestrukt (~ANONYMOUS@9YYAAGBXF.tor-irc.dnsbl.oftc.net) Quit (Remote host closed the connection)
[3:00] * Tv_ (~tv@aon.hq.newdream.net) Quit (Ping timeout: 480 seconds)
[3:00] * ottodestrukt (~ANONYMOUS@9YYAAGBXZ.tor-irc.dnsbl.oftc.net) has joined #ceph
[3:05] * ottodestrukt (~ANONYMOUS@9YYAAGBXZ.tor-irc.dnsbl.oftc.net) Quit (Remote host closed the connection)
[3:11] * Ryan_Lane (~Adium@c-98-210-205-93.hsd1.ca.comcast.net) has joined #ceph
[3:12] * ottodestrukt (~ANONYMOUS@9KCAAFLAF.tor-irc.dnsbl.oftc.net) has joined #ceph
[3:12] * ottodestrukt (~ANONYMOUS@9KCAAFLAF.tor-irc.dnsbl.oftc.net) Quit (Remote host closed the connection)
[3:35] * ottodestrukt (~ANONYMOUS@19NAAI1FX.tor-irc.dnsbl.oftc.net) has joined #ceph
[4:05] * dpejesh (~dholden@ has joined #ceph
[4:07] * yoshi (~yoshi@p3167-ipngn3601marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[4:13] * eightyeight (~atoponce@pthree.org) Quit (Ping timeout: 480 seconds)
[4:47] <nhm> sagewk: So at the same time when all of the write meta calls are happening in the blktrace logs, there are tons of write meta and truncate meta events from the filestore in the osd logs. Looks like you were right about it maybe being truncates.
[4:48] <elder> File truncates can be expensive on XFS as I recall.
[4:49] <elder> I may be mistaken though. There was something where only two extents at a time could ever be freed or something.
[4:49] <elder> Whatever the case, I never quite understood it or dug in deep to figure it out.
[4:50] <nhm> elder: I suppose I should go back and see how often this is happening during the rest of the run.
[4:50] <elder> What is being truncated? Files backed by objects, or the objects themselves?
[4:52] <nhm> elder: this is what I get from the logs:
[4:52] <nhm> 2012-05-02 14:44:42.876913 7f6624f65700 10 filestore(/srv/osd.0) truncate meta/28da36e7/pginfo_0.19/0 size 0 = 0
[4:53] <nhm> hrm, I think I need to create some histograms of these.
[4:53] <elder> I need a decoder ring to know what that is telling me.
[4:54] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has left #ceph
[4:54] <nhm> elder: Need Sam or Sage to tell us what that means.
[4:55] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[4:55] <elder> Whoops
[4:56] * chutzpah (~chutz@ Quit (Quit: Leaving)
[5:11] * cattelan_away (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[5:11] * cattelan_away (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) has joined #ceph
[5:14] * nhm_ (~nh@ has joined #ceph
[5:15] * nhm (~nh@ Quit (Read error: Connection reset by peer)
[5:17] * joshd (~joshd@aon.hq.newdream.net) Quit (Quit: Leaving.)
[5:24] * renzhi (~renzhi@raq2064.uk2.net) has joined #ceph
[5:48] * renzhi (~renzhi@raq2064.uk2.net) Quit (Ping timeout: 480 seconds)
[6:04] * renzhi (~renzhi@ has joined #ceph
[6:22] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[6:24] * adjohn (~adjohn@50-0-164-218.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[7:01] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) Quit (Remote host closed the connection)
[7:03] * joao (~JL@aon.hq.newdream.net) Quit (Quit: Leaving)
[7:05] * dmick (~dmick@aon.hq.newdream.net) Quit (Quit: Leaving.)
[7:10] * nhm_ (~nh@ Quit (Remote host closed the connection)
[7:11] * nhm (~nh@ has joined #ceph
[7:16] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) has joined #ceph
[7:48] * cattelan_away is now known as cattelan_away_away
[8:06] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[8:20] * nhm (~nh@ Quit (Ping timeout: 480 seconds)
[8:24] * nhm (~nh@ has joined #ceph
[8:52] * nhm_ (~nh@ has joined #ceph
[8:52] * nhm (~nh@ Quit (Read error: Connection reset by peer)
[9:09] * s[X]_ (~sX]@eth589.qld.adsl.internode.on.net) Quit (Remote host closed the connection)
[9:11] * verwilst (~verwilst@d5152FEFB.static.telenet.be) has joined #ceph
[9:18] * BManojlovic (~steki@ has joined #ceph
[9:20] * CristianDM (~CristianD@host217.190-230-240.telecom.net.ar) Quit (Ping timeout: 480 seconds)
[9:20] * CristianDM (~CristianD@host217.190-230-240.telecom.net.ar) has joined #ceph
[9:27] * Theuni (~Theuni@i59F75233.versanet.de) has joined #ceph
[9:37] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[9:39] * Theuni (~Theuni@i59F75233.versanet.de) Quit (Quit: Leaving.)
[9:49] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[9:59] * Theuni (~Theuni@i59F75233.versanet.de) has joined #ceph
[10:02] * s[X]_ (~sX]@60-241-151-10.tpgi.com.au) has joined #ceph
[10:20] * s[X]_ (~sX]@60-241-151-10.tpgi.com.au) Quit (Remote host closed the connection)
[10:32] * s[X]_ (~sX]@60-241-151-10.tpgi.com.au) has joined #ceph
[11:01] * s[X]_ (~sX]@60-241-151-10.tpgi.com.au) Quit (Remote host closed the connection)
[11:22] * brambles (brambles@ Quit (Quit: leaving)
[11:23] * brambles (brambles@ has joined #ceph
[11:25] * brambles (brambles@ Quit ()
[11:27] * brambles (brambles@ has joined #ceph
[11:29] * brambles (brambles@ Quit ()
[11:30] * brambles (brambles@ has joined #ceph
[11:30] * yoshi (~yoshi@p3167-ipngn3601marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[11:45] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[11:53] * ao (~ao@ has joined #ceph
[12:18] * renzhi (~renzhi@ Quit (Quit: Leaving)
[12:24] * mtk (~mtk@ool-44c35967.dyn.optonline.net) Quit (Remote host closed the connection)
[12:46] * mtk (~mtk@ool-44c35967.dyn.optonline.net) has joined #ceph
[12:50] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[12:56] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[13:05] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[13:07] * Theuni (~Theuni@i59F75233.versanet.de) Quit (Ping timeout: 480 seconds)
[14:08] * aliguori (~anthony@cpe-70-123-145-39.austin.res.rr.com) has joined #ceph
[14:30] <nhm_> good morning #ceph
[14:30] * nhm_ is now known as nhm
[14:34] * Theuni (~Theuni@i59F75233.versanet.de) has joined #ceph
[15:00] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[15:08] <elder> nhm, I'm sorry I did not look at the XFS code for sync stuff before now. Do you want me to do that now, or are you doing something else?
[15:09] <nhm> elder: I'm going to make some histograms of WM over time and then histograms of where we are writing and truncating metadata in ceph.
[15:09] <nhm> elder: I also want to run seekwatcher on an SSD osd to see how many seeks we do when throughput is high.
[15:11] <elder> OK.
[15:11] <elder> I'll take a quick look right now for where anything synchronous gets submitted by XFS.
[15:12] <filoo_absynth> morning, everyone
[15:12] <nhm> morning filoo_absynth
[15:12] <nhm> elder: ok, sounds good
[15:14] * eightyeight (~atoponce@pthree.org) has joined #ceph
[15:25] * mgalkiewicz (~mgalkiewi@toya.hederanetworks.net) has joined #ceph
[15:27] <mgalkiewicz> cant remove rbd volume https://gist.github.com/2781545
[15:30] <elder> nhm, I just wrote a nice message to you on the XFS channel.
[15:30] <elder> I'm a little surprised at this point, but it looks to me like XFS never initiates synchronous I/O itself except when it syncs its log.
[15:30] <elder> If the upper-level writeback code requests data written back synchronously then it will pass that through.
[15:30] <elder> Wait, xfs_file_sync() and a few others will do it too via filemap_write_and_wait_range().
[15:30] <elder> I guess with the knowledge I have now I'm pretty comfortable with the assumption that the synchronous request, and possibly the seek storms (or seek inclemency) are related to requests to sync a file and/or file system.
[15:34] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[15:40] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[15:43] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[15:51] * cattelan_away_away is now known as cattelan_away
[15:51] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[15:55] * jeh (~jeh@ferret.latigid.net) has joined #ceph
[16:04] * renzhi (~renzhi@ has joined #ceph
[16:05] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Quit: LarsFronius)
[16:27] * ssedov (stas@ssh.deglitch.com) Quit (Ping timeout: 480 seconds)
[16:39] <nhm> elder: sorry, was distracted with kids for a while. Yeah, that would make sense given the write metadata and truncate metadata operations I see in the ceph logs when it happens.
[16:46] * ao (~ao@ Quit (Quit: Leaving)
[16:48] * MarkDude (~MT@ip-64-134-236-53.public.wayport.net) has joined #ceph
[16:57] * LarsFronius (~LarsFroni@95-91-243-252-dynip.superkabel.de) has joined #ceph
[17:12] * verwilst (~verwilst@d5152FEFB.static.telenet.be) Quit (Quit: Ex-Chat)
[17:28] * joao (~JL@aon.hq.newdream.net) has joined #ceph
[17:29] * Tv_ (~tv@aon.hq.newdream.net) has joined #ceph
[17:37] * aliguori (~anthony@cpe-70-123-145-39.austin.res.rr.com) Quit (Quit: Ex-Chat)
[17:38] * renzhi (~renzhi@ Quit (Quit: Leaving)
[17:46] * cattelan_away (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[17:51] * LarsFronius (~LarsFroni@95-91-243-252-dynip.superkabel.de) Quit (Quit: LarsFronius)
[17:51] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[17:52] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[17:54] * cattelan_away (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) has joined #ceph
[17:54] * jeh (~jeh@ferret.latigid.net) has left #ceph
[17:57] * joshd (~joshd@aon.hq.newdream.net) has joined #ceph
[18:03] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[18:05] * joshd (~joshd@aon.hq.newdream.net) Quit (Ping timeout: 480 seconds)
[18:13] * aliguori (~anthony@ has joined #ceph
[18:17] * joshd (~joshd@p4FECF022.dip.t-dialin.net) has joined #ceph
[18:21] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[18:26] * BManojlovic (~steki@ Quit (Quit: Ja odoh a vi sta 'ocete...)
[18:29] <joshd> mgalkiewicz: thanks for reminding me - the osd side of watch timeouts needs to be fixed http://tracker.newdream.net/issues/2476
[18:32] * bchrisman1 (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[18:33] <mgalkiewicz> joshd: np:)
[18:34] <joshd> I've hit the same problem before, but haven't gotten around to fixing it yet
[18:38] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:39] * gregaf (~Adium@aon.hq.newdream.net) has left #ceph
[18:39] * gregaf (~Adium@aon.hq.newdream.net) has joined #ceph
[18:40] * Ryan_Lane (~Adium@c-98-210-205-93.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[18:41] * Dennis2310 (~dennis@ has joined #ceph
[18:41] <Dennis2310> Hello
[18:42] <joshd> hi
[18:42] <Dennis2310> Seem to be having an issue compiling 47.2, getting an osdc/ObjectCacher.cc:1043: error: 'INT_MAX' was not declared in this scope
[18:42] <Dennis2310> anyone seen that before?
[18:44] <joshd> what distro?
[18:44] <Dennis2310> centos 6.
[18:44] <Dennis2310> 2
[18:46] * CristianDM (~CristianD@host217.190-230-240.telecom.net.ar) Quit ()
[18:47] <joshd> hmm, we must be implicitly including limits.h somewhere
[18:50] <Dennis2310> yes. I just added #include "limits.h" to ObjectCacher.cc, and it passes, but it seems IoCtxImpl.cc needs it as well
[18:51] <gregaf> nhm: elder: that truncate is a truncate operation on a file (directory "meta/28da36e7", file name "pginfo_0.19", file snap "0") to size 0, and it succeeded ("= 0")
[18:52] <joshd> Dennis2310: yeah, I'll add those includes - not sure why it's not a problem on debian
[18:55] <Dennis2310> i'm hoping 47.2 fixes bug #1047, I seem to be hitting some mds crashes similar to what someone posted in that bug. AnchorServer.cc 249 Failed assert's.
[18:55] * LarsFronius (~LarsFroni@95-91-243-252-dynip.superkabel.de) has joined #ceph
[18:56] <gregaf> Dennis2310: unfortunately not -- there hasn't been any MDS work for several versions now
[18:58] <Dennis2310> ok. Not sure exactly what my issue is. I can benchmark fine, but I put a user on it and he checked out some large svn files, and he managed to crash the mds nodes
[18:58] <Dennis2310> it's all good. We're just playing around/experimenting with it
[19:15] <elder> gregaf, thank you for the decoder ring service.
[19:17] <Dennis2310> ty for the limits.h hint joshd, Got my rpm's now built for 47.2 :)
[19:31] <joshd> Dennis2310: no problem. fixed in master and stable too
[19:31] * Theuni (~Theuni@i59F75233.versanet.de) Quit (Quit: Leaving.)
[19:32] * chutzpah (~chutz@ has joined #ceph
[19:35] * jmlowe (~Adium@c-71-201-31-207.hsd1.in.comcast.net) Quit (Quit: Leaving.)
[19:36] * Ryan_Lane (~Adium@ has joined #ceph
[19:37] * BManojlovic (~steki@ has joined #ceph
[19:38] * dmick (~dmick@aon.hq.newdream.net) has joined #ceph
[19:47] * aa (~aa@r200-40-114-26.ae-static.anteldata.net.uy) has joined #ceph
[20:02] * jmlowe (~Adium@140-182-131-63.dhcp-bl.indiana.edu) has joined #ceph
[20:23] * MarkDude (~MT@ip-64-134-236-53.public.wayport.net) Quit (Read error: Connection reset by peer)
[20:23] * MarkDude (~MT@ip-64-134-236-53.public.wayport.net) has joined #ceph
[20:26] * asadpand- (~asadpanda@ has joined #ceph
[20:31] * asadpanda (~asadpanda@ Quit (Ping timeout: 480 seconds)
[20:31] * asadpand- is now known as asadpanda
[20:31] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[20:31] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[21:01] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[21:29] * MarkDude (~MT@ip-64-134-236-53.public.wayport.net) Quit (Quit: Leaving)
[21:39] * jmlowe (~Adium@140-182-131-63.dhcp-bl.indiana.edu) Quit (Quit: Leaving.)
[21:45] * CristianDM (~CristianD@host217.190-230-240.telecom.net.ar) has joined #ceph
[21:47] * CristianDM (~CristianD@host217.190-230-240.telecom.net.ar) Quit ()
[21:47] * CristianDM (~CristianD@host217.190-230-240.telecom.net.ar) has joined #ceph
[21:48] * CristianDM (~CristianD@host217.190-230-240.telecom.net.ar) Quit ()
[21:49] * CristianDM (~CristianD@host217.190-230-240.telecom.net.ar) has joined #ceph
[21:58] * Ryan_Lane1 (~Adium@ has joined #ceph
[21:58] * Ryan_Lane (~Adium@ Quit (Read error: Connection reset by peer)
[22:00] <elder> nhm, red indicates "slow" and blue "fast" in your chart?
[22:00] <nhm> elder: yeah, I just highlighted a couple of sections that looked interesting to me.
[22:02] <nhm> elder: though the fact that the peak throughput is like 73MB/s is kind of sad.
[22:02] <nhm> "good" throughput is like 40MB/s.. :/
[22:03] <nhm> and still has like 30 seeks per second except at the very end when for whatever reason it looks like data is mostly just getting flushed out to disk.
[22:05] <nhm> even there though, there's like 10-20seeks per second.
[22:08] <elder> What is the not-to-be-exceeded throughput of the system?
[22:08] <elder> I.e., theoretical peak?
[22:09] <nhm> elder: If I recall I think I was able to get around 120MB/s to one of those disks with dd.
[22:09] <nhm> going by random internet postings, it looks like a 7200rpm disk should be able to do about 60-70 seeks per second.
[22:10] <elder> What exactly is collectl?
[22:10] <nhm> elder: program that scrapes proc for various system metrics.
[22:11] <elder> So data throughput and journal throughput collected that way--are they osd data and osd journal or something?
[22:11] <nhm> elder: So collectl's throughput numbers are what proc reports, and seekwatcher's numbers are what it sees from blktrace.
[22:12] <nhm> elder: yeah, the two collectl values are for journal disk throughput and data disk throughput for a single OSD.
[22:12] <elder> And seekwatcher is reporting what? Data only?
[22:12] <nhm> the seekwatcher numbers are just for the data disk.
[22:12] <nhm> yep
[22:12] <elder> collectl is reporting a hell of a lot more throughput than seekwatcher
[22:13] <elder> do we journal everything twice for every data write?
[22:14] <elder> Oh wait, maybe it eventually catches up or something.
[22:14] <elder> Because at the end, collectl shows nothing.
[22:14] <nhm> elder: if I sum the values, I get: collectl journal: 3751064
[22:15] <nhm> collectl data: 3498964
[22:15] <elder> I think that must mean that collectl is showing us different info (buffered writes?)
[22:15] <nhm> seekwatcher data: 3458470
[22:16] <nhm> that seems fairly reasonable to me given that a couple of the seekwatcher seconds are missing.
[22:16] <nhm> presumably the OSD data disk is still catching up at the end trying to write data in the journal out.
[22:16] <elder> So why all the 0's for the last half of the collectl journal?
[22:17] <nhm> elder: the client probably stopped sending data because the journal had too much data sitting in it already.
[22:17] <elder> (Also, I was looking at the wrong columns above)
[22:17] <elder> collectl data looks fairly in line with seekwatcher data
[22:18] * aliguori (~anthony@ Quit (Quit: Ex-Chat)
[22:18] <mgalkiewicz> elder: any progress on #2267 ?
[22:18] <nhm> Yeah, it's close enough that I think with timings being slightly off it more or less works out.
[22:18] <elder> mgalkiewicz, none directly, I'm very sorry.
[22:18] <elder> I'm working on more general improvements to the messenger in the hopes it will address several problems.
[22:18] <mgalkiewicz> elder: do you need any help I mean more logs or sth
[22:19] <elder> It wouldn't matter at this point mgalkiewicz, I need to finish my messenger changes and see if things improve.
[22:19] * LarsFronius (~LarsFroni@95-91-243-252-dynip.superkabel.de) Quit (Quit: LarsFronius)
[22:19] <mgalkiewicz> elder: ok do you plan to release your changes in 0.48?
[22:20] <elder> They are on the client side, so would align more with a Linux release. It would be nice to include them in Linux 3.5.
[22:21] <elder> But either way, when they're done, they're available for people to try.
[22:21] <yehudasa> gregaf: just pushed trivial fix to master, you can take a look
[22:21] <mgalkiewicz> elder: ok thx
[22:23] <nhm> yehudasa: I want to leave these ageing tests going another day. Things have been slowing down again.
[22:30] <elder> nhm, so the blktrace write metadata and sync write metadata are those WM and WSM lines?
[22:30] <elder> Are you just trying to correlate them?
[22:30] <nhm> elder: yep
[22:31] <nhm> elder: very unscientific
[22:31] <elder> Off topic. I am having a hell of a time getting anything to run on teuthology right now. Spent the last couple of hours backing off commits, only to find now that just running with "master" I have a problem.
[22:31] <elder> Anyone else?
[22:31] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) Quit (Quit: Leaving)
[22:32] <elder> INFO:teuthology.orchestra.run.err:2012-05-24 13:29:38.885818 7f22c895e7a0 -1 journal FileJournal::_open: unable to open journal: open() failed: (2) No such file or directory
[22:32] <nhm> elder: I just wanted to see if metadata activity seemed to be correlated with seeks/slowdowns.
[22:32] <elder> And do you see a correlation?
[22:32] <elder> I'm hoping to rely on you to find these things. It's a lot of data to make sense of.
[22:33] <nhm> elder: not for everything, but pretty often. Take a look at 14:46:04-14:46:07
[22:33] * Ryan_Lane1 (~Adium@ Quit (Quit: Leaving.)
[23:33] <nhm> column G is the count for WM occurrences
[22:34] <elder> But what about 14:45:35?
[22:34] <elder> I have to go pick up my wife. Back later.
[22:35] <nhm> elder: low number of seeks at that time. Maybe for some reason it didn't need to seek as much for those metadata operations?
[22:36] <CristianDM> SSD Ocz Vertex 3 for journaling are good option?
[22:37] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has left #ceph
[22:38] <nhm> CristianDM: performance should be good. There were some firmware problems a while back but I think those are resolved if you make sure you are up to date. Not sure I'd feel comfortable about reliability, but SSDs are a bit of a risk in general.
[22:38] <nhm> If you restrict the journal to only a portion of the disk and don't use the rest it'll help.
[22:39] <CristianDM> How can I restrict the journal? I will use rbd to store web files and emails
[22:39] <CristianDM> Too many small files
[22:40] <nhm> CristianDM: just make the journal partition like 10G or something.
[22:41] <CristianDM> nhm: Currently I have the journal on a SATA Black Edition HDD
[22:41] <CristianDM> nhm: In a partition of 20GB
[22:41] <nhm> CristianDM: For some of our test machines I was putting 2 10G journals for 2 OSDs on a single 100G SSD.
[22:42] * Ryan_Lane (~Adium@ has joined #ceph
[22:43] * Ryan_Lane1 (~Adium@ has joined #ceph
[22:43] * Ryan_Lane (~Adium@ Quit (Read error: Connection reset by peer)
[22:46] * aliguori (~anthony@cpe-70-123-145-39.austin.res.rr.com) has joined #ceph
[22:47] * Ryan_Lane1 is now known as Ryan_Lane
[22:53] * jmlowe (~Adium@c-71-201-31-207.hsd1.in.comcast.net) has joined #ceph
[23:08] <The_Bishop> not about speed, but disk space usage: is there a lower bound for the journal size?
[23:09] <The_Bishop> i would like to cut the journals down to 0.25GB, does this affect the stability of ceph?
[23:14] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[23:17] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[23:22] <nhm> The_Bishop: I don't know that there is any strict lower bound. You might see lower performance depending on a lot of factors.
[23:41] <gregaf> 250MB is very small for a journal
[23:41] <gregaf> it will maintain correctness, yes, but if your journal isn't large enough to absorb the incoming data during a sync() you're going to get very low throughput
[23:41] <gregaf> The_Bishop: ^
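The sizing nhm and gregaf discuss was set per-OSD; a sketch of what a 10G journal of that era might look like in ceph.conf, assuming a file-backed journal (the option takes megabytes):

```ini
[osd]
    ; osd journal size is given in MB: 10240 MB = the 10G nhm suggests
    ; for a journal partition on an SSD.
    osd journal size = 10240
    ; gregaf's caveat applies at the other extreme: a journal too small
    ; to absorb incoming writes during a sync() (e.g. The_Bishop's
    ; 250 MB) stays correct but gives very low throughput.
```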
[23:43] <elder> gregaf are you aware of any problems with the ceph master?
[23:43] <elder> My teuthology runs aren't working, even those that worked yesterday.
[23:43] <gregaf> elder: sure, all the open issues in the bug tracker ;)
[23:43] <elder> I'm not selecting a particular ceph commit.
[23:43] <gregaf> how are they failing?
[23:43] <dmick> "the ceph master"?
[23:43] <elder> I'm using the default, which I think is master.
[23:44] <dmick> oh, branch
[23:44] <elder> Here's the first message or two that seem to indicate a problem:
[23:44] <elder> INFO:teuthology.task.ceph:mount /dev/sdb on ubuntu@plana35.front.sepia.ceph.com -o ['noatime', 'user_subvol_rm_allowed']
[23:44] <elder> INFO:teuthology.orchestra.run.err:2012-05-24 13:29:38.885818 7f22c895e7a0 -1 journal FileJournal::_open: unable to open journal: open() failed: (2) No such file or directory
[23:45] <elder> dmick, grand master ceph, yo.
[23:45] <dmick> /dev/sdb on /tmp/cephtest/data/osd.0.data type btrfs (rw,noatime,user_subvol_rm_allowed)
[23:45] <dmick> fwiw, on plana35
[23:45] <elder> What does that mean?
[23:45] <gregaf> I got nothing, ask sjust or try not using btrfs
[23:45] <elder> It's already mounted
[23:45] <elder> ?
[23:45] <dmick> yeah, the mount succeeded
[23:46] <dmick> well I mean there are things we can look at yet right
[23:46] <elder> After the above there is:
[23:46] <elder> INFO:teuthology.orchestra.run.err:2012-05-24 13:29:39.439600 7f22c895e7a0 -1 filestore(/tmp/cephtest/data/osd.0.data) could not find 23c2fcde/osd_superblock/0 in index: (2) No such file or directory
[23:46] <elder> INFO:teuthology.orchestra.run.err:2012-05-24 13:29:39.448683 7f22c4668700 -1 filestore(/tmp/cephtest/data/osd.0.data) async snap create 'snap_1' transid 0 got (17) File exists
[23:46] <elder> INFO:teuthology.orchestra.run.err:os/FileStore.cc: In function 'void FileStore::sync_entry()' thread 7f22c4668700 time 2012-05-24 13:29:39.448713
[23:46] <elder> INFO:teuthology.orchestra.run.err:os/FileStore.cc: 3563: FAILED assert(0 == "async snap ioctl error")
[23:46] <dmick> that's data, not journal. Where's the journal supposed to be....
[23:46] <elder> INFO:teuthology.orchestra.run.err: ceph version 0.47.2-175-gca79f45 (commit:ca79f45a33f9c3f200029bf37efa643a59d4f54d)
[23:47] <nhm> heading out to a birthday party, bbl guys
[23:47] <dmick> osd.0.journal exists
[23:48] <dmick> I am somewhat depressed that 13:29:38.885818 7f22c895e7a0 -1 journal FileJournal::_open: unable to open journal: open() failed: (2) No such file or directory has no indication of what the path was
[23:48] <gregaf> ah, there we go, you should give people the whole backtrace elder :)
[23:48] <elder> I can provide more.
[23:48] <gregaf> check out ceph-devel, latest email should answer your question
[23:49] <elder> So it's fixed?
[23:49] <gregaf> yeah
[23:49] <elder> That blew my whole afternoon.
[23:49] <gregaf> tell me about it
[23:49] <elder> And also killed all my confidence in the long series of changes I was making.
[23:49] <elder> Hopefully they're all good after all, but I'm shaken.
[23:49] <gregaf> I doubt it will be available for teuthology for a while, though
[23:50] <gregaf> hopefully the sha1 from before that change Jim mentions is still available, try that
[23:50] <elder> How do I specify that again?
[23:50] <gregaf> if you're testing the kernel you should use a known-good userspace instead of the luck of the draw...
[23:50] <gregaf> I think sha1:
[23:50] <elder> I guess so.
[23:50] <gregaf> or you could use branch: stable
[23:50] <elder> But then I have to keep moving it forward and knowing what's known good.
[23:50] <gregaf> I mean, I'm with you, master ought to be good, but...*shrug*
[23:50] <elder> That should be known good?
[23:50] <elder> (stable)?
[23:51] <dmick> master appears to be ready
[23:51] <gregaf> if somebody breaks stable you can yell at them very loudly, anyway ;)
[23:51] <gregaf> dmick: I'm sure *a* master is ready, but not what Sage pushed
[23:51] <elder> I can't yell if they break master?
[23:51] <dmick> bea1e03
[23:51] <dmick> >>> Done at: Thu May 24 14:38:47 PDT 2012
[23:51] <gregaf> unless somebody intervenes manually it seems like it's lately been taking an hour or two for gitbuilder to build things, and then it takes several more hours for it to rsync into place for teuthology to grab it
[23:52] <dmick> oh well the rsync...true
[23:52] <gregaf> well, you can yell at people breaking master but "they" tend not to care much
[23:52] <elder> Those long build times are a growing problem too, in my opinion.
[23:52] <elder> I can get a complete build turned around on my own machine in less than 5 minutes, but it seems like it's 30 routinely on gitbuilder.
[23:52] <dmick> however:
[23:52] <dmick> http://gitbuilder.ceph.com/ceph-tarball-oneiric-x86_64-basic/sha1/bea1e03135852a4c62e6f378832a49a241c3297e/
[23:53] <dmick> so I bet it's actually there
[23:53] <gregaf> really need vercoi up and more power to the VMs
[23:53] * s[X]_ (~sX]@eth589.qld.adsl.internode.on.net) has joined #ceph
[23:53] <elder> 240 volts or more.
[23:53] <gregaf> and ways to make teuthology behave properly on local runs
[23:53] <dmick> 220, 221, whatever it takes
[23:54] <dmick> so yeah I'd try again elder, bet you get the fix now
[23:54] <gregaf> every time we force something to be run non-locally like teuthology-suite, it becomes more critical that builds be available quickly
[23:56] <elder> Ooops!!! An unexpected error seems to have occurred. (Github)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.