#ceph IRC Log

Index

IRC Log for 2012-06-07

Timestamps are in GMT/BST.

[0:00] <sagewk> tv: and mon<->mon auth is a special case that uses the keyring. i can make regular auth also work against the keyring.. that's the simplest
[0:09] * BManojlovic (~steki@212.200.243.232) Quit (Remote host closed the connection)
[0:17] * cattelan_away_away is now known as cattelan_away_away_away
[0:21] * Tv_ (~tv@2607:f298:a:607:3d18:8cc4:48d5:29d6) has joined #ceph
[0:32] * Mareo (~mareo@mareo.fr) Quit (Ping timeout: 480 seconds)
[0:45] * szaydel (~szaydel@c-67-169-107-121.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[0:46] * szaydel (~szaydel@c-67-169-107-121.hsd1.ca.comcast.net) has joined #ceph
[1:05] * stass (stas@ssh.deglitch.com) Quit (Read error: Connection reset by peer)
[1:15] * jantje_ (jan@paranoid.nl) has joined #ceph
[1:15] * lofejndif (~lsqavnbok@9KCAAFZQU.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[1:21] * jantje (jan@paranoid.nl) Quit (Ping timeout: 480 seconds)
[1:27] * Mareo (~mareo@mareo.fr) has joined #ceph
[1:28] * stass (stas@ssh.deglitch.com) has joined #ceph
[2:03] * Tv_ (~tv@2607:f298:a:607:3d18:8cc4:48d5:29d6) Quit (Read error: Operation timed out)
[2:38] * cattelan_away_away_away is now known as cattelan_away_away
[2:40] * szaydel (~szaydel@c-67-169-107-121.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[2:40] * cattelan_away_away is now known as cattelan
[2:45] * yoshi (~yoshi@p37158-ipngn3901marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[3:01] * aliguori (~anthony@202.108.130.138) has joined #ceph
[3:15] * aa (~aa@r200-40-114-26.ae-static.anteldata.net.uy) Quit (Remote host closed the connection)
[3:23] * aliguori (~anthony@202.108.130.138) Quit (Quit: Ex-Chat)
[3:24] * aliguori (~anthony@202.108.130.138) has joined #ceph
[3:37] * cattelan is now known as cattelan_away
[3:54] * szaydel (~szaydel@c-67-169-107-121.hsd1.ca.comcast.net) has joined #ceph
[3:55] * szaydel (~szaydel@c-67-169-107-121.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[3:58] * chutzpah (~chutz@216.174.109.254) Quit (Quit: Leaving)
[4:01] * aliguori (~anthony@202.108.130.138) Quit (Ping timeout: 480 seconds)
[4:20] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) Quit (Quit: Leaving.)
[4:37] <renzhi> Hi, has anyone experienced core dumps with the librados API? I got core dumps a few times in rados_conf_read_file() and rados_shutdown().
[4:38] <renzhi> It's very hard to reproduce, but it happens when the system is busy, i.e. a lot of clients connect and disconnect
[4:38] <renzhi> it does not generate a core dump file though...
[4:39] <dmick1> renzhi: core dumps can be hard to find these days
[4:39] <dmick1> Ubuntu sets up apport by default, and it squirrels them away where you don't expect them
[4:39] * dmick1 is now known as dmick
[4:39] <renzhi> dmick: I've been chasing this issue for 2 days now
[4:40] <dmick> https://wiki.ubuntu.com/Apport
[4:40] <renzhi> every time, I have to let it run for a while, but I don't know when it will happen. Lots of debug logs
[4:40] <dmick> the first step would be to look at the backtrace from the core
[4:41] <dmick> can you see if it's generating one, but off under apport, maybe?
[4:42] <renzhi> yeah, I'll try that, thx
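dmick's advice above can be turned into a quick checklist. This is a generic sketch assuming an Ubuntu box with apport's default configuration; the paths and the core_pattern contents may differ on other distros:

```shell
# 1. Make sure core dumps aren't disabled for this shell.
ulimit -c unlimited
echo "core limit: $(ulimit -c)"

# 2. See where the kernel sends cores; if core_pattern pipes to apport,
#    crashes land under /var/crash instead of ./core.
cat /proc/sys/kernel/core_pattern 2>/dev/null

# 3. Look for any apport-collected crash reports.
ls -l /var/crash/ 2>/dev/null || echo "no /var/crash directory"
```

Once a core file is located, `gdb /path/to/binary /path/to/core` followed by `bt` gives the backtrace dmick asked about.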
[4:51] * aliguori (~anthony@202.108.130.138) has joined #ceph
[4:56] * adjohn (~adjohn@69.170.166.146) Quit (Quit: adjohn)
[4:59] * Ryan_Lane1 (~Adium@dslb-178-005-175-188.pools.arcor-ip.net) has joined #ceph
[5:05] * Ryan_Lane (~Adium@dslb-094-223-084-033.pools.arcor-ip.net) Quit (Ping timeout: 480 seconds)
[5:10] <Qten> anyone have any docs handy on how to configure nova to use rbd?
[5:10] <dmick> Qten: I know Josh was just working on such
[5:12] <dmick> but I don't know that it's done yet
[5:12] <dmick> I see this:
[5:12] <dmick> http://docs.openstack.org/trunk/openstack-compute/admin/content/rados.html
[5:13] <Qten> yah I found that a while ago but surely there has to be more to it than that :)
[5:15] * joao (~JL@89.181.146.37) Quit (Ping timeout: 480 seconds)
[5:17] <dmick> I hope to know more about that soon, but I don't just yet
[5:17] <dmick> Josh is definitely the guy to ask
[5:17] <dmick> josh.durgin@inktank.com
[5:18] <Qten> no probs, thanks
[5:23] * s[X]_ (~sX]@eth589.qld.adsl.internode.on.net) has joined #ceph
[5:24] * joao (~JL@89-181-148-114.net.novis.pt) has joined #ceph
[5:25] <elder> sage, you committed "mon: set policy for client, mds before throttler" about an hour ago.
[5:25] <sage> yeah
[5:25] <sage> er, 5 minutes ago :)
[5:25] <elder> Is this a symptom of the problem that fixes:
[5:25] <elder> INFO:teuthology.task.ceph.mon.a.err:msg/SimpleMessenger.h: 169: FAILED assert(policy_map.count(type))
[5:25] <sage> yeah
[5:25] <elder> Neat!
[5:26] <sage> go gitbuilder go!
[5:26] <elder> I'll watch it, and try again in a little while.
[5:26] <elder> Glad there is an explanation...
[5:26] * aliguori (~anthony@202.108.130.138) Quit (Remote host closed the connection)
[5:27] <elder> Is there a ceph branch that would be kept fairly up-to-date but not necessarily the absolute latest that I should be using for my testing rather than master?
[5:29] <sage> 'stable' is the last release
[5:29] <elder> Would you recommend I use that? I guess it's no more than a few weeks old, right?
[5:29] <sage> yeah, that's probably the best bet
[5:29] <sage> and if you do see bugs, that's a reminder to us to backport the fixes!
[5:30] <sage> most of the testing is focused on the bleeding edge
[5:30] <elder> Double-edged sword. Latest gets the most testing, but may be risky. Last release gets less testing, but may be risky.
[5:31] <dmick> Welcome to Agile Development! :)
[5:32] <elder> Looks like I'd do something like this:
[5:32] <elder> tasks:
[5:32] <elder> - ceph:
[5:32] <elder> branch: stable
[5:32] <elder> Right?
[5:33] <elder> I'll give it a try.
[5:33] <dmick> that's what the docstring of task() says near the top
[5:34] <dmick> you can also use tag: or sha1:
[5:34] <elder> Well that's what I read so I think I have it.
[5:34] <dmick> or your own copy at path:
[5:34] <elder> path: would specify a path on my local machine (from which I run the tests)?
[5:34] <dmick> yes
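Putting the options from this exchange together, a teuthology ceph task stanza would look something like the fragment below. Only one of branch/tag/sha1/path should be given at a time; everything except `branch: stable` is a placeholder value, not taken from the conversation:

```yaml
tasks:
- ceph:
    branch: stable          # a branch name, as discussed above
    # ...or exactly one of the following instead:
    # tag: some-tag         # a git tag
    # sha1: <commit-sha1>   # an exact commit
    # path: /path/to/ceph   # a checkout on the local machine
```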
[5:35] <elder> Cool. Will have to add that to the kernel, though I'm not sure whether my faster builds here would overcome the longer transfer time across the country
[5:35] <elder> (or out of my home would more likely be the bottleneck)
[5:36] <sage> elder: that's right
[5:36] <elder> Trying it now.
[5:36] <dmick> kernel.py doesn't currently seem to have a path: option, fwiw
[5:37] <sage> dmick: ooh, that would be useful.
[5:41] <dmick> is it common to build a kernel tarball, or is it more often deb(s)?
[5:41] <joao> I only build debs nowadays
[5:43] <dmick> ^sage , elder
[5:43] <sage> dmick the gitbuilders/teuthology do debs
[5:43] <sage> i think kernel developers probably install the kernel manually, or test under kvm
[5:44] <dmick> yeah, I mean your and Alex's use case mostly I think
[5:45] <sage> i either test with uml (no install at all) or via teuthology
[5:45] <sage> and deb
[5:45] <sage> my lazy butt hasn't worked with kvm for development yet
[5:47] <dmick> rofl@"lazy"
[5:47] <dmick> if you're lazy I'm dead :)
[5:48] <joao> I can relate
[5:53] <sage> joao: isn't it like 5 am for you?
[5:53] <joao> unfortunately, yes
[5:53] <sage> i'm definitely too lazy to be on irc at that hour :)
[5:54] <joao> something came up, and I woke up like an hour ago
[5:54] <joao> haven't been able to fall asleep yet, so came here to get some stuff done :)
[5:54] <dmick> grr. gnome-shell lost me my world clock. I've just about had it with GNOME
[5:56] <iggy> i've kind of given up on the big 2 these days :/
[5:58] <dmick> I keep trying to be "in the majority" in hopes of making things better for everyone with usage/bug reporting/etc.
[5:58] <dmick> but GNOME keeps making it really hard to do that
[5:59] <joao> only when the bug reporter crashes
[5:59] <joao> it happened to me just this morning, but I wasn't able to send the bug report for some reason
[6:00] <sage> still happy with fvwm (+gnome)
[6:00] <joao> is fvwm that awesome bar you have?
[6:01] <elder> dmick, I knew kernel didn't have the option, that's why I said "will have to add it."
[6:01] <dmick> ah. "the kernel" meaning "the kernel task". got you
[6:01] <elder> I normally build kernel modules in a tree separate from the source tree.
[6:02] <elder> With teuthology I just take whatever it does and am very happy it does it so seamlessly for me.
[6:03] <sage> that's FvwmButtons
[6:03] <joao> I wonder if I can get that to work with unity
[6:03] <sage> elder: "seamlessly" not to be confused with "speedily"
[6:03] <elder> I have adjusted to whatever the hell Ubuntu has stuck me with.
[6:04] <sage> joao: unlikely...
[6:04] <joao> just so I can launch gvim's in different directories with the click of a mouse
[6:04] <elder> sage, I'll take seamlessly over not-so-seamless but speedy any time.
[6:04] <joao> sage, if I weren't actively trying to get sleepy, I'd take that as a challenge :p
[6:05] <elder> DO IT!
[6:05] <dmick> troublemaker
[6:06] <elder> dmick, back to your kernel build question, I have a script that sets all kinds of stuff up for me.
[6:06] <elder> Well, not that much, but everything just the way I want it anyway.
[6:06] <elder> I build in the root of a kernel source tree (in this case, ceph-client git repository)
[6:07] <elder> Then my objects as they're being built get placed somewhere under ../ceph-client-obj/
[6:07] <elder> And my install directory is ../ceph-client-root/
[6:07] <elder> Under those two directories I end up separating by architecture and by kernel config, so I can build different environments from the same tree.
[6:08] <sage> elder: next time you're out i want to see
[6:08] <elder> I normally preserve my old config, but I can have it rebuilt with a command line option.
[6:08] <dmick> so do you have the rough equivalent of /lib/modules/<release> there?
[6:08] <sage> elder: speaking of which...
[6:09] <elder> sage, sure. dmick, yes, under the ../ceph-client-root/ directory
[6:09] <elder> sage, is there an upcoming date that would be fortuitous?
[6:09] <sage> 6/22 and party of 6/21 i am out.. and the week before that kampe is out
[6:09] <elder> PARTY!
[6:09] <sage> HA
[6:10] <elder> Well I have a somewhat full June. I think I can wedge a trip in, but I have to look at a couple of calendars.
[6:10] <sage> the week after that i'm out on monday, too. and that tuesday i predict a headache.
[6:10] <dmick> elder: plus the /boot stuff I guess
[6:10] <elder> dmick, yes.
[6:10] <sage> so.. either you miss kampe (maybe not so bad), or probably after july 4.. aie
[6:10] * Tv (~tv@cpe-24-24-131-250.socal.res.rr.com) Quit (Quit: Tv)
[6:11] <elder> I have been roughly thinking it would be July. It also is OK if you're not there a day or two.
[6:11] <elder> I usually spend my first day swearing at my laptop anyway.
[6:12] <elder> My son is out of town on a mission trip at the end of the month and that may turn out to be a good time to go out West.
[6:12] <elder> I'll talk with my wife about it tomorrow to try to see what's out of the question.
[6:15] <dmick> I doubt it will matter, but I'm out 23-27Jul
[6:16] <elder> I suppose we should keep this stuff on a calendar somewhere...
[6:16] <elder> If only they had a way to do that online somehow.
[6:16] <dmick> I have dim memories of that existing, but maybe it was a past group
[6:16] <dmick> or maybe it was just Mark's calendar
[6:17] <elder> I'll try to schedule it on a week when you're performing, dmick.
[6:30] <elder> Bedtime.
[6:33] <joao> well, at least something good came off of this whole insomnia thing
[6:34] <joao> now I have the time and the low brain functions to create decent docs
[6:35] <joao> this didn't come out right
[6:35] <joao> so I'll just leave
[6:37] <dmick> lol
[6:45] * joao (~JL@89-181-148-114.net.novis.pt) Quit (Ping timeout: 480 seconds)
[7:01] <renzhi> Using the librados api, would there be any problem if I have multiple threads create multiple rados_ioctx_t instances from one single rados_cluster_t instance?
[7:02] <renzhi> For example, I call rados_create() and rados_connect() only once at the start of the program
[7:03] <renzhi> then all threads subsequently will call rados_ioctx_create() from this single rados_cluster_t instance.
[7:04] <renzhi> I'd like to know whether there are any race conditions, and whether there's any performance issue in doing that
[7:18] * aliguori (~anthony@202.108.130.138) has joined #ceph
[7:31] * dmick (~dmick@aon.hq.newdream.net) Quit (Quit: Leaving.)
[7:36] * aliguori (~anthony@202.108.130.138) Quit (Ping timeout: 480 seconds)
[9:07] * s[X]_ (~sX]@eth589.qld.adsl.internode.on.net) Quit (Remote host closed the connection)
[9:14] * BManojlovic (~steki@smile.zis.co.rs) has joined #ceph
[10:03] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[11:04] * aliguori (~anthony@202.108.130.138) has joined #ceph
[11:23] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[11:46] * verwilst (~verwilst@d5152FEFB.static.telenet.be) has joined #ceph
[11:51] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[12:06] * yoshi (~yoshi@p37158-ipngn3901marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[12:08] * aliguori (~anthony@202.108.130.138) Quit (Quit: Ex-Chat)
[12:08] * mtk (~mtk@ool-44c35967.dyn.optonline.net) Quit (Remote host closed the connection)
[12:12] * mtk (~mtk@ool-44c35967.dyn.optonline.net) has joined #ceph
[12:22] * lofejndif (~lsqavnbok@09GAAFWIP.tor-irc.dnsbl.oftc.net) has joined #ceph
[12:59] * aliguori (~anthony@222.128.202.2) has joined #ceph
[13:30] * joao (~JL@89.181.148.114) has joined #ceph
[13:34] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[14:32] * cattelan_away is now known as cattelan
[14:45] * The_Bishop (~bishop@2a01:198:2ee:0:41a0:c7ac:f55a:be71) Quit (Quit: Wer zum Teufel ist dieser Peer? Wenn ich den erwische dann werde ich ihm mal die Verbindung resetten!)
[14:53] * aliguori (~anthony@222.128.202.2) Quit (Quit: Ex-Chat)
[14:54] * yanzheng (~zhyan@101.82.59.123) has joined #ceph
[14:57] * stxShadow (~Jens@ip-78-94-238-69.unitymediagroup.de) has joined #ceph
[15:02] * stxShadow (~Jens@ip-78-94-238-69.unitymediagroup.de) Quit (Read error: Connection reset by peer)
[15:14] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[15:28] * s[X]__ (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[15:29] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Ping timeout: 480 seconds)
[16:15] * The_Bishop (~bishop@2a01:198:2ee:0:a4c6:6eb9:d9c5:a283) has joined #ceph
[16:21] * yanzheng (~zhyan@101.82.59.123) Quit (Ping timeout: 480 seconds)
[16:30] * mtk (~mtk@ool-44c35967.dyn.optonline.net) Quit (Remote host closed the connection)
[16:32] * mtk (~mtk@ool-44c35967.dyn.optonline.net) has joined #ceph
[16:32] * mtk (~mtk@ool-44c35967.dyn.optonline.net) Quit (Remote host closed the connection)
[16:36] * yanzheng (~zhyan@114.81.121.52) has joined #ceph
[16:38] * mtk (~mtk@ool-44c35967.dyn.optonline.net) has joined #ceph
[16:41] * mtk (~mtk@ool-44c35967.dyn.optonline.net) Quit (Remote host closed the connection)
[16:48] * mtk (~mtk@ool-44c35967.dyn.optonline.net) has joined #ceph
[16:52] * adjohn (~adjohn@50-0-133-101.dsl.static.sonic.net) has joined #ceph
[16:52] * adjohn (~adjohn@50-0-133-101.dsl.static.sonic.net) Quit (Remote host closed the connection)
[16:56] * s[X]__ (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[16:59] * verwilst (~verwilst@d5152FEFB.static.telenet.be) Quit (Quit: Ex-Chat)
[17:15] * yanzheng (~zhyan@114.81.121.52) Quit (Ping timeout: 480 seconds)
[17:20] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:34] * yanzheng (~zhyan@101.84.196.176) has joined #ceph
[17:48] * yanzheng (~zhyan@101.84.196.176) Quit (Read error: Operation timed out)
[17:58] * diggalabs_ (~jrod@199.244.52.23) has joined #ceph
[17:59] * diggalabs (~jrod@cpe-72-177-238-137.satx.res.rr.com) Quit (Read error: Connection reset by peer)
[17:59] * diggalabs_ (~jrod@199.244.52.23) Quit (Read error: Connection reset by peer)
[18:00] * diggalabs (~jrod@199.244.52.23) has joined #ceph
[18:04] * diggalabs_ (~jrod@cpe-72-177-238-137.satx.res.rr.com) has joined #ceph
[18:04] * diggalabs (~jrod@199.244.52.23) Quit (Read error: Connection reset by peer)
[18:04] * diggalabs_ is now known as diggalabs
[18:05] * lofejndif (~lsqavnbok@09GAAFWIP.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[18:22] * BManojlovic (~steki@smile.zis.co.rs) Quit (Remote host closed the connection)
[18:23] * Tv_ (~tv@2607:f298:a:607:3d18:8cc4:48d5:29d6) has joined #ceph
[18:24] * Tv (~tv@aon.hq.newdream.net) has joined #ceph
[18:25] <gregaf1> renzhi: I updated your bug; I'm pretty sure you're hitting a resource limit somewhere
[18:32] <elder> Tv, I just had a test fail because /tmp/cephtest was not empty. Looking there, /tmp/cephtest/workunit.client.0 exists as an empty directory. Looks like workunit.py should remove that in its "finally" clause. Any idea what's going on? The rest of my test looked like it passed fine.
[18:32] <elder> I think I've seen this before, but have no record of it.
[18:34] <Tv_> elder: something failed before it got to the cleanup
[18:34] <Tv_> elder: the teuthology log of the run should tell you
[18:34] <Tv_> elder: detecting these is exactly why i don't want an rm -rf in there
[18:34] <elder> Oh, you're right.
[18:35] <elder> gzip: stdin: unexpected end of file
[18:36] * Tv (~tv@aon.hq.newdream.net) has left #ceph
[18:37] <Tv_> being on the same channel on two computers next to each other was too confusing..
[18:40] <joao> sjust, around?
[18:40] <sjust> joao: yup
[18:41] <joao> sjust, if you issue a put on the KeyValueDB with a value shorter than the previous value for that same key, is the previous value truncated before we put the new one?
[18:42] <joao> or should we manage that on the issuer end?
[18:42] <Tv_> sagewk: get-or-create-key doesn't seem to be able to --set-uid=0, for client.admin
[18:42] <sjust> it replaces the old value entirely
[18:42] <joao> cool
[18:42] <joao> thanks
[18:42] <sjust> the interface won't give you any other behavior
[18:43] * Qu310 (~qgrasso@ppp59-167-157-24.static.internode.on.net) Quit (Ping timeout: 480 seconds)
[18:43] <joao> I was assuming so, as that was the behavior that made the most sense in my mind, but had to confirm
[18:43] <joao> :)
[18:44] <sagewk> tv_: what command are you doing exactly?
[18:45] <sagewk> i don't think you need the --set-uid=0 piece
[18:49] <elder> Tv_, I'm pretty sure the problem was due to not being able to fetch the ceph image from the repository.
[18:49] * Qu310 (~qgrasso@ppp59-167-157-24.static.internode.on.net) has joined #ceph
[18:50] * bchrisman (~Adium@108.60.121.114) has joined #ceph
[18:52] <Tv_> elder: :(
[18:53] <Tv_> sagewk: to create client.admin key
[18:53] <Tv_> sagewk: i don't actually understand the auid thing all that well, but i've been cargo culting it for creation of client.admin
[18:53] <sagewk> tv_; just leave it off
[18:54] <sagewk> tv_: it's not needed here
[18:54] <jmlowe> I subscribe to "The Daily WTF", and I just had my first possible submission as described here http://www-304.ibm.com/support/docview.wss?uid=swg21258418
[18:54] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) has joined #ceph
[18:54] <Tv_> sagewk: well that's easy ;)
[18:56] <jmlowe> shortened version: backup failed, completed successfully error 12; backup completed successfully but shows failed, "cause: working as designed"
[18:57] * dmick (~dmick@2607:f298:a:607:f04f:10a7:7055:7d8a) has joined #ceph
[19:02] <Tv_> jmlowe: gotta love the ANS5216E message added at TSM 5.3 level
[19:03] * Tv_ hates obscure abbreviations
[19:08] <darkfader> I'm not sure, but I think there is no line of output from IBM that isn't numbered
[19:08] <darkfader> it's their legacy showing
[19:10] <joao> do you guys realize you have the camera on?
[19:10] <joao> I can see sjust picking his nose
[19:11] <Tv_> hahaha
[19:11] <Tv_> joao: i bet you can eavesdrop on everything too
[19:11] <sjust> I am so glad this channel is archived
[19:11] <joao> only on the keystrokes
[19:11] <joao> not much to eavesdrop
[19:13] * lofejndif (~lsqavnbok@82VAAEBX3.tor-irc.dnsbl.oftc.net) has joined #ceph
[19:28] * BManojlovic (~steki@212.200.243.232) has joined #ceph
[19:31] * chutzpah (~chutz@216.174.109.254) has joined #ceph
[19:32] <sagewk> sjust: wip-assert2?
[19:42] <sjust> lookin
[19:43] <Tv_> btw, browsing the F1 slides.. "Very high commit latency - 50-100ms" "Reads take 5-10ms - much slower than MySQL" "High throughput"
[19:43] <Tv_> that tradeoff seems to be very fashionable
[19:44] <dmick> F1?
[19:44] * diggalabs_ (~jrod@199.244.52.23) has joined #ceph
[19:44] <Tv_> for the journal club.. http://research.google.com/pubs/archive/38125.pdf
[19:45] <Tv_> (though we still need to get our grubby hands on something more than slides, if it is to be journal club material)
[19:46] <Tv_> 200ms client-visible latencies, but predictably so, no long tail
[19:46] * diggalabs_ (~jrod@199.244.52.23) Quit (Read error: Connection reset by peer)
[19:48] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[19:48] * diggalabs_ (~jrod@cpe-72-177-238-137.satx.res.rr.com) has joined #ceph
[19:48] * diggalabs (~jrod@cpe-72-177-238-137.satx.res.rr.com) Quit (Read error: Connection reset by peer)
[19:48] * diggalabs_ is now known as diggalabs
[19:50] <Tv_> hmm HN says "It's an industrial presentation. As such, there's no paper about it, just the talk given at SIGMOD and the slides. When Google unveiled Megastore, F1 predecessor, it did it as an industrial presentation too. Years, and many improvements later, they published a paper on Megastore."
[19:50] <sagewk> sjust: osd-queries branch!
[19:50] <Tv_> gregaf1: so i think we need to skip F1 as far as journal club goes
[19:51] <gregaf1> Tv_: yes, I'd gathered that when I couldn't find a paper and realized it was six days old ;)
[20:00] * diggalabs (~jrod@cpe-72-177-238-137.satx.res.rr.com) Quit (Read error: Connection reset by peer)
[20:02] * diggalabs (~jrod@cpe-72-177-238-137.satx.res.rr.com) has joined #ceph
[20:07] <Tv_> joshd: see the "Random data corruption" email.. that's nasty
[20:19] * diggalabs (~jrod@cpe-72-177-238-137.satx.res.rr.com) Quit (Ping timeout: 480 seconds)
[20:21] * diggalabs (~jrod@cpe-72-177-238-137.satx.res.rr.com) has joined #ceph
[20:26] * diggalabs_ (~jrod@cpe-72-177-238-137.satx.res.rr.com) has joined #ceph
[20:26] * diggalabs (~jrod@cpe-72-177-238-137.satx.res.rr.com) Quit (Read error: Connection reset by peer)
[20:26] * diggalabs_ is now known as diggalabs
[20:37] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[20:53] * diggalabs_ (~jrod@cpe-72-177-238-137.satx.res.rr.com) has joined #ceph
[20:58] * diggalabs (~jrod@cpe-72-177-238-137.satx.res.rr.com) Quit (Ping timeout: 480 seconds)
[20:58] * diggalabs_ is now known as diggalabs
[21:53] * Tv_ (~tv@2607:f298:a:607:3d18:8cc4:48d5:29d6) Quit (Quit: Tv_)
[21:53] * Tv_ (~tv@2607:f298:a:607:3d18:8cc4:48d5:29d6) has joined #ceph
[21:53] * Tv_ (~tv@2607:f298:a:607:3d18:8cc4:48d5:29d6) has left #ceph
[21:58] <joshd> renzhi: just looked at the ioctx_create, and there is a race condition there (http://tracker.newdream.net/issues/2525)
[22:02] * Tv_ (~tv@2607:f298:a:607:bd15:990e:65cd:46db) has joined #ceph
[22:29] <Tv_> does the current ceph.conf have any fields that would have internal key-value structure?
[22:30] <Tv_> i'm looking at something like [global] crush location = row:foo rack:bar host:baz
[22:31] <joshd> I think the only structured thing is a list (for e.g. mon_hosts and auth_supported)
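The syntax Tv_ is sketching would look like this in a ceph.conf fragment. Note that the `crush location` key is only being proposed in this conversation, so treat it as hypothetical; the list-valued option is the kind of thing joshd refers to:

```ini
[global]
        ; list-valued options already exist, e.g.:
        auth supported = cephx
        ; the key:value structured field Tv_ is proposing (hypothetical):
        crush location = row:foo rack:bar host:baz
```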
[22:33] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[22:40] * cattelan (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) Quit (Read error: Operation timed out)
[22:50] * cattelan (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) has joined #ceph
[22:55] * gregaf1 (~Adium@2607:f298:a:607:449e:8538:a3ab:e874) Quit (Quit: Leaving.)
[22:56] * gregaf (~Adium@2607:f298:a:607:495f:f540:7282:4591) has joined #ceph
[23:26] * mdxi (~mdxi@74-95-29-182-Atlanta.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[23:46] * mdxi (~mdxi@74-95-29-182-Atlanta.hfc.comcastbusiness.net) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.