#ceph IRC Log

IRC Log for 2012-04-03

Timestamps are in GMT/BST.

[0:02] * nhm launched new test suite with 1536 tests.
[0:02] <nhm> s/launched/launches
[0:03] <nhm> poor plana nodes
[0:04] <Tv|work> save your mercy for things that bleed
[0:05] <nhm> I should really mirror the collectl package so it doesn't get downloaded from sourceforge 1536 times.
[0:06] <nhm> hrm.
[0:06] <Tv|work> nhm: we suck with everything else already
[0:06] <dmick> it's the potato internet. it's just a series of tubers.
[0:06] <Tv|work> nhm: once vercoi are happy, i'll put in apt-cacher-ng for apt repos, and some generic http proxy for the rest, and *some* solution for file hosting (dunno yet what)
[0:07] <nhm> Tv|work: good deal. apt-cacher-ng is a life saver.
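
For reference, pointing apt clients at an apt-cacher-ng instance is normally just an apt proxy setting; a minimal sketch, assuming a hypothetical cacher host (not one named in this log):

    // /etc/apt/apt.conf.d/01proxy on each client
    Acquire::http::Proxy "http://apt-cacher.example.com:3142";

The 3142 port is apt-cacher-ng's default; the cache host then fetches and stores each package once instead of every node hitting the upstream mirrors.
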
[0:11] <nhm> will be interesting to see how rados copes with 256 simultaneous 64MB write requests.
[0:13] <nhm> hrm, actually going to remove one of the combinations.
[0:21] * Oliver1 (~oliver1@ip-37-24-160-195.unitymediagroup.de) Quit (Quit: Leaving.)
[0:22] * BManojlovic (~steki@212.200.240.52) Quit (Remote host closed the connection)
[0:22] * BManojlovic (~steki@212.200.240.52) has joined #ceph
[0:33] <nhm> interesting, rados bench with 64MB IOs doesn't work so hot.
[0:34] * nhm creates a work generator for sjust
[0:35] <sjust> nhm: hmm?
[0:35] <nhm> sjust: The more combinations I try, the more work I generate for you. ;)
[0:35] <sjust> heh, indeed
[0:37] <nhm> sjust: I'm running a new suite that tests 2, 4, and 8 OSDs with 1, 2, and 4 clients, with IO sizes from 16k to 64M and 4-256 concurrent IOs, on stable, master, and wip-latency, with both btrfs and xfs.
[0:37] <nhm> all with debugging on though.
[0:38] <nhm> next time I'll try some large combinations without debugging to see if it changes the numbers.
[0:38] * rturk (~textual@aon.hq.newdream.net) has joined #ceph
[0:38] <nhm> s/large/high performing
[0:40] <gregaf> is that 256 IOs off a single client?
[0:40] <gregaf> 64MB*256=16GB
[0:41] <gregaf> nhm: ^
[0:42] <nhm> gregaf: it is for one of the combinations...
[0:43] <nhm> gregaf: for 2 clients 128/client, and for 4 clients 64/client
[0:43] <gregaf> I don't think the client would appreciate that much, forget whatever happens to the cluster (which'll be bad, more or less on purpose)
[0:44] <nhm> gregaf: indeed. I figure I might as well see what happens though. :)
[0:44] <nhm> test completed at least...
[0:44] <nhm> minimum latency of 8s...
[0:45] <gregaf> okay, just not sure if that's the data to collect right away, but this is your space :)
[0:45] <gregaf> I think the default throttles are at 100MB so those messages are going through the OSDs pretty much one at a time
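
Rough arithmetic behind gregaf's point, using only the figures quoted in the discussion (a ~100 MB throttle, 64 MB writes, up to 256 in flight): a single 64 MB write already consumes most of a 100 MB message budget, and two together would need 128 MB, so each OSD effectively admits these writes one at a time; with hundreds queued behind that, multi-second minimum latencies (and the ~60 s maximums nhm mentions later) are roughly what you would expect rather than a surprise.
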
[0:45] <nhm> gregaf: it's just part of some automated tests
[0:46] <nhm> gregaf: last set of tests were 16k 256k 4m. I figured I'd just throw another multiple on. ;)
[0:46] <gregaf> heh
[0:46] <gregaf> did those go better, at least?
[0:47] <gregaf> (please say yes)
[0:47] <nhm> gregaf: Yes, though with some interesting bits. I'll send you the spreadsheet.
[0:47] <gregaf> cool
[0:47] <nhm> gregaf: sent
[0:48] <nhm> gregaf: those tests are done with 3 OSDs (one per plana node) and journals on the system disks.
[0:49] <nhm> gregaf: I've got a ton more data I need to parse, but this is something to start out with at least.
[0:49] * LarsFronius (~LarsFroni@e176057229.adsl.alicedsl.de) has joined #ceph
[0:50] <gregaf> avg latency > 1 sec (*cry*)
[0:50] <nhm> gregaf: yeah, Sam has been working on it...
[0:50] <sjust> with enough requests, avg latency is not relevant
[0:51] <gregaf> uh...huh...
[0:51] <sjust> that merely reflects our queue sizes and throttles
[0:51] <gregaf> oh, I get you
[0:51] <nhm> sjust: I'm curious about those max latencies that basically run through to the end of the test.
[0:51] <gregaf> I don't think those are hitting our thresholds though
[0:51] <sjust> sorry?
[0:51] <nhm> sjust: oh, on the spreadsheet I sent you earlier.
[0:52] <nhm> sjust: basically some of the tests show max latencies that are like ~60s
[0:53] <nhm> seems to only happen with a high number of concurrent ops.
[0:53] <sjust> nhm: hmm
[0:53] <sjust> can you check your logs from that run and look for the journal size?
[0:53] <nhm> sjust: journal size is static for all of the tests at 1GB
[0:53] <sjust> ah
[0:53] <sjust> block device or fileL?
[0:54] <nhm> sjust: I think it's just a file on the system disk. Whatever teuthology does by default.
[0:54] <sjust> kio
[0:54] <sjust> *ok
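
For context, the journal size and placement nhm describes are ordinary ceph.conf settings; a minimal sketch (paths are illustrative, not the ones teuthology actually generates):

    [osd]
        osd journal = /path/to/osd.$id.journal   ; plain file rather than a block device
        osd journal size = 1024                  ; in MB, i.e. the 1 GB used in these tests
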
[0:54] <nhm> time to add more permutations. ;)
[0:55] <sjust> no lmiited to btrfs
[0:55] <sjust> *not limited to btrfs either
[0:55] <nhm> yeah
[0:55] <sjust> ugh, want 99th percentile latency
[0:56] <nhm> sjust: I don't know much about the problem on congress, but yehuda was saying it might be related to operation timeouts? Could this be related?
[0:56] <sjust> the problem on congress with osds crashing is caused by timeouts
[0:56] <sjust> if it were limited to btrfs, then there would be some possibility that we are seeing the same bug
[0:56] <nhm> ok
[0:57] <sjust> but it's unlikely that either btrfs or xfs would take a minute to complete an operation with only a minute of ops
[0:57] <nhm> well, all of those tests have a decent amount of debugging info, plus collectl data, so I've got plenty of parsing work to keep me busy for a while.
[0:57] <sjust> actually, I think there is something wrong with that number, that op would have had to wait for the entire test
[0:58] <nhm> sjust: maybe a bug in rados bench results?
[0:58] <sjust> perhaps
[0:58] <joao> sjust, if there's snapshots involved, it may not be that unlikely
[0:58] <nhm> sjust: I did verify that it's what rados bench is reporting.
[0:58] <joao> sjust, btrfs snapshots I mean
[0:58] <sjust> no, I mean that any other op to the same pg would have had to wait in line behind that one
[0:59] <sjust> it should have kneecapped performance
[1:01] <nhm> sjust: is there any way that the write happened quickly but the acknowledgment was delayed?
[1:01] <sjust> nhm: less likely, but possible
[1:01] <nhm> or maybe never got back to the client at all...
[1:02] <sjust> no, the test ended, so it must have gotten to the client
[1:03] <nhm> I wonder if the lack of new writes allowed it to complete somehow.
[1:05] * BManojlovic (~steki@212.200.240.52) Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:07] <nhm> woo, 139s latency on a 60s test. ;)
[1:07] * Tv|work (~Tv_@aon.hq.newdream.net) Quit (Ping timeout: 480 seconds)
[1:18] * LarsFronius (~LarsFroni@e176057229.adsl.alicedsl.de) Quit (Quit: LarsFronius)
[1:19] <sagelap> nhm: what is the test?
[1:22] <sagelap> doing --debug-objecter 10 --debug-ms 1 will tell you what's going on from the client's perspective (e.g., single slow op)
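
As a concrete sketch of that suggestion (the bench parameters mirror the 64 MB / 64-in-flight case discussed above; the pool name is a placeholder):

    # 60 seconds of 64 MB writes, 64 concurrent, with client-side debug output
    rados -p data bench 60 write -b 67108864 -t 64 \
        --debug-objecter 10 --debug-ms 1 2> client.log
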
[1:24] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[1:27] <nhm> sagelap: that was 64 64MB IOs coming from a single client. 256 64MB IOs from a single client caused a bad_alloc.
[1:28] <nhm> 256 4MB IOs worked fine though...
[1:28] <sagelap> that's a big io :)
[1:28] <nhm> sagelap: Yep. :D
[1:28] <nhm> ok, gotta run, bbl
[1:35] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[1:35] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[1:40] * ofical (ofical@112.161.134.227) has joined #ceph
[1:49] * sagelap (~sage@50-1-53-18.dedicated.static.sonic.net) Quit (Ping timeout: 480 seconds)
[1:54] * joao (~JL@89-181-151-120.net.novis.pt) Quit (Ping timeout: 480 seconds)
[1:54] * ofical is now known as oiig
[1:58] <dmick> ah, so this is better news
[1:58] <dmick> I now know where all the switches actually *are*, and it looks like only one of them is missing a pw
[1:58] <dmick> yay
[2:00] * sagelap (~sage@ace.ops.newdream.net) has joined #ceph
[2:11] * sagelap (~sage@ace.ops.newdream.net) Quit (Ping timeout: 480 seconds)
[2:12] * oiig (ofical@112.161.134.227) Quit ()
[2:12] * oiig (oiig@112.161.134.227) has joined #ceph
[2:30] * yoshi (~yoshi@p1062-ipngn1901marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[2:36] * oiig (oiig@112.161.134.227) Quit ()
[2:37] * oiig (oiig@112.161.134.227) has joined #ceph
[2:38] * joshd (~joshd@aon.hq.newdream.net) Quit (Quit: Leaving.)
[2:43] * lofejndif (~lsqavnbok@09GAAEJ2A.tor-irc.dnsbl.oftc.net) Quit (Quit: Leaving)
[2:44] * Qten1 (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[2:49] * rturk (~textual@aon.hq.newdream.net) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[2:52] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Remote host closed the connection)
[3:22] * perplexed_ (~ncampbell@216.113.168.141) has joined #ceph
[3:22] * perplexed_ (~ncampbell@216.113.168.141) Quit ()
[3:29] * perplexed (~ncampbell@216.113.168.141) Quit (Ping timeout: 480 seconds)
[3:33] * adjohn (~adjohn@p1062-ipngn1901marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[4:10] * darkfader (~floh@188.40.175.2) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * dmick (~dmick@aon.hq.newdream.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * nhm (~nh@68.168.168.19) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * ottod_ (~ANONYMOUS@9YYAAELTK.tor-irc.dnsbl.oftc.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * cattelan_away (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * krisk (~kap@rndsec.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * ivan` (~ivan`@li125-242.members.linode.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * Qten1 (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * oiig (oiig@112.161.134.227) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * mtk (~mtk@ool-44c35967.dyn.optonline.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * __jt__ (~james@jamestaylor.org) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * nyeates (~nyeates@pool-173-59-237-75.bltmmd.fios.verizon.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * rosco (~r.nap@188.205.52.204) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * grape (~grape@216.24.166.226) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * iggy (~iggy@theiggy.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * cclien (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * gohko (~gohko@natter.interq.or.jp) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * sboyette (~mdxi@74-95-29-182-Atlanta.hfc.comcastbusiness.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * eternaleye (~eternaley@tchaikovsky.exherbo.org) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * eightyeight (~atoponce@pthree.org) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * chutzpah (~chutz@216.174.109.254) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * yehudasa (~yehudasa@aon.hq.newdream.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * lxo (~aoliva@lxo.user.oftc.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * jpieper (~josh@209-6-86-62.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * sjust (~sam@aon.hq.newdream.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * sage (~sage@cpe-76-94-40-34.socal.res.rr.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * sagewk (~sage@aon.hq.newdream.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * mkampe (~markk@aon.hq.newdream.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * ajm (~ajm@64.188.63.86) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * guido (~guido@mx1.hannover.ccc.de) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * psomas (~psomas@inferno.cc.ece.ntua.gr) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * imjustmatthew (~imjustmat@pool-71-176-223-2.rcmdva.fios.verizon.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * ivan\ (~ivan@108-213-76-179.lightspeed.frokca.sbcglobal.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * gregaf (~Adium@aon.hq.newdream.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * edwardw`away (~edward@ec2-50-19-100-56.compute-1.amazonaws.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * chaos_ (~chaos@hybris.inf.ug.edu.pl) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * nolan (~nolan@phong.sigbus.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * MK_FG (~MK_FG@188.226.51.71) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * Azrael (~azrael@terra.negativeblue.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:10] * Meths (rift@2.25.214.237) Quit (reticulum.oftc.net synthon.oftc.net)
[4:11] * Qten1 (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[4:11] * oiig (oiig@112.161.134.227) has joined #ceph
[4:11] * imjustmatthew (~imjustmat@pool-71-176-223-2.rcmdva.fios.verizon.net) has joined #ceph
[4:11] * mtk (~mtk@ool-44c35967.dyn.optonline.net) has joined #ceph
[4:11] * dmick (~dmick@aon.hq.newdream.net) has joined #ceph
[4:11] * yehudasa (~yehudasa@aon.hq.newdream.net) has joined #ceph
[4:11] * chutzpah (~chutz@216.174.109.254) has joined #ceph
[4:11] * __jt__ (~james@jamestaylor.org) has joined #ceph
[4:11] * nyeates (~nyeates@pool-173-59-237-75.bltmmd.fios.verizon.net) has joined #ceph
[4:11] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[4:11] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[4:11] * rosco (~r.nap@188.205.52.204) has joined #ceph
[4:11] * jpieper (~josh@209-6-86-62.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com) has joined #ceph
[4:11] * grape (~grape@216.24.166.226) has joined #ceph
[4:11] * sjust (~sam@aon.hq.newdream.net) has joined #ceph
[4:11] * ivan\ (~ivan@108-213-76-179.lightspeed.frokca.sbcglobal.net) has joined #ceph
[4:11] * nhm (~nh@68.168.168.19) has joined #ceph
[4:11] * darkfader (~floh@188.40.175.2) has joined #ceph
[4:11] * Meths (rift@2.25.214.237) has joined #ceph
[4:11] * Azrael (~azrael@terra.negativeblue.com) has joined #ceph
[4:11] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) has joined #ceph
[4:11] * MK_FG (~MK_FG@188.226.51.71) has joined #ceph
[4:11] * nolan (~nolan@phong.sigbus.net) has joined #ceph
[4:11] * chaos_ (~chaos@hybris.inf.ug.edu.pl) has joined #ceph
[4:11] * psomas (~psomas@inferno.cc.ece.ntua.gr) has joined #ceph
[4:11] * guido (~guido@mx1.hannover.ccc.de) has joined #ceph
[4:11] * edwardw`away (~edward@ec2-50-19-100-56.compute-1.amazonaws.com) has joined #ceph
[4:11] * ajm (~ajm@64.188.63.86) has joined #ceph
[4:11] * mkampe (~markk@aon.hq.newdream.net) has joined #ceph
[4:11] * sagewk (~sage@aon.hq.newdream.net) has joined #ceph
[4:11] * sage (~sage@cpe-76-94-40-34.socal.res.rr.com) has joined #ceph
[4:11] * gregaf (~Adium@aon.hq.newdream.net) has joined #ceph
[4:11] * eternaleye (~eternaley@tchaikovsky.exherbo.org) has joined #ceph
[4:11] * sboyette (~mdxi@74-95-29-182-Atlanta.hfc.comcastbusiness.net) has joined #ceph
[4:11] * krisk (~kap@rndsec.net) has joined #ceph
[4:11] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) has joined #ceph
[4:11] * gohko (~gohko@natter.interq.or.jp) has joined #ceph
[4:11] * cattelan_away (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) has joined #ceph
[4:11] * ottod_ (~ANONYMOUS@9YYAAELTK.tor-irc.dnsbl.oftc.net) has joined #ceph
[4:11] * ivan` (~ivan`@li125-242.members.linode.com) has joined #ceph
[4:11] * eightyeight (~atoponce@pthree.org) has joined #ceph
[4:11] * cclien (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) has joined #ceph
[4:11] * iggy (~iggy@theiggy.com) has joined #ceph
[4:44] * adjohn (~adjohn@p1062-ipngn1901marunouchi.tokyo.ocn.ne.jp) Quit (Quit: adjohn)
[4:58] * notjacques (~claude_eg@122x212x156x18.ap122.ftth.ucom.ne.jp) has joined #ceph
[4:58] <notjacques> hi all
[4:59] <notjacques> I'm looking to get ceph working in a single server environment for testing purposes
[4:59] <notjacques> but the page seems to have disappeared from the wiki
[4:59] <notjacques> is there any place where I should look ?
[5:03] <dmick> try http://ceph.newdream.net/docs
[5:03] <dmick> specifically Operations
[5:06] <notjacques> thank you dmick, I actually came from there, and I am trying to apply what I see on http://ceph.newdream.net/docs/master/ops/install/mkcephfs/ to use only one node
[5:07] <notjacques> and I could not get a healthy cluster running, so I am trying to see where I messed up
[5:07] <dmick> ah
[5:08] <dmick> what's happening? perhaps I can help
[5:18] * sagelap (~sage@mdb0536d0.tmodns.net) has joined #ceph
[5:36] * sagelap (~sage@mdb0536d0.tmodns.net) Quit (Read error: Connection reset by peer)
[5:36] <iggy> also... how much are you expecting to test in a single server setup
[6:12] <notjacques> *back*
[6:12] <notjacques> sorry
[6:13] <notjacques> I'm actually testing some software that backs up data to s3
[6:14] <notjacques> so I was looking for some self-hosted application that was compatible with the s3 protocol
[6:14] <notjacques> my searches led to ceph + radosgw
[6:15] <notjacques> and I am now trying to get ceph working for that
[6:25] * sagelap (~sage@ace.ops.newdream.net) has joined #ceph
[6:32] <dmick> notjacques: what's going wrong?
[6:34] * adjohn (~adjohn@p1062-ipngn1901marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[6:38] * f4m8_ is now known as f4m8
[6:40] * perplexed (~ncampbell@c-76-21-85-168.hsd1.ca.comcast.net) has joined #ceph
[6:40] * sagelap (~sage@ace.ops.newdream.net) Quit (Ping timeout: 480 seconds)
[6:41] <notjacques> dmick: the command "ceph -k mycluster.keyring -c mycluster.conf health" returns 2012-04-03 13:38:57.190194 mon0 -> 'HEALTH_WARN 198 pgs degraded, 21/42 degraded (50.000%)' (0)
[6:42] <notjacques> my conf file is http://pastebin.com/MEDMvy92
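
The pastebin contents aren't preserved in this log; purely as an illustration of the shape such a file took (not notjacques's actual config; hostnames, addresses and paths are placeholders), a minimal single-host mkcephfs-era ceph.conf looked roughly like:

    [global]
        auth supported = cephx
    [mon.a]
        host = testbox
        mon addr = 192.168.0.10:6789
    [mds.a]
        host = testbox
    [osd.0]
        host = testbox
        osd data = /srv/ceph/osd.0
        osd journal = /srv/ceph/osd.0.journal
        osd journal size = 1000
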
[6:43] <dmick> you won't need the mds if you're not using cephfs, but that's not the issue
[6:43] <dmick> not sure what might be wrong; you could try ceph -k mycluster.keyring -c mycluster.conf -w and see if it seems to be changing
[6:56] <dmick> did that help at all?
[6:59] <notjacques> I tried the -w command
[6:59] <notjacques> I mean, I am trying it now
[6:59] <notjacques> and nothing seems to be moving
[6:59] <Qten1> heyas, i've got ceph setup on 3 machines with a single disk in each, using xfs as the filesystem as it supports extended attributes, just wondering using the fuse mount what kind of speed should i be getting doing writes? currently i'm only getting around 22mb/s
[7:02] <Qten1> watching bwm-ng it's bursting all over the place, 2mb/s-120mb/s, seems strange
[7:06] * dmick (~dmick@aon.hq.newdream.net) Quit (Quit: Leaving.)
[7:11] <Qten1> large seq reads are not too bad at 65mb/s
[7:20] * sagelap (~sage@184.169.41.25) has joined #ceph
[7:22] <Qten1> also tried -o big_writes no luck
[7:23] * chutzpah (~chutz@216.174.109.254) Quit (Quit: Leaving)
[7:24] * yoshi (~yoshi@p1062-ipngn1901marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[7:39] * sagelap (~sage@184.169.41.25) Quit (Read error: Connection reset by peer)
[8:23] * loicd (~loic@magenta.dachary.org) Quit (Ping timeout: 480 seconds)
[9:04] * perplexed (~ncampbell@c-76-21-85-168.hsd1.ca.comcast.net) Quit (Quit: perplexed)
[9:06] * imjustmatthew (~imjustmat@pool-71-176-223-2.rcmdva.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[9:07] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[9:19] * loicd (~loic@83.167.43.235) has joined #ceph
[9:26] * LarsFronius (~LarsFroni@e176057229.adsl.alicedsl.de) has joined #ceph
[9:49] * LarsFronius (~LarsFroni@e176057229.adsl.alicedsl.de) Quit (Quit: LarsFronius)
[10:13] * oiig is now known as oiig-1
[10:15] * oiig-1 is now known as oiig_
[10:33] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[11:15] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[11:15] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[12:00] * Azrael (~azrael@terra.negativeblue.com) Quit (Remote host closed the connection)
[12:08] * Azrael (~azrael@terra.negativeblue.com) has joined #ceph
[12:09] * Azrael is now known as Guest551
[12:10] * Guest551 is now known as Azrael
[12:30] * adjohn (~adjohn@p1062-ipngn1901marunouchi.tokyo.ocn.ne.jp) Quit (Quit: adjohn)
[12:40] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[12:43] * adjohn (~adjohn@p1062-ipngn1901marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[12:53] * oiig_ is now known as oiig
[13:01] * adjohn (~adjohn@p1062-ipngn1901marunouchi.tokyo.ocn.ne.jp) Quit (Quit: adjohn)
[13:05] * oliver1 (~oliver@p4FECFF3A.dip.t-dialin.net) has joined #ceph
[13:14] * nyeates (~nyeates@pool-173-59-237-75.bltmmd.fios.verizon.net) Quit (Quit: Zzzzzz)
[13:18] <nhm> doh, ran out of space on metropolis.
[13:21] * adjohn (~adjohn@p1062-ipngn1901marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[13:24] <nhm> guess I should have been writing to /data. oops...
[13:30] * adjohn (~adjohn@p1062-ipngn1901marunouchi.tokyo.ocn.ne.jp) Quit (Quit: adjohn)
[14:22] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[14:23] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[14:28] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Quit: Ex-Chat)
[14:29] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[14:34] * gohko (~gohko@natter.interq.or.jp) Quit (Quit: Leaving...)
[14:38] * gohko (~gohko@natter.interq.or.jp) has joined #ceph
[14:43] * gohko_ (~gohko@natter.interq.or.jp) has joined #ceph
[14:43] * gohko (~gohko@natter.interq.or.jp) Quit (Read error: Connection reset by peer)
[14:44] * gohko (~gohko@natter.interq.or.jp) has joined #ceph
[14:44] * gohko_ (~gohko@natter.interq.or.jp) Quit (Read error: Connection reset by peer)
[14:59] * oiig (oiig@112.161.134.227) Quit (Ping timeout: 480 seconds)
[15:14] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[15:15] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[15:32] * EDevil (~Adium@194.65.5.235) has joined #ceph
[15:34] <EDevil> I've been looking at Ceph for some time, specifically to use the RADOS object server, but from the documentation it seems it's not ready for production yet. Nevertheless, has anyone been using it in production? Any major issues?
[15:35] * Liam_SA (~Liam_SA@41.161.35.68) has joined #ceph
[15:35] * joao (~JL@89-181-151-120.net.novis.pt) has joined #ceph
[15:36] <Liam_SA> hi all, when i try and mount the fs using mount -t or mount.ceph i get FATAL: Module ceph not found. can anyone help
[15:38] * Liam_SA (~Liam_SA@41.161.35.68) has left #ceph
[15:39] * Liam_SA (~Liam_SA@41.161.35.68) has joined #ceph
[15:39] <Liam_SA> hi all, when i try and mount the fs using mount -t or mount.ceph i get FATAL: Module ceph not found. can anyone help
[15:41] <joao> well, I would suppose you're trying to use ceph's kernel client and you don't have its module on the system
[15:43] <joao> Liam_SA, cat /boot/config-`uname -r` | grep CONFIG_CEPH_FS
[15:43] * f4m8 is now known as f4m8_
[15:44] * adjohn (~adjohn@s24.GtokyoFL16.vectant.ne.jp) has joined #ceph
[15:51] <Liam_SA> joao, that doesn't return anything?
[15:53] <Liam_SA> joao, if i apt-get install ceph-client-tools i get dependency issues, would that be the problem
[15:54] <joao> Liam_SA, you are aiming at using the kernel client, right?
[15:55] <Liam_SA> joao, yes
[15:55] <joao> so you should have kernel support for ceph
[15:56] <joao> you should have ceph's kernel client either built-in or as a module
[15:56] <joao> that fatal error appears to be lack of kernel support
[15:57] <Liam_SA> joao, sorry noob here how do i install it as a module
[15:57] <joao> jecluis@Magrathea:~$ ls /lib/modules/`uname -r`/kernel/fs/ceph
[15:57] <joao> ceph.ko
[15:58] <joao> this, for instance, should appear if you have it compiled as a module
[15:58] <joao> Liam_SA, I have a ubuntu kernel and ceph's module is available
[15:59] <joao> if you're using a custom kernel, then you probably will have to enable it when configuring the kernel
[15:59] <joao> iirc, it is somewhere under network file systems
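
Pulling joao's checks together with the follow-on steps as shell commands (monitor address, mount point and key are placeholders):

    grep CONFIG_CEPH_FS /boot/config-$(uname -r)    # =y (built-in) or =m (module)?
    ls /lib/modules/$(uname -r)/kernel/fs/ceph/     # ceph.ko present?
    sudo modprobe ceph                              # load it if built as a module
    sudo mount -t ceph 192.168.0.10:6789:/ /mnt/ceph -o name=admin,secret=<key>
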
[15:59] <EDevil> is it possible to get an object from RADOS in a streaming fashion?
[16:01] <joao> EDevil, no idea, but the team should be around in a couple of hours. :)
[16:01] <Liam_SA> joao, /fs/ceph doesn't exist, I'm running debian 2.6.32-5-686 in a vm
[16:02] <joao> Liam_SA, it looks like you have no ceph support then
[16:02] <joao> I have no idea if debian's kernels come with ceph
[16:03] <joao> let me check that out, maybe I can figure it out :)
[16:04] <Liam_SA> joao, so should I rather run it on ubuntu, or do you know how to install it as a module
[16:04] <joao> Liam_SA, ceph is part of mainline kernel
[16:04] <joao> it will run on ubuntu, debian or any obscure linux distro
[16:05] <joao> the thing is, if your distro's repos don't supply kernels with ceph compiled in, then you just have to build a custom kernel with ceph enabled
[16:06] <joao> Liam_SA, just out of curiosity, which version of debian are you using?
[16:07] <Liam_SA> joao, debian-6.0.4-i386
[16:10] <Liam_SA> joao, build a custom kernel with ceph enabled? I'm not that linux savvy
[16:10] <joao> Liam_SA, it's not that hard really :)
[16:11] <joao> I'm sure there are .debs available somewhere, but I'm not really the best person to point you to them
[16:13] <Liam_SA> joao, I'll do some google searching. thanks a mill for your help, I've gotta run. :)
[16:14] <joao> Liam_SA, there is a debian section on ceph's wiki, but I don't think it's up to date
[16:15] <joao> but in any case, I think the team will be around by 9am PST, so maybe then you can find someone who can point you to all the right places :)
[16:15] <Liam_SA> joao, thanks bye :)
[16:15] <joao> c ya
[16:15] * Liam_SA (~Liam_SA@41.161.35.68) Quit (Quit: MegaIRC v4.06 http://ironfist.at.tut.by)
[16:32] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[16:45] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[17:11] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[17:11] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[17:13] <gregaf> notjacques: that HEALTH_WARN is the result of only having one OSD; it's warning you that there's no replication
[17:13] <gregaf> you can fix it by setting your pool size to 1, if you like, or by adding an OSD to the cluster
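
A concrete sketch of that suggestion, assuming the default pools of that era (data, metadata, rbd) and a single-OSD test box where no replication is wanted:

    ceph osd pool set data size 1
    ceph osd pool set metadata size 1
    ceph osd pool set rbd size 1
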
[17:15] <gregaf> EDevil: what do you mean, "in a streaming fashion"?
[16:16] <EDevil> gregaf: Some API that would allow me to access the object data as it is being downloaded, without using http.
[17:17] <gregaf> ah, not really then - but you can do partial reads, if that works for your use case
[17:18] * loicd (~loic@83.167.43.235) Quit (Quit: Leaving.)
[17:19] <EDevil> gregaf: Ah, ok, thanks. Do you know if anyone is using rados in production already? Any issues I should be aware of?
[17:20] <gregaf> hmm, there are some guys here who are using it to host RBD VMs; they've run into a few RBD problems but I think RADOS has been working for them
[17:20] <gregaf> Piston Cloud is using RBD and seems happy with it, though I don't know how many users they have so far
[17:20] <EDevil> gregaf: I just need to use the object server part.
[17:20] <gregaf> and DreamHost is launching a cloud storage service Real Soon Now and RADOS has been behaving for them
[17:21] <gregaf> it's still early days (and I'm a dev) but we're feeling pretty confident about it; we're spinning up commercial support
[17:22] <EDevil> gregaf: Do you know if there's an async RADOS driver I can use from Twisted python?
[17:24] <gregaf> we have python bindings, and I believe they cover pretty much the whole API
[17:24] <gregaf> if you've got a Ceph repo checkout you can look at src/pybind/*
[17:25] <gregaf> or browse it on github: https://github.com/ceph/ceph/tree/master/src/pybind
[17:26] <EDevil> gregaf: I've seen that one, it doesn't seem to support async operations. I was hoping there was another one.. :) Thanks.
[17:27] <gregaf> EDevil: I haven't looked at them in any detail, but I'm seeing functions like aio_write, aio_read... aio being "asynchronous IO"
[17:28] <EDevil> gregaf: Sorry, didn't look hard enough.
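
A rough sketch of the aio calls gregaf is referring to, using the names from src/pybind/rados.py (treat the exact signatures as indicative, since they may have differed in the 2012 binding):

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')   # path is a placeholder
    cluster.connect()
    ioctx = cluster.open_ioctx('data')

    def on_read(completion, data_read):
        # invoked from a librados callback thread, not the caller's thread
        print("read %d bytes" % len(data_read))

    completion = ioctx.aio_read('some-object', 4096, 0, on_read)
    completion.wait_for_complete()   # or hook completions into your own event loop
    ioctx.close()
    cluster.shutdown()

For something like Twisted, the callback would still need to hand its result back to the reactor thread (e.g. via callFromThread); the binding itself is not reactor-aware.
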
[17:36] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[17:43] * loicd (~loic@magenta.dachary.org) has joined #ceph
[17:49] * Tv|work (~Tv_@aon.hq.newdream.net) has joined #ceph
[17:50] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:52] <nhm> anyone seen problems with teuthology occasionally not being able to connect to plana nodes even though they are up?
[17:52] <joao> nope
[17:53] * imjustmatthew (~imjustmat@pool-108-4-31-137.rcmdva.fios.verizon.net) has joined #ceph
[17:53] <nhm> It happens to me every once in a while. I get something like "ValueError: failed connect to ubuntu@plana94.front.sepia.ceph.com".
[17:54] <nhm> This last time it made my suite test die about 1/6 of the way through. :(
[17:54] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Remote host closed the connection)
[17:55] <Tv|work> nhm: that's an ssh connection taking >60 sec to form
[17:55] <nhm> Tv|work: yeah, I don't know why it's happening though, as I never see it when I ssh to the node manually.
[17:55] <nhm> Tv|work: it's happened periodically since I started. Sometimes it's worse than others.
[17:55] <Tv|work> nhm: and you can blame sage for the sucky error message ;)
[17:56] <Tv|work> nhm: you ssh in manually less & with higher timeouts
[17:57] <nhm> Tv|work: seems like 60s should be more than enough though.
[17:57] <Tv|work> yeah, sounds like you have/had networking issues
[17:57] <nhm> Tv|work: this is from metropolis
[17:58] <Tv|work> *shrug*... if the openvpn connection fails, it'll take a while for it to detect & renegotiate too
[17:58] <nhm> Tv|work: Though I was seeing it from my box too.
[17:58] <Tv|work> sage's commit hid the underlying message
[17:58] <nhm> Tv|work: I'm doing this in a screen, so I don't think the vpn should matter...
[17:58] <Tv|work> so i don't know what kind of an error is actually happening
[17:58] <Tv|work> nhm: metropolis wouldn't be able to talk to plana without a vpn
[17:58] <nhm> Tv|work: ah, ok.
[17:58] <nhm> Tv|work: I haven't looked at how the routes are setup there.
[18:00] <nhm> Tv|work: think any kind of backoff/retry setup would make sense? Or at least be able to restart a suite based on tests that have passed?
[18:00] <Tv|work> well the timeout was put in place explicitly
[18:00] <Tv|work> so i'd imagine removing it / working around it is unwanted
[18:00] <Tv|work> hmm actually it's been there from the dawn of time
[18:01] <nhm> Tv|work: I imagine that if the host is truly unreachable you don't want to be hanging on it forever.
[18:01] <Tv|work> nhm: suites, if that is the word you really meant to use, are independent anyway
[18:02] <nhm> Tv|work: I'm not sure what that has to do with restarting a specific suite?
[18:02] <Tv|work> nhm: i think you don't mean suite...
[18:05] * oliver1 (~oliver@p4FECFF3A.dip.t-dialin.net) has left #ceph
[18:05] <joao> is it possible to specify different arguments to different osds with teuthology?
[18:05] <nhm> Ok. What I actually have is a collection of fragments that result in about 650 tests being run. I run these through teuthology-suite with some modifications I made that allow you to archive the results locally with teuthology instead of teuthology-schedule.
[18:06] <sagewk> joao: yes
[18:06] <sagewk> - ceph:
[18:06] <sagewk> conf:
[18:06] <sagewk> osd.1:
[18:06] <sagewk> some option: value
[18:06] <sagewk> osd.2:
[18:06] <sagewk> some other option: foo
[18:06] <sagewk> in your tasks: section
[18:06] <joao> sagewk, do the options have to be supported by teuthology or something?
[18:07] <joao> just wondering if I can pass the equivalent of a different "--filestore-dump-file" to each of the osds
[18:07] <sagewk> those go into the generated ceph.conf file
[18:08] <sagewk> so that would be 'filestore dump file: /tmp/cephtest/archive/log/foo'
[18:08] <joao> great
[18:08] <joao> thanks
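
Putting sagewk's fragment together with the option joao is after, the relevant piece of a teuthology job file would look roughly like this (indentation restored from the flattened chat paste; the dump file names are illustrative, following the /tmp/cephtest/archive path sagewk mentions above):

    tasks:
    - ceph:
        conf:
          osd.1:
            filestore dump file: /tmp/cephtest/archive/log/osd.1.dump
          osd.2:
            filestore dump file: /tmp/cephtest/archive/log/osd.2.dump
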
[18:09] <Tv|work> sagewk: fyi upstart experiment is successful.. i think we can make it friendly.. more details later
[18:10] <sagewk> tv|work: excellent
[18:12] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[18:14] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[18:19] <joao> btw, are there any precompiled ceph-client deb packages available for download?
[18:19] <Tv|work> joao: see the kernel gitbuilder
[18:19] <joao> there was a guy around today that didn't have ceph in his debian installation, and I didn't know where to point him to
[18:20] <Tv|work> joao: oh, perhaps you don't really mean "ceph-client"
[18:20] <joao> well, isn't ceph-client the repo holding our stable kernel tree?
[18:20] <Tv|work> joao: most modern kernels have enough kernelside support for a ceph dfs mount to work
[18:20] <Tv|work> joao: not really "stable"
[18:21] <sagewk> joao: a 3.3 mainline kernel will work well
[18:21] <Tv|work> joao: kernel-side support has been mainline for a long time
[18:21] <joao> Tv|work, apparently, his distro's kernel deb didn't have ceph available, either as a module or built-in
[18:22] <joao> I know, what I mean is that the kernel he had installed in his squeeze installation didn't have ceph.ko and, well, he had no idea how to install a kernel
[18:22] <joao> so I was wondering if there were some .debs lying around for such a case
[18:24] <sagewk> joao: http://gitbuilder.ceph.com/kernel-deb-oneiric-x86_64-basic/ref/v3.3/ is our debug/qa kernel. will probably work for him
[18:26] <joao> okay
[18:26] <joao> will write that down for future reference :)
[18:27] <joao> I thought the gitbuilder kernels were customized for the planas though
[18:27] <Tv|work> joao: modern pcs require no customization
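
For someone without a ceph-enabled kernel, installing one of those prebuilt gitbuilder kernels is the usual deb workflow (the exact package filename depends on the build and is shown here as a placeholder):

    wget http://gitbuilder.ceph.com/kernel-deb-oneiric-x86_64-basic/ref/v3.3/linux-image-<version>.deb
    sudo dpkg -i linux-image-<version>.deb
    sudo reboot
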
[18:28] * perplexed (~ncampbell@216.113.168.141) has joined #ceph
[18:32] <perplexed> any rados4j compiling experts out there this morning? Running into an error during build process. Initial complaint is "[exec] com_dokukino_rados4j_Rados.cpp:113: error: 'librados::pool_t' has not been declared". I'm assuming there's an include that didn't get satisfied. Did as the wiki suggested... pulled code via git, then ant...
[18:37] * bchrisman (~Adium@108.60.121.114) has joined #ceph
[18:40] * adjohn (~adjohn@s24.GtokyoFL16.vectant.ne.jp) Quit (Quit: adjohn)
[18:44] <gregaf> joao: he was running a 2.6.32 kernel - too old for our stuff to work anyway :(
[18:45] <joao> oh... I completely missed that
[18:48] <nhm> gregaf: coincidentally, that's the newest kernel that lustre supports. ;)
[18:48] <gregaf> perplexed: haven't used rados4j, but it looks to be pretty out of date :(
[18:49] <joao> btw, is gceph some graphical configuration tool?
[18:49] <joao> or is there something more to it?
[18:50] <gregaf> joao: not even that, it's just a graphical display of ceph -s output
[18:50] * BManojlovic (~steki@212.200.241.13) has joined #ceph
[18:51] <joao> are there any future plans for it?
[18:52] <gregaf> every time somebody mentions it I'm surprised it still exists
[18:52] <gregaf> (meaning no)
[18:53] <joao> well, just curious anyway
[18:53] <nhm> Tv|work: so regarding suites, was the description I gave above valid for how we define it?
[18:53] <joao> stumbled on it while looking into src/tools/ :)
[18:53] <Tv|work> nhm: well normally suites schedule a bunch of independent jobs for later execution
[18:53] <Tv|work> nhm: you hacked it to do something else, so it does something else for you
[18:54] <nhm> Tv|work: Well, that's a behavior. I thought a suite was just a set of collections of fragments.
[18:54] * dmick (~dmick@aon.hq.newdream.net) has joined #ceph
[18:55] <Tv|work> nhm: yes but for the mainline use, there's no "continue running the rest", as they are run async
[18:57] <nhm> Tv|work: When a suite is run via teuthology-schedule in the normal manner, does a failed connection to a node cause the entire suite to die? It seems like it would...
[18:57] <Tv|work> nhm: -suite submits individual things to a queue
[18:58] <sagewk> joao: ObjectStore::transaction already has a dump method.. it looks like it got duplicated in trace_dump()?
[18:58] * steki-BLAH (~steki@212.200.241.226) has joined #ceph
[18:58] <nhm> Tv|work: Ok, so only the specific test being run would die.
[19:00] <joao> sagewk, in a way, yes
[19:00] <joao> sagewk, I cleaned it up a bit
[19:00] <dmick> tracker borked again?
[19:00] <joao> to fit my purpose
[19:00] <dmick> there it is
[19:01] <joao> and wasn't sure what the impact doing it on the dump() function would be
[19:01] <joao> *of
[19:01] <nhm> dmick: it was going a bit slow for me earlier
[19:01] <sagewk> joao: what had to be cleaned up?
[19:01] <sagewk> the code shouldn't be duplicated.. it should fix dump() (if necessary) and use that
[19:02] <joao> sagewk, some open_section()/close_section()
[19:02] <joao> sagewk, I figured you'd call me on that
[19:03] <sagewk> :) which ones were missing? ceph-osd --dump-journal works as-is, although it may be sloppy with some of the ops
[19:03] * ghaskins (~ghaskins@68-116-192-32.dhcp.oxfr.ma.charter.com) has joined #ceph
[19:03] <perplexed> Are there any better/current java solutions than rados4j? Is java-rados considered a better approach?
[19:04] <sagewk> nothing current for librados
[19:04] <sagewk> but there is an almost-ready wrapper for libcephfs that is nicely packaged and all that. ideally we'd morph rados4j into something similarly clean
[19:04] * ghaskins (~ghaskins@68-116-192-32.dhcp.oxfr.ma.charter.com) has left #ceph
[19:05] * BManojlovic (~steki@212.200.241.13) Quit (Ping timeout: 480 seconds)
[19:05] <joao> sagewk, basically, the original dump function added some sections I didn't need in this dump, and it would print the operation's names
[19:05] <joao> I preferred the operation's values, and to lose those extra sections
[19:05] <sagewk> which extra sections?
[19:05] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[19:06] <joao> sagewk, for instance, stuff like this
[19:06] <joao> f->open_object_section("rmcoll");
[19:06] <joao> f->dump_stream("collection") << cid;
[19:06] <joao> f->close_section();
[19:06] <joao> I removed the open_object_section() and the close_section()
[19:07] <joao> as I had no use for them, and they would simply make the dump cluttered with info I didn't need
[19:08] <joao> I guess I could have made the dump() function a bit more versatile though, adding an extra default argument
[19:08] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[19:10] <sagewk> i don't think the verbosity matters here if the structure is consistent
[19:10] <joao> sagewk, I'll rework it as soon as I know it's working as I intended
[19:10] <sagewk> k
[19:14] <sagewk> vidyo!
[19:17] <joao> sagewk, is it on Danger Room?
[19:17] <nhm> yep
[19:18] <joao> nhm, are you there by any chance?
[19:18] <nhm> yeah
[19:18] <joao> I feel like I'm all alone
[19:18] <joao> I only see myself
[19:18] <nhm> joao: strange, try logging out and back in?
[19:19] <nhm> joao: I think that happened to Alex once too.
[19:19] <joao> still the same problem; restarting the daemon
[19:20] <nhm> joao: no idea
[19:20] <nhm> joao: you don't see anything?
[19:20] <joao> nop
[19:21] <joao> just me
[19:22] <joao> may the url have changed in the meantime?
[19:31] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[19:32] <sagewk> nhm: re #2233 there is throttling in librados, but i forget whether the bencher is using that or the objecter directly.
[19:34] * brambles (brambles@79.133.200.49) has joined #ceph
[19:34] <nhm> sagewk: hrm, ok. I suppose it depends on what we want to test. Do most things use the objecter or go through the throttling layer?
[19:35] <sagewk> only the exposed librados api gets the throttling, iirc
[19:35] <sagewk> joao: quick skype?
[19:35] <joao> sagewk, any idea who I should contact in order to figure out what's going on with my vidyo all of a sudden?
[19:35] <joao> sagewk, sure
[19:38] * aliguori (~anthony@32.97.110.59) has joined #ceph
[19:48] * lxo (~aoliva@lxo.user.oftc.net) Quit (Read error: Operation timed out)
[19:50] <perplexed> How long does it take before the cluster will consider a "down" OSD to be "out" and trigger re-distribution of its content? I'm hoping to demo the process to some folks, and wanted to get a sense of how quickly this transition will take normally. I'm assuming I can just force the transition with ceph osd down N and/or ceph osd out N though.
[19:52] <NaioN> perplexed: the default is about 5 min if i'm correct
[19:52] <NaioN> it's a setting you can adjust
[19:52] <yehudasa> perplexed: don't remember the default but ceph osd down and out should do the trick
[19:53] <gregaf> perplexed: NaioN: yep, the "mon osd down out interval" param, which you can set on the monitors
[19:53] <perplexed> Thx all
[19:53] <gregaf> which as NaioN says defaults to 300 (seconds, ie 5 minutes)
[19:53] * EDevil (~Adium@194.65.5.235) Quit (Quit: Leaving.)
[19:53] <gregaf> 0 means "don't automatically mark out"
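
For reference, the knob gregaf describes and the manual override perplexed asks about look like this (the osd id is a placeholder; 300 is the default being discussed):

    # ceph.conf, [mon] section: how long a down OSD stays "in" before being marked out
    mon osd down out interval = 300

    # or force the transition by hand for a demo
    ceph osd down 3
    ceph osd out 3
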
[19:59] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[20:02] * joshd (~joshd@aon.hq.newdream.net) has joined #ceph
[20:09] <sagewk> interesting: http://lwn.net/Articles/490413/
[20:09] <sagewk> (udev and systemd source trees are merging)
[20:12] <sagewk> dmick: plana92 key changed?
[20:13] <dmick> not intentionally
[20:13] <dmick> but if someone reinstalled the kernel, it can happen
[20:14] <dmick> I mean, not by my intent
[20:16] <sagewk> k. well, it needs to be updated in the database in any case
[20:20] <Tv|work> sagewk: ehh, "systemd upstream attempts to hijack udev"
[20:21] <Tv|work> sagewk: i don't see anything on the udev side there
[20:22] * dmick looks at joshd
[20:28] <joshd> 92 and 07 had new keys - you can update them all with teuthology-updatekeys -v -a
[20:32] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[20:49] * chutzpah (~chutz@216.174.109.254) has joined #ceph
[20:57] * LarsFronius (~LarsFroni@e176052249.adsl.alicedsl.de) has joined #ceph
[21:14] * cattelan_away is now known as cattelan
[21:17] * imjustmatthew (~imjustmat@pool-108-4-31-137.rcmdva.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[21:43] * lofejndif (~lsqavnbok@19NAAHUES.tor-irc.dnsbl.oftc.net) has joined #ceph
[21:51] * cattelan (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[21:55] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[22:07] * lofejndif (~lsqavnbok@19NAAHUES.tor-irc.dnsbl.oftc.net) Quit (Quit: Leaving)
[22:17] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) Quit (Quit: Leaving)
[22:19] * cattelan (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) has joined #ceph
[22:21] * lofejndif (~lsqavnbok@28IAADQHM.tor-irc.dnsbl.oftc.net) has joined #ceph
[22:23] * MK_FG (~MK_FG@188.226.51.71) Quit (Ping timeout: 480 seconds)
[22:35] * cattelan (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[22:39] <dmick> joshd: cool, thanks
[23:00] * cattelan (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) has joined #ceph
[23:11] * aliguori (~anthony@32.97.110.59) Quit (Quit: Ex-Chat)
[23:34] * MK_FG (~MK_FG@219.91-157-90.telenet.ru) has joined #ceph
[23:37] * imjustmatthew (~imjustmat@pool-71-176-237-208.rcmdva.fios.verizon.net) has joined #ceph
[23:43] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[23:56] * LarsFronius (~LarsFroni@e176052249.adsl.alicedsl.de) Quit (Quit: LarsFronius)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.