#ceph IRC Log


IRC Log for 2013-01-30

Timestamps are in GMT/BST.

[0:00] <dmick> sjust: you there?
[0:00] <xmltok> i've configured the rgw logging to be off, from what i can tell on http://ceph.com/docs/master/radosgw/config/, but my rados log is still growing at a very high rate, and i am worried it's slowing things down
[0:03] * BillK (~BillK@58-7-74-106.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[0:08] <sjust> loicd: I think you need to remove the store_test_temp_dir and store_test_temp_journal
[0:08] * vata (~vata@2607:fad8:4:6:a512:68b7:27c0:9c13) Quit (Quit: Leaving.)
[0:09] <loicd> sjust: which I did. Then it blocks on a Cond::Wait ( gdb stack trace here http://paste.debian.net/230275/ )
[0:10] <sjust> or it might need an empty store_test_temp_dir
[0:10] <dmick> it definitely creates both
[0:10] <dmick> just doesn't seem to do much else
[0:10] <sjust> oh, you probably need --filestore-xattr-use-omap=true
[0:10] * loicd trying
[0:11] <dmick> better
[0:11] <sjust> if you add --log-to-stderr=true --debug-filestore=20, that would get you output
[0:11] <loicd> sjust: much better ;-)
[0:12] <sjust> sorry, those tools don't have particularly helpful output
[0:13] <dmick> hanging seems an odd behavior even at that
[0:13] <loicd> that also gives me something to improve my pending pull request ( I use a macro to set debug but it's better done with --debug... ) https://github.com/ceph/ceph/pull/34
[0:15] <loicd> sjust: these tools are very helpful to learn the code base, I'm very glad they are here ;-)
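Pulling the scattered suggestions above together: a sketch of how the store test run might look. The binary name `ceph_test_filestore` is a guess (only the temp dir/journal names appear in the log); the flags are the ones sjust suggested.

```shell
# Hypothetical invocation of the ObjectStore test tool discussed above.
# ceph_test_filestore is an assumed name; the flags come from sjust's
# suggestions in this conversation.
rm -rf store_test_temp_dir store_test_temp_journal   # stale state makes the test hang
./ceph_test_filestore \
    --filestore-xattr-use-omap=true \
    --log-to-stderr=true \
    --debug-filestore=20
```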
[0:15] * SpamapS (~clint@xencbyrum2.srihosting.com) Quit (Remote host closed the connection)
[0:17] * SpamapS (~clint@xencbyrum2.srihosting.com) has joined #ceph
[0:19] * NightDog (~Karl@ has joined #ceph
[0:20] * NightDog (~Karl@ Quit (Read error: Connection reset by peer)
[0:21] * aliguori (~anthony@ Quit (Quit: Ex-Chat)
[0:21] <dmick> anyone want to quickly review wip-3900, for http://tracker.ceph.com/issues/3900?
[0:21] * NightDog (~Karl@ has joined #ceph
[0:22] <sjust> loicd: yep!
[0:24] * benner (~benner@ has joined #ceph
[0:25] * benner_ (~benner@ Quit (Read error: Connection reset by peer)
[0:26] * NightDog (~Karl@ Quit (Read error: Connection reset by peer)
[0:29] <loicd> dmick: I'm taking a look at http://tracker.ceph.com/issues/3900 assuming you need a review on http://paste.debian.net/230278/ ?
[0:31] * DJF5 (~dennisdeg@backend0.link0.net) Quit (Read error: Connection reset by peer)
[0:31] * DJF5 (~dennisdeg@backend0.link0.net) has joined #ceph
[0:31] * rtek (~sjaak@empfindlichkeit.nl) Quit (Read error: Connection reset by peer)
[0:31] * NightDog (~Karl@ has joined #ceph
[0:31] * rtek (~sjaak@empfindlichkeit.nl) has joined #ceph
[0:31] * NightDog (~Karl@ Quit (Read error: Connection reset by peer)
[0:35] <dmick> no, it's in a wip branch on github
[0:35] <dmick> https://github.com/ceph/ceph/tree/wip-3900
[0:35] <dmick> specifically https://github.com/ceph/ceph/commit/c6f3f06bc6c5f70d677f92e16eeab2510ee234ad
[0:37] <loicd> ah
[0:38] * NightDog (~Karl@ has joined #ceph
[0:40] * NightDog (~Karl@ Quit (Read error: Connection reset by peer)
[0:43] * BillK (~BillK@124-169-233-28.dyn.iinet.net.au) has joined #ceph
[0:47] <loicd> dmick: I took a look at https://github.com/ceph/ceph/blob/wip-3900/src/ceph_common.sh#L74 to check if the additional command and the separator ( ; ) could be a problem but it looks good.
[0:47] <loicd> ( I know I'm not the best reviewer ;-)
[0:48] <dmick> yeah, I gave that a look, but basically just tested it and it seems to work
[0:50] <dmick> tnx loicd
[0:51] * PerlStalker (~PerlStalk@ Quit (Quit: ...)
[0:55] <dmick> btw, I verified by examining /proc/<pid>/limits. I'll add that to the commit msg
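The /proc check dmick mentions can be scripted; this sketch inspects the current shell's own limits file (substitute a daemon's pid to verify what the wip-3900 change sets):

```shell
# Read the max-open-files limit for a given pid from /proc (Linux only).
# $$ is the current shell; replace it with a ceph daemon's pid to check
# what the init script actually applied.
pid=$$
grep 'Max open files' "/proc/${pid}/limits"
```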
[0:55] * loicd (~loic@magenta.dachary.org) Quit (Ping timeout: 480 seconds)
[1:01] * loicd (~loic@ has joined #ceph
[1:02] <loicd> dmick: network died when I tried to say something meaningful ;-)
[1:02] <loicd> https://github.com/ceph/ceph/commit/c6f3f06bc6c5f70d677f92e16eeab2510ee234ad#commitcomment-2523927
[1:04] <loicd> I moved the comment up to the relevant line
[1:04] <loicd> https://github.com/ceph/ceph/commit/c6f3f06bc6c5f70d677f92e16eeab2510ee234ad#commitcomment-2523946
[1:05] * noob2 (~noob2@pool-71-244-111-36.phlapa.fios.verizon.net) has joined #ceph
[1:05] <dmick> responding there
[1:14] <phantomcircuit> dmick, lol @ getting what they deserve
[1:15] <loicd> dmick: :-D
[1:15] <phantomcircuit> is 0 even a valid value?
[1:15] <phantomcircuit> i would hope not
[1:16] * xiaoxi (~xiaoxiche@jfdmzpr06-ext.jf.intel.com) has joined #ceph
[1:17] <loicd> it's getting late here in france, good night :-)
[1:18] * ninkotech (~duplo@ Quit (Ping timeout: 480 seconds)
[1:18] * loicd (~loic@ Quit (Quit: Leaving.)
[1:20] <dmick> phantomcircuit: $ (ulimit -n 0) succeeds
[1:20] <dmick> I bet that shell is not very useful after that
[1:20] <dmick> but it succeeds
[1:20] <phantomcircuit> I'm surprised it doesn't instantly stop working
[1:20] <phantomcircuit> needs at least 3 fds for stdin/stdout/stderr
[1:20] <phantomcircuit> probably plus more for the shell itself
[1:21] * ninkotech (~duplo@ has joined #ceph
[1:21] <dmick> yeah. I suspect it doesn't affect already-open files
[1:22] <dmick> ls returns
[1:22] <dmick> bash: start_pipeline: pgrp pipe: Too many open files
[1:22] <dmick> ls: error while loading shared libraries: libselinux.so.1: cannot open shared object file: Error 24
[1:22] <phantomcircuit> heh
[1:25] <dmick> pretty much the definition of not very useful :)
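The behavior dmick describes can be reproduced safely in a throwaway child shell, so the current shell is unaffected. Already-open descriptors keep working, but anything that needs a new fd fails:

```shell
# Drop the open-file limit to 0 in a child shell only.
# exec of a dynamically linked binary then fails, because the runtime
# loader cannot open its shared libraries (EMFILE, the "Error 24" above).
bash -c 'ulimit -n 0 && /bin/ls /' || echo "ls failed, as in the log above"
```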
[1:32] * partner (joonas@ajaton.net) Quit (Remote host closed the connection)
[1:34] * partner (joonas@ajaton.net) has joined #ceph
[1:41] * cclien (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) Quit (Server closed connection)
[1:41] * cclien (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) has joined #ceph
[1:43] * ivoks (~ivoks@jupiter.init.hr) Quit (Server closed connection)
[1:44] * ivoks (~ivoks@jupiter.init.hr) has joined #ceph
[1:44] * jonah (~chatzilla@dhcp-37-52.EECS.Berkeley.EDU) has joined #ceph
[1:45] * jonah is now known as jandersonlee
[1:45] * Matt (matt@matt.netop.oftc.net) Quit (Server closed connection)
[1:45] * Matt (matt@spoon.pkl.net) has joined #ceph
[1:47] <jandersonlee> I'm new to ceph and having problems. is this the right forum for help?
[1:48] <jmlowe> Yeah, there are people who can help you here; I'm a fellow user but I'll do what I can
[1:48] <jmlowe> what are you having trouble with?
[1:48] <jandersonlee> the documents suggest the following for a mount...
[1:48] <jandersonlee> mount -t ceph /mnt/mycephfs -o name=admin,secretfile=/etc/ceph/admin.secret
[1:49] <jandersonlee> but I get: adding ceph secret key to kernel failed: Invalid argument.
[1:49] <jandersonlee> failed to parse ceph_options
[1:49] <jmlowe> what do you have in /etc/ceph?
[1:49] <jandersonlee> ls -l
[1:49] <jandersonlee> total 12
[1:49] <jandersonlee> -rw------- 1 root root 41 Jan 29 16:35 admin.secret
[1:49] <jandersonlee> -rw-r--r-- 1 root root 1934 Jan 29 15:30 ceph.conf
[1:49] <jandersonlee> -rw-r--r-- 1 root root 63 Jan 29 16:10 ceph.keyring
[1:50] <jandersonlee> (on client)
[1:50] <jmlowe> what happens if you drop the secretfile argument?
[1:51] <jandersonlee> mount error 22 = Invalid argument
[1:52] <jmlowe> does your ceph.keyring have a client.admin section?
[1:53] * darkfaded (~floh@ Quit (Server closed connection)
[1:53] * darkfader (~floh@xen03.xenvms.de) has joined #ceph
[1:53] <jandersonlee> yes
[1:54] <jandersonlee> ah! got it. thanks. :)
[1:54] <jmlowe> what was it?
[1:54] * illuminatis (~illuminat@0001adba.user.oftc.net) Quit (Server closed connection)
[1:55] * illuminatis (~illuminat@0001adba.user.oftc.net) has joined #ceph
[1:55] <jandersonlee> The secret needs to match the key in the [client.admin] section :P
[1:55] <jmlowe> yep, that will do it
[1:55] <jandersonlee> The document was unclear, so I thought any random key would do ;)
[1:56] <jandersonlee> thanks for the help. oAo
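For reference, a mount invocation along the lines of what finally worked. The monitor address and paths are placeholders; the kernel client needs a monitor address as the device, and the secret file must contain exactly the key from the [client.admin] section of the cluster keyring:

```shell
# Hypothetical example: mon1.example.com and the paths are placeholders.
# /etc/ceph/admin.secret must hold the client.admin key verbatim --
# a random value produces "adding ceph secret key to kernel failed".
mount -t ceph mon1.example.com:6789:/ /mnt/mycephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret
```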
[1:56] * jandersonlee (~chatzilla@dhcp-37-52.EECS.Berkeley.EDU) Quit (Quit: ChatZilla 0.9.89 [Firefox 18.0.1/20130116073211])
[1:59] * scheuk (~scheuk@ Quit (Server closed connection)
[1:59] * scheuk (~scheuk@ has joined #ceph
[1:59] * jpieper (~josh@209-6-86-62.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com) Quit (Server closed connection)
[2:00] * jpieper (~josh@209-6-86-62.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com) has joined #ceph
[2:00] * LeaChim (~LeaChim@027ee384.bb.sky.com) Quit (Ping timeout: 480 seconds)
[2:00] * ivan` (~ivan`@000130ca.user.oftc.net) Quit (Server closed connection)
[2:01] * ivan` (~ivan`@000130ca.user.oftc.net) has joined #ceph
[2:05] * noob2 (~noob2@pool-71-244-111-36.phlapa.fios.verizon.net) has left #ceph
[2:05] * DJF5_ (~dennisdeg@backend0.link0.net) has joined #ceph
[2:05] * scuttlemonkey_ (~scuttlemo@ has joined #ceph
[2:05] * Q310 (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[2:06] * jantje (~jan@paranoid.nl) has joined #ceph
[2:07] * alexxy[home] (~alexxy@2001:470:1f14:106::2) has joined #ceph
[2:07] * jmlowe1 (~Adium@c-71-201-31-207.hsd1.in.comcast.net) has joined #ceph
[2:08] * barnes_ (barnes@bissa.eu) has joined #ceph
[2:08] * doubleg_ (~doubleg@ has joined #ceph
[2:08] * nhm_ (~nh@184-97-251-146.mpls.qwest.net) has joined #ceph
[2:09] * psomas_ (~psomas@inferno.cc.ece.ntua.gr) has joined #ceph
[2:09] * brambles_ (lechuck@s0.barwen.ch) has joined #ceph
[2:09] <ShaunR> configure needs to check for g++ btw
[2:09] * Anticime1 (anticimex@netforce.csbnet.se) has joined #ceph
[2:09] * fc___ (~fc@ has joined #ceph
[2:09] * DJF5 (~dennisdeg@backend0.link0.net) Quit (synthon.oftc.net oxygen.oftc.net)
[2:09] * scuttlemonkey (~scuttlemo@ Quit (synthon.oftc.net oxygen.oftc.net)
[2:09] * sagewk (~sage@2607:f298:a:607:58b:3536:b4:f25b) Quit (synthon.oftc.net oxygen.oftc.net)
[2:09] * dmick (~dmick@2607:f298:a:607:c856:8f85:e202:43cc) Quit (synthon.oftc.net oxygen.oftc.net)
[2:09] * jmlowe (~Adium@c-71-201-31-207.hsd1.in.comcast.net) Quit (synthon.oftc.net oxygen.oftc.net)
[2:09] * nhm (~nh@184-97-251-146.mpls.qwest.net) Quit (synthon.oftc.net oxygen.oftc.net)
[2:09] * Anticimex (anticimex@netforce.csbnet.se) Quit (synthon.oftc.net oxygen.oftc.net)
[2:09] * psomas (~psomas@inferno.cc.ece.ntua.gr) Quit (synthon.oftc.net oxygen.oftc.net)
[2:09] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (synthon.oftc.net oxygen.oftc.net)
[2:09] * Qten (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) Quit (synthon.oftc.net oxygen.oftc.net)
[2:09] * frey (~frey@togt-130-208-247-19.ru.is) Quit (synthon.oftc.net oxygen.oftc.net)
[2:09] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) Quit (synthon.oftc.net oxygen.oftc.net)
[2:09] * barnes (barnes@bissa.eu) Quit (synthon.oftc.net oxygen.oftc.net)
[2:09] * yehudasa (~yehudasa@2607:f298:a:607:b566:3736:8550:69dc) Quit (synthon.oftc.net oxygen.oftc.net)
[2:09] * doubleg (~doubleg@ Quit (synthon.oftc.net oxygen.oftc.net)
[2:09] * jantje_ (~jan@paranoid.nl) Quit (synthon.oftc.net oxygen.oftc.net)
[2:09] * fc__ (~fc@ Quit (synthon.oftc.net oxygen.oftc.net)
[2:09] * brambles (lechuck@s0.barwen.ch) Quit (synthon.oftc.net oxygen.oftc.net)
[2:09] <ShaunR> without it you get this error which really doesn't point anybody in the right direction...
[2:09] <ShaunR> checking for boost/spirit.hpp... no
[2:09] <ShaunR> configure: error: in `/usr/src/ceph-0.56.1':
[2:09] <ShaunR> configure: error: "Can't find boost spirit headers"
[2:10] <ShaunR> the headers exist... it's just that g++ failed
[2:10] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[2:10] <gregaf> ShaunR: what version are you looking at? that's familiar but I think it does check for g++...
[2:11] <ShaunR> 0.56.1
[2:11] * Matt (matt@spoon.pkl.net) Quit (Ping timeout: 480 seconds)
[2:11] <ShaunR> i installed gcc-c++ and it corrected the issue.
[2:12] <ShaunR> here's the config.log
[2:12] <ShaunR> configure:18825: checking boost/spirit/include/classic_core.hpp usability
[2:12] <ShaunR> configure:18825: g++ -c conftest.cpp >&5
[2:12] <ShaunR> ./configure: line 1910: g++: command not found
[2:12] <ShaunR> configure:18825: $? = 127
[2:12] <ShaunR> configure: failed program was:
[2:12] <ShaunR> so if it's checking, it's failing to check properly :)
[2:13] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) has joined #ceph
[2:13] <gregaf> hmm, I'm looking at it running on my machine and there's a "checking for g++… g++" line in the output
[2:13] <gregaf> dmick, do you remember what this was?
[2:14] <ShaunR> let me check my config.log
[2:16] <gregaf> it's possible that some distros lie about g++ if you have gcc but not g++, but I really don't remember, sorry :(
[2:16] <gregaf> (lying = symlinking)
[2:16] <ShaunR> configure:12193: checking for g++
[2:16] <ShaunR> configure:12223: result: no
[2:16] <ShaunR> well, it checks...
[2:16] <iggy> ubuntu does at least
[2:16] <ShaunR> lol.. it just doesn't crap out
[2:17] <iggy> not symlinking, but sets up alias'es for a bunch of stuff
[2:17] <lightspeed> I accidentally ran "ceph auth add" without any further arguments, and now have an entry in "ceph auth list" with a name of "unknown."
[2:17] <lightspeed> any idea how I can get rid of it? because "ceph auth del unknown." just gives me "bad entity name unknown."
[2:18] <ShaunR> gregaf: http://pastebin.ca/2308785
[2:18] <iggy> tried ceph auth del?
[2:18] <jmlowe1> ceph auth del ''
[2:19] <jmlowe1> just a stab in the dark there
[2:19] <lightspeed> "ceph auth del" just returns usage info
[2:19] <gregaf> lightspeed: please file a bug, and I like jmlowe1's suggestion
[2:19] <lightspeed> "ceph auth del ''" gives "bad entity name"
[2:19] <lightspeed> ok, I'll file a bug
[2:19] <jmlowe1> yeah I'd call that a bug
[2:20] <gregaf> ShaunR: okay, guess our script is just off, although I'd swear we dealt with this previously and there was nothing to be done…:/
[2:20] <gregaf> *pokes glowell*
[2:21] <glowell> hi
[2:21] * frey (~frey@togt-130-208-247-19.ru.is) has joined #ceph
[2:24] * yehudasa (~yehudasa@ has joined #ceph
[2:24] <glowell> Doesn't look like we have an explicit check for a c++ compiler in the configure script.
[2:25] * dmick (~dmick@ has joined #ceph
[2:25] * sagewk (~sage@ has joined #ceph
[2:28] <gregaf> there's output and it does check, but in his script it checked, said "no", and then kept running
[2:28] <gregaf> I dunno how the configure stuff works well enough to know what's going on with that
[2:29] <glowell> It looks like that is a check that is internal to configure, not one that says a c++ compiler is required to build.
[2:29] <glowell> I've opened bug 3955.
[2:31] * joshd (~joshd@ has joined #ceph
[2:37] * alram (~alram@ Quit (Ping timeout: 480 seconds)
[2:41] * xmltok (~xmltok@pool101.bizrate.com) Quit (Quit: Leaving...)
[2:49] * dpippenger (~riven@cpe-75-85-17-224.socal.res.rr.com) Quit (Remote host closed the connection)
[2:52] * nwat (~Adium@soenat3.cse.ucsc.edu) Quit (Quit: Leaving.)
[3:09] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) has joined #ceph
[3:16] * xmltok (~xmltok@pool101.bizrate.com) has joined #ceph
[3:21] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[3:22] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) Quit (Quit: Leaving.)
[3:29] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[3:46] * sjustlaptop (~sam@71-83-191-116.dhcp.gldl.ca.charter.com) has joined #ceph
[4:00] * pagefaulted (~pagefault@ Quit (Ping timeout: 480 seconds)
[4:02] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[4:08] * Q310 (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) has left #ceph
[4:13] * sjustlaptop (~sam@71-83-191-116.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[4:16] <paravoid> I can't seem to re-add a mon to my cluster
[4:16] <paravoid> anyone around for some (hopefully basic) troubleshooting?
[4:18] <joshd> I might be able to help
[4:19] <paravoid> huh
[4:19] <paravoid> now it worked
[4:19] <paravoid> so, basically, I have 3 monitors
[4:19] <paravoid> the box on one of them was reformatted
[4:19] <paravoid> so I tried mkfsing and running it
[4:19] <paravoid> and couldn't add it back
[4:19] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[4:20] <paravoid> with --debug_mon 5 it kept saying "slurp"
[4:20] <paravoid> 2013-01-30 03:18:29.899418 7fcdfd899700 1 -- --> mon.0 -- mon_probe(slurp c9da36e1-694a-4166-b346-9d8d4d1d1ac1 name ms-be1003 machine_name pgmap 2118319-2118762 new) v3 -- ?+0 0x30f66c0
[4:20] <joshd> that means it's synching state with the existing ones
[4:20] <paravoid> etc.
[4:21] <paravoid> it did that for several minutes
[4:21] <joshd> it can take a while if it's got a lot to catch up on
[4:21] <paravoid> then I restarted both of the other ones in turn and it got fixed
[4:21] <paravoid> well, that, or it just took some time
[4:21] <paravoid> 2013-01-30 03:16:23.315293 7f166877d700 1 -- --> mon.0 -- mon_probe(slurp c9da36e1-694a-4166-b346-9d8d4d1d1ac1 name ms-be1003 machine_name osdmap 111555-121027 new) v3 -- ?+0 0x27cf000
[4:22] <paravoid> that's an osdmap
[4:22] <paravoid> 111555 was fixed, 121027 kept increasing
[4:22] <paravoid> so, normal?
[4:22] <joshd> you mean your osdmap epoch is increasing that much right now in ceph -s, or just in the log?
[4:22] <joshd> for the monitor's slurping
[4:23] <elder> joshd, are my review comments to patch 1 satisfactory?
[4:23] <paravoid> just in the log
[4:23] <joshd> that is a lot of maps. I'm not sure how efficient slurping is, but it seems reasonable for it to take a few minutes with that many
[4:23] <elder> I think I've made the changes already. I just looked at 2-12 and there's nothing left to change (I already updated the enum to all capitals, throughout the series).
[4:23] <paravoid> why would I have a lot of maps?
[4:24] <elder> Basically I'd like to know if I need to re-post, or if what I said I would do is adequate to address your comments. And if so, I'll add Reviewed-by and test the result overnight.
[4:25] <joshd> elder: yes, go ahead and add my reviewed-by.
[4:25] <elder> Excellent.
[4:25] <joshd> paravoid: long-running cluster, or one with lots of osds going up/down/in/out/changing pg temp (those are what trigger new osdmaps)
[4:26] <elder> I have two more small series that follow that. I think I'll hold off updating the testing branch until one or more of those is ready. (So whenever you can find time to review those it would be wonderful. But know that I want you to go home and get some sleep.)
[4:26] <paravoid> in general or during the time where that monitor was down?
[4:26] <paravoid> hm I guess it doesn't matter, it got reformatted
[4:27] <joshd> paravoid: yeah, since you reformatted, it's in general
[4:27] <paravoid> so mons keep the cluster state since forever?
[4:27] <joshd> elder: thanks. I'll probably respond tomorrow
[4:27] <paravoid> all of the maps that were ever present?
[4:27] <elder> OK.
[4:27] <paravoid> no pruning whatsoever?
[4:27] <joshd> paravoid: no, they should trim
[4:28] <joshd> I'm surprised they didn't in your case, although I don't remember all the constraints on the oldest one they have to keep
[4:28] <joshd> paravoid: I'd ask joao when he wakes up
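To gauge how much a rebuilt monitor has to slurp, the current osdmap epoch (the upper bound in the probe messages above) can be read from the cluster status; a sketch, assuming a reachable cluster:

```shell
# Print the current osdmap epoch; a freshly mkfs'd monitor must sync
# every map the surviving monitors still retain, up to this epoch.
ceph osd stat          # e.g. "e121027: 40 osds: 40 up, 40 in" (numbers from the log above)
ceph -s | grep osdmap  # the same epoch appears in the overall status output
```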
[4:35] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[4:36] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[4:38] * Ryan_Lane (~Adium@ Quit (Quit: Leaving.)
[4:41] * nz_monkey (~nz_monkey@ Quit (Remote host closed the connection)
[4:42] * nz_monkey (~nz_monkey@ has joined #ceph
[4:42] <xiaoxi> joshd: could I ask a basic question? why does the pgmap keep increasing every second, even when there are no data writes from the client (idle or doing reads)?
[4:43] <joshd> xiaoxi: statistics like space utilization from the osds are stored in the pgmap
[4:44] <xiaoxi> joshd: thanks, and why is there a suggestion in ceph's doc saying an SSD for the monitor and MDS is recommended?
[4:45] <xiaoxi> is it relate with pgmap?
[4:45] <joshd> xiaoxi: where does it say that?
[4:45] <joshd> xiaoxi: MDS doesn't even use local disk except for debug logging
[4:46] <xiaoxi> Since the storage requirements for metadata servers and monitors are so low, solid state drives may provide an economical opportunity to improve performance.
[4:46] <xiaoxi> http://ceph.com/docs/master/install/hardware-recommendations/#data-storage
[4:46] <xiaoxi> joshd:yes, it looks a bit strange to me..
[4:47] <joshd> xiaoxi: monitors also don't have that much activity, and wouldn't really benefit from SSDs much unless you have a very strange usage
[4:48] <joshd> xiaoxi: want to file a doc bug?
[4:49] <xiaoxi> joshd:ofcourse~
[4:49] <joshd> xiaoxi: thanks
[4:52] <xiaoxi> joshd:another question:I have a 10GbE(with subnet 192.101.11.X) and a 1GbE(with subnet 192.168.11.x) for every OSD node, I mean to have the 10GbE for data traffic and 1GbE for admin traffic, is it ok for me to put my monitor on subnet 192.168.11.x?
[4:53] <joshd> yeah, as long as clients and OSDs can both reach it
[4:53] <joshd> monitors don't need a lot of bandwidth
[4:55] <xiaoxi> Thanks~
[4:55] <joshd> no problem
[4:56] <xiaoxi> Is there any solution/update for "The XFS volume ceph-osd locked up (hung in xfs_ilock) for somewhere between 2 and 4 minutes and caused the heartbeat check to fail"? (posted by sage on the mailing list with the title *handling fs errors* several days ago)
[4:57] <xiaoxi> It seems that we are suffering from this issue: some OSDs are reported down by other OSDs and after a few minutes they come back up.. the osdmap keeps changing
[4:59] <joshd> yeah, there were some recent fixes so that kind of thing would be reported sooner
[5:00] <joshd> I'm not sure how far it goes towards solving the problem exactly
[5:02] <xiaoxi> well, it seems to be an XFS bug (but may be related to ceph's usage pattern); reporting it sooner doesn't solve the issue, right?
[5:03] <xiaoxi> it's likely to make the osdmap fluctuate more.
[5:05] <joshd> it makes osds hitting xfs bugs that effectively make them unresponsive get marked out sooner, so they don't degrade performance overall
[5:06] <joshd> it's just in the master branch so far, and sagewk or sjust could tell you more
[5:06] <joshd> I've got to go though. see you later
[5:07] <xiaoxi> joshd:thanks~bye
[5:07] * chutzpah (~chutz@ Quit (Quit: Leaving)
[5:22] * Cube1 (~Cube@ Quit (Ping timeout: 480 seconds)
[5:25] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[5:27] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit ()
[5:45] <phantomcircuit> joshd, there isn't anywhere in the monitor code that calls fsync?
[5:46] * fghaas (~florian@ has joined #ceph
[5:46] <fghaas> joshd: is it expected at this time that all writes to an RBD device fail while mapped under a 3.2.0 kernel?
[5:49] <fghaas> if the image has more than zero snapshots, that is, which kind of makes snapshots... useless?
[5:51] <dmick> fghaas: image-format 1, no. image-format 2, yes
[5:52] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[5:52] <fghaas> dmick, is format 2 the default in bobtail? the --help message says otherwise
[5:54] <fghaas> plus, removing an image that is currently mapped not only is still possible, it causes OSDs to crash with a segfault in remove_child()
[5:55] <phantomcircuit> that is probably not expected
[5:55] <dmick> fghaas: no, default should be 1
[5:55] <dmick> was there a fix for removing mapped images? I don't remember
[5:57] <dmick> and I don't see anything trying to stop that
[5:57] <dmick> 3.2 is fairly old in krbd time
[5:59] * fghaas (~florian@ has left #ceph
[5:59] * fghaas (~florian@ has joined #ceph
[5:59] <fghaas> dmick, it actually gets better
[5:59] <fghaas> has nothing to do with whether the image is mapped or has snapshots, "rbd rm" just crashes the osd. yay :)
[6:00] <dmick> hum. it certainly doesn't do that on *my* systems :)
[6:01] <fghaas> it reliably does here
[6:01] <fghaas> debian squeeze, 3.2.0 backports kernel, ceph.com 0.56.1 packages
[6:02] <dmick> got a backtrace handy?
[6:04] <fghaas> dmick: http://paste.debian.net/230353/ <- does this help?
[6:05] <dmick> hum, cls_log
[6:05] <dmick> wonder what happens if I turn that up
[6:06] <fghaas> log file = /var/log/ceph/$cluster-$id.log
[6:06] <fghaas> is the only log related non-default ceph.conf option
[6:06] <dmick> what is your debug objclass set to?
[6:06] <fghaas> dmick: like I said, unchanged from default
[6:06] <dmick> ah
[6:06] <dmick> wasn't sure 'log-related' meant that too. ok. weird.
[6:07] <dmick> sure looks like it's dying trying to write to the log
[6:07] <dmick> can you get a coredump and bt?
[6:07] <fghaas> rbd -n client.admin -k /etc/ceph/keyring rm test2
[6:07] <fghaas> Removing image: 99% complete...2013-01-30 05:00:36.741201 b21b9b70 0 -- >> pipe(0xa4521b8 sd=5 :0 pgs=0 cs=0 l=1).fault
[6:07] <fghaas> 2013-01-30 05:01:07.560788 b22bab70 0 -- >> pipe(0xa4440d0 sd=5 :0 pgs=0 cs=0 l=1).fault
[6:07] <fghaas> that's what it looks like from the client end, if that helps
[6:08] <dmick> remove_child() would be at the end of the deletion
[6:08] <fghaas> yup
[6:08] <fghaas> hence, 99% complete and then boom
[6:08] <dmick> yeah
[6:08] <dmick> no, that one's new to me
[6:10] <dmick> minorly interesting that it's a RETRY op, apparently
[6:10] <fghaas> might it make a difference that this is on an image that is 32bit, in order to be used for demos on 32bit desktops?
[6:10] <dmick> what provokes this? rbd create followed by rbd rm? any particular size?
[6:10] <dmick> and, it could, possibly
[6:11] <fghaas> rbd rm, not necessarily immediately preceded by rbd create
[6:16] <fghaas> hmmm. I wonder how I recover from this short term, though. rbd rm kills the osd, and attempting to manually kill the rados objects from the rbd pool returns -EBUSY
[6:17] <fghaas> ... except if you wait a minute, and then the manual rados rm suddenly does work
[6:17] <fghaas> color me puzzled
[6:17] <dmick> yjsy
[6:18] <dmick> sorry. that's probably the outstanding watch on the header object
[6:18] <fghaas> ok. at any rate, I've now completely pruned my rbd pool
[6:18] <dmick> but yes, that's very odd. Please file a tracker issue
[6:19] <fghaas> then created a new image, tried to delete it immediately after ... same error
[6:19] <dmick> if you can get a corefile from the daemon that would be awesome
[6:20] <fghaas> I first need to get this tutorial into a useful condition, but yes, considering how easy it is to reproduce this I can do that
[6:22] <fghaas> still, that's two pretty massive rbd bugs in 1 hour... not so sure I like this :)
[6:23] <dmick> ah. CLS_LOG(), the root of the problem, always does vsnprintf() before checking the level, so you can't stop this by turning down the level.
[6:23] <fghaas> so you're saying it all boils down to a logging function?
[6:23] <dmick> it looks from that backtrace like it got a segv inside printf, yes
[6:24] <fghaas> um. any workaround?
[6:24] <dmick> and I can imagine that might be a 32-bit issue; this is a %d with a 64-bit value. : (
[6:24] <dmick> no, that's what I was saying; if cls_log would check the level first you could turn the log level down (in fact this message is a level-20 message so wouldn't print anyway, default level is 5)
[6:25] <dmick> but since it always does the printf first, that won't help
[6:26] <dmick> so the bug *might* be rbd rm on 32-bit OSD dies in printf
[6:26] <dmick> I'll see if I can't set up a 32-bit toy cluster and find out
[6:39] <xiaoxi> can we use mkcephfs for part of the cluster? say a single node?
[6:42] <dmick> xiaoxi: not sure. It looks like you can do the "init daemons" bit on just the local node
[6:50] <dmick> fghaas: interesting. I'm looking at gitbuilder output that hasn't been updated since June. Did you build your own?
[6:52] * yoshi (~yoshi@p6124-ipngn1401marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[7:10] * qwerty0 (~chatzilla@bb121-7-98-251.singnet.com.sg) has joined #ceph
[7:20] <qwerty0> say a disk in my ceph cluster starts to get faulty and I want to remove it. What's the best way to handle it?
[7:20] <qwerty0> Apparently, the doc at http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing-osds-manual says that I should:
[7:20] <qwerty0> 1) ceph osd out {osd-num} 2) /etc/init.d/ceph stop osd.{osd-num}
[7:20] <qwerty0> My question is once the osd is "out", and osd.num stopped, can I unmount and remove the disk directly? or will the re-balance operation read from that faulty disk?
[7:23] <qwerty0> (in which case I would have to wait for the re-balance operation to complete before unmounting and removing the disk)
[7:25] <dmick> fghaas: reproduced. doh.
[7:25] <fghaas> dmick: thanks :)
[7:25] <dmick> fghaas: did you file an issue?
[7:25] <dmick> if not I will
[7:26] <dmick> qwerty0: assuming you have enough osds so there are no lost PGs
[7:27] <dmick> you can shoot an OSD through the disk and set up a new one, and it will backfill to reproduce the missing data
[7:27] <qwerty0> dmick: understood. this for a semi large install, so yes, when one disk gets faulty, it should be fine.
[7:28] <qwerty0> dmick: but what ceph procedure should I follow to take that disk off line then put a new one in?
[7:28] <dmick> http://ceph.com/docs/master/rados/operations/add-or-rm-osds/
[7:29] <dmick> unless you're working with ceph-deploy; not many are at the moment
[7:29] <dmick> if you have the *chance* to let data migrate off when removing an OSD, that's obviously easier for the cluster to deal with
[7:30] <dmick> but if you don't, things still work as long as there's at least one replica (or, well, min_size replicas)
[7:30] <qwerty0> dmick: yes, that's the link I sent earlier. it doesn't say when it's safe for me to unmount and physically remove the disk..
[7:30] <dmick> it's just describing the clean removal
[7:30] <dmick> unclean removal skips a few steps :) but the add is the same
[7:31] <qwerty0> dmick: ok. so if there are enough replicas, I can just cleanly set the osd to out. then unmount and remove my disk?
[7:31] <dmick> should work, yes. even just killing the proc, or just losing network, should work
[7:31] <dmick> that's what the redundancy is all about
[7:32] <qwerty0> so when I add back the osd with the new disk, it will just rebuild that disk, with replicas from elsewhere.
[7:32] <dmick> yep
[7:32] <qwerty0> dmick: thanks a lot. I'll get to it then. have a nice day.
[7:33] <dmick> gl. come back if there are probs
[7:33] <qwerty0> dmick: will do. thanks.
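The clean-removal sequence from the docs page linked above, condensed into one sketch. The OSD id (12) and mount point are placeholders; with enough replicas, the disk can be unmounted as soon as the daemon is stopped, since backfill reads from the surviving copies:

```shell
# Clean OSD removal per the add-or-rm-osds docs; 12 is a placeholder id.
ceph osd out 12                     # start migrating data off this OSD
/etc/init.d/ceph stop osd.12        # stop the daemon
ceph osd crush remove osd.12        # drop it from the CRUSH map
ceph auth del osd.12                # remove its authentication key
ceph osd rm 12                      # remove it from the cluster
umount /var/lib/ceph/osd/ceph-12    # now the disk can be pulled (path assumed)
```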
[7:33] * qwerty0 (~chatzilla@bb121-7-98-251.singnet.com.sg) has left #ceph
[7:34] <dmick> fghaas: mapping an image and reading appears to work
[7:34] <dmick> writing too
[7:35] <fghaas> dmick: yeah. try creating a snapshot and then writing to the original mapped image with dd oflag=dsync, sync or direct
[7:35] <dmick> ok
[7:36] <dmick> direct fails
[7:37] <dmick> ok when I remove the snap
[7:38] <dmick> bad again with the snap present
[7:38] <dmick> at this point we would likely say "update your rbd.ko" typically
[7:38] <dmick> but I don't know about this specific bug
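The reproducer fghaas describes boils down to three steps; a sketch with placeholder names, run against an image mapped with the 3.2 kernel client:

```shell
# Reproducer sketch for the write-fails-while-a-snapshot-exists bug.
# "test2" and /dev/rbd0 are placeholders; per the log, dsync/sync/direct
# writes all fail while the snapshot exists and work again once removed.
rbd create test2 --size 1024
rbd map test2                        # assumed to appear as /dev/rbd0
rbd snap create test2@snap1          # once a snapshot exists...
dd if=/dev/zero of=/dev/rbd0 bs=4k count=1 oflag=direct   # ...this fails
rbd snap rm test2@snap1              # remove the snap; writes succeed again
```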
[7:38] <dmick> btw: I'm assuming the answer is "no, you didn't file an issue", so I will
[7:39] <fghaas> dmick: thanks for confirming, says sage (who is about 3 feet away from me :) )
[7:39] <dmick> hi sage :)
[7:41] <dmick> ask him if he thinks cls_log() should check should_gather() before vsnprintf'ing :)
[7:41] * fghaas (~florian@ Quit (Quit: Leaving.)
[7:44] * fghaas (~florian@ has joined #ceph
[7:44] * sagelap (~sage@ has joined #ceph
[7:59] * fghaas (~florian@ Quit (Quit: Leaving.)
[7:59] * fghaas (~florian@ has joined #ceph
[7:59] <fghaas> dmick: can do :)
[8:03] <dmick> already did :)
[8:25] * scuttlemonkey_ (~scuttlemo@ Quit (Quit: This computer has gone to sleep)
[8:26] * fghaas (~florian@ Quit (Quit: Leaving.)
[8:42] * fghaas (~florian@ has joined #ceph
[8:55] * yoshi (~yoshi@p6124-ipngn1401marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[9:01] * sleinen (~Adium@2001:620:0:26:5da:4edf:8036:c44f) has joined #ceph
[9:06] * yoshi (~yoshi@EM117-55-68-131.emobile.ad.jp) has joined #ceph
[9:09] * yoshi_ (~yoshi@p6124-ipngn1401marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[9:11] * low (~low@ has joined #ceph
[9:11] * BManojlovic (~steki@ has joined #ceph
[9:13] * yoshi (~yoshi@EM117-55-68-131.emobile.ad.jp) Quit (Read error: Connection reset by peer)
[9:15] * yoshi_ (~yoshi@p6124-ipngn1401marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[9:15] <Kioob`Taff> Hi
[9:17] <Kioob`Taff> I would like to remove an OSD (disk failure) for a few hours: what should I do? Just stop the OSD process, or should I reweight it?
[9:17] <absynth_47215> if it is completely broken, you can mark it as lost
[9:17] <Kioob`Taff> yes
[9:18] <Kioob`Taff> it will be replaced by a new one
[9:18] <absynth_47215> then i think you would have to mark it lost, but i might be mistaken
[9:19] <Kioob`Taff> ok, thanks, I will read about that
[9:19] <Kioob`Taff> in the doc : « Alternatively, if there is a catastrophic failure of osd.1 (e.g., disk failure), we can tell the cluster that it is lost and to cope as best it can. »
[9:19] <Kioob`Taff> (in http://ceph.com/docs/master/rados/operations/troubleshooting-osd/ )
[9:20] <Kioob`Taff> and: « Important: This is dangerous in that the cluster cannot guarantee that the other copies of the data are consistent and up to date. »
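[Marking a dead OSD lost, per the troubleshooting doc quoted above, is a single command; osd id 1 here follows the doc's example and should be the actually-failed OSD.]

```shell
# Tell the cluster osd.1 is gone for good; the safety flag is required
# because, as the doc warns, remaining copies may not be fully up to date:
ceph osd lost 1 --yes-i-really-mean-it
```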
[9:20] * fghaas (~florian@ Quit (Quit: Leaving.)
[9:21] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[9:22] <topro> qwerty0: there is a wiki page http://ceph.com/w/index.php?title=Replacing_a_failed_disk/OSD&oldid=4254
[9:23] <Kioob`Taff> oh thanks
[9:26] * fghaas (~florian@ has joined #ceph
[9:26] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[9:32] <Kioob`Taff> so... it was a bad idea
[9:33] <Kioob`Taff> 16 active+degraded+wait_backfill, 28 active+degraded+backfilling, 16 active+degraded+remapped+wait_backfill, 7 active+degraded+remapped+backfilling →→ cluster not usable
[9:33] <Kioob`Taff> it's *one* OSD out of 40... and everything is down... great
[9:35] * seaturtle (~Adium@2601:9:5780:7c:79fc:8ebb:103b:8f59) has joined #ceph
[9:38] * leseb (~leseb@mx00.stone-it.com) has joined #ceph
[9:44] * Morg (d4438402@ircip1.mibbit.com) has joined #ceph
[9:50] * ScOut3R (~ScOut3R@rock.adverticum.com) has joined #ceph
[9:52] * ScOut3R_ (~ScOut3R@ has joined #ceph
[9:55] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[9:58] * ScOut3R (~ScOut3R@rock.adverticum.com) Quit (Ping timeout: 480 seconds)
[10:02] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[10:02] * Steki (~steki@46-172-222-85.adsl.verat.net) has joined #ceph
[10:05] * xiaoxi (~xiaoxiche@jfdmzpr06-ext.jf.intel.com) Quit (Ping timeout: 480 seconds)
[10:20] * Steki (~steki@46-172-222-85.adsl.verat.net) Quit (Remote host closed the connection)
[10:22] * LeaChim (~LeaChim@027ee384.bb.sky.com) has joined #ceph
[10:25] * sagelap (~sage@ Quit (Ping timeout: 480 seconds)
[10:30] * fghaas (~florian@ Quit (Quit: Leaving.)
[10:31] * loicd (~loic@lvs-gateway1.teclib.net) has joined #ceph
[10:32] <loicd> good morning
[10:34] <jksM> how can I get ceph to move stored objects away from a specific osd?
[10:34] <jksM> I have tried marking it out, which did the trick for me earlier, but now the osd is out and the cluster has rebalanced, yet there is still a lot of data on the drive
[10:40] <Kioob`Taff> jksM: reweight it to 0, no ?
[10:41] <jksM> Kioob`Taff, it is reweighted to 0, I'm afraid... happened automatically when I put it out
[10:41] <Kioob`Taff> ok
[10:41] <jksM> but if I do a "df", I still see 1,4 TB of data on the drive
[10:42] <jksM> all pgs are in active+clean state, so it has stopped rebalancing
[10:42] <Kioob`Taff> well, I'm not sure ceph will «cleanup» allocated data
[10:43] <jksM> it did that the last time I tried out'ing an osd... but it might have been coincidental
[10:43] * Robe (robe@amd.co.at) Quit (Server closed connection)
[10:43] * Robe (robe@amd.co.at) has joined #ceph
[10:43] <jksM> I really would like to clean it up, because I want to tarball the osd directory, replace the drive and unpack the files again after replacing the drive
[10:43] <jksM> which would be easy if it had 10 GB of data on it... but not so much when it is 1,4 TB ;-)
[10:45] <Kioob`Taff> last time I asked, it was a very bad idea, because of xattr
[10:46] <Kioob`Taff> format, then let ceph backfill data
[10:46] <jksM> ah, sounds reasonable... so it would actually be better to "burn" the osd and start over?
[10:46] <jksM> it's just that I wanted to keep the osd numbering "neat"... and I cannot find a way to reuse an osd number
[10:46] <jksM> so I end up having osd numbers scattered all over the place
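[Reusing an OSD number, as jksM wants, does work if the old OSD is removed completely: `ceph osd create` hands out the lowest free id. A sketch, with N standing in for the failed OSD's id:]

```shell
# Remove the old OSD entirely, freeing its id:
ceph osd out N
ceph osd crush remove osd.N
ceph auth del osd.N
ceph osd rm N

# After formatting/replacing the disk, allocate a new id -
# the lowest free one, i.e. N again, keeping the numbering compact:
ceph osd create
```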
[10:47] * dosaboy (~gizmo@faun.canonical.com) has joined #ceph
[10:54] * Meyer__ (meyer@c64.org) Quit (Server closed connection)
[10:54] * Meyer__ (meyer@c64.org) has joined #ceph
[10:55] * morpheus__ (~morpheus@foo.morphhome.net) Quit (Server closed connection)
[11:07] * dosaboy (~gizmo@faun.canonical.com) Quit (Remote host closed the connection)
[11:07] * dosaboy (~gizmo@faun.canonical.com) has joined #ceph
[11:08] <joao> paravoid, by default, the monitors keep some 500 epochs of the osdmap; maybe more depending on the pgmap's last clean epoch
[11:08] * Morg (d4438402@ircip1.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[11:08] <joao> it ought to be trimmed every now and then though
[11:09] * cclien_ (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) has joined #ceph
[11:10] * cclien (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) Quit (Read error: Connection reset by peer)
[11:16] * scuttlemonkey (~scuttlemo@ has joined #ceph
[11:16] * ChanServ sets mode +o scuttlemonkey
[11:17] * sagelap (~sage@diaman3.lnk.telstra.net) has joined #ceph
[11:19] <Kioob`Taff> joao: I have a recovery problem
[11:19] <Kioob`Taff> the current status is « 5 pgs backfilling; 74 pgs degraded; 2 pgs recovering; 13 pgs recovery_wait; 88 pgs stuck unclean; recovery 101974/2576943 degraded (3.957%) »
[11:19] <Kioob`Taff> and I don't see any change
[11:19] <Kioob`Taff> it seems «frozen»
[11:20] <Kioob`Taff> how can I see what's happening ?
[11:28] <Kioob`Taff> ok, it's working, but it's slowwww
[11:29] * eternaleye (~eternaley@tchaikovsky.exherbo.org) Quit (Server closed connection)
[11:29] * eternaleye (~eternaley@tchaikovsky.exherbo.org) has joined #ceph
[11:32] * Anticime1 is now known as Anticimex
[11:33] <Kioob`Taff> any idea why recovering is so slow, and why cluster is not usable during recovering ?
[11:33] * liiwi (liiwi@idle.fi) Quit (Server closed connection)
[11:33] * liiwi (liiwi@idle.fi) has joined #ceph
[11:34] <Kioob`Taff> all PGs have replicas, so... it should stay running
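[Two things worth sketching for Kioob`Taff's slow-recovery question: how to see what is happening, and how to throttle backfill so client I/O stays usable. The values below are examples, not recommendations; injectargs applies them at runtime.]

```shell
# See which PGs are stuck and why:
ceph health detail
ceph pg dump_stuck unclean

# Reduce concurrent backfill/recovery work per OSD so client
# requests are not starved (takes effect immediately):
ceph tell osd.\* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
```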
[11:35] * cclien_ (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) Quit (resistance.oftc.net osmotic.oftc.net)
[11:35] * BManojlovic (~steki@ Quit (resistance.oftc.net osmotic.oftc.net)
[11:35] * sagewk (~sage@ Quit (resistance.oftc.net osmotic.oftc.net)
[11:35] * partner (joonas@ajaton.net) Quit (resistance.oftc.net osmotic.oftc.net)
[11:35] * ircolle (~ircolle@c-67-172-132-164.hsd1.co.comcast.net) Quit (resistance.oftc.net osmotic.oftc.net)
[11:35] * slang1 (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (resistance.oftc.net osmotic.oftc.net)
[11:35] * partner (joonas@ajaton.net) has joined #ceph
[11:36] * cclien (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) has joined #ceph
[11:36] * ircolle (~ircolle@c-67-172-132-164.hsd1.co.comcast.net) has joined #ceph
[11:36] * BManojlovic (~steki@ has joined #ceph
[11:36] * slang1 (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[11:37] * Robe (robe@amd.co.at) Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * ScOut3R_ (~ScOut3R@ Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * low (~low@ Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * barnes_ (barnes@bissa.eu) Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * scheuk (~scheuk@ Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * BillK (~BillK@124-169-233-28.dyn.iinet.net.au) Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * rtek (~sjaak@empfindlichkeit.nl) Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * Kioob (~kioob@luuna.daevel.fr) Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * jackhill (jackhill@pilot.trilug.org) Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * xdeller (~xdeller@broadband-77-37-224-84.nationalcablenetworks.ru) Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * chftosf (uid7988@hillingdon.irccloud.com) Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * MrNPP (~mr.npp@0001b097.user.oftc.net) Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * Zethrok (~martin@ Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * NaioN (stefan@andor.naion.nl) Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * thelan_ (~thelan@paris.servme.fr) Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * andret (~andre@pcandre.nine.ch) Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * Gugge-47527 (gugge@kriminel.dk) Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * absynth_47215 (~absynth@irc.absynth.de) Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * bstaz (~bstaz@ext-itdev.tech-corps.com) Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * jamespage (~jamespage@tobermory.gromper.net) Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * phantomcircuit (~phantomci@covertinferno.org) Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * wonko_be_ (bernard@november.openminds.be) Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * l3akage (~l3akage@martinpoppen.de) Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * michaeltchapman (~mxc900@ Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * jochen (~jochen@laevar.de) Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * `10 (~10@juke.fm) Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * Lennie`away (~leen@lennie-1-pt.tunnel.tserv11.ams1.ipv6.he.net) Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * nwl (~levine@atticus.yoyo.org) Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * cclien (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) Quit (charon.oftc.net reticulum.oftc.net)
[11:37] * jochen (~jochen@laevar.de) has joined #ceph
[11:37] * absynth_47215 (~absynth@2a00:12c0:1:65:1:666:1:6667) has joined #ceph
[11:37] * nwl (~levine@atticus.yoyo.org) has joined #ceph
[11:37] * thelan (~thelan@paris.servme.fr) has joined #ceph
[11:37] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[11:37] * Lennie`away (~leen@lennie-1-pt.tunnel.tserv11.ams1.ipv6.he.net) has joined #ceph
[11:37] * andret (~andre@pcandre.nine.ch) has joined #ceph
[11:37] * `10 (~10@juke.fm) has joined #ceph
[11:37] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) has joined #ceph
[11:37] * jamespage (~jamespage@tobermory.gromper.net) has joined #ceph
[11:37] * low (~low@ has joined #ceph
[11:37] * wonko_be (bernard@november.openminds.be) has joined #ceph
[11:37] * BillK (~BillK@124-169-233-28.dyn.iinet.net.au) has joined #ceph
[11:37] * rtek (~sjaak@empfindlichkeit.nl) has joined #ceph
[11:37] * barnes (barnes@bissa.eu) has joined #ceph
[11:37] * MrNPP (~mr.npp@ has joined #ceph
[11:37] * scheuk (~scheuk@ has joined #ceph
[11:37] * asadpanda (~asadpanda@ Quit (Server closed connection)
[11:37] * ScOut3R (~ScOut3R@ has joined #ceph
[11:37] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[11:37] * asadpanda (~asadpanda@ has joined #ceph
[11:37] * Kioob (~kioob@luuna.daevel.fr) has joined #ceph
[11:37] * Zethrok (~martin@ has joined #ceph
[11:37] * Gugge-47527 (gugge@kriminel.dk) has joined #ceph
[11:38] * michaeltchapman (~mxc900@ has joined #ceph
[11:38] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[11:38] * phantomcircuit (~phantomci@covertinferno.org) has joined #ceph
[11:38] * chftosf (uid7988@hillingdon.irccloud.com) has joined #ceph
[11:38] * xdeller (~xdeller@broadband-77-37-224-84.nationalcablenetworks.ru) has joined #ceph
[11:38] * bstaz (~bstaz@ext-itdev.tech-corps.com) has joined #ceph
[11:38] * Robe (robe@amd.co.at) has joined #ceph
[11:38] * jackhill (jackhill@pilot.trilug.org) has joined #ceph
[11:38] * NaioN (stefan@andor.naion.nl) has joined #ceph
[11:38] * l3akage (~l3akage@l3akage.de) has joined #ceph
[11:38] * cclien (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) has joined #ceph
[11:39] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[11:39] * KindOne (KindOne@h113.42.28.71.dynamic.ip.windstream.net) Quit (Ping timeout: 480 seconds)
[11:40] * sagewk (~sage@ has joined #ceph
[11:40] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) Quit (Server closed connection)
[11:40] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) has joined #ceph
[11:40] * sleinen (~Adium@2001:620:0:26:5da:4edf:8036:c44f) Quit (Quit: Leaving.)
[11:40] * sleinen (~Adium@ has joined #ceph
[11:40] * LeaChim (~LeaChim@027ee384.bb.sky.com) Quit (resistance.oftc.net larich.oftc.net)
[11:40] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (resistance.oftc.net larich.oftc.net)
[11:40] * gohko (~gohko@natter.interq.or.jp) Quit (resistance.oftc.net larich.oftc.net)
[11:40] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) Quit (resistance.oftc.net larich.oftc.net)
[11:40] * topro (~topro@host-62-245-142-50.customer.m-online.net) Quit (resistance.oftc.net larich.oftc.net)
[11:40] * snaff (~z@81-86-160-226.dsl.pipex.com) Quit (resistance.oftc.net larich.oftc.net)
[11:40] * ShaunR (~ShaunR@staff.ndchost.com) Quit (resistance.oftc.net larich.oftc.net)
[11:40] * kylehutson (~kylehutso@dhcp231-11.user.cis.ksu.edu) Quit (resistance.oftc.net larich.oftc.net)
[11:40] * wer (~wer@wer.youfarted.net) Quit (resistance.oftc.net larich.oftc.net)
[11:40] * Kdecherf (~kdecherf@shaolan.kdecherf.com) Quit (resistance.oftc.net larich.oftc.net)
[11:40] * HauM1 (~HauM1@login.univie.ac.at) Quit (resistance.oftc.net larich.oftc.net)
[11:40] * Active2 (~matthijs@callisto.vps.ar-ix.net) Quit (resistance.oftc.net larich.oftc.net)
[11:40] * Psi-jack (~psi-jack@psi-jack.user.oftc.net) Quit (resistance.oftc.net larich.oftc.net)
[11:40] * cephalobot (~ceph@ds2390.dreamservers.com) Quit (resistance.oftc.net larich.oftc.net)
[11:40] * markl (~mark@tpsit.com) Quit (resistance.oftc.net larich.oftc.net)
[11:40] * dec (~dec@ec2-54-251-62-253.ap-southeast-1.compute.amazonaws.com) Quit (resistance.oftc.net larich.oftc.net)
[11:40] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) Quit (resistance.oftc.net larich.oftc.net)
[11:40] * jefferai (~quassel@quassel.jefferai.org) Quit (resistance.oftc.net larich.oftc.net)
[11:40] * sileht (~sileht@sileht.net) Quit (resistance.oftc.net larich.oftc.net)
[11:40] * vhasi (vhasi@vha.si) Quit (resistance.oftc.net larich.oftc.net)
[11:40] * vhasi (vhasi@vha.si) has joined #ceph
[11:40] * Active2 (~matthijs@callisto.vps.ar-ix.net) has joined #ceph
[11:40] * HauM1 (~HauM1@login.univie.ac.at) has joined #ceph
[11:40] * ShaunR (~ShaunR@staff.ndchost.com) has joined #ceph
[11:40] * markl (~mark@tpsit.com) has joined #ceph
[11:41] * jefferai (~quassel@quassel.jefferai.org) has joined #ceph
[11:41] * dec (~dec@ec2-54-251-62-253.ap-southeast-1.compute.amazonaws.com) has joined #ceph
[11:41] * snaff (~z@81-86-160-226.dsl.pipex.com) has joined #ceph
[11:41] * topro (~topro@host-62-245-142-50.customer.m-online.net) has joined #ceph
[11:41] * Psi-jack (~psi-jack@yggdrasil.hostdruids.com) has joined #ceph
[11:41] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[11:41] * wer (~wer@wer.youfarted.net) has joined #ceph
[11:41] * LeaChim (~LeaChim@027ee384.bb.sky.com) has joined #ceph
[11:41] * gohko (~gohko@natter.interq.or.jp) has joined #ceph
[11:41] * Kdecherf (~kdecherf@shaolan.kdecherf.com) has joined #ceph
[11:41] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) has joined #ceph
[11:41] * sileht (~sileht@sileht.net) has joined #ceph
[11:43] * sleinen1 (~Adium@ has joined #ceph
[11:44] * cephalobot (~ceph@ds2390.dreamservers.com) has joined #ceph
[11:44] * sleinen2 (~Adium@2001:620:0:25:18a9:9ce:8f42:e7d8) has joined #ceph
[11:45] -coulomb.oftc.net- Server Terminating: received signal SIGTERM
[11:45] * Disconnected.
[11:45] -reticulum.oftc.net- *** Looking up your hostname...
[11:45] -reticulum.oftc.net- *** Checking Ident
[11:45] -reticulum.oftc.net- *** No Ident response
[11:45] -reticulum.oftc.net- *** Found your hostname

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.