#ceph IRC Log

IRC Log for 2012-07-27

Timestamps are in GMT/BST.

[0:01] * andrewbogott (~andrewbog@50-93-251-66.fttp.usinternet.com) has joined #ceph
[0:04] * gregaf (~Adium@2607:f298:a:607:706a:894f:625f:b0e1) Quit (Quit: Leaving.)
[0:05] * asadpanda (~asadpanda@2001:470:c09d:0:20c:29ff:fe4e:a66) Quit (Read error: No route to host)
[0:05] * gregaf (~Adium@2607:f298:a:607:d8e6:7160:f1a4:d864) has joined #ceph
[0:07] * asadpanda (~asadpanda@2001:470:c09d:0:20c:29ff:fe4e:a66) has joined #ceph
[0:23] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[0:25] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Quit: Leaving)
[0:25] * gregaf (~Adium@2607:f298:a:607:d8e6:7160:f1a4:d864) Quit (Quit: Leaving.)
[0:31] * gregaf (~Adium@2607:f298:a:607:d8e6:7160:f1a4:d864) has joined #ceph
[0:35] * aliguori (~anthony@cpe-70-123-145-39.austin.res.rr.com) has joined #ceph
[0:41] * s[X] (~sX]@eth589.qld.adsl.internode.on.net) Quit (Remote host closed the connection)
[0:41] * s[X] (~sX]@eth589.qld.adsl.internode.on.net) has joined #ceph
[1:08] * tnt (~tnt@93.56-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[1:20] * andrewbogott (~andrewbog@50-93-251-66.fttp.usinternet.com) Quit (Quit: andrewbogott)
[1:21] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[1:25] * loicd (~loic@magenta.dachary.org) has joined #ceph
[1:25] * glowell (~glowell@141.142.134.93) Quit (Remote host closed the connection)
[1:29] * glowell (~glowell@141.142.134.93) has joined #ceph
[1:32] * duerF (~tommi@31-209-240-108.dsl.dynamic.simnet.is) has joined #ceph
[1:36] * BManojlovic (~steki@212.200.241.106) Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:40] * yehudasa (~yehudasa@2607:f298:a:607:4ceb:23f0:731b:359c) has joined #ceph
[1:42] * johnl (~johnl@2a02:1348:14c:1720:a9a0:d515:3fc9:8dfb) has joined #ceph
[1:53] * yoshi (~yoshi@p22043-ipngn1701marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[1:56] * danieagle (~Daniel@177.43.213.15) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[2:05] * mosu001 (~mosu001@en-439-0331-001.esc.auckland.ac.nz) has joined #ceph
[2:06] <mosu001> Hi everyone, I seem to have a hanging ceph system...
[2:07] <mosu001> At least ceph -w doesn't show much changing, is there an easy way to check if the system is "making progress", i.e., restoring or...?
[2:09] <mosu001> ceph -s gives me
[2:09] <mosu001> health HEALTH_WARN 460 pgs peering; 460 pgs stuck inactive; 460 pgs stuck unclean
[2:09] <mosu001> monmap e1: 2 mons at {0=10.19.99.123:6789/0,1=10.19.99.124:6789/0}, election epoch 2, quorum 0,1 0,1
[2:09] <mosu001> osdmap e12: 12 osds: 12 up, 12 in
[2:09] <mosu001> pgmap v458: 2304 pgs: 1844 active+clean, 460 peering; 6168 bytes data, 24036 MB used, 21416 GB / 22340 GB avail
[2:09] <mosu001> mdsmap e7: 1/1/1 up {0=1=up:creating}, 1 up:standby
[2:17] * Qu310 (~qgrasso@120.88.69.209) has joined #ceph
[2:22] <mikeryan> ceph pg dump
[2:23] <mikeryan> should give you the state of all the PGs
[2:23] <mikeryan> mosu001:
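For anyone following along, a rough sketch of how those stuck PGs could be inspected from the CLI; the grep pattern and the PG id below are illustrative placeholders, not values from mosu001's cluster:

    # one-line summary of PG states
    ceph pg stat
    # full dump, filtered to PGs that are still peering
    ceph pg dump | grep peering
    # ask one stuck PG what it is waiting on (PG id is only an example)
    ceph pg 1.43e query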
[2:23] * Qten (~qgrasso@ip-121-0-1-110.static.dsl.onqcomms.net) Quit (Ping timeout: 480 seconds)
[2:30] * Leseb_ (~Leseb@62.233.37.122) has joined #ceph
[2:30] * lofejndif (~lsqavnbok@659AABY2H.tor-irc.dnsbl.oftc.net) has joined #ceph
[2:35] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Ping timeout: 480 seconds)
[2:35] * Leseb_ is now known as Leseb
[2:45] * Leseb (~Leseb@62.233.37.122) Quit (Quit: Leseb)
[2:58] <mosu001> mikeryan: thanks, many pgs peering, most with no scrub_stamp... My system seems to get stuck peering?
[2:59] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[3:00] <mosu001> mikeryan: I should probably mention that I am "erasing" the system, so multiple pgs may have been corrupted, maybe split brain trying to decide who has the master copy?
[3:12] * Ryan_Lane (~Adium@216.38.130.166) Quit (Quit: Leaving.)
[3:14] * mib_77v4gi (3ad6e9d6@ircip1.mibbit.com) has joined #ceph
[3:16] <mib_77v4gi> hi, can anybody help me with this issue: mdsmap e657: 2/2/2 up {0=1=up:resolve(laggy or crashed),1=0=up:rejoin}
[3:18] <mib_77v4gi> what can i do?
[3:22] <gregaf> mib_77v4gi: are all your PGs active?
[3:22] * Qu310 (~qgrasso@120.88.69.209) Quit (Read error: Connection reset by peer)
[3:22] <gregaf> if they are, you've probably hit an MDS bug -- the POSIX filesystem isn't production-ready yet :(
[3:22] <gregaf> but you should look and see if both your MDS daemons are running, and if they aren't, grab the backtrace out of the logs and save any core dumps for inspection
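A rough sketch of the checks gregaf describes, assuming the stock log location under /var/log/ceph; the paths and grep patterns are assumptions, not from this conversation:

    # are both ceph-mds daemons still alive?
    ps aux | grep [c]eph-mds
    # look for an assertion failure or a crash backtrace in the MDS logs
    grep -A 20 -E 'FAILED assert|Caught signal' /var/log/ceph/*mds*.log
    # keep any core files around for later inspection
    ls -l core* /var/log/ceph/core* 2>/dev/null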
[3:22] <mib_77v4gi> my god!
[3:23] <mib_77v4gi> yes, the mds is running, but i can't use it
[3:23] <mib_77v4gi> health HEALTH_WARN mds 1 is laggy
[3:23] <gregaf> are they both running?
[3:23] <mib_77v4gi> yes
[3:23] <gregaf> do you have any logging enabled?
[3:25] <mib_77v4gi> yes
[3:25] <gregaf> what's the end of the output look like?
[3:27] <mib_77v4gi> mds.0 log:2012-07-27 09:18:34.272592 2b38329ee700 1 mds.1.5 handle_mds_map state change up:reconnect --> up:rejoin
[3:27] <gregaf> okay, that's not much logging :)
[3:27] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[3:28] <gregaf> try adding "debug mds = 20" to the config file in the MDS section and restarting that daemon and seeing if it gets stuck again
[3:28] <mib_77v4gi> ok
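A sketch of what that change could look like, assuming the daemon in question is mds.0 and the usual /etc/ceph/ceph.conf layout (the extra "debug ms" line is an optional assumption, not something gregaf asked for):

    [mds]
        debug mds = 20
        debug ms = 1    ; optional: also log messenger traffic

Then restart just that daemon, e.g. with the sysvinit script (the exact restart command depends on your init setup):

    /etc/init.d/ceph restart mds.0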
[3:28] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[3:30] <mib_77v4gi> where can i paste long text? can i send the log file?
[3:32] <gregaf> pastebin if it'll fit
[3:32] <gregaf> otherwise you can host it somewhere or send it to me (greg@inktank.com)
[3:33] <mib_77v4gi> http://mibpaste.com/UpZ9jr
[3:33] <gregaf> I can't promise to fix it right now but I'd like to find out if it's something we've seen
[3:34] * lofejndif (~lsqavnbok@659AABY2H.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[3:34] <mib_77v4gi> great! thks
[3:34] <gregaf> mib_77v4gi: can you run "ceph -s" and paste the output again?
[3:35] <mib_77v4gi> http://mibpaste.com/N9Ljzc
[3:36] <gregaf> okay, so the one you sent looks to be mds.0 in your config file, which is mds.1 to the system -- can you check that the other one is running, and paste its log?
[3:37] <mib_77v4gi> the whole log? it's so long!
[3:38] <gregaf> well the log you sent me matches the mds which is in state "rejoin", but the thing that's holding it up is that the other MDS is laggy or crashed (which almost always means crashed)
[3:39] <gregaf> are you sure the process is still running?
[3:42] <mib_77v4gi> http://mibpaste.com/mDDYtG
[3:42] <mib_77v4gi> yes i have sent the log to you
[3:43] <gregaf> mib_77v4gi: hmm, that looks like normal operation -- is it still going forward?
[3:44] <mib_77v4gi> you want see more log?
[3:44] <gregaf> actually that looks like it's done replaying and has started doing real work
[3:44] * aliguori (~anthony@cpe-70-123-145-39.austin.res.rr.com) Quit (Remote host closed the connection)
[3:45] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) Quit (Quit: Leaving.)
[3:45] <mib_77v4gi> but i can't use it from a client that uses mount.ceph
[3:45] <gregaf> I don't know what to tell you :/
[3:46] <gregaf> if you want to bundle up all the logs and send them to me I'll see if I can divine some meaning from them
[3:46] <gregaf> but I have to head home now
[3:46] <mib_77v4gi> thks you very much :P
[3:46] <gregaf> welcome :)
[3:46] <mib_77v4gi> ok i try send the log for you
[3:46] <gregaf> (such as it is)
[3:48] * ajm (~ajm@adam.gs) has left #ceph
[3:48] <mib_77v4gi> # du -sh mds.1.log 1.4G mds.1.log
[3:49] <mib_77v4gi> :(
[3:50] <gregaf> mib_77v4gi: they should compress well -- if you'd rather upload them somewhere you can create a bug on the tracker (tracker.newdream.net) and attach them to that
[3:53] * glowell (~glowell@141.142.134.93) Quit (Remote host closed the connection)
[3:53] <mib_77v4gi> ok, i will upload the log to the host for you
[3:54] <mib_77v4gi> thks, the tracker confuses me! i don't know how to use it :(
[3:56] <gregaf> register as a new user; then create a "New issue" in the Ceph project
[3:56] <gregaf> really am off now, though
[3:58] <mib_77v4gi> ok i try
[4:03] <mib_77v4gi> http://ext.jump.verycloud.cn/mds.1.log.gz this is the log for mds.1, sorry it's such a basic web host :/
[4:03] <mib_77v4gi> thks you for help so much
[4:06] <nhm> gregaf: still working?
[4:06] <joao> I think he just left
[4:07] <joao> and so am I going to
[4:07] <joao> have a good night, #ceph
[4:07] <joao> o/
[4:07] <nhm> joao: good night
[4:07] <mib_77v4gi> yes it's still running :O
[4:08] <mib_77v4gi> ok, good night for you!:)
[4:09] <mib_77v4gi> sorry! i was wrong! :(
[4:14] * ryann (~chatzilla@216.81.130.180) has joined #ceph
[4:25] * renzhi (~renzhi@180.169.73.90) has joined #ceph
[4:32] * lxo (~aoliva@lxo.user.oftc.net) Quit (Read error: Connection reset by peer)
[4:32] * mib_77v4gi (3ad6e9d6@ircip1.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[4:35] <elder> Is dmick on vacation this week?
[4:35] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[4:36] * loicd (~loic@magenta.dachary.org) has joined #ceph
[4:44] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[4:44] <ryann> attempting cephfs /[mounted cephfs]/Folder set_layout --pool 6 (legit osd pool) and i get "Invalid argument" what am I missing?
[4:46] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[4:46] * loicd (~loic@magenta.dachary.org) has joined #ceph
[4:58] * chutzpah (~chutz@100.42.98.5) Quit (Quit: Leaving)
[4:59] * Guangliang (~glzhao@222.126.194.154) has joined #ceph
[4:59] * lxo (~aoliva@lxo.user.oftc.net) Quit (Read error: No route to host)
[4:59] * Guangliang (~glzhao@222.126.194.154) has left #ceph
[5:29] * deepsa (~deepsa@122.172.170.181) has joined #ceph
[5:31] * deepsa (~deepsa@122.172.170.181) Quit ()
[5:31] <nhm> elder: don't know
[5:31] <elder> Just haven't seen him on today.
[5:32] <nhm> elder: I think TV is on vacation, not sure about dmick
[5:33] <elder> http://developers.slashdot.org/story/12/07/26/1917223/hp-offers-free-access-to-openstack
[5:33] <elder> (I haven't read it. Seemed relevant.)
[5:35] <elder> Scanning the /. comments, people aren't impressed.
[5:36] * deepsa (~deepsa@122.172.170.181) has joined #ceph
[6:19] <mosu001> hi, everyone, I'm still having problems with my ceph system hanging and pgs getting stuck peering. I have left the system "doing its thing" for a while now and ceph -w gives me
[6:19] <mosu001> 2012-07-27 16:17:55.073954 osd.9 [WRN] 1 slow requests, 1 included below; oldest blocked for > 15845.834563 secs
[6:19] <mosu001> 2012-07-27 16:17:55.073960 osd.9 [WRN] slow request 15845.834563 seconds old, received at 2012-07-27 11:53:49.239356: osd_op(mds.0.2:15 604.00000000 [setxattr path (12),setxattr parent (39),tmapup 0~0] 1.43e85c95 RETRY) v4 currently delayed
[6:20] <mosu001> Ever since I updated to the latest version of ceph about 1 month ago I haven't been able to get my system back up and running...
[6:20] <mosu001> Anyone with any ideas?
[6:23] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[7:04] * deepsa (~deepsa@122.172.170.181) Quit (Quit: Computer has gone to sleep.)
[7:07] <mikeryan> mosu001: have you double checked that the osd's haven't crashed?
[7:08] <mosu001> What is the best way to check that?
[7:08] <mikeryan> uh, ps aux | grep ceph-osd
[7:08] <mosu001> If I stop ceph and restart it then I get a different number of pgs hanging and the slow request on a different osd
[7:09] <mosu001> All 12 OSDs ceph-osd processes are up and running...
[7:10] <mosu001> osd.9 still waiting around...
[7:12] <mikeryan> anything showing up in dmesg?
[7:12] <mikeryan> could be a general system health issue
[7:12] <mikeryan> i dunno, grasping at straws here
[7:12] <mikeryan> i've only been with the project about a month
[7:16] * deepsa (~deepsa@122.172.170.181) has joined #ceph
[7:18] <mosu001> mikeryan: no problem, I've been playing with Ceph for a while but am not a *nix person, so struggle when things go "haywire"
[7:19] <mikeryan> your best bet is to try again at around 10:30 AM PDT
[7:19] <mikeryan> in exactly 12 hours
[7:19] <mosu001> My set-up is 2 servers with 6 OSDs on each and mapping 1 replica to each
[7:19] <mosu001> dmesg on server 1 =
[7:19] <mosu001> [3215779.938005] SFW2-INext-DROP-DEFLT IN=bond0 OUT= MAC=00:25:90:18:14:80:00:30:48:ca:32:f2:08:00 SRC=10.19.99.124 DST=10.19.99.121 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=27989 DF PROTO=TCP SPT=46207 DPT=6810 WINDOW=14600 RES=0x00 SYN URGP=0 OPT (020405B40402080ABFF494250000000001030309)
[7:19] <mosu001> [3215798.217916] SFW2-INext-DROP-DEFLT IN=bond0 OUT= MAC=00:25:90:18:14:80:00:25:90:18:31:9a:08:00 SRC=10.19.99.122 DST=10.19.99.121 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=62601 DF PROTO=TCP SPT=53081 DPT=6816 WINDOW=14600 RES=0x00 SYN URGP=0 OPT (020405B40402080AC01618360000000001030309)
[7:19] <mosu001> dmesg on server 2 =
[7:19] <mosu001> [3201526.867407] btrfs: device fsid 48780e9b-a7d5-42d0-9ad9-517cf48fc851 devid 1 transid 28 /dev/sdb
[7:19] <mosu001> [3201526.867454] btrfs: device fsid 3640637d-b644-457d-b98d-d6994f14ca2f devid 1 transid 28 /dev/sdc
[7:19] <mosu001> [3201526.867490] btrfs: device fsid a1d6321c-5a57-4eea-89fc-2d98766167ac devid 1 transid 28 /dev/sdd
[7:19] <mosu001> [3201526.868080] btrfs: device fsid dc33fe7f-6de2-4e31-9595-572704e65bb4 devid 1 transid 4 /dev/sde
[7:19] <mosu001> [3201526.868348] btrfs: device fsid 17613962-cd4a-4051-a16f-db6a17b27fbc devid 1 transid 28 /dev/sdf
[7:19] <mosu001> [3201526.868406] btrfs: device fsid 6bf710b9-1ef1-4709-b733-7388d070c2c5 devid 1 transid 28 /dev/sdg
[7:19] <mosu001> [3201526.870376] btrfs: device fsid dc33fe7f-6de2-4e31-9595-572704e65bb4 devid 1 transid 4 /dev/sde
[7:20] <mosu001> [3201526.870924] btrfs: disk space caching is enabled
[7:20] <mikeryan> how's your free space looking on btrfs?
[7:20] <mosu001> I wonder if server 1 is having connection problems?
[7:20] <mosu001> Should be absolutely fine as there is nothing of value on the disks and nothing big, but I can check if you tell me how...!
[7:20] <mikeryan> there are some caveats
[7:20] <mikeryan> start with df
[7:21] <mikeryan> df -h output would be useful
[7:21] <mosu001> /dev/sdb 1.9T 2.0G 1.8T 1% /data/osd.11
[7:21] <mosu001> /dev/sdc 1.9T 2.0G 1.8T 1% /data/osd.12
[7:21] <mosu001> /dev/sdd 1.9T 2.0G 1.8T 1% /data/osd.13
[7:21] <mosu001> /dev/sde 1.9T 2.0G 1.8T 1% /data/osd.14
[7:21] <mosu001> /dev/sdf 1.9T 2.0G 1.8T 1% /data/osd.15
[7:21] <mosu001> /dev/sdg 1.9T 2.0G 1.8T 1% /data/osd.16
[7:21] <mosu001> ss1:~ #
[7:21] <mikeryan> yeah those guys are all nearly empty, shouldn't be an issue
[7:21] <mosu001> /dev/sdf 1.9T 2.0G 1.8T 1% /data/osd.25
[7:21] <mosu001> /dev/sdg 1.9T 2.0G 1.8T 1% /data/osd.26
[7:21] <mosu001> /dev/sdb 1.9T 2.0G 1.8T 1% /data/osd.21
[7:21] <mosu001> /dev/sdc 1.9T 2.0G 1.8T 1% /data/osd.22
[7:21] <mosu001> /dev/sdd 1.9T 2.0G 1.8T 1% /data/osd.23
[7:21] <mosu001> /dev/sde 1.9T 2.0G 1.8T 1% /data/osd.24
[7:21] <mosu001> ss2:~ #
[7:22] <mikeryan> lsb-release -c
[7:22] <mosu001> Do you know how to check the connectivity?
[7:22] <mikeryan> you could try pinging i suppose
[7:22] <mosu001> ping is fine
[7:22] <mosu001> Should I run lsb-release -c on servers?
[7:23] <mikeryan> yea, i want to see what version of ubuntu you've got rollin there
[7:23] <mikeryan> or at least what it thinks it is
[7:23] <mosu001> Using OpenSUSE
[7:23] <mikeryan> ah
[7:23] <mosu001> Get Codename: Asparagus
[7:23] <mikeryan> at least it's modern
[7:23] <mikeryan> nothing jumping out at me ...
[7:24] <mosu001> OpenSUSE 12.1
[7:25] <mikeryan> well, it's 10:30 PM local time
[7:25] <mikeryan> i doubt anyone from the team is going to be here
[7:26] <mosu001> OK, thanks mikeryan, I'll try again at a better time
[7:26] <mikeryan> yeah, sorry i couldn't help, best of luck
[7:26] <mosu001> No worries
[7:26] * mosu001 (~mosu001@en-439-0331-001.esc.auckland.ac.nz) Quit (Quit: Leaving)
[7:29] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[7:30] * loicd (~loic@magenta.dachary.org) has joined #ceph
[7:30] <lurbs> 5:30 pm on Friday in Auckland, no wonder he gave up so quickly.
[7:47] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[7:47] * loicd (~loic@magenta.dachary.org) has joined #ceph
[7:55] * danieagle (~Daniel@177.43.213.15) has joined #ceph
[8:07] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[8:07] * loicd (~loic@magenta.dachary.org) has joined #ceph
[8:08] * tnt (~tnt@93.56-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[8:18] * danieagle (~Daniel@177.43.213.15) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[8:38] * mosu001 (~mosu001@en-439-0331-001.esc.auckland.ac.nz) has joined #ceph
[8:39] <mosu001> mikeryan: if you are still there, it turns out my firewall was blocking ceph traffic
[8:39] <mosu001> Originally it was all turned off (experimental network behind external firewall) but when I updated OpenSuSE and ceph I turned one of them on....
[8:40] <mosu001> As a Linux newbie I never thought to check this until looking at dmesg (it had network related messages), so you helped a lot!
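For anyone hitting the same SFW2-INext-DROP-DEFLT messages, a sketch of opening the ports ceph uses between nodes; the OSD port range below is an assumption based on the DPT=6810/6816 entries in the dmesg output above (monitors listen on 6789, OSDs on ports from 6800 up):

    # openSUSE: /etc/sysconfig/SuSEfirewall2, then run: rcSuSEfirewall2 restart
    FW_SERVICES_EXT_TCP="6789 6800:6900"

    # or the rough equivalent with plain iptables
    iptables -A INPUT -p tcp --dport 6789 -j ACCEPT
    iptables -A INPUT -p tcp --dport 6800:6900 -j ACCEPT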
[8:44] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[8:51] * tnt (~tnt@93.56-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[8:55] * lynn_yudi (3ad6e9d6@ircip3.mibbit.com) has joined #ceph
[8:55] * renzhi (~renzhi@180.169.73.90) Quit (Quit: Leaving)
[8:57] * lynn_yudi (3ad6e9d6@ircip3.mibbit.com) Quit ()
[9:01] <mikeryan> mosu001: heh, i should have suggested that
[9:01] <mikeryan> just wasn't sure how to check on suse
[9:01] * lynn_yudi (3ad6e9d6@ircip1.mibbit.com) has joined #ceph
[9:02] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:02] * s[X] (~sX]@eth589.qld.adsl.internode.on.net) Quit (Remote host closed the connection)
[9:02] * lynn_yudi (3ad6e9d6@ircip1.mibbit.com) Quit ()
[9:03] * tnt (~tnt@office.intopix.com) has joined #ceph
[9:03] * lynn_yudi (3ad6e9d6@ircip4.mibbit.com) has joined #ceph
[9:05] <mikeryan> well, glad to help anyway!
[9:08] <lynn_yudi> health HEALTH_WARN mds 1 is laggy
[9:08] <lynn_yudi> how can i do ?
[9:15] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[9:17] * deepsa_ (~deepsa@122.172.2.2) has joined #ceph
[9:18] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:19] * deepsa (~deepsa@122.172.170.181) Quit (Ping timeout: 480 seconds)
[9:19] * deepsa_ is now known as deepsa
[9:24] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[9:26] * fghaas (~florian@91-119-129-178.dynamic.xdsl-line.inode.at) has joined #ceph
[9:31] * lynn_yudi (3ad6e9d6@ircip4.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[9:39] * Leseb (~Leseb@193.172.124.196) has joined #ceph
[10:24] * tnt (~tnt@office.intopix.com) Quit (Ping timeout: 480 seconds)
[10:28] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[10:32] * tnt (~tnt@office.intopix.com) has joined #ceph
[10:45] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[10:45] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[10:54] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Ping timeout: 480 seconds)
[10:57] * loicd (~loic@83.167.43.235) has joined #ceph
[10:57] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[11:01] * fc (~fc@83.167.43.235) has joined #ceph
[11:05] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[11:06] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[11:12] * yoshi (~yoshi@p22043-ipngn1701marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[11:22] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[11:24] * nickname (~nickname@ip-48.ias.rwth-aachen.de) has joined #ceph
[11:25] <nickname> Hi, could it be that there are broken deps in your debian repo?
[11:26] <nickname> I have not tried the "testing" designated one, yet, though.
[11:29] <tnt> what distrib ?
[11:29] <tnt> worked fine on ubuntu 12.04 for me
[11:29] * dabeowulf (dabeowulf@free.blinkenshell.org) has joined #ceph
[11:30] <nickname> Squeeze.
[11:30] <nickname> The ceph-client-tools deb specifically.
[11:31] <nickname> ceph does not pull that in / depend on it.
[11:31] <nickname> You have that installed as well?
[11:32] <tnt> I pulled ceph and that installed all I needed to setup a cluster
[11:33] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[11:34] <nickname> I wanted to try the filesystem. The package provides the kernel driver from what I gathered. Maybe I can still try fuse. Well, and the "testing" repo.
[11:35] <tnt> I'm not sure client-tools has the kernel driver
[11:37] <fghaas> nickname: no, the kernel driver is supposed to come from your kernel :)
[11:37] <fghaas> use the 3.2.0 backports kernel if you're on squeeze
[11:38] <nickname> Thanks, I thought there was a module.
[11:38] <fghaas> well it is, but it just doesn't ship separately
[11:39] <nickname> I see. What about the rbd module? I wonder if that is in my current kernel already.
[11:39] <fghaas> same
[11:39] <fghaas> been in since 2.6.37
[11:39] <fghaas> or .38, something like that
[11:39] <nickname> Thanks a lot!
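A sketch of acting on that advice on squeeze; the backports repository line and the metapackage name follow the usual squeeze-backports conventions and should be double-checked before use:

    echo 'deb http://backports.debian.org/debian-backports squeeze-backports main' \
        >> /etc/apt/sources.list
    apt-get update
    apt-get -t squeeze-backports install linux-image-amd64
    # after rebooting into the 3.2 kernel, the ceph and rbd drivers ship with it
    modprobe ceph
    modprobe rbd
    modinfo rbd | head -3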
[11:42] <nickname> Oh, maybe a quick yes or no from you (= famous ;> ) on if CephFS makes sense as a cluster FS to put on DRBD for dual primary operation?
[11:42] <fghaas> no it doesn't
[11:42] <fghaas> because ceph handles all replication internally, and better than drbd in that it allows arbitrary replication sets
[11:42] <nickname> I should try GlusterFS, then I guess.
[11:43] <nickname> I feared it would bring another replication along.
[11:43] <fghaas> glusterfs on drbd makes zero sense as well
[11:43] <nickname> Well, "*feared*".
[11:43] <nickname> Oh.
[11:43] <fghaas> because glusterfs comes with its own replication just the same
[11:44] <fghaas> glusterfs is definitely a better idea than dual primary drbd with gfs2/ocfs2 though
[11:45] <fghaas> albeit not as scalable & flexible as ceph
[11:45] <nickname> Did you ever try OrangeFS?
[11:45] <fghaas> nope
[11:47] <nickname> Thanks again for these pointers!
[11:48] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[12:10] * allsystemsarego (~allsystem@188.25.134.117) has joined #ceph
[12:25] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[12:32] * nickname (~nickname@ip-48.ias.rwth-aachen.de) Quit (Quit: Leaving)
[12:41] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[13:03] * deepsa (~deepsa@122.172.2.2) Quit (Ping timeout: 480 seconds)
[13:28] * deepsa (~deepsa@122.172.26.142) has joined #ceph
[14:31] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[14:52] * aliguori (~anthony@cpe-70-123-145-39.austin.res.rr.com) has joined #ceph
[14:55] * andrewbogott (~andrewbog@50-93-251-66.fttp.usinternet.com) has joined #ceph
[14:56] * andrewbogott (~andrewbog@50-93-251-66.fttp.usinternet.com) Quit ()
[15:30] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[15:32] * Leseb (~Leseb@193.172.124.196) Quit (Quit: Leseb)
[15:36] * Leseb (~Leseb@193.172.124.196) has joined #ceph
[15:58] * nasko (~nasko@84.21.199.18) has joined #ceph
[16:01] <nasko> Hi guys. I'm new to ceph. I've just set up a ceph 0.48, six-node cluster on Fedora 16 (i built ceph from source) and everything seemed fine, except the fact that when i mount the FS via the kernel module, on some occasions the FS doesn't allow me to write
[16:02] <nasko> with FUSE - it is fine
[16:02] <nasko> thanks
[16:02] <fghaas> what's your client kernel version?
[16:02] <nasko> again 0.48
[16:03] <fghaas> er, no. your linux kernel on the client where you're mounting ceph.
[16:03] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[16:03] <fghaas> as per uname -r, that is
[16:03] <nasko> Linux st1.lvs 3.4.5-1.fc16.x86_64
[16:04] <nasko> the interesting part is that i can write some of the files
[16:04] <nasko> for example, the ones, which i've created
[16:04] <fghaas> are you mounting on the same nodes that run ceph osds?
[16:04] <nasko> while i was mounted with fuse
[16:04] <nasko> yes
[16:04] <nasko> monitor processes are 3
[16:04] <nasko> osd-s are 6
[16:05] <nasko> 6 server setup
[16:05] <nasko> so 3 of the osd servers are mon servers also
[16:05] <fghaas> yeah, not a good idea, has deadlock potential
[16:05] <fghaas> I mean combining mons and osds is perfectly fine
[16:06] <fghaas> it's mounting ceph on an osd node that may be a problem
[16:06] <nasko> so it is not good to mount ceph on the same server that the osd is on
[16:06] <nasko> right?
[16:07] <fghaas> http://permalink.gmane.org/gmane.comp.file-systems.ceph.devel/7141
[16:08] <nasko> ok. might be that. thanks a lot. so the best approach is to have separate servers
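A minimal example of mounting the filesystem from a client machine that is not running an OSD; the monitor address, mount point and secret file are placeholders:

    # kernel client
    mount -t ceph 192.168.0.10:6789:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret
    # or the FUSE client
    ceph-fuse -m 192.168.0.10:6789 /mnt/ceph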
[16:08] <nasko> i was testing the setup for kvm virtual infrastructure
[16:08] <joao> fghaas, do you have those mailing list threads indexed or something? I was trying to find just that and you managed that before I even got to gmane :p
[16:08] <fghaas> joao: I was googling for "ceph mount deadlock"
[16:09] <nasko> but the kvm is using the same machines for processing
[16:09] <joao> fghaas, that makes much more sense than to find this one thread by hand :x
[16:09] <joao> silly me
[16:09] <fghaas> nasko: consider kvm+rbd, really, not kvm+cephfs
[16:10] <fghaas> or even better, openstack+ceph with the rbd backends for glance and nova-volume/cinder
[16:10] <nasko> does kvm+rbd have the same requirement - osds not on the kvm machines?
[16:11] <fghaas> hmm, that's a good question; since no double filesystem I/O path in the kernel is involved it might not, but don't take my word for that. joao?
[16:12] <joao> no idea; josh is probably the go to guy on anything rbd
[16:13] <joao> although I'd be curious to know that as well
[16:18] <dspano> nasko: I did the same thing at first, running clients on the OSDs, and got lots of bizarre kernel crashes. I guess it causes a loopback problem if I remember correctly.
[16:19] <dspano> nasko: I use kvm+rbd with Openstack as Florian suggested, and it runs great.
[16:20] <nasko> thanks a lot for all your comments. I'll research the rbd
[16:20] * loicd1 (~loic@83.167.43.235) has joined #ceph
[16:20] * loicd (~loic@83.167.43.235) Quit (Read error: No route to host)
[16:20] <dspano> That begs the question, do the same issues arise if you mount cephfs on the same servers that run mon and mds?
[16:20] <nasko> hmm that would be interesting
[16:22] <joao> I doubt that about mon
[16:22] <joao> mon's store is specific to the daemon, and only the daemon will access it
[16:23] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[16:23] <dspano> I've been too afraid to try something like that.
[16:24] <dspano> I screwed up enough already.
[16:24] <joao> not sure about mds though
[16:24] <joao> but I don't recall the mds having a store
[16:24] <joao> but then again, haven't paid much attention to the mds, so I could be wrong
[16:24] * loicd (~loic@83.167.43.235) has joined #ceph
[16:25] * loicd1 (~loic@83.167.43.235) Quit ()
[16:26] <dspano> Yeah I think you're right, I think it stores everything in rados.
[16:26] * loicd1 (~loic@83.167.43.235) has joined #ceph
[16:26] * loicd (~loic@83.167.43.235) Quit (Read error: Connection reset by peer)
[16:28] * loicd (~loic@83.167.43.235) has joined #ceph
[16:28] * loicd1 (~loic@83.167.43.235) Quit (Read error: Connection reset by peer)
[16:30] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Quit: LarsFronius)
[16:31] <fghaas> mds have no local store, no
[16:36] * fghaas (~florian@91-119-129-178.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[16:53] * nasko (~nasko@84.21.199.18) Quit (Quit: Leaving)
[17:01] * fc (~fc@83.167.43.235) Quit (Quit: leaving)
[17:13] * loicd (~loic@83.167.43.235) Quit (Quit: Leaving.)
[17:18] * Leseb (~Leseb@193.172.124.196) Quit (Quit: Leseb)
[17:28] * nhm smacks burnupi14
[17:30] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:30] <nhm> ugh, these controllers are just awful.
[17:30] <elder> Do you have a virtual smacking device of some kind?
[17:31] <nhm> elder: it's my backup company if we spend all of our startup money on booze.
[17:32] <elder> Don't let that idea go. Make sure you pursue it even if the startup money doesn't disappear.
[17:32] <elder> "The Clapper" was a pretty big success you know, and it's something of a cultural icon now.
[17:33] <nhm> I figure I'll market it to the Japanese first as part of a gameshow pitch.
[17:38] * glowell (~glowell@141.142.134.93) has joined #ceph
[17:39] * loicd (~loic@magenta.dachary.org) has joined #ceph
[17:41] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[17:53] * kfogel (~Karl@74-92-190-113-Illinois.hfc.comcastbusiness.net) has joined #ceph
[17:59] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[17:59] * loicd (~loic@magenta.dachary.org) has joined #ceph
[18:00] * EmilienM (~EmilienM@191.223.101.84.rev.sfr.net) has left #ceph
[18:08] * tnt (~tnt@office.intopix.com) Quit (Ping timeout: 480 seconds)
[18:09] * BManojlovic (~steki@212.200.241.106) has joined #ceph
[18:31] <kfogel> loicd: hey there.
[18:32] <kfogel> Anyone here know of companies that offer ceph POSIX-compat mountable network filesystems priced by the gigabyte or terabyte? It doesn't seem inktank.com offers that particular service; they seem to be more boutique consulting and other expen$ive stuff :-).
[18:34] <gregaf> kfogel: so you're looking for NAS appliances that use Ceph?
[18:34] <gregaf> there aren't any yet that I'm aware of
[18:35] <kfogel> gregaf: yes, basically. I guess really I'm looking for any NAS services that offer a mountable filesystem priced by gig/terabyte, but I'd certainly prefer one that implements it using ceph.
[18:35] <kfogel> gregaf: maybe someday soon they will exist -- seems like a natural fit.
[18:36] <gregaf> yep!
[18:37] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[18:44] <loicd> kfogel: \o
[18:48] <loicd> kfogel: if you rent a cloud based block storage and run a NFS server from a virtual machine, does it match what you would want to use ?
[18:49] * tnt (~tnt@93.56-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[18:50] <kfogel> loicd: yes, but we want to trade $$ for time/competence. We *could* assemble this ourselves. But we have a lot of systems to administer already; I'm really looking for someone who will sell us a mount point for money, and take care of the details behind the scenes. I'm tired of spending all my time on sysadminning instead of the thing I originally set out to do :-).
[18:51] <kfogel> loicd: it's actually rather surprising to me, how hard it is to find anyone selling this service. Many services will go 90% of the way there, but not to the actual 100% of selling a POSIX-compatible NFS mount point.
[18:51] <mikeryan> kfogel: it's a dirty secret, but we're not focusing on POSIX compat
[18:51] <mikeryan> for now
[18:51] <kfogel> mikeryan: ah, okay. That's useful to know. I'd heard that feature was in beta dev still, and I wasn't sure how quickly it is progressing.
[18:52] <mikeryan> the official word is 6-12 months out
[18:52] <kfogel> mikeryan: *nod* Thank you.
[18:52] <loicd> kfogel: understood and approved ;-)
[18:52] <mikeryan> editorial: i think that's a mistake on our part
[18:52] <mikeryan> but i'm just a programmer, what do i know ;)
[18:53] <gregaf> good save, fresh fish ;)
[18:53] <kfogel> mikeryan: I think you might be right, but I'm biased. I assume there are a lot of little groups like ours who don't need fast performance from the filesystem, they just want a HUGE and reliable storage area for large assets. We can copy to faster disk when we need to work with something.
[18:53] <mikeryan> kfogel: that mirrors my thinking exactly
[18:53] <mikeryan> people with very deep pockets want rgw and rbd, but lots of people with shallow pockets want POSIX
[18:53] <kfogel> mikeryan: *exactly*
[18:54] <kfogel> mikeryan: sheesh. Maybe I should learn how to roll it myself and then offer it, if I'm so sure there's a market. Hmmmmrm.
[18:54] <mikeryan> if it makes you feel any better i'm on a campaign to fix that misalignment
[18:54] <kfogel> mikeryan: feel free to add this one anecdotal data point in support of that campaign.
[18:54] <mikeryan> duly noted
[18:54] <gregaf> just as soon as the object store stops losing data out from under us, you can bet your ass we'll start playing with POSIX again
[18:55] <kfogel> gregaf: gulp. I agree with that priority, at least! :-)
[18:55] <gregaf> kfogel: it doesn't really do that, and hasn't for a long time
[18:55] <gregaf> but 6 months ago it was shaky in several ways it isn't any more, and working on the filesystem just wasn't going to help if the object store stayed shaky
[18:56] <gregaf> and we realized part of the problem was that making the MDS work was way more sexy so we were prioritizing it inappropriately *sigh*
[18:57] <mikeryan> i've spent a few weeks hacking on the core, making it crash, and resurrecting it
[18:58] <mikeryan> it's not *all* fire and brimstone
[18:58] <mikeryan> though the standard disclaimer of "i've only been here a month" must apply
[19:00] <kfogel> gregaf, mikeryan: thanks for the update. I will be following ceph, even if I can't use it right away; this is all good to know.
[19:10] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) has joined #ceph
[19:20] * chutzpah (~chutz@100.42.98.5) has joined #ceph
[19:48] * Leseb_ (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[19:48] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[19:48] * Leseb_ is now known as Leseb
[19:50] * Leseb_ (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[19:50] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[19:50] * Leseb_ is now known as Leseb
[19:55] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[19:55] * loicd (~loic@magenta.dachary.org) has joined #ceph
[20:00] * duerF (~tommi@31-209-240-108.dsl.dynamic.simnet.is) Quit (Ping timeout: 480 seconds)
[20:20] <dspano> gregaf: I decided to rebuild my cluster from scratch. I think I did too many naughty things to my original cluster which caused it to act funky. Many rookie mistakes when issuing commands to the OSDs.
[20:22] <gregaf> dspano: I'm having trouble remembering your story off-hand, but I hope it works better this time :)
[20:22] <dspano> Also, my original install started with 0.41, and I then upgraded to argonaut. I noticed after installing over from scratch with the upstream stable packages, that there were many underlying changes from 0.41 to 0.48.
[20:23] <gregaf> yes, yes there were -- they should have rolled forward happily, but we aren't comprehensively testing that yet...
[20:23] <dspano> gregaf: My OSDs weren't peering with each other after I would shut down the ceph daemons and reboot the server.
[20:23] <gregaf> ah, right
[20:24] <dspano> gregaf: I upgraded from the Ubuntu packages to argonaut, so I don't know if that had anything to do with it.
[20:25] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[20:35] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[20:36] * loicd (~loic@magenta.dachary.org) has joined #ceph
[20:42] <elder> Is Yehuda around?
[20:43] <elder> (I know. Lunch time.)
[20:45] * duerF (~tommi@31-209-240-108.dsl.dynamic.simnet.is) has joined #ceph
[20:47] <mikeryan> elder: aon hits the elevators at 11:45 sharp
[20:47] <mikeryan> you can set your watch by it
[20:54] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[20:57] * ryann (~chatzilla@216.81.130.180) Quit (Read error: Connection reset by peer)
[20:57] * ryann (~chatzilla@216.81.130.180) has joined #ceph
[21:01] <elder> mikeryan, but today is in-house lunch.
[21:02] <mikeryan> away for three weeks and i already forgot the joyous perks
[21:03] <nhm> hehe
[21:03] <elder> joshd, snapshot ids never change, snapshot names can, right?
[21:05] <elder> When a header gets refreshed, it goes through the list of snapshots in reverse, in order to handle some sort of scenario that I think can't happen.
[21:05] <elder> This is what I wanted to ask Yehuda about, for some history.
[21:06] <elder> It appears we want to watch out for a situation like:
[21:06] <elder> - rename snapshot "a" -> "b"
[21:06] <elder> - create snapshot "a"
[21:06] <elder> and somehow mistaking the second, new snapshot as being the other one.
[21:07] <elder> But in fact, we aren't even looking at names, just id's, so I think the whole do-it-in-reverse thing is unnecessary.
[21:08] <elder> It matters to me because I'm working on re-implementing this for version 2 (or refactoring it to support version 2) and it would be nice to start with something a little simpler.
[21:08] <elder> Code in question is __rbd_init_snaps_header() in the kernel client.
[21:11] * Cube (~Adium@12.248.40.138) has joined #ceph
[21:14] * Ryan_Lane (~Adium@216.38.130.166) has joined #ceph
[21:19] * andrewbogott (~andrewbog@c-76-113-214-220.hsd1.mn.comcast.net) has joined #ceph
[21:25] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[21:27] * Leseb_ (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[21:27] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[21:27] * Leseb_ is now known as Leseb
[21:33] <joshd> elder: snapshots can't be renamed
[21:36] <elder> Why not?
[21:37] * allsystemsarego (~allsystem@188.25.134.117) Quit (Quit: Leaving)
[21:38] <joao> is there any chance we are unable to run ceph on a 32-bit architecture?
[21:38] * Cube (~Adium@12.248.40.138) Quit (Quit: Leaving.)
[21:38] <joshd> elder: there's no technical reason they can't be, there's just no way to do it right now
[21:39] <elder> OK.
[21:39] <elder> But id's will never change regardless.
[21:39] <joshd> right
[21:39] <elder> (snapshot id)
[21:39] <elder> OK, then I think I will pursue making this snapshot update thing a bit easier to follow.
[21:40] <joshd> ok
[21:47] <andrewbogott> Yesterday I was running benchmarks on cephfs, and worrying about them being slow. Today I created a couple of rbds with xfs and ran the same benchmark on them. They were even slower. That's surprising, right?
[21:49] <joshd> depends on the benchmarks and whether you're using the kernel rbd module, or the userspace one with client side caching
[21:55] <andrewbogott> Hm... how do I know which module I'm using?
[21:57] <joshd> do you have entries in /sys/bus/rbd/devices?
[21:59] <andrewbogott> # ls /sys/bus/rbd/devices/
[21:59] <andrewbogott> 0
[21:59] <joshd> yeah, that's the kernel module
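For reference, a rough way to see what the kernel module has mapped from that same sysfs tree; the attribute names reflect the rbd driver of this era and are best-effort:

    for d in /sys/bus/rbd/devices/*; do
        # each entry corresponds to a /dev/rbd<N> block device
        echo "rbd$(basename $d): pool=$(cat $d/pool) image=$(cat $d/name) bytes=$(cat $d/size)"
    done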
[22:00] <andrewbogott> The benchmark was a random io test. I would expect it to be slow, just not slower than cephfs.
[22:01] <andrewbogott> Any intuition about whether I'm doing something horribly wrong or if rbd volumes are just kinda slow?
[22:01] <andrewbogott> Or if xfs is for some reason a pathological case to combine with rbd?
[22:01] * andrewbogott tries again with ext3
[22:02] <joshd> kernel rbd with directio is going to be slow, because it will wait for the writes to go to all replicas before returning
[22:02] <andrewbogott> Ah, that makes sense.
[22:02] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[22:03] <andrewbogott> And I suppose if I use caching then I'm vulnerable to data loss.
[22:03] <joshd> not really - it's just like a hardware disk cache - it's flushed when the OS tells it to
[22:04] <joshd> so if you sync, you're guaranteed everything is written to the osds
[22:04] <joshd> and it starts writeback after 1 second by default
[22:04] <joshd> you can also use it in writethrough mode to help reads, but always have writes go to the osds
[22:06] <andrewbogott> can you point me in the direction of getting caching enabled?
[22:11] <joshd> you'll need to run qemu to use the userspace library. the caching options are described in http://thread.gmane.org/gmane.comp.file-systems.ceph.devel/6397/focus=6402
[22:12] <joshd> http://ceph.com/w/index.php?title=QEMU-RBD might help with some context
[22:12] <andrewbogott> I will read. Thanks!
[22:13] <joshd> you're welcome :)
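A sketch of what enabling the librbd cache looks like on a qemu command line, per the thread joshd links above; the option spelling follows that era's rbd_cache settings, and the pool/image names are placeholders:

    # writeback cache: dirty data is flushed when the guest issues a flush/sync
    qemu-system-x86_64 ... \
        -drive file=rbd:rbd/myimage:rbd_cache=true,format=rbd,if=virtio
    # writethrough-like behaviour: keep the read cache but never hold dirty data
    #   -drive file=rbd:rbd/myimage:rbd_cache=true:rbd_cache_max_dirty=0,format=rbd,if=virtio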
[22:21] <andrewbogott> joshd: Would you expect qemu-rbd to be faster than cephfs, or equivalent? (So far it sounds like it uses similar tech.)
[22:21] * izdubar (~MT@c-50-137-1-13.hsd1.wa.comcast.net) Quit (Quit: Leaving)
[22:23] <joshd> slower for metadata heavy workloads, for others similar
[22:29] * lofejndif (~lsqavnbok@04ZAAEQVN.tor-irc.dnsbl.oftc.net) has joined #ceph
[22:47] <elder> OK, I'm going away for a few hours, maybe quitting for the day.
[23:12] * andrewbogott (~andrewbog@c-76-113-214-220.hsd1.mn.comcast.net) Quit (Quit: andrewbogott)
[23:12] * andrewbogott (~andrewbog@c-76-113-214-220.hsd1.mn.comcast.net) has joined #ceph
[23:44] * __jt___ (~james@jamestaylor.org) Quit (Quit: Lost terminal)
[23:46] * __jt__ (~james@jamestaylor.org) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.