#ceph IRC Log

IRC Log for 2014-08-30

Timestamps are in GMT/BST.

[0:00] * dgurtner (~dgurtner@241-236.197-178.cust.bluewin.ch) Quit (Quit: leaving)
[0:03] * dgurtner (~dgurtner@241-236.197-178.cust.bluewin.ch) has joined #ceph
[0:04] * dgurtner (~dgurtner@241-236.197-178.cust.bluewin.ch) Quit ()
[0:05] * dgurtner (~dgurtner@241-236.197-178.cust.bluewin.ch) has joined #ceph
[0:07] <absynth_> mh, anyone awake?
[0:09] <tnt> awake ... yeah. alert ? not quite.
[0:11] <absynth_> how do i start osds one by one using the debian/ubuntu initscripts?
[0:11] <absynth_> having a slight fit here
[0:12] <tnt> service ceph start osd.N
[0:12] <absynth_> mh, thought so too
[0:12] <absynth_> but alas, /etc/init.d/ceph: osd.7 not found (/etc/ceph/ceph.conf defines , /var/lib/ceph defines )
[0:13] <tnt> well ... I think for this to work, you need to have the osd sections defined in the conf. You can't rely on the 'auto' stuff which is only udev based.
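For reference, the sysvinit script only starts daemons it can find in the config, so per-daemon start needs an explicit section in /etc/ceph/ceph.conf. A minimal sketch (the hostname here is illustrative):

    [osd.7]
        host = fcmsnode3

    service ceph start osd.7
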
[0:14] <absynth_> log is spammed with
[0:14] <absynth_> 2014-08-30 00:14:11.397576 7fcf401d0700 1 heartbeat_map is_healthy 'OSD::op_tp thread 0x7fcf3d1ca700' had timed out after 15
[0:14] <absynth_> what is that...?!
[0:14] * tinklebear (~tinklebea@cpe-066-057-253-171.nc.res.rr.com) Quit (Ping timeout: 480 seconds)
[0:14] * dmsimard is now known as dmsimard_away
[0:17] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[0:18] * dgurtner (~dgurtner@241-236.197-178.cust.bluewin.ch) Quit (Read error: Connection reset by peer)
[0:19] * ricardo (~ricardo@2404:130:0:1000:5542:f2e7:27eb:c1c1) Quit (Ping timeout: 480 seconds)
[0:20] * chuffpdx (~chuffpdx@208.186.186.51) Quit (Read error: Connection reset by peer)
[0:20] * chuffpdx (~chuffpdx@208.186.186.51) has joined #ceph
[0:21] * dgurtner (~dgurtner@241-236.197-178.cust.bluewin.ch) has joined #ceph
[0:21] * joef1 (~Adium@2601:9:280:f2e:dc2:d771:f1af:ff04) Quit (Quit: Leaving.)
[0:23] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[0:26] <absynth_> what is that error message?
[0:26] <absynth_> 2014-08-30 00:26:11.148200 7fb68a162700 1 heartbeat_map is_healthy 'OSD::op_tp thread 0x7fb68695b700' had timed out after 15
[0:27] <absynth_> i have that on 4 OSDs that went down 45 mins ago and cannot be brought up again
[0:28] * ricardo (~ricardo@2404:130:0:1000:549:f999:bfed:1c69) has joined #ceph
[0:29] * qhartman (~qhartman@den.direwolfdigital.com) Quit (Quit: Ex-Chat)
[0:29] <carmstrong> anyone know why I'm getting No handlers could be found for logger "ceph_deploy" when running as a new ceph user?
[0:29] <carmstrong> I thought I shouldn't run ceph-deploy as root
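The logger warning is Python logging noise that often masks the real failure; the ceph-deploy docs of this era suggest running as a dedicated non-root user with passwordless sudo. A sketch (username arbitrary):

    sudo useradd -d /home/ceph-user -m ceph-user
    sudo passwd ceph-user
    echo "ceph-user ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph-user
    sudo chmod 0440 /etc/sudoers.d/ceph-user
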
[0:35] * rendar (~I@host230-19-dynamic.3-79-r.retail.telecomitalia.it) Quit ()
[0:37] * dgurtner (~dgurtner@241-236.197-178.cust.bluewin.ch) Quit (Ping timeout: 480 seconds)
[0:38] * ircolle is now known as ircolle-afk
[0:38] * simulx (~simulx@66-194-114-178.static.twtelecom.net) Quit (Read error: Connection reset by peer)
[0:38] <absynth_> someone from redhat around? gnrmpf
[0:39] * simulx (~simulx@66-194-114-178.static.twtelecom.net) has joined #ceph
[0:48] * mtl1 (~Adium@c-67-174-109-212.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[0:52] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) Quit (Ping timeout: 480 seconds)
[0:54] <sjustwork> absynth_: anything in dmesg?
[0:56] <absynth_> not at all
[0:56] <absynth_> just filed a ticket
[0:56] <absynth_> the cluster is not starting rebalance, either
[0:56] <sjustwork> what happened to cause the osds to go down?
[0:56] <absynth_> no idea
[0:56] <sjustwork> are they all on the same node?
[0:57] <absynth_> 2014-08-29 23:53:27.153825 7f98a1caa700 1 heartbeat_map is_healthy 'OSD::op_tp thread 0x7f989eca4700' had timed out after 15
[0:57] <absynth_> this is the first message indicating an issue
[0:57] <absynth_> no asserts, nothing
[0:57] <absynth_> all four are on the same machine
[0:57] <sjustwork> are they seen as up?
[0:57] <absynth_> no
[0:57] <absynth_> after the flapping got too intense, we killed the processes
[0:57] <sjustwork> try restarting them
[0:57] <sjustwork> ok
[0:58] <absynth_> but as soon as we restart them, they do stuff for a while, then hit the message i pasted
[0:58] <sjustwork> is there memory pressure?
[0:58] <sjustwork> that often indicates a filesystem hang
[0:58] * dmick (~dmick@2607:f298:a:607:7851:3223:bb7c:2385) Quit (Ping timeout: 480 seconds)
[0:58] <absynth_> 5 gig free, so i'd say no
[0:58] <sjustwork> can you sync the filesystems for those osds, unmount, remount them?
[0:58] * steveeJ (~junky@HSI-KBW-085-216-022-246.hsi.kabelbw.de) Quit (Remote host closed the connection)
[0:58] <sjustwork> or perhaps better yet set nodown and reboot the node
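A sketch of the two recovery paths suggested here (the OSD id and mount point are examples; /var/lib/ceph/osd/<cluster>-<id> is the default data path):

    # per OSD: flush and remount its data filesystem
    sync
    umount /var/lib/ceph/osd/ceph-7
    mount /var/lib/ceph/osd/ceph-7     # assumes an fstab entry

    # or node-wide: stop peers marking the OSDs down, then reboot
    ceph osd set nodown
    reboot
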
[0:58] <absynth_> that's out of the question (rebooting)
[0:59] <absynth_> root@fcmsnode3:~# ps auxxw|grep -c qemu-system-x86
[0:59] <absynth_> 130
[0:59] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:59] <absynth_> just a _tad_ too many collaterals
[1:00] <absynth_> oliver is remounting now
[1:00] <absynth_> hopefully that will work
[1:00] * Cybertinus (~Cybertinu@2a00:6960:1:1:0:24:107:1) has joined #ceph
[1:00] <absynth_> remounted, now carefully starting the OSDs
[1:01] <sjustwork> this is xfs?
[1:02] <sjustwork> nothing notable changed recently?
[1:02] <absynth_> nope
[1:02] <kraken> http://i.imgur.com/c4gTe5p.gif
[1:02] <sjustwork> is there anything interesting about that hardware?
[1:02] <absynth_> we just got awoken by SMSes
[1:02] <absynth_> no
[1:02] <sjustwork> just xfs on spinning disk?
[1:02] <absynth_> yep
[1:03] <absynth_> ssd journal, and we have that ssd cache on the raid controller
[1:03] <absynth_> maybe the SSD died? but it would die loudly, wouldnt it?
[1:03] <sjustwork> I have no idea
[1:03] <sjustwork> particularly with that cache
[1:03] <absynth_> it worked just fine for a year or so
[1:04] <sjustwork> can you check the status on that?
[1:04] <sjustwork> these 4 osds share the same ssd?
[1:04] <sjustwork> is that all of the ssds on the host?
[1:04] <sjustwork> *osds
[1:06] <absynth_> wait
[1:06] * dmick (~dmick@2607:f298:a:607:649e:a8fc:9d86:f096) has joined #ceph
[1:07] <absynth_> there's one ssd per OSD
[1:07] <sjustwork> is that all of the osds on the host?
[1:07] <absynth_> yes
[1:07] <sjustwork> sounds like the controller or some xfs kernel state
[1:08] <absynth_> both should appear in dmesg
[1:08] <sjustwork> is it still happening?
[1:08] <absynth_> but yes, looks like something global
[1:08] <absynth_> yeah, it's still down
[1:08] <seapasul1i> I am thinking about redoing my ceph cluster but I have data that I need in the cluster. Is there any way for me to slowly take down my initial cluster, rebuild a new one, and copy the data between a local and foreign ceph pool?
[1:08] <sjustwork> ok, pick one of the osds
[1:08] <sjustwork> and restart with
[1:08] <sjustwork> debug osd = 20
[1:08] <sjustwork> debug filestore = 20
[1:08] <sjustwork> debug ms = 1
[1:09] <sjustwork> and debug journal = 20
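Those settings go in the OSD's section of /etc/ceph/ceph.conf before the restart, roughly:

    [osd.7]                        ; or [osd] to cover every OSD on the host
        debug osd = 20
        debug filestore = 20
        debug ms = 1
        debug journal = 20

    service ceph restart osd.7
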
[1:10] <sjustwork> dumpling?
[1:10] <absynth_> ceph version 0.67.5 (a60ac9194718083a4b6a225fc17cad6096c69bd1)
[1:12] <sjustwork> absynth_: is it flapping or staying down?
[1:12] <absynth_> right now it's all down, because we have kept the OSDs down
[1:13] <sjustwork> I guess you have noout set to prevent rebalancing/
[1:13] <sjustwork> ?
[1:13] <absynth_> the usual behavior is that one OSD comes up, a lot of slow reqs start flying that way and it gets marked down by its peers
[1:13] <absynth_> it's not set
[1:13] <absynth_> still, the cluster is not rebalancing
[1:14] <sjustwork> wait, each OSD has its own ssd journal?
[1:14] <sjustwork> are they marked out?
[1:14] <sjustwork> you have 4 hdds and 4 ssds in that machine?
[1:14] <absynth_> quite sure we do
[1:14] <sjustwork> are the osds marked in or out?
[1:15] <absynth_> wait
[1:15] <absynth_> they aren't in
[1:15] <absynth_> osdmap e1985: 40 osds: 36 up, 40 in
[1:15] <sjustwork> yes they are
[1:15] <absynth_> oh wait, they are in
[1:15] <sjustwork> if you want it to rebalance, you'll have to mark them out
[1:15] <absynth_> we're looking at the controller in that box now
[1:15] <sjustwork> (it'll do it on its own eventually)
[1:16] <absynth_> after what timeout?
[1:16] <sjustwork> mon_osd_down_out_interval
[1:16] <sjustwork> I think
[1:16] <sjustwork> defaults to 5 minutes
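For reference, the manual and automatic paths (OSD id illustrative; 300 seconds is the default sjustwork cites):

    ceph osd out 7     # mark a down OSD out by hand; triggers rebalancing

    [mon]
        mon osd down out interval = 300    ; seconds before a down OSD is auto-marked out
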
[1:17] * dgbaley27 (~matt@c-98-245-167-2.hsd1.co.comcast.net) has joined #ceph
[1:17] <absynth_> yeah, 5 mins here, but no rebalancing and some OSDs are down for i dunno, an hour now
[1:19] <sjustwork> are the processes running?
[1:20] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[1:20] * sleinen1 (~Adium@2001:620:0:68::100) Quit (Ping timeout: 480 seconds)
[1:20] <sjustwork> are the ssds doing anything other than journaling?
[1:20] <absynth_> no, because we killed them to prevent more flapping
[1:20] <absynth_> nope
[1:20] <kraken> http://i.imgur.com/iSm1aZu.gif
[1:20] <sjustwork> well, probably should just manually mark them out
[1:21] <sjustwork> all pgs are active, right?
[1:21] * oms101 (~oms101@p20030057EA4CC400C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:21] <sjustwork> what is the output of ceph -s?
[1:22] <absynth_> health HEALTH_WARN 7218 pgs degraded; 7218 pgs stuck unclean; 1209 requests are blocked > 32 sec; recovery 2176431/14119520 degraded (15.414%); 2 near full osd(s); 4/40 in osds are down; noout flag(s) set
[1:22] <sjustwork> you do have noout set
[1:22] <sjustwork> what is the output of ceph -s?
[1:22] <sjustwork> also, you have two near-full osds
[1:22] <sjustwork> so you may not want to mark them out
[1:22] <sjustwork> since that will cause other osds to gain more data
[1:22] <sjustwork> not good
[1:22] <absynth_> monmap e1: 3 mons at {fcmsmon0=10.10.10.4:6789/0,fcmsmon1=10.10.10.5:6789/0,fcmsmon2=10.10.10.6:6789/0}, election epoch 4, quorum 0,1,2 fcmsmon0,fcmsmon1,fcmsmon2
[1:22] <absynth_> osdmap e1985: 40 osds: 36 up, 40 in
[1:23] <absynth_> pgmap v10476640: 23160 pgs: 15942 active+clean, 7218 active+degraded; 26257 GB data, 52912 GB used, 28973 GB / 81886 GB avail; 7293KB/s rd, 22419KB/s wr, 1045op/s; 2176431/14119520 degraded (15.414%)
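The "noout flag(s) set" in that health line is what is holding back automatic out-marking; it can be checked and toggled, though with two near-full OSDs clearing it here would be risky:

    ceph osd dump | grep flags     # shows e.g. "flags noout"
    ceph osd unset noout           # allow down OSDs to be marked out again
    ceph osd set noout             # re-apply it to hold off rebalancing
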
[1:23] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[1:23] <absynth_> we are in the middle of trying to rebalance data on that cluster, but since the balancing mechanics of ceph are, to say it politely, far from optimal, we are having a hard time doing so
[1:23] <sjustwork> two replicas?
[1:23] <absynth_> and since there was no clear best practice on how to dimension pools and pgs per pool when we started our cluster, we are really fucked
[1:23] <absynth_> yes
[1:24] <sjustwork> well, the host appears to be having trouble and marking the osds out will cause full osds
[1:25] <absynth_> 2014-08-30 01:25:50.531955 7fc0563fa700 0 -- 10.10.10.8:6802/7633 >> 10.10.10.175:6822/1014364 pipe(0x2e49000 sd=86 :64992 s=2 pgs=201363 cs=19727 l=0 c=0x78dd22c0).fault, initiating reconnect
[1:25] <absynth_> what are these?
[1:26] <sjustwork> usually harmless
[1:26] <absynth_> remember seeing these before
[1:26] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) Quit (Quit: leaving)
[1:27] <sage> why so many blocked requests? what says 'ceph health detail | grep slow' ?
[1:27] * zultron (~zultron@99-190-134-148.lightspeed.austtx.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[1:27] <absynth_> 1194 requests are blocked > 32 sec; 4 osds have slow requests;
[1:27] * fsimonce (~simon@host135-17-dynamic.8-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[1:27] <absynth_> those 4 are the downed OSDs
[1:28] * ircolle-afk is now known as ircolle
[1:28] * kevinc (~kevinc__@client64-180.sdsc.edu) Quit (Quit: Leaving)
[1:28] <sage> oh, so it's noise. (a reporting bug we fixed a while back, iirc)
[1:28] <absynth_> as soon as one of the OSDs lives, i can see those slow reqs
[1:29] <absynth_> and they are really many
[1:29] <sjustwork> absynth_: yeah, it looks like something is preventing IO from working on that host
[1:29] <absynth_> we think it's the controller, we have seen some weird stuff just now
[1:30] <sjustwork> that's not great, the osds will be suspect even if you swap out the controller
[1:30] * oms101 (~oms101@p20030057EA5CFE00C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:30] <absynth_> these controllers sometimes hang
[1:30] <absynth_> seen it before
[1:30] <sjustwork> how do you usually clear it?
[1:30] <absynth_> reboot
[1:30] <absynth_> at least that's the only idea everyone here has
[1:32] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[1:32] <absynth_> meh
[1:34] <absynth_> http://nopaste.info/e30501c100.html
[1:34] <absynth_> this statement about sums it up (by christian)
[1:35] * steveeJ (~junky@HSI-KBW-085-216-022-246.hsi.kabelbw.de) has joined #ceph
[1:53] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[1:55] * Pedras (~Adium@216.207.42.129) Quit (Ping timeout: 480 seconds)
[1:58] * reed (~reed@rackspacesf2.static.monkeybrains.net) Quit (Ping timeout: 480 seconds)
[2:04] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:04] <absynth_> i'm handing over to oliver and jens now, i opened ticket 180 in zendesk
[2:04] <kraken> absynth_ might be talking about http://tracker.ceph.com/issues/180 [Return ENOTEMPTY when trying to remove a directory which has a snapshot]
[2:11] * sjustwork (~sam@2607:f298:a:607:25fe:4f82:e03b:52f) has left #ceph
[2:11] * sjustwork (~sam@2607:f298:a:607:25fe:4f82:e03b:52f) has joined #ceph
[2:15] * BManojlovic (~steki@93-86-44-204.dynamic.isp.telekom.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[2:15] * BManojlovic (~steki@net73-154-245-109.mbb.telenor.rs) has joined #ceph
[2:16] * adamcrume (~quassel@2601:9:6680:47:cc3c:790b:4b7f:6f50) Quit (Remote host closed the connection)
[2:17] * monsterz_ (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[2:22] * jjgalvez (~JuanJose@162.219.179.70) has joined #ceph
[2:25] * ircolle (~Adium@2601:1:a580:145a:90f2:e77:7428:686e) Quit (Quit: Leaving.)
[2:38] * BManojlovic (~steki@net73-154-245-109.mbb.telenor.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[2:38] * BManojlovic (~steki@93-86-44-204.dynamic.isp.telekom.rs) has joined #ceph
[2:52] * zerick (~eocrospom@190.187.21.53) Quit (Read error: Operation timed out)
[2:54] * BManojlovic (~steki@93-86-44-204.dynamic.isp.telekom.rs) Quit (Ping timeout: 480 seconds)
[2:56] * sjustwork (~sam@2607:f298:a:607:25fe:4f82:e03b:52f) Quit (Quit: Leaving.)
[2:57] * Pedras (~Adium@50.185.218.255) has joined #ceph
[2:57] * filoo-jens (~jens@jump.filoo.de) has joined #ceph
[2:58] <carmstrong> can someone elaborate on: Note We do not recommend comingling monitors and OSDs on the same host.
[3:02] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[3:02] * yanzheng (~zhyan@171.221.143.132) has joined #ceph
[3:04] <filoo-jens> Hi guys .... sam: are you around ?
[3:05] * yanzheng (~zhyan@171.221.143.132) Quit ()
[3:07] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[3:10] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[3:13] * angdraug (~angdraug@host-200-119.pubnet.pdx.edu) Quit (Quit: Leaving)
[3:14] <Sysadmin88> carmstrong... ceph will run on almost any configuration... recommendations are likely for huge clusters that get busy
[3:14] <carmstrong> Sysadmin88: gotcha
[3:15] <classicsnail> carmstrong: it works, and works well enough in many situations, but as sysadmin88 notes, when it gets busy, you're better to split
[3:15] <carmstrong> fair enough
[3:15] <carmstrong> this is a tiny 3-node cluster
[3:15] <carmstrong> so I should be fine
[3:16] <classicsnail> if it's only a handful of disks, it will probably work fine
[3:16] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[3:16] <classicsnail> from experience, 3 lots of 72 disk chassis, with mons on same hosts as osds, can start to have load problems when rebuilds occur
[3:17] <carmstrong> hmmm now I'm getting admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[3:17] <carmstrong> trying to Dockerify Ceph
[3:18] * monsterzz (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[3:26] * tinklebear (~tinklebea@64.237.37.117) has joined #ceph
[3:26] * mtl1 (~Adium@c-98-245-49-17.hsd1.co.comcast.net) has joined #ceph
[3:33] * Pedras (~Adium@50.185.218.255) Quit (Quit: Leaving.)
[3:36] <loicd> houkouonchi-home: there seems to be a problem with http://gitbuilder.sepia.ceph.com/gitbuilder-ceph-tarball-precise-i386-basic/log.cgi?log=bb26c66b826ab095a981b35ba87bea6952d82d49
[3:37] <houkouonchi-home> loicd: did you see my email to dev? :P
[3:37] <houkouonchi-home> should work now
[3:38] <loicd> ah cool, thanks :-)
[3:49] * tinklebear (~tinklebea@64.237.37.117) Quit (Quit: Nettalk6 - www.ntalk.de)
[3:53] * Pedras (~Adium@50.185.218.255) has joined #ceph
[4:04] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[4:12] * monsterzz (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[4:16] * Pedras (~Adium@50.185.218.255) Quit (Quit: Leaving.)
[4:17] * sjm (~sjm@pool-108-53-147-245.nwrknj.fios.verizon.net) has joined #ceph
[4:18] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[4:25] * DP (~oftc-webi@zccy01cs104.houston.hp.com) Quit (Remote host closed the connection)
[4:27] * angdraug (~angdraug@host-200-119.pubnet.pdx.edu) has joined #ceph
[4:28] * dneary (~dneary@96.237.180.105) has joined #ceph
[4:40] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Ping timeout: 480 seconds)
[4:44] * angdraug (~angdraug@host-200-119.pubnet.pdx.edu) Quit (Quit: Leaving)
[4:53] * sjm (~sjm@pool-108-53-147-245.nwrknj.fios.verizon.net) has left #ceph
[5:00] * sjustlaptop (~sam@24-205-54-233.dhcp.gldl.ca.charter.com) has joined #ceph
[5:01] * longguang_ (~chatzilla@123.126.33.253) has joined #ceph
[5:05] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[5:05] * longguang (~chatzilla@123.126.33.253) Quit (Ping timeout: 480 seconds)
[5:05] * longguang_ is now known as longguang
[5:13] * monsterzz (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[5:17] * Pedras (~Adium@50.185.218.255) has joined #ceph
[5:17] * Pedras (~Adium@50.185.218.255) Quit ()
[5:19] * yuriw1 (~Adium@c-76-126-35-111.hsd1.ca.comcast.net) has joined #ceph
[5:23] * filoo-jens (~jens@jump.filoo.de) Quit (Quit: Ex-Chat)
[5:24] * dneary (~dneary@96.237.180.105) Quit (Ping timeout: 480 seconds)
[5:26] * yuriw (~Adium@c-76-126-35-111.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:30] * sjustlaptop (~sam@24-205-54-233.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[5:32] * Vacuum_ (~vovo@88.130.197.213) has joined #ceph
[5:39] * Vacuum (~vovo@i59F79D9C.versanet.de) Quit (Ping timeout: 480 seconds)
[6:06] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[6:11] * vbellur (~vijay@117.201.204.241) has joined #ceph
[6:14] * monsterzz (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[6:33] * MACscr (~Adium@c-98-214-170-53.hsd1.il.comcast.net) Quit (Quit: Leaving.)
[7:04] * zultron (~zultron@cpe-173-172-66-14.austin.res.rr.com) has joined #ceph
[7:07] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[7:15] * monsterzz (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[7:16] * diegows (~diegows@190.190.5.238) has joined #ceph
[7:17] * dgbaley27 (~matt@c-98-245-167-2.hsd1.co.comcast.net) Quit (Remote host closed the connection)
[7:27] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[8:07] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[8:08] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[8:15] * monsterzz (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[8:19] * KevinPerks1 (~Adium@2606:a000:80a1:1b00:287e:9962:8e48:a193) Quit (Quit: Leaving.)
[8:29] * rendar (~I@host36-181-dynamic.3-87-r.retail.telecomitalia.it) has joined #ceph
[8:30] * davidz1 (~Adium@cpe-23-242-12-23.socal.res.rr.com) Quit (Quit: Leaving.)
[8:55] * MACscr (~Adium@c-98-214-170-53.hsd1.il.comcast.net) has joined #ceph
[8:59] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[9:15] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) Quit (Read error: Operation timed out)
[9:16] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) has joined #ceph
[9:23] * longguang_ (~chatzilla@123.126.33.253) has joined #ceph
[9:24] * Andreas-IPO (~andreas@2a01:2b0:2000:11::cafe) Quit (Quit: No Ping reply in 180 seconds.)
[9:26] * Andreas-IPO (~andreas@2a01:2b0:2000:11::cafe) has joined #ceph
[9:28] * longguang (~chatzilla@123.126.33.253) Quit (Ping timeout: 480 seconds)
[9:28] * longguang_ is now known as longguang
[9:36] * Andreas-IPO (~andreas@2a01:2b0:2000:11::cafe) Quit (Quit: No Ping reply in 180 seconds.)
[9:38] * Andreas-IPO (~andreas@2a01:2b0:2000:11::cafe) has joined #ceph
[9:45] * ultimape (~Ultimape@c-174-62-192-41.hsd1.vt.comcast.net) Quit (Ping timeout: 480 seconds)
[9:46] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[9:47] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[9:48] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[9:48] * sleinen1 (~Adium@2001:620:0:68::100) has joined #ceph
[9:55] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[10:04] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[10:11] * monsterzz (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[10:13] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[10:29] * Andreas-IPO (~andreas@2a01:2b0:2000:11::cafe) Quit (Ping timeout: 480 seconds)
[10:34] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[10:43] * jtang_ (~jtang@80.111.83.231) Quit (Remote host closed the connection)
[10:54] * Daviey__ (~DavieyOFT@bootie.daviey.com) has joined #ceph
[10:55] * Daviey (~DavieyOFT@bootie.daviey.com) Quit (Remote host closed the connection)
[11:02] * yanzheng (~zhyan@171.221.143.132) has joined #ceph
[11:07] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[11:08] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (Ping timeout: 480 seconds)
[11:08] * danieljh (~daniel@0001b4e9.user.oftc.net) has joined #ceph
[11:16] * jtang_ (~jtang@80.111.83.231) has joined #ceph
[11:21] * yanzheng (~zhyan@171.221.143.132) Quit (Quit: This computer has gone to sleep)
[11:25] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) Quit (Remote host closed the connection)
[11:26] * jtang_ (~jtang@80.111.83.231) Quit (Remote host closed the connection)
[11:33] * BManojlovic (~steki@212.200.65.135) has joined #ceph
[11:42] * steki (~steki@212.200.65.129) has joined #ceph
[11:43] * houkouonchi-home (~linux@2001:470:c:c69::2) Quit (Ping timeout: 480 seconds)
[11:44] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (Ping timeout: 480 seconds)
[11:45] * BManojlovic (~steki@212.200.65.135) Quit (Ping timeout: 480 seconds)
[11:49] * mjeanson (~mjeanson@bell.multivax.ca) has joined #ceph
[11:50] * dignus (~jkooijman@t-x.dignus.nl) has left #ceph
[11:50] * houkouonchi-home (~linux@houkouonchi-1-pt.tunnel.tserv15.lax1.ipv6.he.net) has joined #ceph
[11:50] * dignus (~jkooijman@t-x.dignus.nl) has joined #ceph
[11:52] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[11:53] * masta (~masta@190.7.213.210) Quit (Quit: Leaving...)
[11:54] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) has joined #ceph
[12:02] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[12:06] * ikrstic (~ikrstic@93-86-222-245.dynamic.isp.telekom.rs) has joined #ceph
[12:23] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[12:30] * vbellur1 (~vijay@117.198.250.120) has joined #ceph
[12:30] * tdb (~tdb@myrtle.kent.ac.uk) Quit (Quit: brb)
[12:31] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Remote host closed the connection)
[12:32] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[12:35] * vbellur (~vijay@117.201.204.241) Quit (Ping timeout: 480 seconds)
[12:45] * ultimape (~Ultimape@c-174-62-192-41.hsd1.vt.comcast.net) has joined #ceph
[12:48] * LeaChim (~LeaChim@host86-174-29-56.range86-174.btcentralplus.com) has joined #ceph
[12:50] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[12:53] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[13:00] * yanzheng (~zhyan@171.221.143.132) has joined #ceph
[13:00] * yanzheng (~zhyan@171.221.143.132) Quit ()
[13:11] * BManojlovic (~steki@93-86-44-204.dynamic.isp.telekom.rs) has joined #ceph
[13:14] * steki (~steki@212.200.65.129) Quit (Ping timeout: 480 seconds)
[13:18] * LeaChim (~LeaChim@host86-174-29-56.range86-174.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[13:22] * simulx (~simulx@66-194-114-178.static.twtelecom.net) Quit (Ping timeout: 480 seconds)
[13:23] * BManojlovic (~steki@93-86-44-204.dynamic.isp.telekom.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[13:24] * simulx (~simulx@66-194-114-178.static.twtelecom.net) has joined #ceph
[13:29] * tdb (~tdb@myrtle.kent.ac.uk) has joined #ceph
[13:41] * houkouonchi-home (~linux@houkouonchi-1-pt.tunnel.tserv15.lax1.ipv6.he.net) Quit (Ping timeout: 480 seconds)
[13:50] * D-Spair (~dphillips@cpe-74-130-79-134.swo.res.rr.com) has joined #ceph
[13:53] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[13:54] * yanzheng (~zhyan@171.221.143.132) has joined #ceph
[13:59] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[14:01] * yanzheng (~zhyan@171.221.143.132) Quit (Quit: This computer has gone to sleep)
[14:02] * AfC (~andrew@customer-hotspot.esshotell.se) has joined #ceph
[14:16] * monsterzz (~monsterzz@94.19.146.224) Quit (Read error: Connection reset by peer)
[14:16] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[14:18] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) Quit (Remote host closed the connection)
[14:18] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[14:19] -solenoid.oftc.net- *** Looking up your hostname...
[14:19] -solenoid.oftc.net- *** Checking Ident
[14:19] -solenoid.oftc.net- *** Couldn't look up your hostname
[14:19] -solenoid.oftc.net- *** No Ident response
[14:19] * CephLogBot (~PircBot@92.63.168.213) has joined #ceph
[14:19] * Topic is 'http://ceph.com/get || dev channel #ceph-devel || Calamari is Open Source! http://ceph.com/?p=5862'
[14:19] * Set by scuttlemonkey!~scuttle@nat-pool-rdu-t.redhat.com on Fri Jun 20 18:43:30 CEST 2014
[14:21] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[14:22] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) Quit (Remote host closed the connection)
[14:24] -coulomb.oftc.net- *** Looking up your hostname...
[14:24] -coulomb.oftc.net- *** Checking Ident
[14:24] -coulomb.oftc.net- *** Couldn't look up your hostname
[14:24] -coulomb.oftc.net- *** No Ident response
[14:24] * CephLogBot (~PircBot@92.63.168.213) has joined #ceph
[14:24] * Topic is 'http://ceph.com/get || dev channel #ceph-devel || Calamari is Open Source! http://ceph.com/?p=5862'
[14:24] * Set by scuttlemonkey!~scuttle@nat-pool-rdu-t.redhat.com on Fri Jun 20 18:43:30 CEST 2014
[14:24] * monsterzz (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[14:27] * steveeJ (~junky@HSI-KBW-085-216-022-246.hsi.kabelbw.de) Quit (Quit: Leaving)
[14:29] * steveeJ (~junky@HSI-KBW-085-216-022-246.hsi.kabelbw.de) has joined #ceph
[14:36] * monsterz_ (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[14:36] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) Quit (Remote host closed the connection)
[14:37] -coulomb.oftc.net- *** Looking up your hostname...
[14:37] -coulomb.oftc.net- *** Checking Ident
[14:37] -coulomb.oftc.net- *** Couldn't look up your hostname
[14:37] -coulomb.oftc.net- *** No Ident response
[14:37] * CephLogBot (~PircBot@92.63.168.213) has joined #ceph
[14:37] * Topic is 'http://ceph.com/get || dev channel #ceph-devel || Calamari is Open Source! http://ceph.com/?p=5862'
[14:37] * Set by scuttlemonkey!~scuttle@nat-pool-rdu-t.redhat.com on Fri Jun 20 18:43:30 CEST 2014
[14:50] * simulx2 (~simulx@vpn.expressionanalysis.com) has joined #ceph
[14:55] * simulx (~simulx@66-194-114-178.static.twtelecom.net) Quit (Ping timeout: 480 seconds)
[14:56] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[14:57] * diegows (~diegows@190.190.5.238) has joined #ceph
[15:05] * sbadia (~sbadia@yasaw.net) Quit (Quit: Bye)
[15:05] * dgurtner (~dgurtner@46-236.197-178.cust.bluewin.ch) has joined #ceph
[15:09] * AfC (~andrew@customer-hotspot.esshotell.se) Quit (Quit: Leaving.)
[15:09] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[15:12] * KevinPerks (~Adium@2606:a000:80a1:1b00:48fa:a7fd:6724:11d) has joined #ceph
[15:38] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[15:48] * julian (~julian@221.237.148.132) Quit (Quit: Leaving)
[16:09] * dgurtner_ (~dgurtner@124-227.197-178.cust.bluewin.ch) has joined #ceph
[16:11] * dgurtner (~dgurtner@46-236.197-178.cust.bluewin.ch) Quit (Ping timeout: 480 seconds)
[16:17] * joerocklin (~joe@cpe-65-185-149-56.woh.res.rr.com) Quit (Quit: ZNC - http://znc.in)
[16:20] * diegows (~diegows@190.190.5.238) has joined #ceph
[16:20] * joerocklin (~joe@cpe-65-185-149-56.woh.res.rr.com) has joined #ceph
[16:40] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) has joined #ceph
[16:49] * Pedras (~Adium@50.185.218.255) has joined #ceph
[16:52] * Pedras (~Adium@50.185.218.255) Quit ()
[16:54] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[16:55] * dgurtner_ (~dgurtner@124-227.197-178.cust.bluewin.ch) Quit (Ping timeout: 480 seconds)
[16:59] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit ()
[16:59] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[17:01] * sbadia (~sbadia@195.154.119.118) has joined #ceph
[17:07] * [fred] (fred@earthli.ng) Quit (Remote host closed the connection)
[17:23] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[17:33] * maxxware_ (~maxx@149.210.133.105) Quit (Quit: leaving)
[17:37] * maxxware (~maxx@149.210.133.105) has joined #ceph
[17:54] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[18:02] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[18:09] * codice_ (~toodles@97-94-175-73.static.mtpk.ca.charter.com) has joined #ceph
[18:11] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) Quit (Ping timeout: 480 seconds)
[18:14] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[18:15] * yuriw1 is now known as yuriw
[18:24] * [fred] (fred@earthli.ng) has joined #ceph
[18:44] * ultimape (~Ultimape@c-174-62-192-41.hsd1.vt.comcast.net) Quit (Ping timeout: 480 seconds)
[18:45] * ultimape (~Ultimape@c-174-62-192-41.hsd1.vt.comcast.net) has joined #ceph
[18:53] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[18:56] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[18:57] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[19:00] * maxxware (~maxx@149.210.133.105) Quit (Quit: leaving)
[19:01] * maxxware (~maxx@149.210.133.105) has joined #ceph
[19:07] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[19:13] * schmee_ (~quassel@phobos.isoho.st) has joined #ceph
[19:13] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[19:16] * BranchPr1dictor (branch@predictor.org.pl) has joined #ceph
[19:17] * BranchPredictor (branch@predictor.org.pl) Quit (Read error: Connection reset by peer)
[19:17] * schmee (~quassel@41.78.129.253) Quit (Ping timeout: 480 seconds)
[19:17] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[19:43] * matt__ (~matt@64.191.222.109) has joined #ceph
[19:45] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[19:46] <matt__> Is anyone around? I have a serious problem on a product ceph cluster
[19:46] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[19:46] <matt__> production*
[19:46] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit ()
[19:47] <cookednoodles> maybe you should say your issue exactly ?
[19:50] <matt__> I have a ceph cluster that holds a bunch of rbd images for a proxmox server. These are raw format block devices for virtual machines.
[19:50] <matt__> I had to shut everything down for maintenance last night and when I brought everything back up, the proxmox servers can't seem to communicate with the ceph cluster anymore
[19:51] <matt__> ceph status reports health ok and all pgs are active+clean but I can't get proxmox to see any of the images anymore
[19:52] <steveeJ> matt__: how does proxmox attempt to access the images?
[19:53] <matt__> I believe it used librbd, I'm not actually sure. I used the GUI in proxmox to plug in the pool, monitors, etc.
[19:53] <steveeJ> could it simply be a connectivity issue and the monitors are not reachable from your proxmox host?
[19:54] <matt__> I can ping them
[19:54] <matt__> and if I do netstat on one of the monitors
[19:54] <steveeJ> try to telnet to the mon ports from your proxmox host
[19:54] <matt__> I can see connections hitting 6789 from all my proxmox hosts
[19:54] <matt__> Ok
[19:55] <steveeJ> and there are no log entries at all to show here? proxmox (qemu) must be complaining about something right?
[19:56] <matt__> telnet works root@prox1-A:/etc/pve/priv/ceph# telnet 192.168.220.11 6789
[19:56] <matt__> Trying 192.168.220.11...
[19:56] <matt__> Connected to 192.168.220.11.
[19:56] <matt__> Escape character is '^]'.
[19:56] <matt__> ceph v027 (followed by unprintable binary banner bytes)
[19:57] <steveeJ> have you checked if the the admin keyring is still available? on your proxmox host? the proxmox-fs they use in /etc/pve has been doing strange things for me when i played with it
[19:57] <matt__> no relevant logs in /var/log/syslog on the proxmox servers
[19:57] <matt__> yeah my keyring is in /etc/pve/priv/ceph
[19:57] <steveeJ> matt__: it is probably in the vm-related log
[19:57] <matt__> where do I look for the vm-related log?
[19:58] <steveeJ> finding them under /var/log shouldn't be too hard
[20:00] <steveeJ> may be just use the big gun and do: grep -riE "(ceph|rbd)" /var/log/
[20:07] <matt__> I see a couple of these errors before the systems went down last night
[20:07] <matt__> var/lib/rrdcached/db/pve2-storage/prox1-B/storage1-rbd-pool) failed with status -1. (/var/lib/rrdcached/db/pve2-storage/prox1-B/storage1-rbd-pool: illegal attempt to update using time 1409345682 when last update time is 1409345682 (minimum one second step))
[20:07] <matt__> then I just see the start failed errors from when I tried to start VMs that were stored on rbd
[20:08] <matt__> I'm worried this is a problem with ceph and not proxmox. How can I test whether these rbd images can be read properly from another place?
[20:09] <steveeJ> you could try to map them using the "rbd map" command
[20:09] <steveeJ> it uses the kernel rbd driver to map an image to a block device in /dev/ for the image
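A sketch of that check with names from the conversation (auth arguments assume client.admin defaults on a cluster host):

    rbd map TEST --pool rbd
    ls -l /dev/rbd/rbd/TEST        # the image appears here once mapped
    rbd showmapped
    rbd unmap /dev/rbd/rbd/TEST
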
[20:15] <matt__> I tried doing a: rbd info -p rbd --image TEST -m 192.168.220.11 --keyfile /etc/pve/priv/ceph/storage1-rbd-pool.keyring
[20:15] * vbellur1 (~vijay@117.198.250.120) Quit (Quit: Leaving.)
[20:15] <matt__> from the proxmox server
[20:16] <matt__> and I got: 2014-08-30 14:14:56.443951 7f25474b7760 -1 auth: failed to decode key '[client.admin]
[20:16] <matt__> key = AQAC1cFSkI33MBAABQcCpPkrDR45002MmTgb7w==
[20:16] <matt__> '
[20:16] <matt__> should that have worked?
[20:17] <steveeJ> no, that's not your keyfile, that's your keyring
[20:17] <steveeJ> the keyfile should contain the key only
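That is: --keyfile wants the bare base64 key, not the [client.admin] stanza. The key can be extracted from a keyring with ceph-authtool — a sketch using the paths from the conversation:

    # keyring format (what was passed):
    #   [client.admin]
    #       key = <base64 key>
    ceph-authtool -p -n client.admin /etc/pve/priv/ceph/storage1-rbd-pool.keyring > /tmp/admin.key
    rbd info -p rbd --image TEST -m 192.168.220.11 --keyfile /tmp/admin.key
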
[20:18] <steveeJ> also it's a very private string. you shouldn't post it in chat rooms
[20:19] <steveeJ> personally though i don't need to provide a keyfile on a mon/osd host with the keyring in place
[20:19] <steveeJ> just try skipping that argument
[20:19] <steveeJ> oh sorry, just realized that's your proxmox host
[20:20] <matt__> no rbd kernel module on proxmox though right?
[20:21] <steveeJ> don't know. i had compiled my own kernel when i was using it
[20:22] <steveeJ> unfortunately i haven't tried it with ceph. are you using ceph from the proxmox hosts only?
[20:22] <matt__> I can do it from one of my ceph hosts though
[20:23] <matt__> yes, I don't access ceph from anywhere else
[20:24] <matt__> I just did rbd map TEST from one of my monitors and it took
[20:24] <matt__> does that mean anything though?
[20:24] <steveeJ> what did it take?
[20:24] <matt__> TEST now shows in /dev/rbd/rbd/
[20:25] <steveeJ> that's good
[20:26] <matt__> if I export the image of one of my vms can I point proxmox to that file and boot it?
[20:27] <steveeJ> that would be possible yes
[20:27] <steveeJ> but that's not how proxmox planned it
[20:27] <steveeJ> i'm pretty sure they're passing the rbd string to qemu
[20:29] <matt__> but you're saying I should be able to extract this image and boot it like any other image file?
[20:29] <steveeJ> yes, it's in raw format
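A sketch of that rescue path (the image name is hypothetical; rbd export writes the raw bytes to a regular file):

    rbd export rbd/vm-100-disk-1 /tmp/vm-100-disk-1.raw
    qemu-img info /tmp/vm-100-disk-1.raw     # sanity check: should report format: raw
    qemu-system-x86_64 -m 2048 -drive file=/tmp/vm-100-disk-1.raw,format=raw
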
[20:29] <matt__> ok, because I really just need to save one virtual machine
[20:29] * rendar (~I@host36-181-dynamic.3-87-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[20:29] <matt__> I don't care if the rest are gone forever
[20:30] <steveeJ> okay
[20:31] <steveeJ> sry, gotta go
[20:32] <steveeJ> if you decide to get proxmox talk to ceph again, try "qemu-img info rbd:<pool>/<image>" to see what's going wrong
[20:32] <steveeJ> good luck
[20:32] * rendar (~I@host36-181-dynamic.3-87-r.retail.telecomitalia.it) has joined #ceph
[20:32] <matt__> thanks!
[20:38] * songlei (~songlei@182.18.56.73) has joined #ceph
[20:39] * songlei (~songlei@182.18.56.73) Quit ()
[20:39] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[20:39] * songlei (~songlei@182.18.56.73) has joined #ceph
[20:40] * songlei (~songlei@182.18.56.73) has left #ceph
[20:40] * songlei (~songlei@182.18.56.73) has joined #ceph
[20:42] * songlei (~songlei@182.18.56.73) Quit ()
[20:44] * joshwambua (~joshwambu@154.72.0.90) Quit (Quit: No Ping reply in 180 seconds.)
[20:45] * joshwambua (~joshwambu@154.72.0.90) has joined #ceph
[21:04] * matt__ (~matt@64.191.222.109) Quit (Quit: matt__)
[21:06] * linuxkidd (~linuxkidd@cpe-066-057-017-151.nc.res.rr.com) Quit (Read error: Operation timed out)
[21:12] * joshwambua (~joshwambu@154.72.0.90) Quit (Quit: No Ping reply in 180 seconds.)
[21:14] * joshwambua (~joshwambu@154.72.0.90) has joined #ceph
[21:14] * sz0 (~sz0@94.55.197.185) has joined #ceph
[21:19] * linuxkidd (~linuxkidd@rrcs-70-62-120-189.midsouth.biz.rr.com) has joined #ceph
[21:22] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[21:28] * linuxkidd (~linuxkidd@rrcs-70-62-120-189.midsouth.biz.rr.com) Quit (Ping timeout: 480 seconds)
[21:29] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[21:29] * masta (~masta@190.7.205.254) has joined #ceph
[21:31] * longguang_ (~chatzilla@123.126.33.253) has joined #ceph
[21:32] * longguang (~chatzilla@123.126.33.253) Quit (Ping timeout: 480 seconds)
[21:32] * longguang_ is now known as longguang
[21:43] * linuxkidd (~linuxkidd@cpe-066-057-017-151.nc.res.rr.com) has joined #ceph
[21:46] <mnaser> erasure coding and cache tiering falls outside the scope of ceph-deploy.. right?
[22:06] * dgurtner (~dgurtner@51-224.197-178.cust.bluewin.ch) has joined #ceph
[22:16] <alfredodeza> mnaser: correct
[22:16] <mnaser> alfredodeza: figured so, i have two pools with two different roots
[22:16] * mnaser is slowly feeling more comfortable with ceph
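Both are plain ceph CLI operations rather than ceph-deploy ones; a firefly-era sketch with illustrative names and PG counts:

    # erasure-coded pool
    ceph osd erasure-code-profile set myprofile k=2 m=1
    ceph osd pool create ecpool 128 128 erasure myprofile

    # writeback cache tier in front of it
    ceph osd pool create cachepool 128
    ceph osd tier add ecpool cachepool
    ceph osd tier cache-mode cachepool writeback
    ceph osd tier set-overlay ecpool cachepool
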
[22:17] * ultimape (~Ultimape@c-174-62-192-41.hsd1.vt.comcast.net) has left #ceph
[22:18] * Pedras (~Adium@50.185.218.255) has joined #ceph
[22:22] * matt__ (~matt@64.191.222.109) has joined #ceph
[22:23] * kfei (~root@61-227-15-21.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[22:30] * matt__ (~matt@64.191.222.109) Quit (Quit: matt__)
[22:31] * tinklebear (~tinklebea@cpe-066-057-253-171.nc.res.rr.com) has joined #ceph
[22:33] * tab (~oftc-webi@93-103-91-169.dynamic.t-2.net) has joined #ceph
[22:35] * kfei (~root@114-27-89-2.dynamic.hinet.net) has joined #ceph
[22:37] * angdraug (~angdraug@host-200-119.pubnet.pdx.edu) has joined #ceph
[22:44] <mnaser> can I safely remove data, metadata and rbd pools (if i created my own pool, will be using block storage only)
[22:44] * fmanana (~fdmanana@bl4-181-106.dsl.telepac.pt) Quit (Quit: Leaving)
[22:45] * dgurtner (~dgurtner@51-224.197-178.cust.bluewin.ch) Quit (Read error: Connection reset by peer)
[22:45] * dgurtner (~dgurtner@212.243.10.250) has joined #ceph
[22:46] <mnaser> found my answer here .. http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-April/001163.html
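The linked answer comes down to deleting the unused default pools — a sketch, assuming nothing (e.g. CephFS) still uses data/metadata; the pool name is repeated as a safety check:

    ceph osd pool delete data data --yes-i-really-really-mean-it
    ceph osd pool delete metadata metadata --yes-i-really-really-mean-it
    ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
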
[22:50] * adamcrume (~quassel@2601:9:6680:47:148a:d987:494e:9db3) has joined #ceph
[22:51] * funnel (~funnel@0001c7d4.user.oftc.net) Quit (Read error: Operation timed out)
[22:52] * funnel (~funnel@0001c7d4.user.oftc.net) has joined #ceph
[23:01] * [fred] (fred@earthli.ng) Quit (Quit: +++ATH0)
[23:05] * sz0 (~sz0@94.55.197.185) Quit ()
[23:26] * [fred] (fred@earthli.ng) has joined #ceph
[23:35] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[23:38] * ikrstic (~ikrstic@93-86-222-245.dynamic.isp.telekom.rs) Quit (Quit: Konversation terminated!)
[23:39] * ikrstic (~ikrstic@93-86-222-245.dynamic.isp.telekom.rs) has joined #ceph
[23:54] * adamcrume (~quassel@2601:9:6680:47:148a:d987:494e:9db3) Quit (Remote host closed the connection)
[23:59] <lightspeed> I'm getting "mount error 95 = Operation not supported" when trying to mount cephfs on a client using the cephfs kernel module (kernel version 3.15.8) - any idea what might be wrong?
[23:59] <lightspeed> ceph version in use is 0.80.5
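For reference, the kernel-client mount being attempted usually looks like this (monitor address and key are placeholders):

    sudo mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secret=<base64-key>
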

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.