#ceph IRC Log

Index

IRC Log for 2014-08-11

Timestamps are in GMT/BST.

[0:02] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) Quit (Remote host closed the connection)
[0:09] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) Quit (Remote host closed the connection)
[0:28] * diegows (~diegows@190.190.5.238) has joined #ceph
[0:35] * fdmanana (~fdmanana@bl5-173-96.dsl.telepac.pt) has joined #ceph
[0:35] * garphy`aw is now known as garphy
[0:37] * garphy is now known as garphy`aw
[0:43] * rendar (~I@95.234.176.93) Quit ()
[0:47] * capri_on (~capri@212.218.127.222) has joined #ceph
[0:49] * Cube (~Cube@66-87-131-55.pools.spcsdns.net) has joined #ceph
[0:50] * Sysadmin88 (~IceChat77@94.4.0.39) has joined #ceph
[0:53] * capri (~capri@212.218.127.222) Quit (Ping timeout: 480 seconds)
[0:54] * Guest140 (~pop@se2x.mullvad.net) has joined #ceph
[0:55] <Guest140> HI I AM A BITCOIN DONATION BOT PLEASE DONATE BTC TO 1337rD387Bzo9kuRVPfYQmtYDVDfNT2Jwk (EVEN SMALL AMOUNTS HELP)
[1:01] * JC (~JC@AMontpellier-651-1-445-156.w81-251.abo.wanadoo.fr) has joined #ceph
[1:02] * JC (~JC@AMontpellier-651-1-445-156.w81-251.abo.wanadoo.fr) Quit ()
[1:03] * Guest140 (~pop@se2x.mullvad.net) Quit (autokilled: Do not spam. Mail support@oftc.net with questions. (2014-08-10 23:03:26))
[1:04] * JC (~JC@AMontpellier-651-1-445-156.w81-251.abo.wanadoo.fr) has joined #ceph
[1:19] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[1:26] * Jakey (uid1475@id-1475.uxbridge.irccloud.com) Quit (Quit: Connection closed for inactivity)
[1:27] * JC (~JC@AMontpellier-651-1-445-156.w81-251.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[1:52] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[2:03] * jamin (~jamin@65-100-221-49.dia.static.qwest.net) has joined #ceph
[2:04] * oms101 (~oms101@p20030057EA639E00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[2:13] * oms101 (~oms101@p20030057EA0C7D00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[2:17] * Cube (~Cube@66-87-131-55.pools.spcsdns.net) Quit (Quit: Leaving.)
[2:34] * LeaChim (~LeaChim@host86-159-115-162.range86-159.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:41] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) Quit (Read error: Connection reset by peer)
[2:41] * infernixx (nix@cl-1404.ams-04.nl.sixxs.net) has joined #ceph
[2:42] * infernixx is now known as infernix
[2:42] * Cube (~Cube@66-87-131-55.pools.spcsdns.net) has joined #ceph
[2:43] * b0e (~aledermue@x2f28411.dyn.telefonica.de) has joined #ceph
[3:04] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[3:10] * lupu (~lupu@86.107.101.214) has joined #ceph
[3:26] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[3:28] <guppy> what is a bit coin donation bot besides spam?
[3:30] * lupu (~lupu@86.107.101.214) Quit (Ping timeout: 480 seconds)
[3:34] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[3:41] * Sysadmin88 (~IceChat77@94.4.0.39) Quit (Ping timeout: 480 seconds)
[3:45] * zhaochao (~zhaochao@111.204.252.9) has joined #ceph
[3:46] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) Quit (Read error: Connection reset by peer)
[3:46] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) has joined #ceph
[3:47] * b0e (~aledermue@x2f28411.dyn.telefonica.de) Quit (Quit: Leaving.)
[3:49] * lucas1 (~Thunderbi@222.240.148.154) has joined #ceph
[4:02] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[4:11] * longguang (~chatzilla@123.126.33.253) has joined #ceph
[4:11] <longguang> how to debug crush?
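[editor's note] For readers with longguang's question: CRUSH placement is usually debugged offline with crushtool. A minimal sketch (assumes a reachable cluster and the standard Ceph tools; the rule number and replica count are placeholders):

```shell
# Extract the compiled CRUSH map from the cluster and decompile it for reading.
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# Dry-run placements: map inputs 0..9 through rule 0 with 3 replicas
# and print which OSDs each would land on.
crushtool -i crushmap.bin --test --rule 0 --num-rep 3 --min-x 0 --max-x 9 --show-mappings

# Rough distribution check across all OSDs.
crushtool -i crushmap.bin --test --num-rep 3 --show-utilization
```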
[4:14] * zhangdongmao (~zhangdong@203.192.156.9) has joined #ceph
[4:14] * vz (~vz@122.167.206.156) has joined #ceph
[4:16] * yuriw1 (~Adium@c-76-126-35-111.hsd1.ca.comcast.net) has joined #ceph
[4:22] * yuriw (~Adium@c-76-126-35-111.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:31] * Jakey (uid1475@id-1475.uxbridge.irccloud.com) has joined #ceph
[4:33] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[4:33] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[4:35] * shang (~ShangWu@175.41.48.77) has joined #ceph
[4:36] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[4:43] <Jakey> why my ceph monitors is rank -1 ???????????????
[4:43] <Jakey> please help
[4:43] <Jakey> where is dmick
[4:47] <Jakey> this is what i am getting from the mon_status
[4:47] <Jakey> https://www.irccloud.com/pastebin/gfRKGyvl
[4:47] <Jakey> please help
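[editor's note] A monitor reporting rank -1 in mon_status generally means it is not (or not correctly) listed in the monmap, e.g. its name or bind address does not match what ceph.conf declares. A hedged checklist (mon.node7 is the monitor name that appears later in this log):

```shell
# The monitor's own view, via its admin socket:
ceph daemon mon.node7 mon_status

# The cluster-wide monmap; rank -1 usually means this daemon's
# name/address pair is missing or different here:
ceph mon dump

# Confirm mon_initial_members / mon_host match the address the daemon binds:
grep -E 'mon_initial_members|mon_host' /etc/ceph/ceph.conf
```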
[4:51] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[4:52] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[4:57] * codice_ (~toodles@97-94-175-73.static.mtpk.ca.charter.com) has joined #ceph
[4:59] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) Quit (Ping timeout: 480 seconds)
[5:10] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[5:10] * jtaguinerd (~Adium@112.205.6.82) has joined #ceph
[5:19] * Vacum__ (~vovo@88.130.202.135) has joined #ceph
[5:25] * jjgalvez (~JuanJose@ip72-193-50-198.lv.lv.cox.net) has joined #ceph
[5:26] * Vacum_ (~vovo@i59F793F8.versanet.de) Quit (Ping timeout: 480 seconds)
[5:26] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[5:28] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit ()
[5:33] * Cube (~Cube@66-87-131-55.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[5:33] * Cube2 (~Cube@66.87.131.55) has joined #ceph
[5:33] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[5:39] * lucas1 (~Thunderbi@222.240.148.154) Quit (Ping timeout: 480 seconds)
[5:45] * yguang11 (~yguang11@2406:2000:ef96:e:9d5a:6ef1:a198:eac9) has joined #ceph
[6:04] * dmsimard_away is now known as dmsimard
[6:05] * yguang11 (~yguang11@2406:2000:ef96:e:9d5a:6ef1:a198:eac9) Quit ()
[6:05] * Cube2 (~Cube@66.87.131.55) Quit (Ping timeout: 480 seconds)
[6:12] * kanagaraj (~kanagaraj@117.197.190.0) has joined #ceph
[6:13] * Duron (~Duron@113.69.100.143) has joined #ceph
[6:14] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[6:16] <Duron> hello, I am using rbd in production to host KVM images, but when I delete an rbd image the cluster gets a lot of blocked requests, any idea ?
[6:16] <Duron> I checked all the osd nodes; the CPU, disk and network are all OK
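[editor's note] `rbd rm` deletes every backing RADOS object from the client side, which can swamp spinning OSDs even when per-node CPU/disk/network look healthy. A sketch for observing the impact; the ioprio mitigation is an assumption (the option only exists on newer releases and needs the CFQ scheduler), not something Duron confirmed:

```shell
# Identify which OSDs the blocked requests are stuck on.
ceph health detail

# Watch slow-request warnings as they happen.
ceph -w

# Optional mitigation where supported: run OSD disk threads at idle
# I/O priority so delete work yields to client I/O.
ceph tell osd.\* injectargs '--osd_disk_thread_ioprio_class idle --osd_disk_thread_ioprio_priority 7'
```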
[6:22] * rdas (~rdas@121.244.87.115) has joined #ceph
[6:24] * sjm (~sjm@108.53.250.33) has joined #ceph
[6:26] * blahnana (~bman@us1.blahnana.com) Quit (Remote host closed the connection)
[6:28] * kanagaraj_ (~kanagaraj@115.244.226.85) has joined #ceph
[6:30] * blahnana (~bman@us1.blahnana.com) has joined #ceph
[6:31] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[6:32] * lalatenduM (~lalatendu@122.171.64.194) has joined #ceph
[6:32] * kanagaraj (~kanagaraj@117.197.190.0) Quit (Ping timeout: 480 seconds)
[6:33] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[6:35] * vz (~vz@122.167.206.156) Quit (Quit: Leaving...)
[6:42] * kanagaraj_ (~kanagaraj@115.244.226.85) Quit (Ping timeout: 480 seconds)
[6:51] * dmsimard is now known as dmsimard_away
[7:00] * Duron (~Duron@113.69.100.143) Quit (Quit: Going offline, see ya! (www.adiirc.com))
[7:03] * vbellur (~vijay@122.167.177.49) Quit (Ping timeout: 480 seconds)
[7:16] * rendar (~I@host75-182-dynamic.37-79-r.retail.telecomitalia.it) has joined #ceph
[7:23] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[7:27] * jtaguinerd1 (~Adium@112.205.20.187) has joined #ceph
[7:27] * vbellur (~vijay@121.244.87.117) has joined #ceph
[7:28] * v2 (~vshankar@121.244.87.117) has joined #ceph
[7:30] * jtaguinerd (~Adium@112.205.6.82) Quit (Ping timeout: 480 seconds)
[7:31] * jtaguinerd (~Adium@112.205.20.187) has joined #ceph
[7:32] * jtaguinerd1 (~Adium@112.205.20.187) Quit (Read error: Connection reset by peer)
[7:33] * dlan_ (~dennis@116.228.88.131) Quit (Remote host closed the connection)
[7:37] * jtaguinerd1 (~Adium@112.205.20.187) has joined #ceph
[7:41] * CAPSLOCK2000 (~oftc@2001:610:748:1::8) Quit (Ping timeout: 480 seconds)
[7:41] * jtaguinerd (~Adium@112.205.20.187) Quit (Ping timeout: 480 seconds)
[7:41] * jtaguinerd1 (~Adium@112.205.20.187) Quit ()
[7:42] * jtaguinerd (~Adium@112.205.20.187) has joined #ceph
[7:49] * dlan (~dennis@116.228.88.131) has joined #ceph
[7:50] * ashishchandra (~ashish@49.32.0.66) has joined #ceph
[8:01] * longguang_ (~chatzilla@123.126.33.253) has joined #ceph
[8:06] * longguang (~chatzilla@123.126.33.253) Quit (Ping timeout: 480 seconds)
[8:06] * longguang_ is now known as longguang
[8:13] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[8:14] * garphy`aw is now known as garphy
[8:16] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[8:16] * jtaguinerd1 (~Adium@112.198.82.89) has joined #ceph
[8:19] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[8:20] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[8:22] * jtaguinerd1 (~Adium@112.198.82.89) Quit (Remote host closed the connection)
[8:23] * jtaguinerd (~Adium@112.205.20.187) Quit (Ping timeout: 480 seconds)
[8:24] * lczerner (~lczerner@ip56-4.tvtrinec.cz) has joined #ceph
[8:25] * KevinPerks (~Adium@2606:a000:80a1:1b00:b07e:b574:c401:16b6) Quit (Quit: Leaving.)
[8:26] * cok (~chk@2a02:2350:18:1012:bcf5:fcf4:6af7:21b1) has joined #ceph
[8:37] * fsimonce (~simon@host135-17-dynamic.8-79-r.retail.telecomitalia.it) has joined #ceph
[8:41] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) has joined #ceph
[8:41] * pressureman (~pressurem@62.217.45.26) Quit (Ping timeout: 480 seconds)
[8:48] * lcavassa (~lcavassa@89.184.114.246) has joined #ceph
[8:50] * pressureman (~pressurem@62.217.45.26) has joined #ceph
[8:50] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[8:54] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) Quit ()
[8:54] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Quit: Leaving.)
[8:54] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[8:59] * saurabh (~saurabh@121.244.87.117) has joined #ceph
[9:04] * JC (~JC@AMontpellier-651-1-445-156.w81-251.abo.wanadoo.fr) has joined #ceph
[9:05] * lupu (~lupu@86.107.101.214) has joined #ceph
[9:05] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[9:05] * ChanServ sets mode +v andreask
[9:08] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[9:09] * hybrid512 (~walid@195.200.167.70) has joined #ceph
[9:10] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) Quit (Remote host closed the connection)
[9:11] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:19] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[9:24] * vbellur (~vijay@121.244.87.117) has joined #ceph
[9:25] * analbeard (~shw@support.memset.com) has joined #ceph
[9:27] * oro (~oro@2001:620:20:16:a98b:9a85:fdd6:e8b2) has joined #ceph
[9:41] * ikrstic (~ikrstic@93-87-118-93.dynamic.isp.telekom.rs) has joined #ceph
[9:44] * jtang_ (~jtang@80.111.83.231) Quit (Remote host closed the connection)
[9:44] * jtang_ (~jtang@80.111.83.231) has joined #ceph
[9:56] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[9:57] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[10:00] * Cube (~Cube@66-87-64-112.pools.spcsdns.net) has joined #ceph
[10:04] * dmsimard_away is now known as dmsimard
[10:05] * vbellur (~vijay@121.244.87.117) has joined #ceph
[10:05] * hybrid512 (~walid@195.200.167.70) Quit (Quit: Leaving.)
[10:07] * hybrid512 (~walid@195.200.167.70) has joined #ceph
[10:07] * Cube (~Cube@66-87-64-112.pools.spcsdns.net) Quit (Quit: Leaving.)
[10:11] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:12] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[10:14] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) Quit (Read error: Operation timed out)
[10:14] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) has joined #ceph
[10:16] * cok (~chk@2a02:2350:18:1012:bcf5:fcf4:6af7:21b1) Quit (Quit: Leaving.)
[10:18] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[10:19] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[10:20] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[10:24] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[10:25] * ade (~abradshaw@dslb-094-223-123-106.094.223.pools.vodafone-ip.de) has joined #ceph
[10:26] * saurabh (~saurabh@121.244.87.117) Quit (Ping timeout: 480 seconds)
[10:27] <Jakey> i am getting this error
[10:27] <Jakey> [ceph@node7 m_cluster]$ ceph mon dump
[10:27] <Jakey> dumped monmap epoch 1
[10:27] <Jakey> epoch 1
[10:27] <Jakey> fsid e39f4ebf-7ee2-47ef-8a65-6e5d19bd84f2
[10:27] <Jakey> last_changed 0.000000
[10:27] <Jakey> created 0.000000
[10:27] <Jakey> 0: 192.168.50.9:6789/0 mon.node7
[10:28] <andreask> hmm .. .what error?
[10:28] <andreask> brand new installation?
[10:29] * jjgalvez (~JuanJose@ip72-193-50-198.lv.lv.cox.net) Quit (Quit: Leaving.)
[10:29] * jjgalvez (~JuanJose@ip72-193-50-198.lv.lv.cox.net) has joined #ceph
[10:36] * wido_ is now known as wido
[10:37] * jjgalvez (~JuanJose@ip72-193-50-198.lv.lv.cox.net) Quit (Ping timeout: 480 seconds)
[10:40] * rdas (~rdas@121.244.87.115) Quit (Quit: Leaving)
[10:44] * rdas (~rdas@121.244.87.115) has joined #ceph
[10:45] * LeaChim (~LeaChim@host86-159-115-162.range86-159.btcentralplus.com) has joined #ceph
[10:53] * zhangdongmao (~zhangdong@203.192.156.9) Quit (Quit: Konversation terminated!)
[10:53] * zhangdongmao (~zhangdong@203.192.156.9) has joined #ceph
[10:58] * sjm (~sjm@108.53.250.33) Quit (Remote host closed the connection)
[11:01] <Joffer> Would the newly open sourced calamari work fine with an older ceph (0.67.9)? (plan to upgrade ceph later)
[11:03] <longguang> how to use libcls_hello?
[11:04] <andreask> Joffer: yes, that should work fine
[11:05] <Joffer> Great. Think I should look into calamari to get more insight and stats on my ceph. Might help me figure out why it goes into slow/blocking requests several times a day
[11:10] * dmsimard is now known as dmsimard_away
[11:20] * saurabh (~saurabh@121.244.87.117) has joined #ceph
[11:27] * thb (~me@2a02:2028:20d:8870:6df3:b9e0:a6d6:df46) has joined #ceph
[11:28] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[11:29] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[11:29] <tserong> here's a new toy: https://susestudio.com/a/eEqfPk/calamari-opensuse-13-1
[11:29] <tserong> in case anyone wants to play
[11:30] * Sysadmin88 (~IceChat77@05452df5.skybroadband.com) has joined #ceph
[11:30] <tserong> theoretically might work regardless of what distro you're running on your ceph nodes -- see http://lists.ceph.com/pipermail/ceph-calamari-ceph.com/2014-August/000243.html for some notes
[11:39] * vmx (~vmx@dslb-084-056-050-159.084.056.pools.vodafone-ip.de) has joined #ceph
[11:40] * zhaochao (~zhaochao@111.204.252.9) Quit (Ping timeout: 480 seconds)
[11:51] * zhaochao (~zhaochao@111.204.252.9) has joined #ceph
[11:58] * i_m (~ivan.miro@gbibp9ph1--blueice2n1.emea.ibm.com) has joined #ceph
[12:02] * rdas (~rdas@121.244.87.115) Quit (Quit: Leaving)
[12:03] * lalatenduM (~lalatendu@122.171.64.194) Quit (Ping timeout: 480 seconds)
[12:14] * dmsimard_away is now known as dmsimard
[12:19] * longguang (~chatzilla@123.126.33.253) Quit (Ping timeout: 480 seconds)
[12:19] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[12:22] * longguang (~chatzilla@123.126.33.253) has joined #ceph
[12:23] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) Quit (Quit: Leaving.)
[12:26] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[12:32] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[12:45] * longguang (~chatzilla@123.126.33.253) Quit (Ping timeout: 480 seconds)
[12:46] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Ping timeout: 480 seconds)
[12:51] * drankis (~drankis__@89.111.13.198) has joined #ceph
[12:57] * longguang (~chatzilla@123.126.33.253) has joined #ceph
[12:59] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[13:01] <longguang> does mds have a journal file?
[13:01] * AbyssOne is now known as a1-away
[13:02] * a1-away is now known as AbyssOne
[13:04] * tdb (~tdb@myrtle.kent.ac.uk) Quit (Ping timeout: 480 seconds)
[13:04] * tdb (~tdb@myrtle.kent.ac.uk) has joined #ceph
[13:19] * lucas1 (~Thunderbi@222.247.57.50) has joined #ceph
[13:21] * lucas1 (~Thunderbi@222.247.57.50) Quit ()
[13:28] * lupu (~lupu@86.107.101.214) Quit (Ping timeout: 480 seconds)
[13:31] * KevinPerks (~Adium@2606:a000:80a1:1b00:b4e1:8d6f:bf1c:b62) has joined #ceph
[13:32] * cok (~chk@46.30.211.29) has joined #ceph
[13:35] * b0e1 (~aledermue@213.95.15.4) has joined #ceph
[13:36] * b0e (~aledermue@juniper1.netways.de) Quit (Ping timeout: 480 seconds)
[13:37] * saurabh (~saurabh@121.244.87.117) Quit (Quit: Leaving)
[13:41] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[13:41] * ChanServ sets mode +v andreask
[13:44] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[13:51] * b0e1 (~aledermue@213.95.15.4) Quit (Ping timeout: 480 seconds)
[13:56] * Japje (~Japje@2001:968:672:1::12) has joined #ceph
[14:00] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:05] * zigo (quasselcor@ipv6-ftp.gplhost.com) Quit (Quit: No Ping reply in 180 seconds.)
[14:05] * zigo (quasselcor@ipv6-ftp.gplhost.com) has joined #ceph
[14:07] * zhaochao (~zhaochao@111.204.252.9) has left #ceph
[14:11] * lalatenduM (~lalatendu@122.172.85.86) has joined #ceph
[14:24] * diegows (~diegows@190.190.5.238) has joined #ceph
[14:25] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) has joined #ceph
[14:34] * aknapp (~aknapp@64.202.160.233) has joined #ceph
[14:35] * jrankin (~jrankin@d47-69-66-231.try.wideopenwest.com) has joined #ceph
[14:37] * ganders (~root@200-127-158-54.net.prima.net.ar) has joined #ceph
[14:37] * dneary (~dneary@107-1-123-195-ip-static.hfc.comcastbusiness.net) has joined #ceph
[14:38] * ashishchandra (~ashish@49.32.0.66) Quit (Quit: Leaving)
[14:40] * tupper (~chatzilla@108-83-203-37.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[14:43] * dneary (~dneary@107-1-123-195-ip-static.hfc.comcastbusiness.net) Quit (Quit: Exeunt dneary)
[14:46] * dmsimard (~dmsimard@198.72.123.202) Quit (Quit: Signed off)
[14:47] * dmsimard (~dmsimard@198.72.123.202) has joined #ceph
[14:47] * ade (~abradshaw@dslb-094-223-123-106.094.223.pools.vodafone-ip.de) Quit (Quit: Too sexy for his shirt)
[14:50] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[14:50] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has left #ceph
[14:54] * mtl1 (~Adium@c-67-174-109-212.hsd1.co.comcast.net) has joined #ceph
[15:03] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) Quit (Remote host closed the connection)
[15:04] * b0e (~aledermue@juniper1.netways.de) Quit (Quit: Leaving.)
[15:07] * aknapp (~aknapp@64.202.160.233) Quit (Remote host closed the connection)
[15:08] * aknapp (~aknapp@64.202.160.233) has joined #ceph
[15:08] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[15:11] * cok1 (~chk@94.191.185.65.mobile.3.dk) has joined #ceph
[15:13] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[15:14] * cok (~chk@46.30.211.29) Quit (Ping timeout: 480 seconds)
[15:16] * aknapp (~aknapp@64.202.160.233) Quit (Ping timeout: 480 seconds)
[15:19] * cok1 (~chk@94.191.185.65.mobile.3.dk) Quit (Ping timeout: 480 seconds)
[15:20] * lupu (~lupu@86.107.101.214) has joined #ceph
[15:21] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[15:29] <mtl1> Hi. Has anyone dealt with one OSD being overly full compared to other OSDs on the same server, with the same weight, etc…? Would reweighting the overfull OSD down slightly be the right solution?
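[editor's note] mtl1's instinct matches common practice: `ceph osd reweight` adjusts the temporary 0-1 override weight without touching the CRUSH weight. Illustrative commands; osd.12 and the numeric factors are placeholders:

```shell
# Confirm the imbalance first ('ceph osd df' on newer releases;
# otherwise compare 'df' output across the OSD mounts).
ceph osd df

# Nudge only the overfull OSD down a little.
ceph osd reweight 12 0.9

# Or let ceph pick: reweight OSDs above 120% of average utilization.
ceph osd reweight-by-utilization 120
```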
[15:38] * jtang_ (~jtang@80.111.83.231) Quit (Remote host closed the connection)
[15:39] * jtang_ (~jtang@80.111.83.231) has joined #ceph
[15:47] * lczerner (~lczerner@ip56-4.tvtrinec.cz) Quit (Ping timeout: 480 seconds)
[15:47] * brad_mssw (~brad@shop.monetra.com) has joined #ceph
[15:51] * joao|lap (~JL@a79-168-5-220.cpe.netcabo.pt) has joined #ceph
[15:51] * ChanServ sets mode +o joao|lap
[15:54] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[15:57] * madkiss1 (~madkiss@chello080108052132.20.11.vie.surfer.at) Quit (Read error: Connection reset by peer)
[15:57] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[16:00] * madkiss (~madkiss@chello080108052132.20.11.vie.surfer.at) has joined #ceph
[16:02] * cok (~chk@2a02:2350:18:1012:48d8:81fd:7141:cec4) has joined #ceph
[16:03] * joao|lap (~JL@a79-168-5-220.cpe.netcabo.pt) Quit (Ping timeout: 480 seconds)
[16:04] * vbellur (~vijay@122.172.200.139) has joined #ceph
[16:05] * jtang_ (~jtang@80.111.83.231) Quit (Quit: Leaving)
[16:08] * v2 (~vshankar@121.244.87.117) Quit (Quit: Leaving)
[16:08] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) has joined #ceph
[16:08] * madkiss (~madkiss@chello080108052132.20.11.vie.surfer.at) Quit (Read error: No route to host)
[16:09] * andreask (~andreask@zid-vpnn076.uibk.ac.at) has joined #ceph
[16:09] * ChanServ sets mode +v andreask
[16:09] <jiffe> anyone using ceph to successfully serve web content? I'm using it to serve our piwik webstats for testing, and it seems that after running for a day or so everything backs up: I have to kill apache, umount the ceph disk (which takes a while), remount it, and start apache again
[16:10] * garphy is now known as garphy`aw
[16:11] * hufman (~hufman@cpe-184-58-235-28.wi.res.rr.com) has joined #ceph
[16:12] <hufman> hello good friends!
[16:12] * madkiss (~madkiss@chello080108052132.20.11.vie.surfer.at) has joined #ceph
[16:12] <hufman> would anyone be able to help debug my ceph-osd daemons, and why they seem to not be booting properly?
[16:13] * gregmark (~Adium@68.87.42.115) has joined #ceph
[16:14] <hufman> the ceph-osd starts, discovers its IP address, detects xfs, opens and closes the journal, and then freezes
[16:14] <hufman> strace shows that it's waiting on a futex, but i'm not sure what for
[16:15] <hufman> another issue, which might be related: when the ceph-osd service starts, the "ceph osd crush create-or-move" call never returns
[16:15] <hufman> but all 3 of my ceph-mons are up and in quorum
[16:16] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) Quit (Ping timeout: 480 seconds)
[16:18] <hufman> nothing looks obviously wrong in the mon logs
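[editor's note] When `ceph osd crush create-or-move` never returns while the mons claim quorum, the stall is usually mon-side (auth problems, clock skew, or the constant re-elections hufman reports later in this log). A debugging sketch under those assumptions:

```shell
# Does a plain mon command hang too? Bound it so the shell gets control back.
timeout 10 ceph -s || echo "mon commands are hanging"

# Run the stuck OSD in the foreground with verbose logging to see its last step.
ceph-osd -i 0 -f --debug-osd 20 --debug-ms 1

# On each mon host, check election state via the admin socket.
ceph daemon mon.$(hostname -s) quorum_status
```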
[16:19] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[16:22] * terje_ (~joey@63.228.91.225) has joined #ceph
[16:23] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit ()
[16:23] <hufman> well, the mon logs are spamming tons of lines about paxos active and paxos recovering
[16:24] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) has joined #ceph
[16:24] * terje__ (~joey@63.228.91.225) Quit (Ping timeout: 480 seconds)
[16:27] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[16:31] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[16:32] * Isotopp (kris@newroot.koehntopp.de) has joined #ceph
[16:32] * tcatm (~quassel@2a01:4f8:151:13c3:5054:ff:feff:cbce) Quit (Quit: No Ping reply in 180 seconds.)
[16:32] <Isotopp> hello. I have a ceph installation using xfs as a local file system.
[16:32] * mgarcesMZ (~mgarces@5.206.228.5) has joined #ceph
[16:32] <Isotopp> I also have external logs.
[16:33] <mgarcesMZ> hi there
[16:33] <Isotopp> i need to specify the external log device on ceph filesystem mount
[16:33] <Isotopp> where do I do that?
[16:33] <mgarcesMZ> I can't find the radosgw-agent rpm in the RHEL7 repos… anyone know why?
[16:33] <Isotopp> mount -t xfs logdev=/dev/fioa4 /dev/sdb1 /var/lib/ceph/osd/ceph-1 and the like
[16:33] <Isotopp> for each disk
[16:33] * tcatm (~quassel@2a01:4f8:151:13c3:5054:ff:feff:cbce) has joined #ceph
[16:34] <Isotopp> there must be a table where for each device i specify the logdev, because there is no linear relationship
[16:34] <Isotopp> and xfs for some reason does not write down logdev or logdev uuid in the superblock
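[editor's note] Since XFS does not record the external logdev in its superblock, the device-to-logdev table has to live somewhere else; /etc/fstab is one natural place. The first line uses devices from this log, the second is illustrative:

```
# /etc/fstab - OSD data disks with external XFS logs
/dev/sdb1  /var/lib/ceph/osd/ceph-1  xfs  noatime,logdev=/dev/fioa4  0 0
/dev/sdc1  /var/lib/ceph/osd/ceph-2  xfs  noatime,logdev=/dev/fioa5  0 0
```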
[16:35] * linuxkidd (~linuxkidd@cpe-066-057-017-151.nc.res.rr.com) has joined #ceph
[16:35] <Isotopp> grr @ http://tracker.ceph.com/issues/2549
[16:35] <Isotopp> ceph-disk-activate does not respect the mount options yet.
[16:37] * diegows (~diegows@190.190.5.238) has joined #ceph
[16:37] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[16:39] <andreask> Isotopp: that makes things very complicated .... tried specifying the specific mount options per osd?
[16:40] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[16:41] <Isotopp> where do i do that?
[16:41] <Isotopp> andreask: yes, i do not understand why xfs for example does not write down mount options for the logdev in the superblock, but they do not
[16:41] <Isotopp> so i have a handcrafted mount statement per device
[16:41] <Isotopp> because I need to restate the logdev option for each mount for each device
[16:41] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit ()
[16:42] <andreask> try adding osd_fs_mount_options_xfs for each osd in its configuration file setting [osd.$id]
[16:43] <Isotopp> i assume you mean I have to add [osd.$id] sections to ceph.conf?
[16:44] <andreask> yes, for each osd
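[editor's note] Spelled out, andreask's suggestion is per-OSD sections in ceph.conf like the following, with the option name as he gives it. As the rest of this log shows, ceph-disk of that era only honored the global osd_mount_options_xfs, so treat this as the intended shape rather than a confirmed fix:

```
[osd.0]
osd_fs_mount_options_xfs = logdev=/dev/fioa5

[osd.1]
osd_fs_mount_options_xfs = logdev=/dev/fioa6
```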
[16:45] * nhm (~nhm@65-128-141-191.mpls.qwest.net) has joined #ceph
[16:45] * ChanServ sets mode +o nhm
[16:47] <Isotopp> [243204.620815] XFS (sdb1): filesystem is marked as having an external log; specify logdev on the mount command line.
[16:47] <Isotopp> [243204.620820] XFS (sdb1): SB validate failed with error 22.
[16:47] <Isotopp> hm
[16:47] <Isotopp> how do i get the shell command run in a debug?
[16:49] <andreask> for ceph-deploy? there is a log in the directory where you run it
[16:51] <Isotopp> no, it was running before already
[16:51] <Isotopp> i did initctl start ceph-osd-all
[16:51] <Isotopp> that fires the udev rule
[16:51] <Isotopp> that fires ceph-disk-activate
[16:52] <Isotopp> that is a shell script that wraps ceph-disk activate
[16:52] <andreask> .... hmm ... having something like "osd_fs_mount_options_xfs = logdev=/dev/xfslogs/osd_$id" should also work
[16:54] <andreask> let lvm help ...
[16:54] * kevinc (~kevinc__@client65-44.sdsc.edu) has joined #ceph
[16:54] <Isotopp> in ceph.conf
[16:54] <Isotopp> [osd.0]
[16:54] <Isotopp> odf_fs_mount_options_xfs = logdev=/dev/fioa5
[16:54] <Isotopp> root@cloud12:/usr/sbin# ceph-disk activate /dev/sdb1
[16:54] <Isotopp> ceph-disk: Mounting filesystem failed: Command '['/bin/mount', '-t', 'xfs', '-o', 'noatime', '--', '/dev/sdb1', '/var/lib/ceph/tmp/mnt.DFhuD8']' returned non-zero exit status 32
[16:54] <Isotopp> that does not pick up my mount option
[16:55] <andreask> odf_fs ?
[16:56] <Isotopp> right
[16:56] <Isotopp> [osd.0]
[16:56] <Isotopp> osd_fs_mount_options_xfs = logdev=/dev/fioa5
[16:56] * The_Bishop (~bishop@2001:470:50b6:0:c1ba:4d17:cefb:fa45) has joined #ceph
[16:56] <Isotopp> root@cloud12:/usr/sbin# ceph-disk activate /dev/sdb1
[16:56] <Isotopp> ceph-disk: Mounting filesystem failed: Command '['/bin/mount', '-t', 'xfs', '-o', 'noatime', '--', '/dev/sdb1', '/var/lib/ceph/tmp/mnt.CB7T5g']' returned non-zero exit status 32
[16:56] <Isotopp> no change
[16:57] <Isotopp> manually, root@cloud12:/usr/sbin# !mount
[16:57] <Isotopp> mount -t xfs -o logdev=/dev/fioa5 /dev/sdb1 /mnt
[16:57] <Isotopp> works
[16:57] <madkiss> https://github.com/ceph/ceph/blob/d7b0c7faafd37e4ae8a1680edfa60c22b419cbd8/src/ceph-disk#L1592
[16:57] <madkiss> it's "osd_mount_options_xfs" now.
[16:57] <madkiss> i think.
[16:58] <Isotopp> [osd.0]
[16:58] <Isotopp> osd_mount_options_xfs = logdev=/dev/fioa5
[16:58] <Isotopp> root@cloud12:/usr/sbin# ceph-disk activate /dev/sdb1
[16:58] <Isotopp> ceph-disk: Mounting filesystem failed: Command '['/bin/mount', '-t', 'xfs', '-o', 'noatime', '--', '/dev/sdb1', '/var/lib/ceph/tmp/mnt.4_vbNx']' returned non-zero exit status 32
[16:58] <Isotopp> no change
[16:58] * baylight (~tbayly@74-220-196-40.unifiedlayer.com) has joined #ceph
[16:59] * ircolle (~Adium@2601:1:a580:145a:edb5:d98c:6aa6:1b3c) has joined #ceph
[17:00] <andreask> I never tried this mount options outside the global settings ... can you verify if they work there?
[17:00] <hufman> how do i keep my ceph-mon from constantly re-electing?
[17:01] <andreask> network issues?
[17:01] <hufman> i can ping all the things
[17:01] <Isotopp> andreask: they do
[17:01] <Isotopp> /dev/sdb1 878002176 6109564 871892612 1% /var/lib/ceph/osd/ceph-5
[17:01] <Isotopp> /dev/sdb1 on /var/lib/ceph/osd/ceph-5 type xfs (rw,logdev=/dev/fioa5)
[17:01] <hufman> it gets quorum for 5 seconds and then re-elects
[17:02] <Isotopp> osd_mount_options_xfs = logdev=/dev/fioa5
[17:02] <Isotopp> osd_fs_mount_options_xfs = logdev=/dev/fioa6
[17:02] <ganders> hufman: re-election happens for a reason, if something is going on over the cluster, that behaviour could happend
[17:02] <Isotopp> so what is used is osd_mount_options_xfs in global
[17:02] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[17:02] <ganders> maybe some drop over the network? that is causing that
[17:02] * cok (~chk@2a02:2350:18:1012:48d8:81fd:7141:cec4) Quit (Quit: Leaving.)
[17:03] <andreask> Isotopp: then you could try if creating vgs/lvs with the osd daemon id in the name is working
[17:04] <madkiss> urg
[17:05] <andreask> yes, urg
[17:06] <hufman> why would it keep saying lease_timeout?
[17:06] <Isotopp> root@cloud12:/usr/sbin# ceph-conf --cluster=ceph --name osd.0 --lookup osd_mount_options_xfs
[17:06] <Isotopp> logdev=/dev/fioa5
[17:06] <Isotopp> that works
[17:06] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:07] <Isotopp> but madkiss says that what is called is --name osd. and not --name osd.0
[17:07] <Isotopp> because at that point the osd number is unknown
[17:08] <andreask> well you can always file a bug
[17:09] <madkiss> in this case, he can't. the problem is unsolvable.
[17:09] <madkiss> In order to figure out which OSD ceph-disk is dealing with (and that is what it needs to know to fetch the matching configuration from ceph.conf for that particular OSD), it would need to tmp-mount it
[17:09] <madkiss> which fails, because XFS refuses to be mounted without a proper log argument
[17:10] <andreask> would need some extra magic info on the disk, like the labels for ceph
[17:11] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) has joined #ceph
[17:12] <andreask> Isotopp: is an extra journal that much faster in your benchmarks?
[17:12] * bauruine (~bauruine@2a01:4f8:150:6381::545) Quit (Remote host closed the connection)
[17:13] * bauruine (~bauruine@2a01:4f8:150:6381::545) has joined #ceph
[17:15] <Isotopp> XFS on disk, internal log on disk: IOPS=9332, 149325KB/s
[17:15] <Isotopp> XFS on disk, 1G log on FIO: iops=36380, 582089 KB/s
[17:15] <Isotopp> fio --filename=$(pwd)/keks --sync=1 -rw=randwrite --bs=16k --size=4G --numjobs=32 --runtime=60s --group_reporting --name=file1
[17:15] <Isotopp> that's the xfs log only
[17:16] <Isotopp> the ceph log is still unchanged, is next to go
[17:16] * tupper (~chatzilla@108-83-203-37.lightspeed.rlghnc.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[17:17] <Isotopp> 16K randwrite is a mysql innodb commit test, basically
[17:17] <Isotopp> 32-way parallel in my case b/c 40 cores, so 32 is realistic
[17:18] <Isotopp> hp dl380g8+ 256g ram, 40 core, way to few spindles
[17:18] <Isotopp> (only 6 osd per device right now)
[17:18] <andreask> impressive ... will be interesting to tune the osd and rbd settings to this
[17:18] <Isotopp> i do not think that this is going to be production hardware.
[17:19] <Isotopp> it's what i have ATM, 10 of these, 3 ceph, 3 openstack, 4 for playing
[17:19] <Isotopp> i think i need more spindles in a production ceph node, and multiple ssd instead of fusion-io. these are preproduction 160g leftovers.
[17:19] * i_m (~ivan.miro@gbibp9ph1--blueice2n1.emea.ibm.com) Quit (Quit: Leaving.)
[17:19] <Isotopp> way too small for anything but log stores.
[17:20] * zack_dolby (~textual@p8505b4.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[17:20] * Nats (~natscogs@2001:8000:200c:0:c11d:117a:c167:16df) Quit (Read error: Connection reset by peer)
[17:21] * zack_dolby (~textual@p8505b4.tokynt01.ap.so-net.ne.jp) has joined #ceph
[17:21] * Nats (~natscogs@2001:8000:200c:0:c11d:117a:c167:16df) has joined #ceph
[17:21] * zack_dolby (~textual@p8505b4.tokynt01.ap.so-net.ne.jp) Quit ()
[17:22] * markbby (~Adium@168.94.245.2) has joined #ceph
[17:24] <Isotopp> and the answer is:
[17:24] <Isotopp> mount each and every xfs manually to /mnt
[17:24] <Isotopp> read /mnt/whoami
[17:24] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[17:24] <Isotopp> hand craft a fstab line
[17:24] <Isotopp> mount -av
[17:24] <Isotopp> and initctl start ceph-osd-all
[17:25] <Isotopp> and then fuck you udev and ceph-disk activate
[17:25] <Isotopp> now i only have to do all this again on the other nodes.
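Isotopp's manual workaround above, sketched as a script. The device path, logdev, and mount options here are hypothetical placeholders (not taken from the channel); the destructive steps are left as comments so the sketch only demonstrates the whoami-to-fstab derivation:

```shell
# Sketch of the manual procedure: mount each OSD partition, read its
# whoami file, and derive an fstab line. /dev/sdb1 and the logdev are
# assumptions; the real mount needs the matching XFS log device.
mnt=$(mktemp -d)
# Real step would be: mount -o noatime,logdev=/dev/fioa5 /dev/sdb1 "$mnt"
echo 0 > "$mnt/whoami"                 # simulating the OSD's whoami file
id=$(cat "$mnt/whoami")
fstab_line="/dev/sdb1 /var/lib/ceph/osd/ceph-$id xfs noatime,logdev=/dev/fioa5 0 0"
echo "$fstab_line"
# Real steps afterwards: append the line to /etc/fstab, then
#   mount -av && initctl start ceph-osd-all
```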
[17:30] * markbby (~Adium@168.94.245.2) Quit (Quit: Leaving.)
[17:33] <Isotopp> ftr http://oss.sgi.com/archives/xfs/2001-07/msg00564.html
[17:33] <Isotopp> that works
[17:36] * Cube (~Cube@66.87.64.112) has joined #ceph
[17:37] <darkfader> Isotopp: so xfs with external journal was slower than internal one
[17:37] <darkfader> ?
[17:39] <stupidnic> I am using ceph-deploy to create a cluster, but I am having issues with pg_num and pgp_num. When I do my deployment of OSDs to the servers, they aren't picking up the default values in the ceph.conf
[17:40] <stupidnic> also there seems to be some ambiguity in the naming of config directives
[17:40] <stupidnic> some examples have spaces others have underscores
[17:40] <stupidnic> are spaces deprecated?
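For reference on stupidnic's question: Ceph's config parser treats spaces and underscores in option names interchangeably, so neither form is deprecated. The two ceph.conf fragments below are read identically (the value is illustrative):

```ini
[osd]
osd mount options xfs = rw,noatime,logdev=/dev/fioa5

; ...is parsed the same as:

[osd]
osd_mount_options_xfs = rw,noatime,logdev=/dev/fioa5
```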
[17:42] * joef (~Adium@2620:79:0:131:a9a1:90ce:eff6:b2a6) has joined #ceph
[17:44] * garphy`aw is now known as garphy
[17:45] * adamcrume (~quassel@50.247.81.99) has joined #ceph
[17:49] <hufman> here is a snippet of my mon log: http://pastebin.com/wZ3wjhm4
[17:49] <hufman> a packet trace doesn't show any retransmissions
[17:49] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[17:50] <jiffe> anyone using ceph to successfully serve web content? I'm using it to serve our piwik webstats for testing and it seems after running for a day or so everything seems to back up and I have to kill apache, umount (which takes a while) and remount the ceph disk and then start apache again
[17:50] * garphy is now known as garphy`aw
[17:50] <jiffe> wasn't even a day this time, it made it about 3 hours
[17:53] <jcsp> jiffe: are you using RBD or the ceph filesystem?
[17:53] <jiffe> jcsp: cephfs
[17:53] <hufman> iiinteresting, the quorum_status says that the monmap was created at 0.0000, which seems unusual
[17:54] <jcsp> jiffe: if you're running a fairly recent version and you're finding I/O stalls, it would be good to capture some debug logs (http://ceph.com/docs/master/rados/troubleshooting/log-and-debug/#subsystem-log-and-debug-settings) and maybe send them to the mailing list
[17:55] <jcsp> cephfs isn't production ready, but getting info about failing cases gets us closer
[17:55] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[17:56] <jiffe> is there another way to present general web content to apache/php?
[17:57] * alram (~alram@38.122.20.226) has joined #ceph
[17:59] <stupidnic> deploy the php code to your compute nodes
[17:59] <stupidnic> use puppet etc to keep it updated only use shared storage for things that need to be shared
[18:00] <stupidnic> (only a suggestion)
[18:00] <jiffe> problem is piwik writes files out that need to be shared
[18:00] <jiffe> so there needs to be a filesystem in place
[18:00] <stupidnic> sure just adjust the config to write the files to a shared location
[18:01] <gchristensen> jiffe: your best bet is to try and make piwik write the files to something that isn't a POSIX filesystem
[18:01] <jiffe> this is just for testing, we want to put our general web hosting on this too so regardless we need a filesystem
[18:01] <gchristensen> jiffe: S3, etc. might require patches
[18:02] <stupidnic> jiffe: yeah that is our plan too but that is why we test things
[18:02] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[18:02] <stupidnic> but we are going to use rbd for the shared data (/var/cpanel /home, etc)
[18:02] * b0e (~aledermue@juniper1.netways.de) Quit (Quit: Leaving.)
[18:04] * rwheeler (~rwheeler@173.48.207.57) has joined #ceph
[18:04] <hufman> in the paxos recovering log message, looking something like this:
[18:04] <hufman> mon.cephmon2@1(electing).paxos(paxos recovering c 11564827..11565515) is_readable
[18:04] <hufman> is that first number (11564827) supposed to update?
[18:04] <jiffe> any suggestions for log levels for which subsystems I should set?
[18:06] * oro (~oro@2001:620:20:16:a98b:9a85:fdd6:e8b2) Quit (Ping timeout: 480 seconds)
[18:06] * rweeks (~rweeks@pat.hitachigst.com) has joined #ceph
[18:08] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[18:08] <mgarcesMZ> is anyone using radosgw ?
[18:09] <mgarcesMZ> with mod_fcgid, not mod_fastcgi ?
[18:10] <jiffe> maybe a better question is what subsystems are different or only exist when using cephfs
[18:11] * andreask (~andreask@zid-vpnn076.uibk.ac.at) has left #ceph
[18:12] <jcsp> jiffe: the 'mds' subsystem on the server side, and if you're using the FUSE client then the 'client' subsystem there
[18:13] <jiffe> I'm using the kernel module, is fuse recommended?
[18:13] * pressureman (~pressurem@62.217.45.26) Quit (Ping timeout: 480 seconds)
[18:13] <jcsp> the kernel module is fine, I'm just less familiar with the logging side of it
[18:17] <jiffe> sure
[18:19] <stupidnic> hey ceph why don't you stop mounting the OSD drives after I remove them... that'd be great... thanks
[18:19] <gchristensen> stupidnic: are you marking them as out?
[18:19] <stupidnic> yep
[18:20] <stupidnic> best part I can watch the status and it adds them right back
[18:20] <gchristensen> are you marking it as down?
[18:20] <gchristensen> IIRC you have to mark them as down and out
[18:20] <stupidnic> hmmm checking my script
[18:20] <Xiol> Hi guys. We're moving our OSD journals to SSDs soon and I believe this can be done without losing any OSD data (e.g. http://bit.ly/1pKYIp8). We deployed with ceph-deploy, so we don't have a fully populated ceph.conf - everything just starts up automatically. How do I go about pointing the OSDs at the new journal? Or is it just a case that I need to have the journal mount under /journal? (If so, where do I change what disk it mounts if it's not in ceph.conf?)
[18:20] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:20] <gchristensen> stupidnic: let me verify before you do anything silly
[18:20] <stupidnic> it's fine
[18:20] <stupidnic> it is just test data
[18:21] <stupidnic> osdmap e196: 0 osds: 0 up, 0 in
[18:21] * bandrus (~Adium@4.31.55.106) has joined #ceph
[18:21] <stupidnic> but when I try to add osds back into the cluster, ceph just says "oh hi, I remember you" and adds it right back
[18:21] <stupidnic> which is not what I want
[18:22] <stupidnic> I went to each node and deleted the osd/ceph-* dirs
[18:22] <gchristensen> stupidnic: did you zap the disk?
[18:22] * pressureman (~pressurem@62.217.45.26) has joined #ceph
[18:22] * dis (~dis@109.110.66.143) Quit (Ping timeout: 480 seconds)
[18:22] <stupidnic> That's what I am trying to do (ceph-deploy --zap-disk --fs-type btrfs
[18:23] <stupidnic> the problem is ceph keeps remounting the osd drive so I can't zap it
[18:23] <stupidnic> ceph-deploy is a bit broken in that regard
[18:24] * sjm1 (~sjm@108.53.250.33) has joined #ceph
[18:24] <stupidnic> a prepare shouldn't activate the osd, but it does
[18:24] <gchristensen> I've gone beyond my knowledge
[18:24] <stupidnic> no problem
[18:25] <stupidnic> I am there too
[18:25] * linuxkidd_ (~linuxkidd@rtp-isp-nat-pool1-1.cisco.com) has joined #ceph
[18:25] <stupidnic> I just wish it would do what the documentation says it does
[18:25] * kevinc (~kevinc__@client65-44.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[18:25] * bandrus (~Adium@4.31.55.106) Quit ()
[18:25] <gchristensen> file a documentation bug fix :P
[18:26] <stupidnic> I have contributed enough code to ceph-deploy this month :)
[18:26] <stupidnic> I have to do work for the people that actually pay me, or they get testy
[18:27] <stupidnic> the nerve of some people
[18:27] * sjm1 (~sjm@108.53.250.33) Quit ()
[18:28] * sjm (~sjm@108.53.250.33) has joined #ceph
[18:28] * rldleblanc (~rdleblanc@69-195-66-44.unifiedlayer.com) has joined #ceph
[18:29] * aknapp (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) has joined #ceph
[18:31] * kevinc (~kevinc__@client65-44.sdsc.edu) has joined #ceph
[18:32] <rldleblanc> Just getting up to speed with Ceph. I have one monitor and two OSD set-up and running in VMs. From another VM, I've mounted an rbd and can read/write just fine. I can not get fstrim to operate correctly. All of the documentation talks about Qemu and KVM, but this is just a mounted rbd without KVM. Can someone point me to where I can find a solution?
[18:32] <rldleblanc> From what I understand, the OSD does not require the underlying disk to support discard as ceph will just hole-punch the underlying file for thin provisioning.
[18:34] * bandrus (~Adium@4.31.55.106) has joined #ceph
[18:35] <rldleblanc> The error is:
[18:35] <rldleblanc> root@ceph-client:/mnt/ceph/foo# fstrim -v /mnt/ceph/foo/
[18:35] <kraken> \o
[18:35] <rldleblanc> fstrim: /mnt/ceph/foo/: FITRIM ioctl failed: Operation not supported
[18:41] <ircolle> no one is toasting you kraken
[18:41] <alfredodeza> kraken: you are horrible
[18:41] * kraken is tear-jerking by the substantial declaration of proscription
[18:43] * lcavassa (~lcavassa@89.184.114.246) Quit (Quit: Leaving)
[18:47] * dis (~dis@109.110.66.158) has joined #ceph
[18:51] * xarses (~andreww@12.164.168.117) has joined #ceph
[18:55] * rturk|afk is now known as rturk
[18:55] * mgarcesMZ (~mgarces@5.206.228.5) Quit (Quit: mgarcesMZ)
[18:59] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[19:00] * adamcrume (~quassel@50.247.81.99) Quit (Remote host closed the connection)
[19:00] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[19:00] * bandrus (~Adium@4.31.55.106) Quit (Quit: Leaving.)
[19:04] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) has joined #ceph
[19:07] * rendar (~I@host75-182-dynamic.37-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[19:07] * rendar_ (~I@host75-182-dynamic.37-79-r.retail.telecomitalia.it) has joined #ceph
[19:09] * reed (~reed@75-101-54-131.dsl.static.sonic.net) has joined #ceph
[19:12] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[19:12] * swat30 (~swat30@204.13.51.130) has joined #ceph
[19:12] * davidz (~Adium@cpe-23-242-12-23.socal.res.rr.com) has joined #ceph
[19:12] <swat30> hi all
[19:12] <swat30> running into a problem with an OSD
[19:13] <swat30> puking out these errors:
[19:13] <swat30> 2014-08-11 17:13:01.970555 7facf223b700 0 -- 10.100.250.1:6803/18601 >> 10.100.250.1:6810/13278 pipe(0x26a1500 sd=141 :47923 s=2 pgs=1516617 cs=1123245 l=0).fault, initiating reconnect
[19:13] <swat30> 2014-08-11 17:13:01.970909 7facf243d700 0 -- 10.100.250.1:6803/18601 >> 10.100.250.1:6802/12408 pipe(0x26a1780 sd=140 :59546 s=2 pgs=1569101 cs=1108615 l=0).fault, initiating reconnect
[19:13] <swat30> 2014-08-11 17:13:01.972047 7facf223b700 0 -- 10.100.250.1:6803/18601 >> 10.100.250.1:6810/13278 pipe(0x26a1500 sd=141 :47925 s=2 pgs=1516618 cs=1123247 l=0).fault, initiating reconnect
[19:13] <swat30> 2014-08-11 17:13:01.972225 7facf243d700 0 -- 10.100.250.1:6803/18601 >> 10.100.250.1:6802/12408 pipe(0x26a1780 sd=140 :59548 s=2 pgs=1569102 cs=1108617 l=0).fault, initiating reconnect
[19:13] <swat30> 2014-08-11 17:13:01.973497 7facf223b700 0 -- 10.100.250.1:6803/18601 >> 10.100.250.1:6810/13278 pipe(0x26a1500 sd=141 :47927 s=2 pgs=1516619 cs=1123249 l=0).fault, initiating reconnect
[19:13] <swat30> 2014-08-11 17:13:01.973794 7facf243d700 0 -- 10.100.250.1:6803/18601 >> 10.100.250.1:6802/12408 pipe(0x26a1780 sd=140 :59550 s=2 pgs=1569103 cs=1108619 l=0).fault, initiating reconnect
[19:13] <swat30> 2014-08-11 17:13:01.975124 7facf243d700 0 -- 10.100.250.1:6803/18601 >> 10.100.250.1:6802/12408 pipe(0x26a1780 sd=140 :59552 s=2 pgs=1569104 cs=1108621 l=0).fault, initiating reconnect
[19:13] <swat30> 2014-08-11 17:13:01.975171 7facf223b700 0 -- 10.100.250.1:6803/18601 >> 10.100.250.1:6810/13278 pipe(0x26a1500 sd=141 :47929 s=2 pgs=1516620 cs=1123251 l=0).fault, initiating reconnect
[19:14] <swat30> seems that the pgs is continually incrementing as well
[19:14] <rldleblanc> I guess my problem with discard may be due to the kernel rbd module not having it. http://tracker.ceph.com/issues/190. I'm going to try userland.
[19:20] <swat30> other OSDs are seeing:
[19:20] <swat30> 2014-08-11 17:20:21.551819 7ffc9d760700 -1 osd.19 17397 heartbeat_check: no reply from osd.21 since 2014-08-11 16:30:21.842530 (cutoff 2014-08-11 17:20:01.551817)
[19:20] <swat30> 2014-08-11 17:20:22.192557 7ffc8e742700 -1 osd.19 17397 heartbeat_check: no reply from osd.21 since 2014-08-11 16:30:21.842530 (cutoff 2014-08-11 17:20:02.192555)
[19:20] <swat30> 2014-08-11 17:20:22.552050 7ffc9d760700 -1 osd.19 17397 heartbeat_check: no reply from osd.21 since 2014-08-11 16:30:21.842530 (cutoff 2014-08-11 17:20:02.552048)
[19:20] <swat30> 2014-08-11 17:20:23.552190 7ffc9d760700 -1 osd.19 17397 heartbeat_check: no reply from osd.21 since 2014-08-11 16:30:21.842530 (cutoff 2014-08-11 17:20:03.552188)
[19:21] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[19:23] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[19:23] * rturk is now known as rturk|afk
[19:24] * adamcrume (~quassel@c-71-204-162-10.hsd1.ca.comcast.net) has joined #ceph
[19:24] * Cube is now known as Concubidated
[19:24] * rturk|afk is now known as rturk
[19:25] * Concubidated (~Cube@66.87.64.112) Quit (Quit: Leaving.)
[19:25] * Cube (~Concubida@66.87.64.112) has joined #ceph
[19:26] * Cube (~Concubida@66.87.64.112) Quit ()
[19:27] <joao> swat30, don't do that; if you intend to paste a bunch of lines, please use a paste service (e.g, fpaste, pastebin) and provide us a url
[19:27] <swat30> sorry about that joao
[19:29] * Concubidated (~Adium@66-87-64-112.pools.spcsdns.net) has joined #ceph
[19:29] <Isotopp> darkfader: how do you come to that conclusion?
[19:29] <Isotopp> darkfader: iops internal log < 10000 (P830 controller has 4G cache)
[19:30] <Isotopp> darkfader: iops log on fusion-io > 35000 (disk still behind P830, but xfs log is 1G on fusion-io)
[19:30] <Isotopp> testing with 16k random-write
[19:30] <swat30> joao, would you like me to paste into pastebin and re-post?
[19:32] <Isotopp> darkfader: if i put too many xfs logs and ceph logs onto a single fio, i will eventually overload it, but at the moment i have only 6 spindles per ceph box, and i hope the fio makes it
[19:32] <Isotopp> i am pretty confident that it can handle 4 spindles.
[19:32] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[19:35] * kevinc (~kevinc__@client65-44.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[19:36] * thb (~me@0001bd58.user.oftc.net) Quit (Quit: Leaving.)
[19:36] <darkfader> Isotopp: understood. i also saw that it's pretty tricky to get better performance out of flash than off the controllers' ram
[19:38] <darkfader> i'll keep an eye on your results when you paste them here ;))
[19:41] <swat30> anyone able to assist? looks like we have a pretty broken ceph cluster right now. 1 mon, 3 osds, isn't allowing exports via rbd
[19:44] <gchristensen> swat30: re-post your content in a pastebin, state a summary of your problem and what you have attempted, and your desired end goal
[19:44] <swat30> gchristensen, ok, will do. tks
[19:44] * kevinc (~kevinc__@client65-44.sdsc.edu) has joined #ceph
[19:45] * bandrus (~Adium@4.31.55.106) has joined #ceph
[19:46] <swat30> we were experiencing an issue when attempting to add OSDs to ceph. Our plan is to move our three OSDs over to new hosts. When trying to do so, we were experiencing a MON crash. Tried the kvstore fix, had to back out as MON wouldn't start afterwards (we backed up the MON before starting).
[19:46] <swat30> Now when trying to restart an OSD, we get http://pastebin.com/UUuDdS1V
[19:47] * BManojlovic (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[19:47] <swat30> and this on the other OSDs http://pastebin.com/f12r4W2s
[19:48] <swat30> since trying to restart, exports have become unresponsive
[19:48] <swat30> we want to get the cluster back to a working state so that we can figure out what to do with the data moving forward
[19:48] <swat30> being operational without data loss is #1 priority
[19:49] * astellwag (~astellwag@209.132.181.86) Quit (Read error: Connection reset by peer)
[19:51] <swat30> rbd ls works fine
[19:56] * astellwag (~astellwag@209.132.181.86) has joined #ceph
[19:58] <swat30> anyone able to provide some direction?
[19:59] <swat30> anyone able to provide any direction? really in a jam and need to get this guy back online :/
[20:00] <swat30> sorry, didn't mean to double post there
[20:01] <darkfader> i don't know if there's some emergency support without prior contract at inktank, but i'd try to get it just in case
[20:01] <darkfader> call them while you wait for advice here
[20:03] * Sysadmin88 (~IceChat77@05452df5.skybroadband.com) Quit (Quit: Easy as 3.14159265358979323846... )
[20:03] * dmsimard is now known as dmsimard_away
[20:06] * ultimape (~Ultimape@c-174-62-192-41.hsd1.vt.comcast.net) Quit (Quit: Leaving)
[20:06] * ultimape (~Ultimape@c-174-62-192-41.hsd1.vt.comcast.net) has joined #ceph
[20:09] <cookednoodles> tried the mailing list ?
[20:09] * toabctl (~toabctl@toabctl.de) Quit (Quit: Adios)
[20:09] * toabctl (~toabctl@toabctl.de) has joined #ceph
[20:16] <swat30> cookednoodles, I haven't actually. I'll give that a shot now
[20:16] <swat30> darkfader, was trying to avoid that, but may need to
[20:17] * KevinPerks1 (~Adium@2606:a000:80a1:1b00:a571:f795:1391:78a8) has joined #ceph
[20:20] * garphy`aw is now known as garphy
[20:21] * KevinPerks (~Adium@2606:a000:80a1:1b00:b4e1:8d6f:bf1c:b62) Quit (Ping timeout: 480 seconds)
[20:29] * wrencsok1 (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[20:29] * rturk is now known as rturk|afk
[20:30] * rturk|afk is now known as rturk
[20:32] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) Quit (Ping timeout: 480 seconds)
[20:32] * lalatenduM (~lalatendu@122.172.85.86) Quit (Quit: Leaving)
[20:37] * Concubidated1 (~Adium@66.87.67.231) has joined #ceph
[20:37] * Concubidated (~Adium@66-87-64-112.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[20:40] * b0e (~aledermue@x2f2e702.dyn.telefonica.de) has joined #ceph
[20:40] <jiffe> what is considered a 'high' log level?
[20:40] * b0e (~aledermue@x2f2e702.dyn.telefonica.de) Quit ()
[20:46] * rotbeard (~redbeard@2a02:908:df19:4b80:76f0:6dff:fe3b:994d) has joined #ceph
[20:46] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[20:46] <jcsp> jiffe: 20 is highest, 10 is pretty high. I generally go up to 20 if I'm only doing 1 or 2 subsystems, and only fall back if the resulting volume is unmanageable
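The levels jcsp describes are set per subsystem in ceph.conf (or injected at runtime with `ceph tell ... injectargs`). A fragment along these lines, with illustrative subsystems and values:

```ini
[mds]
debug mds = 20
debug ms = 1

[client]
debug client = 20

; runtime equivalent, without a restart:
;   ceph tell mds.0 injectargs '--debug-mds 20'
```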
[20:47] <jiffe> I'm going to have to add disk to these vms to hold these logs :)
[20:49] * KevinPerks1 (~Adium@2606:a000:80a1:1b00:a571:f795:1391:78a8) Quit (Quit: Leaving.)
[20:49] * KevinPerks (~Adium@2606:a000:80a1:1b00:a571:f795:1391:78a8) has joined #ceph
[20:49] <jcsp> hmm, if you're running VMs and getting deadlocks, check how your VMs are laid out across physical hosts and whether they're overprovisioned. There is potential for things to get bad if you have clients and servers competing for the same memory
[20:50] * Concubidated1 is now known as Concubidated
[20:50] <jiffe> I have vmware rules to make sure they don't run on the same disks/hosts
[20:52] <jiffe> hmm, that's a good point, apparently half the vmware hosts are down for maintenance
[20:54] <jiffe> the remaining hosts are running at about 50% usage on cpu/memory though
[20:56] * sjusthm (~sam@24-205-54-233.dhcp.gldl.ca.charter.com) has joined #ceph
[20:57] <jiffe> I read through http://ceph.com/docs/firefly/dev/kernel-client-troubleshooting, there's no dmesg output when the problems occurred so I've got debugging up on the mds server and will cat /sys/kernel/debug/ceph/*/mdsc next time things back up
[20:58] <swat30> we've restarted the cluster and are now seeing this: http://pastebin.com/g2yFG2y8
[21:05] <rldleblanc> I've loop mounted the rbd off of fuse and fstrim does not complain, but space is not freed on the OSDs. fstrim is showing that it is trimming space.
[21:09] * rturk is now known as rturk|afk
[21:14] * jamin (~jamin@65-100-221-49.dia.static.qwest.net) Quit (Quit: leaving)
[21:15] * rturk|afk is now known as rturk
[21:20] * rotbart (~redbeard@aftr-37-24-151-26.unity-media.net) has joined #ceph
[21:31] * reed (~reed@75-101-54-131.dsl.static.sonic.net) Quit (Quit: Ex-Chat)
[21:31] * reed (~reed@75-101-54-131.dsl.static.sonic.net) has joined #ceph
[21:31] * diegows (~diegows@190.190.5.238) has joined #ceph
[21:36] * marvin0815 (~oliver.bo@dhcp-admin-217-66-51-235.pixelpark.com) has joined #ceph
[21:41] * rturk is now known as rturk|afk
[21:41] * rturk|afk is now known as rturk
[21:49] * ircolle is now known as ircolle-afk
[21:52] <stupidnic> alfredodeza: are you about? Just a note that you have some debugging left on in the normalized_release()
[21:53] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has left #ceph
[21:55] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[21:57] <alfredodeza> stupidnic: thank you sir, just removed them
[21:58] <stupidnic> coolio
[21:59] * zerick (~eocrospom@190.187.21.53) Quit (Remote host closed the connection)
[22:00] * swat30 (~swat30@204.13.51.130) Quit (Ping timeout: 480 seconds)
[22:00] * rturk is now known as rturk|afk
[22:03] * dh (~DHsueh@207.239.47.22) has joined #ceph
[22:04] <dh> hello
[22:04] <dh> anyone here have experience reading multiple ranges in one librados call?
[22:07] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (Quit: Leaving)
[22:10] * ganders (~root@200-127-158-54.net.prima.net.ar) Quit (Quit: WeeChat 0.4.1)
[22:10] * BManojlovic (~steki@109-92-246-30.dynamic.isp.telekom.rs) has joined #ceph
[22:11] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[22:11] * kfei (~root@114-27-61-6.dynamic.hinet.net) Quit (Read error: Connection reset by peer)
[22:13] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[22:13] * ChanServ sets mode +v andreask
[22:18] * xarses (~andreww@12.164.168.117) Quit (Remote host closed the connection)
[22:18] * steki (~steki@212.200.65.137) has joined #ceph
[22:18] * qhartman (~qhartman@den.direwolfdigital.com) has joined #ceph
[22:20] * colonD (~colonD@173-165-224-105-minnesota.hfc.comcastbusiness.net) Quit (Read error: Operation timed out)
[22:21] * colonD (~colonD@173-165-224-105-minnesota.hfc.comcastbusiness.net) has joined #ceph
[22:23] * reed (~reed@75-101-54-131.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[22:23] * dh (~DHsueh@207.239.47.22) has left #ceph
[22:24] * BManojlovic (~steki@109-92-246-30.dynamic.isp.telekom.rs) Quit (Ping timeout: 480 seconds)
[22:24] <stupidnic> Question about pools and pg_num
[22:24] <stupidnic> by default there are three pools created (data, metadata, and rbd)
[22:24] * jrankin (~jrankin@d47-69-66-231.try.wideopenwest.com) Quit (Quit: Leaving)
[22:25] <stupidnic> When I first created the cluster it said that I didn't have enough pgs, so I changed the pgs on only the data pool (to 250)
[22:25] <stupidnic> and now the cluster reports that it is healthy
[22:25] <stupidnic> (which is an improvement over what I have had in the past)
[22:26] <stupidnic> My question is... are pgs shared between these three default pools? They seem to have their own pg_num values
[22:27] <stupidnic> When I changed the pg_num on data it reported HEALTH_OK in ceph status
[22:27] <stupidnic> even though metadata and rbd still report pg_num 64
[22:28] * xarses (~andreww@12.164.168.117) has joined #ceph
[22:30] * kfei (~root@114-27-80-154.dynamic.hinet.net) has joined #ceph
[22:32] * reed (~reed@75-101-54-131.dsl.static.sonic.net) has joined #ceph
[22:33] * The_Bishop (~bishop@2001:470:50b6:0:c1ba:4d17:cefb:fa45) Quit (Ping timeout: 480 seconds)
[22:35] * The_Bishop (~bishop@2001:470:50b6:0:c1ba:4d17:cefb:fa45) has joined #ceph
[22:35] * lightspeed (~lightspee@2001:8b0:16e:1:8326:6f70:89f:8f9c) has joined #ceph
[22:38] * steki (~steki@212.200.65.137) Quit (Ping timeout: 480 seconds)
[22:41] * BManojlovic (~steki@212.200.65.139) has joined #ceph
[22:47] <Gugge-47527> stupidnic: each pool has its own pg's
[22:48] <Gugge-47527> im pretty sure the status check only checks for total number of pgs
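For context on stupidnic's question: each pool does get its own PGs, and the rule of thumb from the Ceph docs of that era was roughly 100 PGs per OSD across the cluster, divided by the replica count and rounded up to a power of two. A sketch of that heuristic (the constant 100 is the commonly cited guideline, not a hard rule):

```python
def recommended_total_pgs(num_osds, replicas, target_per_osd=100):
    """Rule-of-thumb cluster-wide PG count, rounded up to a power of two."""
    raw = num_osds * target_per_osd / replicas
    power = 1
    while power < raw:
        power *= 2
    return power

# e.g. 6 OSDs with 3-way replication -> 6 * 100 / 3 = 200, rounded up
print(recommended_total_pgs(6, 3))  # → 256
```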
[22:48] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[22:52] * Hell_Fire (~hellfire@123-243-155-184.static.tpgi.com.au) has joined #ceph
[22:57] * alram (~alram@38.122.20.226) Quit (Read error: Connection reset by peer)
[22:57] * gregsfortytwo (~Adium@38.122.20.226) has joined #ceph
[22:57] * alram (~alram@38.122.20.226) has joined #ceph
[22:57] * gregsfortytwo1 (~Adium@2607:f298:a:607:a9e4:5f9c:10d8:c5f8) Quit (Read error: Connection reset by peer)
[22:58] * brad_mssw (~brad@shop.monetra.com) Quit (Quit: Leaving)
[23:07] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[23:07] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[23:07] * ChanServ sets mode +v andreask
[23:07] * jharley (~jharley@192-171-36-233.cpe.pppoe.ca) has joined #ceph
[23:10] * kevinc (~kevinc__@client65-44.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[23:10] * ikrstic (~ikrstic@93-87-118-93.dynamic.isp.telekom.rs) Quit (Remote host closed the connection)
[23:10] * ikrstic (~ikrstic@93-87-118-93.dynamic.isp.telekom.rs) has joined #ceph
[23:13] * rotbart (~redbeard@aftr-37-24-151-26.unity-media.net) Quit (Quit: Leaving)
[23:17] * ircolle-afk is now known as ircolle
[23:25] * RandomUser (~oftc-webi@70-91-207-249-BusName-SFBA.hfc.comcastbusiness.net) has joined #ceph
[23:25] <RandomUser> hi
[23:26] <RandomUser> i am following the 5 minute startup tutorial http://ceph.com/docs/dumpling/start/quick-ceph-deploy/
[23:26] <RandomUser> on a ubuntu 64 bit system
[23:26] <RandomUser> and when i run the sudo service ceph -a start command, i get this error:
[23:26] <RandomUser> failed: 'timeout 30 /usr/bin/ceph -c /etc/ceph/ceph.conf --name=osd.0 --keyring=/var/lib/ceph/osd/ceph-0/keyring osd crush create-or-move -- 0 0.02 host=ip-10-214-158-187 root=default'
[23:29] * rotbeard (~redbeard@2a02:908:df19:4b80:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[23:30] * Concubidated (~Adium@66.87.67.231) Quit (Ping timeout: 480 seconds)
[23:31] * lupu (~lupu@86.107.101.214) Quit (Ping timeout: 480 seconds)
[23:32] * kevinc (~kevinc__@client65-44.sdsc.edu) has joined #ceph
[23:32] * garphy is now known as garphy`aw
[23:34] * hufman (~hufman@cpe-184-58-235-28.wi.res.rr.com) Quit (Quit: leaving)
[23:36] * swat30 (~swat30@204.13.51.130) has joined #ceph
[23:38] * Sysadmin88 (~IceChat77@05452df5.skybroadband.com) has joined #ceph
[23:41] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[23:41] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[23:46] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[23:47] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[23:48] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[23:49] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[23:51] * fsimonce (~simon@host135-17-dynamic.8-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[23:55] * BManojlovic (~steki@212.200.65.139) Quit (Ping timeout: 480 seconds)
[23:55] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[23:58] * rturk|afk is now known as rturk

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.