#ceph IRC Log


IRC Log for 2014-08-08

Timestamps are in GMT/BST.

[0:02] * rendar (~I@host29-178-dynamic.19-79-r.retail.telecomitalia.it) Quit ()
[0:05] * ikrstic (~ikrstic@77-46-171-25.dynamic.isp.telekom.rs) Quit (Quit: Konversation terminated!)
[0:06] * Cube (~Cube@tacocat.concubidated.com) Quit (Ping timeout: 480 seconds)
[0:07] * b0e (~aledermue@x2f35fa5.dyn.telefonica.de) has joined #ceph
[0:07] * b0e (~aledermue@x2f35fa5.dyn.telefonica.de) Quit ()
[0:10] * JayJ (~jayj@157.130.21.226) Quit (Quit: Computer has gone to sleep.)
[0:12] * Guest3837 (~coyo@thinks.outside.theb0x.org) Quit (Remote host closed the connection)
[0:15] * brad_mssw (~brad@shop.monetra.com) Quit (Quit: Leaving)
[0:18] * Coyo (~coyo@thinks.outside.theb0x.org) has joined #ceph
[0:18] * Coyo is now known as Guest5118
[0:19] * theanalyst (theanalyst@0001c1e3.user.oftc.net) Quit (Quit: ZNC - http://znc.in)
[0:19] * TiCPU (~jeromepou@216-80-70-224.c3-0.alc-ubr4.chi-alc.il.cable.rcn.com) has joined #ceph
[0:20] * sigsegv (~sigsegv@188.25.123.201) Quit (Quit: sigsegv)
[0:31] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[0:32] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[0:32] * janos_ (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[0:33] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[0:33] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[0:39] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[0:41] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Ping timeout: 480 seconds)
[0:46] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) Quit (Remote host closed the connection)
[0:57] * dmsimard_away is now known as dmsimard
[1:01] * dmick (~dmick@2607:f298:a:607:7c97:43da:1058:324f) Quit (Ping timeout: 480 seconds)
[1:04] * ultimape (~Ultimape@c-174-62-192-41.hsd1.vt.comcast.net) Quit (Ping timeout: 480 seconds)
[1:04] * ultimape (~Ultimape@c-174-62-192-41.hsd1.vt.comcast.net) has joined #ceph
[1:08] * qhartman (~qhartman@den.direwolfdigital.com) has joined #ceph
[1:10] * dmick (~dmick@2607:f298:a:607:18a2:745c:bc8f:3f75) has joined #ceph
[1:10] * oms101 (~oms101@p20030057EA262C00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:13] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[1:13] * lupu (~lupu@86.107.101.214) Quit (Ping timeout: 480 seconds)
[1:16] * dmsimard is now known as dmsimard_away
[1:18] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[1:19] * oms101 (~oms101@p20030057EA1B7100EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[1:24] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[1:33] * ircolle is now known as ircolle-afk
[1:43] * TiCPU (~jeromepou@216-80-70-224.c3-0.alc-ubr4.chi-alc.il.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[1:43] * dmsimard_away is now known as dmsimard
[1:45] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[1:46] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[2:01] * dave_cas1y (~ceph@67.32.109.20) has joined #ceph
[2:03] * dave_casey (~ceph@67.32.109.20) Quit (Ping timeout: 480 seconds)
[2:07] * aknapp_ (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) has joined #ceph
[2:07] * aknapp_ (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) Quit (Remote host closed the connection)
[2:07] * aknapp_ (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) has joined #ceph
[2:14] * aknapp (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) Quit (Ping timeout: 480 seconds)
[2:15] * aknapp_ (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) Quit (Ping timeout: 480 seconds)
[2:20] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:26] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz...)
[2:31] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) has joined #ceph
[2:32] * joao|lap (~JL@78.29.191.247) has joined #ceph
[2:32] * ChanServ sets mode +o joao|lap
[2:33] <stupidnic> alfredodeza: pull request submitted
[2:33] * alfredodeza looks
[2:33] * lucas1 (~Thunderbi@222.247.57.50) has joined #ceph
[2:33] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[2:34] <stupidnic> Tested it out on my local system as well and it properly detects CentOS7, now I just need the repos
[2:35] <alfredodeza> stupidnic: commented
[2:35] <alfredodeza> looks good! and it has tests!
[2:35] <houkouonchi-work> stupidnic: we don't have builds for centos7 yet but that will probably be pretty soon, the rhel7 builds will probably work both being el7 though
[2:36] <alfredodeza> !norris stupidnic
[2:36] <kraken> stupidnic doesn't read. he just stares at the book until he gets the information he wants.
[2:36] <alfredodeza> thanks kraken
[2:36] * kraken is astonished by the multipotent asseveration of admiration
[2:36] * sputnik1_ (~sputnik13@207.8.121.241) has joined #ceph
[2:36] * sputnik13 (~sputnik13@207.8.121.241) Quit (Read error: Connection reset by peer)
[2:36] * sarob (~sarob@2001:4998:effd:600:cd26:43d3:c222:d927) Quit (Remote host closed the connection)
[2:37] * sarob (~sarob@2001:4998:effd:600:cd26:43d3:c222:d927) has joined #ceph
[2:41] <stupidnic> alfredodeza: check my comment
[2:42] <stupidnic> edited because I suck at markdown
[2:43] <flaf> Hi, if I want to create a Ceph cluster with 3 monitors, must I put the 3 monitors in the parameters "mon initial members" and "mon host" of my /etc/ceph/ceph.conf file? (or just the first mon)
[2:43] <stupidnic> alfredodeza: and if we should move this to ceph-devel please let me know and I will join over there
[2:43] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[2:45] * sarob (~sarob@2001:4998:effd:600:cd26:43d3:c222:d927) Quit (Ping timeout: 480 seconds)
[2:47] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[2:48] * lucas1 (~Thunderbi@222.247.57.50) Quit (Read error: Connection reset by peer)
[2:49] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[2:49] * zerick (~eocrospom@190.187.21.53) Quit (Read error: Connection reset by peer)
[2:50] * dmsimard is now known as dmsimard_away
[2:51] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[2:56] * sputnik1_ (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz...)
[3:01] * joef (~Adium@2620:79:0:131:5d44:98ee:a2ab:be5a) Quit (Remote host closed the connection)
[3:05] * barnim (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Read error: Permission denied)
[3:05] * barnim (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[3:33] <Nats> flaf, you should have all 3
[3:34] <flaf> Nats: ok thx.
[3:34] <Nats> when you bring something online, like an rbd client for example, it uses that list to connect to the cluster
[3:34] <Nats> if you only have one and it goes down, then you wont be able to start any services
[3:35] <flaf> Ok.
[3:35] * joao|lap (~JL@78.29.191.247) Quit (Ping timeout: 480 seconds)
[3:35] <Nats> things that are already online would be ok, since they will learn of the other monitors from the initial one
[3:35] <Nats> but to me, thats still too fragile
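A minimal ceph.conf sketch of the three-monitor layout Nats describes; the host names and IPs below are placeholders, not values from this log:

    [global]
    # placeholder names/addresses -- use your real monitor hosts
    mon initial members = mon1, mon2, mon3
    mon host = 10.0.0.1, 10.0.0.2, 10.0.0.3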
[3:38] * bkopilov (~bkopilov@213.57.64.132) Quit (Ping timeout: 480 seconds)
[3:39] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[3:42] <flaf> When I populate the monitor daemon with "ceph-mon --mkfs ...", what is the meaning of the --monmap option?
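For context on flaf's question: --monmap seeds the new monitor with an initial monitor map (the member list plus the cluster fsid). A sketch of the usual manual-bootstrap sequence; the names and paths here are examples only:

    # build a monmap containing the initial member(s) and the cluster fsid
    monmaptool --create --add mon1 10.0.0.1 --fsid <your-fsid> /tmp/monmap
    # populate the monitor's data directory from it
    ceph-mon --mkfs -i mon1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring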
[3:44] * aarcane (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[3:48] * bkopilov (~bkopilov@213.57.64.150) has joined #ceph
[3:48] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[3:49] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit ()
[3:50] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[3:56] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz...)
[3:57] * vz (~vz@122.172.253.118) has joined #ceph
[3:58] * zhaochao (~zhaochao@124.207.139.17) has joined #ceph
[4:01] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[4:01] * Jakey (uid1475@id-1475.uxbridge.irccloud.com) has joined #ceph
[4:02] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit ()
[4:04] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[4:06] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[4:10] * dave_cas1y (~ceph@67.32.109.20) Quit (Ping timeout: 480 seconds)
[4:10] * haomaiwang (~haomaiwan@124.248.208.2) Quit (Ping timeout: 480 seconds)
[4:12] * dmsimard_away is now known as dmsimard
[4:14] * lucas1 (~Thunderbi@218.76.25.66) has joined #ceph
[4:15] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz...)
[4:18] * dave_casey (~ceph@67.32.109.20) has joined #ceph
[4:22] * haomaiwang (~haomaiwan@223.223.183.114) has joined #ceph
[4:34] * sarob (~sarob@15.219.162.26) has joined #ceph
[4:38] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[4:40] * LeaChim (~LeaChim@host86-159-115-162.range86-159.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[4:42] * RameshN (~rnachimu@101.222.246.240) has joined #ceph
[4:42] * fmanana (~fdmanana@bl5-173-96.dsl.telepac.pt) has joined #ceph
[4:43] * shang (~ShangWu@175.41.48.77) has joined #ceph
[4:47] * zerick (~eocrospom@190.187.21.53) Quit (Ping timeout: 480 seconds)
[4:50] * fdmanana (~fdmanana@bl5-173-238.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[4:51] * dmsimard is now known as dmsimard_away
[4:56] <gleam> om nom nom
[4:57] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[5:13] * ultimape (~Ultimape@c-174-62-192-41.hsd1.vt.comcast.net) Quit (Read error: Connection reset by peer)
[5:22] * Vacum_ (~vovo@i59F7A77E.versanet.de) has joined #ceph
[5:25] <Jakey> Jakey> i use ceph-deploy mon create but all the mon daemon ranks are -1
[5:25] <Jakey> 10:35 <Jakey> what does that mean
[5:29] * Vacum (~vovo@88.130.206.134) Quit (Ping timeout: 480 seconds)
[5:29] * theanalyst (theanalyst@0001c1e3.user.oftc.net) has joined #ceph
[5:32] * adamcrume (~quassel@c-71-204-162-10.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[5:57] * bandrus (~oddo@216.57.72.205) Quit (Quit: Leaving.)
[5:57] * Cube (~Cube@tacocat.concubidated.com) has joined #ceph
[5:58] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[5:59] * dmsimard_away is now known as dmsimard
[6:04] * dmsimard is now known as dmsimard_away
[6:14] * MACscr (~Adium@c-50-158-183-38.hsd1.il.comcast.net) Quit (Quit: Leaving.)
[6:17] * lupu (~lupu@86.107.101.214) has joined #ceph
[6:21] * vbellur (~vijay@122.167.201.111) Quit (Ping timeout: 480 seconds)
[6:29] * jianingy (~jianingy@211.151.112.5) has joined #ceph
[6:38] * lucas1 (~Thunderbi@218.76.25.66) Quit (Quit: lucas1)
[6:40] * vz (~vz@122.172.253.118) Quit (Ping timeout: 480 seconds)
[6:42] * sarob (~sarob@15.219.162.26) Quit (Remote host closed the connection)
[6:42] * ashishchandra (~ashish@49.32.0.88) has joined #ceph
[6:42] * sarob (~sarob@15.219.162.26) has joined #ceph
[6:43] * sarob_ (~sarob@15.219.162.26) has joined #ceph
[6:47] * houkouonchi-home (~linux@pool-71-177-96-154.lsanca.fios.verizon.net) Quit (Remote host closed the connection)
[6:48] * vbellur (~vijay@121.244.87.117) has joined #ceph
[6:51] * sarob (~sarob@15.219.162.26) Quit (Ping timeout: 480 seconds)
[6:51] * vz (~vz@122.172.253.118) has joined #ceph
[6:51] * sarob_ (~sarob@15.219.162.26) Quit (Ping timeout: 480 seconds)
[6:51] * vz (~vz@122.172.253.118) Quit ()
[6:53] * capri (~capri@212.218.127.222) has joined #ceph
[6:55] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[6:55] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[6:55] * capri_oner (~capri@212.218.127.222) Quit (Ping timeout: 480 seconds)
[6:56] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit ()
[7:04] * ashishchandra (~ashish@49.32.0.88) Quit (Ping timeout: 480 seconds)
[7:10] * rdas (~rdas@121.244.87.115) has joined #ceph
[7:14] * capri (~capri@212.218.127.222) Quit (Ping timeout: 480 seconds)
[7:17] * ashishchandra (~ashish@49.32.0.114) has joined #ceph
[7:18] * Cube (~Cube@tacocat.concubidated.com) Quit (Remote host closed the connection)
[7:24] * capri (~capri@212.218.127.222) has joined #ceph
[7:28] * swami (~swami@49.32.0.126) has joined #ceph
[7:32] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) has joined #ceph
[7:34] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[7:34] * Qten (~Qu310@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[7:34] * Qu310 (~Qu310@ip-121-0-1-110.static.dsl.onqcomms.net) Quit (Read error: Connection reset by peer)
[7:38] * lucas1 (~Thunderbi@218.76.25.66) has joined #ceph
[7:38] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[7:38] * lucas1 (~Thunderbi@218.76.25.66) Quit ()
[7:41] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[7:43] * capri (~capri@212.218.127.222) Quit (Read error: Connection reset by peer)
[7:45] * vbellur (~vijay@121.244.87.117) has joined #ceph
[7:46] * capri (~capri@212.218.127.222) has joined #ceph
[7:46] * tdb (~tdb@myrtle.kent.ac.uk) Quit (Ping timeout: 480 seconds)
[7:47] * tdb (~tdb@myrtle.kent.ac.uk) has joined #ceph
[7:56] * v2 (~vshankar@121.244.87.117) has joined #ceph
[7:57] * thb (~me@2a02:2028:2b3:72b1:8c8b:e62d:2d0:aaef) has joined #ceph
[7:57] * lucas1 (~Thunderbi@222.247.57.50) has joined #ceph
[7:58] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[7:59] * lucas1 (~Thunderbi@222.247.57.50) Quit ()
[8:17] * lucas1 (~Thunderbi@218.76.25.66) has joined #ceph
[8:26] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[8:28] * yguang11_ (~yguang11@vpn-nat.corp.tw1.yahoo.com) has joined #ceph
[8:30] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) Quit (Read error: Connection reset by peer)
[8:34] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[8:39] * lucas1 (~Thunderbi@218.76.25.66) Quit (Quit: lucas1)
[8:42] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[8:44] * vbellur (~vijay@121.244.87.117) has joined #ceph
[8:45] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[8:45] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[8:48] * lucas1 (~Thunderbi@222.240.148.130) has joined #ceph
[8:51] * lcavassa (~lcavassa@89.184.114.246) has joined #ceph
[8:52] * longguang (~chatzilla@123.126.33.253) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 29.0.1/20140506152807])
[8:53] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[8:54] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[8:56] * lczerner (~lczerner@nat-pool-brq-t.redhat.com) has joined #ceph
[8:56] * MACscr (~Adium@c-50-158-183-38.hsd1.il.comcast.net) has joined #ceph
[9:01] * RameshN (~rnachimu@101.222.246.240) Quit (Ping timeout: 480 seconds)
[9:04] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:08] * analbeard (~shw@support.memset.com) has joined #ceph
[9:09] * i_m (~ivan.miro@gbibp9ph1--blueice1n1.emea.ibm.com) has joined #ceph
[9:10] * RameshN (~rnachimu@101.222.246.240) has joined #ceph
[9:21] * v2 (~vshankar@121.244.87.117) Quit (Ping timeout: 480 seconds)
[9:22] * lucas1 (~Thunderbi@222.240.148.130) Quit (Quit: lucas1)
[9:27] * ade (~abradshaw@a181.wifi.FSV.CVUT.CZ) has joined #ceph
[9:27] * ade (~abradshaw@a181.wifi.FSV.CVUT.CZ) Quit (Remote host closed the connection)
[9:31] * cok (~chk@2a02:2350:18:1012:b5ad:8d76:a9cb:f300) has joined #ceph
[9:34] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[9:34] * ashishchandra (~ashish@49.32.0.114) Quit (Ping timeout: 480 seconds)
[9:35] * rendar (~I@host211-176-dynamic.3-87-r.retail.telecomitalia.it) has joined #ceph
[9:35] * ashishchandra (~ashish@49.32.0.114) has joined #ceph
[9:37] * jtang_ (~jtang@80.111.83.231) Quit (Remote host closed the connection)
[9:38] * barnim (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[9:39] * overclk (~vshankar@121.244.87.117) has joined #ceph
[9:40] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) Quit (Ping timeout: 480 seconds)
[9:42] * kapil (~ksharma@2620:113:80c0:5::2222) has joined #ceph
[9:47] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Ping timeout: 480 seconds)
[9:49] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) has joined #ceph
[9:54] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) Quit (Remote host closed the connection)
[9:57] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Quit: Leaving.)
[10:14] * ksingh (~Adium@2001:708:10:10:54d7:67d8:2392:8648) has joined #ceph
[10:15] <ksingh> Need help in installing ceph on Centos 7 , problem is Ceph needs python2.6 and Centos7 has default python version as 2.7
[10:15] <ksingh> [ceph-client1][WARNIN] Error: Package: python-werkzeug-0.8.3-2.el6.noarch (Ceph-noarch)
[10:15] <ksingh> [ceph-client1][WARNIN] Requires: python(abi) = 2.6
[10:15] <ksingh> [ceph-client1][WARNIN] Installed: python-2.7.5-16.el7.x86_64 (@anaconda)
[10:15] <ksingh> [ceph-client1][WARNIN] python(abi) = 2.7
[10:15] <ksingh> [ceph-client1][WARNIN] python(abi) = 2.7
[10:15] <ksingh> pls help
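Following houkouonchi-work's earlier note that the rhel7/el7 builds should work on CentOS 7, the usual fix is to point the yum repo at an el7 build tree instead of el6. A hypothetical /etc/yum.repos.d/ceph.repo sketch; the baseurl path is an assumption and should be checked against what ceph.com actually publishes:

    [ceph]
    name=Ceph packages
    # path below is an assumption -- verify the el7/rhel7 tree that exists on ceph.com
    baseurl=http://ceph.com/rpm-firefly/rhel7/x86_64/
    enabled=1
    gpgcheck=1
    gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc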
[10:16] * madkiss (~madkiss@46.114.46.201) has joined #ceph
[10:18] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:21] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) has joined #ceph
[10:26] * nljmo (~nljmo@5ED6C263.cm-7-7d.dynamic.ziggo.nl) Quit (Ping timeout: 480 seconds)
[10:32] * vmx (~vmx@dslb-084-056-062-146.084.056.pools.vodafone-ip.de) has joined #ceph
[10:34] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[10:40] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[10:45] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[10:52] * LeaChim (~LeaChim@host86-159-115-162.range86-159.btcentralplus.com) has joined #ceph
[11:03] * mgarcesMZ (~mgarces@5.206.228.5) has joined #ceph
[11:03] <mgarcesMZ> hi there
[11:05] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[11:12] * ade (~abradshaw@a181.wifi.fsv.cvut.cz) has joined #ceph
[11:13] * jtang_ (~jtang@80.111.83.231) has joined #ceph
[11:15] * joao|lap (~JL@78.29.191.247) has joined #ceph
[11:15] * ChanServ sets mode +o joao|lap
[11:17] * Joffer (~Christoph@212.62.233.233) has joined #ceph
[11:18] * nljmo (~nljmo@5ED6C263.cm-7-7d.dynamic.ziggo.nl) has joined #ceph
[11:19] * madkiss (~madkiss@46.114.46.201) Quit (Ping timeout: 480 seconds)
[11:20] * JC1 (~JC@AMontpellier-651-1-420-97.w92-133.abo.wanadoo.fr) Quit (Quit: Leaving.)
[11:20] * jtang_ (~jtang@80.111.83.231) Quit (Remote host closed the connection)
[11:21] * jtang_ (~jtang@80.111.83.231) has joined #ceph
[11:25] * jtang_ (~jtang@80.111.83.231) Quit (Remote host closed the connection)
[11:31] * jtang_ (~jtang@80.111.83.231) has joined #ceph
[11:31] * jtang_ (~jtang@80.111.83.231) Quit (Remote host closed the connection)
[11:32] * jtang_ (~jtang@80.111.83.231) has joined #ceph
[11:32] <Joffer> Hi there. Our ceph guru has left the company and now I'm left with running our ceph cluster.. yikes.. haven't had time to read much of the great ceph documentation yet, so here I am, looking for some help. Our application that stores its data via NFS to our ceph system is down from time to time due to problems with ceph: HEALTH_WARN 10 requests are blocked > 32 sec; 2 osds have slow requests
[11:32] <Joffer> I've outlined some info on the systems here: http://pastebin.com/Uct7veSe
[11:33] <Joffer> what can I do to troubleshoot this?
[11:33] * jtang_ (~jtang@80.111.83.231) Quit (Remote host closed the connection)
[11:33] <Joffer> it comes and goes. When ceph has problems the NFS is blocked and out applications cant read/write to disk (nfs)
[11:34] * jtang_ (~jtang@80.111.83.231) has joined #ceph
[11:34] <wedge> Joffer: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-November/006229.html
[11:36] <Joffer> it's random osds that get reported as slow/blocking..
[11:37] * lucas1 (~Thunderbi@222.240.148.154) has joined #ceph
[11:37] <Joffer> I will doublecheck the disks.. it's connected to a raid controller so need to find the commands for megaraid to check disks
[11:38] * swami (~swami@49.32.0.126) Quit (Quit: Leaving.)
[11:40] * overclk (~vshankar@121.244.87.117) Quit (Ping timeout: 480 seconds)
[11:42] <cookednoodles> Joffer, on osd.0 on osd.62
[11:42] <cookednoodles> look in that direction
[11:42] * jtang_ (~jtang@80.111.83.231) Quit (Remote host closed the connection)
[11:43] <Joffer> right now yes, I will. we have two servers in each datacenter, and one copy in each site
[11:43] <Joffer> (just to give updated information)
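A few commands that may help narrow down which OSD is actually slow and why; the osd id and socket path below are examples only:

    ceph health detail                 # names the OSDs carrying the blocked requests
    ceph osd perf                      # per-OSD commit/apply latency
    # on the OSD's own host, if your version exposes it:
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_ops_in_flight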
[11:45] * emmenemoi (~seb@46.218.46.150) has joined #ceph
[11:45] * lucas1 (~Thunderbi@222.240.148.154) Quit (Quit: lucas1)
[11:45] <emmenemoi> Hi
[11:46] <mgarcesMZ> I have setup my ceph cluster with 2 osd, and it was "active+clean"... I have rebooted osd.0, and now it shows: "96 active+degraded, 96 active+clean"
[11:46] <cookednoodles> you'll need to wait for it to resync
[11:46] <mgarcesMZ> how can I get it back to "192 active+clean"
[11:46] <mgarcesMZ> but it's taking forever
[11:46] <mgarcesMZ> shouldn't be fast?
[11:46] <absynth> welcome to ceph ;)
[11:46] <cookednoodles> run ceph -w
[11:47] <mgarcesMZ> cookednoodles: doing it
[11:47] * bkopilov (~bkopilov@213.57.64.150) Quit (Ping timeout: 480 seconds)
[11:47] <mgarcesMZ> ceph health outputs: HEALTH_OK
[11:47] <emmenemoi> I couldn't find an answer to a question I have for long: is there any loopback risk in installing the ceph MON on an RBD-backed VM, the RBD being on the ceph cluster monitored by the MON installed on the VM ?
[11:49] <mgarcesMZ> i dont get it... first part of "ceph -w", says "192 active+clean" but then there is a log line telling otherwise
[11:49] <absynth> emmenemoi: yes, cascading failure
[11:49] <absynth> if the cluster starts recovering (couple OSDs are down), the mon might suffer i/o issues and then be thrown out of the mon cluster.
[11:50] <emmenemoi> absynth, what do you mean ? (there could be several MON VMs for HA)
[11:50] <absynth> yes, but if they're all on the ceph that they're monitoring, any issue in that ceph will cause issues for the mons
[11:50] <emmenemoi> ah yes. Then ALL the MON, not just one
[11:50] <absynth> well, also ONE mon
[11:51] <absynth> consider a total ceph failure (power outage or so)
[11:51] <absynth> you will only be able to start the mons that are not on the Ceph cluster after such an event
[11:51] <absynth> putting additional risk on your tedious recovery
[11:51] <emmenemoi> if so: big shit: they'll be on the same rack for the moment
[11:51] <absynth> some kind of a bootstrap issue
[11:51] <absynth> summing up: i wouldn't do it.
[11:51] <cookednoodles> emmenemoi, A + B :P
[11:51] <absynth> we had a+b failure in a tier3 data center a couple months ago
[11:51] * capri (~capri@212.218.127.222) Quit (Ping timeout: 480 seconds)
[11:51] <absynth> whole building was black
[11:52] <cookednoodles> ditto
[11:52] <absynth> where are you? frankfurt?
[11:52] <cookednoodles> docklands
[11:52] <absynth> ah, we're in frankfurt
[11:52] <absynth> point is, these things happen. :/
[11:52] <emmenemoi> hum. yes, I know. but multi datacenter replication is costly....
[11:53] <absynth> and impossible with ceph
[11:53] <cookednoodles> its planned though
[11:53] <absynth> uh huh ;)
[11:53] <emmenemoi> why impossible?
[11:53] <absynth> it's not implemented
[11:53] <emmenemoi> even with a good CRUSH map ?
[11:53] <absynth> you cannot run a multi-location ceph cluster
[11:53] <absynth> no, ceph is too latency sensitive
[11:54] <emmenemoi> ah ok. didn't know.
[11:54] <Joffer> cookednoodles, I checked all disk with "/opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL" and none of them seems to have any errors. not osd.0 or osd.61
[11:54] <emmenemoi> :(
[11:54] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) has joined #ceph
[11:55] <cookednoodles> Well first do the general IT fix for everything
[11:55] <Vacum_> we have a multi dc setup running. but with a latency between DCs of ~1ms :)
[11:55] <cookednoodles> turn off the osd and then turn it back on again
[11:55] <Joffer> I do think it was loosened up now, as the new status is: HEALTH_WARN 20 pgs backfill; 173 pgs backfilling; 193 pgs degraded; 193 pgs stuck unclean; 2 requests are blocked > 32 sec; 2 osds have slow requests; recovery 161415/31105730 degraded (0.519%); recovering 70 o/s, 153MB/s
[11:55] <emmenemoi> That's what ceph calls geo-replication maybe
[11:56] <Joffer> but I will get the same problem later today, as I get this several times a day the last weeks
[11:56] <cookednoodles> backfilling ?
[11:56] <cookednoodles> did you replace a drive ?
[11:56] <Joffer> nope
[11:56] <kraken> http://i.imgur.com/xKYs9.gif
[11:56] <emmenemoi> Vacum_: good to know. ceph just needs low latency interconnects. But what is your average inter DC bandwidth consumption !!?
[11:57] <mgarcesMZ> other dumb question I have... 2117 MB used, 28580 MB / 30698 MB avail (each OSD is a 16GB partition)... I thought the total space would be 16...
[11:58] <mgarcesMZ> I have used journal size = 1024, so I get the 30... I just don't get it why
[11:58] * mgarcesMZ (~mgarces@5.206.228.5) has left #ceph
[11:58] * mgarcesMZ (~mgarces@5.206.228.5) has joined #ceph
[11:58] <Vacum_> emmenemoi: for ceph? normal operation around 6-8gbit/s. recovery over 20gbit/s
[11:59] <Vacum_> emmenemoi: and of course depending on the time of day. at night there is less traffic
[11:59] <emmenemoi> Vacum_: just on the inter-DC link ? nice. It's not so big
[11:59] <Joffer> ceph -w gives me: 2014-08-08 11:59:05.855047 mon.0 [INF] pgmap v28127429: 24192 pgs: 24047 active+clean, 1 active+degraded+wait_backfill, 65 active+degraded+backfilling, 12 active+degraded+remapped+wait_backfill, 67 active+degraded+remapped+backfilling; 33199 GB data, 107 TB used, 237 TB / 345 TB avail; 5413B/s rd, 27065B/s wr, 4op/s; 117414/31105730 degraded (0.377%); recovering 347 o/s, 863MB/s
[11:59] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[11:59] <Vacum_> emmenemoi: that is only for our ceph cluster... :)
[11:59] <Joffer> does this look ok?
[11:59] <Joffer> remapping? hmm..
[12:00] <emmenemoi> Vacum_: and what is the storage size of your cluster ?
[12:00] <Vacum_> mgarcesMZ: you have 2 osds, each 16GB. total available space is (after formatting) 31GB. your pools' replica size is not calculated in there
[12:01] <Vacum_> emmenemoi: currently we have ~1PB stored
[12:01] <mgarcesMZ> Vacum_: I thought that it would show me the total space I have available to use, not the total space in the cluster..
[12:02] <Gugge-47527> the total space available for use is hard to figure out
[12:02] <Vacum_> mgarcesMZ: the total available space depends on the replication size of your pools. and they can be different for different pools
[12:02] <mgarcesMZ> I see I have to spend more time in the docs :(
[12:02] <Vacum_> mgarcesMZ: so the available space is shown in gross available
[12:02] <kraken> http://i.imgur.com/XEEI0Rn.gif
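A worked version of Vacum_'s point, using mgarcesMZ's numbers; the usable figure is an approximation that ignores journals and filesystem overhead:

    # ~31 GB gross (2 osds x 16 GB) divided by a pool replica size of 2
    echo $(( 31 / 2 ))    # => ~15 GB of net usable space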
[12:02] <emmenemoi> Vacum_: thanks. It looks promising then. Not so much inter DC bandwidth consumption !
[12:02] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) Quit (Ping timeout: 480 seconds)
[12:03] <Vacum_> emmenemoi: eh. that totally depends! ie if your primary PG is in DC A, and you have two replicas in DC B, the primary PG's osd will send the written object *twice* over your inter DC
[12:03] <Vacum_> emmenemoi: so the inter dc traffic is directly dependent on the client traffic to the cluster
[12:04] <Vacum_> emmenemoi: if you want to write 2GB/s to the cluster and have 2 replicas per DC, your inter DC will see *4* GB/s replication traffic
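The same arithmetic as a one-liner, using Vacum_'s numbers (2 GB/s of client writes, 2 replicas hosted in the remote DC):

    echo $(( 2 * 2 ))    # => 4 GB/s of replication traffic crossing the inter-DC link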
[12:04] <emmenemoi> I used to focus on ZFS backed Lustre. But CEPH seems to be much better
[12:04] * capri (~capri@212.218.127.222) has joined #ceph
[12:04] <emmenemoi> yes, sure. That's where a clever setup and CRUSH map is requested.
[12:05] * ade (~abradshaw@a181.wifi.fsv.cvut.cz) Quit (Quit: Too sexy for his shirt)
[12:06] <emmenemoi> by clever setup I mean clever use of the cluster.
[12:08] <emmenemoi> and what is the risk of total failure of your cluster , ie: ceph can't recover the data in any way: data lost ? Is there a need of archiving the cluster in a "safe standard lowcost storage"
[12:09] <emmenemoi> ie: regular exports or diff exports of the pools to a kind of bunker room ?
[12:15] * bjornar (~bjornar@ns3.uniweb.no) has joined #ceph
[12:16] * overclk (~vshankar@121.244.87.117) has joined #ceph
[12:20] * haomaiwa_ (~haomaiwan@223.223.183.114) has joined #ceph
[12:20] * haomaiwang (~haomaiwan@223.223.183.114) Quit (Read error: Connection reset by peer)
[12:21] <Joffer> i'm stuck in what and where to troubleshoot. Now my cluster is fine, only to go into warning with "X requests are blocked > 32 sec; Y osds have slow requests" again later today.
[12:21] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[12:22] <Joffer> could it be some tcp buffers? or other networking settings? all osd servers have 4x1GbE links and 10GbE between datacenters
[12:23] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[12:25] <mgarcesMZ> I don't get this.. I shutdown the main mon (but I have other 2)... now a ceph command won't output anything
[12:26] <nizedk> wait
[12:26] <nizedk> does your ceph.conf reflect all the mons?
[12:27] <mgarcesMZ> it shows me mon_host and mon_initial_members (pointing to node1)
[12:27] <mgarcesMZ> when I added extra monitors with "ceph-deploy", shouldn't it update the conf?
[12:27] <nizedk> no
[12:28] <mgarcesMZ> ok
[12:28] <mgarcesMZ> should I put it on mon_host and mon_initial_members?
[12:28] <mgarcesMZ> the 3 nodes?
[12:28] <nizedk> initial_members = names, mon_host = IP's of
[12:29] <nizedk> (that's how mine looks now)
[12:29] <mgarcesMZ> ok
[12:29] <mgarcesMZ> how do I push the new config to every node?
[12:29] <mgarcesMZ> with ceph-deploy
[12:29] <nizedk> ceph-deploy config push node1 node2 node3
[12:29] <nizedk> ...
[12:29] <mgarcesMZ> cool, let me try
[12:29] <mgarcesMZ> also
[12:29] <mgarcesMZ> when I added the osd
[12:29] <nizedk> (probably --overwrite-conf)
[12:29] <mgarcesMZ> there is nothing on the conf either
[12:30] <mgarcesMZ> I should update manually the conf?
[12:30] <nizedk> yes, that is my understanding
[12:30] <nizedk> ceph-deploy generates a starting point, it does not maintain your ceph.conf
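Putting nizedk's advice together, a sketch of the workflow; the node names are the ones used in the example above:

    # edit ceph.conf on the admin node so mon_initial_members / mon_host list all three mons, then:
    ceph-deploy --overwrite-conf config push node1 node2 node3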
[12:30] <mgarcesMZ> oh! (headbang)
[12:30] <nizedk> Been there, done that
[12:31] <mgarcesMZ> Im just in ceph since monday, thank you for your help :)
[12:31] <emmenemoi> generally speaking, nobody archive its ceph pools in case of CEPH complete collapse (during version upgrades maybe, I don't know) ? CEPH is reliable enough ?
[12:31] <mgarcesMZ> its a bit overwhelming
[12:32] <nizedk> "generally speaking, nobody archive its $product in case of $product complete collapse (during version upgrades maybe, I don't know) ? $product is reliable"
[12:32] <nizedk> that is related to your business continuity plans, not at technical question regardinng ceph...
[12:33] <nizedk> what if you accidentally erase something?
[12:33] <nizedk> and so on...
[12:33] <nizedk> raid != backup, ceph != magic
[12:33] <emmenemoi> sure. But I'm wondering (or trying to evaluate) the % of CEPH total collapse. Not easy :)
[12:34] <emmenemoi> And by experience, during version upgrades, the risk is high. But I never did any CEPH upgrades of a cluster...
[12:34] <emmenemoi> (experience on other systems)
[12:35] <emmenemoi> to build a continuity plan :) (erase something: there's snapshots on CEPH)
[12:37] <mgarcesMZ> Im getting a lot of "clock skew" on my nodes..
[12:37] <mgarcesMZ> I run this with ntpd...
[12:37] <mgarcesMZ> have no clue why this is happening
[12:37] <mgarcesMZ> that is a phrase I am using a lot this week "have no clue"
[12:37] <mgarcesMZ> :)
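Two things worth checking for the clock skew warnings; the mon option below is shown with its default and is the warning threshold, not a fix:

    ntpq -p    # on each mon host: confirm ntpd is actually synced to a peer
    # ceph.conf ([mon] section):
    #   mon clock drift allowed = 0.05   # seconds of skew tolerated before the warning fires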
[12:39] * jtang_ (~jtang@80.111.83.231) has joined #ceph
[12:39] <joao|lap> just nitpicking: "Ceph", not "CEPH" :)
[12:39] * swami (~swami@223.227.56.220) has joined #ceph
[12:39] * swami (~swami@223.227.56.220) has left #ceph
[12:40] <mgarcesMZ> joao|lap: I think people read about Ceph @ CERN, and then just mix both
[12:40] <emmenemoi> ok, sorry :)
[12:40] <mgarcesMZ> :)
[12:40] <joao|lap> eh, probably :p
[12:40] * cok (~chk@2a02:2350:18:1012:b5ad:8d76:a9cb:f300) Quit (Quit: Leaving.)
[12:41] <mgarcesMZ> joao|lap: sorry to ask, but, where are you from?
[12:43] <absynth> lisboa
[12:43] * blSnoopy (~snoopy@miram.persei.mw.lg.virgo.supercluster.net) Quit (Remote host closed the connection)
[12:45] <joao|lap> yep, Lisboa
[12:47] <mgarcesMZ> ;)
[12:47] <mgarcesMZ> I live in mozambique... but I am portuguese
[12:47] <joao|lap> \o/
[12:47] <mgarcesMZ> was also living and working in lisbon
[12:47] <mgarcesMZ> :)
[12:48] <joao|lap> cool, we should have a beer when you come over for a visit :)
[12:49] <mgarcesMZ> sure thing!
[12:49] <mgarcesMZ> only next year :(
[12:49] <mgarcesMZ> by this time next year
[12:51] * overclk (~vshankar@121.244.87.117) Quit (Ping timeout: 480 seconds)
[12:51] * emmenemoi (~seb@46.218.46.150) has left #ceph
[12:53] * emmenemoi (~seb@46.218.46.150) has joined #ceph
[12:55] <mgarcesMZ> I must find the config for the kicking out of osd
[12:55] <joao|lap> mgarcesMZ, are you guys using ceph in lisbon or just for overseas projects?
[12:55] <mgarcesMZ> it took 300 seconds to realise a node was down
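The 300 seconds mgarcesMZ saw matches the default of "mon osd down out interval"; a sketch of the relevant options, assuming roughly these defaults for this era of Ceph:

    [mon]
    mon osd down out interval = 300   # seconds a down OSD waits before being marked out
    [osd]
    osd heartbeat grace = 20          # missed-heartbeat seconds before peers report an OSD down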
[12:56] <mgarcesMZ> I work directly in mozambique
[12:56] <mgarcesMZ> not for a portuguese company
[12:56] <joao|lap> okay
[12:56] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Ping timeout: 480 seconds)
[12:57] <mgarcesMZ> I am investigating Ceph, so we can implement it as a object store with HA for our frontend banking application
[12:57] <joao|lap> been looking for portuguese companies using ceph for a while and have found only one; and I think they were solely looking into it, don't think it was a serious endeavor :(
[12:57] <mgarcesMZ> looking also to do it with federated gateways, since we have main datacenter and disaster recovery, but the links are slow
[12:58] <mgarcesMZ> the last company I've worked, is doing some work with ceph
[12:58] <mgarcesMZ> building the first OpenStack provider in Portugal
[12:58] <joao|lap> nice
[12:59] <joao|lap> we should talk a bit about that; would love to reach out to them and have a chat :)
[12:59] <mgarcesMZ> sure, I'm still in contact with the guy (small company) and I think I will be doing some remote work also
[13:00] <joao|lap> anyway, that'll have to wait for another day unfortunately
[13:00] <mgarcesMZ> ;)
[13:00] <mgarcesMZ> will meet again here
[13:00] <joao|lap> have to pack, grab some lunch and head to the airport
[13:00] <joao|lap> awesome
[13:00] <joao|lap> looking forward to it
[13:00] <joao|lap> o/
[13:00] <kraken> \o
[13:01] <mgarcesMZ> o/
[13:01] <kraken> \o
[13:02] <mgarcesMZ> funny thing.. did not update ceph.conf with the osd info, but when I reboot the nodes (which also are mon), they start both the osd and the mon
[13:06] * bkopilov (~bkopilov@213.57.65.56) has joined #ceph
[13:07] * lcavassa (~lcavassa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[13:09] * zhaochao (~zhaochao@124.207.139.17) has left #ceph
[13:10] * Hell_Fire (~hellfire@123-243-155-184.static.tpgi.com.au) has joined #ceph
[13:11] * Hell_Fire (~hellfire@123-243-155-184.static.tpgi.com.au) Quit ()
[13:11] * Hell_Fire (~hellfire@123-243-155-184.static.tpgi.com.au) has joined #ceph
[13:14] * emmenemoi (~seb@46.218.46.150) has left #ceph
[13:22] * overclk (~vshankar@121.244.87.117) has joined #ceph
[13:24] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[13:24] * mgarcesMZ (~mgarces@5.206.228.5) Quit (Quit: mgarcesMZ)
[13:27] * zhangdongmao (~zhangdong@203.192.156.9) Quit (Ping timeout: 480 seconds)
[13:39] * diegows (~diegows@190.190.5.238) has joined #ceph
[13:41] * haomaiwa_ (~haomaiwan@223.223.183.114) Quit (Remote host closed the connection)
[13:41] * haomaiwang (~haomaiwan@223.223.183.114) has joined #ceph
[13:46] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[13:50] * haomaiwang (~haomaiwan@223.223.183.114) Quit (Ping timeout: 480 seconds)
[13:55] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) has joined #ceph
[13:56] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) Quit (Remote host closed the connection)
[13:56] <Xiol> Hi. We're preparing to replace some of our disks with SSDs for journaling, but we don't want to rebuild the cluster from scratch, so we've started to vacate disks to be removed by taking them out of the crush map and letting data migrate off, but we're seeing insanely high sys cpu usage, and load shoots up to 1000+ (currently at 499!). there's virtually no IO wait (1-2% occasionally), just really high sys cpu usage, any ideas what we can do to reduce this? g
[13:57] * ganders (~root@200-127-158-54.net.prima.net.ar) has joined #ceph
[13:57] <Xiol> The high load is also causing some of the OSDs to be marked as down temporarily, which is just compounding the problem
[14:03] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) Quit (Ping timeout: 480 seconds)
[14:05] * sz0 (~sz0@94.55.197.185) Quit ()
[14:05] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) has joined #ceph
[14:09] * kippi (~oftc-webi@host-4.dxi.eu) Quit (Remote host closed the connection)
[14:13] * mgarcesMZ (~mgarces@5.206.228.5) has joined #ceph
[14:13] * absynth (~absynth@irc.absynth.de) Quit (Read error: Connection reset by peer)
[14:20] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[14:20] * vmx (~vmx@dslb-084-056-062-146.084.056.pools.vodafone-ip.de) Quit (Quit: Leaving)
[14:22] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[14:30] * ade (~abradshaw@h31-3-227-203.host.redstation.co.uk) has joined #ceph
[14:31] * ade (~abradshaw@h31-3-227-203.host.redstation.co.uk) Quit (Remote host closed the connection)
[14:31] * fretb (~fretb@drip.frederik.pw) Quit (Quit: leaving)
[14:31] * fretb (~fretb@pie.frederik.pw) has joined #ceph
[14:32] * fretb (~fretb@pie.frederik.pw) Quit ()
[14:33] * wschulze (~wschulze@cpe-74-71-240-233.nyc.res.rr.com) has joined #ceph
[14:37] * KevinPerks (~Adium@2606:a000:80a1:1b00:dc58:32ca:8f2c:1c4c) has joined #ceph
[14:38] * fretb (~fretb@pie.frederik.pw) has joined #ceph
[14:39] * fretb (~fretb@pie.frederik.pw) Quit ()
[14:40] * cok (~chk@2a02:2350:1:1203:ad8d:1a8:48c3:98c5) has joined #ceph
[14:40] * wschulze (~wschulze@cpe-74-71-240-233.nyc.res.rr.com) Quit (Quit: Leaving.)
[14:41] * fretb (~fretb@drip.frederik.pw) has joined #ceph
[14:45] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Ping timeout: 480 seconds)
[14:46] * jtang__ (~jtang@80.111.83.231) has joined #ceph
[14:46] * jtang_ (~jtang@80.111.83.231) Quit (Read error: Connection reset by peer)
[14:47] <kitz> Xiol: try setting noout & nodown while doing this migration (you may also want to temporarily suspend scrubbing with noscrub and nodeep-scrub)
[14:47] <Vacum_> Xiol: do you have boxes with LOT of memory and NUMA?
[14:47] <Vacum_> Xiol: also check if core0 of cpu0 is overwhelmed with irqs from your network cards
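A sketch of the flags kitz suggests; set them before pulling more OSDs out, and remember to clear them afterwards:

    ceph osd set noout
    ceph osd set nodown
    ceph osd set noscrub
    ceph osd set nodeep-scrub
    # ... run the migration, then:
    ceph osd unset nodeep-scrub
    ceph osd unset noscrub
    ceph osd unset nodown
    ceph osd unset noout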
[14:52] <ganders> any recommended size for osd_journal_size for 3TB OSD disks, and 64GB of Ram on server, to use RAMDISK?
[14:53] <Vacum_> ganders: you want to put the journal on a ramdisk?
[14:53] * JC (~JC@AMontpellier-651-1-420-97.w92-133.abo.wanadoo.fr) has joined #ceph
[14:54] <ksingh> guys , whats the meaning of fs_commit_latency(ms) and fs_apply_latency(ms) , in the output of ceph osd perf
[14:56] * yguang11_ (~yguang11@vpn-nat.corp.tw1.yahoo.com) Quit (Read error: Connection reset by peer)
[14:57] <ganders> Vacum_: yes
[14:57] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) has joined #ceph
[14:57] <ganders> I know that if power is gone i lost all the osd's but i want to test it out
[15:00] <Vacum_> ganders: 1GB journal size should be fine. how many disks in one osd host?
[15:00] * pressureman (~pressurem@62.217.45.26) has joined #ceph
[15:01] <ganders> 9 x 3TB OSD disks and 64GB of RAM
[15:02] <ganders> I've already create 9 ramdisks: "modprobe brd rd_nr=9 rd_size=4194304 max_part=0"
[15:02] <ganders> but im getting a satanic message while running "ceph-osd -c /etc/ceph/ceph.conf -i X --mkjournal" cmd
[15:03] <ganders> http://pastebin.com/raw.php?i=XJuVbY02
[15:03] * joao|lap (~JL@78.29.191.247) Quit (Ping timeout: 480 seconds)
[15:04] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[15:04] * RameshN (~rnachimu@101.222.246.240) Quit (Ping timeout: 480 seconds)
[15:05] <pressureman> ceph default journal size is now 5GB
[15:05] <pressureman> are you specifying it explicitly in your ceph.conf?
[15:05] <Vacum_> is rd_size is bytes?
[15:05] <pressureman> rd_size is in kilobytes
[15:05] <ganders> so it would be: "modprobe brd rd_nr=9 rd_size=5242880 max_part=0"
[15:06] <pressureman> or you could reduce the journal size in ceph.conf
[15:06] <ganders> and in the ceph.conf file it would be: osd_journal_size = 5242880
[15:06] <ganders> right?
[15:06] <kraken> http://i.imgur.com/RvquHs0.gif
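One unit detail worth flagging here: rd_size is in kilobytes, while "osd journal size" is in megabytes (pressureman's 4096-for-4GB figure later in the log is consistent with that). So for 5 GB journals the pair would look roughly like:

    modprobe brd rd_nr=9 rd_size=5242880 max_part=0   # 5242880 KB = 5 GB per ramdisk
    # ceph.conf:
    #   osd journal size = 5120                       # 5120 MB = 5 GB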
[15:06] <pressureman> you're going to burn up 36 GB of your ram for journals
[15:07] <ganders> yeah but i could put more ram on it.. its cheap, but again it depends on the tests that i made from this kind of setup
[15:07] <pressureman> i tested with journal on tmpfs just a couple of days ago
[15:08] <ganders> and how about the results? can you share them?
[15:08] <pressureman> create a single tmpfs, large enough to hold all journals, and explicitly set journal size in ceph.conf
[15:08] <ganders> pressureman: can you share the steps for that conf on tmpfs?
[15:09] * rwheeler (~rwheeler@173.48.207.57) Quit (Quit: Leaving)
[15:09] * rdas (~rdas@121.244.87.115) Quit (Quit: Leaving)
[15:09] <pressureman> it's not rocket surgery - just create a tmpfs of a given size, mount it somewhere, and configure your OSDs to use uniquely named journals on that tmpfs
[15:10] <ganders> yeah that's ok for the first steps but in the ceph.conf file what parameters do i need to change?
[15:10] * brad_mssw (~brad@shop.monetra.com) has joined #ceph
[15:11] <pressureman> just set "journal dio = false" and "osd journal size = 4096" (for a 4 GB journal)
[15:12] <ganders> ok and then you conf the "osd journal = /mnt/tmpfs" for all the osd's?
[15:13] <pressureman> i just symlinked each osd's journal to a separate file on the tmpfs
[15:13] * ganders (~root@200-127-158-54.net.prima.net.ar) Quit (Quit: WeeChat 0.4.1)
[15:13] * thb (~me@2a02:2028:2b3:72b0:8c8b:e62d:2d0:aaef) has joined #ceph
[15:14] * ganders (~root@200-127-158-54.net.prima.net.ar) has joined #ceph
[15:14] <pressureman> but you could also do something like "osd journal = /mnt/tmpfs/$cluster-$id/journal"
[15:14] <ganders> oh ok thanks pressureman, i will check it out and see if that work fine
[15:14] <pressureman> not sure if it will automagically create dirs tho, so /mnt/tmpfs/$cluster-$id-journal might be better
[15:14] <kraken> ಠ_ಠ
[15:15] * zack_dolby (~textual@p8505b4.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz...)
[15:15] <pressureman> you shouldn't really use journals on ramdisk/tmpfs for production systems though
[15:16] <pressureman> i was only doing this to prove that the SSDs in the system were a bottleneck
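A consolidated sketch of pressureman's tmpfs approach, for testing only; sizes, paths and the osd id are examples:

    mount -t tmpfs -o size=40g tmpfs /mnt/tmpfs
    # ceph.conf:
    #   journal dio = false
    #   osd journal size = 4096
    #   osd journal = /mnt/tmpfs/$cluster-$id-journal
    # stop each OSD and flush its old journal before switching, then recreate it, e.g. for osd.3:
    ceph-osd -i 3 --flush-journal
    ceph-osd -i 3 --mkjournal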
[15:18] * Kureyslii (abbasi@78.186.163.63) has joined #ceph
[15:18] * bitserker (~toni@169.38.79.188.dynamic.jazztel.es) has joined #ceph
[15:18] * Kureyslii (abbasi@78.186.163.63) Quit (autokilled: Do not spam other people. Mail support@oftc.net if you feel this is in error. (2014-08-08 13:18:36))
[15:20] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) Quit (Remote host closed the connection)
[15:21] * aknapp (~aknapp@64.202.160.233) has joined #ceph
[15:22] * Cube (~Cube@97-83-54-121.dhcp.aldl.mi.charter.com) has joined #ceph
[15:23] * jtang__ (~jtang@80.111.83.231) Quit (Remote host closed the connection)
[15:23] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[15:24] * jtang_ (~jtang@80.111.83.231) has joined #ceph
[15:24] * JC (~JC@AMontpellier-651-1-420-97.w92-133.abo.wanadoo.fr) Quit (Quit: Leaving.)
[15:26] * ashishchandra (~ashish@49.32.0.114) Quit (Quit: Leaving)
[15:30] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[15:33] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[15:33] <Xiol> Vacum_: sorry, got pulled afk. the boxes have 128GB of RAM and 2 sockets, 12 cores total. not sure about IRQ - it's calmed down now, about to start another so will check
[15:33] <cookednoodles> ganders, what did your benches say ?
[15:33] <cookednoodles> I'm curious
[15:34] * jtang_ (~jtang@80.111.83.231) Quit (Ping timeout: 480 seconds)
[15:35] <ganders> im using 4 osd servers, each has 9 osd daemons on 3TB each, and 3 x SSD 120G for journals, each journal holds 3 OSD daemons, and 64GB of ram, then i got 3 MON servers with SSDs for OS and 32GB, all of the communication is done at 10GbE, OSD servers communicate to the cluster network at MTU 9000, and the other inet with 1500. The first tests, give me almost 180MB/s for rand writes... not too much
[15:35] <ganders> :(
[15:36] <flaf> Hi, one the same host (Ubuntu 14.04), if I want to create 2 mon daemons (one for cluster1 and the other for cluster2), after a restart of ceph-all, I have only one daemon which is running (for example the daemon for cluster2). Is it normal?
[15:36] <flaf> *s/one/on/
[15:36] <Anticimex> ganders: what SSDs?
[15:38] <flaf> Is it possible to run 2 mon daemons on the same host (mon daemons for different clusters)?
[15:39] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[15:40] <ganders> Anticimex: DC S3500
[15:40] * dmsimard_away is now known as dmsimard
[15:40] * lcavassa (~lcavassa@2-229-47-79.ip195.fastwebnet.it) has joined #ceph
[15:40] <ganders> I think it has 135MB/s on seq writes
[15:40] <Anticimex> ~and random=
[15:40] <Anticimex> ?tyo
[15:40] <Anticimex> *typo -- grr
[15:41] * cok (~chk@2a02:2350:1:1203:ad8d:1a8:48c3:98c5) Quit (Quit: Leaving.)
[15:42] * bitserker (~toni@169.38.79.188.dynamic.jazztel.es) Quit (Quit: Leaving.)
[15:50] * danieljh (~daniel@0001b4e9.user.oftc.net) Quit (Quit: leaving)
[15:54] * aknapp (~aknapp@64.202.160.233) Quit (Remote host closed the connection)
[15:54] * aknapp (~aknapp@64.202.160.233) has joined #ceph
[15:56] <mgarcesMZ> can someone explain me the map epoch (eNNNN) ?
[15:57] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) Quit (Remote host closed the connection)
[15:57] * jtang_ (~jtang@178.167.254.21.threembb.ie) has joined #ceph
[15:57] * jeff-YF (~jeffyf@173.254.191.141) has joined #ceph
[15:57] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) has joined #ceph
[15:58] * bkopilov (~bkopilov@213.57.65.56) Quit (Ping timeout: 480 seconds)
[16:02] * kevinc (~kevinc__@client65-44.sdsc.edu) has joined #ceph
[16:02] * aknapp (~aknapp@64.202.160.233) Quit (Ping timeout: 480 seconds)
[16:05] <pressureman> ganders, how many of the S3500 drives do you have for journals in your whole cluster?
[16:07] <Joffer> I'm really stuck.. several times a day I get "HEALTH_WARN 1 requests are blocked > 32 sec; 1 osds have slow requests". Earlier today it was osd.0 and osd.61. Now it's osd.6. I've had to take over management of ceph as our guru left the company and this error makes our applications that uses ceph (via nfs) stop. The osds in question is random from time to time, and the harddrives are 100% ok. Checked them with
[16:07] <Joffer> MegaCli64 and didn't see any errors. Just don't know where to start now
[16:07] <Joffer> system info is still on pastebin: http://pastebin.com/Uct7veSe
[16:08] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[16:09] <pressureman> 0.67.9 is quite an old ceph version
[16:09] <ganders> pressureman: i got 3 x SSD S3500 per OSD Server, so it would be 12 in total
[16:09] <pressureman> ganders, and replica size 3?
[16:10] <Joffer> pressureman, yeah I guess. We are building a small replica of the environment to test upgrading that first, since neither me or my colleague have much experience with ceph and like to test there first
[16:10] <Anticimex> ganders: io block size for rand write test?
[16:11] <ganders> replica size to 2
[16:11] <ganders> i try bs=4k, 128k and 8m
[16:12] <Anticimex> what throughput did you get with 4k?
[16:12] <Anticimex> what tool do you use to generate the load?
[16:12] <ganders> less than 70MB/s
[16:12] <ganders> fio
[16:12] * kevinc (~kevinc__@client65-44.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[16:12] <pressureman> ganders, so if you have your journals evenly spread over all the SSDs, your cluster max write speed is going to be about (12 x 135MB/s) / 2
[16:12] <Anticimex> and that's sequential
[16:13] * lczerner (~lczerner@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[16:13] <ganders> exactly...i expect to be around 800MB/s.. but im getting 170MB/s max
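The arithmetic behind that expectation, from pressureman's estimate just above (12 journal SSDs at ~135 MB/s sequential writes, replica size 2):

    echo $(( 12 * 135 / 2 ))    # => 810 MB/s theoretical ceiling, versus the ~170 MB/s observed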
[16:13] <pressureman> does iostat show the SSDs writing about 135 MB/s ?
[16:13] <Anticimex> are you doing random write test with fio or sequential? because you claim 135 MB/s sequential write on the ssd
[16:14] <ganders> both seq and rand are almost the same 170 and 145
[16:14] <Anticimex> AFAIK S3500 120GB will do <10k random write iops sustained
[16:15] <pressureman> S3500 specs are all here http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-dc-s3500-series.html
[16:15] * kevinc (~kevinc__@client65-44.sdsc.edu) has joined #ceph
[16:15] <Anticimex> throughput specs are best case there
[16:15] * ksingh (~Adium@2001:708:10:10:54d7:67d8:2392:8648) Quit (Ping timeout: 480 seconds)
[16:15] <ganders> what would be the pg_num and pgp_num for this particullary scenario?
[16:15] <Anticimex> anand etc have for some models real numbers
[16:15] * AbyssOne is now known as a1-away
[16:15] <ganders> maybe that has something to do with the perf?
[16:15] * overclk (~vshankar@121.244.87.117) Quit (Quit: Leaving)
[16:16] <Vacum_> isn't the journal written sequentially?
[16:16] * a1-away is now known as AbyssOne
[16:16] * TiCPU (~jeromepou@216-80-70-254.c3-0.alc-ubr4.chi-alc.il.cable.rcn.com) has joined #ceph
[16:16] <Anticimex> ganders: that affects spread yeah
[16:17] <Anticimex> ganders: how much RAM in the machines? sufficient to buffer the journals?
[16:17] <Vacum_> ganders: assuming a single pool 2048 would be a good number for pg_num and pgp_num
[16:17] <Anticimex> ganders: because if SSD have to read for the flush to OSD, you'll get halved performance or worse on write
[16:17] <ganders> 64GB of RAM
[16:18] <Anticimex> ganders: but i think while you run a test, run iostat on the nodes and record what throughput and ops you see
[16:18] <pressureman> ganders, if you test with journals on ramdisk, you're going to be pretty close to the maximum that you will get out of the cluster, i.e. if you went and bought faster, more expensive SSDs
[16:18] <Anticimex> both on journals and on OSDs, that will give you hard facts
[16:18] <Joffer> no tips on what to look for on the osd servers?
[16:20] <ganders> is there any specific parameter to tune the 3TB SAS disks? like nr_requests, etc?
[16:21] <pressureman> you'll find lots of tips for that kind of thing if you google it
[16:23] <burley> ganders: Are you monitoring your CPU usage during your testing, to make sure you have some spare -- I chased my tail for a while on that one and found that we needed more CPU
[16:23] * vbellur (~vijay@122.166.175.107) has joined #ceph
[16:24] <ganders> more cpu? i got for the osd servers: 2x Intel Xeon E5-2609v2
[16:24] <ganders> @2.5Gz (8C)
[16:25] <burley> that's probably plenty -- we had less
[16:25] * jeff-YF (~jeffyf@173.254.191.141) Quit (Quit: jeff-YF)
[16:25] <pressureman> have you tested network throughput between each node? e.g. to confirm that you're able to saturate the 10GE ?
[16:25] * kevinc (~kevinc__@client65-44.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[16:26] * jtang_ (~jtang@178.167.254.21.threembb.ie) Quit (Remote host closed the connection)
[16:26] <burley> we had 12 OSDs on a single E5-2620 (2GHz 6 core)
[16:26] <Vacum_> Joffer: start measuring all your drives performance statistics, ie with diamond and graph them. if one osd starts having slow requests, check if the according drive has high await/r_await/w_await
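A quick manual version of the same check while a warning is active; run it on the host of the suspect OSD:

    iostat -x 5    # watch await/r_await/w_await and %util for the device backing the slow OSD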
[16:26] * BManojlovic (~steki@91.195.39.5) Quit (Remote host closed the connection)
[16:27] <Vacum_> ganders: are you running irqbalance (or other ways to distribute the irqs of your 10GbE cards)?
[16:27] <ganders> Vacum_: good question
[16:27] <ganders> i will check it out
[16:27] * ganders (~root@200-127-158-54.net.prima.net.ar) Quit (Quit: WeeChat 0.4.1)
[16:28] * ganders (~root@200-127-158-54.net.prima.net.ar) has joined #ceph
[16:28] <Vacum_> ganders: check /proc/interrupts . if all your msix irqs of the 10G cards are piling up on cpu0 (first column) that could be one thing that influences your throughput
[16:28] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[16:29] <ganders> irqbalance on /etc/default is enabled
[16:31] <ganders> the interrupts on one of the osd servers: pastebin.com/raw.php?i=su9ibHTt
[16:31] <ganders> so p4p2 is the cluster network
[16:32] <ganders> and p4p1 is the pub net
[16:32] <ganders> and em1 is the mgmt net
[16:34] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[16:35] <nizedk> anyone using consumer grade drives (ie. Seagate Barracuda) for spinners, as opposed to ES series enterprise drives?
[16:35] * zack_dolby (~textual@p8505b4.tokynt01.ap.so-net.ne.jp) has joined #ceph
[16:36] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[16:36] <nizedk> (in 12+ drive nodes)
[16:40] * TiCPU (~jeromepou@216-80-70-254.c3-0.alc-ubr4.chi-alc.il.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[16:41] <nizedk> It's often brought up on the mailing list, that the HighPoint Rocket 750 controller is to be avoided. But - anyone inhere know why?
[16:41] <ganders> iostat while running a perf test on one of the OSD servers: http://pastebin.com/raw.php?i=e2KTKE5R
[16:43] <pressureman> ganders, sdd and sdn have very high iowait... are those the SSDs?
[16:44] * kevinc (~kevinc__@client65-44.sdsc.edu) has joined #ceph
[16:44] <Vacum_> ganders: are those virtual machines?
[16:45] * cok (~chk@2a02:2350:18:1012:21b0:940c:2dd8:7fc5) has joined #ceph
[16:46] <ganders> no, they are SAS disks
[16:46] <ganders> no, they are physical servers
[16:47] * joao|lap (~JL@78.29.168.60) has joined #ceph
[16:47] * ChanServ sets mode +o joao|lap
[16:48] <pressureman> if the iowait is constant on those sas disks, they will slow down the overall cluster
[16:49] <pressureman> since objects are distributed pretty much evenly over all osds, you can only write data as fast as the slowest osd
[16:50] * TiCPU (~jeromepou@12.160.0.155) has joined #ceph
[16:50] <ganders> thats correct, but why is it using those sdX more
[16:51] <pressureman> are they all equal weights?
[16:51] * vbellur (~vijay@122.166.175.107) Quit (Ping timeout: 480 seconds)
[16:51] <nizedk> ...what was the number of pg's on the test?
[16:51] <ganders> yes they all have 2.73 of weight
[16:51] <nizedk> a small number of pg's will prevent it to spread to all osds in a large cluster (or am I misunderstanding that?)
[16:52] <ganders> the pg_num and pgp_num is 2048
[16:52] <ganders> i use that number since i have 34 osd * 100 / 2
[16:52] <pressureman> those numbers should be ok
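The rule of thumb ganders is applying, spelled out (the next power of two above the result is what gets used):

    echo $(( 34 * 100 / 2 ))    # 1700 -> rounded up to 2048 pgs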
[16:52] * oro (~oro@2001:620:20:16:a98b:9a85:fdd6:e8b2) has joined #ceph
[16:52] * danieljh (~daniel@0001b4e9.user.oftc.net) has joined #ceph
[16:53] <pressureman> do you trust those two particular sas disks? is it possible that they have hardware problems?
[16:53] <nizedk> ... out them one at a time and see if the problem shifts to another osd?
[16:54] <pressureman> the iostat shows all drives being given more or less the same number of writes per second... but those two drives in particular are just taking their sweet time about it
[16:54] <ganders> they are new.. so... i hope they are ok... let me check the rest of the osd servers to see the iostat values
[16:55] <ganders> i'm using ceph version 0.82
[16:55] <pressureman> a development release?
[16:55] <pressureman> or do you mean 0.80.2?
[16:55] <ganders> yep dev release
[16:56] <ganders> 0.82
[16:56] * kevinc (~kevinc__@client65-44.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[16:58] * kapil (~ksharma@2620:113:80c0:5::2222) Quit (Quit: Leaving)
[16:58] <nizedk> how much data on them? personally, i'd consider out'ing them just to see if that made a difference
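A sketch of nizedk's suggestion; the device names come from the iostat output above, the OSD id is hypothetical, and the mount-point lookup assumes the default /var/lib/ceph/osd/ceph-N layout:
    # check SMART health of a slow drive
    smartctl -a /dev/sdd | grep -iE 'reallocated|pending|uncorrect'
    # find which OSD lives on it, then mark that OSD out and watch whether the slowness moves
    df -h | grep ceph-
    ceph osd out 12
    ceph -w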
[16:59] <ganders> ok, here are the results of cephosd02 (another osd server), same perf test: http://pastebin.com/raw.php?i=fangVeG2
[17:04] * vbellur (~vijay@122.178.250.178) has joined #ceph
[17:06] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:06] * jtang_ (~jtang@178.167.254.21.threembb.ie) has joined #ceph
[17:06] * jtang_ (~jtang@178.167.254.21.threembb.ie) Quit (Remote host closed the connection)
[17:08] * jtang_ (~jtang@178.167.254.21.threembb.ie) has joined #ceph
[17:12] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) has joined #ceph
[17:14] * erice (~erice@50.245.231.209) Quit (Ping timeout: 480 seconds)
[17:15] * lofejndif (~lsqavnbok@212.7.194.71) has joined #ceph
[17:16] * vz (~vz@122.166.154.232) has joined #ceph
[17:18] * bitserker (~toni@169.38.79.188.dynamic.jazztel.es) has joined #ceph
[17:19] * john (~john@2601:9:6c80:7df:3019:4c7:9dab:f528) has joined #ceph
[17:19] <Xiol> Hi guys, I've just removed an OSD to vacate the data (removed from CRUSH map and marked out). Data migration is now complete but I've got stuck PGs - "HEALTH_WARN 1 pgs peering; 27 pgs stuck inactive; 27 pgs stuck unclean; 138 requests are blocked > 32 sec". None of them appear to be stuck waiting for the OSD I've removed, they're all waiting on live OSDs. querying the PGs doesn't show anything obvious as to why they're unclean+inactive
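A few commands that usually narrow this down (the PG id is hypothetical):
    ceph health detail | grep -E 'stuck|peering'
    ceph pg dump_stuck inactive
    ceph pg 3.7f query           # the recovery_state section shows what the PG is waiting on
    ceph osd tree                # confirm the removed OSD is really gone from CRUSH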
[17:20] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:20] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) Quit (Ping timeout: 480 seconds)
[17:21] <john> bloodice: I updated the upgrade doc based on your feedback to Alfredo. Let me know if it resolves your questions. Looking into the authentication docs too.
[17:23] <Joffer> Vacum_, what is 'diamond'? Can't find any tool by that name for doing performance monitoring. Is it something "inside" ceph, or a tool included in the packages?
[17:23] <ganders> another very nice tool is "nmon" -> just apt-get install nmon
[17:23] <ganders> with "c" you could see the cpu perf and with "d" the disks
[17:23] * joao|lap (~JL@78.29.168.60) Quit (Ping timeout: 480 seconds)
[17:24] <Vacum_> Joffer: https://github.com/BrightcoveOS/Diamond/wiki/Configuration
[17:24] <Joffer> ok. will look at it.
[17:24] <Joffer> Vacum_, thanks
[17:25] <Joffer> I just noticed that my latest hiccup ended up with an osd out, and when all the "active+degraded+remapped+backfilling" work was done it was taken back in..
[17:28] * adamcrume (~quassel@50.247.81.99) has joined #ceph
[17:33] * jeff-YF (~jeffyf@67.23.117.122) has joined #ceph
[17:34] * lcavassa (~lcavassa@2-229-47-79.ip195.fastwebnet.it) Quit (Quit: Leaving)
[17:36] * JC (~JC@AMontpellier-651-1-420-97.w92-133.abo.wanadoo.fr) has joined #ceph
[17:37] * jtang_ (~jtang@178.167.254.21.threembb.ie) Quit (Ping timeout: 480 seconds)
[17:38] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[17:38] <ganders> ok, with tmpfs for the journals I'm getting 1GB/s of throughput on writes
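For context, a minimal sketch of what "tmpfs for the journals" looks like; this is a test-only setup, since a reboot wipes the journal and with it the OSD's consistency (paths and sizes are assumptions):
    mkdir -p /mnt/journals && mount -t tmpfs -o size=10G tmpfs /mnt/journals
    # ceph.conf, [osd] section:
    #   osd journal = /mnt/journals/osd.$id/journal
    #   osd journal size = 5120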
[17:39] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[17:42] * aknapp (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) has joined #ceph
[17:42] * mgarcesMZ (~mgarces@5.206.228.5) Quit (Quit: mgarcesMZ)
[17:43] <Joffer> Not that I would do this now, but how safe is it to upgrade from 0.67.9 to 0.80.5? Should that work directly (debian packages)?
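Dumpling to Firefly is a supported direct upgrade; the usual order is monitors first, then OSDs, then MDS/RGW, one host at a time. A sketch with sysvinit-style commands (daemon ids are examples, adjust to the cluster):
    apt-get update && apt-get install -y ceph ceph-common
    service ceph restart mon.$(hostname)   # on each monitor host, wait for quorum to reform
    service ceph restart osd.3             # then OSDs, checking "ceph -s" between hosts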
[17:43] * bitserker (~toni@169.38.79.188.dynamic.jazztel.es) Quit (Ping timeout: 480 seconds)
[17:46] * ismell_ (~ismell@host-24-52-35-110.beyondbb.com) has joined #ceph
[17:49] * cok (~chk@2a02:2350:18:1012:21b0:940c:2dd8:7fc5) Quit (Quit: Leaving.)
[17:51] * kevinc (~kevinc__@client65-44.sdsc.edu) has joined #ceph
[17:51] <kitz> ganders: that's great.
[17:52] * joshd (~jdurgin@2607:f298:a:607:a8e3:2e6e:8c39:b602) Quit (Ping timeout: 480 seconds)
[17:52] <kitz> ganders: see if it stays that way. I'm seeing my speeds drop from 900+MB/s down to <300MB/s once the drives start to fill up. I'm not sure if it's the disks or something else but I suggest you fill them up and see if you maintain those speeds.
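One way to run the fill-and-measure test kitz suggests, assuming a throwaway pool named "bench" exists:
    # sustained writes that leave the objects in place, so the cluster actually fills up
    rados bench -p bench 600 write --no-cleanup
    # delete the throwaway pool afterwards to reclaim the space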
[17:54] <ganders> kitz: good advice. Now I'll start configuring ceph with the openstack havana release and then put some heavy workload on it; hope the perf stays that way, or drops as little as possible
[17:54] <kitz> good luck!
[17:58] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) Quit (Remote host closed the connection)
[17:59] <alfredodeza> stupidnic: ping
[17:59] * joef (~Adium@2620:79:0:131:4c4f:2c95:2efd:1f31) has joined #ceph
[17:59] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[18:00] * xarses (~andreww@12.164.168.117) has joined #ceph
[18:00] * rweeks (~rweeks@pat.hitachigst.com) has joined #ceph
[18:01] * bkopilov (~bkopilov@213.57.19.138) has joined #ceph
[18:02] * joshd (~jdurgin@2607:f298:a:607:54b9:f5c2:243d:2e70) has joined #ceph
[18:11] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[18:14] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[18:14] * bkopilov (~bkopilov@213.57.19.138) Quit (Ping timeout: 480 seconds)
[18:22] * erice (~erice@50.245.231.209) has joined #ceph
[18:22] * Venturi (~Venturi@93-103-91-169.dynamic.t-2.net) has joined #ceph
[18:23] <stupidnic> alfredodeza: hey there
[18:23] <alfredodeza> I think I know why things are still failing
[18:23] <stupidnic> oh?
[18:24] <alfredodeza> those two functions look like they want to do something different (different parts of the URL) but they are actually duplicates of the same intent
[18:24] <alfredodeza> there should really be one of those
[18:24] <stupidnic> Okay.
[18:24] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[18:25] <stupidnic> That would be a fair bit of refactoring though, right?
[18:25] <alfredodeza> the reason why: all urls we have use the same identifier
[18:25] <alfredodeza> either el7 or rhel7 in both places
[18:25] <stupidnic> I am not sure I follow
[18:25] <alfredodeza> no no
[18:25] <stupidnic> no.
[18:25] <stupidnic> you use rhel in the url to the repo
[18:25] <stupidnic> and the el in the filename
[18:25] <stupidnic> I mean we could use a single function to determine both
[18:26] <alfredodeza> do you have a link?
[18:26] <stupidnic> to the logic?
[18:26] <alfredodeza> no a link to a repo that uses rhel and el
[18:26] <alfredodeza> I couldn't find one
[18:26] <stupidnic> let me find a RHEL install
[18:27] <alfredodeza> e.g. http://ceph.com/rpm-testing/rhel6/noarch/ uses ceph-release-1-0.rhel6.noarch.rpm
[18:28] * bkopilov (~bkopilov@213.57.64.8) has joined #ceph
[18:28] <stupidnic> Okay. But then in the actual rhel6 repo for Firefly you are using el
[18:28] <stupidnic> http://ceph.com/rpm-firefly/rhel7/x86_64/
[18:28] <stupidnic> that's rhel7 but you have the same thing
[18:28] <alfredodeza> argh
[18:29] <stupidnic> heh, yeah i think that one file is to catch both
[18:29] <stupidnic> since there is also an el6
[18:29] <alfredodeza> so IMO we (our repos) are all wrong and inconsistent
[18:29] <stupidnic> those just install repos anyways
[18:30] <stupidnic> I don't know why they were forked into separate repos
[18:30] <stupidnic> is the code that different between RHEL and CentOS?
[18:30] * b0e (~aledermue@juniper1.netways.de) Quit (Quit: Leaving.)
[18:30] * erice (~erice@50.245.231.209) Quit (Ping timeout: 480 seconds)
[18:30] <alfredodeza> no it isn't
[18:30] <alfredodeza> it is actually (mostly) the same
[18:33] * oro (~oro@2001:620:20:16:a98b:9a85:fdd6:e8b2) Quit (Ping timeout: 480 seconds)
[18:37] * bandrus (~Adium@184.53.40.41) has joined #ceph
[18:39] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) has joined #ceph
[18:40] * erice (~erice@50.245.231.209) has joined #ceph
[18:42] * kevinc (~kevinc__@client65-44.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[18:43] * kevinc (~kevinc__@client65-44.sdsc.edu) has joined #ceph
[18:44] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) has left #ceph
[18:46] * BManojlovic (~steki@cable-94-189-160-74.dynamic.sbb.rs) has joined #ceph
[18:46] * BManojlovic (~steki@cable-94-189-160-74.dynamic.sbb.rs) Quit ()
[18:48] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[18:51] * diegows (~diegows@190.190.5.238) has joined #ceph
[18:56] * danieljh (~daniel@0001b4e9.user.oftc.net) Quit (Quit: leaving)
[18:57] * BManojlovic (~steki@cable-94-189-160-74.dynamic.sbb.rs) has joined #ceph
[18:59] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[19:00] * bitserker (~toni@169.38.79.188.dynamic.jazztel.es) has joined #ceph
[19:00] * adamcrume (~quassel@50.247.81.99) Quit (Remote host closed the connection)
[19:06] * erice (~erice@50.245.231.209) Quit (Ping timeout: 480 seconds)
[19:07] * Aea (~aea@66.185.106.232) Quit (Quit: Aea)
[19:08] * Sysadmin88 (~IceChat77@2.218.9.98) has joined #ceph
[19:08] * BManojlovic (~steki@cable-94-189-160-74.dynamic.sbb.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[19:08] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[19:10] * ircolle-afk is now known as ircolle
[19:12] * houkouonchi-home (~linux@2001:470:c:c69::2) has joined #ceph
[19:12] * lofejndif (~lsqavnbok@82JAAHC08.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[19:16] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[19:17] * sjustwork (~sam@2607:f298:a:607:4155:980c:7a60:223d) has joined #ceph
[19:17] * JC (~JC@AMontpellier-651-1-420-97.w92-133.abo.wanadoo.fr) Quit (Quit: Leaving.)
[19:24] * adamcrume (~quassel@c-71-204-162-10.hsd1.ca.comcast.net) has joined #ceph
[19:27] * bandrus (~Adium@184.53.40.41) Quit (Quit: Leaving.)
[19:29] * JC (~JC@AMontpellier-651-1-420-97.w92-133.abo.wanadoo.fr) has joined #ceph
[19:30] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[19:32] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[19:33] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Remote host closed the connection)
[19:33] * Nats__ (~Nats@2001:8000:200c:0:5038:e2f9:345e:cbf5) has joined #ceph
[19:33] * jeff-YF (~jeffyf@67.23.117.122) Quit (Quit: jeff-YF)
[19:33] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[19:34] <lincolnb> does the firefly package no longer provide /etc/init.d/rbdmap on CentOS?
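A quick way to check what the installed package actually ships:
    rpm -ql ceph | grep rbdmap        # list files owned by the ceph package
    rpm -qf /etc/init.d/rbdmap        # or ask which package owns the path, if it exists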
[19:35] <devicenull> are there any sort of recommendations for setting 'osd recovery max active'?
[19:36] <devicenull> the default is 5, but I'm not really sure how to determine what I should set it to
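There is no single right value; it trades recovery speed against client I/O. It can be changed at runtime and observed (the numbers below are illustrative, not recommendations):
    ceph tell osd.* injectargs '--osd-recovery-max-active 2 --osd-max-backfills 1'
    # make it persistent in ceph.conf under [osd]:
    #   osd recovery max active = 2
    #   osd max backfills = 1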
[19:37] * JC (~JC@AMontpellier-651-1-420-97.w92-133.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[19:38] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[19:39] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[19:40] * Nats_ (~Nats@2001:8000:200c:0:5038:e2f9:345e:cbf5) Quit (Ping timeout: 480 seconds)
[19:43] * jeff-YF (~jeffyf@216.14.83.26) has joined #ceph
[19:50] * jeff-YF (~jeffyf@216.14.83.26) Quit (Quit: jeff-YF)
[19:57] * JC (~JC@AMontpellier-651-1-420-97.w92-133.abo.wanadoo.fr) has joined #ceph
[19:59] <joshd> lincolnb: it should be there, maybe a bug in the spec file
[20:01] * Aea (~aea@172.56.8.157) has joined #ceph
[20:04] * bandrus (~oddo@216.57.72.205) has joined #ceph
[20:04] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[20:04] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[20:10] * i_m (~ivan.miro@gbibp9ph1--blueice1n1.emea.ibm.com) Quit (Quit: Leaving.)
[20:12] * ganders (~root@200-127-158-54.net.prima.net.ar) Quit (Ping timeout: 480 seconds)
[20:15] * hufman (~hufman@cpe-184-58-235-28.wi.res.rr.com) has joined #ceph
[20:16] * Aea (~aea@172.56.8.157) Quit (Quit: Aea)
[20:16] <hufman> hello!!
[20:16] * primechuck (~primechuc@173.209.9.124) has joined #ceph
[20:16] * erice (~erice@65.114.129.62) has joined #ceph
[20:16] * primechuck (~primechuc@173.209.9.124) Quit ()
[20:17] <wedge> runt 27-30 k
[20:17] <wedge> ops mistype
[20:21] * bitserker (~toni@169.38.79.188.dynamic.jazztel.es) Quit (Quit: Leaving.)
[20:21] * ganders (~root@200.0.230.235) has joined #ceph
[20:25] * bandrus1 (~Adium@216.57.72.205) has joined #ceph
[20:25] * bandrus1 (~Adium@216.57.72.205) Quit ()
[20:27] <hufman> when doing rbd snapshot shipping with 'rbd export-diff' and 'rbd import-diff', is there a way to make it write faster?
[20:28] * rweeks (~rweeks@pat.hitachigst.com) Quit (Quit: Leaving)
[20:28] <hufman> the bottleneck seems to be the single-threaded write of 'rbd import-diff'
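For reference, the usual shape of that pipeline (pool, image, snapshot and host names are hypothetical); import-diff applies the stream serially, which is why the receiving side tends to be the limit:
    rbd export-diff --from-snap snap1 rbd/myimage@snap2 - \
      | ssh backuphost rbd import-diff - rbd/myimage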
[20:30] * rendar (~I@host211-176-dynamic.3-87-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[20:31] * kevinc (~kevinc__@client65-44.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[20:31] * Venturi (~Venturi@93-103-91-169.dynamic.t-2.net) Quit (Ping timeout: 480 seconds)
[20:33] * aknapp_ (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) has joined #ceph
[20:34] * rendar (~I@host211-176-dynamic.3-87-r.retail.telecomitalia.it) has joined #ceph
[20:37] * davidz (~Adium@cpe-23-242-12-23.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[20:37] * aknapp_ (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) Quit (Read error: Connection reset by peer)
[20:38] * aknapp_ (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) has joined #ceph
[20:40] * aknapp (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) Quit (Ping timeout: 480 seconds)
[20:40] * jeff-YF (~jeffyf@67.23.117.122) has joined #ceph
[20:42] * aknapp (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) has joined #ceph
[20:42] * aknapp_ (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) Quit (Read error: Connection reset by peer)
[20:43] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[20:47] * vz (~vz@122.166.154.232) Quit (Remote host closed the connection)
[20:54] * bandrus1 (~Adium@216.57.72.205) has joined #ceph
[20:54] * bandrus (~oddo@216.57.72.205) Quit (Quit: Leaving.)
[20:58] * erice (~erice@65.114.129.62) Quit (Ping timeout: 480 seconds)
[21:05] * thb (~me@0001bd58.user.oftc.net) Quit (Quit: Leaving.)
[21:08] * davidz (~Adium@cpe-23-242-20-139.socal.res.rr.com) has joined #ceph
[21:13] * erice (~erice@40.sub-70-208-141.myvzw.com) has joined #ceph
[21:18] * todayman (~quassel@magellan.acm.jhu.edu) Quit (Ping timeout: 480 seconds)
[21:29] * sarob (~sarob@2001:4998:effd:600:dfe:179c:3e7a:b01d) has joined #ceph
[21:30] * ganders (~root@200.0.230.235) Quit (Quit: WeeChat 0.4.1)
[21:35] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[21:35] * kevinc (~kevinc__@client65-44.sdsc.edu) has joined #ceph
[21:37] * erice (~erice@40.sub-70-208-141.myvzw.com) Quit (Ping timeout: 480 seconds)
[21:40] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[21:40] * aknapp (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) Quit (Remote host closed the connection)
[21:41] * aknapp (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) has joined #ceph
[21:42] * sjustlaptop (~sam@2607:f298:a:607:302a:e9a:e634:46cb) has joined #ceph
[21:43] * aknapp_ (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) has joined #ceph
[21:43] * aknapp_ (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) Quit (Remote host closed the connection)
[21:43] * aknapp (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) Quit (Read error: Connection reset by peer)
[21:43] * aknapp (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) has joined #ceph
[21:44] * qhartman (~qhartman@den.direwolfdigital.com) Quit (Quit: Ex-Chat)
[21:47] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[21:48] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[21:50] * bandrus1 (~Adium@216.57.72.205) Quit (Quit: Leaving.)
[21:55] * davidz1 (~Adium@cpe-23-242-12-23.socal.res.rr.com) has joined #ceph
[21:58] * bandrus (~Adium@216.57.72.205) has joined #ceph
[21:58] * ultimape (~Ultimape@c-174-62-192-41.hsd1.vt.comcast.net) has joined #ceph
[21:59] * jakes (~oftc-webi@128-107-239-233.cisco.com) has joined #ceph
[22:00] <jakes> i am able to execute ceph commands, but I am getting permission denied when executing ceph --admin-daemon /var/run/ceph/ceph-osd.2.asok perf dump on any of the nodes
[22:00] * vz (~vz@122.166.154.232) has joined #ceph
[22:00] * vz (~vz@122.166.154.232) Quit ()
[22:01] * davidz (~Adium@cpe-23-242-20-139.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[22:02] <jakes> i am getting the error "admin_socket: exception getting command descriptions: [Errno 2] No such file or directory"
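Two things worth checking: whether the socket file exists on that host at all (the asok is only present where that daemon runs), and whether the user can open it:
    ls -l /var/run/ceph/              # ceph-osd.2.asok only exists on the host running osd.2
    sudo ceph --admin-daemon /var/run/ceph/ceph-osd.2.asok perf dump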
[22:04] * davidz (~Adium@cpe-23-242-12-23.socal.res.rr.com) has joined #ceph
[22:05] <devicenull> I have a whole bunch of pgs that are attached to nodes that are down
[22:05] <devicenull> I'm not sure what's the best way to get rid of them
[22:05] <devicenull> er, nodes that died
[22:05] <devicenull> so they're listed as 'stuck'
[22:06] <devicenull> for example, 'pg 3.db is stuck stale for 18747.066379, current state stale+active+clean, last acting [14,11]'
[22:06] <devicenull> neither of those OSDs exists anymore... so I am going to have data loss
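A sketch of how this is usually inspected before accepting the loss (PG id taken from the message above):
    ceph pg dump_stuck stale
    ceph pg 3.db query                 # fails or hangs if no acting OSD is left
    # last resort once the data is written off: recreate the PG empty
    ceph pg force_create_pg 3.db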
[22:07] * davidz1 (~Adium@cpe-23-242-12-23.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[22:09] * Cube (~Cube@97-83-54-121.dhcp.aldl.mi.charter.com) Quit (Quit: Leaving.)
[22:12] <devicenull> 'ceph pg mark_unfound_lost revert' hangs
[22:13] * madkiss (~madkiss@2001:6f8:12c3:f00f:c812:2d08:41a3:3d65) has joined #ceph
[22:15] * rweeks (~rweeks@pat.hitachigst.com) has joined #ceph
[22:15] * joef1 (~Adium@2620:79:0:8207:594d:b3c0:d61e:a482) has joined #ceph
[22:16] * JC (~JC@AMontpellier-651-1-420-97.w92-133.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[22:18] * madkiss (~madkiss@2001:6f8:12c3:f00f:c812:2d08:41a3:3d65) Quit ()
[22:18] * Aea (~aea@66.185.106.232) has joined #ceph
[22:19] * Aea (~aea@66.185.106.232) Quit ()
[22:19] * JC (~JC@AMontpellier-651-1-503-28.w92-143.abo.wanadoo.fr) has joined #ceph
[22:20] * sverrest_ (~sverrest@cm-84.208.166.184.getinternet.no) Quit (Ping timeout: 480 seconds)
[22:23] * Cube (~Cube@97-83-20-197.dhcp.aldl.mi.charter.com) has joined #ceph
[22:25] * sjustlaptop (~sam@2607:f298:a:607:302a:e9a:e634:46cb) Quit (Ping timeout: 480 seconds)
[22:26] * danieljh (~daniel@0001b4e9.user.oftc.net) has joined #ceph
[22:30] * PreetBharara (~xeb@host86-141-91-224.range86-141.btcentralplus.com) has joined #ceph
[22:31] * kevinc (~kevinc__@client65-44.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[22:32] <devicenull> ceph health detail | grep pg | awk '{ print $2 }' | xargs -n1 ceph pg force_create_pg
[22:32] <devicenull> "fixed" most of them
[22:32] <devicenull> I have one pg hanging on creating now though
[22:35] * PreetBharara is now known as [nsh]
[22:35] * kevinc (~kevinc__@client65-44.sdsc.edu) has joined #ceph
[22:39] * alram (~alram@38.122.20.226) has joined #ceph
[22:40] * sjm (~sjm@24-234-180-234.ptp.lvcm.net) has joined #ceph
[22:42] * tupper (~chatzilla@108-83-203-37.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[22:45] <lincolnb> joshd: I'll take a look at the spec when I get a chance. Just checked master and it's present there, at least
[22:46] * dmsimard is now known as dmsimard_away
[22:47] <lincolnb> wow this is super interesting
[22:48] <lincolnb> ive run into an error condition in CephFS where i get a hang when i ls a directory (one that a user has put thousands of subdirectories in)
[22:49] * joef (~Adium@2620:79:0:131:4c4f:2c95:2efd:1f31) Quit (Remote host closed the connection)
[22:49] * davidz1 (~Adium@cpe-23-242-12-23.socal.res.rr.com) has joined #ceph
[22:49] <lincolnb> _but_ when i 'ls' with strace, it doesnt hang
[22:50] * davidz (~Adium@cpe-23-242-12-23.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[22:51] * bandrus (~Adium@216.57.72.205) Quit (Quit: Leaving.)
[22:51] * bandrus (~Adium@216.57.72.205) has joined #ceph
[22:52] <dmick> lincolnb: maybe a race; try strace -o /tmp/strace.out so it doesn't have to wait on the terminal
[22:52] * aknapp (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) Quit (Quit: Leaving...)
[22:53] <lincolnb> works
[22:53] <lincolnb> ah
[22:53] <lincolnb> hm
[22:53] <lincolnb> i deleted a bunch of problematic files so now it works with normal ls
[22:53] * aknapp (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) has joined #ceph
[22:53] <lincolnb> ill try this out on my testbed and see if i can reproduce
[22:53] <lincolnb> i'm on kernel 3.12 so maybe its been fixed upstream already
[22:56] * madkiss (~madkiss@chello084112124211.20.11.vie.surfer.at) has joined #ceph
[22:59] * kfei (~root@114-27-83-66.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[23:00] <lincolnb> hm okay started a new shell session. still hangs on normal 'ls', strace -o still works
[23:00] * madkiss (~madkiss@chello084112124211.20.11.vie.surfer.at) Quit (Read error: No route to host)
[23:00] * brad_mssw (~brad@shop.monetra.com) Quit (Quit: Leaving)
[23:09] * sjm1 (~sjm@24-234-180-234.ptp.lvcm.net) has joined #ceph
[23:12] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[23:12] * kfei (~root@114-27-61-6.dynamic.hinet.net) has joined #ceph
[23:16] * alram_ (~alram@38.122.20.226) has joined #ceph
[23:17] * tupper (~chatzilla@108-83-203-37.lightspeed.rlghnc.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[23:18] * madkiss (~madkiss@2001:6f8:12c3:f00f:f5b1:305d:ab70:7eb8) has joined #ceph
[23:19] * sjm1 (~sjm@24-234-180-234.ptp.lvcm.net) has left #ceph
[23:19] * sjm1 (~sjm@24-234-180-234.ptp.lvcm.net) has joined #ceph
[23:20] * sjm (~sjm@24-234-180-234.ptp.lvcm.net) Quit (Remote host closed the connection)
[23:20] * sjm (~sjm@24-234-180-234.ptp.lvcm.net) has joined #ceph
[23:22] * alram (~alram@38.122.20.226) Quit (Ping timeout: 480 seconds)
[23:23] <dmick> worth a shot.
[23:23] * ircolle (~Adium@2601:1:a580:145a:4d9:433e:91bd:1a5a) Quit (Read error: Connection reset by peer)
[23:23] * ircolle (~Adium@2601:1:a580:145a:34cd:45d0:aca5:ae4e) has joined #ceph
[23:24] <lincolnb> thanks for the help dmick. happy friday!
[23:28] * madkiss (~madkiss@2001:6f8:12c3:f00f:f5b1:305d:ab70:7eb8) Quit (Quit: Leaving.)
[23:28] * sjm (~sjm@24-234-180-234.ptp.lvcm.net) Quit (Ping timeout: 480 seconds)
[23:29] * alram_ (~alram@38.122.20.226) Quit (Quit: leaving)
[23:30] * alram (~alram@38.122.20.226) has joined #ceph
[23:31] * hufman (~hufman@cpe-184-58-235-28.wi.res.rr.com) Quit (Quit: leaving)
[23:33] * jakes (~oftc-webi@128-107-239-233.cisco.com) Quit (Quit: Page closed)
[23:42] * TiCPU (~jeromepou@12.160.0.155) Quit (Ping timeout: 480 seconds)
[23:43] * Cube is now known as Concubidated
[23:51] * ircolle1 (~Adium@c-67-172-132-222.hsd1.co.comcast.net) has joined #ceph
[23:52] * davidz1 (~Adium@cpe-23-242-12-23.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[23:55] * nhm (~nhm@65-128-141-191.mpls.qwest.net) Quit (Quit: Lost terminal)
[23:55] * davidz (~Adium@cpe-23-242-12-23.socal.res.rr.com) has joined #ceph
[23:58] * sjm (~sjm@24-234-180-234.ptp.lvcm.net) has joined #ceph
[23:59] * ircolle (~Adium@2601:1:a580:145a:34cd:45d0:aca5:ae4e) Quit (Ping timeout: 480 seconds)
[23:59] * jeff-YF (~jeffyf@67.23.117.122) Quit (Quit: jeff-YF)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.