#ceph IRC Log


IRC Log for 2015-04-06

Timestamps are in GMT/BST.

[0:05] * funnel (~funnel@0001c7d4.user.oftc.net) Quit (Ping timeout: 480 seconds)
[0:08] * sigsegv (~sigsegv@188.25.121.203) has joined #ceph
[0:10] * bkopilov (~bkopilov@bzq-79-183-144-37.red.bezeqint.net) has joined #ceph
[0:13] * mLegion (~jwandborg@2WVAAA3AU.tor-irc.dnsbl.oftc.net) Quit ()
[0:14] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) has joined #ceph
[0:17] * funnel (~funnel@0001c7d4.user.oftc.net) has joined #ceph
[0:19] * sigsegv (~sigsegv@188.25.121.203) Quit (Quit: sigsegv)
[0:22] * brutusca_ (~brutuscat@105.34.133.37.dynamic.jazztel.es) has joined #ceph
[0:22] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) Quit (Read error: Connection reset by peer)
[0:23] * Administrator (~Administr@172.245.26.218) has joined #ceph
[0:35] * rongze (~rongze@219.143.85.125) has joined #ceph
[0:37] * jclm1 (~jclm@ip24-253-45-236.lv.lv.cox.net) has joined #ceph
[0:38] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[0:39] * fghaas (~florian@91-119-140-224.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[0:39] * B_Rake (~B_Rake@2605:a601:5b9:dd01:183d:ed86:251f:b811) Quit (Remote host closed the connection)
[0:42] * BManojlovic (~steki@cable-89-216-172-100.dynamic.sbb.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[0:43] * SweetGirl (~aleksag@2WVAAA3DP.tor-irc.dnsbl.oftc.net) has joined #ceph
[0:43] * rongze (~rongze@219.143.85.125) Quit (Ping timeout: 480 seconds)
[0:44] * jclm (~jclm@ip24-253-45-236.lv.lv.cox.net) Quit (Ping timeout: 480 seconds)
[0:46] * jclm (~jclm@ip24-253-45-236.lv.lv.cox.net) has joined #ceph
[0:49] * brutusca_ (~brutuscat@105.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[0:50] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:50] * jclm1 (~jclm@ip24-253-45-236.lv.lv.cox.net) Quit (Ping timeout: 480 seconds)
[0:50] * jclm1 (~jclm@ip24-253-45-236.lv.lv.cox.net) has joined #ceph
[0:54] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) Quit (Quit: p66kumar)
[0:56] * jclm (~jclm@ip24-253-45-236.lv.lv.cox.net) Quit (Ping timeout: 480 seconds)
[1:07] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:13] * SweetGirl (~aleksag@2WVAAA3DP.tor-irc.dnsbl.oftc.net) Quit ()
[1:17] * Mattress (~verbalins@tor-exit1.arbitrary.ch) has joined #ceph
[1:36] * rongze (~rongze@219.143.85.125) has joined #ceph
[1:41] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (Ping timeout: 480 seconds)
[1:44] * rongze (~rongze@219.143.85.125) Quit (Ping timeout: 480 seconds)
[1:47] * Mattress (~verbalins@2WVAAA3E3.tor-irc.dnsbl.oftc.net) Quit ()
[1:47] * Bobby (~murmur@tor-exit3-readme.dfri.se) has joined #ceph
[1:49] * burley (~khemicals@cpe-98-28-239-78.cinci.res.rr.com) has joined #ceph
[1:53] * oms101 (~oms101@p20030057EA77E900EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[2:02] * oms101 (~oms101@p20030057EA632A00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[2:05] * sjmtest (uid32746@id-32746.uxbridge.irccloud.com) Quit (Quit: Connection closed for inactivity)
[2:06] * zack_dolby (~textual@nfmv001163050.uqw.ppp.infoweb.ne.jp) has joined #ceph
[2:07] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[2:09] * jo00nas (~jonas@188-183-5-254-static.dk.customer.tdc.net) has joined #ceph
[2:10] * jo00nas (~jonas@188-183-5-254-static.dk.customer.tdc.net) Quit ()
[2:10] * yanzheng (~zhyan@171.216.95.48) has joined #ceph
[2:11] * alexxy[home] (~alexxy@2001:470:1f14:106::2) has joined #ceph
[2:12] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (Read error: Connection reset by peer)
[2:17] * Bobby (~murmur@2WVAAA3GL.tor-irc.dnsbl.oftc.net) Quit ()
[2:17] * Dragonshadow (~MKoR@176.10.99.200) has joined #ceph
[2:19] * yanzheng (~zhyan@171.216.95.48) Quit (Quit: This computer has gone to sleep)
[2:37] * rongze (~rongze@219.143.85.125) has joined #ceph
[2:39] * wschulze (~wschulze@cpe-74-73-11-233.nyc.res.rr.com) has left #ceph
[2:45] * rongze (~rongze@219.143.85.125) Quit (Ping timeout: 480 seconds)
[2:47] * Dragonshadow (~MKoR@3OZAAAVJU.tor-irc.dnsbl.oftc.net) Quit ()
[2:47] * `Jin (~Guest1390@exit1.ipredator.se) has joined #ceph
[3:14] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) has joined #ceph
[3:17] * `Jin (~Guest1390@2WVAAA3JK.tor-irc.dnsbl.oftc.net) Quit ()
[3:17] * Kottizen (~Snowcat4@tor-exit2-readme.puckey.org) has joined #ceph
[3:18] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[3:30] * neurodrone (~neurodron@pool-100-1-89-227.nwrknj.fios.verizon.net) Quit (Quit: neurodrone)
[3:38] * rongze (~rongze@219.143.85.125) has joined #ceph
[3:40] * B_Rake (~B_Rake@2605:a601:5b9:dd01:4ad7:5ff:fee3:8873) has joined #ceph
[3:40] * yanzheng (~zhyan@171.216.95.48) has joined #ceph
[3:42] * neurodrone (~neurodron@pool-100-1-89-227.nwrknj.fios.verizon.net) has joined #ceph
[3:43] * root (~root@p57B2EE05.dip0.t-ipconnect.de) has joined #ceph
[3:46] * rongze (~rongze@219.143.85.125) Quit (Ping timeout: 480 seconds)
[3:47] * Kottizen (~Snowcat4@2WVAAA3KW.tor-irc.dnsbl.oftc.net) Quit ()
[3:47] * neobenedict (~KrimZon@tor-exit-node.cs.usu.edu) has joined #ceph
[3:48] * B_Rake (~B_Rake@2605:a601:5b9:dd01:4ad7:5ff:fee3:8873) Quit (Ping timeout: 480 seconds)
[3:49] * root4 (~root@p57B2E3D9.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[3:51] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[4:13] * yanzheng (~zhyan@171.216.95.48) Quit (Quit: This computer has gone to sleep)
[4:13] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) Quit (Quit: p66kumar)
[4:14] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) has joined #ceph
[4:17] * neobenedict (~KrimZon@98EAAAY1P.tor-irc.dnsbl.oftc.net) Quit ()
[4:17] * xENO_ (~ylmson@destiny.enn.lu) has joined #ceph
[4:26] * kefu (~kefu@114.92.108.72) has joined #ceph
[4:38] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[4:39] * rongze (~rongze@219.143.85.125) has joined #ceph
[4:47] * kefu (~kefu@114.92.108.72) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[4:47] * rongze (~rongze@219.143.85.125) Quit (Ping timeout: 480 seconds)
[4:47] * xENO_ (~ylmson@5NZAAA672.tor-irc.dnsbl.oftc.net) Quit ()
[4:47] * Arcturus (~K3NT1S_aw@tor-exit.server9.tvdw.eu) has joined #ceph
[5:08] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) has joined #ceph
[5:11] * rongze (~rongze@219.143.85.125) has joined #ceph
[5:17] * Arcturus (~K3NT1S_aw@98EAAAY2P.tor-irc.dnsbl.oftc.net) Quit ()
[5:17] * Sketchfile (~Schaap@cs-tor.bu.edu) has joined #ceph
[5:20] * kefu (~kefu@114.92.108.72) has joined #ceph
[5:26] * neurodrone (~neurodron@pool-100-1-89-227.nwrknj.fios.verizon.net) Quit (Quit: neurodrone)
[5:36] * neurodrone (~neurodron@pool-100-1-89-227.nwrknj.fios.verizon.net) has joined #ceph
[5:38] * Vacuum (~vovo@88.130.209.146) has joined #ceph
[5:45] * Vacuum_ (~vovo@88.130.206.44) Quit (Ping timeout: 480 seconds)
[5:47] * rongze (~rongze@219.143.85.125) Quit (Remote host closed the connection)
[5:47] * Sketchfile (~Schaap@2WVAAA3QJ.tor-irc.dnsbl.oftc.net) Quit ()
[5:48] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) has joined #ceph
[5:49] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) Quit ()
[5:50] * kefu (~kefu@114.92.108.72) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[5:52] * yuastnav (~tuhnis@195.169.125.226) has joined #ceph
[5:52] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) has joined #ceph
[5:55] * rdas (~rdas@121.244.87.116) has joined #ceph
[5:55] * rongze (~rongze@219.143.85.125) has joined #ceph
[5:57] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[5:58] * sankarshan (~sankarsha@121.244.87.116) has joined #ceph
[5:58] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[6:05] * shylesh (~shylesh@121.244.87.124) has joined #ceph
[6:06] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) Quit (Quit: Leaving.)
[6:07] * neurodrone (~neurodron@pool-100-1-89-227.nwrknj.fios.verizon.net) Quit (Quit: neurodrone)
[6:12] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[6:14] * kefu (~kefu@114.92.108.72) has joined #ceph
[6:18] * rongze (~rongze@219.143.85.125) Quit (Remote host closed the connection)
[6:20] * kefu (~kefu@114.92.108.72) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:22] * yuastnav (~tuhnis@2WVAAA3RU.tor-irc.dnsbl.oftc.net) Quit ()
[6:22] * Rens2Sea (~Behedwin@tor-exit.server9.tvdw.eu) has joined #ceph
[6:38] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[6:46] * mlausch (~mlausch@2001:8d8:1fe:7:11c5:80e2:a127:5c9c) Quit (Ping timeout: 480 seconds)
[6:52] * Rens2Sea (~Behedwin@98EAAAY4Q.tor-irc.dnsbl.oftc.net) Quit ()
[6:52] * Aramande_ (~Pirate@marcuse-2.nos-oignons.net) has joined #ceph
[6:52] * rongze (~rongze@219.143.85.125) has joined #ceph
[6:55] * mlausch (~mlausch@2001:8d8:1fe:7:7837:8c92:e048:9d94) has joined #ceph
[6:56] * kefu (~kefu@114.92.108.72) has joined #ceph
[7:02] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) Quit (Quit: p66kumar)
[7:05] * amote (~amote@121.244.87.116) has joined #ceph
[7:09] * kefu (~kefu@114.92.108.72) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[7:19] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[7:22] * Aramande_ (~Pirate@2WVAAA3T9.tor-irc.dnsbl.oftc.net) Quit ()
[7:22] * Coestar (~Snowcat4@tor-exit-readme.manalyzer.org) has joined #ceph
[7:23] * Savemech (~Savemech@mail.primetver.ru) Quit (Read error: Connection reset by peer)
[7:23] * rongze (~rongze@219.143.85.125) Quit (Remote host closed the connection)
[7:24] * karnan (~karnan@121.244.87.117) has joined #ceph
[7:32] * vbellur (~vijay@121.244.87.117) has joined #ceph
[7:44] * sankarshan (~sankarsha@121.244.87.116) Quit (Quit: Are you sure you want to quit this channel (Cancel/Ok) ?)
[7:48] * rongze (~rongze@219.143.85.125) has joined #ceph
[7:48] * rongze (~rongze@219.143.85.125) Quit (Remote host closed the connection)
[7:52] * Coestar (~Snowcat4@98EAAAY53.tor-irc.dnsbl.oftc.net) Quit ()
[7:54] * Concubidated (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[7:59] * overclk (~overclk@121.244.87.117) has joined #ceph
[7:59] * jcsalem (~Jim@pool-108-49-214-102.bstnma.fios.verizon.net) has joined #ceph
[8:02] * jcsalem (~Jim@pool-108-49-214-102.bstnma.fios.verizon.net) Quit ()
[8:04] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[8:04] * Hemanth (~Hemanth@121.244.87.117) Quit ()
[8:05] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[8:16] * bkopilov (~bkopilov@bzq-79-183-144-37.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[8:22] * toast (~Malcovent@72.52.91.30) has joined #ceph
[8:25] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) has joined #ceph
[8:25] * sankarshan (~sankarsha@121.244.87.116) has joined #ceph
[8:28] * bkopilov (~bkopilov@bzq-79-183-144-37.red.bezeqint.net) has joined #ceph
[8:31] * rongze (~rongze@219.143.85.125) has joined #ceph
[8:34] * chasmo77 (~chas77@158.183-62-69.ftth.swbr.surewest.net) has joined #ceph
[8:44] * evl (~chatzilla@139.216.138.39) has joined #ceph
[8:46] * rongze (~rongze@219.143.85.125) Quit (Remote host closed the connection)
[8:50] * subscope (~subscope@92-249-244-64.pool.digikabel.hu) has joined #ceph
[8:52] * toast (~Malcovent@98EAAAY7L.tor-irc.dnsbl.oftc.net) Quit ()
[8:52] * roaet (~SaneSmith@98EAAAY76.tor-irc.dnsbl.oftc.net) has joined #ceph
[8:59] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) Quit (Quit: p66kumar)
[9:00] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) has joined #ceph
[9:00] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) Quit ()
[9:01] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) has joined #ceph
[9:01] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) Quit ()
[9:08] * evl (~chatzilla@139.216.138.39) Quit (Quit: ChatZilla 0.9.91.1 [Firefox 37.0/20150327124350])
[9:12] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[9:14] * karnan (~karnan@121.244.87.117) has joined #ceph
[9:21] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[9:22] * roaet (~SaneSmith@98EAAAY76.tor-irc.dnsbl.oftc.net) Quit ()
[9:22] * N3X15 (~Tonux@37.187.129.166) has joined #ceph
[9:29] * jcsalem (~Jim@pool-108-49-214-102.bstnma.fios.verizon.net) has joined #ceph
[9:34] * jcsalem (~Jim@pool-108-49-214-102.bstnma.fios.verizon.net) Quit ()
[9:52] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[9:52] * N3X15 (~Tonux@2WVAAA302.tor-irc.dnsbl.oftc.net) Quit ()
[9:52] * Salamander_ (~rushworld@h88-150-187-210.host.redstation.co.uk) has joined #ceph
[9:54] * sankarshan (~sankarsha@121.244.87.116) Quit (Quit: Are you sure you want to quit this channel (Cancel/Ok) ?)
[9:55] * linjan (~linjan@213.8.240.146) has joined #ceph
[10:09] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[10:10] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[10:13] * jluis is now known as joao
[10:21] * vbellur (~vijay@121.244.87.124) has joined #ceph
[10:22] * Salamander_ (~rushworld@2WVAAA32B.tor-irc.dnsbl.oftc.net) Quit ()
[10:22] * Epi (~roaet@h88-150-187-210.host.redstation.co.uk) has joined #ceph
[10:39] * jcsalem (~Jim@pool-108-49-214-102.bstnma.fios.verizon.net) has joined #ceph
[10:42] * jcsalem (~Jim@pool-108-49-214-102.bstnma.fios.verizon.net) Quit ()
[10:48] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[10:49] * fdmanana (~fdmanana@bl13-135-166.dsl.telepac.pt) has joined #ceph
[10:49] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[10:52] * Epi (~roaet@2WVAAA33U.tor-irc.dnsbl.oftc.net) Quit ()
[10:52] * Misacorp (~Shesh@98EAAAZBT.tor-irc.dnsbl.oftc.net) has joined #ceph
[10:57] * sigsegv (~sigsegv@188.25.121.203) has joined #ceph
[11:11] * zack_dolby (~textual@nfmv001163050.uqw.ppp.infoweb.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[11:12] * rdas (~rdas@121.244.87.116) has joined #ceph
[11:14] * Hemanth (~Hemanth@121.244.87.117) Quit (Quit: Leaving)
[11:14] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[11:22] * Misacorp (~Shesh@98EAAAZBT.tor-irc.dnsbl.oftc.net) Quit ()
[11:22] * ylmson (~maku@195.169.125.226) has joined #ceph
[11:32] * fghaas (~florian@91-119-140-224.dynamic.xdsl-line.inode.at) has joined #ceph
[11:50] * sigsegv (~sigsegv@188.25.121.203) Quit (Quit: sigsegv)
[11:52] * ylmson (~maku@5NZAAA7KL.tor-irc.dnsbl.oftc.net) Quit ()
[11:58] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[12:07] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[12:22] * maku1 (~CoZmicShR@tor-exit.squirrel.theremailer.net) has joined #ceph
[12:32] * joao (~joao@249.38.136.95.rev.vodafone.pt) Quit (Quit: Leaving)
[12:32] * joao (~joao@249.38.136.95.rev.vodafone.pt) has joined #ceph
[12:32] * ChanServ sets mode +o joao
[12:34] * Nacer (~Nacer@2001:41d0:fe82:7200:5892:252c:7e2c:f7e2) has joined #ceph
[12:52] * maku1 (~CoZmicShR@98EAAAZEC.tor-irc.dnsbl.oftc.net) Quit ()
[12:52] * Jamana (~galaxyAbs@176.10.99.200) has joined #ceph
[12:57] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[13:01] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[13:06] * dl-est (~dl-est@a91-153-45-198.elisa-laajakaista.fi) has joined #ceph
[13:06] <dl-est> good morning
[13:11] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[13:12] * vbellur (~vijay@121.244.87.117) has joined #ceph
[13:16] * linjan (~linjan@213.8.240.146) Quit (Remote host closed the connection)
[13:17] * diegows (~diegows@190.190.5.238) has joined #ceph
[13:21] * dl-est (~dl-est@a91-153-45-198.elisa-laajakaista.fi) Quit (Remote host closed the connection)
[13:22] * Jamana (~galaxyAbs@5NZAAA7M8.tor-irc.dnsbl.oftc.net) Quit ()
[13:26] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[13:31] * pdrakeweb (~pdrakeweb@pool-72-75-231-226.bflony.fios.verizon.net) has joined #ceph
[13:34] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[13:37] * shaunm (~shaunm@74.215.76.114) has joined #ceph
[13:44] * cookednoodles (~eoin@89-93-153-201.hfc.dyn.abo.bbox.fr) has joined #ceph
[13:46] * cookednoodles (~eoin@89-93-153-201.hfc.dyn.abo.bbox.fr) Quit ()
[13:47] * georgem (~Adium@184.151.179.95) has joined #ceph
[13:48] * georgem (~Adium@184.151.179.95) Quit ()
[13:48] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[13:54] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[13:56] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[13:56] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[13:57] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[14:14] * neurodrone (~neurodron@pool-100-1-89-227.nwrknj.fios.verizon.net) has joined #ceph
[14:22] * Chaos_Llama (~Scymex@chomsky.torservers.net) has joined #ceph
[14:25] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:30] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has joined #ceph
[14:31] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[14:33] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) has joined #ceph
[14:34] * rotbeard (~redbeard@aftr-95-222-27-149.unity-media.net) has joined #ceph
[14:35] * dl-est (~dl-est@a91-153-45-198.elisa-laajakaista.fi) has joined #ceph
[14:39] * justyns (~justyns@li916-116.members.linode.com) Quit (Ping timeout: 480 seconds)
[14:39] * yanzheng (~zhyan@171.216.95.48) has joined #ceph
[14:40] * dl-est (~dl-est@a91-153-45-198.elisa-laajakaista.fi) Quit (Remote host closed the connection)
[14:47] * i_m (~ivan.miro@deibp9eh1--blueice3n2.emea.ibm.com) has joined #ceph
[14:48] * sigsegv (~sigsegv@188.25.121.203) has joined #ceph
[14:50] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[14:51] * wschulze (~wschulze@cpe-74-73-11-233.nyc.res.rr.com) has joined #ceph
[14:52] * Chaos_Llama (~Scymex@5NZAAA7Q6.tor-irc.dnsbl.oftc.net) Quit ()
[14:52] * Frostshifter (~Enikma@98EAAAZIM.tor-irc.dnsbl.oftc.net) has joined #ceph
[14:57] * sjm (~sjm@pool-173-70-76-86.nwrknj.fios.verizon.net) has joined #ceph
[14:58] * longguang_home (~chatzilla@111.202.0.54) has joined #ceph
[14:59] * hellertime (~Adium@a72-246-185-10.deploy.akamaitechnologies.com) has joined #ceph
[15:03] <fxmulder_> health HEALTH_WARN mds0: Client 1569366 failing to respond to cache pressure; mds0: Client 1569571 failing to respond to cache pressure; mds0: Client 1557062 failing to respond to cache pressure; mds0: Client 1570149 failing to respond to cache pressure; mds0: Client 1585199 failing to respond to cache pressure
[15:04] <fxmulder_> so I keep seeing this, any idea what might be causing this? I'm running 0.87.1
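
For readers landing here with the same warning: "failing to respond to cache pressure" means the MDS asked a client to release capabilities and it did not. A sketch of how to investigate, assuming an MDS named mds.0 and a release whose admin socket offers session ls:

    # on the MDS host: list client sessions and the caps each one holds
    ceph daemon mds.0 session ls
    # if clients legitimately hold many inodes, the MDS inode cache can be
    # raised from its 100000 default in ceph.conf, [mds] section:
    #   mds cache size = 500000
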
[15:09] * dl-est (~dl-est@a91-153-45-198.elisa-laajakaista.fi) has joined #ceph
[15:11] * tupper_ (~tcole@2001:420:2280:1272:8900:f9b8:3b49:567e) has joined #ceph
[15:12] * longguang_home_ (~chatzilla@111.202.0.54) has joined #ceph
[15:13] * rotbeard (~redbeard@aftr-95-222-27-149.unity-media.net) Quit (Quit: Leaving)
[15:14] * longguang_home (~chatzilla@111.202.0.54) Quit (Ping timeout: 480 seconds)
[15:14] * longguang_home_ is now known as longguang_home
[15:21] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:21] * neurodrone (~neurodron@pool-100-1-89-227.nwrknj.fios.verizon.net) Quit (Quit: neurodrone)
[15:22] * championofcyrodi (~championo@50-205-35-98-static.hfc.comcastbusiness.net) has joined #ceph
[15:22] * Frostshifter (~Enikma@98EAAAZIM.tor-irc.dnsbl.oftc.net) Quit ()
[15:23] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[15:26] * Aethis (~Izanagi@37.187.129.166) has joined #ceph
[15:26] <Vivek> loicd: Are you there for a quick question ?
[15:27] * harold (~hamiller@71-94-227-66.dhcp.mdfd.or.charter.com) has joined #ceph
[15:27] <Vivek> I had integrated OpenStack Juno to use Ceph for block storage.
[15:27] * harold (~hamiller@71-94-227-66.dhcp.mdfd.or.charter.com) Quit ()
[15:27] <Vivek> I have brought up one of the nodes of the ceph cluster which had crashed.
[15:28] <Vivek> I am getting an error as follows when I do a ceph -w on the Juno node.
[15:29] * yanzheng (~zhyan@171.216.95.48) Quit (Quit: This computer has gone to sleep)
[15:29] <Vivek> http://paste.ubuntu.com/10749911/
[15:32] * scuttle|afk is now known as scuttlemonkey
[15:35] <Vivek> scuttlemonkey: Hi.
[15:35] * subscope (~subscope@92-249-244-64.pool.digikabel.hu) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[15:37] <dl-est> hi
[15:37] * rongze (~rongze@106.39.154.69) has joined #ceph
[15:39] <scuttlemonkey> Vivek: hey there
[15:48] * rendar (~I@host163-180-dynamic.23-79-r.retail.telecomitalia.it) has joined #ceph
[15:48] * MrHeavy (~mrheavy@pool-108-54-190-117.nycmny.fios.verizon.net) has joined #ceph
[15:49] <MrHeavy> Hey all, I'm testing a 4-node Ceph cluster and all my I/O (i.e. rados -p <pool> ls) hangs when any one node is offline -- any idea what might be happening?
[15:50] <MrHeavy> Health shows all the primaries are online
[15:50] <dl-est> MrHeavy, how many OSDs? I had that happening with an erasure pool when more OSDs (disks) were offline than were tolerated..
[15:50] <SamYaple> MrHeavy: do you have ceph-mon running on the same nodes as your OSDs?
[15:51] <MrHeavy> SamYaple: Yes, but I still have quorum
[15:51] <MrHeavy> dl-est: 24 OSDs
[15:51] <dl-est> ok weird..
[15:51] <SamYaple> MrHeavy: when you have a mon and osd go down at the same time, iops will hang
[15:51] <SamYaple> while a new election for the mons happens, the topology of the OSD going down can't be updated
[15:51] <MrHeavy> That makes perfect sense
[15:52] <SamYaple> it should recover after a little while though, like 60 seconds or so (may be more with more OSDs)
[15:52] <SamYaple> i would recommend first taking down the OSDs, then the monitor if you have to do it that way
[15:52] <dl-est> i have a major authentication problem. glance throws an operation not permitted even though the glance ceph user has full permission to the correct pool.. really really really weird
[15:53] <SamYaple> dl-est: most probably a configuration issue
[15:53] <SamYaple> dl-est: do you have logs/stacktraces/configs?
[15:53] <dl-est> SamYaple.. yea i get that... but where?!?! :) yea i have the configs and the traces..
[15:53] <dl-est> hang on
[15:53] <MrHeavy> SamYaple: In this case it was a scheduled maintenance, so that's good to know going forward
[15:54] <SamYaple> dl-est: my guess is wrong client name and/or wrong pool name
[15:54] <SamYaple> MrHeavy: yea they don't really tell you the specifics of why you shouldn't run osd+mon, but that's basically why
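
A minimal sketch of the maintenance sequence SamYaple describes, with Upstart syntax as on Ubuntu 14.04 and invented OSD ids and mon name:

    ceph osd set noout        # keep CRUSH from rebalancing during the window
    stop ceph-osd id=3        # stop this host's OSDs first...
    stop ceph-osd id=4
    stop ceph-mon id=node1    # ...and only then the co-located monitor
    # do the maintenance, bring everything back up in reverse order, then:
    ceph osd unset noout
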
[15:55] * DV__ (~veillard@2001:41d0:1:d478::1) has joined #ceph
[15:56] * Aethis (~Izanagi@98EAAAZJP.tor-irc.dnsbl.oftc.net) Quit ()
[15:56] * tunaaja (~homosaur@ec2-54-94-241-184.sa-east-1.compute.amazonaws.com) has joined #ceph
[15:59] * vbellur (~vijay@122.171.73.145) has joined #ceph
[16:00] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[16:02] <dl-est> SamYaple: client name as in ceph user? and wrong pool?.. glance/ images :D ... well i will continue with this.. did a permission change on the files.. now i get glance.api.v1.upload_utils ObjectNotFound: error calling connect .... this is getting weirder by the minute..
[16:03] <dl-est> but thanks for your pointer so far.. i will have to redo it and just dig a bit deeper.. i can ping the cluster and connect to the ports just fine from the glance server..
[16:04] * dl-est (~dl-est@a91-153-45-198.elisa-laajakaista.fi) Quit (Remote host closed the connection)
[16:05] * yanzheng (~zhyan@171.216.95.48) has joined #ceph
[16:07] * Hemanth (~Hemanth@121.244.87.117) Quit (Ping timeout: 480 seconds)
[16:07] * yanzheng (~zhyan@171.216.95.48) Quit ()
[16:09] * daniel2_ (~daniel2_@cpe-24-28-6-151.austin.res.rr.com) has joined #ceph
[16:13] <MrHeavy> dl-est: On whatever server is giving you the problem, run: rbd --id <userid without "client."> -p <pool> ls
[16:14] <MrHeavy> (You're probably not managing RBD images, but as far as I can tell, the rados command doesn't let you specify a client name.)
[16:15] <MrHeavy> I ran into this problem recently and it was the dumbest thing imaginable: my ceph.conf pointed to the right keyring, but I had a copy/paste error in the keyring and the heading read [client.glance] instead of [client.cinder]
[16:15] <MrHeavy> Oh whoops, he/she's not even here
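
For anyone hitting the same copy/paste trap: the heading inside the keyring must match the name the service authenticates as. A sketch, with an illustrative client.cinder name and path:

    # /etc/ceph/ceph.client.cinder.keyring -- the heading must match the
    # client name used on the command line, e.g. rbd --id cinder -p <pool> ls
    [client.cinder]
        key = AQ...==
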
[16:16] * DV__ (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[16:17] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[16:20] <loicd> Vivek: that suggests /var/run/ceph does not exist
[16:23] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[16:26] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[16:26] * tunaaja (~homosaur@2WVAAA4NY.tor-irc.dnsbl.oftc.net) Quit ()
[16:30] * wushudoin (~wushudoin@209.132.181.86) has joined #ceph
[16:39] * yanzheng (~zhyan@171.216.95.48) has joined #ceph
[16:39] * linuxkidd (~linuxkidd@166.177.186.242) has joined #ceph
[16:42] * yanzheng (~zhyan@171.216.95.48) Quit ()
[16:46] <Vivek> loicd: Yes, but I have /var/run/ceph on all my 4 ceph nodes but not on the Juno node.
[16:47] <Vivek> loicd: How should I resolve this issue ?
[16:47] <Vivek> loicd: A pointer /url would be fine.
[16:47] <loicd> Vivek: either mkdir /var/run/ceph or set the --run-dir option to a directory that exists
[16:52] <Vivek> mkdir on the Juno node ?
[16:52] <Vivek> loicd:ping.
[16:53] <loicd> Vivek: yes
[16:55] * fghaas (~florian@91-119-140-224.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[16:56] * csharp (~Szernex@tor-exit.eecs.umich.edu) has joined #ceph
[16:57] <Vivek> Ok, Thanks.
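
The fix loicd describes, spelled out; run-dir is a generic config option the ceph CLI also accepts as a flag:

    sudo mkdir -p /var/run/ceph    # default admin-socket directory
    ceph -w                        # or point elsewhere: ceph --run-dir /tmp -w
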
[17:00] * ircolle (~Adium@c-71-229-136-109.hsd1.co.comcast.net) has joined #ceph
[17:00] * davidz (~davidz@2605:e000:1313:8003:61f1:bdcf:4084:6804) has joined #ceph
[17:01] <georgem> we plan to start with a cluster of 468 OSDs and add another 288 (one rack) every 6-12 months; should we start with 2-3x the PGs or just grow them when we add new OSDs?
[17:04] * kingcu (~kingcu@kona.ridewithgps.com) has joined #ceph
[17:09] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) has joined #ceph
[17:12] <georgem> any advice?
[17:13] <Sysadmin88> you may need to wait for advice, for someone to see who has good advice to give
[17:15] * reed (~reed@d126-6b15-a945-4197-0386-356b-4420-2062.6rd.ip6.sonic.net) has joined #ceph
[17:17] <georgem> thanks, I'll keep waiting :)
[17:17] <magicrobotmonkey> georgem: I'd start with fewer
[17:17] <magicrobotmonkey> how many osds/host will you have?
[17:17] <georgem> 36
[17:18] <magicrobotmonkey> yea currently in ceph, it uses two threads per pg for communication
[17:18] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[17:18] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has left #ceph
[17:18] <magicrobotmonkey> so the more pgs you add the more context switching you have to do
[17:18] <magicrobotmonkey> we have a similar cluster and thats been a big bottleneck for us
[17:18] <georgem> there will be data move when the PG number grows or the new OSDs are added anyway, so we might as well only provision PGs for the existing drives
[17:19] <magicrobotmonkey> right
[17:20] <georgem> magicrobotmonkey: what do you mean by "uses two threads per pg for communication"?
[17:20] * shylesh (~shylesh@121.244.87.124) Quit (Remote host closed the connection)
[17:21] <magicrobotmonkey> so you know how if you have replication of 3, you have 3 pgs working together?
[17:21] <georgem> yes
[17:21] <magicrobotmonkey> they stay connected once peered
[17:21] <magicrobotmonkey> currently each connection has its own thread
[17:22] <magicrobotmonkey> so the more pgs each osd hosts, the more thread contention each osd process has
[17:23] <magicrobotmonkey> so the osd hosts end up wasting a *lot* of time context switching
[17:23] <magicrobotmonkey> i think hammer may have a couple solutions to this problem though
[17:24] <georgem> ok, so basically if we have very large objects it is safe to assume there will be 2 x any-to-any established connections, so 500 OSDs to 500 OSDs x 2 ?
[17:24] <magicrobotmonkey> no its more than that
[17:24] * Hemanth (~Hemanth@121.244.87.117) Quit (Ping timeout: 480 seconds)
[17:24] <magicrobotmonkey> because its pg to pg
[17:24] <magicrobotmonkey> not osd to osd
[17:25] <magicrobotmonkey> are you planning on using erasure coded pools?
[17:26] <georgem> so, if I have 50000 PGs there will be 50000 x 2 threads x 2(3) replicas?
[17:26] * csharp (~Szernex@425AAAHPD.tor-irc.dnsbl.oftc.net) Quit ()
[17:26] <georgem> no, no erasure coded pools, aren't they still experimental?
[17:26] * mason1 (~Mattress@chulak.enn.lu) has joined #ceph
[17:28] <magicrobotmonkey> well the issue is per osd so with replica size 3, you'll have #pgs/osd * 3 threads per osd thread * 36 (osds per host) threads per host
[17:28] * lpabon (~quassel@nat-pool-bos-t.redhat.com) has joined #ceph
[17:28] <magicrobotmonkey> we do have 60 osds/host so its more of a problem for us, you should be ok with 36, depending on your processors, but it's something to keep in mind
[17:28] <magicrobotmonkey> i haven't seen it discussed much
[17:30] <georgem> magicrobotmonkey: thanks for the heads up, I'll keep an eye on the cs values; the CPUs are 2x6 cores with 128 GB RAM so they should be fine
[17:30] <magicrobotmonkey> yea for sure
[17:30] <magicrobotmonkey> the other thing is that you can always add more pgs, but you can't remove them
[17:31] <georgem> I know, thanks
[17:31] * joshd (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) has joined #ceph
[17:31] <magicrobotmonkey> anyways, all that to say, if it was me, i'd start with less
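
For reference, the rule of thumb usually cited for an initial PG count is OSDs x 100 / replica count, rounded up to a power of two. Applied to the 468-OSD, 3-replica cluster above, with a placeholder pool name:

    # 468 * 100 / 3 = 15600 -> next power of two is 16384
    ceph osd pool set <pool> pg_num 16384
    ceph osd pool set <pool> pgp_num 16384    # pgp_num must follow pg_num
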
[17:31] * jdillaman (~jdillaman@pool-173-66-110-250.washdc.fios.verizon.net) has joined #ceph
[17:32] * xarses_ (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:36] * rturk|afk is now known as rturk
[17:39] * moore (~moore@64.202.160.88) has joined #ceph
[17:40] * daniel2_ (~daniel2_@cpe-24-28-6-151.austin.res.rr.com) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[17:42] * xarses_ (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[17:46] * daniel2_ (~daniel2_@cpe-24-28-6-151.austin.res.rr.com) has joined #ceph
[17:51] <kingcu> anyone here have issues with logrotate causing mons to drop out of the cluster?
[17:52] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[17:52] <kingcu> seems like the shipped logrotate script is a bit aggressive
[17:53] <kingcu> specifically, every morning at 6:30am when logrotate kicks off, I get alerts on the return status of ceph status: WARNING: HEALTH_WARN 1 mons down, quorum 1,2 drexler,lucy
[17:53] <gleam> doesn't it just do a hup? i'd think that wouldn't cause the mon to restart + drop out
[17:54] <kingcu> exactly
[17:54] <kingcu> https://gist.github.com/kingcu/c53a4f9dd89053c3591d
[17:55] <kingcu> this is on ubuntu 14.04
[17:55] <kingcu> it's not just mons, the other day i had OSDs go south during logrotate
[17:55] * longguang_home (~chatzilla@111.202.0.54) Quit (Quit: ChatZilla 0.9.91.1 [Firefox 36.0.1/20150305021524])
[17:55] <kingcu> WARNING: HEALTH_WARN 50 pgs peering; 41 pgs stuck inactive; 41 pgs stuck unclean
[17:55] <kingcu> as well as a metadata server
[17:56] <kingcu> kinda worrying, still in pre-production evaluation phase of a small cluster. stress testing cephfs a little bit as the filestore for a pair of OSM tileservers
[17:56] * mason1 (~Mattress@98EAAAZNC.tor-irc.dnsbl.oftc.net) Quit ()
[17:56] * Xa (~hgjhgjh@hessel2.torservers.net) has joined #ceph
[17:56] <kingcu> hoping to have enough confidence to start putting more production data in the system (not using cephfs)
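
A way to reproduce the 6:30 event on demand rather than waiting for cron; the path assumes the logrotate config shipped with the ceph packages:

    ceph -w &                                  # watch for mons dropping out
    logrotate --force /etc/logrotate.d/ceph    # force the rotation now
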
[17:57] * ToMiles (~ToMiles@nl6x.mullvad.net) has joined #ceph
[17:58] * puffy (~puffy@50.185.218.255) Quit (Quit: Leaving.)
[17:59] * p66kumar (~p66kumar@74.119.205.248) has joined #ceph
[18:01] * puffy (~puffy@50.185.218.255) has joined #ceph
[18:01] * scuttlemonkey is now known as scuttle|afk
[18:04] * Rickus_ (~Rickus@office.protected.ca) has joined #ceph
[18:04] * Rickus (~Rickus@office.protected.ca) Quit (Read error: Connection reset by peer)
[18:05] * Rickus__ (~Rickus@office.protected.ca) has joined #ceph
[18:05] * Rickus_ (~Rickus@office.protected.ca) Quit (Read error: Connection reset by peer)
[18:05] * BManojlovic (~steki@cable-89-216-232-254.dynamic.sbb.rs) has joined #ceph
[18:08] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[18:11] * Rickus_ (~Rickus@office.protected.ca) has joined #ceph
[18:11] * Rickus__ (~Rickus@office.protected.ca) Quit (Read error: Connection reset by peer)
[18:12] * Rickus (~Rickus@205.173.252.249) has joined #ceph
[18:12] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[18:14] * Rickus_ (~Rickus@office.protected.ca) Quit (Read error: Connection reset by peer)
[18:14] * Rickus_ (~Rickus@office.protected.ca) has joined #ceph
[18:19] * puffy (~puffy@50.185.218.255) Quit (Quit: Leaving.)
[18:21] * Rickus (~Rickus@205.173.252.249) Quit (Ping timeout: 480 seconds)
[18:22] * bandrus (~brian@117.sub-70-211-78.myvzw.com) has joined #ceph
[18:23] * joef (~Adium@2620:79:0:2420::6) has joined #ceph
[18:26] * Xa (~hgjhgjh@98EAAAZN8.tor-irc.dnsbl.oftc.net) Quit ()
[18:28] <fxmulder_> is Client 1569366 failing to respond to cache pressure; something to be worried about?
[18:28] * bandrus1 (~brian@117.sub-70-211-78.myvzw.com) has joined #ceph
[18:29] <kingcu> fxmulder_: i've seen that too on 0.87.1, on heavy cephfs usage
[18:30] * bandrus (~brian@117.sub-70-211-78.myvzw.com) Quit (Ping timeout: 480 seconds)
[18:31] * Kyso (~Thononain@5NZAAA72S.tor-irc.dnsbl.oftc.net) has joined #ceph
[18:31] * rturk is now known as rturk|afk
[18:33] * scuttle|afk is now known as scuttlemonkey
[18:35] * lalatenduM (~lalatendu@122.172.133.171) has joined #ceph
[18:37] * bkopilov (~bkopilov@bzq-79-183-144-37.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[18:38] * xarses_ (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:40] * joshd (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) Quit (Quit: Leaving.)
[18:44] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[18:47] * bandrus1 (~brian@117.sub-70-211-78.myvzw.com) Quit (Quit: Leaving.)
[18:49] * bandrus (~brian@117.sub-70-211-78.myvzw.com) has joined #ceph
[18:52] * rongze (~rongze@106.39.154.69) Quit (Remote host closed the connection)
[18:57] <fxmulder_> I have lots of accesses to files but not a lot of files being accessed
[18:58] <kingcu> fxmulder_: i found a few references online saying it's a non-issue and is fixed in hammer
[18:58] <kingcu> upgrading to hammer today to see if that is resolved. still in pre-production testing
[19:01] * Kyso (~Thononain@5NZAAA72S.tor-irc.dnsbl.oftc.net) Quit ()
[19:01] * Tarazed (~WedTM@chomsky.torservers.net) has joined #ceph
[19:04] * dl-est (~dl-est@a91-153-45-198.elisa-laajakaista.fi) has joined #ceph
[19:10] * xarses_ (~andreww@12.164.168.117) has joined #ceph
[19:12] * dl-est (~dl-est@a91-153-45-198.elisa-laajakaista.fi) Quit (Ping timeout: 480 seconds)
[19:13] * mgolub (~Mikolaj@91.225.203.116) has joined #ceph
[19:14] * dl-est (~dl-est@a91-153-45-198.elisa-laajakaista.fi) has joined #ceph
[19:14] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) Quit (Ping timeout: 480 seconds)
[19:16] * magicrobotmonkey (~abassett@ec2-50-18-55-253.us-west-1.compute.amazonaws.com) Quit (Ping timeout: 480 seconds)
[19:16] * Tarazed (~WedTM@5NZAAA733.tor-irc.dnsbl.oftc.net) Quit (Remote host closed the connection)
[19:17] * hellertime1 (~Adium@pool-173-48-154-80.bstnma.fios.verizon.net) has joined #ceph
[19:17] * hellertime (~Adium@a72-246-185-10.deploy.akamaitechnologies.com) Quit (Read error: Connection reset by peer)
[19:20] * MACscr (~Adium@2601:d:c800:de3:e40a:177c:821f:8a5b) Quit (Quit: Leaving.)
[19:20] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has joined #ceph
[19:22] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has left #ceph
[19:22] * daniel2_ (~daniel2_@cpe-24-28-6-151.austin.res.rr.com) Quit (Quit: Textual IRC Client: www.textualapp.com)
[19:22] * Kupo1 (~tyler.wil@23.111.254.159) has joined #ceph
[19:22] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[19:23] * MACscr (~Adium@2601:d:c800:de3:f19d:4c12:3088:1412) has joined #ceph
[19:26] * i_m (~ivan.miro@deibp9eh1--blueice3n2.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[19:27] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) Quit (Read error: Connection reset by peer)
[19:28] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) has joined #ceph
[19:31] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Remote host closed the connection)
[19:31] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[19:35] * dl-est (~dl-est@a91-153-45-198.elisa-laajakaista.fi) Quit (Remote host closed the connection)
[19:46] * ggg (~OODavo@spftor4e1.privacyfoundation.ch) has joined #ceph
[19:52] <fxmulder_> kingcu: let me know how it goes
[19:52] <kingcu> fxmulder_: rolling update went well
[19:53] <kingcu> orchestrated by ansible. going to hammer at cephfs and see if it gives me shit still
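
The usual rolling-upgrade order, presumably what the ansible run automates, is monitors first, then OSDs, then MDSes, one host at a time. A compressed sketch assuming Upstart on Ubuntu 14.04:

    ceph osd set noout                        # suppress rebalancing for the window
    apt-get update && apt-get install ceph    # on each host in turn
    restart ceph-mon-all                      # on monitor hosts first
    restart ceph-osd-all                      # then on OSD hosts, one at a time
    ceph osd unset noout                      # once everything is back up
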
[19:53] * joshd (~jdurgin@38.122.20.226) has joined #ceph
[19:56] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[20:01] * dl-est (~dl-est@a91-153-45-198.elisa-laajakaista.fi) has joined #ceph
[20:01] * magicrobotmonkey (~abassett@ec2-50-18-55-253.us-west-1.compute.amazonaws.com) has joined #ceph
[20:03] * rongze (~rongze@106.39.154.69) has joined #ceph
[20:06] * bandrus1 (~brian@117.sub-70-211-78.myvzw.com) has joined #ceph
[20:12] * bandrus (~brian@117.sub-70-211-78.myvzw.com) Quit (Ping timeout: 480 seconds)
[20:12] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) has joined #ceph
[20:14] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[20:16] * ggg (~OODavo@98EAAAZSA.tor-irc.dnsbl.oftc.net) Quit ()
[20:17] * Spikey (~Frostshif@tor-exit.server9.tvdw.eu) has joined #ceph
[20:20] * vbellur (~vijay@122.171.73.145) Quit (Ping timeout: 480 seconds)
[20:20] * puffy (~puffy@216.207.42.144) has joined #ceph
[20:21] * sbfox (~Adium@72.2.49.50) has joined #ceph
[20:25] * puffy1 (~puffy@216.207.42.129) has joined #ceph
[20:28] * puffy (~puffy@216.207.42.144) Quit (Ping timeout: 480 seconds)
[20:28] * lalatenduM (~lalatendu@122.172.133.171) Quit (Quit: Leaving)
[20:34] * bkopilov (~bkopilov@bzq-79-180-169-37.red.bezeqint.net) has joined #ceph
[20:34] * davidz (~davidz@2605:e000:1313:8003:61f1:bdcf:4084:6804) Quit (Quit: Leaving.)
[20:39] * c4tech (~c4tech@199.91.185.156) has joined #ceph
[20:42] * c4tech (~c4tech@199.91.185.156) Quit ()
[20:42] * c4tech (~c4tech@199.91.185.156) has joined #ceph
[20:42] * c4tech (~c4tech@199.91.185.156) Quit ()
[20:42] * c4tech (~c4tech@199.91.185.156) has joined #ceph
[20:42] * c4tech (~c4tech@199.91.185.156) Quit ()
[20:42] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[20:46] * Spikey (~Frostshif@98EAAAZTG.tor-irc.dnsbl.oftc.net) Quit ()
[20:49] * davidzlap (~Adium@2605:e000:1313:8003:4c44:9477:d73e:cd35) has joined #ceph
[20:51] * nih (~airsoftgl@ks4003088.ip-142-4-208.net) has joined #ceph
[20:51] * scuttlemonkey is now known as scuttle|afk
[20:56] * scuttle|afk is now known as scuttlemonkey
[20:58] * dl-est (~dl-est@a91-153-45-198.elisa-laajakaista.fi) Quit (Quit: Leaving...)
[21:01] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[21:13] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Read error: Connection reset by peer)
[21:15] * rongze (~rongze@106.39.154.69) Quit (Remote host closed the connection)
[21:19] * brad_mssw (~brad@66.129.88.50) Quit (Remote host closed the connection)
[21:21] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[21:21] * nih (~airsoftgl@5NZAAA78A.tor-irc.dnsbl.oftc.net) Quit ()
[21:23] * shivark (~oftc-webi@32.97.110.54) has joined #ceph
[21:23] <shivark> hi Loic
[21:24] <shivark> This is Shiva, asking you about the osd auto-mount issue
[21:27] * shivark slaps loicd around a bit with a large fishbot
[21:28] <loicd> shivark: \o
[21:29] <loicd> could you paste2.org the output of sgdisk --info 1 /dev/sdb ?
[21:29] <loicd> shivark: ^
[21:29] <shivark> sure.. just sent you an email
[21:29] * loicd looking
[21:30] <shivark> with that info
[21:31] <loicd> shivark: could you sudo ceph-disk list | pastebinit ?
[21:31] <loicd> Partition unique GUID: 1618223E-B8C9-4C4A-B5D2-EBFF6D64CB12
[21:31] <loicd> matches
[21:32] <loicd> lrwxrwxrwx 1 root root 10 Apr 4 16:27 1618223e-b8c9-4c4a-b5d2-ebff6d64cb12 -> ../../sdb1
[21:32] <loicd> which is good
[21:32] <shivark> http://pastebin.com/T0B7suMw
[21:33] <shivark> ceph-disk list
[21:33] <loicd> /dev/sdb1 ceph data, active, cluster ceph, osd.8
[21:33] <loicd> but osd.8 is not running, right shivark ?
[21:34] <shivark> right
[21:35] * sbfox (~Adium@72.2.49.50) Quit (Quit: Leaving.)
[21:36] <loicd> shivark: ls -l /lib/udev/rules.d/*ceph* ?
[21:37] <shivark> # ls -l /lib/udev/rules.d/*ceph* -rw-r--r-- 1 root root 1721 Mar 9 20:11 /lib/udev/rules.d/60-ceph-partuuid-workaround.rules -rw-r--r-- 1 root root 208 Mar 9 20:11 /lib/udev/rules.d/95-ceph-osd.rules
[21:37] <shivark> http://pastebin.com/NxuAuztb
[21:38] * sbfox (~Adium@72.2.49.50) has joined #ceph
[21:38] <loicd> shivark: /dev/sdb1 is not mounted, is it ?
[21:39] <shivark> right now, we manually mounted it
[21:39] * linuxkidd (~linuxkidd@166.177.186.242) Quit (Ping timeout: 480 seconds)
[21:39] <shivark> otherwise, it was not automounted after system reboot
[21:39] <shivark> I can try rebooting again
[21:40] <loicd> shivark: so you mount it manually and then how do you start osd.8 ?
[21:40] <shivark> I've not done the start. Usually I do service ceph restart
[21:40] <loicd> this is rhel 6.5 ?
[21:40] <shivark> yes
[21:41] <loicd> does osd.8 start when you service ceph restart ?
[21:41] <shivark> let me try
[21:41] <loicd> wait
[21:41] <shivark> ok
[21:42] <loicd> please :-)
[21:42] * loicd forgot to say the magic word
[21:42] <loicd> shivark: there are other osd on the same machine and they run fine at boot, is that right ?
[21:43] <shivark> I think we tried on another cluster (similar config), and osd was restarted after mounting the partition and doing service ceph restart
[21:43] <shivark> On this cluster, there are only two OSDs mounted and both are down after reboot
[21:44] <loicd> ok
[21:44] <loicd> both osd.9 and osd.8 are not up
[21:44] <shivark> yes
[21:44] <loicd> could you ls -l /var/lib/ceph/osd/*/journal ?
[21:45] <loicd> shivark: ^
[21:45] <shivark> sure
[21:45] <loicd> please :-)
[21:46] <shivark> http://pastebin.com/NxuAuztb
[21:47] <shivark> osd tree, lsblk and ls -l outputs
[21:48] <loicd> shivark: sgdisk --info 1 /dev/sdr ?
[21:49] <loicd> it looks like the problem is that the /dev/sdr1 partition is not the expected journal (otherwise ceph-disk list would have shown it)
[21:49] * linuxkidd (~linuxkidd@166.170.55.15) has joined #ceph
[21:49] <loicd> shivark: is it possible that /dev/sdr was reformatted / changed ?
[21:50] <loicd> └─sdr1 65:17 1 214.6G 0 part
[21:50] <shivark> no, it was not reformatted
[21:50] <loicd> it looks really big for a journal
[21:50] <shivark> yes got over-provisioned :)
[21:51] <loicd> slightly ;-)
[21:51] <shivark> got a 1 TB SSD instead of a 200 GB
[21:51] * sixofour (~Redshift@171.ip-5-135-148.eu) has joined #ceph
[21:52] <loicd> shivark: I would be surprised if service start ceph-osd manages to start ceph-osd 8
[21:52] <loicd> can you give it a try ?
[21:52] <shivark> sure
[21:52] * reed (~reed@d126-6b15-a945-4197-0386-356b-4420-2062.6rd.ip6.sonic.net) Quit (Ping timeout: 480 seconds)
[21:53] <shivark> both are up
[21:56] <shivark> http://pastebin.com/NxuAuztb
[21:58] <loicd> shivark: and what's the output of ceph-disk list now ?
[21:58] <shivark> checking
[22:00] <shivark> http://pastebin.com/x34TnkvV
[22:01] * xarses_ (~andreww@12.164.168.117) Quit (Remote host closed the connection)
[22:01] <loicd> ok. I kind of remember there is something non intuitive in the display of the journal
[22:01] <loicd> ceph osd tree shows these two are up now shivark ?
[22:01] <shivark> yes
[22:02] * xarses (~andreww@12.164.168.117) has joined #ceph
[22:02] <loicd> # ls -l /var/lib/ceph/osd/ceph-0/journal
[22:02] <loicd> lrwxrwxrwx 1 root root 58 Nov 2 2013 /var/lib/ceph/osd/ceph-0/journal -> /dev/disk/by-partuuid/7086c324-04fd-4594-8f5b-76c7a0d5b833
[22:03] <loicd> is the kind of thing I would have expected
[22:03] <loicd> but
[22:03] * sixofour (~Redshift@5NZAAA8AI.tor-irc.dnsbl.oftc.net) Quit (Remote host closed the connection)
[22:03] * Rens2Sea (~Eman@98EAAAZW1.tor-irc.dnsbl.oftc.net) has joined #ceph
[22:03] <shivark> oh ok
[22:04] <loicd> I think /dev/sdr1 is what you expect when you set the journal on an ssd
[22:04] <loicd> and I don't see why it would be a problem
[22:04] <shivark> should I check the output on 0.80.7?
[22:05] <shivark> We have not changed anything with partitioning of ssds recently
[22:05] <shivark> it is the same for both 0.80.7 and 0.80.9
[22:05] <loicd> when it was 0.80.7 it booted correctly
[22:05] <shivark> yesterday
[22:06] <loicd> after upgrading to v0.80.9 it no longer starts at boot
[22:06] <shivark> yesterday
[22:06] <loicd> right ?
[22:06] <shivark> correct
[22:06] <shivark> got two clusters, one with 80.7 and other with 80.9
[22:06] * loicd browsing the patches
[22:06] <shivark> another member reported the problem on 0.80.8
[22:07] <shivark> https://www.mail-archive.com/ceph-users@lists.ceph.com/msg16625.html
[22:10] <loicd> git log --patch tags/v0.80.7..tags/v0.80.9 -- udev src/ceph-disk
[22:10] <loicd> shows nothing harmless
[22:10] <loicd> s/harmless/harmful ;-)
[22:10] * sadbox (~jmcguire@sadbox.org) Quit (Max SendQ exceeded)
[22:12] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[22:12] <loicd> I think it's something unrelated to ceph but I'd be *very* curious to know what exactly
[22:12] * sadbox_ (~jmcguire@sadbox.org) has joined #ceph
[22:13] <shivark> ok
[22:13] <loicd> shivark: could you shutdown osd.8 / osd.9 ?
[22:13] <loicd> service stop ceph-osd presumably
[22:13] <shivark> ok, doing "service ceph stop"
[22:13] <loicd> ok
[22:13] <loicd> do you have a terminal with udevadm monitor in it ?
[22:14] <shivark> no udevadm terminal
[22:14] <loicd> could you run sudo udevadm monitor in a terminal ?
[22:14] <loicd> and run udevadm trigger --sysname-match=sdb
[22:15] <loicd> and paste2.org what is shown in the terminal
[22:15] * lpabon (~quassel@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[22:15] <shivark> ok
[22:15] * rongze (~rongze@106.39.154.69) has joined #ceph
[22:16] <shivark> http://pastebin.com/x34TnkvV
[22:17] * rodrigoUSA (~Rodri@172.56.4.149) has joined #ceph
[22:17] <rodrigoUSA> hi all
[22:17] <rodrigoUSA> someone know how to mount cephfs on windows ?
[22:17] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[22:19] * boredatwork (~overonthe@199.68.193.62) has joined #ceph
[22:19] * loicd tries to remember how to figure out if udev/95-ceph-osd.rules is called
[22:21] * bkopilov (~bkopilov@bzq-79-180-169-37.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[22:22] * bkopilov (~bkopilov@bzq-109-64-149-201.red.bezeqint.net) has joined #ceph
[22:23] * rongze (~rongze@106.39.154.69) Quit (Ping timeout: 480 seconds)
[22:25] <shivark> ok
[22:25] * dupont-y (~dupont-y@2a01:e34:ec92:8070:9075:21c0:2a0f:dc51) has joined #ceph
[22:26] <dmick> rodrigoUSA: generally you don't. There has been talk of a ceph client for Windows but nothing is available AFAIK. Exporting as NFS is probably the closest to what you want.
[22:27] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[22:29] * loicd boots a rhel 6.5 machine to try things
[22:29] <loicd> and I'll take notes this time
[22:30] <rodrigoUSA> shivark, how do I export ceph as NFS
[22:32] <shivark> thanks loicd. Are you going to be up for some more time tonight?
[22:32] * cpceph (~Adium@67.21.63.155) has joined #ceph
[22:32] <loicd> i'd like to get to the bottom of this
[22:32] <loicd> I suspect it's related to a bug that has been reported but which I was not able to reproduce
[22:33] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Ping timeout: 480 seconds)
[22:33] <shivark> ok, sounds good loic.
[22:33] * mliang2 (~oftc-webi@12.22.22.11) has joined #ceph
[22:33] * Rens2Sea (~Eman@98EAAAZW1.tor-irc.dnsbl.oftc.net) Quit ()
[22:34] <mliang2> Hi, I need some advice on ceph cache tier best practices. Anyone w/ experiences?
[22:35] * linuxkidd (~linuxkidd@166.170.55.15) Quit (Ping timeout: 480 seconds)
[22:35] <shivark> rodrigoUSA, sebastian has a blog on nfs over rbd: http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/
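
The gist of that post, compressed into a sketch; the pool, image name, size, mountpoint and export network are all placeholders:

    modprobe rbd                                   # kernel RBD client
    rbd create -p <pool> nfsshare --size 102400    # size in MB, so 100 GB
    rbd map -p <pool> nfsshare
    mkfs.xfs /dev/rbd0
    mount /dev/rbd0 /mnt/nfsshare
    echo '/mnt/nfsshare 192.168.0.0/24(rw,no_root_squash)' >> /etc/exports
    exportfs -ra
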
[22:37] * moore_ (~moore@63-232-3-122.dia.static.qwest.net) has joined #ceph
[22:38] * TomyLobo (~w2k@72.ip-198-50-145.net) has joined #ceph
[22:39] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[22:42] * tupper_ (~tcole@2001:420:2280:1272:8900:f9b8:3b49:567e) Quit (Ping timeout: 480 seconds)
[22:43] * moore (~moore@64.202.160.88) Quit (Ping timeout: 480 seconds)
[22:45] <loicd> following instructions from http://ceph.com/docs/master/install/get-packages/ gets me v0.80.5 on rhel 6.5...
[22:45] * ToMiles (~ToMiles@nl6x.mullvad.net) Quit (Quit: leaving)
[22:45] <loicd> although http://ceph.com/rpm-firefly/rhel6/x86_64/ has v0.80.9
[22:46] * linuxkidd (~linuxkidd@mobile-166-173-249-046.mycingular.net) has joined #ceph
[22:49] * rendar (~I@host163-180-dynamic.23-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[22:50] <shivark> we download the packages from the firefly directory
[22:50] * rodrigoUSA (~Rodri@172.56.4.149) Quit (Ping timeout: 480 seconds)
[22:51] * rendar (~I@host163-180-dynamic.23-79-r.retail.telecomitalia.it) has joined #ceph
[22:51] * puffy (~puffy@216.207.42.144) has joined #ceph
[22:52] * loicd doing that
[22:52] <loicd> shivark: is there a way to figure out if a given repo is taken into account ?
[22:53] * jdillaman (~jdillaman@pool-173-66-110-250.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[22:54] <loicd> shivark: did you get http://ceph.com/rpm-firefly/rhel6/x86_64/gdisk-0.8.2-1.el6.x86_64.rpm ?
[22:54] * rodrigoUSA (~Rodri@mfd2c36d0.tmodns.net) has joined #ceph
[22:54] <loicd> hum
[22:54] <loicd> the default one seems more recent
[22:54] <shivark> gdisk-0.8.2-1.el6.x86_64
[22:55] <shivark> thats what I got on the system
[22:56] * puffy1 (~puffy@216.207.42.129) Quit (Ping timeout: 480 seconds)
[22:56] <loicd> I can't do that manually; I need to figure out why the instructions do not work for me
[22:57] <loicd> shivark: do you have a /etc/yum.repos.d/ceph.repo that works for you ?
[22:58] * mgolub (~Mikolaj@91.225.203.116) Quit (Quit: away)
[22:58] <shivark> We have a local yum repo
[22:58] * Egyptian[Laptop] (~marafa@cpe-98-26-77-230.nc.res.rr.com) has joined #ceph
[22:59] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[23:00] <shivark> we download all the rpms from firefly directory and make a local repo
[23:00] <loicd> ok
[23:01] <shivark> as our servers are not allowed to connect to internet
[23:01] <loicd> apparently I have epel packages taking precendence
[23:01] <loicd> epel6
[23:01] <shivark> disable that repo?
[23:02] <shivark> set this value to 0: enabled=0
[23:02] <shivark> so, it won't look in that repo
[23:03] <loicd> it needs packages from epel
[23:03] <loicd> this is painful
[23:04] * moore (~moore@64.202.160.88) has joined #ceph
[23:05] <loicd> if I disable epel, packages are missing, but it tries to install .9
[23:05] <B_Rake> You can use yum priorities or exclude packages in the epel repo files
[23:05] <loicd> B_Rake: I set priority=2 as instructed at http://ceph.com/docs/master/install/get-packages/
[23:06] <loicd> B_Rake: how can I exclude packages ?
[23:06] * rodrigoUSA (~Rodri@mfd2c36d0.tmodns.net) Quit (Ping timeout: 480 seconds)
[23:06] * Nacer_ (~Nacer@2001:41d0:fe82:7200:78c4:1ebb:82d7:906d) has joined #ceph
[23:07] * georgem (~Adium@fwnat.oicr.on.ca) has left #ceph
[23:08] * TomyLobo (~w2k@5NZAAA8B7.tor-irc.dnsbl.oftc.net) Quit ()
[23:08] <loicd> I must be doing something stupidly wrong
[23:08] * _s1gma (~Miho@marylou.nos-oignons.net) has joined #ceph
[23:08] <B_Rake> You would do something like
[23:08] <B_Rake> exclude=package1* *package2* package*
[23:08] <B_Rake> etc in the repo file
[23:10] <B_Rake> Like how this article goes over the "How to Exclude Packages from EPEL Repo" section http://www.tecmint.com/disable-certain-package-updates-using-yum-in-rhel-centos-fedora/
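
Applied to this case, the epel stanza gets an exclude for everything ceph.com also ships; the package list below is a guess, not a canonical one:

    # /etc/yum.repos.d/epel.repo
    [epel]
    enabled=1
    exclude=ceph* python-ceph librados2 librbd1 gdisk leveldb*
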
[23:10] * moore_ (~moore@63-232-3-122.dia.static.qwest.net) Quit (Ping timeout: 480 seconds)
[23:11] <loicd> B_Rake: thanks. I'm on the ceph-deploy path but if it fails I'll get back to that.
[23:11] <loicd> Error: Package: cloud-init-0.7.4-2.el6.noarch (epel)
[23:11] <loicd> Requires: dmidecode
[23:12] * loicd nukes the machine and starts over
[23:13] * Nacer (~Nacer@2001:41d0:fe82:7200:5892:252c:7e2c:f7e2) Quit (Ping timeout: 480 seconds)
[23:15] <B_Rake> I'm showing dmidecode is in base
[23:16] * greavette (~oftc-webi@64-7-147-239.agas1a-dynamic.dsl.sentex.ca) has joined #ceph
[23:16] <loicd> B_Rake: how do you check that ?
[23:16] * BManojlovic (~steki@cable-89-216-232-254.dynamic.sbb.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[23:16] * rongze (~rongze@106.39.154.69) has joined #ceph
[23:16] * sjm (~sjm@pool-173-70-76-86.nwrknj.fios.verizon.net) has left #ceph
[23:18] <B_Rake> I just ran 'yum search dmidecode' to see which repo it is pulling from
[23:18] <greavette> Hello, just learning about Ceph still. I'm hoping to get some direction from #ceph regarding SSD drives: my 8-bay Supermicro server has 8 SATA bays, so to use an SSD drive in the mix I would need a 2.5" bay converter. Does anyone think this will be a concern for performance?
[23:18] <B_Rake> Is the base repo enabled?
[23:18] <loicd> shivark: could you pastebin the output of udevadm test /block/sdb ?
[23:18] <shivark> ok
[23:18] * rodrigoUSA (~Rodri@24.41.238.33) has joined #ceph
[23:20] <shivark> http://pastebin.com/8CnqgSwx
[23:21] * scuttlemonkey is now known as scuttle|afk
[23:25] * rongze (~rongze@106.39.154.69) Quit (Ping timeout: 480 seconds)
[23:25] * linuxkidd (~linuxkidd@mobile-166-173-249-046.mycingular.net) Quit (Quit: Leaving)
[23:26] <greavette> And assuming that 2 of my 8 bays would be used for the OS (in RAID 1), the other 6 bays would be for Ceph storage. 1 SATA bay would be a spare, which leaves 5 bays for Ceph storage. Do I use one of these 5 bays for the SSD and put my storage on the other 4 SATA drives? How many SSD drives do I need in my server?
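
On the SSD count: the rule of thumb in this era is one SSD journal device for every four to six spinners, so a single SSD would cover the four or five data drives described. A sketch with firefly-era ceph-deploy syntax, host and device names invented (/dev/sdf is the SSD):

    # ceph-disk carves one journal partition per OSD out of the SSD
    ceph-deploy osd create node1:/dev/sdb:/dev/sdf node1:/dev/sdc:/dev/sdf
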
[23:30] <loicd> shivark: I can't figure out something useful from that output, back to the 6.5 installation
[23:30] <shivark> loic, thank you
[23:31] <loicd> shivark: it looks like it's not running the script I would expect. but when I test the same on a system that works, the test output does not show the udev script I would expect either. I conclude that i don't know how to read this properly.
[23:34] <shivark> ok
[23:34] <loicd> interestingly ceph-deploy works and .. installs v0.80.5
[23:36] <loicd> alfredodeza: do you happen to know why ceph-deploy installs firefly v0.80.5 instead of v0.80.9 by default on a newly installed rhel6.5 machine ?
[23:37] <loicd> hum
[23:37] <loicd> I thought ceph-deploy would add new repos but it did not
[23:38] * _s1gma (~Miho@425AAAHSL.tor-irc.dnsbl.oftc.net) Quit ()
[23:40] * loicd going back to manual installation instructions and excluding packages from epel
[23:41] * rodrigoUSA (~Rodri@24.41.238.33) Quit (Quit: Leaving)
[23:43] * sjmtest (uid32746@id-32746.uxbridge.irccloud.com) has joined #ceph
[23:43] * mliang2 (~oftc-webi@12.22.22.11) Quit (Quit: Page closed)
[23:48] <loicd> $ ceph --version
[23:48] <loicd> ceph version 0.80.9 (b5a67f0e1d15385bc0d60a6da6e7fc810bde6047)
[23:48] <loicd> B_Rake: with exclude, thanks for the tip !
[23:58] <loicd> shivark: I have a rhel 6.5 with a running cluster and a disk configured in the same way yours is. rebooting now ...
[23:59] <loicd> shivark: my ceph-osd is up
[23:59] * bandrus (~brian@198.23.71.111-static.reverse.softlayer.com) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.