#ceph IRC Log

IRC Log for 2015-04-29

Timestamps are in GMT/BST.

[0:01] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Remote host closed the connection)
[0:01] * LeaChim (~LeaChim@host86-143-18-67.range86-143.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[0:02] * fghaas (~florian@91-119-140-224.dynamic.xdsl-line.inode.at) has joined #ceph
[0:05] * georgem (~Adium@184.151.178.164) has left #ceph
[0:05] * sjmtest (uid32746@id-32746.uxbridge.irccloud.com) Quit (Quit: Connection closed for inactivity)
[0:12] * rendar (~I@host102-182-dynamic.20-87-r.retail.telecomitalia.it) Quit ()
[0:15] * bene2 (~ben@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[0:16] * fghaas (~florian@91-119-140-224.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[0:24] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[0:27] * clusterfudge (~Bromine@8Q4AAAF3U.tor-irc.dnsbl.oftc.net) Quit ()
[0:27] * Keiya (~rapedex@72.52.91.30) has joined #ceph
[0:31] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Ping timeout: 480 seconds)
[0:33] * puffy (~puffy@216.207.42.129) Quit (Quit: Leaving.)
[0:35] * puffy (~puffy@216.207.42.129) has joined #ceph
[0:35] * fireD (~fireD@93-139-238-9.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[0:37] * oblu (~o@62.109.134.112) has joined #ceph
[0:41] * oblu (~o@62.109.134.112) Quit ()
[0:41] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[0:43] * oblu (~o@62.109.134.112) has joined #ceph
[0:44] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[0:46] * rotbeard (~redbeard@aftr-95-222-27-149.unity-media.net) Quit (Quit: Leaving)
[0:47] * oblu (~o@62.109.134.112) Quit ()
[0:51] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[0:57] * Keiya (~rapedex@5NZAAB89S.tor-irc.dnsbl.oftc.net) Quit ()
[0:57] * arsenaali (~Unforgive@orilla.enn.lu) has joined #ceph
[0:59] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[1:03] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[1:04] * oms101 (~oms101@46.189.67.9) Quit (Ping timeout: 480 seconds)
[1:08] * debian112 (~bcolbert@24.126.201.64) Quit (Quit: Leaving.)
[1:18] * RayTracer (~RayTracer@host-81-190-2-156.gdynia.mm.pl) has joined #ceph
[1:26] * RayTracer (~RayTracer@host-81-190-2-156.gdynia.mm.pl) Quit (Quit: Leaving...)
[1:27] * arsenaali (~Unforgive@8Q4AAAF57.tor-irc.dnsbl.oftc.net) Quit ()
[1:27] * MonkeyJamboree (~lmg@37.187.129.166) has joined #ceph
[1:33] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[1:35] * danieagle (~Daniel@177.94.30.173) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[1:38] * fsimonce (~simon@host129-29-dynamic.250-95-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[1:44] * lkoranda (~lkoranda@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[1:46] * lkoranda (~lkoranda@nat-pool-brq-t.redhat.com) has joined #ceph
[1:57] * MonkeyJamboree (~lmg@8BXAAABK1.tor-irc.dnsbl.oftc.net) Quit ()
[1:57] * Grum (~Sun7zu@ncc-1701-a.tor-exit.network) has joined #ceph
[2:27] * Grum (~Sun7zu@789AAAAEY.tor-irc.dnsbl.oftc.net) Quit ()
[2:27] * puffy (~puffy@216.207.42.129) Quit (Ping timeout: 480 seconds)
[2:29] * reed (~reed@75-101-54-131.dsl.static.fusionbroadband.com) Quit (Quit: Ex-Chat)
[2:35] * fam_away is now known as fam
[2:53] * vata (~vata@cable-21.246.173-197.electronicbox.net) has joined #ceph
[2:57] * yanzheng (~zhyan@171.216.95.139) has joined #ceph
[2:59] * segutier (~segutier@mf35636d0.tmodns.net) has joined #ceph
[3:02] * segutier_ (~segutier@172.56.13.181) has joined #ceph
[3:05] * segutier_ (~segutier@172.56.13.181) Quit ()
[3:07] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[3:07] * segutier (~segutier@mf35636d0.tmodns.net) Quit (Ping timeout: 480 seconds)
[3:11] * xinxinsh (~xinxinsh@192.102.204.38) has joined #ceph
[3:13] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[3:13] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[3:13] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has left #ceph
[3:18] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[3:27] * cooey (~AluAlu@politkovskaja.torservers.net) has joined #ceph
[3:32] * shyu (~Shanzhi@119.254.196.66) has joined #ceph
[3:46] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[3:48] * bearkitten (~bearkitte@cpe-76-167-204-192.san.res.rr.com) Quit (Ping timeout: 480 seconds)
[3:49] * yghannam (~yghannam@0001f8aa.user.oftc.net) Quit (Ping timeout: 480 seconds)
[3:50] * kefu (~kefu@114.92.111.70) has joined #ceph
[3:51] * imjustmatthew (~imjustmat@pool-108-4-98-95.rcmdva.fios.verizon.net) Quit (Remote host closed the connection)
[3:57] * cooey (~AluAlu@9U1AAAG1S.tor-irc.dnsbl.oftc.net) Quit ()
[3:57] * `Jin (~CoMa@herngaard.torservers.net) has joined #ceph
[4:00] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[4:03] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[4:11] * jdillaman (~jdillaman@pool-173-66-110-250.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[4:25] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[4:27] * `Jin (~CoMa@9U1AAAG2S.tor-irc.dnsbl.oftc.net) Quit ()
[4:27] * Kakeru (~TehZomB@176.10.99.204) has joined #ceph
[4:29] * Kupo1 (~tyler.wil@23.111.254.159) Quit (Read error: Connection reset by peer)
[4:34] * zhaochao (~zhaochao@124.202.190.2) has joined #ceph
[4:37] * hellertime (~Adium@pool-173-48-154-80.bstnma.fios.verizon.net) has joined #ceph
[4:52] * MACscr (~Adium@2601:d:c800:de3:c983:3e7c:7f2d:e2a3) Quit (Quit: Leaving.)
[4:57] * Kakeru (~TehZomB@789AAAAI2.tor-irc.dnsbl.oftc.net) Quit ()
[4:57] * raindog (~Grum@ncc-1701-a.tor-exit.network) has joined #ceph
[4:59] * wushudoin (~wushudoin@2601:9:4b00:f10:2ab2:bdff:fe0b:a6ee) Quit (Ping timeout: 480 seconds)
[5:04] * wushudoin (~wushudoin@2601:9:4b00:f10:2ab2:bdff:fe0b:a6ee) has joined #ceph
[5:06] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[5:11] * Vacuum_ (~vovo@88.130.219.163) has joined #ceph
[5:15] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[5:18] * Vacuum__ (~vovo@i59F79B73.versanet.de) Quit (Ping timeout: 480 seconds)
[5:20] * wizr (~Adium@23.19.56.240) has joined #ceph
[5:21] * wizr (~Adium@23.19.56.240) has left #ceph
[5:22] * wizr (~Adium@23.19.56.240) has joined #ceph
[5:22] * oblu (~o@62.109.134.112) has joined #ceph
[5:27] * raindog (~Grum@8Q4AAAGEK.tor-irc.dnsbl.oftc.net) Quit ()
[5:27] * andrew_m (~Diablothe@orion.enn.lu) has joined #ceph
[5:29] * bandrus (~brian@50.23.113.232) Quit (Quit: Leaving.)
[5:30] * bd (~bd@88.217.195.190) Quit (Remote host closed the connection)
[5:30] * bd (~bd@mail.bc-bd.org) has joined #ceph
[5:31] * oblu (~o@62.109.134.112) Quit (Ping timeout: 480 seconds)
[5:31] * fred`` (fred@earthli.ng) Quit (Ping timeout: 480 seconds)
[5:31] * oblu (~o@62.109.134.112) has joined #ceph
[5:33] * puffy (~puffy@50.185.218.255) has joined #ceph
[5:37] * slitvak69 (~slitvak69@c-98-206-252-16.hsd1.il.comcast.net) Quit (Read error: Connection reset by peer)
[5:39] * jeevan_ullas (~Deependra@114.143.38.213) has joined #ceph
[5:47] * fred`` (fred@earthli.ng) has joined #ceph
[5:50] * rdas (~rdas@122.168.205.106) has joined #ceph
[5:57] * andrew_m (~Diablothe@789AAAAK4.tor-irc.dnsbl.oftc.net) Quit ()
[5:57] * Xylios (~Miho@109.163.235.246) has joined #ceph
[5:58] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[6:00] * shylesh (~shylesh@121.244.87.124) has joined #ceph
[6:00] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[6:05] * vbellur (~vijay@122.167.227.24) Quit (Ping timeout: 480 seconds)
[6:05] * overclk (~overclk@121.244.87.117) has joined #ceph
[6:09] * lavalake (~lavalake@2601:7:8080:5d1:2510:e70c:e3fd:ccc5) Quit (Remote host closed the connection)
[6:11] * oblu (~o@62.109.134.112) Quit (Quit: ~)
[6:11] * lavalake_ (~lavalake@c-98-239-240-118.hsd1.pa.comcast.net) has joined #ceph
[6:15] * laval____ (~lavalake@c-98-239-240-118.hsd1.pa.comcast.net) has joined #ceph
[6:15] * lavalake_ (~lavalake@c-98-239-240-118.hsd1.pa.comcast.net) Quit (Read error: Connection reset by peer)
[6:18] * lav______ (~lavalake@2601:7:8080:5d1:8cfd:f389:9:4685) has joined #ceph
[6:18] * laval____ (~lavalake@c-98-239-240-118.hsd1.pa.comcast.net) Quit (Read error: Connection reset by peer)
[6:19] * kefu_ (~kefu@114.92.111.70) has joined #ceph
[6:19] * kefu (~kefu@114.92.111.70) Quit (Read error: Connection reset by peer)
[6:21] * hellertime (~Adium@pool-173-48-154-80.bstnma.fios.verizon.net) Quit (Quit: Leaving.)
[6:21] * lavalak__ (~lavalake@2601:7:8080:5d1:35a1:3cc:92aa:e02a) has joined #ceph
[6:26] * lav______ (~lavalake@2601:7:8080:5d1:8cfd:f389:9:4685) Quit (Ping timeout: 480 seconds)
[6:27] * Xylios (~Miho@8BXAAABRH.tor-irc.dnsbl.oftc.net) Quit ()
[6:27] * FNugget (~Aethis@176.10.99.204) has joined #ceph
[6:29] * madkiss3 (~madkiss@2001:6f8:12c3:f00f:7c8e:3206:eaf2:34d9) Quit (Read error: Connection reset by peer)
[6:29] * lavalak__ (~lavalake@2601:7:8080:5d1:35a1:3cc:92aa:e02a) Quit (Ping timeout: 480 seconds)
[6:34] * vbellur (~vijay@121.244.87.124) has joined #ceph
[6:34] * kefu_ (~kefu@114.92.111.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:35] * Karcaw_ (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) Quit (Ping timeout: 480 seconds)
[6:36] * elder_ (~elder@104.135.1.105) Quit (Quit: Leaving)
[6:37] * xinxinsh (~xinxinsh@192.102.204.38) Quit (Remote host closed the connection)
[6:38] * Karcaw (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) has joined #ceph
[6:45] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[6:49] * amote (~amote@121.244.87.116) has joined #ceph
[6:57] * FNugget (~Aethis@5NZAAB9FB.tor-irc.dnsbl.oftc.net) Quit ()
[6:57] * Shnaw (~dug@37.187.129.166) has joined #ceph
[7:01] * nils_ (~nils@doomstreet.collins.kg) has joined #ceph
[7:04] * xinxinsh (~xinxinsh@192.102.204.38) has joined #ceph
[7:14] * elder_ (~elder@12.23.74.29) has joined #ceph
[7:20] * ade (~abradshaw@dslb-094-223-083-167.094.223.pools.vodafone-ip.de) has joined #ceph
[7:22] * wschulze (~wschulze@cpe-74-73-11-233.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:23] * lavalake (~lavalake@2601:7:8080:5d1:19cc:24b9:f26c:6a7) has joined #ceph
[7:27] * Shnaw (~dug@2WVAAB4M8.tor-irc.dnsbl.oftc.net) Quit ()
[7:27] * rhonabwy (~cyphase@46.183.220.132) has joined #ceph
[7:28] * Hemanth (~Hemanth@121.244.87.117) Quit (Ping timeout: 480 seconds)
[7:31] * lavalake (~lavalake@2601:7:8080:5d1:19cc:24b9:f26c:6a7) Quit (Ping timeout: 480 seconds)
[7:31] * kefu (~kefu@114.92.111.70) has joined #ceph
[7:32] * puffy (~puffy@50.185.218.255) Quit (Ping timeout: 480 seconds)
[7:35] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[7:57] * rhonabwy (~cyphase@789AAAAOY.tor-irc.dnsbl.oftc.net) Quit ()
[8:02] * kawa2014 (~kawa@80.71.49.3) has joined #ceph
[8:02] * Karcaw (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) Quit (Read error: Connection reset by peer)
[8:03] * Karcaw (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) has joined #ceph
[8:06] * elder_ (~elder@12.23.74.29) Quit (Quit: Leaving)
[8:08] * Shantanu (~shantanu@114.79.157.147) has joined #ceph
[8:08] * Pablo (~pcaruana@213.175.37.10) has joined #ceph
[8:10] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) Quit (Quit: Leaving.)
[8:11] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[8:13] * cok (~chk@2a02:2350:18:1010:1482:d21d:89a8:7bf) has joined #ceph
[8:15] * pvh_sa (~pvh@105.210.174.253) Quit (Ping timeout: 480 seconds)
[8:16] * _robbat21irssi (nobody@www2.orbis-terrarum.net) has joined #ceph
[8:18] * _robbat2|irssi (nobody@www2.orbis-terrarum.net) Quit (Ping timeout: 480 seconds)
[8:21] * Nacer (~Nacer@2001:41d0:fe82:7200:1470:f581:f88e:b620) has joined #ceph
[8:25] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[8:26] <Be-El> hi
[8:27] * ZombieL (~dicko@176.10.99.200) has joined #ceph
[8:28] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[8:28] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[8:28] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[8:38] * Nacer (~Nacer@2001:41d0:fe82:7200:1470:f581:f88e:b620) Quit (Remote host closed the connection)
[8:38] * Sysadmin88 (~IceChat77@054527d3.skybroadband.com) Quit (Quit: Relax, its only ONES and ZEROS!)
[8:40] * vbellur (~vijay@121.244.87.117) has joined #ceph
[8:42] <topro> sage: thanks a lot, I'll have a look at ceph-dokan and ceph vfs plugin to see which one is most promising solution for me
[8:43] * cholcombe (~chris@80.71.49.3) has joined #ceph
[8:45] * Mika_c (~quassel@122.146.93.152) has joined #ceph
[8:48] * fdmanana (~fdmanana@bl13-128-229.dsl.telepac.pt) has joined #ceph
[8:57] * ZombieL (~dicko@789AAAARF.tor-irc.dnsbl.oftc.net) Quit ()
[8:57] * zc00gii (~Nanobot@212.7.194.71) has joined #ceph
[8:59] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[8:59] * wogri (~wolf@nix.wogri.at) Quit (Ping timeout: 480 seconds)
[9:00] * coderspinoza (~oftc-webi@dcsneespop.snu.ac.kr) has joined #ceph
[9:04] * hflai (hflai@alumni.cs.nctu.edu.tw) Quit (Remote host closed the connection)
[9:04] * lavalake (~lavalake@c-98-239-240-118.hsd1.pa.comcast.net) has joined #ceph
[9:06] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[9:07] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) Quit (Quit: Leaving.)
[9:11] * analbeard (~shw@support.memset.com) has joined #ceph
[9:11] * wogri (~wolf@nix.wogri.at) has joined #ceph
[9:11] * wogri (~wolf@nix.wogri.at) Quit ()
[9:11] * wogri (~wolf@nix.wogri.at) has joined #ceph
[9:12] * coderspinoza (~oftc-webi@dcsneespop.snu.ac.kr) Quit (Quit: Page closed)
[9:12] <anorak> hi
[9:12] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:13] * lavalake (~lavalake@c-98-239-240-118.hsd1.pa.comcast.net) Quit (Ping timeout: 480 seconds)
[9:14] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[9:14] * bitserker (~toni@88.87.194.130) has joined #ceph
[9:17] * cholcombe (~chris@80.71.49.3) Quit (Ping timeout: 480 seconds)
[9:18] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit ()
[9:19] * jordanP (~jordan@213.215.2.194) has joined #ceph
[9:22] * linjan (~linjan@195.110.41.9) has joined #ceph
[9:23] * hflai (hflai@alumni.cs.nctu.edu.tw) has joined #ceph
[9:23] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[9:25] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[9:26] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[9:26] * cholcombe (~chris@static.c147-98.i01-5.onvol.net) has joined #ceph
[9:27] * zc00gii (~Nanobot@789AAAAS0.tor-irc.dnsbl.oftc.net) Quit ()
[9:28] * dgurtner (~dgurtner@178.197.231.67) has joined #ceph
[9:29] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[9:30] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[9:34] * fsimonce (~simon@host129-29-dynamic.250-95-r.retail.telecomitalia.it) has joined #ceph
[9:34] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit ()
[9:35] * fghaas (~florian@91-119-140-224.dynamic.xdsl-line.inode.at) has joined #ceph
[9:36] * pvh_sa (~pvh@uwcfw.uwc.ac.za) has joined #ceph
[9:37] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:38] * Shantanu (~shantanu@114.79.157.147) Quit (Quit: Leaving.)
[9:38] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) has joined #ceph
[9:38] * hflai (hflai@alumni.cs.nctu.edu.tw) Quit (Remote host closed the connection)
[9:39] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[9:39] * Shantanu (~shantanu@114.79.157.147) has joined #ceph
[9:40] * joao (~joao@h-213.61.119.230.host.de.colt.net) has joined #ceph
[9:40] * ChanServ sets mode +o joao
[9:42] * oms101 (~oms101@h-213.61.119.230.host.de.colt.net) has joined #ceph
[9:44] * coderspinoza (~oftc-webi@dcsneespop.snu.ac.kr) has joined #ceph
[9:44] * treenerd (~treenerd@85.193.140.98) has joined #ceph
[9:45] * treenerd (~treenerd@85.193.140.98) Quit ()
[9:45] * treenerd (~treenerd@85.193.140.98) has joined #ceph
[9:46] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[9:49] * treenerd (~treenerd@85.193.140.98) Quit ()
[9:49] * treenerd (~treenerd@85.193.140.98) has joined #ceph
[9:50] * coderspinoza (~oftc-webi@dcsneespop.snu.ac.kr) Quit (Quit: Page closed)
[9:51] * wizr1 (~Adium@180.166.92.36) has joined #ceph
[9:51] * wizr (~Adium@23.19.56.240) Quit (Ping timeout: 480 seconds)
[9:51] * hflai (hflai@alumni.cs.nctu.edu.tw) has joined #ceph
[9:57] * LeaChim (~LeaChim@host86-143-18-67.range86-143.btcentralplus.com) has joined #ceph
[9:57] * PappI (~Chrissi_@TidyBread.tor-exit.sec.gd) has joined #ceph
[9:59] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[10:02] * shyu (~Shanzhi@119.254.196.66) Quit (Remote host closed the connection)
[10:08] * pvh_sa (~pvh@uwcfw.uwc.ac.za) Quit (Ping timeout: 480 seconds)
[10:09] * a1-away is now known as AbyssOne
[10:09] * nico_ch (~nc@flinux01.tu-graz.ac.at) has joined #ceph
[10:10] * nico_ch (~nc@flinux01.tu-graz.ac.at) Quit ()
[10:10] * shang (~ShangWu@80.71.49.3) has joined #ceph
[10:11] * nc_ch (~nc@flinux01.tu-graz.ac.at) has joined #ceph
[10:14] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[10:14] <nc_ch> hi all ... i have a question about my understanding of the system: i have a ceph cluster that had 79 osds until this morning; today i added some disks to the machines and now have 91 osds ... everything works fine, but what i wonder is: why do i get degraded objects? i understand that obviously i will have misplaced objects, but why degraded
[10:14] <nc_ch> they are of course synching quickly and happily, so no problem as it is
[10:15] <Be-El> nc_ch: which ceph version do you use?
[10:15] <nc_ch> hammer
[10:15] <nc_ch> the cluster was installed with firefly, then upgraded to giant, then to hammer
[10:15] <Be-El> do you use erasure coded pools?
[10:15] <nc_ch> but i have seen the same effect on giant too
[10:15] <nc_ch> no, replicated ones
[10:16] * rendar (~I@host165-19-dynamic.3-79-r.retail.telecomitalia.it) has joined #ceph
[10:16] <Be-El> in this case you are right, the pg should be misplaced instead of degraded
[10:16] <nc_ch> my understanding is, that of course, if a disk would for example fail, then it makes sense i have degraded objects
[10:17] <nc_ch> but by adding disks, misplaced is of course logical ...
[10:17] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) has joined #ceph
[10:17] <Be-El> nc_ch: you can check one or two of the degraded pgs with ceph pg dump. their acting list should contain the correct number of osd if they are just misplaced.
[10:19] <nc_ch> ok ... that is an idea
[10:20] <nc_ch> thank you, since i am adding another machine this weekend, i will have enough opportunity to look at it, thanks a lot
[10:20] <Be-El> you're welcome
[10:20] * nc_ch (~nc@flinux01.tu-graz.ac.at) Quit (Quit: Leaving)
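A minimal sketch of the check Be-El suggests above (the PG id 5.1a is a made-up example): list the PGs that are not active+clean together with their up/acting sets, then query one of them; if the acting set still holds the full number of replicas, the PG is merely misplaced rather than truly degraded.
    ceph pg dump pgs_brief | grep -v 'active+clean'   # state plus up/acting OSDs per PG
    ceph pg 5.1a query                                # full detail, including "up" and "acting"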
[10:23] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[10:23] * AbyssOne is now known as a1-away
[10:25] * thomnico (~thomnico@smb-rsycl-09.hotspot.hub-one.net) has joined #ceph
[10:27] * PappI (~Chrissi_@5NZAAB9JN.tor-irc.dnsbl.oftc.net) Quit ()
[10:34] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[10:36] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:37] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[10:41] * pvh_sa (~pvh@41.164.8.114) has joined #ceph
[10:41] * DV_ (~veillard@2001:41d0:1:d478::1) Quit (Remote host closed the connection)
[10:43] * fireD (~fireD@93-138-198-160.adsl.net.t-com.hr) has joined #ceph
[10:43] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[10:45] * oms101 (~oms101@h-213.61.119.230.host.de.colt.net) Quit (Ping timeout: 480 seconds)
[10:47] <guerby> loicd, BTW we have the same question as nc_ch above, we see "degraded" when adding disks or changing OSD weight whereas we expected only "misplaced" ^ sileht
[10:47] <guerby> with giant
[10:48] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[10:48] * joao (~joao@h-213.61.119.230.host.de.colt.net) Quit (Ping timeout: 480 seconds)
[10:52] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[10:54] * lavalake (~lavalake@c-98-239-240-118.hsd1.pa.comcast.net) has joined #ceph
[10:57] * boichev (~boichev@213.169.56.130) has joined #ceph
[10:57] * joao (~joao@h-213.61.119.230.host.de.colt.net) has joined #ceph
[10:57] * zhaochao_ (~zhaochao@123.125.35.150) has joined #ceph
[11:02] * lavalake (~lavalake@c-98-239-240-118.hsd1.pa.comcast.net) Quit (Ping timeout: 480 seconds)
[11:03] * zhaochao (~zhaochao@124.202.190.2) Quit (Ping timeout: 480 seconds)
[11:03] * zhaochao_ is now known as zhaochao
[11:05] * ChrisHolcombe (~chris@80.71.49.3) has joined #ceph
[11:05] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[11:06] <stalob> hi, i've got a system that was working fine while i was testing on it, until i tried to push a heavy file after increasing the replication, and now the pgs of one pool are active+clean+undersized+degraded. how do i fix that (other than deleting and recreating the pool)?
[11:06] * cholcombe (~chris@static.c147-98.i01-5.onvol.net) Quit (Ping timeout: 480 seconds)
[11:06] <destrudo> `ceph osd tree` output plox
[11:08] <destrudo> and `ceph --status`
[11:08] <destrudo> pastebin it
[11:08] <stalob> cluster ca662c64-78a3-4230-b77a-b0818fbda80d
[11:08] <stalob> health HEALTH_WARN
[11:08] <stalob> 164 pgs degraded
[11:08] <stalob> 164 pgs stuck degraded
[11:09] <destrudo> paste
[11:09] <destrudo> bin
[11:10] * ircolle (~ircolle@88.128.80.248) has joined #ceph
[11:11] <stalob> http://pastebin.com/tjiZTa7t
[11:14] <destrudo> and ceph osd tree?
[11:14] <destrudo> Oh and dump + decompile a crushmap and post it too
[11:14] <destrudo> wait
[11:15] <destrudo> you increased replication to something >3
[11:15] <destrudo> with the crush map using hosts as the map mechanism
[11:15] <destrudo> er
[11:15] <destrudo> type
[11:16] <destrudo> type mechanism
[11:16] <stalob> yeah, replication = 6
[11:16] <destrudo> yeah
[11:16] <destrudo> bad idea
[11:16] <stalob> ok
[11:16] <destrudo> reset it to 3.
[11:16] <stalob> why it's a bad idea,
[11:16] <stalob> ?
[11:17] <destrudo> because you need to change the chooseleaf step to osd rather than host
[11:17] <stalob> oh thx, all pgs are active+clean
[11:17] <destrudo> but
[11:17] <destrudo> you don't even have 6 osd's
[11:17] <destrudo> so
[11:17] * oms101 (~oms101@h-213.61.119.230.host.de.colt.net) has joined #ceph
[11:17] <destrudo> how the hell do you expect to increase replication
[11:18] <destrudo> I think you can do something else
[11:18] <destrudo> well
[11:18] <destrudo> no
[11:18] <destrudo> I don't know
[11:18] <destrudo> I don't think so
[11:18] <destrudo> dump the crushmap though
[11:18] <destrudo> and then read the docs
[11:18] <destrudo> then you'll understand
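A hedged sketch of what destrudo is describing: with only 3 hosts and a rule that chooses leaves by host, a pool size of 6 can never be satisfied. The pool name and file names below are arbitrary examples.
    ceph osd pool set rbd size 3             # put replication back to what 3 hosts can satisfy
    ceph osd getcrushmap -o crush.bin        # dump the compiled crushmap
    crushtool -d crush.bin -o crush.txt      # decompile it for reading/editing
    # the replicated rule normally contains:
    #     step chooseleaf firstn 0 type host
    # changing "type host" to "type osd" would allow more replicas than hosts,
    # at the cost of losing host-level failure isolation
    crushtool -c crush.txt -o crush.new      # recompile after editing
    ceph osd setcrushmap -i crush.new        # inject the modified map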
[11:27] * ade (~abradshaw@dslb-094-223-083-167.094.223.pools.vodafone-ip.de) Quit (Remote host closed the connection)
[11:27] * Revo84 (~TehZomB@ncc-1701-a.tor-exit.network) has joined #ceph
[11:31] * thomnico (~thomnico@smb-rsycl-09.hotspot.hub-one.net) Quit (Ping timeout: 480 seconds)
[11:32] <anorak> hi all. I seem to be having a problem. I tried to shrink the block device but during shrinking...it got corrupted i guess
[11:32] <anorak> running "resize2fs /dev/rbd/rbd/buffer" gives Please run 'e2fsck -f /dev/rbd/rbd/ringbuffer' first.
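For reference, a rough sketch of the usual order for shrinking an RBD-backed ext4 filesystem (filesystem first, image second); the device path comes from anorak's paste and the sizes are purely illustrative.
    e2fsck -f /dev/rbd/rbd/ringbuffer        # filesystem must be clean before resize2fs will run
    resize2fs /dev/rbd/rbd/ringbuffer 50G    # shrink the fs to (or below) the target size
    rbd resize --allow-shrink --size 51200 rbd/ringbuffer   # then shrink the image (size in MB);
                                             # --allow-shrink may not exist on older rbd clients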
[11:33] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[11:36] * rwheeler (~rwheeler@121.244.87.124) has joined #ceph
[11:37] * dgurtner_ (~dgurtner@178.197.231.213) has joined #ceph
[11:37] * bobrik (~bobrik@83.243.64.45) has joined #ceph
[11:38] * dgurtner (~dgurtner@178.197.231.67) Quit (Ping timeout: 480 seconds)
[11:41] * kblin (~kai@h2176968.stratoserver.net) has joined #ceph
[11:41] <kblin> hi folks
[11:44] <kblin> can I use an OSD as a cephfs client these days? When I looked at this a couple of years back, that wasn't possible for some reason
[11:45] * vbellur (~vijay@121.244.87.124) has joined #ceph
[11:47] * karnan (~karnan@106.51.233.68) has joined #ceph
[11:48] * fghaas (~florian@91-119-140-224.dynamic.xdsl-line.inode.at) Quit (Ping timeout: 480 seconds)
[11:48] * fghaas (~florian@zid-vpnn076.uibk.ac.at) has joined #ceph
[11:52] * rotbeard (~redbeard@x5f74d5c6.dyn.telefonica.de) has joined #ceph
[11:57] * Revo84 (~TehZomB@5NZAAB9LJ.tor-irc.dnsbl.oftc.net) Quit ()
[11:58] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[12:02] * branto (~borix@ip-213-220-214-203.net.upcbroadband.cz) has joined #ceph
[12:07] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[12:13] * karnan (~karnan@106.51.233.68) Quit (Ping timeout: 480 seconds)
[12:13] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[12:19] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[12:21] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[12:22] * karnan (~karnan@106.51.133.102) has joined #ceph
[12:26] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[12:27] * Teddybareman (~delcake@ncc-1701-d.tor-exit.network) has joined #ceph
[12:29] * Concubidated (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[12:30] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[12:32] * cok (~chk@2a02:2350:18:1010:1482:d21d:89a8:7bf) has left #ceph
[12:33] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[12:34] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[12:36] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[12:37] * joao (~joao@h-213.61.119.230.host.de.colt.net) Quit (Ping timeout: 480 seconds)
[12:38] * bobrik (~bobrik@83.243.64.45) Quit (Read error: Connection reset by peer)
[12:38] * lovejoy (~lovejoy@213.83.69.6) has joined #ceph
[12:38] * lovejoy (~lovejoy@213.83.69.6) Quit ()
[12:38] * Concubidated (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[12:41] * ChrisHolcombe (~chris@80.71.49.3) Quit (Ping timeout: 480 seconds)
[12:42] * a1-away is now known as AbyssOne
[12:43] * fghaas (~florian@zid-vpnn076.uibk.ac.at) Quit (Ping timeout: 480 seconds)
[12:43] * zhaochao_ (~zhaochao@124.202.190.2) has joined #ceph
[12:46] * kefu (~kefu@114.92.111.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[12:49] * zhaochao (~zhaochao@123.125.35.150) Quit (Ping timeout: 480 seconds)
[12:49] * zhaochao_ is now known as zhaochao
[12:49] * ircolle (~ircolle@88.128.80.248) Quit (Ping timeout: 480 seconds)
[12:51] * nc_ch (~nc@flinux01.tu-graz.ac.at) has joined #ceph
[12:51] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[12:52] * kefu (~kefu@114.92.111.70) has joined #ceph
[12:53] <nc_ch> re's ... just a follow-up on the question i posted earlier, which Be-El was so nice to answer: when adding a new OSD, why are some pgs shown as degraded? this is not intuitive, since no disk was lost and there should be no possibility of degradation. the answer is as follows:
[12:55] <nc_ch> ceph adds the new OSD to the acting set of the PGs that are going to be rebalanced, so the expected number of replicas increases by 1. replica n+1 is now obviously missing on the new OSD, so the PG enters the degraded state. once backfilling has completed, one of the OSDs that previously served that PG is removed from the acting set and the PG returns to active+clean (found in a mailing-list post)
[12:55] <nc_ch> it is a bit counter-intuitive to call this degraded, but at least nothing is actually at risk ;)
[12:56] * fghaas (~florian@91-119-140-224.dynamic.xdsl-line.inode.at) has joined #ceph
[12:56] * karnan (~karnan@106.51.133.102) Quit (Quit: Leaving)
[12:57] * Teddybareman (~delcake@8BXAAAB2R.tor-irc.dnsbl.oftc.net) Quit ()
[12:57] * jakekosberg (~Quackie@2.tor.exit.babylon.network) has joined #ceph
[12:58] * pvh_sa (~pvh@41.164.8.114) Quit (Ping timeout: 480 seconds)
[12:59] <T1w> nc_ch: nice to know - but I assume it simplifies the state diagram a fair bit
[13:00] <nc_ch> yes, i thought it might be good to know, since the "degraded" was a bit unsettling...
[13:00] <nc_ch> i understand why they did it this way as well
[13:00] * bobrik (~bobrik@83.243.64.45) has joined #ceph
[13:02] <nc_ch> it just helps to know
[13:02] * hellertime (~Adium@72.246.0.14) has joined #ceph
[13:08] * Mika_c (~quassel@122.146.93.152) Quit (Remote host closed the connection)
[13:09] * leseb_ (~leseb@81-64-215-19.rev.numericable.fr) Quit (Quit: ZNC - http://znc.in)
[13:10] <T1w> oh,, yes indeed
[13:16] * shylesh (~shylesh@121.244.87.124) Quit (Remote host closed the connection)
[13:20] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[13:20] * leseb_ (~leseb@81-64-215-19.rev.numericable.fr) has joined #ceph
[13:21] * dneary (~dneary@pool-96-252-45-212.bstnma.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[13:23] * bitserker1 (~toni@88.87.194.130) has joined #ceph
[13:23] * bitserker (~toni@88.87.194.130) Quit (Read error: Connection reset by peer)
[13:27] * jakekosberg (~Quackie@8BXAAAB3V.tor-irc.dnsbl.oftc.net) Quit ()
[13:27] * pvh_sa (~pvh@uwcfw.uwc.ac.za) has joined #ceph
[13:31] * xinxinsh (~xinxinsh@192.102.204.38) Quit (Remote host closed the connection)
[13:31] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) has joined #ceph
[13:33] * yghannam (~yghannam@0001f8aa.user.oftc.net) has joined #ceph
[13:39] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) has joined #ceph
[13:40] * kobazik (~oftc-webi@91.194.158.21) has joined #ceph
[13:42] * wogri (~wolf@nix.wogri.at) Quit (Quit: Lost terminal)
[13:42] * wogri (~wolf@nix.wogri.at) has joined #ceph
[13:42] <kobazik> Hi, you probably get lots of questions like that, but I'm designing a 3 node ceph cluster with 3TB per OSD node. Has anyone tried intel avaton 8-core atom-like CPUs for OSD nodes? Intel avaton motherboards come with 10gbit ethernet so I wonder how well they perform. Thanks.
[13:43] * wogri (~wolf@nix.wogri.at) Quit ()
[13:43] * wogri (~wolf@nix.wogri.at) has joined #ceph
[13:44] * wogri (~wolf@nix.wogri.at) Quit ()
[13:44] * wogri (~wolf@nix.wogri.at) has joined #ceph
[13:44] * zhaochao (~zhaochao@124.202.190.2) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 31.6.0/20150331233809])
[13:44] * wogri (~wolf@nix.wogri.at) Quit ()
[13:44] * wogri (~wolf@nix.wogri.at) has joined #ceph
[13:45] * wogri (~wolf@nix.wogri.at) Quit ()
[13:45] * wogri (~wolf@nix.wogri.at) has joined #ceph
[13:51] <T1w> without anything but my intuition I'd be a bit worried about performance for MON and MDS instances as well as using erasure encoded pools on an atom processor
[13:52] * capri (~capri@212.218.127.222) Quit (Read error: Connection reset by peer)
[13:52] * haomaiwang (~haomaiwan@114.111.166.250) Quit (Remote host closed the connection)
[13:56] * ChrisHolcombe (~chris@static.c147-98.i01-5.onvol.net) has joined #ceph
[13:56] <kobazik> T1w: according to intel avaton c2750 benchmark, they are better than xeon e3-1220 v3 - http://www.servethehome.com/intel-atom-c2750-8-core-avoton-rangeley-benchmarks-fast-power/
[13:56] * Hemanth (~Hemanth@121.244.87.117) Quit (Ping timeout: 480 seconds)
[13:57] * rwheeler (~rwheeler@121.244.87.124) Quit (Quit: Leaving)
[13:57] * Nephyrin (~curtis864@tor-exit.xshells.net) has joined #ceph
[13:58] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[13:58] <T1w> kobazik: yes, but the e3s are still a lower class cpu - if I'd be building such a small cluster I'd go for a few e5s
[13:59] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) has joined #ceph
[13:59] <T1w> but then again - that's only my own personal opinion
[14:00] <kobazik> T1w: thanks for your opinion. trying to build something power and space efficient so I'm considering intel avaton or xeon-d at the moment
[14:01] <T1w> I can easily understand why, but I'd be worried about performance once you've filled data into it and need to repair a degraded cluster
[14:03] * kefu (~kefu@114.92.111.70) Quit (Max SendQ exceeded)
[14:04] * kefu (~kefu@114.92.111.70) has joined #ceph
[14:05] * kefu (~kefu@114.92.111.70) Quit (Max SendQ exceeded)
[14:05] * capri (~capri@212.218.127.222) has joined #ceph
[14:06] * kefu (~kefu@114.92.111.70) has joined #ceph
[14:07] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has joined #ceph
[14:07] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has left #ceph
[14:08] * lavalake_ (~lavalake@128.237.221.165) has joined #ceph
[14:08] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[14:10] * jdillaman (~jdillaman@pool-173-66-110-250.washdc.fios.verizon.net) has joined #ceph
[14:11] * kefu (~kefu@114.92.111.70) Quit (Max SendQ exceeded)
[14:11] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[14:11] * kefu (~kefu@114.92.111.70) has joined #ceph
[14:11] * ganders (~root@190.2.42.21) has joined #ceph
[14:12] * ganders (~root@190.2.42.21) Quit ()
[14:17] * pvh_sa (~pvh@uwcfw.uwc.ac.za) Quit (Ping timeout: 480 seconds)
[14:17] * rdas_ (~rdas@122.168.165.79) has joined #ceph
[14:20] * rdas (~rdas@122.168.205.106) Quit (Ping timeout: 480 seconds)
[14:20] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[14:21] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[14:22] * lalatenduM (~lalatendu@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:27] * rdas_ (~rdas@122.168.165.79) Quit (Quit: Leaving)
[14:27] * Nephyrin (~curtis864@9U1AAAHRK.tor-irc.dnsbl.oftc.net) Quit ()
[14:29] * danieagle (~Daniel@177.94.30.173) has joined #ceph
[14:30] * jeevan_ullas (~Deependra@114.143.38.213) Quit (Ping timeout: 480 seconds)
[14:31] * wschulze (~wschulze@cpe-74-73-11-233.nyc.res.rr.com) has joined #ceph
[14:33] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[14:38] * lalatenduM (~lalatendu@121.244.87.124) has joined #ceph
[14:40] <nc_ch> ( i know i am a bit late to answer
[14:41] * ChrisNBl_ (~ChrisNBlu@178.255.153.117) has joined #ceph
[14:42] <nc_ch> kobazik: the question is why you actually want to build a small cluster ... not judging, everyone has good ideas, so i can only ask for the actual why. ceph is great, and really good, but i think if you want to build relatively small storage units, a more classical approach is definitely more effective.
[14:43] <stalob> in my case i'm using a small cluster for testing ceph
[14:43] <stalob> but later there will be much bigger clusters
[14:44] <nc_ch> yes, well, as i said, a valid approach ...
[14:44] <T1w> I'm about to start with a 5-node cluster that's to be scaled up over the coming years (other physical locations etc etc), but still nothing expected to rise above 15-20 nodes in total
[14:46] <nc_ch> yes, at the moment i am at 6 nodes +3 separate mons, 114 osd's, 228 Tb
[14:46] * liiwi (liiwi@idle.fi) Quit (Ping timeout: 480 seconds)
[14:46] <T1w> my goal here is a replicated, fully failure-tolerant storage device, and I'm hoping cephFS will mature a bit more within the next year's time (eg. 2+ active MDSs, ability to have more than 1000 files in the same directory without bottlenecks etc etc)
[14:47] <nc_ch> i use only rbd's at the moment
[14:47] * joao (~joao@h-213.61.119.230.host.de.colt.net) has joined #ceph
[14:47] * b0e (~aledermue@213.95.25.82) has joined #ceph
[14:47] <T1w> at the moment we're looking at dumping 15 million+ files (only ~1.4GB of space) plus some other ~600,000 files (~1.2TB+ of space) into it
[14:48] <T1w> and remove a rather largish backup window from 2 other machines
[14:48] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Ping timeout: 480 seconds)
[14:48] <T1w> oh, and logfiles from a few systems generating 500GB+ logs per month
[14:49] * treenerd (~treenerd@85.193.140.98) Quit (Ping timeout: 480 seconds)
[14:49] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[14:49] <nc_ch> are you just storing the logfiles, or are you using rados to store it in objects ?
[14:50] <T1w> at the moment everything is on local storage on those 2 machines, but it's either that or a common filesystem of some sort - and then it's either NFS of some sort or home-made replication
[14:50] <T1w> at the moment just logfiles
[14:51] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[14:51] <nc_ch> if you are interested in logfile evaluation, logstash can use an s3 filestore, which rados is able to provide
[14:51] <T1w> but we're thinking of centralizing it (send it all to a syslog server), which then stores it in some way in ceph
[14:51] * sleinen1 (~Adium@2001:620:0:82::103) has joined #ceph
[14:51] <T1w> yeah, we're looking a bit at logstash also - mostly to use kibana and an elastic search frontend
[14:52] <T1w> for easier access
[14:52] * fsimonce (~simon@host129-29-dynamic.250-95-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[14:52] <nc_ch> that is something i will possibly look at, when some of my other machines will use ceph as backend ... right now three servers have it as backend
[14:52] <T1w> mmmm
[14:52] <nc_ch> yes. well, keep in mind that logstash can store it's stuff in s3
[14:52] <T1w> at the moment it's the storage that's important to us
[14:52] <nc_ch> same here
[14:53] <T1w> and the ability to cut off rather expensive TSM backup licenses on the 2 machines holding all that data
[14:53] <nc_ch> ^
[14:53] <nc_ch> ^^
[14:53] <T1w> a 32 core app-server costs a sh*tload just for TSM
[14:53] * vata (~vata@cable-21.246.173-197.electronicbox.net) Quit (Quit: Leaving.)
[14:53] <nc_ch> yeah
[14:53] <T1w> for a 5 year lifespan we use as much on TSM licenses as the HW cost to buy
[14:54] <janos> TSM = Tivoli?
[14:54] <T1w> if we pour all important data into ceph we can get by with just 1 node for everything and live with 2-3-4+ hour backup windows
[14:54] <T1w> janos: yes
[14:55] <janos> yeah getting into bed with IBM = $$$
[14:55] <T1w> janos: we've got enough data and few enough nodes that capacity licensing is a no go
[14:55] * kawa2014 (~kawa@80.71.49.3) Quit (Ping timeout: 480 seconds)
[14:55] <T1w> our current per-core licensing is almost a third of going over to capacity licensing
[14:55] <nc_ch> off topic ... another 360 pg's and the syncing is done ...
[14:56] * Shantanu (~shantanu@114.79.157.147) Quit (Quit: Leaving.)
[14:56] <T1w> well.. apart from the price we're quite pleased with TSM
[14:57] <janos> i would hope so when spending that ;)
[14:57] * Eman1 (~nartholli@104.207.154.59) has joined #ceph
[14:57] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) Quit (Ping timeout: 480 seconds)
[14:58] <T1w> I think we've got 10-12 nodes or so and almost 50TB storage that's 80% full atm
[14:58] <Be-El> tsm is a pain in the ass to setup correctly
[14:58] <T1w> really?
[14:58] <Be-El> these fsckers insist on bundling their own libssl and overriding system paths to use them
[14:58] * bearkitten (~bearkitte@cpe-76-167-204-192.san.res.rr.com) has joined #ceph
[14:58] <Be-El> which results in all applications using the bundled library
[14:59] <T1w> hm.. never seen that problem
[14:59] <Be-El> the first step after installing a new tsm client is removing all bundled libraries (after conversion of the rpms to debian packages)
[15:00] * liiwi (liiwi@idle.fi) has joined #ceph
[15:01] <T1w> heh
[15:01] <T1w> rhel here
[15:01] * lalatenduM (~lalatendu@121.244.87.124) Quit (Ping timeout: 480 seconds)
[15:01] <T1w> and 2 or 3 windows-based ones
[15:02] <nc_ch> i try to do whatever possible on debian ... ansys is a pita, so i have 2 centos and 2 rhel machines ...
[15:05] * vbellur (~vijay@121.244.87.124) has joined #ceph
[15:06] <T1w> hm..
[15:06] <T1w> the gsk* ssl packages for RHEL installs everything under /usr/local/ibm/gsk8_64
[15:07] * tupper (~tcole@2001:420:2280:1272:8900:f9b8:3b49:567e) has joined #ceph
[15:08] <T1w> and the postinstall scripts only symlink a few things to that and never touch system ssl libs
[15:08] <Be-El> T1w: do they add that directory to ldconfig (/etc/ld.so.conf) ?
[15:08] <T1w> nope
[15:09] <T1w> there's no mention of ld or LD in the scripts
[15:09] <T1w> (just checked)
[15:09] <Be-El> in that case it should be ok
[15:10] <T1w> it does a restorecon if the check for selinux returns true, but that's it
[15:10] <T1w> as I said.. never experienced that kind of problem (going back to some of the first v5 BA clients and up to the latest v7 ones..)
[15:11] <T1w> hm.. afk
[15:14] * joao (~joao@h-213.61.119.230.host.de.colt.net) Quit (Quit: leaving)
[15:15] * cok (~chk@2a02:2350:18:1010:f198:3120:2732:6ba2) has joined #ceph
[15:16] * bene (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[15:16] * yanzheng (~zhyan@171.216.95.139) Quit (Quit: This computer has gone to sleep)
[15:16] * harold (~hamiller@71-94-227-43.dhcp.mdfd.or.charter.com) has joined #ceph
[15:16] * kawa2014 (~kawa@80.71.49.3) has joined #ceph
[15:17] * yanzheng (~zhyan@171.216.95.139) has joined #ceph
[15:17] * fsimonce (~simon@host11-35-dynamic.32-79-r.retail.telecomitalia.it) has joined #ceph
[15:19] * pvh_sa (~pvh@41.164.8.114) has joined #ceph
[15:20] * jrankin (~jrankin@d53-64-170-236.nap.wideopenwest.com) has joined #ceph
[15:21] <pmatulis> is there a minimum size for an OSD? i'm getting strange results setting up a cluster using 10 GB block devices
[15:23] * harold (~hamiller@71-94-227-43.dhcp.mdfd.or.charter.com) Quit (Quit: Leaving)
[15:24] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:24] <Be-El> pmatulis: 10GB is not enough. the osd will have a weight of 0.0 (e.g. in ceph osd tree). this results in an unusable osd
[15:25] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) Quit (Quit: Leaving.)
[15:27] * Eman1 (~nartholli@7D0AAAHBP.tor-irc.dnsbl.oftc.net) Quit ()
[15:27] * Harryhy (~hoopy@h-213.61.149.100.host.de.colt.net) has joined #ceph
[15:31] <fghaas> pmatulis, Be-El: ceph osd crush reweight is your friend
[15:31] <Be-El> fghaas: pmatulis will probably run into the next problem with small osds (space occupied by journal file)
[15:32] <Be-El> afk, meeting
[15:32] <fghaas> default ceph-deploy journal size is 5g
[15:32] <fghaas> but I don't think this is a production cluster anyway
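A hedged aside on the journal point above: for throw-away test clusters with tiny OSDs the journal can be made smaller before the OSDs are created; the 1024 MB below is an arbitrary example value.
    [osd]
    osd journal size = 1024    ; MB -- ceph-deploy's default is 5120 per fghaas above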
[15:33] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[15:34] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[15:34] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[15:34] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has left #ceph
[15:35] * lavalake_ (~lavalake@128.237.221.165) Quit (Remote host closed the connection)
[15:38] * zerick (~zerick@irc.quassel.zerick.me) Quit (Remote host closed the connection)
[15:43] * lalatenduM (~lalatendu@121.244.87.124) has joined #ceph
[15:43] <pmatulis> Be-El, fghaas: this should be mentioned in the docs. i can't see it there
[15:44] <fghaas> pmatulis: the docs are really easy to edit and patch; I'm sure jwilkins would much appreciate the contribution
[15:45] <pmatulis> anyway, i did do the reweight thing and now i see that my 10 GB disks show up as 5 GB each. is this another consequence of having small disks or something else?
[15:47] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[15:51] <pmatulis> hmm, maybe the above is what Be-El mentioned? the missing space is due to the journal?
[15:52] <pmatulis> Be-El,fghaas: so what is the minimum then?
[15:54] <fghaas> There is no minimum. It's just that OSD weight is a float where 1T equals a weight of 1, and 5 GB (10G minus journal) is .005 where apparently .01 is the smallest amount crush can handle (never dug into the code base deep enough to find out the details).
[15:54] <fghaas> so just set your weight to 0.01 and you'll be fine
[15:55] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[15:56] * wizr1 (~Adium@180.166.92.36) Quit (Quit: Leaving.)
[15:56] <pmatulis> fghaas: alright
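A minimal sketch of fghaas' suggestion; the osd ids below are placeholders for whichever OSDs show up with a zero weight in the tree.
    ceph osd tree                          # too-small OSDs appear with a weight of 0
    ceph osd crush reweight osd.0 0.01     # 0.01 being the smallest weight CRUSH seems to accept
    ceph osd crush reweight osd.1 0.01     # repeat for each affected OSD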
[15:57] * Harryhy (~hoopy@9U1AAAHVK.tor-irc.dnsbl.oftc.net) Quit ()
[15:57] * SquallSeeD31 (~mollstam@ncc-1701-d.tor-exit.network) has joined #ceph
[15:59] <kblin> I take it that there's no debian jessie build on the ceph debian repo yet?
[16:01] * cok (~chk@2a02:2350:18:1010:f198:3120:2732:6ba2) Quit (Quit: Leaving.)
[16:01] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[16:05] <maurosr> morning folks, is there anywhere I can find the minimum recommended version of ceph to use with the next openstack release (kilo)?
[16:06] * oms101 (~oms101@h-213.61.119.230.host.de.colt.net) Quit (Ping timeout: 480 seconds)
[16:06] <pmatulis> http://tracker.ceph.com/issues/11500
[16:10] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:10] * haomaiwang (~haomaiwan@118.244.254.7) has joined #ceph
[16:11] * lavalake (~lavalake@128.237.221.165) has joined #ceph
[16:12] * pvh_sa (~pvh@41.164.8.114) Quit (Ping timeout: 480 seconds)
[16:12] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[16:13] * treenerd (~treenerd@77.119.133.19.wireless.dyn.drei.com) has joined #ceph
[16:13] * yanzheng (~zhyan@171.216.95.139) Quit (Quit: This computer has gone to sleep)
[16:20] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[16:22] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[16:24] * linjan (~linjan@195.110.41.9) Quit (Ping timeout: 480 seconds)
[16:27] * thomnico (~thomnico@AToulouse-654-1-380-32.w86-199.abo.wanadoo.fr) has joined #ceph
[16:27] * SquallSeeD31 (~mollstam@789AAABAK.tor-irc.dnsbl.oftc.net) Quit ()
[16:29] * wizr (~Adium@116.231.0.181) has joined #ceph
[16:32] * treenerd (~treenerd@77.119.133.19.wireless.dyn.drei.com) Quit (Ping timeout: 480 seconds)
[16:32] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[16:36] * thomnico (~thomnico@AToulouse-654-1-380-32.w86-199.abo.wanadoo.fr) Quit (Quit: Ex-Chat)
[16:36] * kawa2014 (~kawa@80.71.49.3) Quit (Ping timeout: 480 seconds)
[16:38] * fsckstix (~stevpem@124-149-146-76.dyn.iinet.net.au) has joined #ceph
[16:40] * gregmark (~Adium@68.87.42.115) has joined #ceph
[16:40] * dgurtner_ (~dgurtner@178.197.231.213) Quit (Ping timeout: 480 seconds)
[16:40] * bkopilov (~bkopilov@bzq-109-65-112-98.red.bezeqint.net) Quit (Read error: Connection reset by peer)
[16:40] * thomnico (~thomnico@AToulouse-654-1-380-32.w86-199.abo.wanadoo.fr) has joined #ceph
[16:41] * alram (~alram@67.159.191.98) has joined #ceph
[16:45] * alram (~alram@67.159.191.98) Quit ()
[16:45] * bkopilov (~bkopilov@bzq-109-64-137-234.red.bezeqint.net) has joined #ceph
[16:46] * bangfoo (~bangfoo@2605:6000:f247:a01:1d6:dba3:de49:4871) has joined #ceph
[16:47] * alram (~alram@67.159.191.98) has joined #ceph
[16:48] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has joined #ceph
[16:48] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has left #ceph
[16:49] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has joined #ceph
[16:49] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) Quit ()
[16:49] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has joined #ceph
[16:50] * kawa2014 (~kawa@static.c147-98.i01-5.onvol.net) has joined #ceph
[16:50] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has left #ceph
[16:55] * evilrob00 (~evilrob00@cpe-72-179-3-209.austin.res.rr.com) has joined #ceph
[16:55] * alram (~alram@67.159.191.98) Quit (Quit: leaving)
[16:57] * Lyncos (~lyncos@208.71.184.41) has left #ceph
[17:02] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit (Quit: Leaving.)
[17:02] * vbellur (~vijay@122.167.227.24) has joined #ceph
[17:04] <pmatulis> i'm following the docs [1] where it says use pg_num of 512 if you have between 5 and 10 OSDs. i have 9 so that's what i did. then i see this:
[17:04] <pmatulis> " Error E2BIG: specified pg_num 512 is too large (creating 448 new PGs on ~9 OSDs exceeds per-OSD max of 32) "
[17:04] <pmatulis> [1]: http://ceph.com/docs/master/rados/operations/placement-groups/
[17:05] * analbeard (~shw@support.memset.com) has left #ceph
[17:09] * reed (~reed@75-101-54-131.dsl.static.fusionbroadband.com) has joined #ceph
[17:09] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:10] * antoine (~bourgault@192.93.37.4) Quit (Remote host closed the connection)
[17:11] * yanzheng (~zhyan@171.216.95.139) has joined #ceph
[17:11] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has joined #ceph
[17:12] <Kdecherf> hello world
[17:13] * lovejoy (~lovejoy@213.83.69.6) has joined #ceph
[17:13] <Kdecherf> is there any way to restrict the mount of a cephfs subfolder to a given client?
[17:14] * yanzheng (~zhyan@171.216.95.139) Quit ()
[17:16] * kawa2014 (~kawa@static.c147-98.i01-5.onvol.net) Quit (Quit: Leaving)
[17:21] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has left #ceph
[17:21] * Harryhy (~elt@tor-exit.eecs.umich.edu) has joined #ceph
[17:22] * Harryhy (~elt@8BXAAACCA.tor-irc.dnsbl.oftc.net) Quit ()
[17:24] <Tume|Sai> pmatulis: you need to increase the amount of pg's in 32 increments at that time
[17:25] <Tume|Sai> pmatulis: it's basically that you can increase by half of what it now has
[17:26] * shang (~ShangWu@80.71.49.3) Quit (Ping timeout: 480 seconds)
[17:29] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[17:34] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:34] <rkeene> Hmm -- I did not know there was a certain rate at which pgs could be increased :-(
[17:35] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:36] * kefu_ (~kefu@114.92.111.70) has joined #ceph
[17:36] * kefu (~kefu@114.92.111.70) Quit (Read error: No route to host)
[17:36] * rotbeard (~redbeard@x5f74d5c6.dyn.telefonica.de) Quit (Quit: Leaving)
[17:37] <pmatulis> Tume|Sai: not sure what you're saying. there simply appears to be an error in the docs. just multiply the # of OSDs by 32. that will give the max # of PGs to use
[17:39] <Tume|Sai> yes, but you can only increase the amount by half of the current amount
[17:39] <Tume|Sai> first increase by 32, then by 32 then 64 etc
[17:39] <Tume|Sai> that is because if someone accidentally puts like 10000, then you have to do some serious work to fix it
[17:40] * BranchPredictor (branch@predictor.org.pl) has joined #ceph
[17:40] <BranchPredictor> is the bluejeans working?
[17:40] <Tume|Sai> and to prevent that, you are only allowed to increase it a bit at a time
[17:44] <pmatulis> Tume|Sai: well i went from 64 to 288 no problem. according to your rules that should have errored out (must use 96?)
[17:45] * lavalake (~lavalake@128.237.221.165) Quit (Remote host closed the connection)
[17:46] * xinxinsh (~xinxinsh@192.102.204.38) has joined #ceph
[17:46] * reed (~reed@75-101-54-131.dsl.static.fusionbroadband.com) Quit (Quit: Ex-Chat)
[17:46] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[17:46] <Tume|Sai> oh well, then I was wrong. The point is still the same, you cannot increase it too much
[17:46] <Tume|Sai> at once I mean
[17:46] <Tume|Sai> maybe 32*9 = 288
[17:47] <pmatulis> bingo
[17:47] <Tume|Sai> ;)
[17:49] <pmatulis> there is some logic in going for the max number of PGs when building the cluster. this will give you some wiggle room when adding OSDs later. you won't necessarily need to increase PGs, thereby saving data redistribution across the cluster
[17:50] <pmatulis> that's my simpleton thinking anyway
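A hedged sketch of the stepwise increase discussed above, assuming pmatulis' 9 OSDs and a pool named rbd (any pool name works).
    ceph osd pool get rbd pg_num          # current value (64 in this case)
    ceph osd pool set rbd pg_num 288      # at most 32 new PGs per OSD per step: 9 * 32 = 288
    ceph osd pool set rbd pgp_num 288     # raise pgp_num too so data actually rebalances
    # repeat with further steps if a larger final pg_num is wanted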
[17:57] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[17:59] * lovejoy (~lovejoy@213.83.69.6) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[18:00] * bandrus (~brian@50.23.113.232) has joined #ceph
[18:07] * kefu_ (~kefu@114.92.111.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[18:08] * lalatenduM (~lalatendu@121.244.87.124) Quit (Ping timeout: 480 seconds)
[18:08] * madkiss (~madkiss@2001:6f8:12c3:f00f:98e4:e8d4:40cd:8593) has joined #ceph
[18:14] <doppelgrau> Is there any news on how long it will take until current ceph packages (not the old ones in jessie) are available?
[18:14] * kobazik (~oftc-webi@91.194.158.21) Quit (Quit: Page closed)
[18:14] * daniel2_ (~daniel@cpe-24-28-6-151.austin.res.rr.com) has joined #ceph
[18:18] * Fapiko (~offer@fenix.nullbyte.me) has joined #ceph
[18:21] * xinxinsh (~xinxinsh@192.102.204.38) Quit (Remote host closed the connection)
[18:22] * Pablo (~pcaruana@213.175.37.10) Quit (Quit: Leaving)
[18:22] * wizr (~Adium@116.231.0.181) Quit (Quit: Leaving.)
[18:24] * sleinen1 (~Adium@2001:620:0:82::103) Quit (Ping timeout: 480 seconds)
[18:30] * elder_ (~elder@104.135.1.105) has joined #ceph
[18:31] * ChrisNBl_ (~ChrisNBlu@178.255.153.117) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[18:31] * davidz (~davidz@2605:e000:1313:8003:45ed:de:a599:4708) Quit (Quit: Leaving.)
[18:35] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[18:35] * Sysadmin88 (~IceChat77@054527d3.skybroadband.com) has joined #ceph
[18:36] * oms101 (~oms101@88.128.80.104) has joined #ceph
[18:39] * reed (~reed@75-101-54-131.dsl.static.fusionbroadband.com) has joined #ceph
[18:40] * zerick (~zerick@irc.quassel.zerick.me) has joined #ceph
[18:40] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[18:41] * appaji (~appaji@122.172.42.251) has joined #ceph
[18:41] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[18:48] * Fapiko (~offer@8Q4AAAHE5.tor-irc.dnsbl.oftc.net) Quit ()
[18:49] <appaji> Why would I get a 500 server error when I try to delete an object via radosgw
[18:49] <appaji> ?
[18:50] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[18:50] <alfredodeza> appaji: that can happen when Apache times out
[18:50] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[18:50] <alfredodeza> if the request is taking too long
[18:50] * cookednoodles (~eoin@89-93-153-201.hfc.dyn.abo.bbox.fr) has joined #ceph
[18:52] * sleinen1 (~Adium@2001:620:0:82::100) has joined #ceph
[18:52] * _s1gma (~Thononain@exit2.telostor.ca) has joined #ceph
[18:57] <appaji> alfredodeza: I am using civetweb
[18:57] <appaji> debug_rgw 20 and debug_ms 5 logs -> http://pastebin.com/73tRnzqN
[18:58] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:01] <alfredodeza> hrmn
[19:01] <alfredodeza> the request says it completed
[19:01] <alfredodeza> appaji: if you ask for the object again is it gone?
[19:01] * ChrisHolcombe (~chris@static.c147-98.i01-5.onvol.net) Quit (Ping timeout: 480 seconds)
[19:03] * xcezzz (~Adium@pool-100-3-14-19.tampfl.fios.verizon.net) has joined #ceph
[19:03] * xcezzz1 (~Adium@pool-100-3-14-19.tampfl.fios.verizon.net) Quit (Read error: Connection reset by peer)
[19:07] <appaji> I am able to find the object in the bucket and also get it.
[19:07] * branto (~borix@ip-213-220-214-203.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[19:09] * marcan (marcan@marcansoft.com) Quit (Quit: ZNC - http://znc.in)
[19:09] * marcan (marcan@marcansoft.com) has joined #ceph
[19:14] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[19:14] * hellertime (~Adium@72.246.0.14) Quit (Quit: Leaving.)
[19:14] * Kupo1 (~tyler.wil@23.111.254.159) has joined #ceph
[19:16] * sleinen2 (~Adium@2001:620:0:82::102) has joined #ceph
[19:16] * linjan (~linjan@213.8.240.146) has joined #ceph
[19:16] <appaji> alfredodeza: I am able to find the object in the bucket and also retrieve the data, the delete doesn't go through.
[19:17] * BManojlovic (~steki@cable-89-216-175-119.dynamic.sbb.rs) has joined #ceph
[19:17] <alfredodeza> this sounds like a bug
[19:17] <alfredodeza> appaji: what is the name of the object you want to delete? /cephs3test1117/myobjects2645 ?
[19:17] <appaji> alfredodeza: looking at the code now to figure out why I am getting the error "ondisk = -95 ((95) Operation not supported)"
[19:18] <appaji> yes.
[19:18] <appaji> alfredodeza: yes
[19:20] <appaji> alfredodeza: unfortunately, I don't know how I can reproduce this if I start all over again.
[19:21] * sleinen1 (~Adium@2001:620:0:82::100) Quit (Ping timeout: 480 seconds)
[19:21] * MentalRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[19:22] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:22] * _s1gma (~Thononain@5NZAAB9U5.tor-irc.dnsbl.oftc.net) Quit ()
[19:27] * datagutt (~WedTM@192.42.116.16) has joined #ceph
[19:27] * lovejoy (~lovejoy@cpc69388-oxfd28-2-0-cust415.4-3.cable.virginm.net) has joined #ceph
[19:27] * jordanP (~jordan@213.215.2.194) Quit (Quit: Leaving)
[19:30] <alfredodeza> appaji: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-June/040845.html
[19:31] <alfredodeza> it looks like you need to define the erasure coded pool
[19:31] <alfredodeza> maybe it is related?
[19:31] * datagutt (~WedTM@5NZAAB9VU.tor-irc.dnsbl.oftc.net) Quit ()
[19:32] <appaji> alfredodeza: no, I found that message when I searched. There are no erasure coded pools in my tests, just replicated.
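One way to dig a little further (a sketch that reuses the bucket/object names from the paste above; radosgw-admin is run on the gateway host):

    radosgw-admin bucket stats --bucket=cephs3test1117                        # which placement/pool the bucket uses
    radosgw-admin object stat --bucket=cephs3test1117 --object=myobjects2645  # rgw's view of the object
    ceph osd dump | grep pool                                                 # confirm the data pool really is replicated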
[19:35] * sleinen2 (~Adium@2001:620:0:82::102) Quit (Ping timeout: 480 seconds)
[19:39] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[19:41] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[19:41] * martin__ (~martin@212.224.70.43) has joined #ceph
[19:41] * martin__ is now known as Mave
[19:42] * ngoswami (~ngoswami@121.244.87.116) Quit ()
[19:43] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[19:43] <Mave> Hi, I have a huge problem during a ceph rebuild. Client IO from Qemu/KVM is blocked and VMs won't work. I already tried to set client and recovery priority as well as threads, but so far client IO is near 0.
[19:45] <Mave> ceph version 0.94.1 (e4bfad3a3c51054df7e537a724c8d0bf9be972ff) - 60 OSDs on 7 Servers with 5 MONs
[19:46] <Mave> Errors like [WRN] 100 slow requests, 1 included below; oldest blocked for > 2460.755936 secs or [WRN] slow request 960.413902 seconds old, received at 2015-04-29 19:28:10.902857: osd_op(client.7248275.0:231919109 rbd_data.6ec223238e1f29.0000000000001ce0 [set-alloc-hint object_size 4194304 write_size 4194304,write 1626112~4096] 5.f0b88e0e RETRY=1 ack+ondisk+retry+write+redirected+known_if_redirected e12242) currently reached_pg
[19:47] * Rosenbluth (~Maza@85.25.9.11) has joined #ceph
[19:48] * xarses (~andreww@12.164.168.117) has joined #ceph
[19:51] * pvh_sa (~pvh@105.210.174.253) has joined #ceph
[19:52] <Mave> ceph tell osd causes additional problems like "Error EINTR: problem getting command descriptions from osd.X"
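When ceph tell times out like that, the admin socket on the OSD's own host usually still answers. A minimal sketch for poking at the blocked requests, assuming the packaged default socket path and a hypothetical osd.12:

    # run these on the host that carries the OSD in question
    ceph daemon osd.12 dump_ops_in_flight     # requests currently stuck inside the OSD
    ceph daemon osd.12 dump_historic_ops      # recently completed slow requests
    ceph health detail | grep blocked         # cluster-wide list of OSDs with slow requests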
[20:00] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[20:01] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[20:02] * georgem (~Adium@fwnat.oicr.on.ca) Quit ()
[20:03] * elder_ (~elder@104.135.1.105) Quit (Ping timeout: 480 seconds)
[20:04] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[20:05] * georgem (~Adium@fwnat.oicr.on.ca) Quit ()
[20:08] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[20:09] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[20:12] <appaji> In the ceph pg calculator at http://ceph.com/pgcalc/, is the OSD count the eventual OSD count i.e. after (future) increase in osds?
[20:12] <appaji> Or would that be the initial OSD count?
[20:13] <appaji> I want to set the target pgs per OSD to 300 i.e. the cluster may grow three times its current size.
[20:17] * Rosenbluth (~Maza@2WVAAB4XS.tor-irc.dnsbl.oftc.net) Quit ()
[20:18] * oms101 (~oms101@88.128.80.104) Quit (Ping timeout: 480 seconds)
[20:18] <doppelgrau> appaji: If I understand correctly the current OSD count
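For what it's worth, the number the calculator produces is roughly the formula below, and the target-PGs-per-OSD field is where expected growth gets encoded. A sketch with assumed inputs (a hypothetical 60-OSD cluster and one 3-replica pool holding essentially all the data):

    osds=60; target_pgs_per_osd=300; replicas=3
    raw=$(( osds * target_pgs_per_osd / replicas ))                               # 6000
    pg_num=1; while [ "$pg_num" -lt "$raw" ]; do pg_num=$(( pg_num * 2 )); done   # round up to a power of two
    echo "pg_num = $pg_num"                                                       # 8192 for these inputs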
[20:19] * davidzlap (~Adium@mobile-166-176-58-254.mycingular.net) has joined #ceph
[20:20] * xarses (~andreww@12.164.168.117) has joined #ceph
[20:22] * tom (~tom@167.88.45.146) Quit (Remote host closed the connection)
[20:22] * tom (~tom@167.88.45.146) has joined #ceph
[20:22] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[20:25] * lavalake (~lavalake@c-98-239-240-118.hsd1.pa.comcast.net) has joined #ceph
[20:26] <xcezzz> Mave: what is the load on your OSD hosts?
[20:26] * jordanP (~jordan@her75-2-78-193-36-209.fbxo.proxad.net) has joined #ceph
[20:27] <Mave> load average: 11.95, 14.96, 15.37
[20:27] <Mave> on the new host
[20:27] <Mave> cpu usage is below 30%
[20:28] <xcezzz> we have had a similar issue, I am assuming you added all OSDs in the system at the same time?
[20:28] * bandrus (~brian@50.23.113.232) Quit (Ping timeout: 480 seconds)
[20:28] <Mave> yes ;/
[20:29] <xcezzz> so ya… unless these are the beefiest of the beef boxes that definitely killed our client io…
[20:29] <SpaceDump> Hm, why add that many at once? :p
[20:29] <xcezzz> how we got ourselves out of the io storm
[20:29] <SpaceDump> That's just asking about trouble imho. :D
[20:29] <xcezzz> ceph osd set noin
[20:30] <xcezzz> ceph osd out osd.#
[20:30] <Mave> SpaceDump: because its just another server
[20:30] <xcezzz> on all but 1 or maybe 2 OSDs
[20:30] * Concubidated (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[20:30] * Kioob (~Kioob@200.254.0.109.rev.sfr.net) has joined #ceph
[20:30] <xcezzz> lol 'just another server'… but just 'a bunch more OSDs'
[20:31] <xcezzz> basically don't let ceph bring the new OSDs in on its own; mark them as out so only one gets filled at a time
[20:31] <Mave> if I mark them out now, will the already placed data be replaced again?
[20:32] <xcezzz> whats your replaced/degraded %
[20:32] <xcezzz> s/replaced/misplaced
[20:32] <Mave> degraded 0.000% misplaced 27.356%
[20:33] * bitserker1 (~toni@88.87.194.130) Quit (Ping timeout: 480 seconds)
[20:33] <xcezzz> while yes you would have to 'replace' some of the data… im assuming its only gotten through some 3% or so… but what is the priority: rebuilding the whole thing now, or clients being able to use it?
[20:34] <Mave> no it got down from ~40% misplaced
[20:35] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[20:35] * shaunm (~shaunm@74.215.76.114) Quit (Ping timeout: 480 seconds)
[20:35] <xcezzz> actually do ceph osd set noup and just down them
[20:36] <xcezzz> i think that will still keep the data placement the same just delay the backfill to them
[20:37] <xcezzz> but ya the IO storm generated from a whole server of OSDs is just insane… not just on the box you added but on all the other boxes as well since they have to move data too… but its less data/stormy for one OSD at a time
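A rough sketch of the workflow being described, assuming the new host's OSDs already exist and their IDs (hypothetically 60 through 65 here) are known; noin stops freshly started OSDs from being taken in automatically, and they are then brought in one at a time:

    ceph osd set noin                                     # new OSDs come up but stay out, so no data is placed
    for id in 61 62 63 64 65; do ceph osd out $id; done   # park everything except the first one
    ceph osd in 60                                        # let one OSD backfill
    # wait for the cluster to return to active+clean, then repeat with the next id
    ceph osd in 61
    ceph osd unset noin                                   # drop the flag once everything is in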
[20:40] * MACscr (~Adium@2601:d:c800:de3:bd37:169e:edd:d92d) has joined #ceph
[20:41] * bandrus (~brian@223.sub-70-211-65.myvzw.com) has joined #ceph
[20:41] * thomnico (~thomnico@AToulouse-654-1-380-32.w86-199.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[20:42] <Mave> ok, I did that and now its 7% degraded and 27% misplaced - IT WORKED! Thanks ;)
[20:42] <Mave> what the heck, that is bullshit ;(
[20:43] * lalatenduM (~lalatendu@122.172.41.122) has joined #ceph
[20:44] <xcezzz> what the whole io storm?
[20:45] <Mave> no but its a bit better, some VMs are now responding, slow but better
[20:47] <xcezzz> ya.. the first time we added another server with 6 OSDs… it just basically DoSed us… thankfully production wasn't a lot of users yet…
[20:51] <xcezzz> but when you think about how ceph works/stores its stuff it makes sense… if you have 16 OSDs at 10GB on each and you need to add 4 OSDs… then you have to move like 2GB around on each OSD at the SAME time… otherwise with just one OSD it's a couple hundred megs per OSD you are moving around… we went from 18 to 24 and were screwed for a day, it was some 300GB on each OSD at the time
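The arithmetic behind that example, roughly: CRUSH moves about the share of data that the new OSDs represent of the new total. A quick check with the numbers above:

    osds_before=16; per_osd_gb=10; osds_added=4
    total_gb=$(( osds_before * per_osd_gb ))                                # 160 GB stored
    moved_gb=$(( total_gb * osds_added / (osds_before + osds_added) ))      # ~32 GB ends up on the new OSDs
    echo "~$(( moved_gb / osds_before )) GB leaves each existing OSD"       # ~2 GB, as in the example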
[20:51] * Cybert1nus (~Cybertinu@cybertinus.customer.cloud.nl) has joined #ceph
[20:52] <appaji> and now I am hitting http://tracker.ceph.com/issues/7598
[20:52] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) Quit (Read error: Connection reset by peer)
[20:53] <nwf> Hey gang; is there a plan / timeline for offering jessie packages in http://ceph.com/debian-hammer/ ?
[20:53] <Mave> uhm, we have about 1 TB per OSD in use, but still, I don't think that good software should be unresponsive during backfill/recovery
[20:54] <SpaceDump> When I have my cluster I think I'll add one OSD at a time when I expand stuff.
[20:54] <nwf> Mave: FWIW, we had to aggressively adjust downwards our backfill limits in ceph.conf to keep the system responsive during such things.
[20:54] <xcezzz> mave: if you set the backfill limits prior to doing it its not bad
[20:55] <xcezzz> also… its kind of good practice to slowly add OSDs… either through reweighting them up to full weight gradually or one OSD at a time
[20:55] <xcezzz> what if you add a whole server and now find out the server has issues?
[20:56] <xcezzz> you also push the current drives with data to work way harder for a shorter period taxing them much more
[20:56] * nils_ (~nils@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[20:57] * rotbeard (~redbeard@2a02:908:df10:d300:6267:20ff:feb7:c20) has joined #ceph
[20:58] * Cybert1nus is now known as Cybertinus
[20:58] <xcezzz> SpaceDump: good idea lol!
[21:00] * nils_ (~nils@doomstreet.collins.kg) has joined #ceph
[21:00] <Mave> nwf: which settings should we adjust?
[21:00] <xcezzz> ceph has been a great learning experience… it has its little oddities/nuances but for something as resilient as it is… for free… that runs amazing… and has SEENT SOME SHIT… it is still working, no data lost
[21:02] <xcezzz> osd max backfills
[21:02] <xcezzz> osd client op priority
[21:02] <xcezzz> osd recovery op priority
[21:04] <xcezzz> this is what i use to change the priorities on the fly
[21:04] <xcezzz> http://pastebin.com/W9a46yt3
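The gist of a script like that (a sketch, not the actual contents of the pastebin) is injecting the three options above into every running OSD so recovery yields to client traffic; the values below are assumptions, tune to taste:

    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-op-priority 1 --osd-client-op-priority 63'
    # to make it stick across restarts, put the same settings in ceph.conf under [osd]:
    #   osd max backfills = 1
    #   osd recovery op priority = 1
    #   osd client op priority = 63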
[21:05] * MRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[21:05] * MentalRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Read error: Connection reset by peer)
[21:05] * alram (~alram@172.56.39.210) has joined #ceph
[21:06] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[21:09] * oms101 (~oms101@p20030057EA1EC700EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[21:10] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[21:10] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit ()
[21:10] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[21:10] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit ()
[21:10] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[21:10] * linjan (~linjan@213.8.240.146) Quit (Ping timeout: 480 seconds)
[21:11] * davidzlap (~Adium@mobile-166-176-58-254.mycingular.net) Quit (Read error: No route to host)
[21:12] * sleinen1 (~Adium@2001:620:0:82::102) has joined #ceph
[21:12] * cookednoodles (~eoin@89-93-153-201.hfc.dyn.abo.bbox.fr) Quit (Quit: Ex-Chat)
[21:12] * shaunm (~shaunm@74.215.76.114) has joined #ceph
[21:13] * MRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Quit: This computer has gone to sleep)
[21:16] * alram (~alram@172.56.39.210) Quit (Ping timeout: 480 seconds)
[21:16] * MRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[21:17] <Mave> the clientIO.sh is what we had set to get the cluster back to work
[21:18] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[21:20] <xcezzz> ya.. its mainly the max backfills… that defines how many placement groups a single OSD would try to handle backfilling
[21:21] <xcezzz> but at 10x per OSD in say a 12 drive box… is a LOT of pgs to be dealing with… you will drain all resources real quick if your hardware is not capable of handling that
[21:22] * Helleshin (~KrimZon@178-175-139-141.ip.as43289.net) has joined #ceph
[21:24] <immesys> I can't get ceph-deploy to install, it's getting a 404 on http://ceph.com/debian-hammer/dists/utopic/main/binary-i386/Packages
[21:25] <alfredodeza> immesys: there aren't any builds for utopic
[21:25] <immesys> figured as much, can I make it use an LTS package?
[21:25] <immesys> it'll probably work
[21:25] * alram (~alram@172.56.39.210) has joined #ceph
[21:26] * danieagle (~Daniel@177.94.30.173) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[21:26] * lalatenduM (~lalatendu@122.172.41.122) Quit (Ping timeout: 480 seconds)
[21:27] <xcezzz> lol
[21:27] <xcezzz> i like those last words
[21:27] <immesys> well this is not for production, I just need ceph on my laptop before I get on a plane so I can do some dev work
[21:29] <Mave> xcezzz: thanks for your help and have a nice day/night
[21:30] <immesys> is there any way to get ceph-deploy to try to use trusty packages instead?
[21:31] * Mave (~martin@212.224.70.43) Quit (Quit: Leaving)
[21:32] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[21:33] <xcezzz> theres a flag to make it not modify the repos i think
[21:33] <_Tassadar> hi, i remember reading about a change in CRUSH algorithm in Hammer, that I should manually enable. Now that I've upgraded to Hammer I can't find what setting that was
[21:33] <alfredodeza> immesys: `ceph-deploy install --help` should point you at the right spot
[21:33] <_Tassadar> it was disabled by default to prevent excessive data migration right after upgrading
[21:33] <xcezzz> --no-adjust-repos
[21:34] <immesys> xcezzz: thanks I will try that
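One way that tends to work on a distro without packages (a sketch, not an officially supported path): point apt at the trusty build by hand, then tell ceph-deploy to leave the repos alone. The hostname is a placeholder, and the ceph release key still has to be in apt's keyring:

    echo 'deb http://ceph.com/debian-hammer/ trusty main' | sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt-get update
    ceph-deploy install --no-adjust-repos mylaptop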
[21:34] <_Tassadar> does this ring a bell?
[21:34] * derjohn_mob (~aj@88.128.80.237) has joined #ceph
[21:34] * alram (~alram@172.56.39.210) Quit (Ping timeout: 480 seconds)
[21:37] <xcezzz> CRUSH improvements: We have added a new straw2 bucket algorithm that reduces the amount of data migration required when changes are made to the cluster.
[21:37] <xcezzz> that?
[21:37] <_Tassadar> yeah i think that is it, but is it true that i have to enable that explicitly?
[21:38] * Rickus__ (~Rickus@office.protected.ca) Quit (Read error: No route to host)
[21:38] <xcezzz> i assume so… probably a crush decompile, change algo to 'straw2' and compile/import new map
[21:39] <xcezzz> i don't have any experience with that.. still on giant and not sure i wanna upgrade just yet.. but i would make with the googles and figure out what its all about now that you know what it is
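For the record, the decompile/edit/recompile route being guessed at looks roughly like this (a sketch; switching bucket algorithms does move some data, and every client, kernel ones included, has to understand straw2 first):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    sed -i 's/alg straw$/alg straw2/' crushmap.txt     # flip the bucket algorithm in the text map
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new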
[21:40] <_Tassadar> i can't find much on it tbh
[21:40] <_Tassadar> maybe i'll just wait a bit for other ppl to experiment with the new features first ;)
[21:40] <_Tassadar> the upgrade went smoothly btw
[21:41] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[21:41] <xcezzz> hmm cool… a week or two ago i just saw people with problems so my jimmies got a bit rustled
[21:42] <_Tassadar> yeah i skipped 0.94.0, went straight to 0.94.1 and so far no probs
[21:43] * dyasny (~dyasny@104.158.24.163) has joined #ceph
[21:43] * rendar (~I@host165-19-dynamic.3-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[21:44] * appaji (~appaji@122.172.42.251) Quit (Quit: Adios amigos.)
[21:45] <visbits> yay for bugs
[21:45] <visbits> scuttle|afk
[21:46] <visbits> http://pastebin.com/RQW65hkL
[21:47] * rendar (~I@host165-19-dynamic.3-79-r.retail.telecomitalia.it) has joined #ceph
[21:48] <xcezzz> ya i know i will have to do an upgrade eventually.. i did a release upgrade in our original test deployment… just now that we are in production im just being cautious… maybe just lazy tho
[21:51] * shohn (~shohn@ip-88-152-195-208.hsi03.unitymediagroup.de) has joined #ceph
[21:52] * Helleshin (~KrimZon@8Q4AAAHM9.tor-irc.dnsbl.oftc.net) Quit ()
[21:53] * marrusl (~mark@173.231.115.58) has joined #ceph
[22:05] * jordanP (~jordan@her75-2-78-193-36-209.fbxo.proxad.net) Quit (Ping timeout: 480 seconds)
[22:06] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[22:16] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[22:18] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[22:20] * zaitcev (~zaitcev@2001:558:6001:10:61d7:f51f:def8:4b0f) has joined #ceph
[22:22] * Bonzaii (~Bobby@8BXAAACMY.tor-irc.dnsbl.oftc.net) has joined #ceph
[22:22] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Quit: WeeChat 1.1.1)
[22:24] <kklimonda> hey, when I read about putting the journal on ssd, my understanding is that I should have an ssd:hdd ratio such that each ssd can handle the total write throughput of all the hdds that are using it as a journal?
[22:26] <kklimonda> but that's something like 4 or 5 hdds per ssd, and that's only assuming I'll buy ssds that are fast enough - looking at specs, for example something like the s3610 400GB - which seems to be a waste, given that the journals themselves are going to be comparatively small
[22:27] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[22:27] <kklimonda> so I guess my question is, I'll most likely hit a bottleneck before that with the setup I have (supermicro 4u servers with 33 3TB disks). Any pointers on how I could benchmark them to see how much putting the journal on ssds will speed up write times?
[22:37] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Remote host closed the connection)
[22:38] * derjohn_mob (~aj@88.128.80.237) Quit (Ping timeout: 480 seconds)
[22:39] * MRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Quit: This computer has gone to sleep)
[22:39] <xcezzz> well its going to be hard to offer the best recommendation for something like that… when we benched a drive on its own… ceph writes objects to our 2TB seagate 7200.14s at 50MB/s individually using ceph tell bench… western digital blacks are the same speed… that is with the journal on the same spindle...
[22:40] <xcezzz> you may saturate your network before you come close to saturating your aggregate drive throughput on regular read/writes…
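For before/after numbers once journals move to SSD, the usual starting point is something like this (pool name and OSD id are placeholders, and rados bench needs a pool it may scribble in):

    ceph tell osd.0 bench                              # per-OSD write bench, 1 GB in 4 MB chunks by default
    rados bench -p testpool 60 write --no-cleanup      # 60 s of object writes through the whole stack
    rados bench -p testpool 60 seq                     # sequential reads of what was just written
    rados -p testpool cleanup                          # remove the benchmark objects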
[22:40] * fireD (~fireD@93-138-198-160.adsl.net.t-com.hr) Quit (Quit: Lost terminal)
[22:40] <fghaas> kklimonda: at 33 spinners per OSD node, ditch the idea of running SSDs for your journals
[22:41] <fghaas> you'd need about 8-9 SSDs to be useful
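That 8-9 figure is plain throughput math: with journals on SSD every write hits the journal first, so one SSD has to absorb the combined write rate of the spinners behind it. A rough check with assumed per-device rates:

    spinners=33; hdd_write_mbs=100; ssd_write_mbs=400       # assumed sustained sequential write rates
    per_ssd=$(( ssd_write_mbs / hdd_write_mbs ))            # ~4 spinners per SSD
    echo "$(( (spinners + per_ssd - 1) / per_ssd )) SSDs"   # rounds up to 9 for 33 spinners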
[22:41] * MRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[22:41] <xcezzz> ya.. because first of all if you're gonna share an SSD as journal for spinners… just remember if you lose the SSD for any reason you lose all the OSDs using it as a journal, pretty much
[22:42] <xcezzz> kklimonda: what networking are you planning for?
[22:42] * ChrisHolcombe (~chris@static.c147-98.i01-5.onvol.net) has joined #ceph
[22:45] * jdillaman (~jdillaman@pool-173-66-110-250.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[22:45] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[22:46] * sjmtest (uid32746@id-32746.uxbridge.irccloud.com) has joined #ceph
[22:47] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[22:47] * jdillaman (~jdillaman@pool-173-66-110-250.washdc.fios.verizon.net) has joined #ceph
[22:48] <kklimonda> xcezzz: right now I have DP 10gbe nic, but I'm thinking about getting second one
[22:49] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[22:49] <kklimonda> I guess it all depends on the benchmarking I do, to see where are bottlenecks
[22:50] <kklimonda> we did "inherit" a lot of supermicro servers from a storage testing project
[22:51] <kklimonda> 10 of them, I'd like to use 7 of them for ceph
[22:52] <kklimonda> we'll have two 10gbe switches in the rack, and i'd like to connect each server to both (hence extra nic so I still have dedicated network for ceph)
[22:52] * Bonzaii (~Bobby@8BXAAACMY.tor-irc.dnsbl.oftc.net) Quit ()
[22:52] * smf68 (~storage@8BXAAACNO.tor-irc.dnsbl.oftc.net) has joined #ceph
[22:55] <kklimonda> so now I'm wondering what the bottleneck is going to be - network, cpus, hdd IO, lsi controller..
[22:57] * bene (~ben@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[22:59] * tupper (~tcole@2001:420:2280:1272:8900:f9b8:3b49:567e) Quit (Ping timeout: 480 seconds)
[23:06] * alram (~alram@216.9.110.1) has joined #ceph
[23:09] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) Quit (Ping timeout: 480 seconds)
[23:10] * MRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Quit: This computer has gone to sleep)
[23:10] * alram (~alram@216.9.110.1) Quit ()
[23:14] * Kioob (~Kioob@200.254.0.109.rev.sfr.net) Quit (Quit: Leaving.)
[23:15] * MRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[23:20] * elder_ (~elder@67.159.191.98) has joined #ceph
[23:21] * lovejoy (~lovejoy@cpc69388-oxfd28-2-0-cust415.4-3.cable.virginm.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[23:22] * smf68 (~storage@8BXAAACNO.tor-irc.dnsbl.oftc.net) Quit ()
[23:22] * Enikma (~Gibri@37.187.129.166) has joined #ceph
[23:24] * MRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Quit: This computer has gone to sleep)
[23:26] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[23:28] * davidz (~davidz@2605:e000:1313:8003:d86:7a88:c668:568f) has joined #ceph
[23:30] * marrusl (~mark@173.231.115.58) Quit (Remote host closed the connection)
[23:30] * derjohn_mob (~aj@tmo-110-154.customers.d1-online.com) has joined #ceph
[23:33] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[23:33] * georgem (~Adium@184.151.178.52) has joined #ceph
[23:33] * georgem (~Adium@184.151.178.52) Quit ()
[23:33] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[23:35] * dyasny (~dyasny@104.158.24.163) Quit (Remote host closed the connection)
[23:37] * shohn (~shohn@ip-88-152-195-208.hsi03.unitymediagroup.de) Quit (Ping timeout: 480 seconds)
[23:38] * jrankin (~jrankin@d53-64-170-236.nap.wideopenwest.com) Quit (Quit: Leaving)
[23:39] * MACscr (~Adium@2601:d:c800:de3:bd37:169e:edd:d92d) Quit (Quit: Leaving.)
[23:39] * fsimonce (~simon@host11-35-dynamic.32-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[23:40] * MRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[23:42] * elder_ (~elder@67.159.191.98) Quit (Ping timeout: 480 seconds)
[23:52] * Enikma (~Gibri@8BXAAACOB.tor-irc.dnsbl.oftc.net) Quit ()
[23:52] * Xylios (~ZombieTre@herngaard.torservers.net) has joined #ceph
[23:54] * lovejoy (~lovejoy@cpc69388-oxfd28-2-0-cust415.4-3.cable.virginm.net) has joined #ceph
[23:55] * lovejoy (~lovejoy@cpc69388-oxfd28-2-0-cust415.4-3.cable.virginm.net) Quit ()

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.