#ceph IRC Log

IRC Log for 2015-08-10

Timestamps are in GMT/BST.

[0:00] * mog_ (~Uniju@5NZAAF6XS.tor-irc.dnsbl.oftc.net) Quit ()
[0:00] * hyst (~Unforgive@104.255.64.26) has joined #ceph
[0:06] * sleinen1 (~Adium@2001:620:0:69::101) Quit (Ping timeout: 480 seconds)
[0:10] * OutOfNoWhere (~rpb@199.68.195.101) has joined #ceph
[0:13] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[0:15] * ivotron (~ivotron@c-67-169-145-20.hsd1.ca.comcast.net) has joined #ceph
[0:15] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[0:16] * OutOfNoWhere (~rpb@199.68.195.101) Quit (Remote host closed the connection)
[0:21] * fdmanana (~fdmanana@bl13-145-124.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[0:21] * Vacuum__ (~Vacuum@88.130.198.184) has joined #ceph
[0:28] * Vacuum_ (~Vacuum@i59F79668.versanet.de) Quit (Ping timeout: 480 seconds)
[0:30] * hyst (~Unforgive@9S0AADBOW.tor-irc.dnsbl.oftc.net) Quit ()
[0:30] * cooey (~lmg@mail.calyx.com) has joined #ceph
[0:53] * dgbaley27 (~matt@ucb-np1-206.colorado.edu) has joined #ceph
[0:58] * neurodrone (~neurodron@162.243.191.67) has joined #ceph
[1:00] * cooey (~lmg@9S0AADBP3.tor-irc.dnsbl.oftc.net) Quit ()
[1:00] * Blueraven (~zviratko@atlantic480.us.unmetered.com) has joined #ceph
[1:05] * dgbaley27 (~matt@ucb-np1-206.colorado.edu) Quit (Quit: Leaving.)
[1:10] * ivotron (~ivotron@c-67-169-145-20.hsd1.ca.comcast.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[1:30] * Blueraven (~zviratko@5NZAAF60Y.tor-irc.dnsbl.oftc.net) Quit ()
[1:30] * Crisco (~Tarazed@spftor1e1.privacyfoundation.ch) has joined #ceph
[1:34] * adrian15b (~kvirc@189.Red-83-32-56.dynamicIP.rima-tde.net) Quit (Ping timeout: 480 seconds)
[1:35] <neurodrone> Has anyone attempted to build librados on OSX?
[1:41] * LeaChim (~LeaChim@host86-132-233-87.range86-132.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[1:46] * fdmanana (~fdmanana@bl13-145-124.dsl.telepac.pt) has joined #ceph
[1:52] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[1:53] * oms101 (~oms101@p20030057EA245D00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:57] * dgbaley27 (~matt@c-67-176-93-83.hsd1.co.comcast.net) has joined #ceph
[2:00] * Crisco (~Tarazed@9S0AADBR3.tor-irc.dnsbl.oftc.net) Quit ()
[2:01] * Tenk (~SEBI@171.25.193.27) has joined #ceph
[2:02] * oms101 (~oms101@p20030057EA083E00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[2:03] * fdmanana (~fdmanana@bl13-145-124.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[2:08] * haomaiwang (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[2:08] * haomaiwang (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) has joined #ceph
[2:08] * Nacer (~Nacer@37.160.124.248) Quit (Ping timeout: 480 seconds)
[2:30] * Tenk (~SEBI@9S0AADBST.tor-irc.dnsbl.oftc.net) Quit ()
[2:30] * Tralin|Sleep (~Pettis@static-ip-85-25-103-119.inaddr.ip-pool.com) has joined #ceph
[2:38] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[2:42] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[2:50] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[3:00] * Tralin|Sleep (~Pettis@7R2AADMUW.tor-irc.dnsbl.oftc.net) Quit ()
[3:01] * dux0r (~Snowman@luxemburg.gtor.org) has joined #ceph
[3:08] * sankarshan (~sankarsha@122.171.109.159) has joined #ceph
[3:10] * shyu (~Shanzhi@119.254.120.66) has joined #ceph
[3:14] * neurodrone (~neurodron@162.243.191.67) Quit (Quit: neurodrone)
[3:15] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[3:15] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[3:16] * lucas1 (~Thunderbi@218.76.52.64) Quit (Remote host closed the connection)
[3:20] * scuttle|afk is now known as scuttlemonkey
[3:23] * zhaochao (~zhaochao@125.39.8.225) has joined #ceph
[3:28] * neurodrone (~neurodron@pool-100-1-89-227.nwrknj.fios.verizon.net) has joined #ceph
[3:30] * yanzheng (~zhyan@125.71.106.169) has joined #ceph
[3:30] * dux0r (~Snowman@5NZAAF64B.tor-irc.dnsbl.oftc.net) Quit ()
[3:31] * mr_flea (~Sigma@89.105.194.87) has joined #ceph
[3:42] * tuanlq5 (~tuanlq@210.245.31.7) Quit (Remote host closed the connection)
[3:51] * OutOfNoWhere (~rpb@199.68.195.101) has joined #ceph
[4:00] * mr_flea (~Sigma@5NZAAF65M.tor-irc.dnsbl.oftc.net) Quit ()
[4:00] * yuastnav (~Grum@162.247.72.216) has joined #ceph
[4:05] * OutOfNoWhere (~rpb@199.68.195.101) Quit (Remote host closed the connection)
[4:08] * OutOfNoWhere (~rpb@199.68.195.101) has joined #ceph
[4:15] * ivotron (~ivotron@c-67-169-145-20.hsd1.ca.comcast.net) has joined #ceph
[4:20] * flisky (~Thunderbi@106.39.60.34) has joined #ceph
[4:30] * shang (~ShangWu@175.41.48.77) has joined #ceph
[4:30] * yuastnav (~Grum@7R2AADMXR.tor-irc.dnsbl.oftc.net) Quit ()
[4:30] * totalwormage (~Nanobot@tor-exit.brmlab.cz) has joined #ceph
[4:37] * OutOfNoWhere (~rpb@199.68.195.101) Quit (Remote host closed the connection)
[4:40] * vbellur (~vijay@122.172.46.181) Quit (Ping timeout: 480 seconds)
[4:44] * joshd (~jdurgin@ip-64-134-128-17.public.wayport.net) has joined #ceph
[5:00] * totalwormage (~Nanobot@5NZAAF669.tor-irc.dnsbl.oftc.net) Quit ()
[5:08] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[5:18] * yanzheng (~zhyan@125.71.106.169) Quit (Quit: This computer has gone to sleep)
[5:20] * yanzheng (~zhyan@125.71.106.169) has joined #ceph
[5:30] * Spikey (~Shesh@tor.t-3.net) has joined #ceph
[5:31] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Ping timeout: 480 seconds)
[5:42] * neurodrone (~neurodron@pool-100-1-89-227.nwrknj.fios.verizon.net) Quit (Quit: neurodrone)
[5:46] * dec (~dec@ec2-54-66-50-124.ap-southeast-2.compute.amazonaws.com) Quit (Max SendQ exceeded)
[5:46] * dec (~dec@ec2-54-66-50-124.ap-southeast-2.compute.amazonaws.com) has joined #ceph
[5:55] * Vacuum_ (~Vacuum@i59F7915E.versanet.de) has joined #ceph
[6:00] * Spikey (~Shesh@5NZAAF68T.tor-irc.dnsbl.oftc.net) Quit ()
[6:00] * capitalthree (~Arfed@5NZAAF69R.tor-irc.dnsbl.oftc.net) has joined #ceph
[6:01] * Vacuum__ (~Vacuum@88.130.198.184) Quit (Ping timeout: 480 seconds)
[6:07] * flisky (~Thunderbi@106.39.60.34) Quit (Quit: flisky)
[6:09] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) Quit (Quit: WeeChat 1.2)
[6:22] * dgurtner (~dgurtner@178.197.231.115) has joined #ceph
[6:22] * derjohn_mobi (~aj@tmo-101-179.customers.d1-online.com) has joined #ceph
[6:25] * capitalthree (~Arfed@5NZAAF69R.tor-irc.dnsbl.oftc.net) Quit (Remote host closed the connection)
[6:25] * thundercloud (~redbeast1@tor-exit.gansta93.com) has joined #ceph
[6:26] * derjohn_mob (~aj@tmo-112-163.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[6:38] * amote (~amote@121.244.87.116) has joined #ceph
[6:39] * amote (~amote@121.244.87.116) Quit (Remote host closed the connection)
[6:40] * amote (~amote@121.244.87.116) has joined #ceph
[6:40] * yanzheng (~zhyan@125.71.106.169) Quit (Quit: This computer has gone to sleep)
[6:44] * derjohn_mobi (~aj@tmo-101-179.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[6:52] * rdas (~rdas@106.221.129.4) has joined #ceph
[6:55] * thundercloud (~redbeast1@9S0AADBZ3.tor-irc.dnsbl.oftc.net) Quit ()
[6:55] * Shesh (~Pieman@torproxy01.31173.se) has joined #ceph
[7:04] * kanagaraj (~kanagaraj@117.216.100.156) has joined #ceph
[7:05] * kanagaraj (~kanagaraj@117.216.100.156) Quit ()
[7:08] * ira (~ira@121.244.87.124) has joined #ceph
[7:19] * ivotron (~ivotron@c-67-169-145-20.hsd1.ca.comcast.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[7:19] * rwheeler (~rwheeler@pool-173-48-214-9.bstnma.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[7:25] * Shesh (~Pieman@9S0AADB0U.tor-irc.dnsbl.oftc.net) Quit ()
[7:25] * Architect (~Qiasfah@mail.calyx.com) has joined #ceph
[7:29] * rwheeler (~rwheeler@pool-173-48-214-9.bstnma.fios.verizon.net) has joined #ceph
[7:35] * overclk (~overclk@121.244.87.124) has joined #ceph
[7:48] * derjohn_mobi (~aj@88.128.80.177) has joined #ceph
[7:55] * Architect (~Qiasfah@5NZAAF7BW.tor-irc.dnsbl.oftc.net) Quit ()
[8:13] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[8:13] * Nacer (~Nacer@37.163.167.123) has joined #ceph
[8:14] * dopesong (~dopesong@88-119-94-55.static.zebra.lt) has joined #ceph
[8:16] * joshd (~jdurgin@ip-64-134-128-17.public.wayport.net) Quit (Quit: Leaving.)
[8:16] * dopesong_ (~dopesong@lb1.mailer.data.lt) has joined #ceph
[8:22] * dopesong (~dopesong@88-119-94-55.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[8:24] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[8:25] * ylmson (~nupanick@171.25.193.25) has joined #ceph
[8:33] * Nacer (~Nacer@37.163.167.123) Quit (Ping timeout: 480 seconds)
[8:37] * derjohn_mobi (~aj@88.128.80.177) Quit (Ping timeout: 480 seconds)
[8:54] * yanzheng (~zhyan@125.71.106.169) has joined #ceph
[8:55] * ylmson (~nupanick@7R2AADM4O.tor-irc.dnsbl.oftc.net) Quit ()
[8:55] * shohn (~shohn@dslb-188-102-025-247.188.102.pools.vodafone-ip.de) has joined #ceph
[8:55] * brianjjo (~sese_@192.99.2.137) has joined #ceph
[8:56] * yuan (~yzhou67@192.102.204.38) has joined #ceph
[8:56] * rendar (~I@host118-186-dynamic.21-87-r.retail.telecomitalia.it) has joined #ceph
[9:05] * kefu (~kefu@114.92.110.67) has joined #ceph
[9:09] * espeer_ (~quassel@phobos.isoho.st) Quit (Ping timeout: 480 seconds)
[9:09] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[9:09] <Be-El> hi
[9:10] * zhaochao_ (~zhaochao@125.39.8.233) has joined #ceph
[9:13] * linjan (~linjan@195.91.236.115) has joined #ceph
[9:16] * zhaochao (~zhaochao@125.39.8.225) Quit (Ping timeout: 480 seconds)
[9:16] * zhaochao_ is now known as zhaochao
[9:21] * th0m (~tom@static-qvn-qvu-164067.business.bouyguestelecom.com) has joined #ceph
[9:22] * derjohn_mobi (~aj@fw.gkh-setu.de) has joined #ceph
[9:23] * b0e (~aledermue@213.95.25.82) has joined #ceph
[9:23] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[9:24] * shaunm (~shaunm@dhcp-235-069.nomad.chalmers.se) has joined #ceph
[9:24] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[9:25] * brianjjo (~sese_@5NZAAF7E2.tor-irc.dnsbl.oftc.net) Quit ()
[9:25] * CydeWeys (~datagutt@spftor1e1.privacyfoundation.ch) has joined #ceph
[9:26] * espeer (~quassel@phobos.isoho.st) has joined #ceph
[9:26] * analbeard (~shw@support.memset.com) has joined #ceph
[9:28] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[9:29] * kefu (~kefu@114.92.110.67) has joined #ceph
[9:32] * bara (~bara@213.175.37.10) has joined #ceph
[9:32] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[9:33] * kefu (~kefu@114.92.110.67) has joined #ceph
[9:33] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[9:36] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[9:37] * kefu (~kefu@114.92.110.67) has joined #ceph
[9:42] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[9:44] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[9:45] * fsimonce (~simon@host93-234-dynamic.252-95-r.retail.telecomitalia.it) has joined #ceph
[9:46] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[9:47] * kefu (~kefu@114.92.110.67) has joined #ceph
[9:47] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[9:49] * branto (~branto@ip-213-220-232-132.net.upcbroadband.cz) has joined #ceph
[9:50] * owasserm (~owasserm@nat-pool-ams-t.redhat.com) has joined #ceph
[9:51] * Nacer (~Nacer@LCaen-656-1-72-185.w80-13.abo.wanadoo.fr) has joined #ceph
[9:53] * kmARC (~kmARC@apn-94-44-253-253.vodafone.hu) has joined #ceph
[9:54] * kmARC_ (~kmARC@apn-94-44-253-253.vodafone.hu) has joined #ceph
[9:54] * adrian15b (~kvirc@196.Red-88-16-103.dynamicIP.rima-tde.net) has joined #ceph
[9:55] * CydeWeys (~datagutt@5NZAAF7FV.tor-irc.dnsbl.oftc.net) Quit ()
[9:55] * Ian2128 (~Atomizer@4.tor.exit.babylon.network) has joined #ceph
[10:00] * nisha (~nisha@2406:5600:2a:cfc7:5098:7f67:1eca:93) has joined #ceph
[10:02] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[10:03] * kefu (~kefu@114.92.110.67) has joined #ceph
[10:03] <th0m> hi everybody
[10:03] <th0m> I have a strange problem on my ceph cluster; last week I:
[10:03] <th0m> • Added 5 new osd-nodes to my cluster
[10:03] <th0m> • Added 5 new monitors
[10:03] <th0m> • Removed 3 monitors
[10:03] <th0m> • Upgraded from firefly to hammer.
[10:03] <th0m> When I stop some nodes (the 5 new nodes), the OSDs are not seen as down; all OSDs are still up and I have many blocked requests (aka everything is broken)...
[10:03] <th0m> I have posted to the mailing list (http://article.gmane.org/gmane.comp.file-systems.ceph.user/22551) but no response so far :-)
[10:04] <th0m> Any idea?
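[Editor's note] The symptom th0m describes — stopped OSDs never being marked down — often comes down to the monitor's failure-report quorum: an OSD is only marked down once enough distinct peer OSDs report it unreachable (`mon osd min down reporters`). If the new nodes cannot reach the old ones, reports never accumulate. A minimal sketch of that logic, as a simplified model and not Ceph's actual implementation (names here are illustrative):

```python
# Simplified model of the monitor's "mark OSD down" decision:
# an OSD is declared down only after failure reports arrive from
# at least `min_reporters` *distinct* peer OSDs. Duplicate reports
# from the same reporter do not count twice.

class DownTracker:
    def __init__(self, min_reporters=2):
        self.min_reporters = min_reporters
        self.reports = {}  # target osd id -> set of reporter osd ids

    def report_failure(self, reporter, target):
        """Record one failure report; return True once target should be marked down."""
        peers = self.reports.setdefault(target, set())
        peers.add(reporter)
        return len(peers) >= self.min_reporters

tracker = DownTracker(min_reporters=2)
assert tracker.report_failure(reporter=1, target=7) is False  # one report: not enough
assert tracker.report_failure(reporter=1, target=7) is False  # same reporter again: still one
assert tracker.report_failure(reporter=2, target=7) is True   # second distinct reporter
```

If this is the cause, blocked requests with OSDs stuck "up" would persist until the reporting quorum is reachable again (or the stale OSDs are marked down manually).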
[10:05] * mattch (~mattch@pcw3047.see.ed.ac.uk) Quit (Quit: Leaving.)
[10:06] * garphy`aw is now known as garphy
[10:07] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[10:08] * kefu (~kefu@114.92.110.67) has joined #ceph
[10:09] * kmARC_ (~kmARC@apn-94-44-253-253.vodafone.hu) Quit (Ping timeout: 480 seconds)
[10:10] * jordanP (~jordan@213.215.2.194) has joined #ceph
[10:10] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[10:11] * kefu (~kefu@114.92.110.67) has joined #ceph
[10:12] * Nacer (~Nacer@LCaen-656-1-72-185.w80-13.abo.wanadoo.fr) Quit (Remote host closed the connection)
[10:12] * kmARC (~kmARC@apn-94-44-253-253.vodafone.hu) Quit (Ping timeout: 480 seconds)
[10:14] * adrian15b (~kvirc@196.Red-88-16-103.dynamicIP.rima-tde.net) Quit (Ping timeout: 480 seconds)
[10:14] * Nacer (~Nacer@LCaen-656-1-72-185.w80-13.abo.wanadoo.fr) has joined #ceph
[10:14] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[10:16] * kefu (~kefu@114.92.110.67) has joined #ceph
[10:17] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:20] <th0m> nobody has had this kind of problem?
[10:22] * sankarshan (~sankarsha@122.171.109.159) Quit (Quit: Are you sure you want to quit this channel (Cancel/Ok) ?)
[10:23] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[10:25] * Ian2128 (~Atomizer@7R2AADM67.tor-irc.dnsbl.oftc.net) Quit ()
[10:25] * Solvius (~basicxman@46.29.248.238) has joined #ceph
[10:26] * kefu (~kefu@114.92.110.67) has joined #ceph
[10:29] * sugoruyo (~georgev@paarthurnax.esc.rl.ac.uk) has joined #ceph
[10:33] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[10:35] * brutuscat (~brutuscat@234.Red-79-151-98.dynamicIP.rima-tde.net) has joined #ceph
[10:40] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[10:41] * kefu (~kefu@114.92.110.67) has joined #ceph
[10:41] * brutuscat (~brutuscat@234.Red-79-151-98.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[10:42] * brutuscat (~brutuscat@234.Red-79-151-98.dynamicIP.rima-tde.net) has joined #ceph
[10:43] * overclk (~overclk@121.244.87.124) Quit (Quit: Leaving)
[10:45] * overclk (~overclk@121.244.87.117) has joined #ceph
[10:47] * overclk (~overclk@121.244.87.117) Quit ()
[10:47] * overclk (~overclk@121.244.87.117) has joined #ceph
[10:48] * Sysadmin88 (~IceChat77@2.125.96.238) Quit (Quit: A fine is a tax for doing wrong. A tax is a fine for doing well)
[10:48] <th0m> Another question: what is the activate.monmap file in the OSD directory (/var/lib/ceph/osd/ceph-xxx/activate.monmap)?
[10:55] * Solvius (~basicxman@5NZAAF7HS.tor-irc.dnsbl.oftc.net) Quit ()
[10:55] * tritonx (~homosaur@171.25.193.26) has joined #ceph
[11:00] * Nacer_ (~Nacer@LCaen-656-1-72-185.w80-13.abo.wanadoo.fr) has joined #ceph
[11:00] * Nacer (~Nacer@LCaen-656-1-72-185.w80-13.abo.wanadoo.fr) Quit (Read error: Connection reset by peer)
[11:00] * Alexjazz (~M@abi165.abisoft.spb.ru) Quit (Read error: Connection reset by peer)
[11:02] * nisha (~nisha@2406:5600:2a:cfc7:5098:7f67:1eca:93) Quit (Ping timeout: 480 seconds)
[11:02] * fdmanana (~fdmanana@bl13-145-124.dsl.telepac.pt) has joined #ceph
[11:03] * nisha (~nisha@2406:5600:2a:e795:353b:6e42:b01e:8e36) has joined #ceph
[11:08] * shylesh__ (~shylesh@121.244.87.118) has joined #ceph
[11:14] * nisha (~nisha@2406:5600:2a:e795:353b:6e42:b01e:8e36) Quit (Quit: Leaving)
[11:16] * joao (~joao@249.38.136.95.rev.vodafone.pt) has joined #ceph
[11:16] * ChanServ sets mode +o joao
[11:23] * dopesong (~dopesong@88-119-94-55.static.zebra.lt) has joined #ceph
[11:25] * tritonx (~homosaur@5NZAAF7IU.tor-irc.dnsbl.oftc.net) Quit ()
[11:25] * N3X15 (~jakekosbe@chomsky.torservers.net) has joined #ceph
[11:26] * mattch (~mattch@pcw3047.see.ed.ac.uk) has joined #ceph
[11:28] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[11:29] * adrian15b (~kvirc@btactic.ddns.jazztel.es) has joined #ceph
[11:29] * dopesong_ (~dopesong@lb1.mailer.data.lt) Quit (Read error: Connection reset by peer)
[11:39] * kawa2014 (~kawa@89.184.114.246) Quit (Read error: Connection reset by peer)
[11:39] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[11:49] * yanzheng (~zhyan@125.71.106.169) Quit (Quit: This computer has gone to sleep)
[11:50] * yanzheng (~zhyan@125.71.106.169) has joined #ceph
[11:54] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[11:54] * kefu (~kefu@114.92.110.67) has joined #ceph
[11:55] * N3X15 (~jakekosbe@7R2AADM9U.tor-irc.dnsbl.oftc.net) Quit ()
[11:55] * Diablodoct0r (~Mattress@ns330308.ip-37-187-119.eu) has joined #ceph
[12:01] * arbrandes (~arbrandes@179.97.155.77) has joined #ceph
[12:12] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[12:13] * kefu (~kefu@114.92.110.67) has joined #ceph
[12:15] * kefu is now known as kefu|afk
[12:17] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[12:24] * nisha (~nisha@2406:5600:2a:e795:353b:6e42:b01e:8e36) has joined #ceph
[12:25] * Diablodoct0r (~Mattress@9S0AADB9X.tor-irc.dnsbl.oftc.net) Quit ()
[12:25] * LorenXo (~Popz@freedom.ip-eend.nl) has joined #ceph
[12:26] * zhaochao (~zhaochao@125.39.8.233) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 38.1.0/20150711212448])
[12:26] * kefu|afk (~kefu@114.92.110.67) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[12:32] * nardial (~ls@dslb-178-006-188-098.178.006.pools.vodafone-ip.de) has joined #ceph
[12:32] * DRoBeR (~DRoBeR@246.255.117.91.static.mundo-r.com) has joined #ceph
[12:34] * kefu (~kefu@114.92.110.67) has joined #ceph
[12:34] * ira (~ira@121.244.87.124) Quit (Ping timeout: 480 seconds)
[12:46] * brutusca_ (~brutuscat@234.Red-79-151-98.dynamicIP.rima-tde.net) has joined #ceph
[12:46] * brutuscat (~brutuscat@234.Red-79-151-98.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[12:48] * nisha (~nisha@2406:5600:2a:e795:353b:6e42:b01e:8e36) Quit (Ping timeout: 480 seconds)
[12:50] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[12:50] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[12:54] * brutuscat (~brutuscat@234.Red-79-151-98.dynamicIP.rima-tde.net) has joined #ceph
[12:54] * brutusca_ (~brutuscat@234.Red-79-151-98.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[12:55] * LorenXo (~Popz@7R2AADNBE.tor-irc.dnsbl.oftc.net) Quit ()
[12:55] * Jyron (~DoDzy@freedom.ip-eend.nl) has joined #ceph
[12:58] * nisha (~nisha@2406:5600:2a:3e72:8435:19b8:6476:912e) has joined #ceph
[13:07] * nardial (~ls@dslb-178-006-188-098.178.006.pools.vodafone-ip.de) Quit (Remote host closed the connection)
[13:08] * ira (~ira@182.48.247.98) has joined #ceph
[13:13] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[13:14] * kefu (~kefu@114.92.110.67) has joined #ceph
[13:16] * ira (~ira@182.48.247.98) Quit (Ping timeout: 480 seconds)
[13:16] * yanzheng (~zhyan@125.71.106.169) Quit (Quit: This computer has gone to sleep)
[13:17] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[13:19] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[13:20] * kefu (~kefu@114.92.110.67) has joined #ceph
[13:21] * dopesong_ (~dopesong@lb1.mailer.data.lt) has joined #ceph
[13:21] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[13:21] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[13:25] * Jyron (~DoDzy@7R2AADNCE.tor-irc.dnsbl.oftc.net) Quit ()
[13:25] * anadrom (~Morde@176.10.99.206) has joined #ceph
[13:26] * shaunm (~shaunm@dhcp-235-069.nomad.chalmers.se) Quit (Ping timeout: 480 seconds)
[13:27] * dopesong (~dopesong@88-119-94-55.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[13:27] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[13:27] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[13:32] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[13:33] * kefu (~kefu@114.92.110.67) has joined #ceph
[13:37] * nisha (~nisha@2406:5600:2a:3e72:8435:19b8:6476:912e) Quit (Ping timeout: 480 seconds)
[13:37] * zenpac (~zenpac3@66.55.33.66) has joined #ceph
[13:38] * ToMiles (~ToMiles@nl9x.mullvad.net) Quit (Ping timeout: 480 seconds)
[13:41] * zacbri (~zacbri@2a01:e35:2e1e:a70:830:9531:3e1:5c3b) Quit (Ping timeout: 480 seconds)
[13:41] * zacbri (~zacbri@2a01:e35:2e1e:a70:a148:491b:ca5b:c2b) has joined #ceph
[13:44] * lucas1 (~Thunderbi@218.76.52.64) Quit (Read error: Connection reset by peer)
[13:46] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[13:49] * nisha (~nisha@2406:5600:25:6dd2:209f:4acb:aa18:a196) has joined #ceph
[13:49] * rdas (~rdas@106.221.129.4) Quit (Read error: Connection reset by peer)
[13:50] * neurodrone (~neurodron@pool-100-1-89-227.nwrknj.fios.verizon.net) has joined #ceph
[13:51] * lucas1 (~Thunderbi@218.76.52.64) Quit ()
[13:55] * anadrom (~Morde@9S0AADCCZ.tor-irc.dnsbl.oftc.net) Quit ()
[13:55] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Read error: Connection reset by peer)
[13:55] * CoMa (~Salamande@7R2AADND9.tor-irc.dnsbl.oftc.net) has joined #ceph
[13:57] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[13:57] <neurodrone> Any reason why https://github.com/ceph/ceph/blob/master/src/msg/simple/Pipe.cc#L954 would keep `addrbl.c_str()` empty?
[13:58] * sankarshan (~sankarsha@106.216.143.196) has joined #ceph
[13:58] * ToMiles (~ToMiles@medgents.ugent.be) has joined #ceph
[13:58] <neurodrone> I can confirm that the `do_sendmsg()` call before it is sending 'ceph 0v27' over.
[13:59] <neurodrone> This one https://github.com/ceph/ceph/blob/master/src/msg/simple/Pipe.cc#L944 I mean. And it is succeeding fine.
[14:00] * ToMiles (~ToMiles@medgents.ugent.be) Quit ()
[14:08] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[14:08] <kefu> neurodrone, i don't quite understand you
[14:09] <kefu> if tcp_read() returns successful, addrbl is not empty.
[14:10] <neurodrone> When I am trying to connect to my ceph cluster via librados my operation times out. `client mount timeout` defaulted to 5 mins but I changed it to 10 seconds. I tried finding out what the cause could be by increasing my debug levels and saw this:
[14:11] <neurodrone> http://p.defau.lt/?LaCyoYogWpYQAMbNGiP0Gw
[14:12] <neurodrone> I added some debugging logic around it to see what was being sent and received. And got this:
[14:13] <neurodrone> http://p.defau.lt/?bavG3LDaw9U0D765U_W2QA
[14:13] <neurodrone> `.cstr ; len 272` doesn't make any sense.
[14:13] <neurodrone> `tcp_read()` is succeeding as seen from the second code snippet.
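[Editor's note] The `.cstr ; len 272` output neurodrone pastes may be a logging artifact rather than an empty read: the 272-byte blob is a binary-encoded peer address, so if it begins with NUL bytes, printing it as a C string shows nothing even though all the bytes arrived. The sketch below (pure Python with stdlib sockets, not Ceph code) shows both the read-exactly-N-bytes loop that `tcp_read()` performs and why a leading NUL makes a full buffer print as empty:

```python
import socket

def read_exact(sock, n):
    """Read exactly n bytes, looping because recv() may return fewer per call."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed before %d bytes arrived" % n)
        buf += chunk
    return buf

a, b = socket.socketpair()
# A blob that *starts* with NUL bytes, like a binary-encoded address struct:
blob = b"\x00\x00\x01\x02" + b"x" * 268   # 272 bytes total
a.sendall(blob)
data = read_exact(b, len(blob))
assert len(data) == 272
# Interpreted as a C string, the leading NUL terminates it immediately:
assert data.split(b"\x00")[0] == b""
a.close(); b.close()
```

So a more reliable debug print here is the buffer length plus a hex dump of the first bytes, not `c_str()`.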
[14:15] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[14:18] <neurodrone> Not sure if this makes it more clear though?
[14:20] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[14:22] * kefu (~kefu@114.92.110.67) has joined #ceph
[14:23] <kefu> neurodrone, i am looking
[14:23] <neurodrone> Thank you!
[14:25] * CoMa (~Salamande@7R2AADND9.tor-irc.dnsbl.oftc.net) Quit ()
[14:25] * brianjjo (~Oddtwang@freedom.ip-eend.nl) has joined #ceph
[14:25] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[14:25] * shyu (~Shanzhi@119.254.120.66) Quit (Remote host closed the connection)
[14:26] * shyu (~Shanzhi@119.254.120.66) has joined #ceph
[14:30] * janos_ (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[14:30] * dopesong (~dopesong@lb1.mailer.data.lt) has joined #ceph
[14:30] * burley_ (~khemicals@cpe-98-28-239-78.cinci.res.rr.com) has joined #ceph
[14:30] * burley (~khemicals@cpe-98-28-239-78.cinci.res.rr.com) Quit (Read error: Connection reset by peer)
[14:30] * burley_ is now known as burley
[14:30] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[14:31] * kutija_ (~kutija@95.180.90.38) has joined #ceph
[14:31] * qybl_ (~foo@kamino.krzbff.de) has joined #ceph
[14:32] * fxmulder_ (~fxmulder@cpe-24-55-6-128.austin.res.rr.com) has joined #ceph
[14:32] * qybl (~foo@kamino.krzbff.de) Quit (Read error: Connection reset by peer)
[14:32] * rwheeler (~rwheeler@pool-173-48-214-9.bstnma.fios.verizon.net) Quit (Quit: Leaving)
[14:32] * jklare (~jklare@185.27.181.36) Quit (Ping timeout: 480 seconds)
[14:33] * dopesong_ (~dopesong@lb1.mailer.data.lt) Quit (Read error: Connection reset by peer)
[14:33] * sage (~quassel@2607:f298:6050:709d:c1f1:eb57:1854:ca74) Quit (Read error: Connection reset by peer)
[14:33] * georgem (~Adium@65-110-211-254.cpe.pppoe.ca) has joined #ceph
[14:33] * _kelv (sid73234@id-73234.highgate.irccloud.com) Quit (Ping timeout: 480 seconds)
[14:35] * ndru_ (~jawsome@104.236.94.35) has joined #ceph
[14:35] * _kelv (sid73234@highgate.irccloud.com) has joined #ceph
[14:35] * off_rhoden (~off_rhode@transit-86-181-132-209.redhat.com) Quit (Max SendQ exceeded)
[14:36] * sage (~quassel@2607:f298:6050:709d:a189:6c85:9ff1:cc15) has joined #ceph
[14:36] * ChanServ sets mode +o sage
[14:36] * jklare (~jklare@185.27.181.36) has joined #ceph
[14:36] * rendar (~I@host118-186-dynamic.21-87-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[14:36] * jcsp_ (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[14:36] * ska (~skatinolo@cpe-173-174-111-177.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[14:36] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[14:36] * fxmulder (~fxmulder@cpe-24-55-6-128.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[14:37] * ndru (~jawsome@00020819.user.oftc.net) Quit (Ping timeout: 480 seconds)
[14:37] * Kingrat (~shiny@2605:a000:161a:c022:3977:b9a6:ec69:a0de) Quit (Ping timeout: 480 seconds)
[14:37] * gleam (gleam@dolph.debacle.org) Quit (Ping timeout: 480 seconds)
[14:37] * ska (~skatinolo@cpe-173-174-111-177.austin.res.rr.com) has joined #ceph
[14:37] * hchen (~hchen@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[14:37] * kutija (~kutija@95.180.90.38) Quit (Ping timeout: 480 seconds)
[14:37] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[14:37] * Miouge_ (~Miouge@94.136.92.20) has joined #ceph
[14:37] * Miouge (~Miouge@94.136.92.20) Quit (Read error: Connection reset by peer)
[14:37] * Miouge_ is now known as Miouge
[14:37] * hchen (~hchen@nat-pool-bos-t.redhat.com) has joined #ceph
[14:37] * sw3 (sweaung@2400:6180:0:d0::66:100f) Quit (Ping timeout: 480 seconds)
[14:37] * jpieper (~josh@209-6-39-224.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[14:38] * Karcaw (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) has joined #ceph
[14:38] * Georgyo (~georgyo@shamm.as) has joined #ceph
[14:38] * off_rhoden (~off_rhode@transit-86-181-132-209.redhat.com) has joined #ceph
[14:39] * JohnPreston78_ (sid31393@id-31393.ealing.irccloud.com) has joined #ceph
[14:39] * gregmark1 (~Adium@68.87.42.115) has joined #ceph
[14:39] * cetex_ (~oskar@nadine.juza.se) has joined #ceph
[14:39] * mfa298_ (~mfa298@krikkit.yapd.net) has joined #ceph
[14:39] * cfreak200 (~cfreak200@host-109-236-144-27.nynex.de) has joined #ceph
[14:39] * Kingrat (~shiny@2605:a000:161a:c022:95ae:ef54:a01a:460f) has joined #ceph
[14:39] * babilen (~babilen@babilen.user.oftc.net) has joined #ceph
[14:40] * georgem (~Adium@65-110-211-254.cpe.pppoe.ca) Quit (Quit: Leaving.)
[14:40] * olc- (~olecam@93.184.35.82) has joined #ceph
[14:40] * aakso_ (aakso@hauki.tunkki.fi) has joined #ceph
[14:40] * getzburg_ (sid24913@id-24913.ealing.irccloud.com) has joined #ceph
[14:40] <babilen> Hi, are you guys aware of saltstack states/formulas for configuring ceph?
[14:40] <kefu> neurodrone, iiuc, you are writing a rados client with librados. the pipe (socket) reads 272 bytes, but c_str() is empty, which just does not make any sense.
[14:40] * snakamoto1 (~Adium@192.16.26.2) has joined #ceph
[14:40] * leseb_ (~leseb@81-64-215-19.rev.numericable.fr) Quit (Ping timeout: 480 seconds)
[14:40] * JohnPreston78 (sid31393@id-31393.ealing.irccloud.com) Quit (Ping timeout: 480 seconds)
[14:40] * jcsp_ (~jspray@82.71.16.249) has joined #ceph
[14:40] * championofcyrodi (~championo@50-205-35-98-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[14:40] * JohnPreston78_ is now known as JohnPreston78
[14:40] * getzburg (sid24913@id-24913.ealing.irccloud.com) Quit (Ping timeout: 480 seconds)
[14:40] * getzburg_ is now known as getzburg
[14:40] * snakamoto (~Adium@cpe-76-91-202-90.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[14:40] * Karcaw_ (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) Quit (Ping timeout: 480 seconds)
[14:40] * jpieper (~josh@209-6-39-224.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com) has joined #ceph
[14:40] * devicenull (sid4013@id-4013.ealing.irccloud.com) Quit (Ping timeout: 480 seconds)
[14:40] * darkfader (~floh@88.79.251.60) Quit (Ping timeout: 480 seconds)
[14:40] * scalability-junk (sid6422@id-6422.ealing.irccloud.com) Quit (Ping timeout: 480 seconds)
[14:40] * ccourtaut_ (~ccourtaut@2001:41d0:1:eed3::1) Quit (Ping timeout: 480 seconds)
[14:40] * espeer_ (~quassel@phobos.isoho.st) has joined #ceph
[14:41] * gleam (gleam@dolph.debacle.org) has joined #ceph
[14:41] * tchmnkyz (tchmnkyz@box.techmonkeyz.net) has joined #ceph
[14:41] * lae_ (~lae@soleil.lae.is) has joined #ceph
[14:41] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[14:41] * sw3 (sweaung@2400:6180:0:d0::66:100f) has joined #ceph
[14:41] * mntasauri_ (~motesorri@192.73.232.107) has joined #ceph
[14:41] * kefu (~kefu@114.92.110.67) has joined #ceph
[14:41] * rektide (~rektide@eldergods.com) Quit (Ping timeout: 480 seconds)
[14:41] * jidar (~jidar@r2d2.fap.me) Quit (Ping timeout: 480 seconds)
[14:41] * nisha (~nisha@2406:5600:25:6dd2:209f:4acb:aa18:a196) Quit (Ping timeout: 480 seconds)
[14:41] * zenpac (~zenpac3@66.55.33.66) Quit (Ping timeout: 480 seconds)
[14:41] * mfa298 (~mfa298@krikkit.yapd.net) Quit (Ping timeout: 480 seconds)
[14:41] * tchmnkyz is now known as Guest1424
[14:41] * wayneseguin (~wayneeseg@mp64.overnothing.com) has joined #ceph
[14:42] * cetex (~oskar@nadine.juza.se) Quit (Ping timeout: 480 seconds)
[14:42] * Guest1362 (tchmnkyz@box.techmonkeyz.net) Quit (Read error: Connection reset by peer)
[14:42] * mza (~adam@metis.fscker.com) Quit (Ping timeout: 480 seconds)
[14:42] * fsimonce` (~simon@host93-234-dynamic.252-95-r.retail.telecomitalia.it) has joined #ceph
[14:42] <kefu> neurodrone, but is it possible there are some bytes left from the last read?
[14:42] * Meths_ (~meths@2.25.223.245) has joined #ceph
[14:42] * darkfader (~floh@88.79.251.60) has joined #ceph
[14:42] * Larsen_ (~andreas@larsen.pl) has joined #ceph
[14:42] <kefu> probably you could dig into Pipe a little bit.
[14:42] * liewegas (~quassel@2607:f298:6050:709d:a189:6c85:9ff1:cc15) has joined #ceph
[14:42] <kefu> as it is just a thin wrapper around socket.
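[Editor's note] kefu's hypothesis — leftover bytes from an earlier read shifting everything that follows — is easy to illustrate with a toy length-prefixed protocol (pure Python; this is not Ceph's wire format, just a model of stream misalignment):

```python
import io
import struct

def read_frame(stream):
    """Read one length-prefixed frame: 4-byte big-endian length, then payload."""
    hdr = stream.read(4)
    (n,) = struct.unpack(">I", hdr)
    return stream.read(n)

payload = b"hello"
wire = struct.pack(">I", len(payload)) + payload

# Correctly aligned stream: we recover the payload.
assert read_frame(io.BytesIO(wire)) == b"hello"

# One stray unconsumed byte before the frame shifts the length field:
misaligned = b"\x00" + wire
frame = read_frame(io.BytesIO(misaligned))
assert frame == b""   # the length was parsed from the wrong offset
```

A single byte left behind by a previous read is enough to make every subsequent parse nonsense, which matches the "full buffer that decodes to nothing" symptom.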
[14:42] * loicd_ (~loicd@cmd179.fsffrance.org) has joined #ceph
[14:42] * snerd_ (~motk@2600:3c00::f03c:91ff:fedb:2295) has joined #ceph
[14:43] * DrewBeer_ (~DrewBeer@216.152.240.203) has joined #ceph
[14:43] * mza (~adam@metis.fscker.com) has joined #ceph
[14:43] * devicenull (sid4013@ealing.irccloud.com) has joined #ceph
[14:43] * beardom__ (~sma310@207-172-244-241.c3-0.atw-ubr5.atw.pa.cable.rcn.com) has joined #ceph
[14:43] * masterpe_ (~masterpe@2a01:670:400::43) has joined #ceph
[14:43] * wayneeseguin (~wayneeseg@mp64.overnothing.com) Quit (Ping timeout: 480 seconds)
[14:43] * cfreak201 (~cfreak200@host-109-236-144-27.nynex.de) Quit (Ping timeout: 480 seconds)
[14:43] * fsimonce (~simon@host93-234-dynamic.252-95-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[14:43] * loicd (~loicd@cmd179.fsffrance.org) Quit (Ping timeout: 480 seconds)
[14:43] * superbeer (~MSX@studiovideo.org) Quit (Ping timeout: 480 seconds)
[14:43] * dgbaley27 (~matt@c-67-176-93-83.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[14:43] * Georgyo_ (~georgyo@shamm.as) Quit (Ping timeout: 480 seconds)
[14:43] * espeer (~quassel@phobos.isoho.st) Quit (Ping timeout: 480 seconds)
[14:43] * aakso (aakso@hauki.tunkki.fi) Quit (Ping timeout: 480 seconds)
[14:43] * arbrandes (~arbrandes@179.97.155.77) Quit (Ping timeout: 480 seconds)
[14:43] * wayneseguin is now known as wayneeseguin
[14:43] * dgurtner (~dgurtner@178.197.231.115) Quit (Ping timeout: 480 seconds)
[14:43] * beardo_ (~sma310@207-172-244-241.c3-0.atw-ubr5.atw.pa.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[14:43] * olc-_ (~olecam@93.184.35.82) Quit (Ping timeout: 480 seconds)
[14:43] * lae (~lae@soleil.lae.is) Quit (Ping timeout: 480 seconds)
[14:43] * Larsen (~andreas@www.larsen.pl) Quit (Ping timeout: 480 seconds)
[14:43] * masterpe (~masterpe@2a01:670:400::43) Quit (Read error: Connection reset by peer)
[14:43] * fam_away (~famz@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[14:43] * Meths (~meths@2.25.223.245) Quit (Remote host closed the connection)
[14:43] * sage (~quassel@2607:f298:6050:709d:a189:6c85:9ff1:cc15) Quit (Read error: Connection reset by peer)
[14:43] * loicd_ is now known as loicd
[14:43] * rektide (~rektide@eldergods.com) has joined #ceph
[14:43] * snerd (~motk@2600:3c00::f03c:91ff:fedb:2295) Quit (Read error: Connection reset by peer)
[14:44] * shylesh__ (~shylesh@121.244.87.118) Quit (Ping timeout: 480 seconds)
[14:44] * ccourtaut_ (~ccourtaut@2001:41d0:1:eed3::1) has joined #ceph
[14:44] * gregmark (~Adium@68.87.42.115) Quit (Ping timeout: 480 seconds)
[14:44] * nisha (~nisha@2406:5600:25:6dd2:209f:4acb:aa18:a196) has joined #ceph
[14:44] * shylesh__ (~shylesh@121.244.87.118) has joined #ceph
[14:44] * zenpac (~zenpac3@66.55.33.66) has joined #ceph
[14:44] * mntasauri (~motesorri@192.73.232.107) Quit (Ping timeout: 480 seconds)
[14:44] * arbrandes (~arbrandes@179.97.155.77) has joined #ceph
[14:44] * scalability-junk (sid6422@id-6422.ealing.irccloud.com) has joined #ceph
[14:45] * DrewBeer (~DrewBeer@216.152.240.203) Quit (Ping timeout: 480 seconds)
[14:45] * fam_away (~famz@nat-pool-bos-t.redhat.com) has joined #ceph
[14:45] * leseb_ (~leseb@81-64-215-19.rev.numericable.fr) has joined #ceph
[14:45] * superbeer (~MSX@studiovideo.org) has joined #ceph
[14:45] <neurodrone> Umm, could be.
[14:46] * championofcyrodi (~championo@50-205-35-98-static.hfc.comcastbusiness.net) has joined #ceph
[14:46] * kefu is now known as kefu|afk
[14:46] <neurodrone> kefu: I can check for sure.
[14:46] * jidar (~jidar@r2d2.fap.me) has joined #ceph
[14:46] * kefu|afk is now known as kefu
[14:46] <kefu> neurodrone, good luck with the adventure!
[14:46] * shaunm (~shaunm@dhcp-237-027.nomad.chalmers.se) has joined #ceph
[14:46] <neurodrone> Haha, thanks! Any idea where it is making that `tcp_read()` to?
[14:47] * liewegas is now known as sage
[14:47] <kefu> i would place an assert, and let the backtrace tell me. =D
[14:47] * dgurtner (~dgurtner@178.197.231.115) has joined #ceph
[14:48] <neurodrone> Okay. :)
[14:48] <neurodrone> I hope it is making that to the mon host I am connecting to.
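kefu's leftover-bytes suspicion is easy to demonstrate with a plain socket pair — this is generic Python, not Ceph code:

```python
import socket

# kefu's hypothesis in miniature: a read that takes fewer bytes than
# the peer sent leaves the remainder buffered for the next read.
a, b = socket.socketpair()
a.sendall(b"ceph v027")      # a banner-sized message
first = b.recv(4)            # deliberately short read
rest = b.recv(1024)          # the leftover bytes show up here
print(first, rest)
```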
[14:48] * jklare (~jklare@185.27.181.36) Quit (Ping timeout: 480 seconds)
[14:48] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[14:49] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[14:50] * dyasny (~dyasny@104.158.25.230) has joined #ceph
[14:50] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[14:51] * rendar (~I@host118-186-dynamic.21-87-r.retail.telecomitalia.it) has joined #ceph
[14:52] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[14:54] * lucas1 (~Thunderbi@218.76.52.64) Quit ()
[14:54] * kefu (~kefu@114.92.110.67) has joined #ceph
[14:55] * brianjjo (~Oddtwang@9S0AADCEZ.tor-irc.dnsbl.oftc.net) Quit ()
[14:55] * pico (~Moriarty@162.247.72.7) has joined #ceph
[14:58] <babilen> I am wondering why the Debian packages of ceph do not seem to contain ceph-deploy. Are you aware of a particular reason for that and the ramifications of that choice?
[14:59] <alfredodeza> babilen: which Debian
[14:59] <babilen> https://www.debian.org/ (are there other?)
[14:59] <babilen> If you mean particular packages, I am referring to: https://packages.qa.debian.org/c/ceph.html
[15:00] <alfredodeza> I mean, wheezy? squeeze?
[15:00] <alfredodeza> and for what ceph version
[15:00] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[15:00] <babilen> This is about ceph 0.80.7-2 as found in jessie
[15:00] <alfredodeza> oh we don't maintain those, so that will probably be on Debian's end, whoever is maintaining those packages
[15:00] <alfredodeza> we do have ceph-deploy in our Debian repos
[15:00] <babilen> Yes, I am aware that ceph doesn't maintain those, but I'm simply trying to understand that choice
[15:01] <babilen> (being rather new to ceph I can't quite understand that situation)
[15:01] <alfredodeza> you would need to ping someone that is maintaining them, not sure who that is though
[15:01] * marrusl (~mark@cpe-67-247-9-253.nyc.res.rr.com) has joined #ceph
[15:02] <babilen> I would also strongly prefer to use packages that come with Debian proper and it seems that the version in jessie is the one I would like to learn and deploy (correct me if that is not the case)
[15:02] * kefu (~kefu@114.92.110.67) has joined #ceph
[15:02] <alfredodeza> we have packages for different (tested) versions of debian and ceph, so you wouldn't be using something that is unsupported
[15:03] <alfredodeza> nothing wrong with using something provided by the distro either, but you may be behind
[15:03] <babilen> alfredodeza: Essentially just started reading the documentation and have the feeling as if ceph-deploy is rather important. Simply wonder why it has not been packaged and thought I'd ask here in case that is obvious (e.g. "yeah, nobody uses that in production, it is essentially just a sandbox tool" )
[15:03] <alfredodeza> ceph-deploy is useful to try ceph out
[15:03] <alfredodeza> and get a cluster up and running with some defaults and decisions taken for you
[15:04] <alfredodeza> however, some third-party tools and deployment systems use it because it offers a reliable way to setup certain portions of a Ceph cluster that otherwise would be a bit complicated
[15:04] <babilen> Okay, that sounds as if I wouldn't actually want to use it (as it implies "you don't know enough yet to make proper decisions, keep on reading")
[15:05] <alfredodeza> I would actually differ
[15:05] <alfredodeza> since you are a new user, I would encourage you to get started with ceph-deploy
[15:05] <alfredodeza> and then drop it when you understand how everything is set up and feel that you don't want/need it anymore
[15:06] <babilen> It would be nice to, eventually, configure this via saltstack (which does not seem to be well supported by ceph either (as opposed to ansible)) and I am simply starting to read and play along on some vagrant boxes. Just wondered why ceph-deploy isn't in the proper Debian packages.
[15:06] <Be-El> babilen: if you want to use the official ceph documentation, be aware that version 0.80.7 is kind of ancient, and the documentation refers to the current version by default
[15:08] * neurodrone (~neurodron@pool-100-1-89-227.nwrknj.fios.verizon.net) Quit (Quit: neurodrone)
[15:08] <babilen> Okay. I was under the impression that Firefly is the current LTS release and preferable to, say, giant or hammer.
[15:09] <babilen> Sorry, just trying to get a feeling for ceph and am (understandably, I guess) a bit overwhelmed :D
[15:10] <Be-El> yeah, the learning curve is very steep at the beginning
[15:10] <th0m> Hi everybody
[15:10] <th0m> I have a strange problem on my ceph cluster, last week i had :
[15:10] <th0m> * Added 5 new osd-nodes on my cluster
[15:10] <th0m> * Added 5 new monitors
[15:10] <th0m> * Removed 3 monitors
[15:10] <th0m> * Upgraded from firefly to hammer.
[15:10] <th0m> When i stop some nodes (the 5 new nodes) the OSDs are not seen as down, all OSDs are still up and i have many blocked requests (aka everything is broken)...
[15:10] <th0m> I have posted on the mailing list (http://article.gmane.org/gmane.comp.file-systems.ceph.user/22551) but without a response at this time :-)
[15:10] <babilen> Already ordered the book and will read it soon, but thought some quality time with vagrant and the ceph documentation won't be too bad
[15:10] <th0m> Any idea?
[15:11] * neurodrone (~neurodron@162.243.191.67) has joined #ceph
[15:11] <babilen> Be-El: So, the best course of action is (IYHO) to install hammer from the ceph repos?
[15:11] * mhack (~mhack@68-184-37-225.dhcp.oxfr.ma.charter.com) has joined #ceph
[15:12] <Be-El> babilen: i would start with the latest stable release on a test cluster. and this means hammer from the repos
[15:12] <babilen> Be-El: Right, ta!
[15:13] <babilen> Does it make sense to run ceph nodes in VMs or is that a ludicrous idea and would I have to invest in some bare metal? (say a couple of smallish Dell R320s)
[15:13] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[15:14] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[15:14] * mattch (~mattch@pcw3047.see.ed.ac.uk) Quit (Quit: Leaving.)
[15:19] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[15:19] * erice (~erice@c-76-120-53-165.hsd1.co.comcast.net) Quit (Remote host closed the connection)
[15:19] * kefu (~kefu@114.92.110.67) has joined #ceph
[15:21] * neurodrone (~neurodron@162.243.191.67) Quit (Ping timeout: 480 seconds)
[15:25] * pico (~Moriarty@9S0AADC3Z.tor-irc.dnsbl.oftc.net) Quit ()
[15:25] * _s1gma (~Jyron@hessel0.torservers.net) has joined #ceph
[15:25] * nisha (~nisha@2406:5600:25:6dd2:209f:4acb:aa18:a196) Quit (Ping timeout: 480 seconds)
[15:28] * yanzheng (~zhyan@125.71.106.169) has joined #ceph
[15:30] * shyu (~Shanzhi@119.254.120.66) Quit (Remote host closed the connection)
[15:36] * yanzheng (~zhyan@125.71.106.169) Quit (Quit: This computer has gone to sleep)
[15:37] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[15:38] * kefu (~kefu@114.92.110.67) has joined #ceph
[15:45] <Be-El> babilen: for a test setup vms are fine. for production you definitely want to use bare metal
[15:46] <Be-El> babilen: if you are going to build a test cluster, use OSD disks larger than 20GB. otherwise you will have problems using the OSDs
[15:46] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) has joined #ceph
[15:48] <babilen> Be-El: So larg(ish) Xen HVM guests won't cut it? Why is that? We run plenty of databases in guests
[15:48] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[15:49] <Be-El> babilen: the default weight of an osd in the crush map is its size in TB. if the size is < 10GB, rounding results in a weight of 0.0
[15:49] * owasserm (~owasserm@nat-pool-ams-t.redhat.com) Quit (Ping timeout: 480 seconds)
[15:49] <Be-El> babilen: the OSD is up and running, but ceph does not put data on it because a weight of 0.0 indicates a disabled OSD
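Be-El's rounding effect can be sketched roughly like this — the size-in-TiB rule is as stated above, but the exact precision and rounding mode used by the provisioning tools are assumptions:

```python
def default_crush_weight(size_bytes, precision=2):
    """Approximate the default CRUSH weight: the disk size in TiB,
    rounded to a couple of decimals (precision is an assumption)."""
    return round(size_bytes / 2**40, precision)

# A tiny test-VM disk rounds to 0.0, so ceph never places data on it;
# a 20 GB disk survives the rounding with a nonzero weight.
print(default_crush_weight(4 * 2**30))    # 0.0
print(default_crush_weight(20 * 2**30))   # 0.02
```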
[15:49] <babilen> So, it is fine to run multiple OSDs as long as I keep their data on their own disks? (+ SSD each)
[15:50] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) has left #ceph
[15:50] * kefu (~kefu@114.92.110.67) has joined #ceph
[15:50] <Be-El> babilen: for a test cluster you don't need ssds
[15:50] <babilen> .. on the same box ...
[15:50] <babilen> The plan is not to build a test cluster
[15:51] <babilen> Having read http://docs.ceph.com/docs/master/start/hardware-recommendations/ i got the impression that SSDs are, more or less, a necessity
[15:54] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[15:54] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[15:55] * _s1gma (~Jyron@9S0AADC5G.tor-irc.dnsbl.oftc.net) Quit ()
[15:55] <babilen> And can I really not run them within Xen? (on R720s, 56 core, 256G RAM with SSDs underneath)
[15:58] <babilen> err, R730s naturally
[15:59] <mfa298_> my understanding is that use of SSDs really depends on your use case. They can help with performance either acting as a journal or cache layer.
[16:00] * overclk (~overclk@121.244.87.117) Quit (Ping timeout: 480 seconds)
[16:01] <babilen> Indeed
[16:01] <mfa298_> but in general for ceph I think bare metal is the way to go - it really wants to see the raw disks, not raid arrays or virtual disks (unless you're running a test cluster where you don't care so much about performance or data security).
[16:02] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Ping timeout: 480 seconds)
[16:02] <mfa298_> the only bit of ceph I've considered sticking in a VM is the radosgw servers if we deploy that. Mon/OSD and MDS servers are all bare metal (MDS is on the same hardware as MON which isn't ideal but the boxes are widely overspecced)
[16:02] * vbellur (~vijay@114.143.207.214) has joined #ceph
[16:05] <babilen> okay, thanks for the input. It's just that I'd rather not buy extensive extra hardware if it can just run on the Xen hosts that are available already
[16:05] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[16:07] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[16:08] * kefu (~kefu@114.92.110.67) has joined #ceph
[16:09] <mfa298_> this may depend a bit on your use case, but I'm not sure you'll get the best performance out of ceph by having it in a load of VMs
[16:11] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[16:11] <babilen> ack
[16:11] <babilen> How problematic would it be to move to "bare metal" later on if we run into performance problems?
[16:13] * kefu (~kefu@114.92.110.67) has joined #ceph
[16:13] * neurodrone (~neurodron@162.243.191.67) has joined #ceph
[16:14] * danieagle (~Daniel@189-47-89-13.dsl.telesp.net.br) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[16:15] * derjohn_mobi (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[16:15] * wushudoin (~wushudoin@2601:646:8201:7769:2ab2:bdff:fe0b:a6ee) has joined #ceph
[16:16] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[16:17] * mattch (~mattch@pcw3047.see.ed.ac.uk) has joined #ceph
[16:25] * Crisco (~Moriarty@198.50.200.143) has joined #ceph
[16:27] * derjohn_mobi (~aj@fw.gkh-setu.de) has joined #ceph
[16:27] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[16:28] * kefu (~kefu@114.92.110.67) has joined #ceph
[16:29] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[16:30] <burley> babilen: The "simple" way to do that would be to add your bare metal nodes to the cluster, and then shut down the VMs one at a time until backfills complete, eventually ending up fully on bare metal
[16:30] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[16:31] * kefu (~kefu@114.92.110.67) has joined #ceph
[16:33] * overclk (~overclk@121.244.87.124) has joined #ceph
[16:33] * georgem (~Adium@65-110-211-254.cpe.pppoe.ca) has joined #ceph
[16:35] * thansen (~thansen@63-248-145-253.static.layl0103.digis.net) has joined #ceph
[16:39] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[16:44] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[16:45] * georgem (~Adium@65-110-211-254.cpe.pppoe.ca) Quit (Quit: Leaving.)
[16:53] * cjg (~oftc-webi@p113-DSL.agas2c.dsl.sentex.ca) Quit (Quit: Page closed)
[16:55] * Crisco (~Moriarty@9S0AADC8S.tor-irc.dnsbl.oftc.net) Quit ()
[16:55] * nastidon (~nicatronT@strasbourg-tornode.eddai.su) has joined #ceph
[16:57] * yanzheng (~zhyan@125.71.106.169) has joined #ceph
[16:57] * ircolle (~Adium@2601:285:201:2bf9:502c:76f8:4291:9a92) has joined #ceph
[16:59] * davidz (~davidz@2605:e000:1313:8003:7544:2e13:1fef:3bc7) has joined #ceph
[17:00] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:00] * scuttlemonkey is now known as scuttle|afk
[17:09] * overclk (~overclk@121.244.87.124) Quit (Ping timeout: 480 seconds)
[17:12] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[17:13] * kefu (~kefu@114.92.110.67) has joined #ceph
[17:14] <neurodrone> What usually is a response for `ceph v027` message sent to the mon host?
[17:15] <neurodrone> Here it's expecting IP:port, is that fair https://github.com/ceph/ceph/blob/master/src/msg/simple/Pipe.cc#L954?
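Reading around that line, the connect path seems to expect the literal banner and then the peer's encoded address; a hedged Python paraphrase (the helper name is invented, and the address encoding is left opaque):

```python
BANNER = b"ceph v027"   # the literal protocol banner the peers exchange

def strip_banner(buf):
    """Validate a peer's opening bytes: the banner must come first;
    whatever follows would be the peer's encoded address."""
    if not buf.startswith(BANNER):
        raise ValueError("peer did not send the ceph banner")
    return buf[len(BANNER):]

print(strip_banner(BANNER + b"<addr bytes>"))
```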
[17:18] * TheSov (~TheSov@cip-248.trustwave.com) has joined #ceph
[17:19] * erice (~erice@c-76-120-53-165.hsd1.co.comcast.net) has joined #ceph
[17:20] <TheSov> does ceph ever plan to do block level writing on its own?
[17:21] * dopesong_ (~dopesong@88-119-94-55.static.zebra.lt) has joined #ceph
[17:21] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) has joined #ceph
[17:21] * dopeson__ (~dopesong@88-119-94-55.static.zebra.lt) has joined #ceph
[17:22] <m0zes> why would ceph give up on the guarantees of existing underlying filesystems?
[17:22] <TheSov> speed
[17:24] * m0zes very much doubts there is that much speed to be gained. work would be better suited to speeding up existing filesystems.
[17:24] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:25] * nastidon (~nicatronT@5NZAAF8BT.tor-irc.dnsbl.oftc.net) Quit ()
[17:25] * Mraedis (~Kyso_@tor1e1.privacyfoundation.ch) has joined #ceph
[17:25] <TheSov> well, just thinking out loud here. the size of a ceph object is 4MB correct?
[17:26] <TheSov> and that is fantastic if i was linearly reading a large object, 100MB+, from the system.
[17:27] <TheSov> but if you are running RBD's, your disk blocks are much smaller than that
[17:27] * shakamunyi (~shakamuny@c-67-180-191-38.hsd1.ca.comcast.net) has joined #ceph
[17:27] <TheSov> it would make more sense to write those blocks directly to disk
[17:27] <TheSov> there would be greater speed to be had
[17:27] <m0zes> not with latency of the network.
[17:27] <TheSov> but i am an idiot so i may be wrong
[17:27] <Be-El> the osd does not know about rbd
[17:28] * erice (~erice@c-76-120-53-165.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[17:28] * dopesong (~dopesong@lb1.mailer.data.lt) Quit (Ping timeout: 480 seconds)
[17:28] <TheSov> Be-El, thats exactly my argument, it doesnt optimize its reads/writes for the use case
[17:28] <m0zes> but rbd isn't the only use case.
[17:28] <Be-El> TheSov: it can't, since there is no single use case
[17:28] <TheSov> it seems to be the biggest use case however
[17:28] <m0zes> iirc, rados objects aren't required to be 4MB.
[17:29] <Be-El> TheSov: i have several million small files (<< 4MB) in cephfs
[17:29] <Be-El> TheSov: others have gigabyte sized objects in rgw
[17:29] <TheSov> Be-El, i realize that but a ceph object is 4MB
[17:29] * dopesong_ (~dopesong@88-119-94-55.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[17:29] <TheSov> even if you store 10k its sitting in a 4MB object
[17:29] <Be-El> TheSov: no, a ceph object is _up to_ 4 MB
[17:29] <TheSov> are you sure?
[17:29] <TheSov> i swear i just read it was 4mb
[17:30] * dopeson__ (~dopesong@88-119-94-55.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[17:30] <Be-El> i just had a look at some 30kb files on the osds this morning
[17:30] <Be-El> TheSov: if the overall object the file is part of is smaller than 4 mb, no space is wasted
[17:30] <TheSov> "The default object size is 4 MB"
[17:30] <TheSov> right off the ceph website
[17:31] <m0zes> -rw-r--r-- 1 root root 286K Jul 10 16:00 10002a92e5c.00000000__head_0E800528__22_ffffffffffffffff_8
[17:31] <Be-El> TheSov: yes, all objects > 4MB are split into chunks of 4 MB
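The chunking Be-El describes can be sketched like this; the `rbd_data.<id>.<hex index>` naming is an assumption based on how RBD typically names its backing objects, not something stated in this conversation:

```python
OBJECT_SIZE = 4 * 2**20   # the 4 MB default discussed above

def object_for_offset(image_id, offset):
    """Map a byte offset in an RBD image to its backing RADOS object
    and the offset within that object."""
    index = offset // OBJECT_SIZE
    return "rbd_data.%s.%016x" % (image_id, index), offset % OBJECT_SIZE

# a 10 kB write touches one object (which only uses 10 kB on disk);
# a write at the 5 MB mark lands in the second object.
print(object_for_offset("abc123", 10 * 1024))
print(object_for_offset("abc123", 5 * 2**20))
```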
[17:31] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[17:31] <TheSov> odd, so it doesnt take the full space?
[17:31] <Be-El> TheSov: nope, it takes the space it needs
[17:31] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[17:32] <m0zes> otherwise it would be a waste of space.
[17:32] <TheSov> ... you arent going to use that space for anything else though
[17:33] <m0zes> sure you can.
[17:33] <TheSov> i agree you can, should you?
[17:33] * shylesh__ (~shylesh@121.244.87.118) Quit (Remote host closed the connection)
[17:33] <m0zes> why not?
[17:33] <TheSov> thats your cluster storage space
[17:34] <TheSov> i mean ok for my test cluster, i have a single disk partitioned out, but normally you wouldnt do that
[17:35] <m0zes> right, you could create 10 billion 1 byte objects if you wanted. on a 10G osd.
[17:35] <TheSov> yeah but why not just fit 4 million of them into a 4mb object
[17:36] <TheSov> the FS overhead for that would suck
[17:36] <TheSov> everyone knows storing tons of little files clogs things up
[17:36] <TheSov> which i assumed was the purpose of the objects to store a bunch of object into other objects
[17:37] <m0zes> lots of little files don't cause too many problems on xfs, as long as they aren't all in the same directory.
[17:37] <TheSov> we are having this exact issue with our email archiver, it stores emails in EML files, which are basically text files in directories
[17:38] <TheSov> backing it up takes a week
[17:38] <TheSov> and its not really that big
[17:40] * rotbeard (~redbeard@cm-171-100-223-199.revip10.asianet.co.th) has joined #ceph
[17:41] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[17:42] * kefu (~kefu@114.92.110.67) has joined #ceph
[17:44] * jklare (~jklare@185.27.181.36) has joined #ceph
[17:44] <m0zes> stat-ing the files is generally the pain there. with readdirplus and a small number of files per directory, it should be reasonable to speed it up.
[17:46] <m0zes> of course, my advise for my users that have millions of <4KB files is "don't". especially not if you are storing them all in a single directory.
[17:46] <m0zes> s/advise/advice/
[17:47] <m0zes> it doesn't stop them, but I can call them idiots if they ignore my advice.
[17:49] <TheSov> this is why i like full stack programmers
[17:49] <TheSov> they understand these things
[17:49] <TheSov> most programmers do not
[17:50] <TheSov> so hard to find one of those
[17:50] <Be-El> TheSov: full stack also means the programmer knows a little bit of everything, but there's no component he/she really knows
[17:51] * m0zes runs a general purpose hpc cluster for his university. free access for all students/employees. I don't get to vet the users, I just get to tell them to stop when they do something stupid.
[17:51] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[17:52] <TheSov> Be-El, we have 1 full stack guy who came from end user to network to infrastructure and is now a programmer, hes fantastic
[17:52] <TheSov> i tellem hey these little files are choking our backups, he says no problem, we can stick them in a blob-db
[17:52] <TheSov> problem solved
[17:52] <m0zes> gross.
[17:53] <Be-El> until the db fails, concurrent access is necessary etc.
[17:53] <TheSov> later i goto him and say this blob-db is huge we need to make it smaller, he says no problem and autogenerates a new blob-db every 500 gig
[17:53] * yanzheng (~zhyan@125.71.106.169) Quit (Quit: This computer has gone to sleep)
[17:54] <Be-El> TheSov: i think the way ceph stores the files on a osd is more clever than just a blob-db
[17:54] <TheSov> later i goto him and say hey man, people make changes to some of the files in these blobs, we have to keep them forever if that happens and all online, he says no problem and sets it up so any updated files move to the most recent blob-db and old ones can be mounted as read only
[17:54] * gfidente (~gfidente@0001ef4b.user.oftc.net) has joined #ceph
[17:54] <TheSov> when those get really old, we dismount and toss them
[17:54] * CheKoLyN (~saguilar@bender.parc.xerox.com) has joined #ceph
[17:55] <Be-El> let me guess...that guy knows how to keep his job safe by making himself undismissable? ;-)
[17:55] * Mraedis (~Kyso_@9S0AADDB8.tor-irc.dnsbl.oftc.net) Quit ()
[17:55] * isaxi (~Yopi@nl.tor-exit.neelc.org) has joined #ceph
[17:55] <TheSov> probably lol
[17:55] * yanzheng (~zhyan@125.71.106.169) has joined #ceph
[17:55] * linjan (~linjan@195.91.236.115) Quit (Ping timeout: 480 seconds)
[17:55] <TheSov> not gonna lie hes pretty much undissmisable now
[17:55] <Be-El> the standard approach is using a hash function and nested directories with names derived from parts of the hashsum
[17:56] * jwilkins (~jowilkin@c-67-180-123-48.hsd1.ca.comcast.net) has joined #ceph
[17:56] * jwilkins (~jowilkin@c-67-180-123-48.hsd1.ca.comcast.net) Quit ()
[17:56] <TheSov> i believe you, im just saying this guy is pretty neat in getting things done
[17:56] <Be-El> if you use 4 bits for each level, you end up with up to 16 subdirectories per level....fast enough for readdir
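Be-El's standard approach, sketched in Python — the choice of two levels and sha1 is illustrative, not prescribed:

```python
import hashlib
import os.path

def hashed_path(root, name, levels=2):
    """Spread files over nested directories using the leading hex
    digits of a hash: each hex digit is 4 bits, so at most 16
    subdirectories per level."""
    h = hashlib.sha1(name.encode()).hexdigest()
    return os.path.join(root, *h[:levels], h + "_" + name)

print(hashed_path("/srv/archive", "message-0001.eml"))
```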
[17:57] * jwilkins (~jowilkin@c-67-180-123-48.hsd1.ca.comcast.net) has joined #ceph
[17:57] <TheSov> backups would still take long though
[17:57] <Be-El> he might get things done......but in many cases these quick solutions have their drawbacks, too
[17:58] <TheSov> imagine each file -> READDIR, FAT READ, FILE READ, FAT READ, FILE READ - repeat until directory is empty, then next one
[17:58] * yguang11 (~yguang11@66.228.162.44) has joined #ceph
[17:58] <TheSov> sometimes its just better to read the bigger object and get it all at once
[17:59] <Be-El> sure, reading large objects is always faster than reading smaller ones
[17:59] <TheSov> which goes back to the other discussion, why doesnt ceph store smaller object into the 4MB objects
[17:59] <Be-El> and that's exactly how this discussion started....some workloads require one solution, some workloads another
[18:00] <neurodrone> Problem is they also read more than necessary for mutually disjoint blobs, if forcefully stored in a single "object".
[18:00] <neurodrone> Cannot always know the access patterns in advance.
[18:00] * shaunm (~shaunm@dhcp-237-027.nomad.chalmers.se) Quit (Ping timeout: 480 seconds)
[18:00] <m0zes> and coalescing smaller objects into larger ones requires some seriously intense logic. especially if you just need to read one.
[18:01] <Be-El> and changing (especially resizing) sub-objects becomes a real pain in the ass
[18:02] <m0zes> exactly. *most* object storage platforms have no concept of "modify" operations, because of that.
[18:02] <Be-El> TheSov: you mentioned email......when I was younger there was the battle between mailfile and maildir
[18:03] * ivotron (~ivotron@eduroam-169-233-197-33.ucsc.edu) has joined #ceph
[18:03] * moore (~moore@64.202.160.88) has joined #ceph
[18:05] <TheSov> m0zes, i see what you mean
[18:05] * sankarshan (~sankarsha@106.216.143.196) Quit (Quit: Are you sure you want to quit this channel (Cancel/Ok) ?)
[18:07] <TheSov> Be-El, i see what you mean also
[18:10] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[18:11] * bara (~bara@213.175.37.10) Quit (Quit: Bye guys!)
[18:11] * kefu (~kefu@114.92.110.67) has joined #ceph
[18:13] * xabner (~xabner@2607:f388:1090:0:a950:1488:970e:48e7) has joined #ceph
[18:15] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[18:17] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[18:20] * kefu (~kefu@114.92.110.67) has joined #ceph
[18:22] * kbader (~Adium@64.169.30.57) has joined #ceph
[18:23] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[18:24] * analbeard (~shw@5.80.204.111) has joined #ceph
[18:25] * isaxi (~Yopi@9S0AADDDZ.tor-irc.dnsbl.oftc.net) Quit ()
[18:25] * sese_ (~LRWerewol@7R2AADNXZ.tor-irc.dnsbl.oftc.net) has joined #ceph
[18:26] * yguang11 (~yguang11@66.228.162.44) Quit ()
[18:26] * yguang11 (~yguang11@2001:4998:effd:600:b40a:cc7c:d04b:ec54) has joined #ceph
[18:27] * analbeard (~shw@5.80.204.111) Quit ()
[18:28] * kefu (~kefu@114.92.110.67) has joined #ceph
[18:28] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[18:28] * Nacer_ (~Nacer@LCaen-656-1-72-185.w80-13.abo.wanadoo.fr) Quit (Read error: Connection reset by peer)
[18:30] <zenpac> If we shut off the master monitor, should my other 2 monitors form a new quorum? The other two servers don't seem to make a new quorum.
[18:33] * Nacer (~Nacer@LCaen-656-1-72-185.w80-13.abo.wanadoo.fr) has joined #ceph
[18:33] <m0zes> you need >50% of the monitors up to form quorum. it sounds like you have 3 total?
[18:33] <m0zes> 2/3 should form a quorum.
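m0zes's majority rule is simple enough to state as code — a sketch, not anything from the Ceph tree:

```python
def has_quorum(mons_up, mons_total):
    """Paxos-style majority: strictly more than half of ALL monitors
    in the monmap, not just of those currently reachable."""
    return mons_up > mons_total // 2

print(has_quorum(2, 3))   # True: 2 of 3 can elect a leader
print(has_quorum(1, 3))   # False: a lone survivor cannot
print(has_quorum(2, 4))   # False: exactly half is not a majority
```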
[18:35] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Ping timeout: 480 seconds)
[18:35] <zenpac> yes.. 2/3 ok.
[18:35] <zenpac> Maybe my Calamari is not configured correctly?
[18:36] <zenpac> If I do a "ceph status" on my 2nd host, I get nothing back.
[18:36] <zenpac> It just hangs..
[18:36] * snakamoto1 (~Adium@192.16.26.2) Quit (Quit: Leaving.)
[18:37] <Be-El> zenpac: does your ceph.conf contain the names/ip adresses of all mons?
[18:38] <zenpac> no..
[18:38] <zenpac> I'll show:
[18:39] <zenpac> http://git.io/v33Q0
[18:39] <m0zes> ceph.conf needs them.
[18:39] <zenpac> for initial monitors?
[18:39] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[18:39] <zenpac> mon_host = [all monitors] ?
[18:40] <m0zes> yep, comma separated
[18:40] <m0zes> I'd also read http://ceph.com/docs/master/rados/configuration/mon-config-ref/#initial-members
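For instance, a ceph.conf carrying all three mons might look like this (the hostnames and addresses here are placeholders, not zenpac's actual values):

```ini
[global]
mon_initial_members = ceph-01, ceph-02, ceph-03
mon_host = 192.168.0.11,192.168.0.12,192.168.0.13
```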
[18:41] <zenpac> I used the quick install to add all 3 mons.. Is there an extra step I missed that adds those to the config files?
[18:42] <m0zes> I take that to mean that 'mon initial members' could cause quorum to be established with *just* ceph-01
[18:42] <Be-El> well, time to call it a day
[18:42] <m0zes> zenpac: https://ceph.com/docs/v0.79/dev/mon-bootstrap/#addresses-only
[18:43] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[18:43] * Nacer (~Nacer@LCaen-656-1-72-185.w80-13.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[18:45] <zenpac> Is there a ceph-deploy command that will change those values on each host?
[18:45] <m0zes> no idea.
[18:46] * rotbeard (~redbeard@cm-171-100-223-199.revip10.asianet.co.th) Quit (Quit: Leaving)
[18:47] <zenpac> ok.. Just wondering why ceph-deploy didn't add those in when I clearly created and configured them.
[18:51] * linjan (~linjan@176.195.196.88) has joined #ceph
[18:51] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) has joined #ceph
[18:53] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[18:54] * segutier (~segutier@sfo-vpn1.shawnlower.net) has joined #ceph
[18:55] * sese_ (~LRWerewol@7R2AADNXZ.tor-irc.dnsbl.oftc.net) Quit ()
[18:55] * Pettis (~Grimmer@exit1.torproxy.org) has joined #ceph
[18:55] * sleinen1 (~Adium@2001:620:0:69::100) has joined #ceph
[18:57] * xabner_ (~xabner@dyn-72-33-17-199.uwnet.wisc.edu) has joined #ceph
[18:57] * xabner_ (~xabner@dyn-72-33-17-199.uwnet.wisc.edu) Quit (Remote host closed the connection)
[18:59] * snakamoto (~Adium@192.16.26.2) has joined #ceph
[19:01] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:01] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:02] * xabner_ (~xabner@dyn-72-33-17-199.uwnet.wisc.edu) has joined #ceph
[19:03] * yanzheng (~zhyan@125.71.106.169) Quit (Quit: This computer has gone to sleep)
[19:03] * xabner (~xabner@2607:f388:1090:0:a950:1488:970e:48e7) Quit (Ping timeout: 480 seconds)
[19:04] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[19:05] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[19:06] * gfidente (~gfidente@0001ef4b.user.oftc.net) Quit (Quit: bye)
[19:09] * sugoruyo (~georgev@paarthurnax.esc.rl.ac.uk) Quit (Quit: I'm going home!)
[19:10] * thansen (~thansen@63-248-145-253.static.layl0103.digis.net) Quit (Quit: Ex-Chat)
[19:16] * analbeard (~shw@5.80.204.111) has joined #ceph
[19:20] * erice (~erice@c-76-120-53-165.hsd1.co.comcast.net) has joined #ceph
[19:21] * analbeard1 (~shw@support.memset.com) has joined #ceph
[19:21] * kefu is now known as kefu|afk
[19:23] * segutier (~segutier@sfo-vpn1.shawnlower.net) Quit (Ping timeout: 480 seconds)
[19:25] * kefu|afk (~kefu@114.92.110.67) Quit (Quit: My Mac has gone to sleep. ZZZzzz...)
[19:25] * Pettis (~Grimmer@5NZAAF8HG.tor-irc.dnsbl.oftc.net) Quit ()
[19:25] * brannmar (~osuka_@tor.piratenpartei-nrw.de) has joined #ceph
[19:25] * analbeard (~shw@5.80.204.111) Quit (Ping timeout: 480 seconds)
[19:26] * dgurtner (~dgurtner@178.197.231.115) Quit (Ping timeout: 480 seconds)
[19:28] * erice (~erice@c-76-120-53-165.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[19:30] * xarses_ (~xarses@12.164.168.117) has joined #ceph
[19:33] * kbader (~Adium@64.169.30.57) Quit (Quit: Leaving.)
[19:35] * xabner_ (~xabner@dyn-72-33-17-199.uwnet.wisc.edu) Quit (Remote host closed the connection)
[19:37] * brutuscat (~brutuscat@234.Red-79-151-98.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[19:38] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[19:38] * CheKoLyN (~saguilar@bender.parc.xerox.com) Quit (Read error: Connection reset by peer)
[19:40] * qhartman (~qhartman@den.direwolfdigital.com) has joined #ceph
[19:43] * qhartman (~qhartman@den.direwolfdigital.com) Quit ()
[19:47] * xabner (~xabner@dyn-72-33-17-199.uwnet.wisc.edu) has joined #ceph
[19:47] * xabner (~xabner@dyn-72-33-17-199.uwnet.wisc.edu) Quit (Remote host closed the connection)
[19:48] * Meths_ is now known as Meths
[19:48] * xabner (~xabner@2607:f388:1090:0:bc79:eb86:4281:33cf) has joined #ceph
[19:51] * jordanP (~jordan@213.215.2.194) Quit (Quit: Leaving)
[19:55] * brannmar (~osuka_@9S0AADDIT.tor-irc.dnsbl.oftc.net) Quit ()
[20:13] * dolgner (~textual@50-192-42-249-static.hfc.comcastbusiness.net) has joined #ceph
[20:24] * julen (~julen@2001:638:70e:11:f937:5147:3920:7c92) Quit (Ping timeout: 480 seconds)
[20:33] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[20:44] * fdmanana (~fdmanana@bl13-145-124.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[20:44] * branto (~branto@ip-213-220-232-132.net.upcbroadband.cz) Quit (Quit: Leaving.)
[20:44] * scuttle|afk is now known as scuttlemonkey
[20:44] * analbeard1 (~shw@support.memset.com) Quit (Ping timeout: 480 seconds)
[20:45] * scuttlemonkey is now known as scuttle|afk
[20:46] <TheSov> so does anyone know if WD Red's weird firmware is a good fit for ceph?
[20:48] <snakamoto> What is weird firmware?
[20:49] <TheSov> its readahead is supposed to be able to detect the beginnings and ends of files and read them whole
[20:49] <TheSov> or tries to
[20:49] <TheSov> so in essence it tries to cache any object it's reading
[20:50] <TheSov> it's supposed to be smart enough to do it in RAID even
[20:50] <TheSov> well, if you use NASware
[20:50] * adrian15b (~kvirc@btactic.ddns.jazztel.es) Quit (Ping timeout: 480 seconds)
[20:50] <snakamoto> pretty interesting, I don't have any experience
[20:50] <snakamoto> It reminds me of another question I've had
[20:51] <snakamoto> On the XFS/BTRFS/EXT4 file system, does Ceph create a file for each object? or each pg? or ____ ??
[20:51] <TheSov> snakamoto, apparently so, we just finished speaking about that earlier
[20:52] <TheSov> let me quote
[20:53] <TheSov> Be-El> i just had a look at some 30kb files on the osds this morning
[20:53] <TheSov> <Be-El> TheSov: if the overall object the file is part of is smaller than 4 mb, no space is wasted
[20:53] <TheSov> <TheSov> "The default object size is 4 MB"
[20:53] <TheSov> <TheSov> right off the ceph website
[20:53] <TheSov> <m0zes> -rw-r--r-- 1 root root 286K Jul 10 16:00 10002a92e5c.00000000__head_0E800528__22_ffffffffffffffff_8
[20:53] <TheSov> <Be-El> TheSov: yes, all objects > 4MB are split into chunks of 4 MB
[21:00] <TheSov> snakamoto, does that answer your question?
[21:01] * scuttle|afk is now known as scuttlemonkey
[21:02] * rendar (~I@host118-186-dynamic.21-87-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[21:02] <snakamoto> Yes, that makes sense to me
[21:03] <snakamoto> but then, when you're using an access method that stripes across objects (like RadosGW), that readahead will be of reduced benefit
[21:04] <TheSov> right, i was just wondering if the weirdo firmware causes speed degradation; less effective is not necessarily bad, but slowing it down is.
[21:04] <TheSov> besides my use case is primarily RBD, with some Cephfs mixed in
[21:12] <snakamoto> Thank you for the info, by the way
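The chunking behavior discussed above (default RADOS object size of 4 MB; anything larger split into 4 MB pieces, with small objects stored whole and no space wasted) can be sketched as follows. This is a toy illustration of the arithmetic, not Ceph's actual code; the function name and layout are invented for the example.

```python
# Illustrative sketch only: how data larger than the default 4 MB object
# size ends up as full 4 MB chunks plus one smaller tail, while a small
# object (e.g. a 30 kB file) stays a single object with no padding.
OBJECT_SIZE = 4 * 1024 * 1024  # 4 MB default, per the Ceph docs quoted above

def chunk_sizes(total_bytes, object_size=OBJECT_SIZE):
    """Return the sizes of the pieces a blob of total_bytes is stored as."""
    full, tail = divmod(total_bytes, object_size)
    sizes = [object_size] * full
    if tail:
        sizes.append(tail)  # last piece holds only the leftover bytes
    return sizes

print(chunk_sizes(30 * 1024))         # 30 kB file -> one 30720-byte object
print(chunk_sizes(10 * 1024 * 1024))  # 10 MB -> two 4 MB chunks + 2 MB tail
```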
[21:42] * derjohn_mobi (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[21:44] * skorgu (skorgu@pylon.skorgu.net) Quit (Remote host closed the connection)
[21:52] * garphy is now known as garphy`aw
[21:53] * xabner (~xabner@2607:f388:1090:0:bc79:eb86:4281:33cf) Quit (Quit: Leaving...)
[22:13] * thansen (~thansen@17.253.sfcn.org) has joined #ceph
[22:13] * elder_ (~elder@h69-130-42-166.pqlkmn.broadband.dynamic.tds.net) has joined #ceph
[22:14] * julen (~julen@2001:638:70e:11:1cc9:5acd:fea4:a66c) has joined #ceph
[22:14] * TheSov2 (~TheSov@204.13.200.248) has joined #ceph
[22:15] * Da_Pineapple (~Jyron@4.tor.exit.babylon.network) has joined #ceph
[22:15] * ToMiles (~ToMiles@nl8x.mullvad.net) has joined #ceph
[22:16] * TheSov3 (~TheSov@204.13.200.248) has joined #ceph
[22:17] * dmick1 (~dmick@206.169.83.146) has joined #ceph
[22:21] * TheSov (~TheSov@cip-248.trustwave.com) Quit (Ping timeout: 480 seconds)
[22:22] * TheSov2 (~TheSov@204.13.200.248) Quit (Ping timeout: 480 seconds)
[22:25] * rendar (~I@host118-186-dynamic.21-87-r.retail.telecomitalia.it) has joined #ceph
[22:27] * dopesong (~dopesong@78-60-74-130.static.zebra.lt) has joined #ceph
[22:27] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[22:28] * derjohn_mobi (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[22:30] <bearkitten> hi, does rados cppool overwrite the target pool or just copy the contents of the source pool to the target pool?
[22:36] * dmick1 (~dmick@206.169.83.146) has left #ceph
[22:37] * jordanP (~jordan@bdv75-2-81-57-250-57.fbx.proxad.net) has joined #ceph
[22:40] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[22:42] * segutier (~segutier@sfo-vpn1.shawnlower.net) has joined #ceph
[22:44] * Da_Pineapple (~Jyron@7R2AADN4J.tor-irc.dnsbl.oftc.net) Quit ()
[22:46] <jcsp_> bearkitten: it iterates over objects in the source pool, and copies each one to the target pool.
[22:46] <jcsp_> so the only thing overwritten would be any objects that happened to exist with the same name
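The semantics jcsp_ describes can be modelled with plain dicts: iterate the source pool's objects, copy each into the target, and only same-named target objects get overwritten while everything else in the target survives. This is an illustration of that behavior under those stated assumptions, not the rados CLI or librados API.

```python
# Toy model of the cppool behavior described above: pools are modelled
# as name -> bytes dicts. Copying is per-object, so a pre-existing
# target object survives unless an identically named source object
# replaces it. Illustrative only; not how Ceph implements it.
def cppool(source, target):
    for name, data in source.items():
        target[name] = data  # overwrite happens only on a name collision
    return target

src = {"obj_a": b"new-a", "obj_b": b"b"}
dst = {"obj_a": b"old-a", "obj_c": b"c"}
cppool(src, dst)
print(dst)  # obj_a overwritten, obj_c untouched, obj_b copied in
```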
[22:50] * TheSov3 is now known as TheSov
[22:56] * dopesong (~dopesong@78-60-74-130.static.zebra.lt) Quit (Remote host closed the connection)
[23:01] * bsanders (~billysand@russell.dreamhost.com) has joined #ceph
[23:03] <bsanders> Has anyone tried Ceph with an SMR drive? Perhaps using a Caching Tier in front?
[23:04] <rkeene> You're not fooling anyone, Bernie.
[23:05] <snakamoto> haha
[23:05] <rkeene> #BLACKLIVESMATTER
[23:06] <bsanders> ...?
[23:06] <bsanders> Oh, bsanders. lol
[23:07] <bsanders> b for Bill :)
[23:07] <snakamoto> I have not seen anyone post recently that they've tried it
[23:07] <snakamoto> I have not read about anyone using it recently either
[23:08] <bsanders> SMR stuff is supposed to be handled by the underlying FS, right?
[23:08] * reed (~reed@2607:f298:a:607:29b1:9870:a5dd:125d) has joined #ceph
[23:08] * elder_ (~elder@h69-130-42-166.pqlkmn.broadband.dynamic.tds.net) Quit (Quit: Leaving)
[23:08] <snakamoto> should be handled by the drive controller. You're talking about shingled magnetic recording right?
[23:09] <bsanders> Was wondering if there would be any need for Ceph to be "SMR-aware"
[23:09] <bsanders> Yes, that's what I'm talking about.
[23:09] <snakamoto> As far as I've read, that's all drive-controller territory.
[23:09] <bsanders> Controller, that makes sense
[23:10] <bearkitten> jcsp_: hmm, and is it common to have an object collision like that?
[23:10] * julen (~julen@2001:638:70e:11:1cc9:5acd:fea4:a66c) Quit (Ping timeout: 480 seconds)
[23:14] * Jamana (~hassifa@torsrvs.snydernet.net) has joined #ceph
[23:16] * julen (~julen@2001:638:70e:11:1cc9:5acd:fea4:a66c) has joined #ceph
[23:19] * CheKoLyN (~saguilar@bender.parc.xerox.com) has joined #ceph
[23:20] <jcsp_> bearkitten: it depends what's in your pools.
[23:20] * kmARC (~kmARC@80-219-254-3.dclient.hispeed.ch) has joined #ceph
[23:26] * kmARC_ (~kmARC@80-219-254-3.dclient.hispeed.ch) has joined #ceph
[23:34] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[23:34] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[23:35] * julen (~julen@2001:638:70e:11:1cc9:5acd:fea4:a66c) Quit (Ping timeout: 480 seconds)
[23:38] * ilken (ilk@2602:63:c2a2:af00:9448:836c:798:aa67) has joined #ceph
[23:43] * julen (~julen@2001:638:70e:11:1cc9:5acd:fea4:a66c) has joined #ceph
[23:44] * Jamana (~hassifa@9S0AADDPS.tor-irc.dnsbl.oftc.net) Quit ()
[23:44] * Mousey (~demonspor@5NZAAF8PW.tor-irc.dnsbl.oftc.net) has joined #ceph
[23:46] * jordanP (~jordan@bdv75-2-81-57-250-57.fbx.proxad.net) Quit (Quit: Leaving)
[23:52] <bearkitten> jcsp_: kvm images
[23:52] <bearkitten> so they were created on different pools
[23:57] * julen (~julen@2001:638:70e:11:1cc9:5acd:fea4:a66c) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.