#ceph IRC Log

IRC Log for 2015-06-10

Timestamps are in GMT/BST.

[0:00] <TheSov> what constitutes huge?
[0:00] <TheSov> 100 osds?
[0:01] <TheSov> 200, 300?
[0:01] * dneary (~dneary@pool-96-252-45-212.bstnma.fios.verizon.net) has joined #ceph
[0:01] <TheSov> being so big, i cannot imagine that we are expected to keep using hosts files, im guessing that at some point i should have a dedicated dns server?
[0:03] * championofcyrodi (~championo@50-205-35-98-static.hfc.comcastbusiness.net) has joined #ceph
[0:03] * Rosenbluth (~isaxi@9S0AAAW3U.tor-irc.dnsbl.oftc.net) Quit ()
[0:03] * Gecko1986 (~Jebula@9S0AAAW40.tor-irc.dnsbl.oftc.net) has joined #ceph
[0:04] * jskinner (~jskinner@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[0:06] <koollman> TheSov: having a dns always helps. even wut
[0:06] <koollman> *with 10 machines
[0:07] <TheSov> i havent started looking at crushmaps and manipulating PG's
[0:08] <TheSov> for max performance tuning i would need to do that in depth correct?
[0:10] <TheSov> i was told that a ratio of 4 to 1, rust to ssd journal was good for high performance is that correct?
[0:13] <jidar> wouldn't that depend on the read/write ratios
[0:13] <jidar> and more importantly, the type of read/writes
[0:13] <lurbs> It's workload dependent, but that's a reasonable starting point.
[0:14] * debian112 (~bcolbert@24.126.201.64) Quit (Quit: Leaving.)
[0:14] <TheSov> yes i assume so, but my goal is to use all commodity disk. so 3tb wd's arent going to be very fast
[0:14] * badone__ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[0:14] <TheSov> so 1 ssd should be able to outperform 4 of those
[0:14] <jidar> what bus is the SSD sitting on?
[0:14] <TheSov> pci
[0:14] <jidar> directly?
[0:15] <jidar> and I assume you mean PCIe?
[0:15] <TheSov> yes
[0:15] <TheSov> 4 sata disks, 1 pcie
[0:15] <TheSov> actually 8 and 2
[0:15] <jidar> ok, so I believe even a 1x channel is going to do more than 4 disks of SATA bus
[0:15] <TheSov> per system
[0:15] <TheSov> right so performance should be good
[0:15] <TheSov> ?
[0:16] <jidar> could be, depends on workload ;)
[0:16] <jidar> how big is the PCIe ssd?
[0:16] <TheSov> 128gb
[0:16] <jidar> should be fine
[0:18] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[0:30] * dneary (~dneary@pool-96-252-45-212.bstnma.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[0:33] * Gecko1986 (~Jebula@9S0AAAW40.tor-irc.dnsbl.oftc.net) Quit ()
[0:33] * dotblank (~starcoder@exit1.telostor.ca) has joined #ceph
[0:35] * wschulze1 (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[0:36] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[0:38] * jashank42 (~jashan42@117.197.168.193) Quit (Read error: Connection reset by peer)
[0:40] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Remote host closed the connection)
[0:48] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) Quit (Quit: Leaving)
[0:51] * vata (~vata@cable-21.246.173-197.electronicbox.net) has joined #ceph
[0:57] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[1:00] * rlrevell (~leer@184.52.129.221) has joined #ceph
[1:03] * dotblank (~starcoder@3DDAAAZAA.tor-irc.dnsbl.oftc.net) Quit ()
[1:03] * Plesioth (~Architect@destiny.enn.lu) has joined #ceph
[1:04] * alram (~alram@192.41.52.12) Quit (Quit: leaving)
[1:05] * stiopa (~stiopa@cpc73828-dals21-2-0-cust630.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[1:12] * Concubidated (~Adium@192.41.52.12) Quit (Ping timeout: 480 seconds)
[1:13] * CheKoLyN (~saguilar@bender.parc.xerox.com) Quit (Quit: Leaving)
[1:16] * rlrevell (~leer@184.52.129.221) Quit (Quit: Leaving.)
[1:19] * lcurtis (~lcurtis@47.19.105.250) Quit (Quit: Ex-Chat)
[1:26] * rlrevell (~leer@184.52.129.221) has joined #ceph
[1:31] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[1:32] * Concubidated (~Adium@66-87-127-163.pools.spcsdns.net) has joined #ceph
[1:33] * Plesioth (~Architect@5NZAADJEG.tor-irc.dnsbl.oftc.net) Quit ()
[1:33] * Mousey (~Arcturus@tor-elpresidente.piraten-nds.de) has joined #ceph
[1:33] * alram (~alram@64.134.221.151) has joined #ceph
[1:36] * JFQ (~ghartz@AStrasbourg-651-1-162-51.w90-6.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[1:39] * irq0 (~seri@amy.irq0.org) Quit (Ping timeout: 480 seconds)
[1:42] * LeaChim (~LeaChim@host86-175-32-176.range86-175.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[1:43] * jskinner (~jskinner@host-95-2-129.infobunker.com) has joined #ceph
[1:43] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) has joined #ceph
[1:46] * rlrevell (~leer@184.52.129.221) Quit (Quit: Leaving.)
[1:48] * badone__ is now known as badone
[1:49] * moore (~moore@64.202.160.88) Quit (Remote host closed the connection)
[1:51] * rlrevell (~leer@184.52.129.221) has joined #ceph
[1:52] * irq0 (~seri@cpu0.net) has joined #ceph
[1:52] <TheSov> germany should create its own linux distro and call it berlinux
[1:52] <TheSov> oh wait even better, munix
[1:53] * bandrus (~brian@131.sub-70-211-67.myvzw.com) Quit (Quit: Leaving.)
[1:53] * jskinner (~jskinner@host-95-2-129.infobunker.com) Quit (Ping timeout: 480 seconds)
[1:55] * ismell_ (~ismell@host-64-17-88-252.beyondbb.com) Quit (Ping timeout: 480 seconds)
[1:55] <florz> they called it limux ... kindof ;-)
[1:58] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[2:03] * Mousey (~Arcturus@9S0AAAW81.tor-irc.dnsbl.oftc.net) Quit ()
[2:03] * geegeegee (~Cue@tor.het.net) has joined #ceph
[2:03] * oms101 (~oms101@p20030057EA0B2000C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[2:11] * oms101 (~oms101@p20030057EA0A9A00C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[2:12] * OutOfNoWhere (~rpb@199.68.195.102) has joined #ceph
[2:14] * rlrevell (~leer@184.52.129.221) Quit (Quit: Leaving.)
[2:14] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[2:15] * rlrevell (~leer@184.52.129.221) has joined #ceph
[2:21] * evanjfraser_ (~quassel@122.252.188.1) has joined #ceph
[2:22] * evanjfraser (~quassel@122.252.188.1) Quit (Read error: Connection reset by peer)
[2:24] * nsoffer (~nsoffer@bzq-79-180-80-86.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[2:24] * bene (~ben@c-24-60-237-191.hsd1.nh.comcast.net) Quit (Read error: Connection reset by peer)
[2:25] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[2:27] * rlrevell (~leer@184.52.129.221) Quit (Quit: Leaving.)
[2:30] * badone__ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[2:30] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[2:33] * geegeegee (~Cue@9S0AAAW92.tor-irc.dnsbl.oftc.net) Quit ()
[2:33] * neobenedict (~pakman__@politkovskaja.torservers.net) has joined #ceph
[2:34] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[2:36] * yanzheng (~zhyan@171.216.94.129) has joined #ceph
[2:37] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[2:40] * yguang11 (~yguang11@2001:4998:effd:600:4549:1c:69db:5c71) has joined #ceph
[2:45] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Ping timeout: 480 seconds)
[2:49] * xinxinsh (~xinxinsh@192.102.204.38) has joined #ceph
[2:49] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) has joined #ceph
[2:51] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[2:53] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[2:57] * rlrevell (~leer@184.52.129.221) has joined #ceph
[3:03] * neobenedict (~pakman__@5NZAADJJG.tor-irc.dnsbl.oftc.net) Quit ()
[3:03] * offender (~brianjjo@176.10.104.240) has joined #ceph
[3:04] * midnight_ (~midnightr@216.113.160.71) Quit (Remote host closed the connection)
[3:07] * shang (~ShangWu@223-137-59-170.EMOME-IP.hinet.net) has joined #ceph
[3:09] * wushudoin_ (~wushudoin@transit-86-181-132-209.redhat.com) has joined #ceph
[3:13] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) Quit (Quit: Leaving)
[3:13] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[3:13] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[3:13] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit ()
[3:15] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[3:16] * shang (~ShangWu@223-137-59-170.EMOME-IP.hinet.net) Quit (Quit: Ex-Chat)
[3:16] * jbautista- (~wushudoin@38.140.108.2) Quit (Ping timeout: 480 seconds)
[3:16] * shang (~ShangWu@223-137-59-170.EMOME-IP.hinet.net) has joined #ceph
[3:17] * wushudoin_ (~wushudoin@transit-86-181-132-209.redhat.com) Quit (Ping timeout: 480 seconds)
[3:17] * wushudoin_ (~wushudoin@38.140.108.2) has joined #ceph
[3:21] * madkiss (~madkiss@2001:6f8:12c3:f00f:eca5:a55b:7447:a800) Quit (Read error: Connection reset by peer)
[3:21] * madkiss (~madkiss@2001:6f8:12c3:f00f:2c77:f2ed:c6bc:462f) has joined #ceph
[3:24] * capri_on (~capri@212.218.127.222) Quit (Read error: Connection reset by peer)
[3:25] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[3:25] * rlrevell (~leer@184.52.129.221) Quit (Quit: Leaving.)
[3:27] * ircolle (~ircolle@206.169.83.146) Quit (Ping timeout: 480 seconds)
[3:31] * joao (~joao@249.38.136.95.rev.vodafone.pt) Quit (Quit: Leaving)
[3:31] * alram_ (~alram@172.56.39.30) has joined #ceph
[3:33] * offender (~brianjjo@9S0AAAXB0.tor-irc.dnsbl.oftc.net) Quit ()
[3:33] * darks (~sese_@spftor1e1.privacyfoundation.ch) has joined #ceph
[3:33] * georgem (~Adium@206-248-174-223.dsl.teksavvy.com) has joined #ceph
[3:37] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Ping timeout: 480 seconds)
[3:37] * alram (~alram@64.134.221.151) Quit (Ping timeout: 480 seconds)
[3:38] * xinxinsh (~xinxinsh@192.102.204.38) Quit (Remote host closed the connection)
[3:38] * OutOfNoWhere (~rpb@199.68.195.102) Quit (Ping timeout: 480 seconds)
[3:39] * jclm (~jclm@122.181.21.134) Quit (Quit: Leaving.)
[3:40] * owasserm (~owasserm@206.169.83.146) Quit (Ping timeout: 480 seconds)
[3:40] * ganesh (~oftc-webi@103.27.8.44) Quit (Remote host closed the connection)
[3:42] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[3:47] * rlrevell (~leer@184.52.129.221) has joined #ceph
[3:50] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[3:51] * fam_away is now known as fam
[3:54] * shohn1 (~shohn@dslb-178-002-076-215.178.002.pools.vodafone-ip.de) has joined #ceph
[3:55] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[3:56] * CheKoLyN (~saguilar@c-24-23-216-79.hsd1.ca.comcast.net) has joined #ceph
[3:57] * dneary (~dneary@pool-96-252-45-212.bstnma.fios.verizon.net) has joined #ceph
[3:57] * yguang11 (~yguang11@2001:4998:effd:600:4549:1c:69db:5c71) Quit (Remote host closed the connection)
[3:58] * shohn (~shohn@dslb-178-008-199-047.178.008.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[3:59] * rlrevell (~leer@184.52.129.221) Quit (Quit: Leaving.)
[4:03] * darks (~sese_@9S0AAAXDC.tor-irc.dnsbl.oftc.net) Quit ()
[4:03] * Revo84 (~Rehevkor@exit.kapustik.info) has joined #ceph
[4:04] * zhaochao (~zhaochao@125.39.8.226) has joined #ceph
[4:06] * rlrevell (~leer@184.52.129.221) has joined #ceph
[4:08] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[4:11] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[4:15] * dneary (~dneary@pool-96-252-45-212.bstnma.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[4:17] * CAPSLOCK2000 (~oftc@2001:984:3be3:1::8) Quit (Ping timeout: 480 seconds)
[4:18] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[4:20] * zack_dolby (~textual@e0109-114-22-11-74.uqwimax.jp) has joined #ceph
[4:20] * rlrevell (~leer@184.52.129.221) Quit (Quit: Leaving.)
[4:22] * rlrevell (~leer@184.52.129.221) has joined #ceph
[4:27] * OutOfNoWhere (~rpb@199.68.195.102) has joined #ceph
[4:30] * flisky (~Thunderbi@106.39.60.34) has joined #ceph
[4:30] * shang (~ShangWu@223-137-59-170.EMOME-IP.hinet.net) Quit (Quit: Ex-Chat)
[4:32] * shang (~ShangWu@223-137-59-170.EMOME-IP.hinet.net) has joined #ceph
[4:33] * Revo84 (~Rehevkor@9S0AAAXE4.tor-irc.dnsbl.oftc.net) Quit ()
[4:33] * slowriot (apx@r2.geoca.st) has joined #ceph
[4:35] * zack_dolby (~textual@e0109-114-22-11-74.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[4:35] * shang (~ShangWu@223-137-59-170.EMOME-IP.hinet.net) Quit (Read error: No route to host)
[4:37] * dgbaley27 (~matt@c-67-176-93-83.hsd1.co.comcast.net) has joined #ceph
[4:40] * xinxinsh (~xinxinsh@shzdmzpr01-ext.sh.intel.com) has joined #ceph
[4:41] * owasserm (~owasserm@216.1.187.164) has joined #ceph
[4:42] * OutOfNoWhere (~rpb@199.68.195.102) Quit (Ping timeout: 480 seconds)
[4:43] * blynch (~blynch@vm-nat.msi.umn.edu) has joined #ceph
[4:46] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[4:50] * wschulze1 (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) has joined #ceph
[4:55] * vbellur (~vijay@122.172.46.139) Quit (Ping timeout: 480 seconds)
[4:57] * rlrevell (~leer@184.52.129.221) Quit (Quit: Leaving.)
[4:57] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[5:01] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[5:02] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[5:03] * slowriot (apx@7R2AABLJT.tor-irc.dnsbl.oftc.net) Quit ()
[5:04] * rlrevell (~leer@184.52.129.221) has joined #ceph
[5:06] * kefu (~kefu@60.247.111.66) has joined #ceph
[5:07] * Rehevkor (~Bored@tor-exit-node.nip.su) has joined #ceph
[5:07] * kefu (~kefu@60.247.111.66) Quit ()
[5:08] * overclk (~overclk@121.244.87.117) has joined #ceph
[5:09] * RC (~Adium@123.201.253.233) has joined #ceph
[5:10] * RC is now known as Guest1104
[5:17] * jclm (~jclm@121.244.87.117) has joined #ceph
[5:18] * calvinx (~calvin@101.100.172.246) has joined #ceph
[5:20] * Vacuum__ (~Vacuum@88.130.222.49) has joined #ceph
[5:21] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[5:21] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[5:26] * yguang11 (~yguang11@12.31.82.125) has joined #ceph
[5:27] * Vacuum_ (~Vacuum@i59F790E8.versanet.de) Quit (Ping timeout: 480 seconds)
[5:27] * yguang11_ (~yguang11@2001:4998:effd:7804::108a) has joined #ceph
[5:30] * alram (~alram@50.95.199.16) has joined #ceph
[5:33] * alram_ (~alram@172.56.39.30) Quit (Ping timeout: 480 seconds)
[5:34] * yguang11 (~yguang11@12.31.82.125) Quit (Ping timeout: 480 seconds)
[5:37] * Rehevkor (~Bored@0SGAABBCT.tor-irc.dnsbl.oftc.net) Quit ()
[5:37] * Thononain (~Joppe4899@tor-exit1-readme.dfri.se) has joined #ceph
[5:39] * shylesh (~shylesh@121.244.87.124) has joined #ceph
[5:47] * yguang11_ (~yguang11@2001:4998:effd:7804::108a) Quit (Ping timeout: 480 seconds)
[5:50] * squ (~Thunderbi@46.109.36.167) has joined #ceph
[5:52] * lightspeed (~lightspee@2001:8b0:16e:1:8326:6f70:89f:8f9c) Quit (Ping timeout: 480 seconds)
[5:54] * yguang11 (~yguang11@12.31.82.125) has joined #ceph
[5:59] * fdmanana__ (~fdmanana@bl13-130-142.dsl.telepac.pt) has joined #ceph
[6:00] * mdxi (~mdxi@50-199-109-154-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[6:01] * lightspeed (~lightspee@2001:8b0:16e:1:8326:6f70:89f:8f9c) has joined #ceph
[6:02] * shang (~ShangWu@175.41.48.77) has joined #ceph
[6:06] * KevinPerks (~Adium@2606:a000:80ad:1300:6584:ea02:b68e:1898) Quit (Quit: Leaving.)
[6:06] * fmanana (~fdmanana@bl13-135-31.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[6:07] * Thononain (~Joppe4899@3DDAAAZS4.tor-irc.dnsbl.oftc.net) Quit ()
[6:07] * x303 (~elt@ncc-1701-a.tor-exit.network) has joined #ceph
[6:09] * mdxi (~mdxi@50-199-109-154-static.hfc.comcastbusiness.net) has joined #ceph
[6:09] * xarses (~andreww@12.10.113.130) has joined #ceph
[6:10] * alram_ (~alram@64.134.221.151) has joined #ceph
[6:14] * alram (~alram@50.95.199.16) Quit (Ping timeout: 480 seconds)
[6:17] * Guest1104 (~Adium@123.201.253.233) Quit (Ping timeout: 480 seconds)
[6:24] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Remote host closed the connection)
[6:27] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[6:33] * georgem (~Adium@206-248-174-223.dsl.teksavvy.com) Quit (Quit: Leaving.)
[6:33] * RC (~Adium@123.201.211.155) has joined #ceph
[6:33] * RC is now known as Guest1109
[6:37] * yguang11 (~yguang11@12.31.82.125) Quit (Remote host closed the connection)
[6:37] * x303 (~elt@5NZAADJU3.tor-irc.dnsbl.oftc.net) Quit ()
[6:37] * n0x1d (~Scrin@herngaard.torservers.net) has joined #ceph
[6:44] * xinxinsh (~xinxinsh@shzdmzpr01-ext.sh.intel.com) Quit (Remote host closed the connection)
[6:47] * xinxinsh (~xinxinsh@shzdmzpr01-ext.sh.intel.com) has joined #ceph
[6:48] * amote (~amote@121.244.87.116) has joined #ceph
[6:52] * alram_ (~alram@64.134.221.151) Quit (Ping timeout: 480 seconds)
[6:55] * CheKoLyN (~saguilar@c-24-23-216-79.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[6:59] * scuttlemonkey is now known as scuttle|afk
[7:00] * Guest1109 (~Adium@123.201.211.155) Quit (Quit: Leaving.)
[7:03] * daviddcc (~dcasier@LCaen-656-1-144-187.w217-128.abo.wanadoo.fr) has joined #ceph
[7:07] * n0x1d (~Scrin@5NZAADJWC.tor-irc.dnsbl.oftc.net) Quit ()
[7:07] * Redshift (~AG_Clinto@politkovskaja.torservers.net) has joined #ceph
[7:08] * zaitcev (~zaitcev@2001:558:6001:10:61d7:f51f:def8:4b0f) Quit (Quit: Bye)
[7:21] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[7:21] * kefu (~kefu@124.127.168.156) has joined #ceph
[7:25] * karnan (~karnan@121.244.87.117) has joined #ceph
[7:28] * wschulze1 (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:28] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) has joined #ceph
[7:28] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) Quit ()
[7:30] * kefu (~kefu@124.127.168.156) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[7:31] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[7:35] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Ping timeout: 480 seconds)
[7:35] * kefu (~kefu@124.127.168.156) has joined #ceph
[7:36] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[7:37] * Redshift (~AG_Clinto@5NZAADJXN.tor-irc.dnsbl.oftc.net) Quit ()
[7:37] * eXeler0n (~Shnaw@176.10.99.206) has joined #ceph
[7:54] * haomaiwang (~haomaiwan@114.111.166.250) Quit (Remote host closed the connection)
[7:58] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[7:59] * kefu (~kefu@124.127.168.156) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[8:02] * dgbaley27 (~matt@c-67-176-93-83.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[8:03] * daviddcc (~dcasier@LCaen-656-1-144-187.w217-128.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[8:07] * eXeler0n (~Shnaw@5NZAADJZH.tor-irc.dnsbl.oftc.net) Quit ()
[8:11] * oro (~oro@84-72-20-79.dclient.hispeed.ch) has joined #ceph
[8:13] * daviddcc (~dcasier@LCaen-656-1-144-187.w217-128.abo.wanadoo.fr) has joined #ceph
[8:15] * vbellur (~vijay@121.244.87.117) has joined #ceph
[8:15] * xinxinsh (~xinxinsh@shzdmzpr01-ext.sh.intel.com) Quit (Remote host closed the connection)
[8:22] * Sysadmin88 (~IceChat77@054527d3.skybroadband.com) Quit (Quit: Not that there is anything wrong with that)
[8:26] * oro (~oro@84-72-20-79.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[8:28] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[8:34] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) has joined #ceph
[8:37] * Mattress (~Wizeon@195.169.125.226) has joined #ceph
[8:39] * b0e (~aledermue@213.95.25.82) has joined #ceph
[8:40] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[8:49] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[8:49] * treenerd (~treenerd@85.193.140.98) has joined #ceph
[8:50] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[8:52] * treenerd (~treenerd@85.193.140.98) Quit (Remote host closed the connection)
[8:53] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[8:53] <Be-El> hi
[8:53] <T1w> mornings
[8:54] * treenerd (~treenerd@85.193.140.98) has joined #ceph
[8:59] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) Quit (Quit: Leaving.)
[8:59] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[9:02] * branto (~borix@ip-213-220-214-203.net.upcbroadband.cz) has joined #ceph
[9:02] * nsoffer (~nsoffer@bzq-79-180-80-86.red.bezeqint.net) has joined #ceph
[9:05] * kefu (~kefu@221.179.140.132) has joined #ceph
[9:07] * Mattress (~Wizeon@5NZAADJ2Q.tor-irc.dnsbl.oftc.net) Quit ()
[9:07] * Xylios (~Tralin|Sl@tor.piratenpartei-nrw.de) has joined #ceph
[9:09] * RC (~Adium@121.244.95.222) has joined #ceph
[9:09] * RC is now known as Guest1116
[9:11] * owasserm (~owasserm@216.1.187.164) Quit (Ping timeout: 480 seconds)
[9:12] * kefu (~kefu@221.179.140.132) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[9:13] * nils_ (~nils@doomstreet.collins.kg) has joined #ceph
[9:16] * sleinen (~Adium@130.59.94.93) has joined #ceph
[9:17] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[9:17] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[9:18] * sleinen1 (~Adium@2001:620:0:82::10b) has joined #ceph
[9:20] * analbeard (~shw@support.memset.com) has joined #ceph
[9:22] * sleinen (~Adium@130.59.94.93) Quit (Read error: Connection reset by peer)
[9:24] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[9:24] * reed (~reed@net-2-39-63-50.cust.vodafonedsl.it) has joined #ceph
[9:27] * sleinen1 (~Adium@2001:620:0:82::10b) Quit (Read error: Connection reset by peer)
[9:27] * CAPSLOCK2000 (~oftc@2001:984:3be3:1::8) has joined #ceph
[9:28] * oblu (~o@62.109.134.112) Quit (Quit: ~)
[9:28] * kefu (~kefu@221.179.140.132) has joined #ceph
[9:29] * dgurtner (~dgurtner@178.197.231.53) has joined #ceph
[9:30] * oblu (~o@62.109.134.112) has joined #ceph
[9:30] * kefu (~kefu@221.179.140.132) Quit ()
[9:31] * kefu (~kefu@221.179.140.132) has joined #ceph
[9:33] * vbellur (~vijay@121.244.87.124) has joined #ceph
[9:37] * Xylios (~Tralin|Sl@5NZAADJ4D.tor-irc.dnsbl.oftc.net) Quit ()
[9:37] * thomnico (~thomnico@2a01:e35:8b41:120:bc5f:291e:3856:66bf) has joined #ceph
[9:38] * linjan (~linjan@109.253.51.166) has joined #ceph
[9:38] * sleinen (~Adium@130.59.94.93) has joined #ceph
[9:40] * sleinen1 (~Adium@2001:620:0:82::10d) has joined #ceph
[9:41] * kefu (~kefu@221.179.140.132) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[9:41] * linjan (~linjan@109.253.51.166) Quit (Read error: Connection reset by peer)
[9:42] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[9:44] * jashank42 (~jashan42@117.214.251.176) has joined #ceph
[9:44] * jeroen_ (~jeroen@37.74.194.90) Quit (Remote host closed the connection)
[9:46] * sleinen (~Adium@130.59.94.93) Quit (Ping timeout: 480 seconds)
[9:47] <smerz> good morning
[9:50] * yanzheng1 (~zhyan@125.71.107.110) has joined #ceph
[9:51] * xinxinsh (~xinxinsh@shzdmzpr01-ext.sh.intel.com) has joined #ceph
[9:53] * emik0 (~circuser-@91.241.13.28) Quit (Remote host closed the connection)
[9:54] * vbellur (~vijay@121.244.87.117) has joined #ceph
[9:54] * yanzheng (~zhyan@171.216.94.129) Quit (Ping timeout: 480 seconds)
[9:57] * rotbeard (~redbeard@x5f752118.dyn.telefonica.de) has joined #ceph
[9:59] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) has joined #ceph
[10:00] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[10:02] * sleinen1 (~Adium@2001:620:0:82::10d) Quit (Read error: Connection reset by peer)
[10:02] * oro (~oro@2001:620:20:16:159d:8f9e:2ff5:5914) has joined #ceph
[10:06] * sleinen (~Adium@130.59.94.93) has joined #ceph
[10:07] * Sophie1 (~Linkshot@176.10.104.240) has joined #ceph
[10:08] * ramonskie (ab15507e@107.161.19.53) has joined #ceph
[10:08] * sleinen1 (~Adium@2001:620:0:82::108) has joined #ceph
[10:09] * nsoffer (~nsoffer@bzq-79-180-80-86.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[10:10] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[10:10] * ramonskie (ab15507e@107.161.19.53) Quit ()
[10:15] * sleinen (~Adium@130.59.94.93) Quit (Ping timeout: 480 seconds)
[10:24] * Vivek_V_C (77970422@107.161.19.53) has joined #ceph
[10:25] <tuxcraft1r> morning all
[10:25] <tuxcraft1r> i used ceph-deploy to setup my ceph cluster
[10:26] <tuxcraft1r> i want to add a line to the ceph.conf on all my nodes
[10:28] <tuxcraft1r> should I do this manually on all my nodes
[10:28] <tuxcraft1r> or can i use ceph-deploy somehow?
[10:32] * Vivek_V_C (77970422@107.161.19.53) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[10:34] <tuxcraft1r> To send an updated copy of the Ceph configuration file to hosts in your cluster, use the config push command.
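A minimal sketch of what that documentation snippet means in practice (hostnames are placeholders; run from the directory holding the ceph.conf that ceph-deploy created):

    # edit ceph.conf in the ceph-deploy working directory, then push it out
    ceph-deploy --overwrite-conf config push node1 node2 node3
    # daemons only re-read ceph.conf on restart, so restart the affected
    # services on each node afterwards (exact command depends on the distro)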
[10:36] <peem_> Morning. I've tried yesterday with no great luck, so I'll ask again today : Anybody can help me debugging or setting up a radosgw on CentOS 7 ? ceph-deploy seems to fail at starting radosgw, while manual process left me with service that randomly refuses to start.
[10:37] * Sophie1 (~Linkshot@9S0AAAXWG.tor-irc.dnsbl.oftc.net) Quit ()
[10:37] * Epi (~xENO_@chomsky.torservers.net) has joined #ceph
[10:43] * oblu (~o@62.109.134.112) Quit (Ping timeout: 480 seconds)
[10:52] * jrocha (~jrocha@vagabond.cern.ch) has joined #ceph
[10:53] * The1w (~jens@node3.survey-it.dk) has joined #ceph
[10:54] * T1w (~jens@node3.survey-it.dk) Quit (Remote host closed the connection)
[10:57] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[10:58] <tuxcraft1r> peem_: sorry peem_ i have not used radosgw so far
[10:59] <tuxcraft1r> i added [client]
[10:59] <tuxcraft1r> rbd cache = true
[10:59] <tuxcraft1r> to all my nodes
[10:59] <tuxcraft1r> rebooted all my nodes
[10:59] <tuxcraft1r> but ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep cache
[10:59] <tuxcraft1r> keeps returning: "rbd_cache": "false",
[10:59] <tuxcraft1r> also http://ceph.com/docs/master/rbd/rbd-config-ref/
[10:59] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[10:59] <tuxcraft1r> is telling me the default is true ?
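A plausible explanation, not confirmed in the log: rbd cache is a librbd (client-side) option, so querying an OSD's admin socket only shows that OSD daemon's own copy of the setting, which says nothing about what VMs or other librbd clients actually use. A sketch of checking it on a client instead, assuming a client admin socket has been enabled (paths are illustrative):

    # on the client host (e.g. the hypervisor), not on an OSD node
    cat >> /etc/ceph/ceph.conf <<'EOF'
    [client]
        rbd cache = true
        admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
    EOF
    # with a librbd client (e.g. a qemu VM) running, query that client's
    # socket (pick the right .asok if several exist):
    ceph --admin-daemon /var/run/ceph/ceph-client.*.asok config show | grep rbd_cache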
[11:03] * nsoffer (~nsoffer@192.116.172.16) has joined #ceph
[11:06] * tganguly (~tganguly@121.244.87.117) has joined #ceph
[11:06] <tganguly> loicd, ping
[11:06] <tganguly> While creating ecpool the pgs are getting stuck in inactive/unclean state...
[11:07] <tganguly> Please find the details in link
[11:07] <tganguly> http://paste2.org/0mVWsKJd
[11:07] <loicd> tganguly: could you show the output of ceph osd dump / ceph pg dump ? I suspect some PG are missing OSD
[11:07] <loicd> [NONE,NONE,NONE]
[11:07] <tganguly> yes
[11:07] * CAPSLOCK2000 (~oftc@2001:984:3be3:1::8) Quit (Ping timeout: 480 seconds)
[11:07] <loicd> means these PG have no osd
[11:07] * Epi (~xENO_@7R2AABLSW.tor-irc.dnsbl.oftc.net) Quit ()
[11:07] <tganguly> i also pasted the commands
[11:08] <tganguly> http://paste2.org/XsCfd98B
[11:08] <tganguly> loicd, ^^
[11:08] <tganguly> osd tree
[11:08] * Grum (~hyst@212.7.194.71) has joined #ceph
[11:09] <loicd> ceph osd dump ?
[11:09] <tganguly> pasted the link
[11:09] <loicd> I think you're not using the crush ruleset you think you are using
[11:09] * jashank42 (~jashan42@117.214.251.176) Quit (Ping timeout: 480 seconds)
[11:10] <loicd> tganguly: which link ?
[11:10] <tganguly> where i am doing wrong ?
[11:10] <tganguly> http://paste2.org/XsCfd98B
[11:10] <tganguly> ceph osd dump output
[11:10] <loicd> tganguly: this is ceph osd tree
[11:10] <loicd> I need ceph osd dump
[11:10] <loicd> which is different
[11:11] <tganguly> sorry wrong link
[11:11] <tganguly> this is correct one
[11:11] <tganguly> http://paste2.org/CIXA3JpN
[11:13] * haomaiwang (~haomaiwan@118.244.254.29) has joined #ceph
[11:13] <tganguly> correct link, ignore the previous one
[11:13] <tganguly> http://paste2.org/eM81bIxm
[11:13] <loicd> 9.e 0 0 0 0 0 0 0 0 creating 0.000000 0'0 0:0 [NONE,NONE,NONE] -1 [NONE,NONE,NONE] -1 0'0 2015-06-10 04:47:55.642688 0'0 2015-06-10 04:47:55.642688
[11:13] <tganguly> loicd, ^^
[11:13] <loicd> is from pool 9
[11:14] <loicd> tganguly: but pool nine does not seem to exist according to http://paste2.org/eM81bIxm
[11:14] * Concubidated1 (~Adium@66.87.126.83) has joined #ceph
[11:14] <loicd> tganguly: what is the full output of ceph pg dump ?
[11:16] * mtanski (~mtanski@65.244.82.98) Quit (Read error: Connection reset by peer)
[11:17] * mtanski (~mtanski@65.244.82.98) has joined #ceph
[11:17] <loicd> tganguly: I think the ruleset created by the erasure code plugin does not work because there is only one level (root ssd). Could you show the full output of the crush table please ?
[11:17] <tganguly> loicd, pm
[11:18] * sleinen1 (~Adium@2001:620:0:82::108) Quit (Read error: Connection reset by peer)
[11:18] <tganguly> loicd, ^^
[11:19] * Concubidated (~Adium@66-87-127-163.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[11:20] * jashank42 (~jashan42@117.197.178.40) has joined #ceph
[11:21] * sleinen (~Adium@130.59.94.93) has joined #ceph
[11:21] <loicd> tganguly: actually I think I just need the crush map
[11:21] * sleinen (~Adium@130.59.94.93) Quit ()
[11:23] * nsoffer (~nsoffer@192.116.172.16) Quit (Ping timeout: 480 seconds)
[11:23] <loicd> tganguly: you have http://paste2.org/HkUwbUO3
[11:23] <loicd> step chooseleaf indep 0 type host
[11:24] <loicd> and because there are no host at all, it fails, always
[11:25] * bitserker (~toni@88.87.194.130) has joined #ceph
[11:25] * reed_ (~reed@net-2-39-63-50.cust.vodafonedsl.it) has joined #ceph
[11:25] <loicd> from http://paste2.org/0mVWsKJd it shows that you have created a profile with ruleset-failure-domain=osd
[11:25] <loicd> as you should have, this is good
[11:25] <loicd> but
[11:25] <loicd> I suspect you tried a first time without ruleset-failure-domain=osd and it created this ecpool ruleset
[11:25] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[11:25] <loicd> the simplest way to verify this is to run
[11:25] * daviddcc (~dcasier@LCaen-656-1-144-187.w217-128.abo.wanadoo.fr) Quit (Remote host closed the connection)
[11:26] <loicd> ceph osd pool create ecpool2 1 1 erasure myprofile
[11:26] <loicd> (there is going to be just one PG but that's ok)
[11:26] <loicd> as a side effect it will create the crush ruleset ecpool2 which will be correct
[11:26] <loicd> tganguly: does that make sense ?
[11:28] <tganguly> it is working now...but didnt get
[11:28] <tganguly> * the complete picture
[11:29] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[11:29] <loicd> the reason why it's not working is because it is re-using an incorrect crush ruleset that you created implicitly
[11:29] <loicd> when an implicit ruleset is created for the benefit of an erasure coded pool, it is not deleted afterwards
[11:29] <loicd> not deleted when the pool is deleted I mean
[11:30] <tganguly> then isn't it a BUG
[11:30] * jeroenvh (~jeroen@37.74.194.90) has joined #ceph
[11:30] <loicd> tganguly: no :-)
[11:30] * badone__ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[11:31] * linjan (~linjan@195.110.41.9) has joined #ceph
[11:31] <tganguly> loicd, :)
[11:31] <tganguly> loicd, Thanks now it works
[11:31] <loicd> it's an incorrect ruleset and you can get rid of it with ceph osd crush rule rm ecpool
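Pulling loicd's fix together, a sketch of the whole sequence (profile name, k/m values and pool name are illustrative):

    # erasure-code profile whose failure domain is the OSD rather than the host
    ceph osd erasure-code-profile set myprofile k=2 m=1 ruleset-failure-domain=osd
    # creating the pool also creates a matching crush ruleset
    ceph osd pool create ecpool2 1 1 erasure myprofile
    # remove the stale ruleset left over from the earlier attempt
    ceph osd crush rule rm ecpool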
[11:31] * reed (~reed@net-2-39-63-50.cust.vodafonedsl.it) Quit (Ping timeout: 480 seconds)
[11:32] <loicd> happy to help :-)
[11:32] <tganguly> loicd, got it thanks
[11:32] * pikos (~pikos@c-24-60-202-92.hsd1.ma.comcast.net) has joined #ceph
[11:33] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[11:34] * pikos (~pikos@c-24-60-202-92.hsd1.ma.comcast.net) Quit ()
[11:35] * pikos (~pikos@c-24-60-202-92.hsd1.ma.comcast.net) has joined #ceph
[11:35] * daviddcc (~dcasier@LCaen-656-1-144-187.w217-128.abo.wanadoo.fr) has joined #ceph
[11:37] * Grum (~hyst@9S0AAAXZF.tor-irc.dnsbl.oftc.net) Quit ()
[11:37] * N3X15 (~Zeis@aurora.enn.lu) has joined #ceph
[11:39] * pikos (~pikos@c-24-60-202-92.hsd1.ma.comcast.net) has left #ceph
[11:43] * karnan (~karnan@121.244.87.117) Quit (Quit: Leaving)
[11:43] * karnan (~karnan@121.244.87.117) has joined #ceph
[11:46] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[11:47] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) has joined #ceph
[11:54] * jks (~jks@178.155.151.121) has joined #ceph
[11:55] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[12:01] * sleinen (~Adium@130.59.94.93) has joined #ceph
[12:02] * sleinen1 (~Adium@2001:620:0:82::101) has joined #ceph
[12:07] * N3X15 (~Zeis@7R2AABLUR.tor-irc.dnsbl.oftc.net) Quit ()
[12:07] * Aethis (~Drezil@chomsky.torservers.net) has joined #ceph
[12:09] * sleinen (~Adium@130.59.94.93) Quit (Ping timeout: 480 seconds)
[12:09] * daviddcc (~dcasier@LCaen-656-1-144-187.w217-128.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[12:13] * jklare (~jklare@185.27.181.36) Quit (Ping timeout: 480 seconds)
[12:15] * sankarshan (~sankarsha@183.87.39.242) Quit (Ping timeout: 480 seconds)
[12:16] * nsoffer (~nsoffer@bzq-79-180-80-86.red.bezeqint.net) has joined #ceph
[12:18] * sleinen1 (~Adium@2001:620:0:82::101) Quit (Ping timeout: 480 seconds)
[12:23] * sankarshan (~sankarsha@183.87.39.242) has joined #ceph
[12:26] * sleinen (~Adium@130.59.94.93) has joined #ceph
[12:27] * sleinen1 (~Adium@2001:620:0:82::101) has joined #ceph
[12:30] * jashank42 (~jashan42@117.197.178.40) Quit (Ping timeout: 480 seconds)
[12:34] * sleinen (~Adium@130.59.94.93) Quit (Ping timeout: 480 seconds)
[12:35] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[12:37] * Aethis (~Drezil@5NZAADKC0.tor-irc.dnsbl.oftc.net) Quit ()
[12:37] * Kaervan1 (~rhonabwy@9S0AAAX24.tor-irc.dnsbl.oftc.net) has joined #ceph
[12:38] * yerrysherry (~yerrysher@ns.milieuinfo.be) has joined #ceph
[12:40] * yerrysherry (~yerrysher@ns.milieuinfo.be) Quit ()
[12:40] * jashank42 (~jashan42@117.207.182.51) has joined #ceph
[12:43] * calvinx is now known as Guest1125
[12:43] * Guest1125 (~calvin@101.100.172.246) Quit (Read error: Connection reset by peer)
[12:43] * calvinx (~calvin@101.100.172.246) has joined #ceph
[12:47] * Flynn (~stefan@89.207.24.152) has joined #ceph
[12:50] <Flynn> Hi All; I'm installing ceph (Hammer) on a 3-node cluster and after a stresstest, one node failed (out of memory) and hung. I'd like to go back to the Giant release, but ceph-deploy insists on installing hammer (while my apt repository is set to http://ceph.com/debian-giant/). It actually also replaces /etc/apt/sources.list.d/ceph.list. How can I prevent that and install Giant instead of Hammer?
[12:50] * thomnico (~thomnico@2a01:e35:8b41:120:bc5f:291e:3856:66bf) Quit (Ping timeout: 480 seconds)
[12:54] <jcsp> ceph-deploy has a command line argument for what version you want, it's something like --stable=giant
[12:58] <Flynn> hmm, not in the version that I have apparently.
[12:58] <Flynn> ceph-deploy v1.5.25
[12:58] <jcsp> pretty sure it's had it for a while. Not promising that's what it's called
[13:00] * reed_ (~reed@net-2-39-63-50.cust.vodafonedsl.it) Quit (Quit: Ex-Chat)
[13:00] * sleinen1 (~Adium@2001:620:0:82::101) Quit (Read error: Connection reset by peer)
[13:01] <Flynn> it is "ceph-deploy install --release emperor"
[13:01] <Flynn> thankx :)
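For the Giant case discussed above, that would presumably look like this (hostnames are placeholders); ceph-deploy rewrites /etc/apt/sources.list.d/ceph.list to match the chosen release:

    ceph-deploy install --release giant node1 node2 node3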
[13:04] * jklare (~jklare@185.27.181.36) has joined #ceph
[13:07] * Kaervan1 (~rhonabwy@9S0AAAX24.tor-irc.dnsbl.oftc.net) Quit ()
[13:08] * xanax` (~biGGer@nx-74205.tor-exit.network) has joined #ceph
[13:11] * thomnico (~thomnico@cro38-2-88-180-16-18.fbx.proxad.net) has joined #ceph
[13:15] * thomnico (~thomnico@cro38-2-88-180-16-18.fbx.proxad.net) Quit (Remote host closed the connection)
[13:18] <peem_> is radosgw in hammer capable of doing ssl ? Any hints on how to do it if so ?
[13:18] * amote (~amote@121.244.87.116) Quit (Ping timeout: 480 seconds)
[13:19] <flaf> peem_: I'm not an expert but I think it's possible with apache2 but not with civetweb (not completely sure).
[13:22] * ganders (~root@190.2.42.21) has joined #ceph
[13:23] * amote (~amote@121.244.87.116) has joined #ceph
[13:24] * zhaochao (~zhaochao@125.39.8.226) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 38.0.1/20150526223604])
[13:28] * oblu (~o@62.109.134.112) has joined #ceph
[13:29] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[13:33] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[13:34] * jashank42 (~jashan42@117.207.182.51) Quit (Ping timeout: 480 seconds)
[13:36] * oro (~oro@2001:620:20:16:159d:8f9e:2ff5:5914) Quit (Ping timeout: 480 seconds)
[13:37] * xanax` (~biGGer@5NZAADKGA.tor-irc.dnsbl.oftc.net) Quit ()
[13:39] * yerrysherry (~yerrysher@ns.milieuinfo.be) has joined #ceph
[13:41] * jclm (~jclm@121.244.87.117) Quit (Quit: Leaving.)
[13:42] * jashank42 (~jashan42@61.1.52.239) has joined #ceph
[13:45] * oblu (~o@62.109.134.112) Quit (Remote host closed the connection)
[13:46] * xinxinsh (~xinxinsh@shzdmzpr01-ext.sh.intel.com) Quit (Remote host closed the connection)
[13:47] * georgem (~Adium@184.151.190.197) has joined #ceph
[13:48] * KevinPerks (~Adium@2606:a000:80ad:1300:ad4e:5f74:d689:ce54) has joined #ceph
[13:50] * georgem (~Adium@184.151.190.197) Quit ()
[13:50] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[13:52] * flisky (~Thunderbi@106.39.60.34) Quit (Ping timeout: 480 seconds)
[13:53] * oblu (~o@62.109.134.112) has joined #ceph
[13:58] * amote (~amote@121.244.87.116) Quit (Ping timeout: 480 seconds)
[13:59] * ganders_ (~root@190.2.42.21) has joined #ceph
[14:00] * yerrysherry (~yerrysher@ns.milieuinfo.be) Quit (Ping timeout: 480 seconds)
[14:01] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) has joined #ceph
[14:01] * rlrevell (~leer@184.52.129.221) Quit (Quit: Leaving.)
[14:03] * garphy`aw is now known as garphy
[14:05] <rotbeard> is there a cli shortcut to see the sum of all objects in one cluster (over all pools)?
[14:05] * ganders (~root@190.2.42.21) Quit (Ping timeout: 480 seconds)
[14:06] <flaf> rotbeard: I know `rados -p "$pool_name" ls -` but it's just for a specific pool.
[14:07] <rotbeard> flaf, yep. with 'rados df' or 'ceph df' I also see all objects but just per pool
[14:08] <flaf> Ah sorry you just want the number of objects.
[14:08] * yerrysherry (~yerrysher@193.191.136.132) has joined #ceph
[14:08] * Sliker (~Behedwin@spftor1e1.privacyfoundation.ch) has joined #ceph
[14:08] <georgem> rotbeard: ceph -s
[14:08] * Kioob`Taff (~plug-oliv@2a01:e35:2e8a:1e0::42:10) has joined #ceph
[14:08] * monsted (~monsted@rootweiler.dk) has joined #ceph
[14:09] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[14:09] <rotbeard> georgem, so output like "recovery 4500203/36787000" will show the sum of all objects? 36787000 in that example
[14:10] * amote (~amote@121.244.87.116) has joined #ceph
[14:12] <georgem> rotbeard: the pgmap line ends with total number of objects in all the pools
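A sketch of getting the same number non-interactively (the JSON field names are assumed from roughly that era of Ceph and may differ between releases):

    # human-readable: the pgmap line of `ceph -s` ends with the total object count
    ceph -s | grep pgmap
    # machine-readable: sum the per-pool object counts from `ceph df`
    ceph df --format json | python -c \
        'import json,sys; d=json.load(sys.stdin); print(sum(p["stats"]["objects"] for p in d["pools"]))'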
[14:14] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[14:14] <rotbeard> georgem, i see. thanks. sometimes someone misses the forest for the trees ;)
[14:15] * xarses (~andreww@12.10.113.130) Quit (Ping timeout: 480 seconds)
[14:18] * tganguly (~tganguly@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:21] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has joined #ceph
[14:22] * jashank42 (~jashan42@61.1.52.239) Quit (Ping timeout: 480 seconds)
[14:22] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has left #ceph
[14:24] * joao (~joao@249.38.136.95.rev.vodafone.pt) has joined #ceph
[14:24] * ChanServ sets mode +o joao
[14:24] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[14:25] * i_m (~ivan.miro@deibp9eh1--blueice4n2.emea.ibm.com) has joined #ceph
[14:28] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[14:29] * rwheeler (~rwheeler@pool-173-48-214-9.bstnma.fios.verizon.net) has joined #ceph
[14:29] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[14:34] * The1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[14:35] * oro (~oro@2001:620:20:16:2886:b379:e3d5:a8c1) has joined #ceph
[14:37] * Sliker (~Behedwin@5NZAADKI0.tor-irc.dnsbl.oftc.net) Quit ()
[14:37] * KUSmurf (~Salamande@89.105.194.77) has joined #ceph
[14:40] * wkennington (~william@76.77.181.17) Quit (Remote host closed the connection)
[14:42] * linuxkidd (~linuxkidd@207.236.250.131) has joined #ceph
[14:43] * jrankin (~jrankin@nat-pool-bos-t.redhat.com) has joined #ceph
[14:43] * jashank42 (~jashan42@117.220.136.229) has joined #ceph
[14:44] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[14:45] * squ (~Thunderbi@46.109.36.167) Quit (Quit: squ)
[14:49] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[14:51] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) has joined #ceph
[14:53] * jskinner (~jskinner@host-95-2-129.infobunker.com) has joined #ceph
[14:54] * scuttle|afk is now known as scuttlemonkey
[14:55] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Ping timeout: 480 seconds)
[14:58] * jskinner (~jskinner@host-95-2-129.infobunker.com) Quit (Quit: Leaving...)
[14:58] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[15:02] * wenjunhuang__ (~wenjunhua@61.135.172.68) Quit (Quit: Leaving)
[15:04] * sjm (~sjm@pool-173-70-76-86.nwrknj.fios.verizon.net) has joined #ceph
[15:04] * m0zes (~mozes@beocat.cis.ksu.edu) has joined #ceph
[15:04] * m0zes (~mozes@beocat.cis.ksu.edu) Quit ()
[15:05] * yerrysherry (~yerrysher@193.191.136.132) Quit (Ping timeout: 480 seconds)
[15:05] * yerrysherry (~yerrysher@193.191.136.132) has joined #ceph
[15:06] * m0zes (~mozes@beocat.cis.ksu.edu) has joined #ceph
[15:07] * KUSmurf (~Salamande@8Q4AABE41.tor-irc.dnsbl.oftc.net) Quit ()
[15:07] * basicxman (~Shnaw@tor-elpresidente.piraten-nds.de) has joined #ceph
[15:10] <cmdrk> i have an EC pool that has a down node, and all of the PGs have remapped except one. in the pg query, one of its 'up' OSDs has a strange value http://fpaste.org/230703/43394178/
[15:10] <cmdrk> 2147483647,
[15:14] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[15:15] <cmdrk> this is on 0.94.1
[15:15] * tupper (~tcole@173.38.117.84) has joined #ceph
[15:17] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:17] * jclm (~jclm@122.181.21.134) has joined #ceph
[15:18] * dyasny (~dyasny@198.251.58.23) has joined #ceph
[15:19] * _prime_ (~oftc-webi@199.168.44.192) has joined #ceph
[15:20] * wkennington (~william@76.77.181.17) has joined #ceph
[15:20] <Be-El> cmdrk: you do not have enough hosts to satisfy your ec rule
[15:21] <_prime_> good morning. I have a cluster with 93 OSDs on 19 nodes. Yesterday a node crashed (taking out 5 OSDs). I brought it back up, but two disks are permanently gone. The backfill seemed to work fine, but I came in this morning and it seems to be stuck with 827686/22785118 objects misplaced. recovery 370/22785279 objects degraded (0.002%) is slowly ticking up. Does this usually take this long, and is there a way I can speed this up?
[15:21] <_prime_> the entire output of ceph health: HEALTH_WARN 200 pgs backfill_toofull; 200 pgs stuck unclean; recovery 373/22785446 objects degraded (0.002%); recovery 827702/22785446 objects misplaced (3.633%); 1 near full osd(s)
[15:22] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[15:22] * yanzheng1 (~zhyan@125.71.107.110) Quit (Quit: This computer has gone to sleep)
[15:22] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[15:22] * yanzheng1 (~zhyan@125.71.107.110) has joined #ceph
[15:23] <Be-El> _prime_: you can take care for the osd that is too full to support further backfilling.....
[15:23] * xarses (~andreww@166.175.186.204) has joined #ceph
[15:23] <_prime_> Be-El: this is going to sound stupid, but how do I do that?
[15:24] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) Quit ()
[15:24] <Be-El> _prime_: add more osds?
[15:24] <_prime_> Be-El, I can't really do that atm
[15:24] <Be-El> _prime_: your cluster is either full or your data is not balanced in an ideal way
[15:24] <Be-El> _prime_: both are undesired states
[15:25] <_prime_> ceph df GLOBAL: SIZE AVAIL RAW USED %RAW USED 167T 81046G 90147G 52.66
[15:25] <_prime_> gah, ceph df says 52.66% used, 81946G available
[15:25] <_prime_> do I need to create a new crush map to rebalance?
[15:25] <Be-El> _prime_: that's the global state. check the output of 'ceph osd df'
[15:26] <Be-El> you will notice that the amount of free space varies between the osds
[15:26] <_prime_> I have one that is 89% full: 55 1.81999 1.00000 1860G 1670G 190G 89.79 1.71
[15:26] <Be-El> and if you have a look at the output of 'ceph health detail', it will list the affected osd at the end
[15:27] <_prime_> yup, confirmed, OSD.55: recovery 415/22787071 objects degraded (0.002%) recovery 827824/22787071 objects misplaced (3.633%) osd.55 is near full at 89%
[15:27] <_prime_> is there a good approach to rebalancing without adding additional OSDs?
[15:29] <_prime_> i suppose an extreme case would be to take the osd offline and let things backfill without it, but i'm nervous about doing that with a bunch of objects still misplaced and my degraded object count slowly ticking up.
[15:29] * rlrevell (~leer@vbo1.inmotionhosting.com) Quit (Read error: Connection reset by peer)
[15:29] * vbellur (~vijay@121.244.87.124) has joined #ceph
[15:29] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[15:30] * alram (~alram@50.95.199.16) has joined #ceph
[15:30] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[15:30] <Be-El> _prime_: you have several options, none is optimal
[15:31] <Be-El> _prime_: you can temporarly raise the backfill full ratio for the affected osd, e.g. to 0.9
[15:31] <burley> _prime_: I hit a similar issue where it just didn't balance the data right, ended up stopping the near full OSDs, moving their data elsewhere so the OSD wouldn't have any pg data at all, and then restarted it -- the backfills made it balanced again
[15:32] <Be-El> _prime_: you can lower the weight of the affected osd and hope that less data is send to it -> small data movement in cluster
[15:32] <Be-El> _prime_: you can reweight all osds according to their usage. there's a ceph command for this -> large data movement in cluster!
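The three options Be-El lists correspond roughly to these commands; the osd id, ratios and the utilization threshold are examples, not recommendations:

    # 1) temporarily raise the backfill-full threshold on the affected OSD
    ceph tell osd.55 injectargs '--osd-backfill-full-ratio 0.90'
    # 2) lower just that OSD's override weight so CRUSH sends it less data
    ceph osd reweight 55 0.90
    # 3) reweight every OSD according to utilization -- expect large data movement
    ceph osd reweight-by-utilization 110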
[15:33] <Be-El> burley: do you use btrfs for the osds?
[15:33] <_prime_> xfs
[15:33] <burley> no
[15:33] <burley> osds are on xfs
[15:33] <Be-El> burley: in that case restarting the osd should not release any occupied space and thus not influence the backfill blockade
[15:34] <burley> Be-El: I moved the pg data off the OSD, that freed up all the space
[15:34] <Be-El> burley: but i had a similar effect with btrfs and stale snapshot data that was removed on osd restart
[15:34] <burley> then when it backfilled, it backfilled to a proper amount
[15:34] <Be-El> burley: theoretically it should have moved the same amount of data back to the osd
[15:35] <burley> Yes, but it didn't
[15:35] <_prime_> so I think the 3rd option is probably a bad idea (large data movement). What is the trade-off for changing the backfill ratio vs lowering the weight of the affected OSD?
[15:35] <burley> so its not cleaning up something due to some other backfill
[15:35] <Be-El> _prime_: i would consider both temporary hacks
[15:36] <_prime_> what's the real solution ... reweight according to usage (on the weekend)?
[15:36] <_prime_> obviously I'd love to add more OSDs, but I don't have the hardware atm
[15:36] * rotbeard (~redbeard@x5f752118.dyn.telefonica.de) Quit (Quit: Leaving)
[15:36] <Be-El> _prime_: the real solution would be finding out why the data is unbalanced
[15:36] <burley> _prime_: When I did that, it was really only a band-aid -- but I did try re-weighting
[15:37] <burley> _prime_: what release are you on?
[15:37] <Be-El> _prime_: do you use a heterogenous osd setup (different sizes etc.)?
[15:37] <_prime_> Be-El: a large, unnecessary pool was removed. That's probably why ... it eliminated the warning of having too many PGs per OSD and improved backfill performance (until I hit this blockade)
[15:37] <burley> in my case, all the OSDs were the same size and ever host has the same number of OSDs
[15:37] * basicxman (~Shnaw@9S0AAAYAQ.tor-irc.dnsbl.oftc.net) Quit ()
[15:37] <_prime_> Be-El: no, all our OSDs are 2TB
[15:38] <_prime_> same disks, same hardware, with SSD log disk
[15:38] <burley> _prime_: I hit my issue when I was increasing the number of pg's -- so the reverse of your operation
[15:38] <Be-El> _prime_: hmm..it really sounds like some abandoned data on the full osd
[15:39] <_prime_> interesting. Would shutting the OSD off help?
[15:39] <burley> so, given the similarities, I think my suggestion while not elegant is worth a shot
[15:39] <Be-El> _prime_: you can try it, if the cluster is in an otherwise healthy state
[15:40] <_prime_> ceph osd out osd.55 enough, or do I need to remove it from the crush map as well?
[15:40] <Be-El> just stop and restart the osd daemon
[15:40] * oro (~oro@2001:620:20:16:2886:b379:e3d5:a8c1) Quit (Ping timeout: 480 seconds)
[15:40] <_prime_> ok, i'll give that a go. then try reducing the backfill ratio if that doesn't work I guess
[15:40] * Guest1116 (~Adium@121.244.95.222) Quit (Ping timeout: 480 seconds)
[15:40] <_prime_> thanks!
[15:41] * daviddcc (~dcasier@LCaen-656-1-144-187.w217-128.abo.wanadoo.fr) has joined #ceph
[15:42] <_prime_> wow, shutting down the OSDs just reduced misplaced objects dramatically: HEALTH_WARN 84 pgs backfilling; 425 pgs degraded; 1 pgs stuck degraded; 622 pgs stuck unclean; 424 pgs undersized; recovery 227400/22492595 objects degraded (1.011%); recovery 517128/22492595 objects misplaced (2.299%); 1 near full osd(s); 1/92 in osds are down
[15:42] <_prime_> 2.277% now vs 3.633% before
[15:44] <burley> _prime_: if your cluster gets back to health_ok with it offline, delete all of its pg data and then start it back up
[15:45] <burley> then it should backfill properly
[15:46] <_prime_> burley, cool thanks! I assume that means just nuke the pg data on the partition itself in /var/lib/ceph/osd/ceph-55/current
[15:47] <burley> the contents of that dir, yes
[15:47] <burley> I wouldn't delete current
[15:47] <_prime_> awesome, thanks!
[15:47] <_prime_> right, not the dir just its contents
[15:47] * alram (~alram@50.95.199.16) Quit (Quit: leaving)
[15:48] <Be-El> a cleaner solution is removing the osd from ceph, zap the disk and readd the osd
[15:49] <Be-El> there are more files in the osd partition than just the pg data files...
[15:49] <burley> right, but not trying to remove the OSD, just its data files so it'll refill
[15:49] <burley> but either way, sure, that'd work too
[15:50] <Be-El> it would be interesting to know whether the partition is really that full because of pg data files
[15:51] * shylesh (~shylesh@121.244.87.124) Quit (Remote host closed the connection)
[15:51] <Be-El> or whether there are some book keeping mechanism that leave data and fill up the partition
[15:52] <cmdrk> Be-El: that's interesting, because every other PG mapped fine. i have k=10,m=3 and 14 hosts (with 1 down, hence remapping)
[15:52] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[15:52] <Be-El> cmdrk: the number is "-1" which indicates that crush is not able to find an osd
[15:53] <cmdrk> curious
[15:53] <Be-El> cmdrk: with 13 chunks and 13 hosts the odds are good that crush is not able to distribute all chunks
[15:53] <Be-El> cmdrk: you can try to raise the number of retries in the crush rule
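A sketch of where those retry settings live, using the standard crush map round-trip (the tries values shown are just commonly suggested examples for wide EC rules):

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # in the EC pool's rule inside crush.txt, add or raise:
    #   step set_chooseleaf_tries 5
    #   step set_choose_tries 100
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new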
[15:54] * thomnico (~thomnico@2a01:e35:8b41:120:5091:fbca:d9a8:7945) has joined #ceph
[15:54] * dyasny (~dyasny@198.251.58.23) Quit (Ping timeout: 480 seconds)
[15:54] <Be-El> but i do not have first hand experience with ec pools
[15:55] <cmdrk> Be-El: why is there a discrepancy between 'acting' and 'up', then? i see 13 OSDs in 'acting'
[15:55] <cmdrk> where i see 2147483647 in 'up', i see osd 72 in acting
[15:55] * zaitcev (~zaitcev@2001:558:6001:10:61d7:f51f:def8:4b0f) has joined #ceph
[15:55] <cmdrk> ah hm
[15:55] <cmdrk> osd 72 and 79 are on the same host
[15:56] <cmdrk> so that's problematic, i would assume
[15:56] <Be-El> it depends on your ec rule
[15:56] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[15:56] <Be-El> but as i said....i cannot help you with the details of ec pools
[15:57] * cmdrk nods
[15:57] <cmdrk> seems reasonable. thanks Be-El
[16:00] <_prime_> current is still doing a du -sh, but it is by far the bulk of the disk usage: https://www.refheap.com/102384
[16:00] * trociny (~mgolub@93.183.239.2) Quit (Quit: ??????????????)
[16:01] * harlequin (~loris@62-193-45-2.as16211.net) has joined #ceph
[16:01] * vbellur (~vijay@121.244.87.124) has joined #ceph
[16:02] <harlequin> Hi all! I'm in the process of planning a new cluster, I'd like to know if the public network defined in the cluster is used to limit access to clients
[16:02] * beardo_ (~sma310@207-172-244-241.c3-0.atw-ubr5.atw.pa.cable.rcn.com) Quit (Read error: Connection reset by peer)
[16:03] * beardo_ (~sma310@207-172-244-241.c3-0.atw-ubr5.atw.pa.cable.rcn.com) has joined #ceph
[16:03] <harlequin> In other words: do clients need to have an address in the network/netmask space, or they only need to be able to route their traffic to the cluster nodes (and back ;) )?
[16:04] <Be-El> harlequin: routing should be enough
[16:04] * kefu (~kefu@114.86.215.22) has joined #ceph
[16:04] * yerrysherry (~yerrysher@193.191.136.132) Quit (Ping timeout: 480 seconds)
[16:04] * linuxkidd (~linuxkidd@207.236.250.131) Quit (Quit: Leaving)
[16:04] * cooldharma06 (~chatzilla@14.139.180.40) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 21.0/20130515140136])
[16:05] * thomnico (~thomnico@2a01:e35:8b41:120:5091:fbca:d9a8:7945) Quit (Ping timeout: 480 seconds)
[16:05] <Be-El> harlequin: but osd and mon hosts need a network interface in that network, since they bind to its ip address
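In ceph.conf terms that looks roughly like the excerpt below (subnets are placeholders); mons and OSDs bind to an address inside the public network, while clients only need a route to it:

    [global]
        public network  = 192.0.2.0/24
        # optional; carries inter-OSD replication and recovery traffic only
        cluster network = 198.51.100.0/24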
[16:05] * marrusl (~mark@nat-pool-rdu-u.redhat.com) has joined #ceph
[16:06] * linuxkidd (~linuxkidd@207.236.250.131) has joined #ceph
[16:07] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[16:07] * kefu (~kefu@114.86.215.22) Quit (Max SendQ exceeded)
[16:07] * cryptk (~sese_@9S0AAAYDL.tor-irc.dnsbl.oftc.net) has joined #ceph
[16:08] <harlequin> Thank you very much Be-El :)
[16:08] * Concubidated1 (~Adium@66.87.126.83) Quit (Quit: Leaving.)
[16:10] * harlequin (~loris@62-193-45-2.as16211.net) has left #ceph
[16:10] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Read error: Connection reset by peer)
[16:10] * vata (~vata@cable-21.246.173-197.electronicbox.net) Quit (Quit: Leaving.)
[16:13] * dopesong (~dopesong@lb1.mailer.data.lt) has joined #ceph
[16:13] * daviddcc (~dcasier@LCaen-656-1-144-187.w217-128.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[16:15] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[16:15] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[16:15] * shang (~ShangWu@175.41.48.77) Quit (Remote host closed the connection)
[16:15] <visbits> morning
[16:19] * reed (~reed@host253-183-static.123-81-b.business.telecomitalia.it) has joined #ceph
[16:19] * yerrysherry (~yerrysher@ns.milieuinfo.be) has joined #ceph
[16:24] * reed (~reed@host253-183-static.123-81-b.business.telecomitalia.it) Quit ()
[16:25] * yanzheng1 (~zhyan@125.71.107.110) Quit (Quit: This computer has gone to sleep)
[16:26] * reed (~reed@host253-183-static.123-81-b.business.telecomitalia.it) has joined #ceph
[16:28] * yerrysherry (~yerrysher@ns.milieuinfo.be) Quit (Ping timeout: 480 seconds)
[16:30] * sig_wall (adjkru@xn--hwgz2tba.lamo.su) has joined #ceph
[16:30] * yanzheng (~zhyan@125.71.107.110) has joined #ceph
[16:30] * xarses (~andreww@166.175.186.204) Quit (Quit: Leaving)
[16:30] * xarses (~andreww@166.175.186.204) has joined #ceph
[16:31] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) has joined #ceph
[16:31] * Concubidated (~Adium@192.41.52.12) has joined #ceph
[16:31] <ganders_> hi all, i've a ceph cluster with 4 osd nodes (each with 9 osd daemons - 3TB disks) and journals on SSD (120G not-too-good model), and i'm noticing some high i/o wait on the disks, very similar in the four nodes... i want to know if maybe tuning the nr_requests, max_sectors_kb, max_hw_sectors_kb and read_ahead_kb could help me to lower a little bit the high io wait, any ideas/recommendations?
[16:32] <ganders_> I currently have: max_hw_sectors_kb = 16383; max_sectors_kb = 512; nr_requests = 128 and read_ahead_kb = 128
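For reference, those queue parameters are per-device sysfs attributes; a sketch of inspecting and temporarily changing them (device name and values are placeholders, and any benefit is workload dependent):

    # inspect current values for one data disk
    cat /sys/block/sdb/queue/nr_requests /sys/block/sdb/queue/read_ahead_kb /sys/block/sdb/queue/max_sectors_kb
    # bump them for testing (not persistent across reboots)
    echo 4096 > /sys/block/sdb/queue/nr_requests
    echo 4096 > /sys/block/sdb/queue/read_ahead_kb
    # max_sectors_kb may be raised up to max_hw_sectors_kb
    echo 4096 > /sys/block/sdb/queue/max_sectors_kb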
[16:34] <jeroenvh> iowait to the physical disks?
[16:34] <jeroenvh> how do you measure this? with iostat on the actual node ?
[16:36] <ktdreyer> flaf: you're right, SSL is possible with apache2 but civetweb will need some sort of SSL proxy in front of it (like apache/nginx/stud)
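A minimal sketch of the proxy approach for civetweb, here with nginx (server name, certificate paths and the 7480 backend port are assumptions):

    # /etc/nginx/conf.d/rgw-ssl.conf (illustrative)
    server {
        listen 443 ssl;
        server_name rgw.example.com;
        ssl_certificate     /etc/ssl/certs/rgw.crt;
        ssl_certificate_key /etc/ssl/private/rgw.key;
        location / {
            proxy_pass http://127.0.0.1:7480;   # civetweb speaking plain HTTP
            proxy_set_header Host $host;
        }
    }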
[16:37] * cryptk (~sese_@9S0AAAYDL.tor-irc.dnsbl.oftc.net) Quit ()
[16:38] * lobstar (~xENO_@ool-4a59b2ba.dyn.optonline.net) has joined #ceph
[16:38] <flaf> ktdreyer: thanks for confirming that for me. ;)
[16:38] * thomnico (~thomnico@2a01:e35:8b41:120:7462:58ff:d838:9985) has joined #ceph
[16:39] * xarses (~andreww@166.175.186.204) Quit (Quit: Leaving)
[16:39] * xarses (~andreww@166.175.186.204) has joined #ceph
[16:42] * treenerd (~treenerd@85.193.140.98) Quit (Ping timeout: 480 seconds)
[16:42] * tganguly (~tganguly@122.172.31.18) has joined #ceph
[16:43] * TheSov2 (~TheSov@cip-248.trustwave.com) has joined #ceph
[16:46] * jashank42 (~jashan42@117.220.136.229) Quit (Ping timeout: 480 seconds)
[16:51] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit (Quit: Leaving.)
[16:51] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[16:53] * xarses (~andreww@166.175.186.204) Quit (Quit: Leaving)
[16:54] * xarses (~andreww@166.175.186.204) has joined #ceph
[16:54] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[16:54] * xarses (~andreww@166.175.186.204) Quit ()
[16:55] * xarses (~andreww@166.175.186.204) has joined #ceph
[16:56] * alram (~alram@192.41.52.12) has joined #ceph
[16:56] * thomnico_ (~thomnico@2a01:e35:8b41:120:7462:58ff:d838:9985) has joined #ceph
[16:57] * thomnico (~thomnico@2a01:e35:8b41:120:7462:58ff:d838:9985) Quit (Read error: Connection reset by peer)
[16:59] * TheSov2 (~TheSov@cip-248.trustwave.com) Quit (Ping timeout: 480 seconds)
[16:59] * jashank42 (~jashan42@117.207.180.146) has joined #ceph
[16:59] * thomnico_ (~thomnico@2a01:e35:8b41:120:7462:58ff:d838:9985) Quit ()
[17:00] * xarses (~andreww@166.175.186.204) Quit (Quit: Leaving)
[17:00] * xarses (~andreww@166.175.186.204) has joined #ceph
[17:01] * kefu (~kefu@114.86.215.22) has joined #ceph
[17:02] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[17:04] * tganguly (~tganguly@122.172.31.18) Quit (Ping timeout: 480 seconds)
[17:04] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:05] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[17:07] * linuxkidd (~linuxkidd@207.236.250.131) Quit (Quit: Leaving)
[17:07] * lobstar (~xENO_@9S0AAAYET.tor-irc.dnsbl.oftc.net) Quit ()
[17:07] * x303 (~Sliker@chomsky.torservers.net) has joined #ceph
[17:08] * rlrevell (~leer@vbo1.inmotionhosting.com) Quit (Remote host closed the connection)
[17:09] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[17:11] * thomnico (~thomnico@2a01:e35:8b41:120:2587:12ae:e332:8bf) has joined #ceph
[17:12] * dgurtner (~dgurtner@178.197.231.53) Quit (Ping timeout: 480 seconds)
[17:14] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) has joined #ceph
[17:16] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[17:17] * thomnico (~thomnico@2a01:e35:8b41:120:2587:12ae:e332:8bf) Quit (Quit: Ex-Chat)
[17:17] * thomnico (~thomnico@2a01:e35:8b41:120:2587:12ae:e332:8bf) has joined #ceph
[17:19] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:24] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:24] * TheSov2 (~TheSov@cip-248.trustwave.com) has joined #ceph
[17:25] <TheSov2> i found something that's interesting: the recovery of a failed osd happens a lot faster if the monitors are separate
[17:25] <TheSov2> and by a lot i mean like 4-5 percentish, which may be a lot to some
[17:26] * dyasny (~dyasny@173.231.115.59) has joined #ceph
[17:27] * jashank42 (~jashan42@117.207.180.146) Quit (Ping timeout: 480 seconds)
[17:27] <Kvisle> "if the monitors are seperate" ? what does that mean?
[17:29] <monsted> "not on the osd"?
[17:29] <ganders_> jeroenvh: yes with a iostat -x 5 for example
[17:29] * thomnico (~thomnico@2a01:e35:8b41:120:2587:12ae:e332:8bf) Quit (Quit: Ex-Chat)
[17:29] * thomnico (~thomnico@2a01:e35:8b41:120:2587:12ae:e332:8bf) has joined #ceph
[17:33] * sileht (~sileht@gizmo.sileht.net) Quit (Quit: WeeChat 1.0.1)
[17:33] * RC (~Adium@123.201.54.58) has joined #ceph
[17:33] * owasserm (~owasserm@216.1.187.164) has joined #ceph
[17:34] * RC is now known as Guest1148
[17:34] <TheSov2> afaisi yes a machine that only has a monitor on it
[17:34] <TheSov2> then osd's on other machines
[17:34] <TheSov2> so 3 separate, distinct machines as monitors and as many as you want for osd's
[17:36] * jashank42 (~jashan42@117.214.198.98) has joined #ceph
[17:36] * branto (~borix@ip-213-220-214-203.net.upcbroadband.cz) has left #ceph
[17:36] * sileht (~sileht@gizmo.sileht.net) has joined #ceph
[17:37] <TheSov2> is anyone here using ALB as their bond type?
[17:37] * treenerd (~treenerd@178.115.133.193.wireless.dyn.drei.com) has joined #ceph
[17:37] * calvinx (~calvin@101.100.172.246) Quit (Quit: calvinx)
[17:37] * x303 (~Sliker@9S0AAAYGD.tor-irc.dnsbl.oftc.net) Quit ()
[17:37] * LRWerewolf (~xanax`@5NZAADKT1.tor-irc.dnsbl.oftc.net) has joined #ceph
[17:39] * sileht (~sileht@gizmo.sileht.net) Quit ()
[17:39] * reed (~reed@host253-183-static.123-81-b.business.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[17:40] * sileht (~sileht@gizmo.sileht.net) has joined #ceph
[17:40] * reed (~reed@host254-183-static.123-81-b.business.telecomitalia.it) has joined #ceph
[17:41] * ircolle (~Adium@2601:1:a580:1735:914f:6dfb:ef0a:e28c) has joined #ceph
[17:44] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) Quit (Quit: Leaving.)
[17:45] <Be-El> TheSov2: mon store the pg map in their local storage; each update of a pg e.g. backfilling, remapping etc. results in an update of the local mon store.
[17:46] <Be-El> TheSov2: these updates are synchronous afaik, so they take their time
[17:46] <TheSov2> so that explains the performance gain by seperating them out
[17:46] <Be-El> TheSov2: that's why it is recommended to put the mon storage (/var/lib/ceph/mon etc.) on a fast disk or ssd
[17:46] <Be-El> exactly
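In practice that can be as simple as mounting an SSD partition at the mon's working directory before the mon is created (device, filesystem and options are placeholders):

    mkfs.xfs /dev/sdk4                      # placeholder SSD partition
    mount /dev/sdk4 /var/lib/ceph/mon
    echo '/dev/sdk4 /var/lib/ceph/mon xfs defaults,noatime 0 2' >> /etc/fstab
    # for an existing mon: stop it, copy /var/lib/ceph/mon across, then mount and restart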
[17:47] * i_m (~ivan.miro@deibp9eh1--blueice4n2.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[17:47] <TheSov2> Be-El, do you mind if i ask you a few more questions
[17:47] * oro (~oro@84-72-20-79.dclient.hispeed.ch) has joined #ceph
[17:47] <Be-El> TheSov2: i'm about to leave for a barbeque. but i'll be back tomorrow
[17:48] * kefu (~kefu@114.86.215.22) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[17:48] <TheSov2> I know that its recommended to have ceph backend links different from client links.
[17:48] <TheSov2> do you just point them to different dns entries?
[17:48] <TheSov2> and do i have to use LACP or can I do ALB
[17:50] <Be-El> my personal setup uses lacp with vlan. the only advantage of alb over lacp is the fact that you can use links with different speeds and different logical switches
[17:50] <TheSov2> yes
[17:51] <TheSov2> thats why i want alb
[17:51] <TheSov2> to establish "multipath", so to speak
[17:51] <Be-El> the disadvantage of alb is the higher complexity of the setup. if you can handle this, feel free to use it
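For comparison, a balance-alb bond needs only a few lines in a Debian/Ubuntu-style /etc/network/interfaces (interface names and the address are placeholders; LACP would instead use bond-mode 802.3ad plus matching switch config):

    auto bond0
    iface bond0 inet static
        address 10.0.2.11          # placeholder storage-network address
        netmask 255.255.255.0
        bond-mode balance-alb      # adaptive load balancing; no switch-side config needed
        bond-miimon 100
        bond-slaves eth2 eth3      # placeholder NICs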
[17:51] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) Quit (Quit: Leaving.)
[17:51] <Be-El> you may also want to think about stuff like traffic shaping. the backfill traffic may easily fill up the bandwidth, leaving no or little bandwidth for clients
[17:52] <TheSov2> well the backend will be a separate network altogether
[17:52] <Be-El> that's why many people use different physical interfaces to separate the public and private networks
[17:52] <TheSov2> physically isolated
[17:52] <Be-El> so, bbq time...cu
[17:52] <TheSov2> do the monitors need more than 1gb bandwith on the client side?
[17:53] <TheSov2> no wait!
[17:53] <TheSov2> :P
[17:53] <visbits> the monitors do nothing
[17:53] <visbits> they just keep tabs on shit
[17:53] <visbits> your OSD will chew bandwidth
[17:53] <TheSov2> clients access them for auth
[17:53] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[17:53] <TheSov2> so they need to be available right
[17:53] <visbits> ya
[17:53] <TheSov2> ok 1 gb for bothsides is fine?
[17:53] <visbits> ya
[17:53] <TheSov2> sweet
[17:54] <visbits> it's distributed storage, you will only saturate 1G if your client has 10G or you are rebuilding a failed osd
[17:54] <TheSov2> ahh so 2 gig on the back end
[17:54] <TheSov2> and 1 gig links are fine for client end
[17:54] <visbits> ya
[17:55] <TheSov2> how do i make sure osd's talk to each other only thru backend? dns?
[17:55] <visbits> http://ceph.com/docs/master/_images/ditaa-2452ee22ef7d825a489a08e0b935453f2b06b0e6.png
[17:55] <visbits> http://ceph.com/docs/master/rados/configuration/network-config-ref/
[17:55] <visbits> CEPH NETWORKS section
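The pattern in those docs boils down to two ceph.conf settings; a hedged example with placeholder subnets (clients and mons on the public network, OSD replication/backfill on the cluster network):

    [global]
    public network  = 10.0.1.0/24   # client <-> mon/osd traffic
    cluster network = 10.0.2.0/24   # osd <-> osd replication and backfill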
[17:55] <TheSov2> u sir.... are amazing
[17:55] * reed (~reed@host254-183-static.123-81-b.business.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[17:56] <visbits> unless you're doing crazy throughput, 1G on your OSD is fine. We have a 40 OSD host with dual 1G in LACP on a switch uplinked at 40G
[17:57] <visbits> or have a really high number of disks per osd server
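A rough back-of-envelope for the 1G question (ballpark assumptions, not measurements):

    # 9 OSDs/host x ~100 MB/s streaming ~= 900 MB/s of raw disk throughput
    # 1 GbE ~= 125 MB/s    2 x 1 GbE ~= 250 MB/s    10 GbE ~= 1250 MB/s
    # => 1-2 GbE per host is fine for modest client load,
    #    but backfill/recovery can easily saturate it.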
[17:57] <TheSov2> well initially the ceph cluster being built is going to be proved in backup storage
[17:57] <visbits> i should make some scaling guides on this and submit them to the documentation, since everything is produced by programmers who have no real-world experience with its use cases
[17:57] <visbits> it will work fine
[17:57] <TheSov2> if it works well we are moving to dev/test
[17:57] <visbits> are you uploading as objects or block
[17:57] <TheSov2> and then if that works well we goto prod
[17:57] <TheSov2> just rbd
[17:57] <TheSov2> in a way this replaces our san
[17:58] <visbits> are your clients all linux?
[17:58] <TheSov2> almost all yes
[17:58] <TheSov2> the few windows machines are physical AD's
[17:58] <visbits> take it straight to dev/test, you wont need to prove anything it works
[17:58] <TheSov2> i agree with you, my company is... skittish to say the least
[17:58] <visbits> we write 200T of backups per week
[17:58] * puffy (~puffy@c-50-131-179-74.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:59] <visbits> you need to learn about the crush rulesets
[17:59] <TheSov2> yes
[17:59] <TheSov2> thats the only part i havent looked into yet
[17:59] * treenerd (~treenerd@178.115.133.193.wireless.dyn.drei.com) Quit (Ping timeout: 480 seconds)
[17:59] <visbits> it's a bitch of a thing to get right but they do what they should
[17:59] * thomnico (~thomnico@2a01:e35:8b41:120:2587:12ae:e332:8bf) Quit (Ping timeout: 480 seconds)
[17:59] * thomnico (~thomnico@2a01:e35:8b41:120:2587:12ae:e332:8bf) has joined #ceph
[17:59] <TheSov2> the whole pg thing
[18:00] <TheSov2> is that how many segments a given pool is broken into?
[18:00] <TheSov2> and if replication level is 3 does that mean each pg has 2 other copies of itself?
[18:00] * bandrus (~brian@131.sub-70-211-67.myvzw.com) has joined #ceph
[18:01] <visbits> its data + replica + replica
[18:01] <visbits> crush is what organizes where it stores the replica
[18:01] <visbits> you use buckets to make *containers*, i.e. rack, host, datacenter, etc.
[18:01] <visbits> some people replicate 2 copies local and 1 copy to a remote data center
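A hedged sketch of what such a rule can look like in a decompiled CRUSH map, assuming datacenter buckets named dc-local and dc-remote already exist (names and numbers are placeholders; Hammer-era syntax):

    rule replicated_2local_1remote {
        ruleset 1
        type replicated
        min_size 2
        max_size 3
        step take dc-local
        step chooseleaf firstn 2 type host   # two copies on distinct local hosts
        step emit
        step take dc-remote
        step chooseleaf firstn 1 type host   # third copy in the remote datacenter
        step emit
    }
    # edit cycle: ceph osd getcrushmap -o map.bin; crushtool -d map.bin -o map.txt
    #             crushtool -c map.txt -o new.bin; ceph osd setcrushmap -i new.bin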
[18:01] <TheSov2> yes i want to do that
[18:01] * arbrandes (~arbrandes@191.17.228.223) has joined #ceph
[18:02] <visbits> well that's what you're gonna need
[18:02] <visbits> i'd recommend you look into storing your data as objects if so, much more responsive
[18:02] <TheSov2> that's only if your replication level is 3 right
[18:02] <TheSov2> if it's 2?
[18:02] <TheSov2> unfortunately Commvault wants block storage
[18:02] <TheSov2> so it can dedup
[18:04] * linjan (~linjan@195.110.41.9) Quit (Ping timeout: 480 seconds)
[18:06] * owasserm (~owasserm@216.1.187.164) Quit (Ping timeout: 480 seconds)
[18:07] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[18:07] * LRWerewolf (~xanax`@5NZAADKT1.tor-irc.dnsbl.oftc.net) Quit ()
[18:08] * nih (~murmur@176.10.99.207) has joined #ceph
[18:08] <TheSov2> is it possible to move rbd's between pools?
[18:08] * treenerd (~treenerd@91.141.5.82.wireless.dyn.drei.com) has joined #ceph
[18:09] * thomnico (~thomnico@2a01:e35:8b41:120:2587:12ae:e332:8bf) Quit (Quit: Ex-Chat)
[18:10] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[18:13] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[18:15] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[18:17] * Hemanth (~Hemanth@121.244.87.117) Quit (Remote host closed the connection)
[18:18] * jashank42 (~jashan42@117.214.198.98) Quit (Ping timeout: 480 seconds)
[18:20] * BManojlovic (~steki@cable-89-216-232-224.dynamic.sbb.rs) has joined #ceph
[18:25] * yanzheng (~zhyan@125.71.107.110) Quit (Quit: This computer has gone to sleep)
[18:27] * kawa2014 (~kawa@212.110.41.244) has joined #ceph
[18:27] * moore (~moore@64.202.160.88) has joined #ceph
[18:28] * dopesong_ (~dopesong@88-119-94-55.static.zebra.lt) has joined #ceph
[18:28] <supay> hey, can i ask about openstack development here?
[18:28] * dopeson__ (~dopesong@88-119-94-55.static.zebra.lt) has joined #ceph
[18:31] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[18:32] <jdillaman> TheSov2: no, you can only copy images between pools
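i.e. something along these lines (pool and image names are placeholders; note that rbd cp does not copy snapshots, so check before removing the original):

    rbd cp oldpool/myimage newpool/myimage
    rbd info newpool/myimage     # sanity check
    rbd rm oldpool/myimage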
[18:33] * dopeson__ (~dopesong@88-119-94-55.static.zebra.lt) Quit (Remote host closed the connection)
[18:33] * smerz (~ircircirc@37.74.194.90) Quit (Ping timeout: 480 seconds)
[18:33] * treenerd (~treenerd@91.141.5.82.wireless.dyn.drei.com) Quit (Ping timeout: 480 seconds)
[18:34] * jashank42 (~jashan42@117.220.137.110) has joined #ceph
[18:34] * thomnico (~thomnico@2a01:e35:8b41:120:2587:12ae:e332:8bf) has joined #ceph
[18:35] * dopesong (~dopesong@lb1.mailer.data.lt) Quit (Ping timeout: 480 seconds)
[18:36] * dopesong_ (~dopesong@88-119-94-55.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[18:37] * nih (~murmur@5NZAADKVU.tor-irc.dnsbl.oftc.net) Quit ()
[18:38] * Silentkillzr (~Kidlvr@bakunin.gtor.org) has joined #ceph
[18:38] * thomnico_ (~thomnico@cro38-2-88-180-16-18.fbx.proxad.net) has joined #ceph
[18:38] * thomnico_ (~thomnico@cro38-2-88-180-16-18.fbx.proxad.net) Quit ()
[18:38] * thomnico_ (~thomnico@2a01:e35:8b41:120:2587:12ae:e332:8bf) has joined #ceph
[18:39] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) Quit (Quit: Leaving)
[18:39] * thomnico (~thomnico@2a01:e35:8b41:120:2587:12ae:e332:8bf) Quit (Read error: No route to host)
[18:46] * garphy is now known as garphy`aw
[18:48] * linjan (~linjan@109.253.105.61) has joined #ceph
[18:54] * yguang11 (~yguang11@2001:4998:effd:600:e53d:adfb:2e2b:74fc) has joined #ceph
[18:55] * treenerd (~treenerd@cpe90-146-100-181.liwest.at) has joined #ceph
[18:56] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[18:57] * sleinen1 (~Adium@2001:620:0:82::101) has joined #ceph
[18:58] * djs1 (a89fd5d4@107.161.19.53) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[18:59] * nils_ (~nils@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[19:01] * linjan (~linjan@109.253.105.61) Quit (Ping timeout: 480 seconds)
[19:04] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:05] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[19:06] <TheSov2> are there any large companies using ceph now?
[19:06] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit ()
[19:07] * Silentkillzr (~Kidlvr@5NZAADKXM.tor-irc.dnsbl.oftc.net) Quit ()
[19:08] * rlrevell (~leer@vbo1.inmotionhosting.com) Quit (Remote host closed the connection)
[19:10] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[19:10] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[19:11] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[19:11] * sleinen2 (~Adium@2001:620:0:82::101) has joined #ceph
[19:11] * sleinen1 (~Adium@2001:620:0:82::101) Quit (Read error: Connection reset by peer)
[19:11] * owasserm (~owasserm@206.169.83.146) has joined #ceph
[19:14] * Sysadmin88 (~IceChat77@054527d3.skybroadband.com) has joined #ceph
[19:14] <TheSov2> does anyone know where i can find a portfolio of large companies using ceph?
[19:14] <Sysadmin88> youtube videos of talks given
[19:15] <TheSov2> thanks Sysadmin88
[19:15] <TheSov2> i need hardcopy though something to show the higher ups
[19:15] <georgem> comcast for example, CERN, Dreamhost
[19:16] <TheSov2> do they say for what?
[19:16] <TheSov2> because that makes a difference
[19:16] <gleam> cern does block devices, dreamhost does object storage
[19:16] <gleam> not sure about comcast
[19:16] <TheSov2> like if cern uses it for data collection on the collider thats more impressive than if its for their ftp
[19:17] <gleam> cern has petabytes of storage
[19:17] <gleam> in ceph
[19:17] * kawa2014 (~kawa@212.110.41.244) Quit (Quit: Leaving)
[19:17] <gleam> they've had petabytes for years
[19:17] <TheSov2> well my case is rbd
[19:18] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:18] <gleam> and so is theirs
[19:18] <gregsfortytwo> CERN has discussed in various places that they currently use it for backing an OpenStack cloud, and they are working on migrating some of their archival storage systems to use RADOS
[19:18] <gleam> so you're good to go
[19:18] * puffy (~puffy@216.207.42.144) has joined #ceph
[19:18] <gleam> here's the current state of cern's deployment: https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/ceph-at-cern-a-year-in-the-life-of-a-petabyte-scale-block-storage-service
[19:21] * shylesh (~shylesh@123.136.223.13) has joined #ceph
[19:21] <TheSov2> gleam, thanks man
[19:21] <TheSov2> thats a good video to send out
[19:22] <TheSov2> what does cern use to collect the data from the detectors i wonder
[19:22] <TheSov2> maybe its in the video
[19:26] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[19:26] * vbellur (~vijay@122.171.67.148) has joined #ceph
[19:27] * ifur (~osm@0001f63e.user.oftc.net) Quit (Quit: Lost terminal)
[19:27] * JFQ (~ghartz@AStrasbourg-651-1-211-89.w109-221.abo.wanadoo.fr) has joined #ceph
[19:29] * Guest1148 (~Adium@123.201.54.58) Quit (Quit: Leaving.)
[19:29] * ifur (~osm@0001f63e.user.oftc.net) has joined #ceph
[19:30] <TheSov2> is anyone working on a native ceph client for windows?
[19:30] * midnightrunner (~midnightr@c-67-174-241-112.hsd1.ca.comcast.net) has joined #ceph
[19:30] * marrusl (~mark@nat-pool-rdu-u.redhat.com) Quit (Remote host closed the connection)
[19:30] <theanalyst> TheSov2 iirc there was a rados.dll mentioned in the mailing list, don't remember the status of that
[19:31] * nsoffer (~nsoffer@bzq-79-180-80-86.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[19:34] * midnight_ (~midnightr@216.113.160.71) has joined #ceph
[19:35] * thomnico_ (~thomnico@2a01:e35:8b41:120:2587:12ae:e332:8bf) Quit (Quit: Ex-Chat)
[19:36] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[19:37] * ifur (~osm@0001f63e.user.oftc.net) Quit (Quit: Lost terminal)
[19:37] * Aethis (~nicatronT@tor-exit-readme.as24875.net) has joined #ceph
[19:39] * ifur (~osm@0001f63e.user.oftc.net) has joined #ceph
[19:39] * midnightrunner (~midnightr@c-67-174-241-112.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[19:41] <TheSov2> to put multiple journals on a disk using ceph deploy, do i have to pre-partition the journal?
[19:42] * bandrus (~brian@131.sub-70-211-67.myvzw.com) Quit (Ping timeout: 480 seconds)
[19:42] <gleam> i believe so
[19:43] <TheSov2> damn i thought i could just type the number in on the activate and it would do it on its own
[19:44] <TheSov2> i mean it creates a partition on the data disk
[19:45] * marrusl (~mark@nat-pool-rdu-u.redhat.com) has joined #ceph
[19:46] * CAPSLOCK2000 (~oftc@2001:984:3be3:1::8) has joined #ceph
[19:47] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[19:48] <gleam> i don't actually know, so try it and see
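If pre-partitioning does turn out to be necessary, a hedged sketch with sgdisk (device names, sizes and hosts are placeholders to adapt):

    # carve two journal partitions on the shared SSD
    sgdisk --new=1:0:+20G --change-name=1:'ceph journal' /dev/sdk
    sgdisk --new=2:0:+20G --change-name=2:'ceph journal' /dev/sdk
    partprobe /dev/sdk
    # then point each OSD at its own journal partition explicitly
    ceph-deploy osd prepare host1:sdb:/dev/sdk1 host1:sdc:/dev/sdk2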
[19:53] * bandrus (~brian@31.sub-70-214-41.myvzw.com) has joined #ceph
[19:55] * jashank42 (~jashan42@117.220.137.110) Quit (Ping timeout: 480 seconds)
[19:56] <TheSov2> why does ceph warn you about hot swapping disks when using a separate journal?
[19:57] <flaf> Hi, for a journal SSD, which is the better disk between the DC-S3700 100GB and the DC-S3700 200GB here => http://ark.intel.com/compare/71914,71913
[19:57] <flaf> Can you confirm that, for the journal, writes are sequential, not random?
[19:57] <TheSov2> its sequential for 1 journal
[19:57] <TheSov2> if you have more than 1, its random
[19:58] <TheSov2> but who would get a ssd journal for every single osd
[19:58] * bandrus (~brian@31.sub-70-214-41.myvzw.com) Quit (Quit: Leaving.)
[19:58] <flaf> Ah, ok, I see. In my case, I will have 3 journals on the SSD, so the writes will be random, correct?
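One common way to judge whether a given SSD will cope as a journal device is to test small synchronous writes against it directly; a hedged fio sketch (destructive to the target device, which is a placeholder):

    # DANGER: writes to the raw device; only run on an empty/spare SSD
    fio --name=journal-test --filename=/dev/sdk \
        --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based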
[19:59] * bandrus (~brian@31.sub-70-214-41.myvzw.com) has joined #ceph
[19:59] * stiopa (~stiopa@cpc73828-dals21-2-0-cust630.20-2.cable.virginm.net) has joined #ceph
[20:01] <jidar> gleam: TheSov2 I know for a fact on RHEL7 I do not have to partition the disk for multiple journals on the same disk
[20:01] <gleam> oh, well cool
[20:01] <jidar> using ceph-deploy, I'll say though that other people have not had the same luck (but not using RHEL7)
[20:02] <TheSov2> so what was your deploy command ooc?
[20:02] <TheSov2> ceph-deploy osd prepare ceph-server:sda:sba1?
[20:02] <jidar> ceph-deploy --overwrite-conf osd prepare ceph{1,2,3}:sd{b,c,d,e,f,g,h,i,j}:sdk
[20:02] <jidar> ceph-deploy --overwrite-conf osd create ceph{1,2,3}:/dev/sd{b,c,d,e,f,g,h,i,j}:/dev/sdk
[20:02] <jidar> either or
[20:03] <jidar> will create the journal and format xfs
[20:03] <TheSov2> so you don't specify a partition number?
[20:03] <jidar> I do not
[20:03] <TheSov2> wait so how do you know it's using the correct journal?
[20:03] <jidar> ceph-disk list
[20:03] <TheSov2> can u show me your output?
[20:03] <jidar> /dev/sdj :
[20:03] <jidar> /dev/sdj1 ceph data, prepared, cluster ceph, journal /dev/sdk9
[20:04] <jidar> /dev/sdk :
[20:04] * scheuk (~scheuk@204.246.67.78) has joined #ceph
[20:04] <jidar> /dev/sdk1 ceph journal, for /dev/sdb1
[20:04] <jidar> /dev/sdk2 ceph journal, for /dev/sdc1
[20:04] <jidar> truncated, obviously
[20:04] * jashank42 (~jashan42@117.207.177.68) has joined #ceph
[20:04] <jidar> oh! rain on the west coast yay!
[20:05] <scheuk> I'm having some trouble with centOS 6 and ceph-disk osd prepare --dmcrypt
[20:05] <TheSov2> ok
[20:05] <TheSov2> thanks jidar
[20:05] <scheuk> ceph-disk, formats and encrypts the drive
[20:05] * xarses (~andreww@166.175.186.204) Quit (Ping timeout: 480 seconds)
[20:05] <scheuk> but doesn't mount it
[20:06] * shylesh (~shylesh@123.136.223.13) Quit (Remote host closed the connection)
[20:06] <scheuk> if I try to perform a ceph-disk activate /dev/sdb, I get a 'mount: unknown filesystem type 'crypto_LUKS''
[20:07] <scheuk> err ceph-disk activate /dev/sdb1
[20:07] * jseeger (~chatzilla@204.14.239.53) has joined #ceph
[20:07] <scheuk> from what I can tell there is no mount option for crypto/luks disks
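No fix shows up in this log, but one hedged workaround sketch is to open the LUKS container manually and activate the mapped device instead of the raw partition; older ceph-disk keeps its dmcrypt keys under /etc/ceph/dmcrypt-keys (paths and naming are assumptions to verify on the host):

    ls /etc/ceph/dmcrypt-keys/                          # key files, named by partition uuid
    PARTUUID=$(blkid -o value -s PARTUUID /dev/sdb1)
    cryptsetup luksOpen --key-file /etc/ceph/dmcrypt-keys/$PARTUUID /dev/sdb1 $PARTUUID
    ceph-disk activate /dev/mapper/$PARTUUID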
[20:07] * Aethis (~nicatronT@7R2AABL8B.tor-irc.dnsbl.oftc.net) Quit ()
[20:07] * utugi______ (~Nanobot@herngaard.torservers.net) has joined #ceph
[20:13] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[20:14] * linuxkidd (~linuxkidd@207.236.250.131) has joined #ceph
[20:17] * arbrandes (~arbrandes@191.17.228.223) Quit (Quit: Leaving)
[20:22] * owasserm (~owasserm@206.169.83.146) Quit (Quit: Ex-Chat)
[20:35] * bandrus (~brian@31.sub-70-214-41.myvzw.com) Quit (Ping timeout: 480 seconds)
[20:37] * jseeger (~chatzilla@204.14.239.53) Quit (Remote host closed the connection)
[20:37] * utugi______ (~Nanobot@5NZAADK28.tor-irc.dnsbl.oftc.net) Quit ()
[20:38] * Popz (~hyst@9S0AAAYQN.tor-irc.dnsbl.oftc.net) has joined #ceph
[20:38] * bandrus (~brian@213.sub-70-211-77.myvzw.com) has joined #ceph
[20:39] * stiopa (~stiopa@cpc73828-dals21-2-0-cust630.20-2.cable.virginm.net) Quit (Quit: Lost terminal)
[20:40] * bitserker (~toni@88.87.194.130) Quit (Ping timeout: 480 seconds)
[20:44] * The_Ball (~ballen@42.80-202-192.nextgentel.com) has joined #ceph
[20:46] * linjan (~linjan@80.179.241.26) has joined #ceph
[20:54] * scuttlemonkey is now known as scuttle|afk
[20:59] * nsoffer (~nsoffer@bzq-79-180-80-86.red.bezeqint.net) has joined #ceph
[21:07] * Popz (~hyst@9S0AAAYQN.tor-irc.dnsbl.oftc.net) Quit ()
[21:08] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[21:09] * bene (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[21:11] * dopesong (~dopesong@78-60-74-130.static.zebra.lt) has joined #ceph
[21:20] * dopesong_ (~dopesong@lb1.mailer.data.lt) has joined #ceph
[21:22] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[21:25] * dopesong (~dopesong@78-60-74-130.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[21:26] * owasserm (~owasserm@206.169.83.146) has joined #ceph
[21:35] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[21:37] * x303 (~ain@spftor1e1.privacyfoundation.ch) has joined #ceph
[21:38] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[21:38] * smerz (~ircircirc@37.74.194.90) has joined #ceph
[21:39] * cloudm2 (uid37542@id-37542.highgate.irccloud.com) has joined #ceph
[21:40] <cloudm2> Quick question: for ubuntu, has ceph-extras been merged into ceph or what? I saw a repo for precise but not trusty. Thanks
[21:43] * bene (~ben@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[21:44] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Read error: Connection reset by peer)
[21:49] * oro (~oro@84-72-20-79.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[21:50] * bene (~ben@nat-pool-rdu-u.redhat.com) has joined #ceph
[21:53] * bene (~ben@nat-pool-rdu-u.redhat.com) Quit ()
[21:54] * The_Ball (~ballen@42.80-202-192.nextgentel.com) Quit (Ping timeout: 480 seconds)
[21:57] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) Quit (Ping timeout: 480 seconds)
[21:57] * bene (~ben@nat-pool-rdu-u.redhat.com) has joined #ceph
[21:57] <TheSov2> it has old ceph stuff, for hammer you need to add the repo
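For Ubuntu trusty at the time of this log that meant adding the Hammer repo by hand, roughly as below (URLs are as they were then; the packages later moved to download.ceph.com):

    wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
    echo deb http://ceph.com/debian-hammer/ trusty main | sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt-get update && sudo apt-get install ceph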
[21:57] * kapil (~kapil@p548A5880.dip0.t-ipconnect.de) has joined #ceph
[21:58] * xarses (~andreww@166.175.186.204) has joined #ceph
[21:59] <kapil> hi, as per the Hammer release notes, RGW is now based on Civetweb server instead of Apache-based deployment. However, the master docs still mention about installing apache and mod-fastcgi -> http://ceph.com/docs/master/install/install-ceph-gateway/
[22:00] * treenerd (~treenerd@cpe90-146-100-181.liwest.at) Quit (Quit: Verlassend)
[22:00] <rlrevell> kapil: yeah that's come up on here several times lately. http://ceph.com/docs/master/start/quick-ceph-deploy/#add-an-rgw-instance is correct though
[22:01] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[22:03] <kapil> <rlrevell> thanks, but what if one wants to install RGW without using ceph-deploy ?
[22:03] <rlrevell> good point i never thought of it.
[22:04] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[22:04] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Read error: Connection reset by peer)
[22:06] <flaf> kapil: the installation with civetweb is really simple.
[22:07] * TheSov2 (~TheSov@cip-248.trustwave.com) Quit (Ping timeout: 480 seconds)
[22:07] * x303 (~ain@9S0AAAYSZ.tor-irc.dnsbl.oftc.net) Quit ()
[22:07] * Azerothian______ (~Coestar@exit1.ipredator.se) has joined #ceph
[22:08] <kapil> flaf: so looks like in hammer a user won't have to follow all the configuration steps -> http://ceph.com/docs/master/radosgw/config/ ?
[22:09] <flaf> kapil: you can remove the sections about apache2
[22:10] <kapil> flaf: okay, will give it a try. thanks
[22:11] * huats (~quassel@stuart.objectif-libre.com) Quit (Remote host closed the connection)
[22:11] * huats (~quassel@stuart.objectif-libre.com) has joined #ceph
[22:11] * harlequin (~loris@174.65.103.84.rev.sfr.net) has joined #ceph
[22:11] <flaf> kapil: here an example of my ceph.conf for radosgw.
[22:11] <flaf> => http://pastealacon.com/37679
[22:12] <flaf> The new line with civetweb is => "rgw frontends = civetweb port=80" and you can forget apache2.
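For reference, a minimal radosgw section of that shape (host, instance name and keyring path are placeholders):

    [client.radosgw.gateway]
    host = rgw-host1
    keyring = /etc/ceph/ceph.client.radosgw.keyring
    rgw frontends = civetweb port=80
    # no apache / mod_fastcgi / rgw socket path settings are needed with civetweb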
[22:12] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[22:12] * championofcyrodi (~championo@50-205-35-98-static.hfc.comcastbusiness.net) has left #ceph
[22:12] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[22:13] <kapil> cool
[22:14] * circ-user-uYDL9 (~circuser-@50.46.225.207) has joined #ceph
[22:16] <harlequin> Hello! Is it possible to set a client key's capabilities to permit reading/writing to the RBDs in a pool, but to forbid creation of new RBDs?
[22:18] * jrankin (~jrankin@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[22:21] * shohn1 (~shohn@dslb-178-002-076-215.178.002.pools.vodafone-ip.de) Quit (Quit: Leaving.)
[22:24] * circ-user-uYDL9 (~circuser-@50.46.225.207) has left #ceph
[22:24] * circ-user-uYDL9 (~circuser-@50.46.225.207) has joined #ceph
[22:24] * The_Ball (~ballen@42.80-202-192.nextgentel.com) has joined #ceph
[22:24] * linuxkidd (~linuxkidd@207.236.250.131) Quit (Quit: Leaving)
[22:25] * kraken (~kraken@gw.sepia.ceph.com) has joined #ceph
[22:27] * bandrus (~brian@213.sub-70-211-77.myvzw.com) Quit (Ping timeout: 480 seconds)
[22:28] <flaf> Hello, do you think it's necessary to put the working dir of a monitor on an SSD?
[22:31] * bandrus (~brian@164.sub-70-214-42.myvzw.com) has joined #ceph
[22:32] * TheSov2 (~TheSov@cip-248.trustwave.com) has joined #ceph
[22:37] * Azerothian______ (~Coestar@5NZAADK8X.tor-irc.dnsbl.oftc.net) Quit ()
[22:37] * Phase (~Popz@5NZAADLAZ.tor-irc.dnsbl.oftc.net) has joined #ceph
[22:38] * Phase is now known as Guest1179
[22:40] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[22:40] <jdillaman> harlequin: should be possible if you restrict access to rbd_directory (i.e. allow class-read r object_prefix rbd_directory)
[22:43] * ganders_ (~root@190.2.42.21) Quit (Quit: WeeChat 0.4.2)
[22:47] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[22:47] * oro (~oro@84-72-20-79.dclient.hispeed.ch) has joined #ceph
[22:50] <harlequin> jdillaman: Thank you :)
[22:50] * The_Ball (~ballen@42.80-202-192.nextgentel.com) Quit (Ping timeout: 480 seconds)
[22:51] <TheSov2> how do you drainstop an osd?
[22:52] <jdillaman> harlequin: i think you will need to only use v2 image formats, though
[22:52] <jdillaman> harlequin: … and you will also need to grant rwx caps for all other rbd object prefixes: rbd_header., rbd_data., rbd_id.
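Put together as a cephx grant, that advice might look roughly like this (user name is a placeholder, and the exact cap grammar, including whether a pool= qualifier should be added, is worth verifying against your release):

    ceph auth get-or-create client.rbd-limited \
        mon 'allow r' \
        osd 'allow r class-read object_prefix rbd_directory, allow rwx object_prefix rbd_header., allow rwx object_prefix rbd_data., allow rwx object_prefix rbd_id.'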
[22:52] <harlequin> Do you know of a good reference for cephx capabilities? I find the official doc to be quite... lacking in detail, as it doesn't even explain keywords like rbd_children, or other advanced concepts...
[22:53] * jashank42 (~jashan42@117.207.177.68) Quit (Quit: Leaving)
[22:53] * rlrevell (~leer@vbo1.inmotionhosting.com) Quit (Quit: Leaving.)
[22:53] <jdillaman> harlequin: there isn't a good doc for the rbd layout and its interactions w/ caps. i think there is an open tracker ticket
[22:53] * janos_ (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (Read error: Connection reset by peer)
[22:54] * janos_ (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[22:56] * LeaChim (~LeaChim@host86-175-32-176.range86-175.btcentralplus.com) has joined #ceph
[22:56] * kapil (~kapil@p548A5880.dip0.t-ipconnect.de) Quit (Quit: Konversation terminated!)
[22:58] <harlequin> jdillaman: Thank you very much, I've just found the tracker feature request, and your valuable comments in it. :)
[23:01] * joep (~raymond@5352409D.cm-6-3b.dynamic.ziggo.nl) has joined #ceph
[23:01] * joep (~raymond@5352409D.cm-6-3b.dynamic.ziggo.nl) has left #ceph
[23:01] * The_Ball (~ballen@80.202.192.42) has joined #ceph
[23:02] * bene (~ben@nat-pool-rdu-u.redhat.com) Quit (Ping timeout: 480 seconds)
[23:03] * visbits (~textual@8.29.138.28) Quit (Quit: Textual IRC Client: www.textualapp.com)
[23:04] * joshd1 (~jdurgin@66-194-8-225.static.twtelecom.net) has joined #ceph
[23:06] * harlequin (~loris@174.65.103.84.rev.sfr.net) Quit (Quit: leaving)
[23:06] * bandrus (~brian@164.sub-70-214-42.myvzw.com) Quit (Ping timeout: 480 seconds)
[23:06] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[23:07] * joshd (~jdurgin@206.169.83.146) Quit (Ping timeout: 480 seconds)
[23:07] * bobrik (~bobrik@83.243.64.45) Quit (Read error: Connection reset by peer)
[23:07] * sleinen2 (~Adium@2001:620:0:82::101) Quit (Ping timeout: 480 seconds)
[23:07] * Guest1179 (~Popz@5NZAADLAZ.tor-irc.dnsbl.oftc.net) Quit ()
[23:07] * Sketchfile (~blank@62.210.105.116) has joined #ceph
[23:08] * bobrik (~bobrik@78.25.120.157) has joined #ceph
[23:08] * dyasny (~dyasny@173.231.115.59) Quit (Ping timeout: 480 seconds)
[23:08] * sleinen1 (~Adium@2001:620:0:82::101) has joined #ceph
[23:10] * bobrik_ (~bobrik@83.243.64.45) has joined #ceph
[23:10] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[23:11] * angdraug (~angdraug@12.164.168.117) Quit (Remote host closed the connection)
[23:11] * badone_ is now known as badone
[23:11] * bobrik (~bobrik@78.25.120.157) Quit (Read error: No route to host)
[23:11] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[23:12] * bene (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[23:14] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[23:14] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[23:14] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Remote host closed the connection)
[23:15] * tupper (~tcole@173.38.117.84) Quit (Ping timeout: 480 seconds)
[23:16] * bandrus (~brian@182.sub-70-214-41.myvzw.com) has joined #ceph
[23:17] * jwilkins (~jwilkins@2601:9:4580:f4c:ea2a:eaff:fe08:3f1d) Quit (Quit: Leaving)
[23:20] * oro (~oro@84-72-20-79.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[23:21] * stiopa (~stiopa@cpc73828-dals21-2-0-cust630.20-2.cable.virginm.net) has joined #ceph
[23:21] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:25] * bandrus (~brian@182.sub-70-214-41.myvzw.com) Quit (Ping timeout: 480 seconds)
[23:27] * bandrus (~brian@38.sub-70-214-41.myvzw.com) has joined #ceph
[23:28] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[23:31] * marrusl (~mark@nat-pool-rdu-u.redhat.com) Quit (Quit: sync && halt)
[23:32] * joshd (~jdurgin@206.169.83.146) has joined #ceph
[23:34] * The_Ball (~ballen@80.202.192.42) Quit (Ping timeout: 480 seconds)
[23:37] * joshd1 (~jdurgin@66-194-8-225.static.twtelecom.net) Quit (Ping timeout: 480 seconds)
[23:37] * bene (~ben@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[23:37] * Sketchfile (~blank@2FBAACAST.tor-irc.dnsbl.oftc.net) Quit ()
[23:38] * Inverness (~Esge@5NZAADLEB.tor-irc.dnsbl.oftc.net) has joined #ceph
[23:39] * sleinen1 (~Adium@2001:620:0:82::101) Quit (Ping timeout: 480 seconds)
[23:43] * sean_ (~seapasull@95.85.33.150) has joined #ceph
[23:44] <sean_> anyone been messing with s3 enough to help me out?
[23:44] <TheSov2> whatcha need? i use s3 :)
[23:44] * linjan (~linjan@80.179.241.26) Quit (Ping timeout: 480 seconds)
[23:45] <sean_> I have a set of buckets with thousands of objects inside those buckets. I have a set of users that I need to grant and revoke access to those objects quickly.
[23:46] <sean_> so if I have user A that has a set of 4 buckets, and I want to grant userB access to all of the objects in userA's buckets, I need to go through programmatically and update the policy on every key inside those buckets?
[23:46] <sean_> is that right?
[23:47] <sean_> Is there an easier way to do this? I am pretty sure I am missing something important here
[23:47] <sean_> TheSov2: ^
[23:47] <sean_> is there a way to set the object to inherit the bucket policy or to set up some kind of group in ceph s3?
[23:47] <TheSov2> u should just create a policy object
[23:48] <TheSov2> and then apply it to the user
[23:48] <TheSov2> oh lol, i thought you mean amazon s3.... where is my brain. i have not used ceph's s3
[23:48] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[23:48] <sean_> yeah with IAM it makes things a lot easier.. without IAM though... no idea how to set this up
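One hedged avenue (not confirmed anywhere in this conversation): radosgw implements S3 bucket/object ACLs, so a tool like s3cmd can grant another radosgw user read access over an existing bucket recursively; objects uploaded afterwards would still need their ACLs set, which is exactly the pain point above.

    # run with userA's keys; 'userB' is the other radosgw user id (placeholder)
    s3cmd setacl --acl-grant=read:userB s3://bucketA --recursive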
[23:49] <terje> Anyone know of any tricks that I can use to backup disk partitions to and RBD volume for easy disaster recovery?
[23:49] <terje> *an RBD volume
[23:51] <sean_> sure i mean.. why not just create an rbd and then just use dd?
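A sketch of the backup direction (pool, image, size and devices are placeholders); the restore direction is spelled out just below:

    rbd create backups/sdb1-backup --size 512000     # size in MB on Hammer-era rbd
    rbd map backups/sdb1-backup                      # appears as e.g. /dev/rbd1
    dd if=/dev/sdb1 of=/dev/rbd1 bs=4M conv=noerror,sync
    rbd unmap /dev/rbd1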
[23:52] <terje> yea, that's what I was thinking, then if I lose my physical disk, I'd somehow dd it back from like a rescue ISO or something.
[23:52] <sean_> and then when you want to restore dd if=/dev/rbd1p1 of=/dev/sdb1 bs=512 conv=noerror,sync
[23:52] <sean_> yup
[23:52] <sean_> or better yet dd if=/dev/rbd1 of=/dev/sdb
[23:52] <terje> aroused..
[23:53] * The_Ball (~ballen@80.202.192.42) has joined #ceph
[23:54] * BManojlovic (~steki@cable-89-216-232-224.dynamic.sbb.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[23:56] <TheSov2> using default settings for a pool is bad right
[23:56] * flakrat (~flakrat@fttu-216-41-245-223.btes.tv) has joined #ceph
[23:57] <lurbs> 'osd pool default pg num' defaults far too low, at least.
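The usual rule of thumb from the docs of this period is roughly (number of OSDs x 100) / replica count, rounded up to a power of two; e.g. 36 OSDs at size 3 gives 1200, so 2048. A hedged example of applying that (values are illustrative, not recommendations):

    # ceph.conf defaults picked up by newly created pools
    [global]
    osd pool default pg num  = 2048
    osd pool default pgp num = 2048

    # or explicitly per pool at creation time
    ceph osd pool create mypool 2048 2048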
[23:58] <terje> sean_: why did you choose bs=512
[23:59] * ircolle (~Adium@2601:1:a580:1735:914f:6dfb:ef0a:e28c) Quit (Quit: Leaving.)
[23:59] <TheSov2> 512 is a "normal" block size silly :P
[23:59] <sean_> terje mostly out of habit but also a bit paranoid
[23:59] <TheSov2> the default block size for ext4 is 4k
[23:59] <sean_> yup
[23:59] <TheSov2> so you can realistically use bs=4096 for a good speed

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.