#ceph IRC Log

IRC Log for 2015-05-15

Timestamps are in GMT/BST.

[0:00] * joshd (~jdurgin@198.0.167.145) Quit (Quit: Leaving.)
[0:01] * srk (~srk@32.97.110.56) Quit (Ping timeout: 480 seconds)
[0:05] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Remote host closed the connection)
[0:07] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[0:08] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[0:09] * angdraug (~angdraug@12.164.168.117) Quit ()
[0:10] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[0:16] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[0:17] * cdelatte (~cdelatte@165.166.241.150) has joined #ceph
[0:18] * wschulze (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) Quit (Quit: Leaving.)
[0:18] * segutier (~segutier@173.231.115.58) Quit (Quit: segutier)
[0:23] * osuka_ (~Sami345@7R2AAAXKZ.tor-irc.dnsbl.oftc.net) Quit ()
[0:23] * Kayla (~legion@7R2AAAXLT.tor-irc.dnsbl.oftc.net) has joined #ceph
[0:24] * ChrisNBlum (~ChrisNBlu@178.255.153.117) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[0:25] * Concubidated (~Adium@173-14-159-105-NewEngland.hfc.comcastbusiness.net) has joined #ceph
[0:26] * Concubidated (~Adium@173-14-159-105-NewEngland.hfc.comcastbusiness.net) Quit ()
[0:27] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[0:34] * y (~y@broadband-90-154-64-91.nationalcablenetworks.ru) Quit (Quit: WeeChat 1.0.1)
[0:35] * pvh_sa (~pvh@197.79.9.104) Quit (Ping timeout: 480 seconds)
[0:47] * shakamunyi (~shakamuny@c-67-180-191-38.hsd1.ca.comcast.net) has joined #ceph
[0:49] * vata (~vata@cable-21.246.173-197.electronicbox.net) has joined #ceph
[0:49] * PerlStalker (~PerlStalk@162.220.127.20) Quit (Quit: ...)
[0:53] * Kayla (~legion@7R2AAAXLT.tor-irc.dnsbl.oftc.net) Quit ()
[0:53] * Chaos_Llama (~Maariu5_@185.77.129.54) has joined #ceph
[1:00] * MentalRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Ping timeout: 480 seconds)
[1:00] * shakamunyi (~shakamuny@c-67-180-191-38.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[1:03] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) has joined #ceph
[1:05] * ircolle (~ircolle@c-71-229-136-109.hsd1.co.comcast.net) has left #ceph
[1:05] * segutier (~segutier@199.48.246.178) has joined #ceph
[1:05] * SamYaple_ (~SamYaple@162.209.126.134) Quit (Remote host closed the connection)
[1:07] * silvrax (~silvrax@wilson.xs4all.nl) Quit (Remote host closed the connection)
[1:10] * segutier (~segutier@199.48.246.178) Quit (Read error: Connection reset by peer)
[1:10] * segutier (~segutier@69.80.110.100) has joined #ceph
[1:14] * OutOfNoWhere (~rpb@199.68.195.102) has joined #ceph
[1:17] * jamespage (~jamespage@culvain.gromper.net) Quit (Quit: Coyote finally caught me)
[1:17] * mwilcox (~mwilcox@116.251.192.71) has joined #ceph
[1:17] * shakamunyi (~shakamuny@c-67-180-191-38.hsd1.ca.comcast.net) has joined #ceph
[1:19] * jamespage (~jamespage@culvain.gromper.net) has joined #ceph
[1:20] * segutier (~segutier@69.80.110.100) Quit (Ping timeout: 480 seconds)
[1:23] * Chaos_Llama (~Maariu5_@8Q4AAASIK.tor-irc.dnsbl.oftc.net) Quit ()
[1:23] <zaitcev> magicrobotmonkey: did you try to file a bug somewhere? I'd be interested to have a look.
[1:23] * Vale1 (~delcake@7R2AAAXNT.tor-irc.dnsbl.oftc.net) has joined #ceph
[1:29] * lcurtis (~lcurtis@47.19.105.250) Quit (Ping timeout: 480 seconds)
[1:34] * oms101 (~oms101@p20030057EA020200EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:37] * MrHeavy (~mrheavy@pool-108-54-190-117.nycmny.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[1:39] * ChrisHolcombe (~chris@pool-108-42-124-94.snfcca.fios.verizon.net) has joined #ceph
[1:39] * cholcombe (~chris@pool-108-42-124-94.snfcca.fios.verizon.net) Quit (Read error: Connection reset by peer)
[1:42] * oms101 (~oms101@p20030057EA01BA00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[1:48] * segutier (~segutier@64.119.211.122) has joined #ceph
[1:53] * Vale1 (~delcake@7R2AAAXNT.tor-irc.dnsbl.oftc.net) Quit ()
[1:53] * Kizzi (~MJXII@destiny.enn.lu) has joined #ceph
[1:53] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[1:53] * i_m (~ivan.miro@pool-109-191-92-175.is74.ru) Quit (Ping timeout: 480 seconds)
[2:00] * eford (~fford@93.93.251.146) has joined #ceph
[2:00] * gford (~fford@p509901f2.dip0.t-ipconnect.de) Quit (Read error: Connection reset by peer)
[2:02] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[2:06] * jo00nas (~jonas@188-183-5-254-static.dk.customer.tdc.net) has joined #ceph
[2:06] * jo00nas (~jonas@188-183-5-254-static.dk.customer.tdc.net) Quit ()
[2:14] * wschulze (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) has joined #ceph
[2:21] * segutier_ (~segutier@64.119.211.122) has joined #ceph
[2:23] * Kizzi (~MJXII@5NZAACI6W.tor-irc.dnsbl.oftc.net) Quit ()
[2:23] * Chrissi_ (~Diablodoc@53IAAAXWK.tor-irc.dnsbl.oftc.net) has joined #ceph
[2:24] * segutier (~segutier@64.119.211.122) Quit (Ping timeout: 480 seconds)
[2:24] * segutier_ is now known as segutier
[2:31] * LeaChim (~LeaChim@host86-147-114-198.range86-147.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:32] * wschulze (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) Quit (Quit: Leaving.)
[2:33] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:34] * JV (~chatzilla@204.14.239.17) Quit (Ping timeout: 480 seconds)
[2:39] * MentalRay (~MRay@107.171.161.165) has joined #ceph
[2:47] * bsanders (~bsanders@ip68-111-207-205.sd.sd.cox.net) has joined #ceph
[2:48] <bsanders> If I see messages like "May 14 12:48:06 client1 kernel: net/ceph/libceph: read_partial_message ffff8804cb64c8a0 data crc 292936757 != exp. 3230057996" are they benign? Is that just like some kind of TCP error?
[2:51] * cdelatte (~cdelatte@165.166.241.150) Quit (Quit: This computer has gone to sleep)
[2:51] <bsanders> The above is from a Firefly RBD kernel client's /var/log/messages
[2:52] * wschulze (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) has joined #ceph
[2:53] * Chrissi_ (~Diablodoc@53IAAAXWK.tor-irc.dnsbl.oftc.net) Quit ()
[2:53] * Sirrush (~Unforgive@azura.nullbyte.me) has joined #ceph
[3:00] * wschulze (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) Quit (Quit: Leaving.)
[3:03] * Albert (~chatzilla@123.232.37.90) Quit (Quit: ChatZilla 0.9.91.1 [Firefox 37.0.2/20150415140819])
[3:14] * xarses (~andreww@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[3:16] * kefu (~kefu@114.92.96.62) has joined #ceph
[3:16] * nhm (~nhm@184-97-175-198.mpls.qwest.net) Quit (Read error: Connection reset by peer)
[3:18] * kefu (~kefu@114.92.96.62) Quit (Read error: Connection reset by peer)
[3:19] * kefu (~kefu@114.92.96.62) has joined #ceph
[3:23] * Sirrush (~Unforgive@8Q4AAASKB.tor-irc.dnsbl.oftc.net) Quit ()
[3:23] * w0lfeh (~notarima@195.169.125.226) has joined #ceph
[3:24] * nsoffer (~nsoffer@bzq-79-177-255-116.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[3:33] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[3:43] * diegows (~diegows@190.190.5.238) has joined #ceph
[3:48] * georgem (~Adium@69-196-174-91.dsl.teksavvy.com) has joined #ceph
[3:53] * w0lfeh (~notarima@7R2AAAXRX.tor-irc.dnsbl.oftc.net) Quit ()
[3:53] * shyu (~Shanzhi@119.254.196.66) has joined #ceph
[3:57] * spidu_ (~vend3r@tor.nullbyte.me) has joined #ceph
[4:02] * KevinPerks (~Adium@173-14-159-105-NewEngland.hfc.comcastbusiness.net) has joined #ceph
[4:04] * zhaochao (~zhaochao@125.39.8.226) has joined #ceph
[4:06] * segutier_ (~segutier@64.119.211.122) has joined #ceph
[4:07] * srk (~srk@2602:306:836e:91f0:5137:cb7e:f2:27c0) has joined #ceph
[4:08] * wido_ (~wido@92.63.168.213) has joined #ceph
[4:09] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[4:10] * segutier (~segutier@64.119.211.122) Quit (Ping timeout: 480 seconds)
[4:10] * segutier_ is now known as segutier
[4:12] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) Quit (Read error: Connection reset by peer)
[4:22] * segutier_ (~segutier@64.119.211.122) has joined #ceph
[4:26] * segutier (~segutier@64.119.211.122) Quit (Ping timeout: 480 seconds)
[4:26] * segutier_ is now known as segutier
[4:26] * bsanders (~bsanders@ip68-111-207-205.sd.sd.cox.net) Quit (Quit: leaving)
[4:27] * spidu_ (~vend3r@3OZAABPAS.tor-irc.dnsbl.oftc.net) Quit ()
[4:27] * ZombieL (~zviratko@7R2AAAXTQ.tor-irc.dnsbl.oftc.net) has joined #ceph
[4:30] * dneary (~dneary@pool-96-252-45-212.bstnma.fios.verizon.net) has joined #ceph
[4:33] * mwilcox (~mwilcox@116.251.192.71) Quit (Ping timeout: 480 seconds)
[4:35] * OutOfNoWhere (~rpb@199.68.195.102) Quit (Ping timeout: 480 seconds)
[4:52] * Concubidated (~Adium@173-14-159-105-NewEngland.hfc.comcastbusiness.net) has joined #ceph
[4:52] * mwilcox (~mwilcox@116.251.192.71) has joined #ceph
[4:53] <nigwil> rados df: columns "rd rd KB wr wr KB" are totals of read/write activity since last cluster start or all time?
[4:57] * ZombieL (~zviratko@7R2AAAXTQ.tor-irc.dnsbl.oftc.net) Quit ()
[4:57] * bret1 (~Diablodoc@spftor5e2.privacyfoundation.ch) has joined #ceph
[5:00] * KevinPerks (~Adium@173-14-159-105-NewEngland.hfc.comcastbusiness.net) Quit (Quit: Leaving.)
[5:01] * Concubidated (~Adium@173-14-159-105-NewEngland.hfc.comcastbusiness.net) Quit (Quit: Leaving.)
[5:02] * georgem (~Adium@69-196-174-91.dsl.teksavvy.com) Quit (Quit: Leaving.)
[5:05] * Concubidated (~Adium@173-14-159-105-NewEngland.hfc.comcastbusiness.net) has joined #ceph
[5:12] * mwilcox (~mwilcox@116.251.192.71) Quit (Ping timeout: 480 seconds)
[5:14] * mwilcox (~mwilcox@116.251.192.71) has joined #ceph
[5:21] * kefu (~kefu@114.92.96.62) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[5:22] * segutier_ (~segutier@64.119.211.122) has joined #ceph
[5:24] * kefu (~kefu@114.92.96.62) has joined #ceph
[5:26] * segutier (~segutier@64.119.211.122) Quit (Ping timeout: 480 seconds)
[5:26] * segutier_ is now known as segutier
[5:27] * bret1 (~Diablodoc@2WVAACCTA.tor-irc.dnsbl.oftc.net) Quit ()
[5:27] * jakekosberg (~QuantumBe@bolobolo2.torservers.net) has joined #ceph
[5:30] * ira (~ira@0001cb91.user.oftc.net) Quit (Ping timeout: 480 seconds)
[5:33] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[5:38] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has left #ceph
[5:39] <Nats_> all time i would imagine, not that there's much difference in practice
[5:39] * JV (~chatzilla@static-50-53-42-60.bvtn.or.frontiernet.net) has joined #ceph
[5:48] * dcasier (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[5:52] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[5:52] * ChrisHolcombe (~chris@pool-108-42-124-94.snfcca.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[5:53] * i_m (~ivan.miro@pool-109-191-92-175.is74.ru) has joined #ceph
[5:53] * Vacuum__ (~vovo@88.130.221.13) has joined #ceph
[5:56] * shylesh (~shylesh@121.244.87.124) has joined #ceph
[5:57] * MentalRay (~MRay@107.171.161.165) Quit (Quit: This computer has gone to sleep)
[5:57] <nigwil> Nats_: that would mean the value is persisted somewhere (if it is all-time)?
[5:57] * jakekosberg (~QuantumBe@8Q4AAASMI.tor-irc.dnsbl.oftc.net) Quit ()
[5:57] * Spessu (~vegas3@1.tor.exit.babylon.network) has joined #ceph
[6:00] * Vacuum_ (~vovo@i59F7A442.versanet.de) Quit (Ping timeout: 480 seconds)
[6:08] <Nats_> all the info in 'rados df' would be persisted somewhere
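[ed: As Nats_ says, these are cumulative counters carried with the pool/PG stats, so they survive daemon restarts. For reference, the commands being discussed (a client.admin keyring is assumed):
    rados df
    ceph df detail]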
[6:16] * dcasier (~dcasier@84.197.151.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[6:22] * segutier_ (~segutier@64.119.211.122) has joined #ceph
[6:24] * srk (~srk@2602:306:836e:91f0:5137:cb7e:f2:27c0) Quit (Ping timeout: 480 seconds)
[6:25] * dneary (~dneary@pool-96-252-45-212.bstnma.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[6:25] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[6:25] * amote (~amote@121.244.87.116) has joined #ceph
[6:26] * segutier (~segutier@64.119.211.122) Quit (Ping timeout: 480 seconds)
[6:26] * segutier_ is now known as segutier
[6:26] * pvh_sa (~pvh@197.79.7.124) has joined #ceph
[6:27] * Spessu (~vegas3@8Q4AAASMV.tor-irc.dnsbl.oftc.net) Quit ()
[6:27] * uhtr5r1 (~raindog@chulak.enn.lu) has joined #ceph
[6:29] * kefu (~kefu@114.92.96.62) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:31] * JV_ (~chatzilla@204.14.239.106) has joined #ceph
[6:37] * JV (~chatzilla@static-50-53-42-60.bvtn.or.frontiernet.net) Quit (Ping timeout: 480 seconds)
[6:37] * JV_ is now known as JV
[6:46] * i_m (~ivan.miro@pool-109-191-92-175.is74.ru) Quit (Ping timeout: 480 seconds)
[6:48] * sjmtest (uid32746@id-32746.uxbridge.irccloud.com) has joined #ceph
[6:55] * DV (~veillard@2001:41d0:1:d478::1) Quit (Remote host closed the connection)
[6:56] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[6:57] * uhtr5r1 (~raindog@8Q4AAASNC.tor-irc.dnsbl.oftc.net) Quit ()
[6:57] * CorneliousJD|AtWork (~Frostshif@politkovskaja.torservers.net) has joined #ceph
[7:01] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[7:10] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[7:13] * rdas (~rdas@121.244.87.116) has joined #ceph
[7:15] * tomc (~tomc@64-199-185-54.ip.mcleodusa.net) has joined #ceph
[7:15] <tomc> anyone around tonight? I've got a question about the number of threads in the OSD process.
[7:17] <tomc> Recently was testing a smallish cluster, and had great performance, moved to a larger cluster with more CPU (went from 8 cores to 24), and the memory usage and number of threads absolutely exploded… (from 300-400 threads per OSD to over 1500 threads per OSD)
[7:17] <tomc> On the lower powered cluster, things were fine load wise, now I have a cluster that can barely manage 50MB/s reads and writes with load averages over 500
[7:20] * reed (~reed@75-101-54-131.dsl.static.fusionbroadband.com) Quit (Ping timeout: 480 seconds)
[7:20] * bkopilov (~bkopilov@bzq-79-179-39-86.red.bezeqint.net) Quit (Read error: No route to host)
[7:21] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[7:22] * dcasier (~dcasier@LCaen-656-1-144-187.w217-128.abo.wanadoo.fr) has joined #ceph
[7:22] * bkopilov (~bkopilov@bzq-79-179-39-86.red.bezeqint.net) has joined #ceph
[7:27] * CorneliousJD|AtWork (~Frostshif@5NZAACI9L.tor-irc.dnsbl.oftc.net) Quit ()
[7:27] * TomyLobo1 (~w2k@marylou.nos-oignons.net) has joined #ceph
[7:28] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[7:29] <Nats_> probably something else wrong
[7:29] <Nats_> how many osd's you running per host?
[7:29] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[7:32] * Concubidated1 (~Adium@66.87.125.240) has joined #ceph
[7:35] * macjack (~macjack@122.146.93.152) has joined #ceph
[7:37] <tomc> 36
[7:38] * i_m (~ivan.miro@83.149.35.189) has joined #ceph
[7:38] <Nats_> one thing i encountered that i dont think is well documented is you have to increase kernel.pid_max, could that be your issue?
[7:39] <Nats_> well, its mentioned here: http://ceph.com/docs/master/start/hardware-recommendations/#additional-considerations
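[ed: For reference, the tunable being discussed — a sketch; the value is illustrative, not a recommendation:
    # check the current limit
    sysctl kernel.pid_max
    # raise it at runtime
    sysctl -w kernel.pid_max=4194303
    # persist across reboots
    echo 'kernel.pid_max = 4194303' >> /etc/sysctl.conf]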
[7:40] * Concubidated (~Adium@173-14-159-105-NewEngland.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[7:40] <tomc> yeah, no we've increased that
[7:40] <tomc> we don't run out of threads
[7:40] <Nats_> in regards to your question, i have hosts with 24 osd's and have about 2200 threads per ceph-osd process
[7:40] <tomc> ok…
[7:40] <Nats_> load usually sits around 16
[7:41] <tomc> yeah, so we've got something wrong…
[7:41] <tomc> I don't know where to look though
[7:41] <tomc> we almost always have load average over 50
[7:41] <tomc> with frequent spikes to 300-500
[7:41] <tomc> and we have fewer threads than that…
[7:41] <Nats_> i'd suggest start with the basics; amount of free memory and 'iostat' to see what the disks themselves are doing
[7:42] <Nats_> does "ceph -s" say all placement groups are active+clean ?
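[ed: The basic triage being suggested, as commands — intervals illustrative:
    free -m        # memory and page-cache usage
    iostat -x 5    # per-disk utilization and await
    ceph -s        # overall health; look for active+clean PGs and slow requests]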
[7:43] <tomc> as far as memory is concerned, we had 128GB/node, when we upgraded the CPUs we completely ran out… so now we have 196GB/node, and more than 50% of that is cached, not very much free, 10-15GB free at any time, but tons of cache… iostat is very calm, no disks are ever over 10-15% utilization
[7:43] <tomc> yes
[7:44] <tomc> but we get slow requests quite regularly, and just massive load… it appears the ceph osd processes are very CPU bound… we'll see 0% io wait, 0% idle quite often.. with 3-5 OSD processes using 200-300% CPU
[7:45] <tomc> each that is… so across the 3-5 processes they're completely occupying up to 15 cores
[7:46] <tomc> when we upgraded the CPUs is when we saw this massive jump in load averages, and memory usage
[7:46] <Nats_> that is weird
[7:47] <tomc> so I thought there might be some setting that was based on the number of cores somehow… like it detected the number of cores and then spun up 4x that number of threads per osd or something
[7:47] <Nats_> dont have any first hand advice on that i'm afraid, all my systems are dual-proc 6 core - E5-2620 or similar
[7:48] <tomc> yeah… we went from dual quad to dual hex core… e5-2603 to 2630…
[7:48] <Nats_> oh, pretty similar to mine then
[7:49] * kefu (~kefu@114.92.96.62) has joined #ceph
[7:50] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[7:50] <Nats_> as a counter-bit of data, i avg maybe 300 iops per disk and each ceph-osd process sits around 15% cpu in normal operation
[7:51] * JV (~chatzilla@204.14.239.106) Quit (Ping timeout: 480 seconds)
[7:51] <Nats_> ceph-osd is definitely greedy, can max it out with a benchmark but in normal operation its relatively snoozy
[7:51] <tomc> yeah… I have no idea what to make of it at this point, we can't find a bottleneck in the network (iperf says we should be able to pass traffic just under theoretical max), the disks aren't busy, the cluster is slow… and the OSD processes are using tons of CPU… yeah our whole cluster averages about 300iops… and the ceph-osd processes are all above 50% cpu all the time..
[7:52] * pvh_sa (~pvh@197.79.7.124) Quit (Ping timeout: 480 seconds)
[7:53] <Nats_> might have to get commercial support, certainly something mucked up if cpu usage is so high for so few iops
[7:54] <tomc> yeah… we're looking into that already… just thought I'd ping the chat and see if anyone had any ideas :)
[7:54] <tomc> thanks for your time.
[7:54] * karnan (~karnan@121.244.87.117) has joined #ceph
[7:54] * smithfarm (~ncutler@nat1.scz.suse.com) has joined #ceph
[7:54] <tomc> I'm going to go to bed and try to battle it out again tomorrow
[7:55] * tomc (~tomc@64-199-185-54.ip.mcleodusa.net) Quit (Quit: tomc)
[7:57] * zaitcev (~zaitcev@2001:558:6001:10:61d7:f51f:def8:4b0f) Quit (Quit: Bye)
[7:57] <Nats_> good luck
[7:57] * TomyLobo1 (~w2k@7R2AAAXZT.tor-irc.dnsbl.oftc.net) Quit ()
[7:57] * legion (~rikai@185.72.177.105) has joined #ceph
[8:02] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[8:02] * Hemanth (~Hemanth@121.244.87.117) Quit ()
[8:02] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[8:04] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[8:08] * DV_ (~veillard@2001:41d0:1:d478::1) has joined #ceph
[8:09] * DV (~veillard@2001:41d0:1:d478::1) Quit (Remote host closed the connection)
[8:13] * SamYaple (~SamYaple@162.209.126.134) has joined #ceph
[8:18] * DV_ (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[8:19] * Sysadmin88 (~IceChat77@054527d3.skybroadband.com) Quit (Quit: Do fish get thirsty?)
[8:23] * segutier_ (~segutier@64.119.211.122) has joined #ceph
[8:26] * segutier (~segutier@64.119.211.122) Quit (Ping timeout: 480 seconds)
[8:26] * segutier_ is now known as segutier
[8:27] * legion (~rikai@2WVAACCW3.tor-irc.dnsbl.oftc.net) Quit ()
[8:27] * Kottizen (~Da_Pineap@manning2.torservers.net) has joined #ceph
[8:45] * mxmln (~mxmln@212.79.49.65) has joined #ceph
[8:46] * bobrik_____ (~bobrik@83.243.64.45) Quit (Quit: (null))
[8:53] * b0e (~aledermue@213.95.25.82) has joined #ceph
[8:55] * sjmtest (uid32746@id-32746.uxbridge.irccloud.com) Quit (Quit: Connection closed for inactivity)
[8:57] * Kottizen (~Da_Pineap@2WVAACCXP.tor-irc.dnsbl.oftc.net) Quit ()
[8:57] * isaxi (~Kyso_@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[9:08] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[9:10] * analbeard (~shw@support.memset.com) has joined #ceph
[9:13] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) has joined #ceph
[9:14] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[9:15] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[9:15] * cok (~chk@2a02:2350:18:1010:f902:2856:617c:91bf) has joined #ceph
[9:18] * bobrik_____ (~bobrik@109.167.249.178) has joined #ceph
[9:21] * nsoffer (~nsoffer@bzq-79-177-255-116.red.bezeqint.net) has joined #ceph
[9:21] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[9:23] * segutier_ (~segutier@64.119.211.122) has joined #ceph
[9:24] * segutier (~segutier@64.119.211.122) Quit (Ping timeout: 480 seconds)
[9:24] * segutier_ is now known as segutier
[9:25] * dcasier (~dcasier@LCaen-656-1-144-187.w217-128.abo.wanadoo.fr) Quit (Remote host closed the connection)
[9:27] * isaxi (~Kyso_@2WVAACCYE.tor-irc.dnsbl.oftc.net) Quit ()
[9:27] * Grimhound (~aleksag@212.7.194.71) has joined #ceph
[9:33] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) Quit (Ping timeout: 480 seconds)
[9:33] * pvh_sa (~pvh@41.164.8.114) has joined #ceph
[9:40] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) has joined #ceph
[9:42] * fam is now known as fam_away
[9:42] * fam_away is now known as fam
[9:42] * Concubidated1 (~Adium@66.87.125.240) Quit (Quit: Leaving.)
[9:44] * macjack (~macjack@122.146.93.152) has left #ceph
[9:45] * calvinx (~calvin@101.100.172.246) has joined #ceph
[9:47] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[9:47] * ChrisNBlum (~ChrisNBlu@178.255.153.117) has joined #ceph
[9:47] * calvinx (~calvin@101.100.172.246) Quit ()
[9:48] * calvinx (~calvin@101.100.172.246) has joined #ceph
[9:53] <smerz> the operations per second from the ceph output. what kind of operations are these. all the OSD op's summed up or? so if I do 1 write IO with replication = 3, I do 3 op/s + 3 op/s when the journal gets flushed to disk right ?
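[ed: One plausible accounting, assuming filestore OSDs with on-disk journals: 1 client write with replication 3 fans out to 3 OSD ops, and each OSD writes its journal first and later flushes the same data to the data disk, so on the order of 6 physical disk writes per logical client write.]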
[9:56] * rotbeard (~redbeard@x5f74fd35.dyn.telefonica.de) has joined #ceph
[9:57] * Grimhound (~aleksag@5NZAACJBE.tor-irc.dnsbl.oftc.net) Quit ()
[10:02] * Kurimus1 (~Corneliou@8Q4AAASRF.tor-irc.dnsbl.oftc.net) has joined #ceph
[10:04] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Ping timeout: 480 seconds)
[10:07] * jordanP (~jordan@213.215.2.194) has joined #ceph
[10:08] * pdrakeweb (~pdrakeweb@cpe-65-185-74-239.neo.res.rr.com) Quit (Ping timeout: 480 seconds)
[10:08] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) has joined #ceph
[10:16] * bjornar (~bjornar@109.247.131.38) has joined #ceph
[10:17] * ChrisNBlum (~ChrisNBlu@178.255.153.117) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[10:21] * dcasier (~dcasier@LCaen-656-1-144-187.w217-128.abo.wanadoo.fr) has joined #ceph
[10:22] * PaulC (~paul@222-153-122-160.jetstream.xtra.co.nz) Quit (Ping timeout: 480 seconds)
[10:23] * segutier_ (~segutier@64.119.211.122) has joined #ceph
[10:26] * derjohn_mob (~aj@ip-95-223-126-17.hsi16.unitymediagroup.de) Quit (Ping timeout: 480 seconds)
[10:26] * segutier (~segutier@64.119.211.122) Quit (Ping timeout: 480 seconds)
[10:26] * segutier_ is now known as segutier
[10:28] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) Quit (Ping timeout: 480 seconds)
[10:32] * Kurimus1 (~Corneliou@8Q4AAASRF.tor-irc.dnsbl.oftc.net) Quit ()
[10:32] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[10:34] * brutuscat (~brutuscat@74.Red-88-8-87.dynamicIP.rima-tde.net) has joined #ceph
[10:35] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) has joined #ceph
[10:41] * ade (~abradshaw@dslb-188-100-068-120.188.100.pools.vodafone-ip.de) has joined #ceph
[10:50] * PaulC (~paul@222-153-122-160.jetstream.xtra.co.nz) has joined #ceph
[10:52] * kefu (~kefu@114.92.96.62) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[10:59] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[11:00] * dustinm` (~dustinm`@105.ip-167-114-152.net) Quit (Ping timeout: 480 seconds)
[11:02] * puvo (~WedTM@nx-01.tor-exit.network) has joined #ceph
[11:06] * derjohn_mob (~aj@b2b-94-79-172-98.unitymedia.biz) has joined #ceph
[11:06] * kefu (~kefu@114.92.96.62) has joined #ceph
[11:07] * dustinm` (~dustinm`@2607:5300:100:200::160d) has joined #ceph
[11:09] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) has joined #ceph
[11:16] * kefu (~kefu@114.92.96.62) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[11:18] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Remote host closed the connection)
[11:23] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[11:23] * smithfarm (~ncutler@nat1.scz.suse.com) has left #ceph
[11:24] * segutier_ (~segutier@64.119.211.122) has joined #ceph
[11:27] * segutier (~segutier@64.119.211.122) Quit (Ping timeout: 480 seconds)
[11:27] * segutier_ is now known as segutier
[11:32] * puvo (~WedTM@2WVAACC1M.tor-irc.dnsbl.oftc.net) Quit ()
[11:32] * LorenXo (~dusti@81-89-96-90.blue.kundencontroller.de) has joined #ceph
[11:41] * vbellur (~vijay@122.178.208.142) has joined #ceph
[11:49] * zW (~wesley@spider.pfoe.be) has joined #ceph
[11:59] * calvinx (~calvin@101.100.172.246) Quit (Quit: calvinx)
[12:02] * LorenXo (~dusti@5NZAACJDS.tor-irc.dnsbl.oftc.net) Quit ()
[12:02] * sixofour1 (~Eric@ncc-1701-d.tor-exit.network) has joined #ceph
[12:04] * Hemanth (~Hemanth@121.244.87.117) Quit (Quit: Leaving)
[12:10] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[12:15] * ChrisNBlum (~ChrisNBlu@178.255.153.117) has joined #ceph
[12:24] * segutier_ (~segutier@64.119.211.122) has joined #ceph
[12:26] * ChrisNBlum (~ChrisNBlu@178.255.153.117) Quit (Quit: Goodbye)
[12:27] * segutier (~segutier@64.119.211.122) Quit (Ping timeout: 480 seconds)
[12:27] * segutier_ is now known as segutier
[12:32] * sixofour1 (~Eric@2WVAACC2Z.tor-irc.dnsbl.oftc.net) Quit ()
[12:32] * Gecko19861 (~brannmar@185.77.129.54) has joined #ceph
[12:35] * bene (~ben@c-24-60-237-191.hsd1.nh.comcast.net) has joined #ceph
[12:35] * mwilcox (~mwilcox@116.251.192.71) Quit (Ping timeout: 480 seconds)
[12:37] * segutier (~segutier@64.119.211.122) Quit (Ping timeout: 480 seconds)
[12:40] * segutier (~segutier@64.119.211.122) has joined #ceph
[12:43] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Read error: Connection reset by peer)
[12:43] * fam is now known as fam_away
[12:43] * cok (~chk@2a02:2350:18:1010:f902:2856:617c:91bf) has left #ceph
[12:44] * wschulze (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) has joined #ceph
[13:02] * Gecko19861 (~brannmar@5NZAACJEP.tor-irc.dnsbl.oftc.net) Quit ()
[13:02] * drupal (~ghostnote@tor-exit01.solidonetworks.com) has joined #ceph
[13:03] * pethani_ (~oftc-webi@134.76.222.225) has joined #ceph
[13:04] <pethani_> hi
[13:05] <pethani_> i have some confusion regarding ceph node
[13:05] <pethani_> is there anyone??
[13:08] <smerz> i'd suggest posting your question and wait for answers. often works the best
[13:09] <pethani_> thanks smerz
[13:09] <pethani_> i am new to ceph and cluster.
[13:10] <pethani_> now i am trying to install ceph
[13:10] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) Quit (Remote host closed the connection)
[13:11] <pethani_> installation guide starts from the admin-deploy step.
[13:11] <pethani_> but my question is how to create 3 nodes on one machine
[13:11] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[13:11] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[13:11] <smerz> using virtualization maybe? so you have 3 vm's essentially
[13:12] <smerz> i assume this is for testing purposes right ?
[13:12] <pethani_> yes
[13:12] <pethani_> otherwise, do i have to use three physical machines??
[13:13] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) has joined #ceph
[13:13] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) Quit ()
[13:13] <smerz> yes. i mean there is no point to build a redundant ceph cluster and have it running on one physical machine (single point of failure)
[13:13] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) has joined #ceph
[13:14] <pethani_> is there any other option without virtualization??
[13:15] <smerz> well you could place 3 osd's on one machine (for testing purposes)
[13:15] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[13:16] <pethani_> ok thats sound good for me
[13:16] <pethani_> but how can i create 3 osd ??
[13:16] * hellertime (~Adium@72.246.0.14) has joined #ceph
[13:17] <pethani_> without ceph installation
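[ed: For the record, one way to get a throwaway multi-OSD cluster on a single machine is the developer script shipped in the Ceph source tree — a sketch, assuming a built source checkout; not for production:
    cd ceph/src
    MON=1 OSD=3 MDS=0 ./vstart.sh -d -n -x
    ./ceph -c ./ceph.conf -s]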
[13:17] * hellertime (~Adium@72.246.0.14) Quit ()
[13:17] * calvinx (~calvin@101.100.172.246) has joined #ceph
[13:17] * hellertime (~Adium@72.246.0.14) has joined #ceph
[13:20] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) has joined #ceph
[13:20] * ChanServ sets mode +o elder
[13:21] * KevinPerks (~Adium@173-14-159-105-NewEngland.hfc.comcastbusiness.net) has joined #ceph
[13:32] * drupal (~ghostnote@5NZAACJE3.tor-irc.dnsbl.oftc.net) Quit ()
[13:32] * DoDzy (~nupanick@7R2AAAX93.tor-irc.dnsbl.oftc.net) has joined #ceph
[13:36] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[13:37] * brutuscat (~brutuscat@74.Red-88-8-87.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[13:39] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[13:39] * pethani_ (~oftc-webi@134.76.222.225) Quit (Quit: Page closed)
[13:40] * branto (~branto@nat-pool-brq-t.redhat.com) has joined #ceph
[13:42] <dcasier> smerz, do you mean that it's not recommended to have many hosts on the same server ?
[13:42] * shyu (~Shanzhi@119.254.196.66) Quit (Remote host closed the connection)
[13:47] <dcasier> with multiple OSDs in a server, a host is only a crush representation for me. No ?
[13:47] * dcasier is now known as davidc
[13:47] * davidc is now known as david_dcc
[13:54] * bobrik______ (~bobrik@109.167.249.178) has joined #ceph
[13:54] * bobrik_____ (~bobrik@109.167.249.178) Quit (Read error: Connection reset by peer)
[13:56] <smerz> david_dcc, you can and should have multiple osd's per host. 1 osd per disk. i'm not sure if that answers your question
[13:57] <david_dcc> i work with 1hdd = 1osd
[13:58] <david_dcc> but to avoid local network, i create 1host/osd
[13:58] <smerz> i'm not following. sorry :(
[13:59] <david_dcc> 1 server with 24 HDD
[14:00] <david_dcc> 24 OSD / server
[14:00] <david_dcc> 24 host / server
[14:00] <david_dcc> 3 server with 3 replication
[14:01] <david_dcc> and 1 rbd map on couple of 3 hosts
[14:02] * DoDzy (~nupanick@7R2AAAX93.tor-irc.dnsbl.oftc.net) Quit ()
[14:02] * CoZmicShReddeR (~hyst@nx-01.tor-exit.network) has joined #ceph
[14:02] * georgem (~Adium@184.151.179.164) has joined #ceph
[14:03] <hellertime> what tools can I make use of to evaluate the performance of a ceph cluster. is `ceph -w` sufficient for getting a high level view of IO performance?
[14:03] * pdrakeweb (~pdrakeweb@cpe-65-185-74-239.neo.res.rr.com) has joined #ceph
[14:06] <david_dcc> hellertime, iostat, iotop, iftop, fio, ...
[14:08] * joerocklin (~joe@cpe-65-185-149-56.cinci.res.rr.com) has joined #ceph
[14:09] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[14:10] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[14:10] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:15] * zhaochao (~zhaochao@125.39.8.226) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 38.0/20150513111830])
[14:17] * KevinPerks (~Adium@173-14-159-105-NewEngland.hfc.comcastbusiness.net) Quit (Quit: Leaving.)
[14:19] <jeroenvh> I would say FIO yeah
[14:20] <jeroenvh> run that on multiple guests/users at the same time to see the parallel performance
[14:23] <david_dcc> And with num_jobs, libaio+iodepth
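[ed: An illustrative fio invocation along those lines — device path, block size and depths are assumptions, not recommendations:
    fio --name=randwrite --filename=/dev/rbd0 --rw=randwrite --bs=4k \
        --ioengine=libaio --iodepth=32 --numjobs=4 --direct=1 \
        --runtime=60 --group_reporting]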
[14:25] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[14:32] * CoZmicShReddeR (~hyst@7R2AAAYAW.tor-irc.dnsbl.oftc.net) Quit ()
[14:32] * WedTM (~CoMa@185.77.129.54) has joined #ceph
[14:35] * srk (~srk@2602:306:836e:91f0:48a0:d8d8:8641:5eaa) has joined #ceph
[14:35] * georgem (~Adium@184.151.179.164) Quit (Quit: Leaving.)
[14:38] * brutuscat (~brutuscat@74.Red-88-8-87.dynamicIP.rima-tde.net) has joined #ceph
[14:38] * nsoffer (~nsoffer@bzq-79-177-255-116.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[14:40] * Flynn (~stefan@89.207.24.152) has joined #ceph
[14:43] <Flynn> I have a storage node with 24 disk slots. I plan to put in 9xSATA 2T, 9xSAS 1.2T and 6x 200GB SSD. Plan is to use 3x 66GB journal partition on each SSD to function as a journal for 3 OSD's. Question: what happens when an SSD fails? Can Ceph recover from that, when my pool size == 2?
[14:44] <hellertime> jeroenvh, david_dcc: hmm ok. but those tools all assume I've mounted a ceph fs right? what about direct rados performance?
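[ed: For raw RADOS performance (no filesystem or RBD in the path), the bundled benchmark is the usual tool — a sketch; pool name and duration are illustrative:
    rados bench -p testpool 60 write --no-cleanup
    rados bench -p testpool 60 seq     # sequential reads of the objects just written
    rados -p testpool cleanup]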
[14:44] <smerz> if you lose a ssd/journal you lose all osd's behind it
[14:45] <SamYaple> smerz: yes
[14:46] <Flynn> Is Ceph smart enough to not put my objects' replicas on OSD's that share the same physical journal SSD?
[14:46] * brutuscat (~brutuscat@74.Red-88-8-87.dynamicIP.rima-tde.net) Quit (Ping timeout: 480 seconds)
[14:46] <smerz> well yes and no. you can tell ceph how to distribute your data. keep copies on different hosts and/or keep it on different racks etc
[14:46] * shylesh (~shylesh@121.244.87.124) Quit (Remote host closed the connection)
[14:46] <SamYaple> Flynn: it wont put them on the same host if at all possible
[14:47] <smerz> indeed
[14:47] <Flynn> OK, so I have to manually make sure my crushmap is OK
[14:47] <smerz> yeah
[14:47] <Flynn> sure thing
[14:47] <Flynn> thanks
[14:47] * nsoffer (~nsoffer@bzq-109-64-255-238.red.bezeqint.net) has joined #ceph
[14:47] * kefu (~kefu@114.92.96.62) has joined #ceph
[14:47] <smerz> your crush map should reflect your failure domains (nodes, racks). so that it has a copy somewhere else at all times to recover
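[ed: A minimal rule of the kind being described, in decompiled-crushmap syntax — a sketch; names and the ruleset number are illustrative:
    rule replicated_per_host {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
    }]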
[14:47] * srk (~srk@2602:306:836e:91f0:48a0:d8d8:8641:5eaa) Quit (Ping timeout: 480 seconds)
[14:48] <m0zes> also, you will probably want to know the failure mode for your ssd.
[14:48] <m0zes> some of them fail horrendously. (i.e. bricking themselves without warning)
[14:50] * KevinPerks (~Adium@nat-pool-bos-t.redhat.com) has joined #ceph
[14:50] <m0zes> if they simply go into a readonly mode (even if it is just until the next reboot) you could flush the journals, turn off the osds and replace the disk.
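[ed: A sketch of that flow for one OSD, assuming the SSD is still readable — the OSD id and service commands are illustrative and distro-dependent:
    service ceph stop osd.12
    ceph-osd -i 12 --flush-journal
    # replace the SSD, recreate the journal partition/symlink, then:
    ceph-osd -i 12 --mkjournal
    service ceph start osd.12]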
[14:51] * Concubidated (~Adium@nat-pool-bos-t.redhat.com) has joined #ceph
[14:52] * shohn (~shohn@nat-pool-bos-u.redhat.com) has joined #ceph
[14:54] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[14:55] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[14:58] <Flynn> m0zes: it seems to me that it is hard to know how they will fail… I use Intel S3700's. Is the failure mode something that is determined by the brand/type?
[14:59] * bobrik______ (~bobrik@109.167.249.178) Quit (Quit: (null))
[15:00] * tupper (~tcole@2001:420:2280:1272:8900:f9b8:3b49:567e) has joined #ceph
[15:00] * bandrus (~brian@nat-pool-bos-t.redhat.com) has joined #ceph
[15:01] * bobrik______ (~bobrik@109.167.249.178) has joined #ceph
[15:02] * WedTM (~CoMa@2WVAACC51.tor-irc.dnsbl.oftc.net) Quit ()
[15:06] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[15:09] * jskinner (~jskinner@host-95-2-129.infobunker.com) has joined #ceph
[15:10] * linuxkidd (~linuxkidd@nat-pool-bos-t.redhat.com) has joined #ceph
[15:12] * nsoffer (~nsoffer@bzq-109-64-255-238.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[15:13] <m0zes> fwict, it is determined by the controller manufacturer/firmware. it is something that I am trying to find for most SSDs as well.
[15:14] * wushudoin (~wushudoin@nat-pool-bos-u.redhat.com) has joined #ceph
[15:17] * brutuscat (~brutuscat@74.Red-88-8-87.dynamicIP.rima-tde.net) has joined #ceph
[15:18] * sjm (~sjm@nat-pool-bos-t.redhat.com) has joined #ceph
[15:19] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[15:19] * kefu (~kefu@114.92.96.62) Quit (Max SendQ exceeded)
[15:20] * primechu_ (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[15:20] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Read error: Connection reset by peer)
[15:22] * dyasny (~dyasny@198.251.54.234) has joined #ceph
[15:23] * david_dcc (~dcasier@LCaen-656-1-144-187.w217-128.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[15:24] * kefu (~kefu@114.92.96.62) has joined #ceph
[15:28] * amote (~amote@121.244.87.116) Quit (Quit: Leaving)
[15:28] * mwilcox (~mwilcox@116.251.192.71) has joined #ceph
[15:28] * amote (~amote@121.244.87.116) has joined #ceph
[15:30] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) has joined #ceph
[15:30] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) Quit ()
[15:33] * redbeast12 (~Hejt@chomsky.torservers.net) has joined #ceph
[15:35] * smerz (~ircircirc@37.74.194.90) Quit (Quit: Leaving)
[15:36] * mwilcox (~mwilcox@116.251.192.71) Quit (Ping timeout: 480 seconds)
[15:43] * alram (~alram@nat-pool-bos-t.redhat.com) has joined #ceph
[15:43] * srk (~srk@104-54-233-31.lightspeed.austtx.sbcglobal.net) has joined #ceph
[15:46] * jdillaman (~jdillaman@166.170.33.136) has joined #ceph
[15:50] * nsoffer (~nsoffer@bzq-109-64-255-238.red.bezeqint.net) has joined #ceph
[15:55] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[15:55] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has left #ceph
[16:01] * srk (~srk@104-54-233-31.lightspeed.austtx.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[16:02] * redbeast12 (~Hejt@5NZAACJH6.tor-irc.dnsbl.oftc.net) Quit ()
[16:02] * Inuyasha (~Hideous@91.219.236.218) has joined #ceph
[16:02] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[16:10] * amote (~amote@121.244.87.116) Quit (Quit: Leaving)
[16:11] * fmanana (~fdmanana@bl13-130-47.dsl.telepac.pt) has joined #ceph
[16:18] * fdmanana (~fdmanana@bl4-177-189.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[16:25] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[16:26] * alram (~alram@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[16:28] * david_dcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[16:30] * branto (~branto@nat-pool-brq-t.redhat.com) has left #ceph
[16:30] * georgem1 (~Adium@fwnat.oicr.on.ca) has joined #ceph
[16:30] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Read error: Connection reset by peer)
[16:32] * Inuyasha (~Hideous@7R2AAAYEN.tor-irc.dnsbl.oftc.net) Quit ()
[16:32] * alram (~alram@nat-pool-bos-t.redhat.com) has joined #ceph
[16:32] * sixofour (~Redshift@2WVAACC8M.tor-irc.dnsbl.oftc.net) has joined #ceph
[16:40] * pvh_sa (~pvh@41.164.8.114) Quit (Ping timeout: 480 seconds)
[16:41] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[16:47] * ircolle (~ircolle@c-71-229-136-109.hsd1.co.comcast.net) has joined #ceph
[16:55] * jdillaman (~jdillaman@166.170.33.136) Quit (Ping timeout: 480 seconds)
[17:01] * jdillaman (~jdillaman@mobile-166-171-059-196.mycingular.net) has joined #ceph
[17:02] * sixofour (~Redshift@2WVAACC8M.tor-irc.dnsbl.oftc.net) Quit ()
[17:02] * zc00gii (~Jyron@176.10.99.206) has joined #ceph
[17:04] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[17:06] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:09] * srk (~srk@32.97.110.56) has joined #ceph
[17:12] * nsoffer (~nsoffer@bzq-109-64-255-238.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[17:14] * Flynn (~stefan@89.207.24.152) Quit (Quit: Flynn)
[17:17] * KevinPerks1 (~Adium@nat-pool-bos-t.redhat.com) has joined #ceph
[17:18] * KevinPerks (~Adium@nat-pool-bos-t.redhat.com) Quit (Read error: Connection reset by peer)
[17:22] * alram (~alram@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[17:22] * jdillaman (~jdillaman@mobile-166-171-059-196.mycingular.net) Quit (Ping timeout: 480 seconds)
[17:22] * derjohn_mob (~aj@b2b-94-79-172-98.unitymedia.biz) Quit (Ping timeout: 480 seconds)
[17:23] * rwheeler_ (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[17:27] * daniel2_ (~dshafer@0001b605.user.oftc.net) has joined #ceph
[17:27] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Remote host closed the connection)
[17:31] * jdillaman (~jdillaman@mobile-166-171-059-153.mycingular.net) has joined #ceph
[17:31] * alram (~alram@nat-pool-bos-t.redhat.com) has joined #ceph
[17:32] * zc00gii (~Jyron@8Q4AAAS0N.tor-irc.dnsbl.oftc.net) Quit ()
[17:38] * verbalins (~dug@81-89-96-91.blue.kundencontroller.de) has joined #ceph
[17:40] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[17:43] * bandrus1 (~brian@nat-pool-bos-t.redhat.com) has joined #ceph
[17:43] * bandrus (~brian@nat-pool-bos-t.redhat.com) Quit (Read error: Connection reset by peer)
[17:52] * i_m (~ivan.miro@83.149.35.189) Quit (Ping timeout: 480 seconds)
[17:53] * mxmln (~mxmln@212.79.49.65) Quit (Ping timeout: 480 seconds)
[17:53] * jdillaman (~jdillaman@mobile-166-171-059-153.mycingular.net) Quit (Ping timeout: 480 seconds)
[17:54] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) Quit (Remote host closed the connection)
[17:56] * linjan (~linjan@109.66.32.222) has joined #ceph
[17:58] * KevinPerks1 (~Adium@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving.)
[17:58] * sjm (~sjm@nat-pool-bos-t.redhat.com) has left #ceph
[17:58] * shakamunyi (~shakamuny@c-67-180-191-38.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[18:01] * vata (~vata@cable-21.246.173-197.electronicbox.net) Quit (Quit: Leaving.)
[18:01] * alram (~alram@nat-pool-bos-t.redhat.com) Quit (Read error: Connection reset by peer)
[18:03] * ade (~abradshaw@dslb-188-100-068-120.188.100.pools.vodafone-ip.de) Quit (Quit: Too sexy for his shirt)
[18:04] * JV (~chatzilla@204.14.239.17) has joined #ceph
[18:04] * bene (~ben@c-24-60-237-191.hsd1.nh.comcast.net) Quit (Quit: Konversation terminated!)
[18:05] * alram (~alram@nat-pool-bos-t.redhat.com) has joined #ceph
[18:08] * verbalins (~dug@7R2AAAYHM.tor-irc.dnsbl.oftc.net) Quit ()
[18:08] * Snowman (~elt@46.182.106.190) has joined #ceph
[18:12] * lkoranda (~lkoranda@213.175.37.10) Quit (Ping timeout: 480 seconds)
[18:12] * jordanP (~jordan@213.215.2.194) Quit (Quit: Leaving)
[18:13] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) has joined #ceph
[18:15] * ChrisHolcombe (~chris@pool-108-42-124-94.snfcca.fios.verizon.net) has joined #ceph
[18:17] * fghaas (~florian@93-82-2-10.adsl.highway.telekom.at) has joined #ceph
[18:18] * lkoranda (~lkoranda@nat-pool-brq-t.redhat.com) has joined #ceph
[18:20] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[18:20] * linjan (~linjan@109.66.32.222) Quit (Ping timeout: 480 seconds)
[18:21] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) Quit (Quit: Leaving)
[18:23] * xarses (~andreww@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:25] * alram (~alram@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[18:26] * alram (~alram@nat-pool-bos-t.redhat.com) has joined #ceph
[18:27] * fghaas (~florian@93-82-2-10.adsl.highway.telekom.at) Quit (Quit: Leaving.)
[18:28] * calvinx (~calvin@101.100.172.246) Quit (Quit: calvinx)
[18:33] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[18:33] * primechu_ (~primechuc@host-95-2-129.infobunker.com) Quit (Read error: Connection reset by peer)
[18:34] * Sysadmin88 (~IceChat77@054527d3.skybroadband.com) has joined #ceph
[18:34] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Ping timeout: 480 seconds)
[18:35] * analbeard (~shw@host86-140-202-228.range86-140.btcentralplus.com) has joined #ceph
[18:36] * rotbeard (~redbeard@x5f74fd35.dyn.telefonica.de) Quit (Quit: Leaving)
[18:36] * Sysadmin88 (~IceChat77@054527d3.skybroadband.com) Quit ()
[18:38] * Snowman (~elt@9U1AAAU66.tor-irc.dnsbl.oftc.net) Quit ()
[18:38] * zc00gii (~starcoder@digi00277.torproxy-readme-arachnide-fr-35.fr) has joined #ceph
[18:40] * yghannam (~yghannam@0001f8aa.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:44] * pvh_sa (~pvh@197.79.8.147) has joined #ceph
[18:47] * subscope (~subscope@92-249-244-29.pool.digikabel.hu) has joined #ceph
[18:54] * chrome0 (~chrome0@static.202.35.46.78.clients.your-server.de) Quit (Read error: Connection reset by peer)
[19:00] * xarses (~andreww@12.164.168.117) has joined #ceph
[19:01] * brutuscat (~brutuscat@74.Red-88-8-87.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[19:01] * joerocklin (~joe@cpe-65-185-149-56.cinci.res.rr.com) Quit (Quit: Leaving)
[19:02] * MentalRay (~MRay@142.169.78.134) has joined #ceph
[19:03] * brutuscat (~brutuscat@74.Red-88-8-87.dynamicIP.rima-tde.net) has joined #ceph
[19:03] * linjan (~linjan@109.66.32.222) has joined #ceph
[19:06] * shaunm (~shaunm@74.215.76.114) Quit (Ping timeout: 480 seconds)
[19:08] * zc00gii (~starcoder@7R2AAAYJT.tor-irc.dnsbl.oftc.net) Quit ()
[19:08] * analbeard (~shw@host86-140-202-228.range86-140.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[19:08] * dontron (~Defaultti@81-89-96-91.blue.kundencontroller.de) has joined #ceph
[19:08] * analbeard (~shw@support.memset.com) has joined #ceph
[19:10] * fghaas (~florian@93-82-2-10.adsl.highway.telekom.at) has joined #ceph
[19:12] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) has joined #ceph
[19:22] * fghaas (~florian@93-82-2-10.adsl.highway.telekom.at) Quit (Quit: Leaving.)
[19:25] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Quit: Exeunt dneary)
[19:25] * psiekl (psiekl@wombat.eu.org) Quit (Quit: leaving)
[19:26] * MentalRay (~MRay@142.169.78.134) Quit (Ping timeout: 480 seconds)
[19:28] * sage (~quassel@2607:f298:6050:709d:c4fb:ae8c:37cd:f3ad) Quit (Remote host closed the connection)
[19:30] * linjan (~linjan@109.66.32.222) Quit (Ping timeout: 480 seconds)
[19:32] * MentalRay (~MRay@142.169.78.250) has joined #ceph
[19:33] * psiekl (psiekl@wombat.eu.org) has joined #ceph
[19:33] * sjmtest (uid32746@id-32746.uxbridge.irccloud.com) has joined #ceph
[19:37] * murmur (~murmur@zeeb.org) has joined #ceph
[19:38] * dontron (~Defaultti@7R2AAAYKT.tor-irc.dnsbl.oftc.net) Quit ()
[19:38] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[19:39] * brutuscat (~brutuscat@74.Red-88-8-87.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[19:40] * fghaas (~florian@93-82-2-10.adsl.highway.telekom.at) has joined #ceph
[19:41] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[19:43] * kefu (~kefu@114.92.96.62) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[19:44] * MentalRay (~MRay@142.169.78.250) Quit (Ping timeout: 480 seconds)
[19:44] * pvh_sa_ (~pvh@197.79.10.255) has joined #ceph
[19:44] * subscope (~subscope@92-249-244-29.pool.digikabel.hu) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[19:46] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[19:47] <stj> hi all, i might need to shut down my ceph cluster due to a temperature issue in our datacenter... what's the recommended order to turn ceph off and back on again?
[19:47] * alram (~alram@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[19:48] <gregsfortytwo> probably set the nodown and noout flags, then turn off everything
[19:48] <gregsfortytwo> monitors last
[19:48] <gregsfortytwo> turn everything on, but the monitors first
[19:48] <gregsfortytwo> unset the nodown/noout flags
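[ed: The same sequence as commands:
    ceph osd set nodown
    ceph osd set noout
    # ...shut everything down, monitors last; power up monitors first...
    ceph osd unset nodown
    ceph osd unset noout]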
[19:48] * pvh_sa (~pvh@197.79.8.147) Quit (Ping timeout: 480 seconds)
[19:48] * subscope (~subscope@92-249-244-29.pool.digikabel.hu) has joined #ceph
[19:49] <gregsfortytwo> that will be the most pleasant, but it should all work in whatever order
[19:49] <stj> ok, that sounds about like what I was thinking
[19:49] <stj> thanks :)
[19:50] * bandrus (~brian@nat-pool-bos-t.redhat.com) has joined #ceph
[19:51] * bandrus1 (~brian@nat-pool-bos-t.redhat.com) Quit (Read error: Connection reset by peer)
[19:54] * fghaas (~florian@93-82-2-10.adsl.highway.telekom.at) Quit (Ping timeout: 480 seconds)
[19:56] * alram (~alram@nat-pool-bos-t.redhat.com) has joined #ceph
[20:00] * hellertime1 (~Adium@72.246.0.14) has joined #ceph
[20:00] * hellertime (~Adium@72.246.0.14) Quit (Read error: Connection reset by peer)
[20:02] * fghaas (~florian@93-82-2-10.adsl.highway.telekom.at) has joined #ceph
[20:07] * Kingrat (~shiny@2605:a000:1607:4000:8d36:3c4a:2f24:84e) Quit (Ping timeout: 480 seconds)
[20:07] * alram (~alram@nat-pool-bos-t.redhat.com) Quit (Read error: Connection reset by peer)
[20:08] * JamesHarrison (~delcake@176.10.99.201) has joined #ceph
[20:09] * sjm (~sjm@209.117.47.248) has joined #ceph
[20:09] * vbellur (~vijay@122.178.208.142) Quit (Ping timeout: 480 seconds)
[20:10] * fghaas (~florian@93-82-2-10.adsl.highway.telekom.at) Quit (Quit: Leaving.)
[20:11] * shohn (~shohn@nat-pool-bos-u.redhat.com) Quit (Quit: Leaving.)
[20:13] * linuxkidd (~linuxkidd@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[20:13] * bandrus (~brian@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving.)
[20:15] * Concubidated (~Adium@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving.)
[20:16] * Kingrat (~shiny@2605:a000:1607:4000:659d:2201:c8da:b0d) has joined #ceph
[20:16] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[20:17] * segutier_ (~segutier@128.90.32.75) has joined #ceph
[20:17] <devicenull> I'm running into an issue where I have a lot of concurrent clients trying to read the same rbd volume... this is leading to all their activity taking forever (presumably because they're all hitting the same OSD, and maxing out the interface)
[20:17] <devicenull> is there a way to distribute reads across all the replicas?
[20:17] <devicenull> or, any other suggestions to solve this that dont involve upgrading the connection of every OSD?
[20:17] <devicenull> would a cache pool with a fast connection help?
[20:18] * segutier (~segutier@64.119.211.122) Quit (Ping timeout: 480 seconds)
[20:18] * segutier_ is now known as segutier
[20:20] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[20:22] * BManojlovic (~steki@87.116.175.145) has joined #ceph
[20:24] * georgem1 (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[20:25] * segutier_ (~segutier@64.119.211.122) has joined #ceph
[20:25] * nsoffer (~nsoffer@bzq-109-64-255-238.red.bezeqint.net) has joined #ceph
[20:26] * TheSov (~TheSov@cip-248.trustwave.com) has joined #ceph
[20:26] * segutier (~segutier@128.90.32.75) Quit (Ping timeout: 480 seconds)
[20:27] <TheSov> can someone tell me what the state of active-active multipathing is for rbd/tgt on ceph? I was looking and the closest thing i could find was a year back
[20:28] * wushudoin (~wushudoin@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[20:29] * liewegas (~quassel@2607:f298:6050:709d:78f2:43f7:c257:a19a) has joined #ceph
[20:30] * liewegas is now known as sage
[20:30] * ifur (~osm@0001f63e.user.oftc.net) Quit (Quit: leaving)
[20:31] * segutier_ is now known as segutier
[20:31] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[20:32] * ifur (~osm@0001f63e.user.oftc.net) has joined #ceph
[20:32] * pvh_sa_ (~pvh@197.79.10.255) Quit (Ping timeout: 480 seconds)
[20:33] * hellertime1 (~Adium@72.246.0.14) Quit (Read error: Connection reset by peer)
[20:33] * hellertime (~Adium@72.246.0.14) has joined #ceph
[20:34] * segutier (~segutier@64.119.211.122) Quit (Quit: segutier)
[20:35] <Kupo1> How do you get around "rados.ObjectNotFound: error connecting to the cluster" when running ceph-rest-api? I have a working ceph cluster on internal IP's and everything else works fine
[20:35] <georgem> is there a way to calculate the maximum theoretical throughput for a Ceph cluster?
[20:35] * segutier (~segutier@64.119.211.122) has joined #ceph
[20:36] * shakamunyi (~shakamuny@209.66.74.34) has joined #ceph
[20:37] * sjm (~sjm@209.117.47.248) has left #ceph
[20:38] * JamesHarrison (~delcake@7R2AAAYMO.tor-irc.dnsbl.oftc.net) Quit ()
[20:38] * legion (~spate@chomsky.torservers.net) has joined #ceph
[20:39] <georgem> I have three storage servers, each with 34 SAS OSD with journal on them, 10 Gb client NIC and 10 Gb replication NIC, haproxy doing SSL termination for two radosgw with civetweb and 4 MB default stripe size
[20:43] * segutier (~segutier@64.119.211.122) Quit (Ping timeout: 480 seconds)
[20:48] <mtanski> I'm guessing it'll be something like 3 x 10Gb * ~0.8
[20:49] <mtanski> since with 34 OSDs in one machine the disks won't be the bottleneck, the 10Gb network will
[20:49] * derjohn_mob (~aj@ip-95-223-126-17.hsi16.unitymediagroup.de) has joined #ceph
[20:49] * Concubidated (~Adium@173-14-159-105-NewEngland.hfc.comcastbusiness.net) has joined #ceph
[20:50] <m0zes> it also depends on the cpu backing it, though.
[20:53] <TheSov> does anyone know the state of ceph/rbd/tgt multipathing? I want to use ceph as a storage backend for vmware/database servers via iscsi
[20:55] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[20:56] <mtanski> A run of the mill HGST SAS drive advertises ~160MB/s sustained transfer and a 10Gb network does less than 1.25GB a second.
[20:57] <mtanski> But yes, if you don't have enough cores to cover the OSD processes (if it's very oversubscribed) that might be your bottleneck
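[ed: Worked out from those figures: 34 OSDs × ~160 MB/s ≈ 5.4 GB/s of raw disk bandwidth per node versus ~1.25 GB/s for one 10GbE link, so the link saturates long before the disks; 3 nodes × 1.25 GB/s × the assumed 0.8 efficiency ≈ 3 GB/s aggregate.]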
[20:58] * fghaas (~florian@213162068059.public.t-mobile.at) has joined #ceph
[20:59] * pvh_sa (~pvh@197.79.9.171) has joined #ceph
[21:00] * shaunm (~shaunm@cpe-65-185-127-82.cinci.res.rr.com) has joined #ceph
[21:00] * fghaas (~florian@213162068059.public.t-mobile.at) Quit ()
[21:01] * fghaas (~florian@213162068059.public.t-mobile.at) has joined #ceph
[21:02] * shaunm (~shaunm@cpe-65-185-127-82.cinci.res.rr.com) Quit (Read error: Connection reset by peer)
[21:02] * fghaas (~florian@213162068059.public.t-mobile.at) Quit ()
[21:02] * jdillaman (~jdillaman@71-15-235-193.dhcp.sffl.va.charter.com) has joined #ceph
[21:06] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) Quit (Quit: Leaving)
[21:07] * angdraug (~angdraug@12.164.168.117) Quit (Ping timeout: 480 seconds)
[21:08] * nils_ (~nils@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[21:08] * legion (~spate@7R2AAAYNN.tor-irc.dnsbl.oftc.net) Quit ()
[21:08] * xul (~ZombieTre@marylou.nos-oignons.net) has joined #ceph
[21:15] * bobrik______ (~bobrik@109.167.249.178) Quit (Quit: (null))
[21:19] * shaunm (~shaunm@cpe-65-185-127-82.cinci.res.rr.com) has joined #ceph
[21:24] * shaunm (~shaunm@cpe-65-185-127-82.cinci.res.rr.com) Quit (Read error: Connection reset by peer)
[21:25] * dupont-y (~dupont-y@familledupont.org) has joined #ceph
[21:27] * rwheeler_ (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[21:27] * i_m (~ivan.miro@83.149.35.251) has joined #ceph
[21:28] <georgem> at most I was getting 800 MB/s on the haproxy's outside interface and almost 400 MB/s on the client interface of the storage nodes without having the CPU on the storage nodes 100% used
[21:28] * fghaas (~florian@213162068059.public.t-mobile.at) has joined #ceph
[21:30] <mtanski> do the rgw nodes also have 10gigE networking
[21:31] <mtanski> Also, is it 400MB/s in aggregate (in / out) or just one direction on the clients
[21:31] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) Quit (Ping timeout: 480 seconds)
[21:31] <georgem> yes, they have
[21:31] <georgem> it's in, and I use replica 3 for the pool where the data was being uploaded
[21:32] <mtanski> do they have two cards?
[21:32] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) has joined #ceph
[21:32] <georgem> mtanski: what kind of cards? network cards?
[21:32] <mtanski> yeah
[21:33] <georgem> yes, 10 Gb front end and 10 Gb back end
[21:33] * yghannam (~yghannam@0001f8aa.user.oftc.net) has joined #ceph
[21:36] <georgem> my understanding is that a client only writes to one drive/OSD at a time… so it doesn't really matter how many drives the cluster has… if I upload one 100 GB file to S3, the radosgw will break it in 4 MB blocks and upload them to PGs (in order), the first PG will actually write data to the primary OSD then wait for two more syncs
[21:37] <mtanski> that???s right
[21:37] <georgem> so I have to wait for each 4 MB to be written on three OSDs, being in fact limited by the speed of a single OSD (+two more writes)+network roundtrip
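[ed: Illustrative arithmetic for that ordered path, with assumed numbers: if committing one 4 MB chunk to the primary plus two replicas takes ~40 ms end to end, a single stream tops out near 4 MB / 0.04 s = 100 MB/s regardless of cluster size; only more parallel streams raise the aggregate.]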
[21:38] * xul (~ZombieTre@5NZAACJN5.tor-irc.dnsbl.oftc.net) Quit ()
[21:38] <mtanski> http://ceph.com/docs/master/architecture/ has handy diagrams
[21:38] * sixofour (~MonkeyJam@176.10.99.207) has joined #ceph
[21:38] <georgem> sure, if I have 100 clients then the aggregated throughput will be much better
[21:39] <mtanski> i take it your use case is more data coming in than out?
[21:41] <georgem> the funny thing is that my client was in fact using 20 threads uploading in parallel; when I tested with one client and 20 threads I got 115 MB/s, and with three clients I got 54 MB/s for each, and with 4 clients I got 29 MB/s
[21:43] <georgem> so basically it's not linear, the most I got was with 2 clients each using 20 threads (aggregated 162 MB/s) which is 1.3 Gb/s
[21:43] * fghaas (~florian@213162068059.public.t-mobile.at) has left #ceph
[21:43] <georgem> sorry, three clients each using 20 threads pushed an aggregated 162 MB/s, which is 1.3 Gb/s
[21:45] <georgem> and the clients also have 10 Gb NICs
[21:46] <georgem> mtanski: I have to upload the data before I can take it out, so both read and write are important
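Plugging the reported figures in confirms both the unit conversion and the flat scaling (numbers taken straight from the log, nothing assumed):

    # clients -> per-client MB/s, as reported above
    runs = {1: 115, 3: 54, 4: 29}
    for clients, per_client in sorted(runs.items()):
        agg = clients * per_client
        print("%d client(s): %d MB/s aggregate = %.2f Gb/s" % (clients, agg, agg * 8 / 1000.0))
    # 3 clients: 162 MB/s = 1.30 Gb/s, matching the figure quoted above;
    # 4 clients: 116 MB/s -- aggregate drops, so the clients' 10 Gb NICs
    # are not the bottleneck.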
[21:46] * Flynn (~stefan@ip-81-30-69-189.fiber.nl) has joined #ceph
[21:50] <mtanski> Just a thought: you can always add a smaller cache tier (cache pool) using faster drives (SSDs that sync quicker) and then depend on writeback to get the data written to the slower pool
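For reference, a sketch of that cache-tier setup driven through the ceph CLI (these tiering subcommands exist in hammer-era releases; the pool names and PG count here are made up for illustration and would need tuning per cluster):

    import subprocess

    def ceph(*args):
        # thin wrapper around the ceph CLI
        subprocess.check_call(["ceph"] + list(args))

    ceph("osd", "pool", "create", "ssd-cache", "128", "128")    # hypothetical SSD-backed pool
    ceph("osd", "tier", "add", "rgw-data", "ssd-cache")         # attach it to the slow pool
    ceph("osd", "tier", "cache-mode", "ssd-cache", "writeback")
    ceph("osd", "tier", "set-overlay", "rgw-data", "ssd-cache")
    ceph("osd", "pool", "set", "ssd-cache", "hit_set_type", "bloom")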
[21:51] * shakamunyi (~shakamuny@209.66.74.34) Quit (Ping timeout: 480 seconds)
[21:53] <mtanski> We get higher throughput with a weaker setup (but more OSD hosts), but that's via cephfs and our read/write ratio is something like 1000x
[21:54] * subscope (~subscope@92-249-244-29.pool.digikabel.hu) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[21:56] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) Quit (Quit: Ex-Chat)
[22:01] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) has joined #ceph
[22:03] * andrew (~oftc-webi@32.97.110.54) has joined #ceph
[22:04] <andrew> is there a ceph plugin for horizon available?
[22:04] <SamYaple> andrew: openstack horizon?
[22:04] <andrew> yes
[22:05] <SamYaple> plugin to do what? you can create cinder volumes and what not backed by ceph
[22:05] <andrew> i saw the blueprint: https://blueprints.launchpad.net/horizon/+spec/ceph-panel but not sure if anyone is working on it
[22:05] <SamYaple> oh you mean like admin type plugin, i haven't seen anything on it come through
[22:06] <SamYaple> looking at the whiteboard on that blueprint seems to say you won't see that in horizon
[22:06] <SamYaple> which is a decision i agree with
[22:07] <SamYaple> as a core horizon* piece that is
[22:07] <andrew> yeah i agree
[22:08] * sixofour (~MonkeyJam@2FBAABXO2.tor-irc.dnsbl.oftc.net) Quit ()
[22:08] * Averad (~Chaos_Lla@chulak.enn.lu) has joined #ceph
[22:08] <andrew> i want to create a horizon plugin that shows the total capacity of the ceph cluster
[22:08] <georgem> there is something but I never tried it, let me find it
[22:11] <georgem> andrew: https://01.org/virtual-storage-manager
[22:11] * Kvisle (~tv@tv.users.bitbit.net) Quit (Remote host closed the connection)
[22:11] <rlrevell> georgem: do you find it better than calamari?
[22:11] <andrew> @georgem: I'll look into it thanks
[22:11] <cephalobot> andrew: Error: "georgem:" is not a valid command.
[22:12] <andrew> georgem: i'll look into it, thanks
[22:12] <georgem> andrew: if you just need the capacity, then ceph-dash is an easy option
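If all the Horizon panel needs is headline capacity, python-rados can report it directly; a minimal sketch, assuming a readable ceph.conf and client keyring on the Horizon host:

    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        stats = cluster.get_cluster_stats()  # kb, kb_used, kb_avail, num_objects
        for key in ("kb", "kb_used", "kb_avail"):
            print("%s: %.2f TB" % (key, stats[key] / 1024.0 ** 3))
    finally:
        cluster.shutdown()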
[22:13] * hellertime (~Adium@72.246.0.14) Quit (Quit: Leaving.)
[22:13] <georgem> rlrevell: I haven't tried either, but I don't really trust UI tools built on top of complex systems, and it surely can't be easy to install
[22:13] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) has joined #ceph
[22:14] <rlrevell> georgem: yeah it is incredibly complex for what it does
[22:15] <rlrevell> i suspect it's either doing a lot under the hood that isn't exposed in the gui yet, or someone was getting paid by the line of code
[22:15] <georgem> https://github.com/01org/virtual-storage-manager/blob/master/INSTALL.md: "VSM CANNOT manage Ceph Cluster not created by it."
[22:16] * daniel2_ (~dshafer@0001b605.user.oftc.net) Quit (Quit: Konversation terminated!)
[22:16] * daniel2_ (~dshafer@0001b605.user.oftc.net) has joined #ceph
[22:17] * daniel2_ (~dshafer@0001b605.user.oftc.net) Quit ()
[22:17] * daniel2_ (~dshafer@0001b605.user.oftc.net) has joined #ceph
[22:18] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[22:19] * zaitcev (~zaitcev@2001:558:6001:10:61d7:f51f:def8:4b0f) has joined #ceph
[22:20] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[22:20] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[22:21] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) has joined #ceph
[22:23] * jdillaman (~jdillaman@71-15-235-193.dhcp.sffl.va.charter.com) Quit (Quit: jdillaman)
[22:24] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) Quit ()
[22:25] * sjmtest (uid32746@id-32746.uxbridge.irccloud.com) Quit (Quit: Connection closed for inactivity)
[22:26] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) has joined #ceph
[22:29] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) has joined #ceph
[22:29] * ChanServ sets mode +o elder
[22:34] * rlrevell (~leer@vbo1.inmotionhosting.com) Quit (Ping timeout: 480 seconds)
[22:37] * fghaas (~florian@213162068059.public.t-mobile.at) has joined #ceph
[22:38] * Averad (~Chaos_Lla@53IAAAY8W.tor-irc.dnsbl.oftc.net) Quit ()
[22:38] * Averad (~adept256@nx-01.tor-exit.network) has joined #ceph
[22:38] * pvh_sa (~pvh@197.79.9.171) Quit (Ping timeout: 480 seconds)
[22:42] * analbeard (~shw@host86-140-202-228.range86-140.btcentralplus.com) has joined #ceph
[22:48] * tupper (~tcole@2001:420:2280:1272:8900:f9b8:3b49:567e) Quit (Ping timeout: 480 seconds)
[22:50] * david_dcc (~dcasier@84.197.151.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[22:58] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) Quit (Remote host closed the connection)
[22:59] * dupont-y (~dupont-y@familledupont.org) Quit (Quit: Ex-Chat)
[23:01] * Kvisle (~tv@tv.users.bitbit.net) has joined #ceph
[23:01] * bitserker (~toni@188.87.126.203) has joined #ceph
[23:08] * Averad (~adept256@9U1AAAVK1.tor-irc.dnsbl.oftc.net) Quit ()
[23:08] * andrew_m1 (~SurfMaths@188.72.100.132.leadertelecom.ru) has joined #ceph
[23:10] * wushudoin (~wushudoin@c-76-19-134-77.hsd1.ma.comcast.net) has joined #ceph
[23:10] * analbeard (~shw@host86-140-202-228.range86-140.btcentralplus.com) has left #ceph
[23:11] * jskinner (~jskinner@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[23:11] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:13] * i_m1 (~ivan.miro@83.149.35.13) has joined #ceph
[23:13] * i_m (~ivan.miro@83.149.35.251) Quit (Read error: Connection reset by peer)
[23:17] * wschulze (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) Quit (Quit: Leaving.)
[23:18] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[23:21] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Remote host closed the connection)
[23:26] * ircolle1 (~ircolle@c-71-229-136-109.hsd1.co.comcast.net) has joined #ceph
[23:26] * ircolle (~ircolle@c-71-229-136-109.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[23:32] * andrew (~oftc-webi@32.97.110.54) Quit (Remote host closed the connection)
[23:33] * fghaas (~florian@213162068059.public.t-mobile.at) Quit (Remote host closed the connection)
[23:34] * ircolle1 (~ircolle@c-71-229-136-109.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[23:36] <m0zes> okay, http://ceph.com/docs/master/rbd/qemu-rbd/#running-qemu-with-rbd notice the *IMPORTANT* bit?
[23:37] <m0zes> notice how it says "If *you* set" blah blah blah
[23:37] <m0zes> turns out caching defaults to true.
[23:37] <m0zes> just corrupted a small-ish mysql database because of that minor point.
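The *IMPORTANT* note on that page warns that with the RBD cache enabled, QEMU must also be set to writeback caching or acknowledged writes can be lost on a crash; the example command line on that page (pool/image name illustrative) is along the lines of:

    qemu -m 1024 -drive format=raw,file=rbd:data/squeeze:rbd_cache=true,cache=writeback

Alternatively, the cache can be disabled outright with "rbd cache = false" under [client] in the hypervisor's ceph.conf.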
[23:38] * andrew_m1 (~SurfMaths@53IAAAZA7.tor-irc.dnsbl.oftc.net) Quit ()
[23:41] * OutOfNoWhere (~rpb@199.68.195.102) has joined #ceph
[23:43] * i_m1 (~ivan.miro@83.149.35.13) Quit (Ping timeout: 480 seconds)
[23:44] * bitserker (~toni@188.87.126.203) Quit (Ping timeout: 480 seconds)
[23:46] * PaulC (~paul@222-153-122-160.jetstream.xtra.co.nz) Quit (Ping timeout: 480 seconds)
[23:46] * Sysadmin88 (~IceChat77@054527d3.skybroadband.com) has joined #ceph
[23:48] * eford (~fford@93.93.251.146) Quit (Quit: Leaving)
[23:50] * srk (~srk@32.97.110.56) Quit (Ping timeout: 480 seconds)
[23:51] * TheSov (~TheSov@cip-248.trustwave.com) Quit (Quit: Leaving)
[23:51] * PaulC (~paul@222-153-122-160.jetstream.xtra.co.nz) has joined #ceph
[23:54] * mwilcox (~mwilcox@116.251.192.71) has joined #ceph
[23:55] * DeepBlueRider (~oftc-webi@17.115.2.52) has joined #ceph
[23:55] <DeepBlueRider> Hi there :) I'm trying to figure out what a reasonable number of objects to store per node/OSD is. Is it only disk-bound, or do other overheads (indexing, metadata, etc.) limit how many objects a node can store and still perform?
[23:59] <Sysadmin88> the answer will likely always be 'it depends'
[23:59] <Sysadmin88> no reason you can't have fast OSDs and slow OSDs

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.