#ceph IRC Log

IRC Log for 2015-10-29

Timestamps are in GMT/BST.

[0:00] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[0:00] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[0:01] * Destreyf__ (~quassel@email.newagecomputers.info) Quit (Remote host closed the connection)
[0:05] * yguang11 (~yguang11@66.228.162.44) Quit (Remote host closed the connection)
[0:07] * cdelatte (~cdelatte@2402:c800:ff64:300:a503:7d96:b41b:cdf) has joined #ceph
[0:09] * bene2 (~bene@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[0:11] * yguang11 (~yguang11@66.228.162.44) has joined #ceph
[0:12] * sileht (~sileht@sileht.net) has joined #ceph
[0:16] * Craig1 (~Adium@75-132-45-39.dhcp.stls.mo.charter.com) has joined #ceph
[0:17] * mattbenjamin (~mbenjamin@76-206-42-105.lightspeed.livnmi.sbcglobal.net) Quit (Quit: Leaving.)
[0:17] * bitserker (~toni@81.184.9.72.dyn.user.ono.com) has joined #ceph
[0:18] * vata (~vata@207.96.182.162) Quit (Ping timeout: 480 seconds)
[0:23] * davidz1 (~davidz@2605:e000:1313:8003:8936:98b4:ad12:20bd) Quit (Quit: Leaving.)
[0:25] * davidzlap (~Adium@2605:e000:1313:8003:1d53:df7c:3225:5d3b) has joined #ceph
[0:27] * Craig1 (~Adium@75-132-45-39.dhcp.stls.mo.charter.com) has left #ceph
[0:30] * fsimonce (~simon@host30-173-dynamic.23-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[0:31] * moore (~moore@64.202.160.88) Quit (Remote host closed the connection)
[0:32] * cdelatte (~cdelatte@2402:c800:ff64:300:a503:7d96:b41b:cdf) Quit (Quit: This computer has gone to sleep)
[0:33] * LobsterRoll (~LobsterRo@209-6-180-200.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: LobsterRoll)
[0:37] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[0:37] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[0:39] * VampiricPadraig (~poller@94.242.228.43) has joined #ceph
[0:43] * cdelatte (~cdelatte@2402:c800:ff64:300:41e6:6b2:4e4a:c12b) has joined #ceph
[0:45] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[0:45] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[0:50] * Icey (~chris@0001bbad.user.oftc.net) Quit (Ping timeout: 480 seconds)
[0:52] * rendar (~I@host235-46-dynamic.31-79-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[0:53] * cdelatte (~cdelatte@2402:c800:ff64:300:41e6:6b2:4e4a:c12b) Quit (Quit: This computer has gone to sleep)
[0:55] <Anticimex> samsung's new devices have very good DWPD
[0:57] <darkfader> if you exclude any server grade sas drive, sure
[0:57] <darkfader> :)
[0:57] <Anticimex> cetex: 1 DWPD on 8TB drive will wear it out, too. iirc regular 8T platter drive is ~300-400 TBW
[0:58] <darkfader> 8T models are "archive" models for low write
[0:58] <Anticimex> this may be wrong though. endurance ratings for platter drives are rarely published
[0:58] <darkfader> apples and oranges ftw
[0:58] <Anticimex> there are higher perf 8TB and 8TB SMR though
[0:58] <darkfader> ah ok i missed those
[0:58] <Anticimex> at least more than twice the difference in cost
[0:59] <darkfader> also i've switched to the 850 & friends where i could
[0:59] <Anticimex> the archive drives iirc are around 180TBW
[0:59] <Anticimex> which was "half" of regular as i recall, that's from where i have the endurance rating idea. but again, it could be off
[0:59] * beardo_ (~sma310@207-172-244-241.c3-0.atw-ubr5.atw.pa.cable.rcn.com) Quit (Read error: Connection reset by peer)
[1:00] <darkfader> just it's still a massive diff, i have 2011ish sas ssd rated at 7PB, and the same size 850pro from 2014ish is rated to 1PB (and that's with them being pretty certain you won't do it)
[1:00] <Anticimex> if you have really high need for endurance, you'll find that non volatile memory will give you best bang for buck basically
[1:00] * beardo_ (~sma310@207-172-244-241.c3-0.atw-ubr5.atw.pa.cable.rcn.com) has joined #ceph
[1:00] <darkfader> i wish there will be a successor for the SV843 with its 20PB
[1:00] <Anticimex> reduces cache size however
[1:00] * jcsp (~jspray@2402:c800:ff64:300:df3d:9e5b:2e44:13e7) has joined #ceph
[1:01] <Anticimex> 850 is consumer stuff
[1:01] <Anticimex> client
[1:01] <darkfader> DC845 is the same thing inside
[1:01] <darkfader> which one do you *like* then
[1:01] <Anticimex> the real server drives are available at up to 20 DWPD iirc
[1:02] <Anticimex> i use intel right now. looked into samsung this summer, but the most interesting drives werent released yet then
[1:02] <darkfader> my list says the SM1715 is the most durable samsung atm
[1:03] <darkfader> sorry i'm asking so naggingly i just wanna make sure i've not missed something
[1:03] <darkfader> so far i stubbornly say samsung > intel > good sas drives
[1:04] <darkfader> intels 3710 is what i generally recommend atm since it's pretty affordable
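DWPD and TBW ratings convert into each other with simple arithmetic, which is what the comparison above boils down to. A minimal sketch; the capacity, endurance and warranty figures below are illustrative examples, not specs quoted in this discussion:

    # DWPD = rated TBW / (capacity in TB * days of warranty)
    awk 'BEGIN {
        tbw = 1000;        # example: ~1 PB rated endurance
        cap_tb = 1;        # example: 1 TB drive
        days = 5 * 365;    # example: 5 year warranty
        printf "%.2f DWPD\n", tbw / (cap_tb * days)
    }'
    # prints 0.55 DWPD for these example numbers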
[1:04] <Anticimex> i had a great overview pdf of all samsung ssd models
[1:04] <Anticimex> can't easily google it now
[1:04] <Anticimex> seems that it's sm863 and pm863 that i'm referring to?
[1:05] <darkfader> sm863 is the medium write, pm863 is the read friendly
[1:05] <Anticimex> samsung.com could be improved information wise
[1:05] <Anticimex> k
[1:05] <darkfader> friends of mine love the sm863
[1:05] * yguang11 (~yguang11@66.228.162.44) Quit (Remote host closed the connection)
[1:05] <darkfader> i think 4 digits means its a sas/nvme model and SV means heavy write
[1:06] <darkfader> so if they do a sv863 i'll be dancing :>
[1:06] <darkfader> if you happen to find that pdf and still remember, i'd love a link
[1:06] <Anticimex> mhm
[1:07] <darkfader> my stuff's here: http://confluence.wartungsfenster.de/display/Adminspace/Samsung+SSD+guide
[1:07] <darkfader> but lacking the PB numbers for samsung
[1:07] <Anticimex> this was a samsung document
[1:08] <darkfader> yeah that's why it'd be good for me to crossverify
[1:08] <Anticimex> included the 15.6TB models
[1:08] <darkfader> nice
[1:08] <Anticimex> what's the product code for that one?
[1:09] * VampiricPadraig (~poller@5P6AAAIZE.tor-irc.dnsbl.oftc.net) Quit ()
[1:09] <darkfader> probably the xs1715 i'll check
[1:09] * shawniverson (~shawniver@192.69.183.61) has joined #ceph
[1:10] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[1:10] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[1:10] <darkfader> oh, no.
[1:10] <darkfader> pm1633
[1:11] <Anticimex> thanks
[1:11] <Anticimex> that was the missing keyword :)
[1:12] <Anticimex> http://www.samsung.com/us/samsungsemiconductor/pdfs/PSG2015_1H_HR_singles.pdf
[1:12] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:12] <darkfader> thank you!
[1:12] * Anticimex &
[1:13] * jcsp (~jspray@2402:c800:ff64:300:df3d:9e5b:2e44:13e7) Quit (Ping timeout: 480 seconds)
[1:15] * LeaChim (~LeaChim@host86-143-17-156.range86-143.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[1:15] <darkfader> their "PC Workload" DWD specification was why i skipped the table ;)
[1:15] <darkfader> column
[1:16] <darkfader> and SM1715 is the fancy one - i'm just 'afraid' the intels, with their ceph-friendly optimizations, still beat it for qd1 latency
[1:19] * sudocat (~dibarra@66.196.218.45) Quit (Ping timeout: 480 seconds)
[1:27] * Borf (~Roy@5P6AAAI1N.tor-irc.dnsbl.oftc.net) has joined #ceph
[1:27] * bitserker (~toni@81.184.9.72.dyn.user.ono.com) Quit (Quit: Leaving.)
[1:27] * vata (~vata@cable-21.246.173-197.electronicbox.net) has joined #ceph
[1:35] * oms101 (~oms101@p20030057EA015F00C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:41] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[1:41] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[1:41] * lincolnb (~lincoln@c-71-57-68-189.hsd1.il.comcast.net) Quit (Read error: Connection reset by peer)
[1:42] * lincolnb (~lincoln@c-71-57-68-189.hsd1.il.comcast.net) has joined #ceph
[1:43] * oms101 (~oms101@p20030057EA012900C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:44] * jcsp (~jspray@2402:c800:ff64:300:df3d:9e5b:2e44:13e7) has joined #ceph
[1:52] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Ping timeout: 480 seconds)
[1:53] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[1:53] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[1:57] * Borf (~Roy@5P6AAAI1N.tor-irc.dnsbl.oftc.net) Quit ()
[2:00] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[2:01] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[2:01] * georgem (~Adium@69-196-163-180.dsl.teksavvy.com) has joined #ceph
[2:06] * jcsp (~jspray@2402:c800:ff64:300:df3d:9e5b:2e44:13e7) Quit (Ping timeout: 480 seconds)
[2:06] * yguang11 (~yguang11@66.228.162.44) has joined #ceph
[2:12] * kefu (~kefu@114.92.106.70) has joined #ceph
[2:13] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:dc49:daec:a0a3:2f0f) Quit (Ping timeout: 480 seconds)
[2:17] * yguang11 (~yguang11@66.228.162.44) Quit (Remote host closed the connection)
[2:17] * olid11 (~olid1982@185.17.206.92) Quit (Ping timeout: 480 seconds)
[2:19] * N3X15 (~Shesh@185.101.107.227) has joined #ceph
[2:20] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[2:20] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[2:20] * kefu (~kefu@114.92.106.70) Quit (Max SendQ exceeded)
[2:21] * kefu (~kefu@114.92.106.70) has joined #ceph
[2:36] <hemebond> How do I make civetweb and ceph monitors shut up?
[2:46] * cdelatte (~cdelatte@163.138.224.250) has joined #ceph
[2:49] * cdelatte (~cdelatte@163.138.224.250) Quit ()
[2:49] * N3X15 (~Shesh@5P6AAAI4C.tor-irc.dnsbl.oftc.net) Quit ()
[2:54] * adun153 (~ljtirazon@112.198.90.179) has joined #ceph
[2:56] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[2:59] * JCL1 (~JCL@ip68-108-16-17.lv.lv.cox.net) has joined #ceph
[3:01] * lurbs (user@uber.geek.nz) Quit (Quit: leaving)
[3:01] * lurbs (user@uber.geek.nz) has joined #ceph
[3:02] <adun153> Hi, is the "Ceph Object Gateway" *essential* if I want to support Swift?
[3:03] <rkeene> You'd have to look at the Swift documentation
[3:04] <lurbs> The short answer is yes, unless you want to use Swift itself.
[3:04] <adun153> lurbs: So, the "COG" is a drop-in replacement for Swift?
[3:05] <adun153> So, it's either "COG" or Swift configured with a ceph backend, correct?
[3:05] <rkeene> "COG" = RADOSGW ?
[3:06] * JCL (~JCL@ip68-108-16-17.lv.lv.cox.net) Quit (Ping timeout: 480 seconds)
[3:06] <adun153> "Ceph Object Gateway"
[3:06] <lurbs> Neither radosgw, nor Swift with a Ceph backend, currently support having multiple regions writeable, I believe.
[3:06] <rkeene> Are you talking about RADOSGW ?
[3:07] <adun153> Not sure, really. I'm a newbie, researching how to deploy a 3-node cluster. I want to use it as a storage backend for an OpenStack deployment. I came upon the COG here (http://docs.ceph.com/docs/v0.80.5/install/install-ceph-gateway/#id1), and I'm wondering how important it is.
[3:08] * kefu is now known as kefu|afk
[3:08] <rkeene> adun153, This might be helpful: http://docs.ceph.com/docs/v0.80.5/glossary/#term-ceph-object-gateway
[3:09] <lurbs> If you only have a single region then RADOS Gateway works reasonably well, and supports a decent subset of the Swift API. When it comes to replication between clusters it's a bit tricky though.
[3:10] <adun153> Hmm, just to make sure that I understand it, it's an alternative to configuring Swift with a Ceph backend?
[3:10] <adun153> Or is it used in conjunction with Swift, to enable Swift to access ceph?
[3:11] <adun153> User -> Swift -> RADOS GW -> Ceph <---- like that?
[3:11] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[3:12] <lurbs> You have three basic options: a) Run RADOS Gateway. It talks the Swift (and S3) APIs out the front, and native Ceph (RADOS) out the back, and stores data in the Ceph cluster.
[3:12] <lurbs> b) Run Swift
[3:12] <lurbs> c) Run Swift, but store your data in the Ceph cluster instead of Swift rings.
[3:13] <adun153> Let's talk options a.) and c.) . What is the advantage of one over the other?
[3:13] <lurbs> No Swift components are used for option a.
[3:14] <adun153> What advantages does that provide?
[3:14] <lurbs> Main advantage of c is you get the full Swift API, instead of a subset. But the Ceph backend for Swift is, last I checked, *very* basic and not much more than a stub.
[3:14] <lurbs> Advantage of a is that all your storage needs for OpenStack are based on Ceph, so you only need the single storage cluster.
[3:15] <lurbs> Disadvantage of a is that we've found RADOS Gateway's cross-cluster replication to be unpleasant. Tricky to set up, and not multi-master, etc.
[3:15] * debian112 (~bcolbert@24.126.201.64) Quit (Quit: Leaving.)
[3:17] <lurbs> Disadvantage of c is that you need a separate storage cluster for Swift. Arguably 'clusters' if you're going multi site.
[3:17] <adun153> I can't mix Glance and KVM usage of ceph with Swift?
[3:17] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[3:17] * cdelatte (~cdelatte@163.138.224.250) has joined #ceph
[3:18] <lurbs> You can. You use different pools for each.
[3:18] <lurbs> But you need the Swift frontend boxes, of which there are potentially many. Proxies, auth, etc. I forget to be honest.
[3:19] <lurbs> Probably should have said 's/separate storage cluster for Swift/separate cluster for Swift/'.
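A minimal sketch of option a as lurbs describes it: radosgw answering the Swift (and S3) APIs in front of the existing cluster. The instance name, port and Keystone details below are illustrative assumptions, not taken from this discussion:

    # ceph.conf on the gateway host (names and endpoints are examples)
    [client.radosgw.gw1]
    host = gw1
    rgw frontends = civetweb port=7480
    # Swift auth through Keystone, only if the OpenStack side uses it
    rgw keystone url = http://keystone.example.com:35357
    rgw keystone admin token = ADMIN_TOKEN
    rgw keystone accepted roles = Member, admin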
[3:20] <adun153> lurbs: I see. I think I might go with option c, considering that I'll be deploying Ceph for the first time, especially since you mentioned that deploying RADOS GW can be tricky. Thanks.
[3:20] <adun153> I'll focus my research on that track.
[3:21] <rkeene> OpenStack is pretty much terrible...
[3:22] <lurbs> That's a little unfair. You wouldn't say that if you hadn't used it.
[3:23] * cdelatte (~cdelatte@163.138.224.250) Quit (Quit: This computer has gone to sleep)
[3:23] <lurbs> adun153: Warning, the RADOS plugin for Swift is *very* basic.
[3:23] <lurbs> I can't really recommend option c.
[3:23] <rkeene> lurbs, :-D
[3:24] <rkeene> My original auto-assembling cloud project used OpenStack, it was so much fail
[3:24] <adun153> lurbs: Could you please define *very* basic? I foresee only basic usage for Swift for this deployment.
[3:24] <lurbs> https://github.com/openstack/swift-ceph-backend
[3:24] <lurbs> Check out (literally) the code.
[3:24] <lurbs> There may be something more complete out there, but I am unaware of it.
[3:27] * kefu|afk is now known as kefu
[3:28] <adun153> What is a good&&simple&&easy-to-do way to test read/write speeds for a ceph pool or cluster?
[3:28] <lurbs> rados bench
[3:28] <lurbs> http://docs.ceph.com/docs/hammer/man/8/rados/#pool-specific-commands
[3:29] * zhaochao (~zhaochao@125.39.8.235) has joined #ceph
[3:29] <adun153> It's built-in, sweet! Thanks.
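A typical rados bench run, for reference; the pool name, PG count and durations are arbitrary:

    ceph osd pool create bench 128              # scratch pool just for benchmarking
    rados bench -p bench 60 write --no-cleanup  # 60s of writes, keep the objects
    rados bench -p bench 60 seq                 # sequential reads of those objects
    rados bench -p bench 60 rand                # random reads
    rados -p bench cleanup                      # drop the benchmark objects
    ceph osd pool delete bench bench --yes-i-really-really-mean-it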
[3:42] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[3:42] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[3:43] * yanzheng (~zhyan@125.71.108.204) has joined #ceph
[3:53] * Kupo1 (~tyler.wil@23.111.254.159) Quit (Read error: Connection reset by peer)
[3:57] * Bj_o_rn (~Solvius@4Z9AAAIYX.tor-irc.dnsbl.oftc.net) has joined #ceph
[3:59] * adun153 (~ljtirazon@112.198.90.179) Quit (Ping timeout: 480 seconds)
[4:03] * Wielebny (~Icedove@cl-927.waw-01.pl.sixxs.net) has joined #ceph
[4:08] * debian112 (~bcolbert@24.126.201.64) Quit (Quit: Leaving.)
[4:09] * georgem (~Adium@69-196-163-180.dsl.teksavvy.com) Quit (Quit: Leaving.)
[4:09] * adun153 (~ljtirazon@112.198.90.112) has joined #ceph
[4:09] * georgem (~Adium@206.108.127.16) has joined #ceph
[4:13] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[4:13] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[4:14] * JoeJulian_ (~JoeJulian@108.166.123.190) has joined #ceph
[4:15] * JoeJulian (~JoeJulian@108.166.123.190) Quit (Ping timeout: 480 seconds)
[4:18] * yguang11 (~yguang11@c-50-131-146-113.hsd1.ca.comcast.net) has joined #ceph
[4:24] <portante> Can somebody point me at documentation for understanding ceph log entries?
[4:25] <portante> if the source is the best, then that is fine too.
[4:27] * Bj_o_rn (~Solvius@4Z9AAAIYX.tor-irc.dnsbl.oftc.net) Quit ()
[4:28] * yguang11_ (~yguang11@nat-dip15.fw.corp.yahoo.com) has joined #ceph
[4:29] * kefu is now known as kefu|afk
[4:31] * vata (~vata@cable-21.246.173-197.electronicbox.net) Quit (Quit: Leaving.)
[4:33] * kefu|afk is now known as kefu
[4:34] * yguang11 (~yguang11@c-50-131-146-113.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:44] * vata (~vata@cable-21.246.173-197.electronicbox.net) has joined #ceph
[4:57] * jcsp (~jspray@2402:c800:ff64:300:df3d:9e5b:2e44:13e7) has joined #ceph
[5:04] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[5:10] * overclk (~overclk@59.93.65.39) has joined #ceph
[5:11] * Vacuum_ (~Vacuum@88.130.197.188) has joined #ceph
[5:17] * kefu is now known as kefu|afk
[5:18] * Vacuum__ (~Vacuum@i59F79B0F.versanet.de) Quit (Ping timeout: 480 seconds)
[5:21] * kefu|afk is now known as kefu
[5:37] * overclk (~overclk@59.93.65.39) Quit (Ping timeout: 480 seconds)
[5:38] * jcsp (~jspray@2402:c800:ff64:300:df3d:9e5b:2e44:13e7) Quit (Ping timeout: 480 seconds)
[5:40] * jcsp (~jspray@2402:c800:ff64:300:df3d:9e5b:2e44:13e7) has joined #ceph
[5:41] * Karcaw (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) Quit (Read error: Connection reset by peer)
[5:45] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[5:51] * Karcaw (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) has joined #ceph
[5:52] * gleam (gleam@dolph.debacle.org) Quit (Remote host closed the connection)
[5:53] * dalgaaf (uid15138@id-15138.ealing.irccloud.com) has joined #ceph
[5:57] * jcsp (~jspray@2402:c800:ff64:300:df3d:9e5b:2e44:13e7) Quit (Ping timeout: 480 seconds)
[6:04] * vbellur (~vijay@122.172.57.91) Quit (Ping timeout: 480 seconds)
[6:19] * shawniverson (~shawniver@192.69.183.61) Quit (Read error: Connection reset by peer)
[6:19] * shawniverson (~shawniver@192.69.183.61) has joined #ceph
[6:20] * bitserker (~toni@81.184.9.72.dyn.user.ono.com) has joined #ceph
[6:21] * rdas (~rdas@122.168.223.223) has joined #ceph
[6:22] * cephalobot (~ceph@ds3553.dreamservers.com) Quit (Remote host closed the connection)
[6:22] * cephalobot (~ceph@ds3553.dreamservers.com) has joined #ceph
[6:28] * CAPSLOCK2000 (~oftc@2001:984:3be3:1::8) Quit (Ping timeout: 480 seconds)
[6:39] * swami1 (~swami@163.138.224.174) has joined #ceph
[6:42] * overclk (~overclk@59.93.65.122) has joined #ceph
[6:46] * kefu (~kefu@114.92.106.70) Quit (Remote host closed the connection)
[6:46] * vbellur (~vijay@121.244.87.124) has joined #ceph
[6:49] * nardial (~ls@dslb-088-072-094-085.088.072.pools.vodafone-ip.de) has joined #ceph
[6:54] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[6:54] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[6:55] * sileht (~sileht@sileht.net) Quit (Ping timeout: 480 seconds)
[6:56] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[7:03] * neurodrone_ (~neurodron@108.60.145.130) has joined #ceph
[7:04] * yguang11_ (~yguang11@nat-dip15.fw.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[7:07] * adun153 (~ljtirazon@112.198.90.112) Quit (Quit: Leaving)
[7:08] * neurodrone (~neurodron@162.243.191.67) Quit (Ping timeout: 480 seconds)
[7:08] * neurodrone_ is now known as neurodrone
[7:08] * vbellur (~vijay@121.244.87.117) has joined #ceph
[7:10] * karnan (~karnan@121.244.87.117) has joined #ceph
[7:16] * bitserker (~toni@81.184.9.72.dyn.user.ono.com) Quit (Quit: Leaving.)
[7:17] * linjan (~linjan@176.195.227.255) has joined #ceph
[7:19] * jcsp (~jspray@2402:c800:ff64:300:df3d:9e5b:2e44:13e7) has joined #ceph
[7:20] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[7:20] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[7:21] * cdelatte (~cdelatte@2402:c800:ff64:300:6cf2:a2b1:3523:8632) has joined #ceph
[7:24] * kefu (~kefu@114.92.106.70) has joined #ceph
[7:30] * isaxi (~ChauffeR@se4x.mullvad.net) has joined #ceph
[7:32] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[7:33] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[7:40] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[7:44] * nardial (~ls@dslb-088-072-094-085.088.072.pools.vodafone-ip.de) Quit (Quit: Leaving)
[7:46] * swami1 (~swami@163.138.224.174) Quit (Quit: Leaving.)
[7:50] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[7:50] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[7:57] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[7:59] * kefu (~kefu@114.92.106.70) Quit (Max SendQ exceeded)
[8:00] * isaxi (~ChauffeR@4Z9AAAI9R.tor-irc.dnsbl.oftc.net) Quit ()
[8:00] * kefu (~kefu@114.92.106.70) has joined #ceph
[8:04] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[8:08] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[8:11] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:dc49:daec:a0a3:2f0f) has joined #ceph
[8:17] * jcsp (~jspray@2402:c800:ff64:300:df3d:9e5b:2e44:13e7) Quit (Ping timeout: 480 seconds)
[8:20] * remy1991 (~ravi@115.114.59.182) has joined #ceph
[8:21] * serg (~serg@195.114.7.96) has joined #ceph
[8:22] <serg> hi, i see hammer 0.94.4 released on the release notes site, but i have version 0.94.5 in my repos - is it stable? where can i read a description?
[8:24] * rdas (~rdas@122.168.223.223) Quit (Quit: Leaving)
[8:25] <kiranos> serg: it was just a bugfix release, it has the changelog in the users mailing list
[8:25] <kiranos> its stable
[8:25] <serg> thx
[8:26] * b0e (~aledermue@213.95.25.82) has joined #ceph
[8:27] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[8:29] * serg (~serg@195.114.7.96) Quit (Quit: Leaving)
[8:35] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[8:35] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[8:36] * sileht (~sileht@sileht.net) has joined #ceph
[8:36] <T1w> mornings
[8:37] <T1w> alfredodeza: I'm doing a test run with --repo-url and --gpg-url now
[8:38] * cdelatte (~cdelatte@2402:c800:ff64:300:6cf2:a2b1:3523:8632) Quit (Quit: This computer has gone to sleep)
[8:39] <T1w> alfredodeza: yup, works like a charm
[8:43] * jasuarez (~jasuarez@243.Red-81-39-64.dynamicIP.rima-tde.net) has joined #ceph
[8:46] * kefu is now known as kefu|afk
[8:46] * kefu|afk is now known as kefu
[8:47] * overclk (~overclk@59.93.65.122) Quit (Remote host closed the connection)
[8:54] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[8:54] <Be-El> hi
[9:02] * kefu is now known as kefu|afk
[9:03] * kefu|afk is now known as kefu
[9:03] * shylesh (~shylesh@121.244.87.124) has joined #ceph
[9:04] <T1w> mornings be-el
[9:06] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[9:06] * longguang (~chatzilla@123.126.33.253) has joined #ceph
[9:07] * kefu (~kefu@114.92.106.70) Quit (Max SendQ exceeded)
[9:07] * ade (~abradshaw@tmo-108-207.customers.d1-online.com) has joined #ceph
[9:07] * cheese^ (~hoopy@7V7AAAT7B.tor-irc.dnsbl.oftc.net) has joined #ceph
[9:09] * kefu (~kefu@114.92.106.70) has joined #ceph
[9:10] * b0e (~aledermue@213.95.25.82) Quit (Ping timeout: 480 seconds)
[9:11] * derjohn_mob (~aj@88.128.81.101) has joined #ceph
[9:12] * rendar (~I@host46-131-dynamic.59-82-r.retail.telecomitalia.it) has joined #ceph
[9:12] * dalgaaf (uid15138@id-15138.ealing.irccloud.com) Quit (Quit: Connection closed for inactivity)
[9:13] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[9:13] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[9:14] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[9:15] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) has joined #ceph
[9:18] <kiranos> hm I now have osd.0 full at 95%
[9:18] <kiranos> HEALTH_ERR 1 full osd(s)
[9:18] <kiranos> but I can still write simple echo test >test to the mounted rbd image
[9:18] <kiranos> isnt it supposed to go to read-only?
[9:19] <Be-El> kiranos: it depends ;-)
[9:19] <kiranos> on what ? :) if there is enough room on other osd's on the same machine?
[9:20] <Be-El> kiranos: the rbd image is striped into rados objects (usually 4 mb size). the rados objects in turn are distributed across the PGs, which are distributed across the OSDs
[9:20] <Be-El> kiranos: if the write operation on the rbd results in a write to a PG on a full OSD, it will block
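That mapping can be inspected directly if it helps; a quick sketch, with the image name made up and the default rbd pool assumed:

    rbd -p rbd info myimage        # 'order 22' means 4 MB objects; note the block_name_prefix
    # then map one of the image's objects to its PG and OSDs
    ceph osd map rbd rbd_data.<prefix>.0000000000000000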
[9:23] <Be-El> kiranos: if you want to solve the problem with the full OSD temporarily (and you have enough space on other OSDs), you can try to lower the crush weight of the affected OSD
[9:23] * kefu (~kefu@114.92.106.70) Quit (Max SendQ exceeded)
[9:23] <Be-El> as a result data will be moved from that OSD to others
[9:24] * kefu (~kefu@114.92.106.70) has joined #ceph
[9:25] <kiranos> Be-El: thanks will do
[9:29] * thomnico (~thomnico@2a01:e35:8b41:120:4884:9cb6:f7cc:487b) has joined #ceph
[9:30] * analbeard (~shw@support.memset.com) has joined #ceph
[9:33] * jrocha (~jrocha@vagabond.cern.ch) has joined #ceph
[9:33] * dgurtner (~dgurtner@178.197.231.145) has joined #ceph
[9:34] * fsimonce (~simon@host30-173-dynamic.23-79-r.retail.telecomitalia.it) has joined #ceph
[9:36] * pabluk_ is now known as pabluk
[9:37] * cheese^ (~hoopy@7V7AAAT7B.tor-irc.dnsbl.oftc.net) Quit ()
[9:38] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[9:40] <kiranos> Be-El: thanks worked great, took crush reweight
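For the record, the commands behind that (the weight is just an example; nudge it a little below the OSD's current crush weight):

    ceph osd df                          # per-OSD utilization and current crush weights
    ceph osd crush reweight osd.0 1.60   # example value; lower weight = less data mapped to osd.0
    ceph health detail                   # watch the full/near-full warnings clear as PGs move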
[9:46] * garphy`aw is now known as garphy
[9:49] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[9:49] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[9:51] * SEBI1 (~aldiyen@tor4thepeople2.torexitnode.net) has joined #ceph
[9:53] * b0e (~aledermue@213.95.25.82) has joined #ceph
[10:00] * ksperis (~laurent@46.218.42.103) has joined #ceph
[10:01] * overclk (~overclk@121.244.87.117) has joined #ceph
[10:02] * shawniverson (~shawniver@192.69.183.61) Quit (Remote host closed the connection)
[10:05] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[10:07] * enax (~enax@hq.ezit.hu) has joined #ceph
[10:08] * enax (~enax@hq.ezit.hu) Quit ()
[10:09] * derjohn_mob (~aj@88.128.81.101) Quit (Ping timeout: 480 seconds)
[10:13] * enax (~enax@hq.ezit.hu) has joined #ceph
[10:14] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[10:15] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[10:15] * DV (~veillard@2001:41d0:1:d478::1) Quit (Remote host closed the connection)
[10:16] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[10:16] * alrick (~alrick@91.218.144.129) has joined #ceph
[10:18] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[10:19] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[10:19] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[10:21] * SEBI1 (~aldiyen@7V7AAAT81.tor-irc.dnsbl.oftc.net) Quit ()
[10:26] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[10:26] * branto1 (~branto@213.175.37.10) has joined #ceph
[10:29] <T1w> hm
[10:30] <T1w> ceph-deploy does not take the size of the raw block device for journal during osd prepare into consideration
[10:30] <T1w> it does a simple "size 5120"
[10:30] <T1w> (I've got 10G block devices for it)
[10:31] <Be-El> T1w: does ceph-deploy use the plain blcok device given, or does it try to create a partition with the configured size on it?
[10:32] <T1w> Be-El: it makes a partition
[10:32] <T1w> and it does a
[10:32] <T1w> /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[10:32] <T1w> to see how big it should be
[10:32] <T1w> ok, so it's a config option I need to fix then
[10:33] <Be-El> T1w: wait, there's a better solution
[10:33] <T1w> mkay?
[10:33] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[10:33] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[10:33] <T1w> Be-El: you can see output from the command here
[10:33] <T1w> http://pastebin.com/hkVCyAjU
[10:33] <Be-El> ceph-deploy only invokes ceph-disk on the target host. ceph-disk checks whether the given journal device is a plain disk or a partition; in case of a disk it creates a new partition with the configured size
[10:34] <Be-El> you can create the partition manually before with the required size, and give the partition device to ceph-deploy
[10:35] <Be-El> oh...and volume groups do not play well with ceph-disk at all. they are not recognized as partitions and thus new partition tables are created on them
[10:35] <T1w> Be-El: ah, okay - can I ignore partition-guid and typecode guid?
[10:36] <T1w> yeah, well.. I figure it's not that important - it's a simple VG over a single md mirror
[10:36] <T1w> just to be able to allocate space via LVM
[10:38] <Be-El> i'm not sure how well LVMs work as journals. but the problem is the missing LVM support in ceph-disk (at least the last time i tried)
[10:39] <Gugge-47527> lvm work fine, i just link them in manually after creating the osd :)
[10:39] <T1w> oh dear
[10:40] <Be-El> Gugge-47527: do they support O_DSYNC operations correctly?
[10:40] <T1w> Gugge-47527: via ceph-disk activate-journal ?
[10:41] <Gugge-47527> via "ln -s /dev/mapper/xxxx /var/lib/ceph/osd/ceph-X/journal
[10:41] * LeaChim (~LeaChim@host86-143-17-156.range86-143.btcentralplus.com) has joined #ceph
[10:41] * dalgaaf (uid15138@id-15138.ealing.irccloud.com) has joined #ceph
[10:41] <T1w> Be-El: my fio / dd tests that were posted to Sebastian's blog were done on both the raw /dev/sd devices and on the LVM names under /dev/mapper
[10:41] <Gugge-47527> Be-El: as far as i can tell they do
[10:41] <T1w> (the tests of Intel S3710)
[10:42] <T1w> Gugge-47527: hm, so I should not specify the journal device to ceph-deploy then?
[10:43] <T1w> and just do a osd prepare --fs-type xfs ceph1:/dev/sdc
[10:43] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:43] <Be-El> T1w: in that case you'll probably end up with the journal partition on the data disk
[10:43] <T1w> Be-El: eeek
[10:44] <Gugge-47527> T1w: i actually also dont use devices for the osd, but a mountpoint :)
[10:44] <Gugge-47527> i hate all that auto mounting magic :P
[10:44] <T1w> Gugge-47527: bah! :p
[10:44] <Be-El> you can stop the osd and change the journal afterwards, but that's extra effort
[10:44] <T1w> Be-El: if that is what it takes, so be it..
[10:44] <T1w> at the moment the osd is not active anyway
[10:44] <Gugge-47527> the last 2 osd's i created manually without ceph-deploy :P
[10:44] <T1w> just prepared
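A sketch of the "change the journal afterwards" route Gugge-47527 and Be-El describe, assuming a sysvinit setup, OSD id 0 and the LV path from earlier (adjust all three to taste):

    service ceph stop osd.0
    ceph-osd -i 0 --flush-journal        # flush the existing journal before replacing it
    rm /var/lib/ceph/osd/ceph-0/journal
    ln -s /dev/mapper/VGsys0-jour1 /var/lib/ceph/osd/ceph-0/journal
    ceph-osd -i 0 --mkjournal            # initialize the new journal device
    service ceph start osd.0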
[10:45] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[10:45] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[10:45] * rakeshgm (~rakesh@121.244.87.117) Quit (Ping timeout: 480 seconds)
[10:45] <Be-El> T1w: last idea: use lvm, create a partition table on the lvm, create a partition with the required size, and give the partition within the lvm as argument to ceph-disk / ceph-deploy
[10:45] <T1w> Be-El: worth a try
[10:46] * kefu (~kefu@114.92.106.70) Quit (Max SendQ exceeded)
[10:46] <Be-El> and i finally have a working page cache with cephfs \o/
[10:47] * kefu (~kefu@114.92.106.70) has joined #ceph
[10:51] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[10:51] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[10:51] * thomnico (~thomnico@2a01:e35:8b41:120:4884:9cb6:f7cc:487b) Quit (Quit: Ex-Chat)
[10:51] * thomnico (~thomnico@2a01:e35:8b41:120:4884:9cb6:f7cc:487b) has joined #ceph
[10:52] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[10:54] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[10:55] * rakeshgm (~rakesh@121.244.87.124) has joined #ceph
[10:56] <T1w> damn
[10:57] <T1w> ceph-deploy just tries to add another partition to the LVM and fails since there is no space left on the device for a new 5G partition
[10:57] <T1w> seems like I should set osd_journal_size, zap the journal device and leave it at that
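The knob in question is osd journal size, the value ceph-disk reads via the --show-config-value call shown earlier. A sketch of bumping it to match the 10G journal devices:

    # in /etc/ceph/ceph.conf:
    [osd]
    osd journal size = 10240    # in MB - the default of 5120 is where the 5G partitions come from

Then push it out (host name is an example) with "ceph-deploy --overwrite-conf config push ceph1" before re-running osd prepare.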
[11:01] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[11:03] * olid11 (~olid1982@aftr-185-17-204-125.dynamic.mnet-online.de) has joined #ceph
[11:05] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) has joined #ceph
[11:06] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[11:06] <Be-El> T1w: do not use the LVM, but the partition created on the LVM
[11:07] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) has joined #ceph
[11:07] <Be-El> T1w: it should be recognized as partition by ceph-disk
[11:12] <T1w> hm
[11:12] <T1w> partx fails to reload the partition table
[11:13] <T1w> so I dont have a partition entry in /dev
[11:13] * vbellur (~vijay@121.244.87.124) has joined #ceph
[11:15] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[11:15] <T1w> an, partprobe to the rescue
[11:15] <T1w> ah even
[11:16] <T1w> now I've got a /dev/mapper/VGsys0-jour1p1 entry
[11:16] * kefu is now known as kefu|afk
[11:19] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[11:19] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[11:21] * remy1991 (~ravi@115.114.59.182) Quit (Ping timeout: 480 seconds)
[11:22] <T1w> heh, no
[11:22] <T1w> doesn't work either
[11:22] <T1w> [ceph1][WARNIN] Error: /dev/mapper/VGsys0-jour1p1: unrecognised disk label
[11:22] <T1w> [ceph1][WARNIN] WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data
[11:22] <T1w> [ceph1][WARNIN] DEBUG:ceph-disk:Creating journal partition num 1 size 5120 on /dev/mapper/VGsys0-jour1p1
[11:22] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) Quit (Quit: Ex-Chat)
[11:22] * kefu|afk is now known as kefu
[11:28] <cetex> Anticimex: yeah.. that's interesting..
[11:28] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) has joined #ceph
[11:28] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) Quit ()
[11:30] * remy1991 (~ravi@115.114.59.182) has joined #ceph
[11:34] <cetex> so, what about spinning disks and dwpd then.. you say 8TB drive will wear out too, so 300TBW on 8TB drive = 37.5 drive writes, or 37.5 weeks for us.. 400TBW is 50 weeks. assuming ~70% disk-usage and writing 70% of the capacity on the drive per week we're looking at a single 100% drive-write per 10 days.
[11:35] <cetex> so an 8TB "archive-drive" will wear out in 375 - 400days.
[11:36] <Anticimex> take my spinning-disk DWPD with a huge amount of salt
[11:36] <Anticimex> but you definitely need to consider it...
[11:36] <Anticimex> (and research what it really is)
[11:36] <kiranos> spinning is not fixed
[11:37] <Anticimex> kiranos: the "bath tub" curve applies afaik to spinner failures? some die early, nothing in between and some die late (within lifetime)
[11:37] <Anticimex> but it says nothing about PBW afaik
[11:40] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) has joined #ceph
[11:41] <cetex> yeah.. it's interesting..
[11:42] <cetex> does anyone have any more info on this?
[11:43] <cetex> personal experience with write-heavy applications and stuff?
[11:44] <cetex> on hdd's
[11:44] <cetex> ssd's are easy to calculate.
[11:44] <cetex> hdd's, not so much since there's no real data around?
[11:50] <cetex> Enjoy peace of mind with a drive engineered for 24x7 workloads of 180TB per year.
[11:50] <cetex> so, assuming that
[11:50] <cetex> and 7TB written per day
[11:50] <cetex> *week
[11:50] <cetex> :)
[11:50] <cetex> hrm.
[11:50] <cetex> it's actually less.
[11:51] <cetex> with 70% disk-usage it's gonna be 8*0.7 = 5.6TB written per week
[11:51] <cetex> so 32 weeks to reach the 180TBW/year
[11:56] * nardial (~ls@dslb-088-072-094-085.088.072.pools.vodafone-ip.de) has joined #ceph
[12:00] <cetex> so, some more math: storing 150TB = 150*3*1.43 = 643.5TB (*1.43 so we won't use more than 70% of the cluster.)
[12:00] * Scaevolus2 (~Blueraven@162.216.46.173) has joined #ceph
[12:00] <cetex> 643.5/8 = 81 8TB hdd's
[12:02] <cetex> Assuming 100% failure-rate once that 180TBW is reached = ~$20k every 32weeks = ~$2500 per month.
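The arithmetic above in one place, so the assumptions are easy to tweak (all inputs are the rough figures from this discussion):

    awk 'BEGIN {
        cap_tb = 8; usage = 0.70; rated_tbw_year = 180;
        per_week = cap_tb * usage;                 # 5.6 TB written per drive per week
        weeks = rated_tbw_year / per_week;         # ~32 weeks to hit the 180TBW/year rating
        drives = 150 * 3 * 1.43 / cap_tb;          # ~80-81 drives for 150TB at 3x, 70% full
        printf "%.1f TB/week, %.0f weeks, %.1f drives\n", per_week, weeks, drives
    }'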
[12:04] <Heebie> Does anyone have any particular regime for testing performance on a CEPH system. I've found LOTS of reading about the subject, but most of them just use dd and fio, and don't really explain what the numbers in those are telling about CEPH's performance.
[12:06] <kiranos> does anyone know how ceph is populating new drives, I've added 10+ but its only writing data to a few of them
[12:09] * jklare (~jklare@185.27.181.36) Quit (Ping timeout: 480 seconds)
[12:09] * kefu is now known as kefu|afk
[12:13] <rotbeard> Heebie, I use radosbench as well as benchmarks from VMs running inside of ceph (fio for linux VMs, and some windows tool for windows vms). but you should have a look at parallel performance coming from more than just one benchmark node
[12:14] * kefu|afk is now known as kefu
[12:16] * delaf (~delaf@legendary.xserve.fr) Quit (Remote host closed the connection)
[12:17] * bara (~bara@213.175.37.10) has joined #ceph
[12:17] * delaf (~delaf@legendary.xserve.fr) has joined #ceph
[12:19] <Heebie> rotbeard: I only have a test system built, but I was planning on testing simultaneously from a Windows server, a Linux (physical) server, a couple of Linux VM's, and at least one Windows VM. On the Linux items, I'm planning a mix of servers running iSCSI against an RBD-based target (which is obviously Linux as well) and direct. I have VM with librados-based volumes at the libvirt layer, and others just using rbd directly on the VM's. I
[12:22] * bitserker (~toni@88.87.194.130) has joined #ceph
[12:22] * jklare (~jklare@185.27.181.36) has joined #ceph
[12:25] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[12:25] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Remote host closed the connection)
[12:28] * kawa2014 (~kawa@89.184.114.246) Quit (Read error: Connection reset by peer)
[12:29] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[12:30] * Scaevolus2 (~Blueraven@162.216.46.173) Quit ()
[12:32] * shinobu (~oftc-webi@pdf874b16.tokynt01.ap.so-net.ne.jp) has joined #ceph
[12:35] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[12:47] * alrick (~alrick@91.218.144.129) Quit (Remote host closed the connection)
[12:49] * Icey (~chris@0001bbad.user.oftc.net) has joined #ceph
[12:50] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[12:51] <T1w> dammit
[12:51] <T1w> even with a handheld
[12:51] <T1w> sudo ceph-disk prepare --cluster ceph --cluster-uuid 0e754a32-1085-4e79-8088-88f429061280 --fs-type xfs --journal-dev /dev/sdc /dev/mapper/VGsys0-jour1
[12:52] <T1w> I get a 5GB partition created on the journal LV
[12:58] * IceyEC (~chris@0001bbad.user.oftc.net) has joined #ceph
[12:58] * Icey (~chris@0001bbad.user.oftc.net) Quit (Read error: Connection reset by peer)
[12:58] * sileht (~sileht@sileht.net) Quit (Read error: No route to host)
[12:59] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[13:00] <Heebie> Perhaps you should try using ceph-deploy? It seems to work really well.
[13:03] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[13:09] * bara (~bara@213.175.37.10) Quit (Ping timeout: 480 seconds)
[13:12] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[13:15] <T1w> Heebie: ceph-deploy is the reason I began looking at ceph-disk (and manually partitioning stuff) - neither ceph-deploy nor ceph-disk handles OSD journals on LVM
[13:15] <T1w> and I've just thrown in the towel
[13:16] <T1w> I wonder if it's possible to have osd journal on an md-device
[13:18] <via> i personally have found ceph-deploy to not be flexible enough for a lot of setups, but its not *that* hard to just do everything manually
[13:19] <T1w> Agreed - but when ceph-disk doesn't help out (just do a simple googling for "ceph osd journal lvm") things begin to get complicated
[13:19] <T1w> right now I'm looking into whether or not md devices are acceptable for OSD journals
[13:20] <via> right now i'm using journal on lvm
[13:20] <T1w> if they are I'll have to see how I can shrink the VG to get access to some free space for a couple of new md devices that can act as journals
[13:20] <via> i think doing journal on ssd raid1 md would be aceptasble
[13:20] <via> acceptable even
[13:20] <T1w> .. if not I'll probably reinstall
[13:21] <via> pvresize will help
[13:21] <T1w> yeah, but the whole problem with lvm comes down to udev rules
[13:21] * bara (~bara@213.175.37.12) has joined #ceph
[13:21] <via> i've never had to touch a udev rule for lvm
[13:21] <T1w> nor have I, but ceph depends on it for disk management
[13:21] <T1w> .. which causes lvm not to be easily supported
[13:22] <via> if you don't use ceph-disk/deploy it won't
[13:22] <T1w> (again, there are quite a few hits on google explaining that)
[13:22] * zhaochao (~zhaochao@125.39.8.235) Quit (Quit: ChatZilla 0.9.92 [Iceweasel 38.3.0/20150922225347])
[13:22] <T1w> well.. I'd really really _really_ love NOT to have to do all the preparation myself - mkfs, tmp mount for correct ids etc etc etc
[13:23] * rdas (~rdas@182.70.159.135) has joined #ceph
[13:23] <T1w> perhaps at some later point in time where I know what I'm doing, but right now it's not really an option
[13:23] <via> fair enough, although i don't know how to coax it into doing those things
[13:23] <via> i think there are still manual setup instructions
[13:23] <via> yeah, the manual install
[13:24] <T1w> yeah - and it's really not pretty to have to do that n times
[13:24] <T1w> granted it's explained etc etc
[13:24] <T1w> but the guides do have some holes - I found one yesterday
[13:24] * kefu is now known as kefu|afk
[13:24] <via> yeah, its probably not kept up to date very well unfortunately
[13:25] <T1w> .. on RHEL you _have_ to specify what release you want installed
[13:25] <T1w> otherwise it just installes (and failes during) a few bits and pieces
[13:25] <T1w> .. I got ceph-common and ceph-rgw installed, but no mon or osd binaries
[13:26] <T1w> fails even
[13:26] * alrick (~alrick@91.218.144.129) has joined #ceph
[13:26] <via> ah, i think i remember you doing that the other night
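For completeness, the shape of the install command that avoids that on RHEL/CentOS; the release name, host and repo URLs are examples, and --repo-url/--gpg-url are the flags T1w tested earlier this morning:

    ceph-deploy install --release hammer ceph1
    # or point it at an explicit repo, as in the earlier test run:
    ceph-deploy install --repo-url http://download.ceph.com/rpm-hammer/el7 \
                        --gpg-url https://download.ceph.com/keys/release.asc ceph1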
[13:28] <mfa298_> T1w: you may find some of the udev rules (or similar) also make life harder when trying to use md/lvm devices (I've just been testing raid0 md devices for OSDs and had to hack a few bits to make it work)
[13:28] * Jahkeup (Jahkeup@gemini.ca.us.panicbnc.net) Quit (Quit: bai)
[13:28] <T1w> mfa298_: well.. I really don't care for udev rules - I'm not in a situation where I'm likely to move OSD volumes between machines
[13:29] <mfa298_> I think there's been some debate previously about the merits of raid1 on ssds for journals vs using pure partitions and 1/2 the number of journals per ssd
[13:29] <T1w> I'd just like to get my journals on a mirrored set of ssds
[13:29] <T1w> yeah, but with 2 OSDs it's not an issue
[13:29] <T1w> I know it is cause for caution when the ration goes up
[13:29] <T1w> ratio even
[13:30] * alrick (~alrick@91.218.144.129) Quit (Remote host closed the connection)
[13:30] * alrick (~alrick@91.218.144.129) has joined #ceph
[13:30] <via> one thing i've been unclear about is journal loss. docs say you lose the contents of the OSD, but i don't really understand why it wouldn't just lose recent updates
[13:30] <T1w> right now I've got 1U nodes with 2x ssd and 2x 4TB spinning rust
[13:30] <T1w> .. and that is exactly why I'd like to have my journals on mirrored devices for now
[13:31] <via> yeah... i just would love some clarification from ceph devs on that
[13:31] <Heebie> Well, I guess that makes sense. Performance-wise, you would have less latency if using disk-partitions directly.
[13:31] <T1w> come later where I've got lots more OSDs it can be scaled
[13:32] <T1w> Heebie: the tests using fio/dd that I posted on Sebastian's blog showed no difference when doing them on LVM, a md device (raid 1) or directly on the /dev/sd* device
[13:32] <T1w> look for the results from Intel S3710
[13:32] <via> i think the idea of lvm/md induced latency is a thing from the 90s
[13:32] <T1w> yeah
[13:33] <T1w> we've neger had reasons not to use LVM in the past 8 or 10 years
[13:33] <T1w> never even
[13:33] <T1w> hmpf.. afk
[13:33] <via> you might run into issues with the underlying device cache/flushing settings not propagating correctly i suppose
[13:34] * bene2 (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[13:34] * amote (~amote@121.244.87.116) Quit (Quit: Leaving)
[13:38] * ira (~ira@nat-pool-bos-t.redhat.com) has joined #ceph
[13:40] * bara (~bara@213.175.37.12) Quit (Ping timeout: 480 seconds)
[13:43] * pam (~pam@193.106.183.1) has joined #ceph
[13:45] <T1w> back..
[13:45] <T1w> via: what do you mean?
[13:46] * thomnico_ (~thomnico@2a01:e35:8b41:120:5d8d:e0c8:9100:f088) has joined #ceph
[13:46] <via> oh, i meant in general, not ceph specifcally
[13:46] <T1w> ah
[13:46] <via> for example, with drbd on lvm you can't use barriers
[13:47] <T1w> well.. I'm not interested in real high performance
[13:47] <T1w> I'm more interested in integrity and stability
[13:47] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[13:47] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[13:48] <T1w> if an operation takes a bit longer or not it's not really a concern
[13:48] * thomnico (~thomnico@2a01:e35:8b41:120:4884:9cb6:f7cc:487b) Quit (Ping timeout: 480 seconds)
[13:50] <mfa298_> md devices might have negative impacts in some cases if one drive in the set is slower for some reason, but I'm not sure the code path itself is going to significantly add to the latency.
[13:50] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[13:53] <mfa298_> certainly my raid0 OSD devices seem to be slower than the individual disks, but that may be down to the type of drive (we're using SMR drives)
[13:53] <T1w> yeah, but that's to be expected
[13:53] <T1w> SMR drives are..
[13:54] <T1w> nooot really good for anything but a slow trickle of data
[13:54] <T1w> a device managed SMR drive usually has a few GB of non-SMR space
[13:55] <T1w> so incoming data gets written to that area and is then moved to SMR at a later point in time
[13:55] <mfa298_> I'd expect to see similar issues with non smr drives, but not as obvious (and maybe not as often)
[13:55] <T1w> if/when the non-SMR area gets filled up, writes slow down a lot
[13:55] * skrblr (~w2k@tor-exit.squirrel.theremailer.net) has joined #ceph
[13:58] * overclk (~overclk@59.93.66.169) has joined #ceph
[13:58] * overclk (~overclk@59.93.66.169) Quit (autokilled: This host may be infected. Mail support@oftc.net with questions. BOPM (2015-10-29 12:58:20))
[14:00] * nihilifer (nihilifer@s6.mydevil.net) Quit (Read error: Connection reset by peer)
[14:02] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[14:07] * trociny (~mgolub@93.183.239.2) Quit (Ping timeout: 480 seconds)
[14:09] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[14:09] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[14:09] * erhudy (uid89730@id-89730.ealing.irccloud.com) has joined #ceph
[14:15] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[14:15] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[14:16] <analbeard> hi guys, what logging stacks do you suggest for ceph? ELK? or is there a more suitable option?
[14:19] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[14:22] * trociny (~mgolub@93.183.239.2) has joined #ceph
[14:25] * skrblr (~w2k@7V7AAAUIU.tor-irc.dnsbl.oftc.net) Quit ()
[14:25] * CoZmicShReddeR (~dux0r@Relay-J.tor-exit.network) has joined #ceph
[14:25] * georgem (~Adium@69-196-163-180.dsl.teksavvy.com) has joined #ceph
[14:29] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[14:29] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[14:31] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[14:32] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) has joined #ceph
[14:33] * kefu|afk is now known as kefu
[14:36] * vbellur (~vijay@122.172.57.91) has joined #ceph
[14:41] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) Quit (Ping timeout: 480 seconds)
[14:47] <Heebie> That's an interesting question. analbeard: Are you talking about a syslog server, or system for syslogging, or some other method of logging? (I think I'll have to look up this "ELK")
[14:48] * CheKoLyN (~saguilar@bender.parc.xerox.com) has joined #ceph
[14:48] <analbeard> Heebie: basically we want to graph our clusters - i'm not well versed in the different kinds of logging stacks
[14:48] <analbeard> Heebie - i've seen people using Graphite to get some nice data out of their cluster
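One low-effort data source for graphing, whatever the stack, is the admin-socket perf counters (roughly the kind of data tools like Calamari's diamond collector gather); a quick look by hand, assuming a local osd.0:

    ceph daemon osd.0 perf dump | head -n 20   # per-daemon counters as JSON
    ceph osd pool stats                        # per-pool client IO rates
    ceph -s --format json                      # cluster summary, easy to ship to graphite/ELK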
[14:48] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) has joined #ceph
[14:51] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[14:51] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[14:52] <Heebie> I'll look those both up.
[14:54] <pam> Hi, are there any calamari experts here?
[14:54] * olid11 (~olid1982@aftr-185-17-204-125.dynamic.mnet-online.de) Quit (Ping timeout: 480 seconds)
[14:55] <pam> I built the server package as described here http://calamari.readthedocs.org/en/latest//en/latest/development/building_packages.html from the latest master for centos7.1
[14:55] <pam> everything went fine
[14:55] * CoZmicShReddeR (~dux0r@5P6AAAJ4X.tor-irc.dnsbl.oftc.net) Quit ()
[14:55] * nih (~blip2@5P6AAAJ6I.tor-irc.dnsbl.oftc.net) has joined #ceph
[14:56] <pam> when I install the diamond package on a node and later want to do an update with yum update I get a dependency error for python
[14:57] * rdas (~rdas@182.70.159.135) Quit (Quit: Leaving)
[14:57] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[14:57] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[14:57] <pam> here the error: http://pastebin.com/Z2ahvbiF
[14:58] <pam> strange since I think after the python update the path /bin/python will still be there???
[14:59] * kefu is now known as kefu|afk
[15:00] * dyasny (~dyasny@198.251.59.55) has joined #ceph
[15:06] * georgem (~Adium@69-196-163-180.dsl.teksavvy.com) Quit (Quit: Leaving.)
[15:07] * nihilifer (nihilifer@s6.mydevil.net) has joined #ceph
[15:07] * longguang (~chatzilla@123.126.33.253) Quit (Read error: Connection reset by peer)
[15:09] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[15:09] * georgem (~Adium@206.108.127.16) has joined #ceph
[15:10] * longguang (~chatzilla@123.126.33.253) has joined #ceph
[15:10] * dneary (~dneary@208.123.164.2) has joined #ceph
[15:12] * kefu|afk is now known as kefu
[15:15] * amote (~amote@1.39.12.95) has joined #ceph
[15:15] <thehoffau> just curious if anyone is running PV/LVM on top of ceph and if there is anything to watch out for. Looking at replacing the current iscsi-connected block device(s), which serve a KVM farm (10 nodes), with RBD/ceph.
[15:16] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[15:16] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[15:17] <bpkroth> fen: yeah, for a little less than a year
[15:18] <bpkroth> it's survived a few power outages and other mayhem
[15:18] * Kioob`Taff (~plug-oliv@2a01:e35:2e8a:1e0::42:10) has joined #ceph
[15:19] * mhackett (~mhack@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[15:25] * nih (~blip2@5P6AAAJ6I.tor-irc.dnsbl.oftc.net) Quit ()
[15:27] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[15:28] * Jeeves_ (~Jeeves_@host01.tuxis.net) has joined #ceph
[15:28] <Jeeves_> Hi!
[15:29] * rakeshgm (~rakesh@121.244.87.124) Quit (Ping timeout: 480 seconds)
[15:29] <Jeeves_> I'm trying to install ceph-deploy so I can install a new hammer cluster, but I can't seem to find ceph-deploy using http://docs.ceph.com/docs/master/start/quick-start-preflight/#ceph-deploy-setup
[15:29] <Jeeves_> Any hints?
[15:30] <alfredodeza> Jeeves_: what do you mean with 'not able to find'
[15:30] <alfredodeza> after adding the repo?
[15:30] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[15:30] <Jeeves_> Yes
[15:30] <alfredodeza> hrmn
[15:30] <alfredodeza> what release?
[15:30] <Jeeves_> But I'm running Debian sid (using the jessie repo)
[15:31] <alfredodeza> ah
[15:31] <Jeeves_> I only see Ubuntu versions of ceph-deploy here
[15:31] <Jeeves_> http://download.ceph.com/debian-hammer/pool/main/c/ceph-deploy/
[15:31] <alfredodeza> yeah ceph-deploy hasn't made it there yet
[15:31] <alfredodeza> yes
[15:31] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[15:31] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[15:31] <alfredodeza> I can explain why, but I don't think that you want to know why :)
[15:31] * rakeshgm (~rakesh@121.244.87.124) has joined #ceph
[15:31] <Jeeves_> I'm guessing licensing!
[15:32] <alfredodeza> ceph-deploy and ceph builds are distinct, when there is a ceph-deploy build it gets added to the ceph-repo
[15:32] <alfredodeza> *ceph repo
[15:32] <alfredodeza> we haven't had a ceph-deploy release after creating the jessie build for ceph
[15:32] <alfredodeza> hence no ceph-deploy for jessie anywhere
[15:32] <Jeeves_> grmbl
[15:32] <Jeeves_> But ok.
[15:33] <alfredodeza> (sorry)
[15:33] <Jeeves_> If I use the trusty repo, that should work?
[15:33] <Jeeves_> No issues that my node is running jessie?
[15:33] <alfredodeza> but what you want is to be able to install it, not for me to tell you how I haven't got it to build for jessie :)
[15:33] <alfredodeza> you mean for ceph-deploy?
[15:33] <Jeeves_> Yes
[15:33] <m0zes> pip install ceph-deploy
[15:34] <Jeeves_> m0zes: Ah
[15:34] <Jeeves_> m0zes: seems to work :)
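i.e. roughly this, with the virtualenv optional and shown only to keep the install out of the system python:

    virtualenv ~/ceph-deploy-env && . ~/ceph-deploy-env/bin/activate
    pip install ceph-deploy
    ceph-deploy --version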
[15:34] <alfredodeza> right, I didn't mention pip because I ask first if you are familiar with python install tools
[15:34] <alfredodeza> ceph-deploy is 100% python
[15:35] <alfredodeza> thanks m0zes for the suggestion :)
[15:35] * m0zes likes python
[15:35] <Jeeves_> alfredodeza: If I don't understand what you're saying, I'll ask what you mean. Otherwise, assume I understand everything. ;)
[15:36] <alfredodeza> I usually do it the other way around so that I don't sound like a know-it-all :D
[15:36] <alfredodeza> "oh yeah just use pip" can totally go the wrong way if someone has no idea what it is
[15:37] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[15:37] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[15:42] * danieagle (~Daniel@191.254.167.40) has joined #ceph
[15:44] * mattbenjamin (~mbenjamin@aa2.linuxbox.com) has joined #ceph
[15:45] <Jeeves_> :)
[15:52] * rakeshgm (~rakesh@121.244.87.124) Quit (Quit: Leaving)
[15:52] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[15:52] * dalgaaf (uid15138@id-15138.ealing.irccloud.com) Quit (Quit: Connection closed for inactivity)
[15:52] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[15:55] * keeperandy (~textual@50.245.231.209) has joined #ceph
[16:02] * gregmark (~Adium@68.87.42.115) has joined #ceph
[16:04] <Jeeves_> What's wrong with the Ceph.com servers?
[16:05] <Aeso> Jeeves_, what issues are you having?
[16:05] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[16:05] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[16:05] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[16:10] <Jeeves_> sllllooooooooow
[16:14] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[16:15] * dneary (~dneary@208.123.164.2) Quit (Ping timeout: 480 seconds)
[16:15] * Rehevkor (~lmg@nl1x.mullvad.net) has joined #ceph
[16:17] * bara (~bara@213.175.37.12) has joined #ceph
[16:17] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[16:17] * swami1 (~swami@zz2012427835719DFD82.userreverse.dion.ne.jp) has joined #ceph
[16:19] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[16:19] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[16:19] * danieagle (~Daniel@191.254.167.40) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[16:20] * enax (~enax@hq.ezit.hu) Quit (Ping timeout: 480 seconds)
[16:20] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[16:23] * Wielebny (~Icedove@cl-927.waw-01.pl.sixxs.net) Quit (Quit: Wielebny)
[16:25] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[16:25] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[16:28] * ira (~ira@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[16:28] <Jeeves_> Ok, so I currently have a single node with two disks.
[16:29] <Jeeves_> Can I create a pool with size=2 where both disks contain the same data?
[16:29] <Jeeves_> Kinda raid1 on a single box?
[16:30] * bara (~bara@213.175.37.12) Quit (Ping timeout: 480 seconds)
[16:30] * kefu is now known as kefu|afk
[16:30] <m0zes> you can, but the performance and maintenance overhead would make it not fun.
[16:30] * kefu|afk is now known as kefu
[16:31] <Jeeves_> It's temporary
[16:33] <Jeeves_> I thought setting osd_crush_chooseleaf_type to 0 would do the trick
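A minimal sketch of the single-node setup being described, assuming a fresh hammer-era cluster; the pool name and PG count are illustrative. Setting osd_crush_chooseleaf_type to 0 (before the OSDs are created) makes CRUSH pick individual OSDs rather than hosts as the failure domain, so a size=2 pool keeps both copies on the one box, one per disk:

    # ceph.conf, before deploying the OSDs
    [global]
    osd crush chooseleaf type = 0

    # then create a 2-way replicated pool; min_size 1 is only sane for a throwaway test setup
    ceph osd pool create testpool 128 128 replicated
    ceph osd pool set testpool size 2
    ceph osd pool set testpool min_size 1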
[16:36] * shylesh (~shylesh@121.244.87.124) Quit (Remote host closed the connection)
[16:39] * bara (~bara@213.175.37.10) has joined #ceph
[16:39] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[16:40] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[16:41] * xarses (~xarses@zz2012427835719DFD82.userreverse.dion.ne.jp) has joined #ceph
[16:43] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[16:43] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[16:45] * Rehevkor (~lmg@52c05fb8.test.dnsbl.oftc.net) Quit ()
[16:46] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[16:46] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[16:47] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[16:51] * analbeard (~shw@support.memset.com) has joined #ceph
[16:53] * kefu is now known as kefu|afk
[16:54] * ade (~abradshaw@tmo-108-207.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[16:54] * swami1 (~swami@zz2012427835719DFD82.userreverse.dion.ne.jp) Quit (Quit: Leaving.)
[16:56] * olid11 (~olid1982@aftr-185-17-204-125.dynamic.mnet-online.de) has joined #ceph
[16:57] * georgem (~Adium@69-196-163-180.dsl.teksavvy.com) has joined #ceph
[16:58] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[16:58] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[17:02] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[17:03] * moore (~moore@64.202.160.88) has joined #ceph
[17:03] * kefu|afk (~kefu@114.92.106.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[17:05] <cetex> so.. building our own storage solution with archive hdd's (poor choice) -> writing to them and the hdd's crashing within 7.5 months -> buying new
[17:05] * kefu (~kefu@114.92.106.70) has joined #ceph
[17:05] <cetex> it's gonna cost os ~25%-~50% of our current cloud-storage solution...
[17:06] <cetex> *cost us*
[17:07] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[17:07] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[17:07] <cetex> we roughly do 5*10^9 operations to the cloud provider per month currently.. :>
[17:08] <cetex> so roughly 1810 iops 24/7.
[17:10] * nardial (~ls@dslb-088-072-094-085.088.072.pools.vodafone-ip.de) Quit (Quit: Leaving)
[17:10] * Kioob`Taff (~plug-oliv@2a01:e35:2e8a:1e0::42:10) Quit (Quit: Leaving.)
[17:10] <georgem> cetex: what's the warranty for these drives?
[17:10] <cetex> I just did the math on seagates 8tb archive hdd's.
[17:11] * Kurt (~Adium@2001:628:1:5:10de:b96b:8621:1be9) Quit (Quit: Leaving.)
[17:12] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[17:12] <cetex> i think it's 180 TBW / 365d on those
[17:12] <cetex> splitting 1810 iops 50/50 into reads/writes means 905 reads and 905 writes per second; writes are amplified 3x, so 905*3 = 2715 writes per second shared by 80 drives ≈ 34 writes per second per drive.
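A back-of-the-envelope check of those figures (the 50/50 split, 3x replication and 80 drives are cetex's stated assumptions):

    echo $(( 1810 / 2 ))        # 905 writes/s before replication
    echo $(( 905 * 3 ))         # 2715 replicated writes/s across the cluster
    echo $(( 905 * 3 / 80 ))    # 33 with integer division (~34 per drive exactly)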
[17:13] <Aeso> cetex, careful. the 8tb archive drives are SMR drives, so beyond their cache capacity, writes slow to ~7-8MB/s per drive
[17:14] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[17:14] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[17:14] <cetex> yeah.. probably not gonna go there.
[17:16] <cetex> but still, we'd write ~150-200MB/s to this cluster 24/7.
[17:17] <cetex> so 80 archive drives should kinda be able to keep up. (kinda)
[17:17] <cetex> and 100 should definitely.
[17:17] <cetex> but we'll probably go for cheaper $/GB, so 6TB hdd's or something
[17:19] <cetex> we have 192 nodes, of which 180 are diskless; 50% can handle 2 hdd's and 50% can handle 3 hdd's, so if we get more but smaller drives this should solve itself nicely.
[17:20] * LobsterRoll (~LobsterRo@140.247.242.44) has joined #ceph
[17:21] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[17:21] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[17:21] * Uniju1 (~Tenk@tor4thepeople2.torexitnode.net) has joined #ceph
[17:21] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:22] * kefu (~kefu@114.92.106.70) Quit (Max SendQ exceeded)
[17:23] * kefu (~kefu@114.92.106.70) has joined #ceph
[17:23] <cetex> Hm, it seems like we can fit 450 drives if needed.
[17:26] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[17:26] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[17:27] * remy1991 (~ravi@115.114.59.182) Quit (Ping timeout: 480 seconds)
[17:28] * Destreyf (~quassel@email.newagecomputers.info) has joined #ceph
[17:29] * segutier (~segutier@sfo-vpn1.shawnlower.net) has joined #ceph
[17:33] * kefu (~kefu@114.92.106.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[17:33] <mfa298_> My testing so far suggests the SMR drives can be much slower for I/O performance. With similar clusters I was getting 1300mbps writes with 6TB WD Greens, and 500mbps with the Seagate 8TB SMRs
[17:34] <mfa298_> the SMR cluster started getting into blocked requests at that point and the performance tanked
[17:34] <cetex> ah, nice.
[17:34] <cetex> thanks :)
[17:34] <cetex> so, no smr drives, ever.
[17:37] <mfa298_> those tests were 5 storage nodes with 45 drives in each.
[17:38] <cetex> ok. :)
[17:38] <mfa298_> if you want performance then SMR doesn't seem like a good route. If you want lots of cheap capacity then they might be the way to go.
[17:39] <cetex> yeah.. I guess WD Red 4/6TB or Seagate's 4/6TB drives would be a good choice.
[17:40] <mfa298_> I want to test the SMR drives with a proper SSD journal, which might work a bit better; longer term, once filesystems have caught up, their performance might improve
[17:40] <cetex> :)
[17:40] * dgurtner_ (~dgurtner@178.197.236.67) has joined #ceph
[17:41] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[17:41] <cetex> next question then: a cache tier based on memory-backed storage? is there any way to limit the memory allocation of the osd?
[17:41] <cetex> (it's for the read-intensive edge-sites)
[17:42] <cetex> we only have 10GbE to them and plan to push quite a bit more from them.
[17:42] * dgurtner (~dgurtner@178.197.231.145) Quit (Ping timeout: 480 seconds)
[17:43] <kiranos> I want to restart my osd's on one host; currently I do it in a for loop in bash. Is there an internal command for this, to iterate over all osd's and restart them one by one?
[17:44] * LobsterRoll (~LobsterRo@140.247.242.44) Quit (Quit: LobsterRoll)
[17:44] <cetex> but i guess latency to monitor nodes would be an issue if we run an off-site cache-tier?
[17:46] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[17:46] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[17:46] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Remote host closed the connection)
[17:46] <cetex> another alternative is caching it in varnish, but then we need to do sorcery with varnish instead..
[17:47] * pam (~pam@193.106.183.1) Quit (Quit: pam)
[17:51] * Uniju1 (~Tenk@4Z9AAAJ0R.tor-irc.dnsbl.oftc.net) Quit ()
[17:54] <mfa298_> kiranos: on ubuntu I use 'restart ceph-osd-all' (it's an upstart job)
[17:54] * yguang11 (~yguang11@2001:4998:effd:600:29d5:1e49:60ec:fe29) has joined #ceph
[17:54] <kiranos> I use centos7 and in hammer it's still init
[18:00] * thomnico_ (~thomnico@2a01:e35:8b41:120:5d8d:e0c8:9100:f088) Quit (Quit: Ex-Chat)
[18:04] * amote (~amote@1.39.12.95) Quit (Quit: Leaving)
[18:05] * alrick (~alrick@91.218.144.129) Quit (Remote host closed the connection)
[18:05] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[18:06] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[18:06] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[18:07] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[18:07] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[18:07] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) has joined #ceph
[18:11] * ksperis (~laurent@46.218.42.103) has left #ceph
[18:14] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[18:15] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) Quit (Ping timeout: 480 seconds)
[18:17] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) has joined #ceph
[18:18] * shylesh (~shylesh@1.22.75.63) has joined #ceph
[18:21] * thomnico (~thomnico@2a01:e35:8b41:120:5d8d:e0c8:9100:f088) has joined #ceph
[18:22] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[18:23] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[18:24] * CAPSLOCK2000 (~oftc@2001:984:3be3:1::8) has joined #ceph
[18:25] * mykola (~Mikolaj@91.225.202.134) has joined #ceph
[18:27] * ska (~skatinolo@cpe-173-174-111-177.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[18:28] * LobsterRoll (~LobsterRo@140.247.242.44) has joined #ceph
[18:28] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[18:28] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Remote host closed the connection)
[18:29] * Kupo1 (~tyler.wil@23.111.254.159) has joined #ceph
[18:29] <LobsterRoll> kiranos: in my experience 'service ceph restart' on an osd node restarts each osd daemon one at a time already
[18:33] <kiranos> LobsterRoll: thanks, there's very little documentation about restarting: http://docs.ceph.com/docs/hammer/rados/operations/operating/
[18:34] <Aeso> kiranos, that's because it depends on the distro and version you're running Ceph on
[18:34] <Aeso> init scripts, systemd, upstart, etc
[18:34] <LobsterRoll> do a 'watch ceph osd tree' and in another terminal do a 'service ceph restart'; you will see them go down and back up one at a time. I've only ever used hammer on centos7
[18:34] * thomnico (~thomnico@2a01:e35:8b41:120:5d8d:e0c8:9100:f088) Quit (Quit: Ex-Chat)
[18:35] <kiranos> Aeso: well, if ceph provides packages for these different init systems, I don't see that as a valid point
[18:35] * shylesh (~shylesh@1.22.75.63) Quit (Remote host closed the connection)
[18:35] * ska (~skatinolo@cpe-173-174-111-177.austin.res.rr.com) has joined #ceph
[18:36] <kiranos> LobsterRoll: thanks, I'll do that once the cluster rebalances itself
[18:36] <kiranos> thanks!
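A minimal sketch of the per-OSD restart discussed above, for hammer's sysvinit scripts on CentOS 7; unlike the plain 'service ceph restart', this loops over the OSDs explicitly and the wait-for-up check is an illustrative safeguard rather than anything the init script does for you:

    # restart each OSD hosted on this node, one at a time
    for dir in /var/lib/ceph/osd/ceph-*; do
        id=${dir##*-}
        service ceph restart osd.$id
        # wait until the OSD reports up again before touching the next one
        until ceph osd tree | grep -w "osd.$id" | grep -wq up; do
            sleep 5
        done
    done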
[18:36] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[18:36] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[18:38] * dneary (~dneary@12.30.109.130) has joined #ceph
[18:38] <debian112> What are people using for naming conventions on ceph pool names?
[18:38] <rkeene> "rbd"
[18:38] <debian112> just checking here...
[18:40] * derjohn_mob (~aj@88.128.81.73) has joined #ceph
[18:41] * pabluk is now known as pabluk_
[18:43] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[18:44] <LobsterRoll> my pool names are wordy and descriptive, based upon how I've set the replication to happen (i.e. across datacenters or not)
[18:47] * xcezzz (~xcezzz@97-96-111-106.res.bhn.net) Quit (Quit: xcezzz)
[18:47] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[18:47] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[18:48] <todin> is it possible to use more than one cinder-volume with the same ceph pool, to improve the throughput of volume creation?
[18:49] <bene2> anyone have experience with ceph wireshark plugin? I tried running wireshark-gnome-1.12.6-4.fc22.x86_64 on a tcpdump generated on a Ceph OSD server, and all it shows is TCP, nothing higher. http://docs.ceph.com/docs/master/dev/wireshark/
[18:55] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[18:55] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[18:59] * shaunm (~shaunm@208.102.161.229) Quit (Ping timeout: 480 seconds)
[19:01] * georgem (~Adium@69-196-163-180.dsl.teksavvy.com) Quit (Quit: Leaving.)
[19:01] * georgem (~Adium@206.108.127.16) has joined #ceph
[19:04] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[19:06] * georgem (~Adium@206.108.127.16) Quit ()
[19:08] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[19:08] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[19:11] * shawniverson (~shawniver@208.70.47.116) has joined #ceph
[19:15] * jasuarez (~jasuarez@243.Red-81-39-64.dynamicIP.rima-tde.net) Quit (Quit: WeeChat 1.2)
[19:18] * fsimonce (~simon@host30-173-dynamic.23-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[19:22] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[19:22] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[19:23] * stupidnic (~foo@office.expresshosting.net) has joined #ceph
[19:28] * bitserker (~toni@88.87.194.130) Quit (Ping timeout: 480 seconds)
[19:28] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[19:28] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[19:29] * jwilkins (~jowilkin@c-67-180-123-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[19:29] * sileht (~sileht@sileht.net) has joined #ceph
[19:29] * derjohn_mob (~aj@88.128.81.73) Quit (Ping timeout: 480 seconds)
[19:31] * shawniverson (~shawniver@208.70.47.116) Quit (Ping timeout: 480 seconds)
[19:34] * garphy is now known as garphy`aw
[19:36] * dgurtner_ (~dgurtner@178.197.236.67) Quit (Ping timeout: 480 seconds)
[19:37] * shaunm (~shaunm@50-5-224-5.dynamic.fuse.net) has joined #ceph
[19:42] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[19:42] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[19:44] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[19:47] * bara (~bara@213.175.37.10) Quit (Quit: Bye guys!)
[19:49] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[19:49] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[19:54] * swami1 (~swami@zz2012427835719DFD82.userreverse.dion.ne.jp) has joined #ceph
[19:55] * georgem (~Adium@206.108.127.16) has joined #ceph
[20:01] * pam (~pam@host77-118-dynamic.180-80-r.retail.telecomitalia.it) has joined #ceph
[20:02] * swami1 (~swami@zz2012427835719DFD82.userreverse.dion.ne.jp) Quit (Ping timeout: 480 seconds)
[20:08] * pam (~pam@host77-118-dynamic.180-80-r.retail.telecomitalia.it) Quit (Read error: Connection reset by peer)
[20:08] * pam (~pam@nat1.unibz.it) has joined #ceph
[20:09] * mykola (~Mikolaj@91.225.202.134) Quit (Quit: away)
[20:09] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[20:09] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[20:21] * lcurtis (~lcurtis@47.19.105.250) Quit (Quit: Ex-Chat)
[20:24] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[20:24] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[20:29] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[20:30] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[20:33] * Jourei (~Esge@7V7AAAUYK.tor-irc.dnsbl.oftc.net) has joined #ceph
[20:35] * pam_ (~pam@host77-118-dynamic.180-80-r.retail.telecomitalia.it) has joined #ceph
[20:37] * pam_ (~pam@host77-118-dynamic.180-80-r.retail.telecomitalia.it) Quit ()
[20:39] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[20:40] * pam (~pam@nat1.unibz.it) Quit (Ping timeout: 480 seconds)
[20:43] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[20:43] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[20:51] * LPG (~LPG@c-50-181-212-148.hsd1.wa.comcast.net) has joined #ceph
[20:54] <debian112> what are the ceph ports I need open on firewall?
[20:54] <debian112> ceph-mon = 6789
[20:54] <goberle> Hey everyone, has anybody experienced a wrong MAX AVAIL size for a pool with multiple step take/step emit in a rule? We have two datacenters and we would like to set up a pool with 3 replicas, two in one datacenter and 1 in the other. We will set the min_size to 2 and are aware that this will break I/O if we lose the datacenter with 2 replicas. We built this crush map: https://gist.github.com/goberle/7a7e7fed5624eed8c4ce and ceph df gives us a raw size of 90TB
[20:54] <goberle> but only 11.9TB MAX AVAIL for the pool with size 3.
[20:54] <debian112> ceph-osd = 6800-?
[20:55] <goberle> It sounds related to an issue that someone also hit a year ago (http://ceph-users.ceph.narkive.com/0wKGTVg5/incorrect-pool-size-wrong-ruleset), but in our case, changing the crush tunables had no effect.
[20:57] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[21:00] <lurbs> debian112: As many as it needs. Each OSD listens on its own set of ports on a machine.
[21:00] <debian112> yeah I saw. I added: 6800-7100
[21:00] * dneary (~dneary@12.30.109.130) Quit (Ping timeout: 480 seconds)
[21:00] <debian112> it should cover any future things
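A minimal sketch of opening those ports with firewalld on CentOS 7; the 6800-7100 range is the one debian112 chose above (each OSD listens on several ports in that range, so a host with many OSDs may need a wider range, e.g. up to 7300 as the upstream docs suggest):

    firewall-cmd --zone=public --add-port=6789/tcp --permanent        # monitors
    firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent   # OSDs
    firewall-cmd --reload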
[21:03] * danieagle (~Daniel@187.74.64.217) has joined #ceph
[21:03] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[21:03] * Jourei (~Esge@7V7AAAUYK.tor-irc.dnsbl.oftc.net) Quit ()
[21:03] * ska (~skatinolo@cpe-173-174-111-177.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[21:12] * dupont-y (~dupont-y@familledupont.org) has joined #ceph
[21:13] * ska (~skatinolo@cpe-173-174-111-177.austin.res.rr.com) has joined #ceph
[21:13] * dan_ (~dan@dvanders-pro.cern.ch) Quit (Ping timeout: 480 seconds)
[21:16] * rendar (~I@host46-131-dynamic.59-82-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[21:18] * shaunm (~shaunm@50-5-224-5.dynamic.fuse.net) Quit (Ping timeout: 480 seconds)
[21:19] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[21:19] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[21:19] * dneary (~dneary@12.30.109.130) has joined #ceph
[21:19] * rendar (~I@host46-131-dynamic.59-82-r.retail.telecomitalia.it) has joined #ceph
[21:25] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[21:25] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[21:27] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[21:28] <cetex> goberle: hm. could it be that you don't have enough PG's in the pool?
[21:28] <cetex> goberle: the pool has PG's which are placed by crush on some OSD's; if the PG's don't cover all OSD's, I'm thinking the pool won't see all available disk space.
[21:29] <cetex> not sure though. just guessing.
[21:30] * bene2 (~bene@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[21:30] <goberle> cetex: the pool has 2048 PGs (72*100/3 = 2400 => 2048)
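The figure in parentheses follows the usual rule of thumb of roughly (OSD count x 100) / replica count, then settling on a nearby power of two; exactly how you round varies between guides, and 2048 is what was picked here:

    echo $(( 72 * 100 / 3 ))    # 2400 raw target -> 2048 PGs chosen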
[21:31] <lurbs> goberle: What does 'ceph osd tree' look like?
[21:31] <Kupo1> anyone seeing https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc slow?
[21:32] <Kupo1> yum is complaining 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds'
[21:34] <goberle> lurbs: give me 5min to restore the previous settings and I will paste the ceph osd tree output here
[21:36] <cetex> goberle: hm, sounds ok actually. :)
[21:42] <goberle> lurbs: https://gist.github.com/goberle/386debbc85a8d469cfcf
[21:42] <goberle> lurbs: you have the ceph osd tree, the decompiled crush map and the output of ceph df
[21:44] * shaunm (~shaunm@208.102.161.229) has joined #ceph
[21:44] <lurbs> Is there a pool using the replicated_ruleset and a size of 3?
[21:44] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[21:44] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[21:45] <goberle> one-dev is using it
[21:45] * linjan_ (~linjan@176.195.227.255) has joined #ceph
[21:46] <lurbs> And it thinks that the MAX AVAIL is the total space in a single OSD host, right.
[21:47] <lurbs> If you do something like: for i in {1.100}; do ceph osd map one-dev $i; done
[21:47] <lurbs> Does it consistently pick OSDs from the same host in the ley datacentre?
[21:47] <lurbs> s/ley/le7/
[21:47] <goberle> yes, according to my understanding of the crush map, the MAX AVAIL of the pool should be more like 25TB
[21:48] <lurbs> Er, s/1.100/1..100/
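Putting lurbs' two corrections together, the placement check looks like this; the pool name is from the example above and the bare numbers are just arbitrary object names, since ceph osd map only reports where an object of that name would land:

    # prints the PG and the up/acting OSD set for each hypothetical object
    for i in {1..100}; do ceph osd map one-dev $i; done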
[21:50] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[21:50] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[21:50] <goberle> nope, it seems that all primary PGs are in the mai datacenter and there are always 2 replicas in mai and only one in le7
[21:50] <lurbs> If so, then it's a CRUSH map problem. If it spreads them through all the hosts in le7 then it's an issue with MAX AVAIL reporting incorrectly.
[21:51] <lurbs> Is the replica in le7 always on the same host in le7, or on different ones?
[21:51] <goberle> lurbs: that's exactly what I concluded, the MAX AVAIL is not reported correctly
[21:51] * linjan (~linjan@176.195.227.255) Quit (Ping timeout: 480 seconds)
[21:51] <goberle> lurbs: they are not on the same host in le7
[21:52] <goberle> lurbs: https://gist.github.com/goberle/52eb11d71edef69821f8
[21:54] * CheKoLyN (~saguilar@bender.parc.xerox.com) Quit (Remote host closed the connection)
[21:56] <lurbs> I agree, problem with MAX AVAIL. And not one I'm able to help with I'm afraid. :)
[21:57] <lurbs> If it had been CRUSH map logic, maybe.
[21:57] * Shnaw (~Schaap@tor-exit.squirrel.theremailer.net) has joined #ceph
[21:57] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[21:57] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[22:09] <goberle> lurbs: ok :), anyway, thanks for the help! I guess I will open a bug report.
[22:17] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[22:17] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[22:19] * erhudy (uid89730@id-89730.ealing.irccloud.com) Quit (Quit: Connection closed for inactivity)
[22:23] * dneary (~dneary@12.30.109.130) Quit (Ping timeout: 480 seconds)
[22:23] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[22:23] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[22:23] * dyasny (~dyasny@198.251.59.55) Quit (Ping timeout: 480 seconds)
[22:27] * Shnaw (~Schaap@4Z9AAAKDE.tor-irc.dnsbl.oftc.net) Quit ()
[22:28] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[22:28] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[22:34] * dyasny (~dyasny@198.251.60.80) has joined #ceph
[22:41] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[22:41] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[22:43] * moore (~moore@64.202.160.88) Quit (Remote host closed the connection)
[22:46] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[22:46] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[22:51] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[22:53] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[22:53] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[22:55] * dupont-y (~dupont-y@familledupont.org) Quit (Remote host closed the connection)
[22:57] * shinobu (~oftc-webi@pdf874b16.tokynt01.ap.so-net.ne.jp) Quit (Ping timeout: 480 seconds)
[22:58] * dupont-y (~dupont-y@familledupont.org) has joined #ceph
[22:58] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[22:58] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[22:58] * yguang11 (~yguang11@2001:4998:effd:600:29d5:1e49:60ec:fe29) Quit (Remote host closed the connection)
[23:01] * LobsterRoll (~LobsterRo@140.247.242.44) Quit (Ping timeout: 480 seconds)
[23:04] * danieagle (~Daniel@187.74.64.217) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[23:05] * zenpac1 (~zenpac3@66.55.33.66) Quit (Ping timeout: 480 seconds)
[23:07] * DV (~veillard@2001:41d0:1:d478::1) Quit (Remote host closed the connection)
[23:07] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[23:08] * shawniverson (~shawniver@192.69.183.61) has joined #ceph
[23:08] * yguang11 (~yguang11@66.228.162.44) has joined #ceph
[23:12] * dneary (~dneary@12.30.109.130) has joined #ceph
[23:15] * moore (~moore@71-211-73-118.phnx.qwest.net) has joined #ceph
[23:16] * moore (~moore@71-211-73-118.phnx.qwest.net) Quit (Remote host closed the connection)
[23:16] * moore (~moore@64.202.160.233) has joined #ceph
[23:30] * zenpac (~zenpac3@66.55.33.66) has joined #ceph
[23:33] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) has joined #ceph
[23:39] * dneary (~dneary@12.30.109.130) Quit (Ping timeout: 480 seconds)
[23:43] * linjan_ (~linjan@176.195.227.255) Quit (Ping timeout: 480 seconds)
[23:51] * dupont-y (~dupont-y@familledupont.org) Quit (Ping timeout: 480 seconds)
[23:52] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[23:52] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.