#ceph IRC Log

Index

IRC Log for 2016-07-20

Timestamps are in GMT/BST.

[0:00] <bstillwell> Well, if it's just archival you can live with the slowness then.
[0:00] <Roland-> well I am still expecting like maybe 200 MB/s ?
[0:00] <bstillwell> blizzow: I don't have a good explanation for it, but others have run into issues with mons not being on SSDs
[0:00] <Roland-> since each disk can do 120 MB/s and I got 8
[0:01] * bene2 is now known as bene2_afk
[0:01] * bene2_afk (~bene@2601:193:4101:f410:ea2a:eaff:fe08:3c7a) Quit (Quit: Konversation terminated!)
[0:02] <bstillwell> Roland-: Maybe with multiple threads.
[0:03] <Roland-> well these nodes will be dedicated for this only
[0:03] <bstillwell> A single write won't exceed 60 MB/s though, since you'll be writing to the journals of two drives at once, and again to the disk.
[0:03] * rendar (~I@host200-143-dynamic.59-82-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[0:04] <bstillwell> That's why people use SSDs for journals, so they're not doing double writes to the same disk.
[0:04] * mattbenjamin (~mbenjamin@12.118.3.106) Quit (Quit: Leaving.)
[0:05] <Roland-> ok thank you
[0:06] <bstillwell> BlueStore gets rid of the double writes, but it's not ready for production use yet. Just testing.
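The figures above follow from FileStore's write path: every client write hits the OSD journal and then the data partition again, so a journal co-located on the same spinner roughly halves that disk's usable write bandwidth. A back-of-the-envelope sketch with the numbers from this conversation (illustrative values only):

    # FileStore write amplification, rough arithmetic (illustrative values)
    disk_bw=120                    # MB/s a single 7200rpm spinner can stream
    per_osd=$((disk_bw / 2))       # journal + data land on the same disk -> ~60 MB/s per OSD
    echo "effective write bandwidth per OSD: ${per_osd} MB/s"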
[0:06] * haplo37 (~haplo37@199.91.185.156) Quit (Remote host closed the connection)
[0:07] * mhackett (~mhack@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[0:17] * slowriot (~Bored@61TAAAPLG.tor-irc.dnsbl.oftc.net) Quit ()
[0:17] * Scymex (~tunaaja@216.239.90.19) has joined #ceph
[0:18] * elfurbe (~Adam@saint.uits.arizona.edu) has joined #ceph
[0:19] * Karcaw (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) has joined #ceph
[0:21] <elfurbe> Having a strange day with my ceph cluster. It's an infernalis cluster, 18 osds. I've got one osd that's failed, ceph shows it as down/out, but the cluster isn't recovering. Load average on this host is at 83 and climbing, but the shell is still pretty responsive in general. Some commands do hang the shell, including unfortunately ps aux and ps -ef, so I can't tell if there's a stuck process or something, but I'm not sure that's even relevant to the
[0:21] <elfurbe> question of why the cluster isn't recovering a down/out osd.
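When a down/out OSD does not trigger recovery, the usual first checks are the health detail, the CRUSH tree, and the cluster flags; these are standard ceph CLI calls with no cluster-specific names assumed:

    ceph health detail            # which PGs are degraded/undersized and why
    ceph osd tree                 # confirm the OSD really is down/out and where it sits in CRUSH
    ceph pg dump_stuck unclean    # list PGs that are stuck and not recovering
    ceph osd dump | grep flags    # look for noout/norecover/nobackfill flags that block recovery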
[0:23] <kwork> can you run ceph "async" so that local write is fast and sync is delayed ?
[0:24] <elfurbe> @kwork: That was a new question and not a reply to me, right?
[0:24] <kwork> yes
[0:24] <elfurbe> Fantastic, I was very confused :D
[0:38] * KindOne_ (kindone@198.14.197.137) has joined #ceph
[0:43] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[0:43] * KindOne_ is now known as KindOne
[0:47] * Scymex (~tunaaja@26XAAAF9A.tor-irc.dnsbl.oftc.net) Quit ()
[0:57] * fsimonce (~simon@host99-64-dynamic.27-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[0:59] * kuku (~kuku@119.93.91.136) has joined #ceph
[1:02] * squizzi (~squizzi@107.13.31.195) Quit (Quit: bye)
[1:05] * evelu (~erwan@133.17.90.92.rev.sfr.net) has joined #ceph
[1:06] * debian112 (~bcolbert@wftest.cuda-inc.com) has joined #ceph
[1:07] * dnunez (~dnunez@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[1:07] * evelu (~erwan@133.17.90.92.rev.sfr.net) Quit (Read error: Connection reset by peer)
[1:09] * borei1 (~dan@216.13.217.230) has left #ceph
[1:09] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[1:09] * isti (~isti@BC06E559.dsl.pool.telekom.hu) Quit (Ping timeout: 480 seconds)
[1:11] * oms101 (~oms101@p20030057EA612300C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:11] * whatevsz (~quassel@b9168e24.cgn.dg-w.de) Quit (Ping timeout: 480 seconds)
[1:17] * PappI (~QuantumBe@31.220.4.161) has joined #ceph
[1:20] * oms101 (~oms101@p20030057EA087A00C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:20] * xarses (~xarses@64.124.158.100) Quit (Ping timeout: 480 seconds)
[1:22] * devster (~devsterkn@2001:41d0:1:a3af::1) Quit (Ping timeout: 480 seconds)
[1:22] * cathode (~cathode@50.232.215.114) Quit (Quit: Leaving)
[1:24] * xarses (~xarses@64.124.158.100) has joined #ceph
[1:25] * vata (~vata@207.96.182.162) Quit (Quit: Leaving.)
[1:27] <Italux> Has anyone already deployed a multisite RadosGW on Jewel?
[1:35] * whatevsz (~quassel@185.22.140.59) has joined #ceph
[1:41] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[1:42] * reed (~reed@216.38.134.18) Quit (Quit: Ex-Chat)
[1:42] * wushudoin (~wushudoin@2601:646:8281:cfd:2ab2:bdff:fe0b:a6ee) Quit (Ping timeout: 480 seconds)
[1:44] * xarses (~xarses@64.124.158.100) Quit (Ping timeout: 480 seconds)
[1:45] * janos_ (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (Read error: Connection reset by peer)
[1:47] * PappI (~QuantumBe@26XAAAGBB.tor-irc.dnsbl.oftc.net) Quit ()
[1:51] * valeech (~valeech@pool-108-44-162-111.clppva.fios.verizon.net) Quit (Quit: valeech)
[1:55] * hroussea (~hroussea@000200d7.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:56] * krypto (~krypto@G68-90-105-6.sbcis.sbc.com) Quit (Ping timeout: 480 seconds)
[2:04] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[2:05] <Amto_res> Hello, I have "osd_max_backfills": "1", "osd_recovery_threads": "1", "osd_recovery_max_active": "1", "osd_recovery_op_priority": "3". Why do I see active+remapped+backfilling = 12? I was not expecting that many PGs to be backfilling at once.
[2:11] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Ping timeout: 480 seconds)
[2:11] * elfurbe (~Adam@saint.uits.arizona.edu) Quit (Quit: Textual IRC Client: www.textualapp.com)
[2:18] <Italux> How many OSDs do you have?
[2:19] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[2:20] * truan-wang (~truanwang@220.248.17.34) has joined #ceph
[2:21] * PuyoDead (~Maariu5_@cry.ip-eend.nl) has joined #ceph
[2:24] <Amto_res> Italux: 32 OSDs
[2:27] * pdrakeweb (~pdrakeweb@oh-76-5-108-60.dhcp.embarqhsd.net) has joined #ceph
[2:30] * vata (~vata@cable-173.246.3-246.ebox.ca) has joined #ceph
[2:32] <Italux> These configurations are per OSD, and you also have osd_recovery_op_priority, which reduces the recovery priority on each OSD.
[2:33] <Italux> So while the OSDs are busy with client ops they will not be able to recover any data
[2:35] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) Quit (Quit: Leaving)
[2:36] <Amto_res> The problem is that my clients can not read and write .. :(
[2:40] * davidzlap (~Adium@2605:e000:1313:8003:4cc6:2246:b05b:6cd7) Quit (Quit: Leaving.)
[2:41] <Italux> Is it a performance issue?
[2:47] <Amto_res> Recovery impacts client performance.
[2:51] * PuyoDead (~Maariu5_@26XAAAGF4.tor-irc.dnsbl.oftc.net) Quit ()
[2:51] * Jeffrey4l_ (~Jeffrey@110.252.46.217) has joined #ceph
[2:51] * OODavo (~nartholli@tor-exit.talyn.se) has joined #ceph
[2:51] * Jeffrey4l_ (~Jeffrey@110.252.46.217) Quit (Remote host closed the connection)
[2:54] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[2:54] <Italux> Unfortunately, this is something common and I faced this kind of issue several times.
[2:54] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[2:54] <Italux> To minimize the impact I use this configuration:
[2:54] <Italux> [osd]
[2:54] <Italux> osd disk thread ioprio priority = 7
[2:54] <Italux> osd disk thread ioprio class = idle
[2:55] <Italux> osd op thread timeout = 30
[2:55] <Italux> osd recovery max active = 10
[2:55] <Italux> osd max backfills = 10
[2:55] <Italux> But this depends on your hardware configuration on the OSD nodes
[2:56] <Italux> My OSD nodes have SSD journal and 1TB SATA disks with 64GB RAM
[3:00] <Italux> @Amto_res You can use this command to change the OSD configuration without restart:
[3:00] <Italux> ceph tell osd.* injectargs '--osd_max_backfills=10'
[3:06] <Amto_res> What exactly would that change for me?
[3:08] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[3:10] <Italux> @Amto_res I think you can try changing the "osd disk thread ioprio priority" and "osd disk thread ioprio class"
[3:10] <Italux> ceph tell osd.* injectargs '--osd_disk_thread_ioprio_priority=7'
[3:10] <Italux> ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class=idle'
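Injected values can be verified per daemon over the admin socket on the OSD host (osd.0 below is just an example id); note that the ioprio settings only take effect when the OSD's disk uses the CFQ I/O scheduler:

    ceph daemon osd.0 config show | grep -E 'osd_max_backfills|osd_recovery_max_active|osd_disk_thread_ioprio'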
[3:15] * JPGainsborough (~jpg@71.122.174.182) Quit (Remote host closed the connection)
[3:19] * JPGainsborough (~jpg@71.122.174.182) has joined #ceph
[3:21] * OODavo (~nartholli@61TAAAPU6.tor-irc.dnsbl.oftc.net) Quit ()
[3:25] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Quit: jermudgeon)
[3:26] * rapedex (~Zombiekil@exit.tor.uwaterloo.ca) has joined #ceph
[3:34] * haplo37 (~haplo37@107-190-44-23.cpe.teksavvy.com) has joined #ceph
[3:41] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:45] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[3:46] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[3:49] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[3:56] * rapedex (~Zombiekil@26XAAAGHI.tor-irc.dnsbl.oftc.net) Quit ()
[3:56] * Kalado (~MonkeyJam@chulak.enn.lu) has joined #ceph
[4:04] * kefu (~kefu@183.193.182.196) has joined #ceph
[4:08] * MentalRay (~MentalRay@107.171.161.165) has joined #ceph
[4:08] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[4:13] * kefu (~kefu@183.193.182.196) Quit (Ping timeout: 480 seconds)
[4:16] * kefu (~kefu@114.92.96.253) has joined #ceph
[4:19] * Racpatel (~Racpatel@2601:87:0:24af::1fbc) Quit (Quit: Leaving)
[4:21] * yanzheng (~zhyan@125.70.23.222) has joined #ceph
[4:26] * Kalado (~MonkeyJam@5AEAAAFSZ.tor-irc.dnsbl.oftc.net) Quit ()
[4:29] * debian112 (~bcolbert@wftest.cuda-inc.com) has left #ceph
[4:31] * jarrpa (~jarrpa@2602:43:481:4300:eab1:fcff:fe47:f680) Quit (Ping timeout: 480 seconds)
[4:36] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[4:37] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) has joined #ceph
[4:51] * MentalRay (~MentalRay@107.171.161.165) Quit (Quit: Textual IRC Client: www.textualapp.com)
[4:52] * MACscr1 (~Adium@c-73-9-230-5.hsd1.il.comcast.net) has joined #ceph
[4:52] * MACscr (~Adium@c-73-9-230-5.hsd1.il.comcast.net) Quit (Read error: Connection reset by peer)
[4:53] * MACscr1 (~Adium@c-73-9-230-5.hsd1.il.comcast.net) has left #ceph
[4:54] * MentalRay (~MentalRay@107.171.161.165) has joined #ceph
[4:55] * kuku (~kuku@119.93.91.136) has joined #ceph
[4:56] * xENO_ (~Scymex@tor2r.ins.tor.net.eu.org) has joined #ceph
[5:02] * vicente_ (~~vicente@125-227-238-55.HINET-IP.hinet.net) has joined #ceph
[5:02] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Read error: Connection reset by peer)
[5:06] * chengpeng_ (~chengpeng@180.168.126.243) has joined #ceph
[5:07] * karnan (~karnan@106.51.130.90) has joined #ceph
[5:11] * chengpeng (~chengpeng@180.168.126.243) Quit (Ping timeout: 480 seconds)
[5:14] * jarrpa (~jarrpa@207.87.211.74) has joined #ceph
[5:18] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[5:19] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[5:22] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[5:22] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[5:22] * debian112 (~bcolbert@173-164-167-196-SFBA.hfc.comcastbusiness.net) has joined #ceph
[5:23] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[5:23] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[5:26] * xENO_ (~Scymex@9YSAAAQLF.tor-irc.dnsbl.oftc.net) Quit ()
[5:30] * Vacuum_ (~Vacuum@88.130.195.35) has joined #ceph
[5:32] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) has joined #ceph
[5:34] * penguinRaider (~KiKo@103.6.219.219) Quit (Ping timeout: 480 seconds)
[5:35] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[5:37] * Vacuum__ (~Vacuum@88.130.219.117) Quit (Ping timeout: 480 seconds)
[5:38] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[5:45] * penguinRaider (~KiKo@103.6.219.219) has joined #ceph
[5:48] * vimal (~vikumar@114.143.165.70) has joined #ceph
[5:53] * shaunm (~shaunm@74.83.215.100) Quit (Ping timeout: 480 seconds)
[5:53] * rdias (~rdias@2001:8a0:749a:d01:f1b7:e796:acd5:2c03) Quit (Ping timeout: 480 seconds)
[5:53] * rdias (~rdias@bl7-92-98.dsl.telepac.pt) has joined #ceph
[6:01] * MentalRay (~MentalRay@107.171.161.165) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:01] * jermudgeon (~jhaustin@southend.mdu.whitestone.link) has joined #ceph
[6:05] * karnan (~karnan@106.51.130.90) Quit (Quit: Leaving)
[6:05] * karnan (~karnan@106.51.130.90) has joined #ceph
[6:09] * jarrpa (~jarrpa@207.87.211.74) Quit (Ping timeout: 480 seconds)
[6:10] * kefu (~kefu@114.92.96.253) Quit (Max SendQ exceeded)
[6:11] * kefu (~kefu@114.92.96.253) has joined #ceph
[6:13] * vimal (~vikumar@114.143.165.70) Quit (Quit: Leaving)
[6:14] * _28_ria (~kvirc@opfr028.ru) Quit (Read error: Connection reset by peer)
[6:15] * _28_ria (~kvirc@opfr028.ru) has joined #ceph
[6:26] * tunaaja (~ChauffeR@65.19.167.131) has joined #ceph
[6:34] * _28_ria (~kvirc@opfr028.ru) Quit (Read error: Connection reset by peer)
[6:35] * vimal (~vikumar@121.244.87.116) has joined #ceph
[6:36] * karnan (~karnan@106.51.130.90) Quit (Ping timeout: 480 seconds)
[6:45] * karnan (~karnan@106.51.137.148) has joined #ceph
[6:50] * truan-wang (~truanwang@220.248.17.34) Quit (Ping timeout: 480 seconds)
[6:56] * tunaaja (~ChauffeR@5AEAAAFVY.tor-irc.dnsbl.oftc.net) Quit ()
[6:56] * Rosenbluth (~tunaaja@tor.les.net) has joined #ceph
[6:59] * truan-wang (~truanwang@220.248.17.34) has joined #ceph
[7:12] * swami1 (~swami@49.38.0.108) has joined #ceph
[7:13] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[7:19] * babilen (~babilen@babilen.user.oftc.net) has joined #ceph
[7:26] * Rosenbluth (~tunaaja@61TAAAP1C.tor-irc.dnsbl.oftc.net) Quit ()
[7:26] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[7:26] * Maza (~BlS@5AEAAAFWW.tor-irc.dnsbl.oftc.net) has joined #ceph
[7:28] * rdas (~rdas@121.244.87.116) has joined #ceph
[7:35] * raso1 (~raso@ns.deb-multimedia.org) Quit (Read error: Connection reset by peer)
[7:35] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[7:36] * raso (~raso@ns.deb-multimedia.org) has joined #ceph
[7:41] * kefu (~kefu@114.92.96.253) Quit (Max SendQ exceeded)
[7:42] * kefu (~kefu@li764-181.members.linode.com) has joined #ceph
[7:43] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:95e5:5468:e78b:6573) Quit (Ping timeout: 480 seconds)
[7:45] * kefu (~kefu@li764-181.members.linode.com) Quit (Max SendQ exceeded)
[7:45] * kefu (~kefu@li764-181.members.linode.com) has joined #ceph
[7:52] * haplo37 (~haplo37@107-190-44-23.cpe.teksavvy.com) Quit (Remote host closed the connection)
[7:56] * dgurtner (~dgurtner@209.132.186.254) has joined #ceph
[7:56] * Maza (~BlS@5AEAAAFWW.tor-irc.dnsbl.oftc.net) Quit ()
[7:56] * Dinnerbone (~Maariu5_@194.187.249.135) has joined #ceph
[8:02] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[8:05] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[8:14] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[8:17] * art_yo (~kvirc@149.126.169.197) Quit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
[8:18] * karnan (~karnan@106.51.137.148) Quit (Remote host closed the connection)
[8:19] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[8:20] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[8:22] * Roland- (~Roland-@46.7.150.9) Quit ()
[8:25] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[8:26] * Dinnerbone (~Maariu5_@26XAAAGNY.tor-irc.dnsbl.oftc.net) Quit ()
[8:26] * Nijikokun (~loft@91.109.29.120) has joined #ceph
[8:27] * jermudgeon (~jhaustin@southend.mdu.whitestone.link) Quit (Quit: jermudgeon)
[8:34] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[8:34] * kefu (~kefu@li764-181.members.linode.com) Quit (Read error: Connection reset by peer)
[8:42] * kefu (~kefu@114.92.96.253) has joined #ceph
[8:45] * kefu (~kefu@114.92.96.253) Quit (Max SendQ exceeded)
[8:46] * kefu (~kefu@114.92.96.253) has joined #ceph
[8:46] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) Quit (Remote host closed the connection)
[8:47] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[8:51] * epicguy (~epicguy@41.164.8.42) has joined #ceph
[8:56] * Nijikokun (~loft@5AEAAAFX2.tor-irc.dnsbl.oftc.net) Quit ()
[8:56] * poller (~blip2@tor-exit.15-cloud.fr) has joined #ceph
[8:56] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) has joined #ceph
[8:57] * isti (~isti@BC06E559.dsl.pool.telekom.hu) has joined #ceph
[9:00] <TheSov> who was it on here using the minnowboards for a ceph cluster?
[9:03] * GeoTracer (~Geoffrey@41.77.153.99) Quit (Ping timeout: 480 seconds)
[9:04] * GeoTracer (~Geoffrey@41.77.153.99) has joined #ceph
[9:05] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[9:07] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) has joined #ceph
[9:07] * allaok (~allaok@machine107.orange-labs.com) has left #ceph
[9:09] * crismike (~kmajk@nat-hq.ext.getresponse.com) has joined #ceph
[9:10] * epicguy (~epicguy@41.164.8.42) has left #ceph
[9:11] * epicguy (~epicguy@41.164.8.42) has joined #ceph
[9:11] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[9:18] * karnan (~karnan@121.244.87.117) has joined #ceph
[9:19] * rendar (~I@host249-177-dynamic.37-79-r.retail.telecomitalia.it) has joined #ceph
[9:26] * poller (~blip2@61TAAAP3W.tor-irc.dnsbl.oftc.net) Quit ()
[9:26] * pakman__ (~tunaaja@justus.impium.de) has joined #ceph
[9:29] * fsimonce (~simon@host99-64-dynamic.27-79-r.retail.telecomitalia.it) has joined #ceph
[9:30] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[9:31] * isti (~isti@BC06E559.dsl.pool.telekom.hu) Quit (Ping timeout: 480 seconds)
[9:36] * hybrid512 (~walid@195.200.189.206) has joined #ceph
[9:36] <nikbor> hello, what is the correct place to report broken links in the official docu pages?
[9:36] * hybrid512 (~walid@195.200.189.206) Quit ()
[9:36] * hybrid512 (~walid@195.200.189.206) has joined #ceph
[9:38] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[9:44] * epicguy (~epicguy@41.164.8.42) has left #ceph
[9:46] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) Quit (Remote host closed the connection)
[9:48] * kefu is now known as kefu|afk
[9:50] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) has joined #ceph
[9:50] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) has joined #ceph
[9:56] * pakman__ (~tunaaja@26XAAAGPE.tor-irc.dnsbl.oftc.net) Quit ()
[9:56] * kefu|afk is now known as kefu
[9:57] * derjohn_mob (~aj@46.189.28.50) has joined #ceph
[9:57] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:ed7c:2028:533d:c626) has joined #ceph
[9:57] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Ping timeout: 480 seconds)
[9:58] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[10:08] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[10:10] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[10:16] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[10:18] * truan-wang (~truanwang@220.248.17.34) Quit (Ping timeout: 480 seconds)
[10:21] * DanFoster (~Daniel@office.34sp.com) has joined #ceph
[10:33] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[10:34] * crismike (~kmajk@nat-hq.ext.getresponse.com) Quit (Read error: Connection reset by peer)
[10:39] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[10:40] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[10:48] * garphy`aw is now known as garphy
[10:50] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) Quit (Quit: treenerd_)
[10:51] * isti (~isti@fw.alvicom.hu) has joined #ceph
[10:56] * nastidon1 (~yuastnav@89.43.62.11) has joined #ceph
[11:07] * TMM (~hp@185.5.121.201) has joined #ceph
[11:07] * vimal (~vikumar@121.244.87.116) Quit (Ping timeout: 480 seconds)
[11:08] * vimal (~vikumar@121.244.87.116) has joined #ceph
[11:19] * kefu (~kefu@114.92.96.253) Quit (Max SendQ exceeded)
[11:19] * kefu (~kefu@114.92.96.253) has joined #ceph
[11:20] * swami1 (~swami@49.38.0.108) Quit (Ping timeout: 480 seconds)
[11:21] * swami1 (~swami@49.38.0.108) has joined #ceph
[11:27] * nastidon1 (~yuastnav@61TAAAP6K.tor-irc.dnsbl.oftc.net) Quit ()
[11:27] * Zeis (~tokie@212.7.192.148) has joined #ceph
[11:29] * kefu (~kefu@114.92.96.253) Quit (Max SendQ exceeded)
[11:30] * kefu (~kefu@114.92.96.253) has joined #ceph
[11:32] * kefu (~kefu@114.92.96.253) Quit (Max SendQ exceeded)
[11:32] * kefu (~kefu@114.92.96.253) has joined #ceph
[11:38] * EthanL (~lamberet@cce02cs4044-fa12-z.ams.hpecore.net) has joined #ceph
[11:41] * kefu (~kefu@114.92.96.253) Quit (Max SendQ exceeded)
[11:42] * kefu (~kefu@114.92.96.253) has joined #ceph
[11:43] * EthanL (~lamberet@cce02cs4044-fa12-z.ams.hpecore.net) Quit (Read error: Connection reset by peer)
[11:47] * vincepii (~textual@77.245.22.67) has joined #ceph
[11:47] <vincepii> quick question: can the ceph fsid be set to any string with [a-z][A-Z][0-9] and hyphens (-)?
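The fsid is normally a cluster UUID rather than an arbitrary string; a common way to generate one and set it (the UUID below is a placeholder):

    uuidgen                        # e.g. prints a7f64266-0894-4f1e-a635-d0aeaca0e993
    # then in ceph.conf:
    # [global]
    # fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993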
[11:53] * sebastian-w (~quassel@212.218.8.138) has joined #ceph
[11:54] <sebastian-w> Hi anyone ever tried to use QtCreator as a C++ IDE on Linux?
[11:54] * InIMoeK (~InIMoeK@95.170.93.16) has joined #ceph
[11:54] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) Quit (Remote host closed the connection)
[11:56] * Zeis (~tokie@9YSAAAQRU.tor-irc.dnsbl.oftc.net) Quit ()
[11:56] * rraja (~rraja@121.244.87.117) has joined #ceph
[11:57] * yanzheng1 (~zhyan@125.70.23.222) has joined #ceph
[12:00] * yanzheng (~zhyan@125.70.23.222) Quit (Ping timeout: 480 seconds)
[12:03] * kefu (~kefu@114.92.96.253) Quit (Max SendQ exceeded)
[12:04] * nikbor (~n.borisov@admins.1h.com) has left #ceph
[12:04] * kefu (~kefu@114.92.96.253) has joined #ceph
[12:05] * Rosenbluth (~mollstam@watchme.tor-exit.network) has joined #ceph
[12:05] * vincepii (~textual@77.245.22.67) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[12:06] * kefu (~kefu@114.92.96.253) Quit (Max SendQ exceeded)
[12:06] * DanFoster (~Daniel@office.34sp.com) Quit (Remote host closed the connection)
[12:07] * kefu (~kefu@114.92.96.253) has joined #ceph
[12:08] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) has joined #ceph
[12:09] * DanFoster (~Daniel@2a00:1ee0:3:1337:19e5:8f55:d278:fbcf) has joined #ceph
[12:09] * b0e (~aledermue@213.95.25.82) has joined #ceph
[12:10] * penguinRaider (~KiKo@103.6.219.219) Quit (Ping timeout: 480 seconds)
[12:16] * vincepii (~textual@77.245.22.67) has joined #ceph
[12:25] * penguinRaider (~KiKo@103.6.219.219) has joined #ceph
[12:28] * vincepii (~textual@77.245.22.67) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[12:30] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[12:30] * vincepii (~textual@77.245.22.67) has joined #ceph
[12:35] * Rosenbluth (~mollstam@61TAAAP7X.tor-irc.dnsbl.oftc.net) Quit ()
[12:35] * dontron (~mrapple@64.ip-37-187-176.eu) has joined #ceph
[12:38] * vincepii (~textual@77.245.22.67) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[12:38] * vimal (~vikumar@121.244.87.116) Quit (Ping timeout: 480 seconds)
[12:39] * dgurtner (~dgurtner@209.132.186.254) Quit (Ping timeout: 480 seconds)
[12:43] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[12:47] * vimal (~vikumar@121.244.87.116) has joined #ceph
[12:53] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) Quit (Quit: Ex-Chat)
[12:58] * vincepii (~textual@77.245.22.67) has joined #ceph
[13:00] * gregmark (~Adium@68.87.42.115) has joined #ceph
[13:02] * vicente_ (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Quit: Leaving)
[13:03] * dgurtner (~dgurtner@178.197.227.166) has joined #ceph
[13:04] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[13:04] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[13:05] * kawa2014 (~kawa@212.110.41.244) has joined #ceph
[13:05] * dontron (~mrapple@61TAAAP8H.tor-irc.dnsbl.oftc.net) Quit ()
[13:05] * cryptk (~SweetGirl@89.207.129.150) has joined #ceph
[13:08] * oms101 (~oms101@p20030057EA087A00C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[13:16] * dgurtner (~dgurtner@178.197.227.166) Quit (Ping timeout: 480 seconds)
[13:18] * oms101 (~oms101@176.6.118.136) has joined #ceph
[13:19] * dgurtner (~dgurtner@178.197.232.164) has joined #ceph
[13:27] * penguinRaider (~KiKo@103.6.219.219) Quit (Ping timeout: 480 seconds)
[13:33] * vincepii (~textual@77.245.22.67) Quit (Ping timeout: 480 seconds)
[13:35] * cryptk (~SweetGirl@5AEAAAF3U.tor-irc.dnsbl.oftc.net) Quit ()
[13:35] * Scrin (~jacoo@93.115.95.206) has joined #ceph
[13:39] * bene2 (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[13:40] * kefu (~kefu@114.92.96.253) Quit (Max SendQ exceeded)
[13:41] * kefu (~kefu@114.92.96.253) has joined #ceph
[13:44] * kawa2014 (~kawa@212.110.41.244) Quit (Ping timeout: 480 seconds)
[13:45] * penguinRaider (~KiKo@103.6.219.219) has joined #ceph
[13:46] * kefu (~kefu@114.92.96.253) Quit (Max SendQ exceeded)
[13:51] * kefu (~kefu@114.92.96.253) has joined #ceph
[13:52] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[13:55] * valeech (~valeech@pool-108-44-162-111.clppva.fios.verizon.net) has joined #ceph
[13:55] * kefu (~kefu@114.92.96.253) Quit (Max SendQ exceeded)
[13:56] * kefu (~kefu@114.92.96.253) has joined #ceph
[13:56] * truan-wang (~truanwang@116.216.30.51) has joined #ceph
[14:01] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[14:01] * i_m (~ivan.miro@deibp9eh1--blueice1n6.emea.ibm.com) has joined #ceph
[14:05] * Scrin (~jacoo@61TAAAP9W.tor-irc.dnsbl.oftc.net) Quit ()
[14:05] * jcsp (~jspray@fpc101952-sgyl38-2-0-cust21.18-2.static.cable.virginm.net) has joined #ceph
[14:14] * briner (~briner@2001:620:600:1000:5d26:8eaa:97f0:8115) Quit (Read error: Connection reset by peer)
[14:14] * madkiss1 (~madkiss@2001:6f8:12c3:f00f:35ef:2080:a10:fb5c) Quit (Quit: Leaving.)
[14:18] * Racpatel (~Racpatel@2601:87:0:24af::1fbc) has joined #ceph
[14:19] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[14:23] * bniver (~bniver@nat-pool-bos-u.redhat.com) has joined #ceph
[14:27] * nass5 (~fred@l-p-dn-in-12a.lionnois.site.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[14:29] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[14:30] * ira (~ira@nat-pool-bos-t.redhat.com) has joined #ceph
[14:34] * cyphase (~cyphase@000134f2.user.oftc.net) Quit (Ping timeout: 480 seconds)
[14:34] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[14:35] * cyphase (~cyphase@000134f2.user.oftc.net) has joined #ceph
[14:35] * MonkeyJamboree (~puvo@178.32.251.105) has joined #ceph
[14:35] * epicguy (~epicguy@41.164.8.42) has joined #ceph
[14:39] * shaunm (~shaunm@74.83.215.100) has joined #ceph
[14:44] * kefu (~kefu@114.92.96.253) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[14:45] * ira (~ira@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[14:46] * kawa2014 (~kawa@212.110.41.244) has joined #ceph
[14:46] * truan-wang (~truanwang@116.216.30.51) Quit (Read error: Connection timed out)
[14:52] * vbellur (~vijay@71.234.224.255) Quit (Ping timeout: 480 seconds)
[14:55] * joshd1 (~jdurgin@2602:30a:c089:2b0:ec32:9973:529b:736f) has joined #ceph
[14:57] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[14:58] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[14:58] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[15:00] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) Quit (Ping timeout: 480 seconds)
[15:01] * adamcrume (~quassel@2601:647:cb01:f890:98c8:3af6:7a0b:cf5) Quit (Quit: No Ping reply in 180 seconds.)
[15:03] * adamcrume (~quassel@2601:647:cb01:f890:35e5:5801:1678:121f) has joined #ceph
[15:03] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) has joined #ceph
[15:05] * MonkeyJamboree (~puvo@9YSAAAQWH.tor-irc.dnsbl.oftc.net) Quit ()
[15:05] * Rosenbluth (~Solvius@216.230.148.77) has joined #ceph
[15:07] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[15:08] * squizzi (~squizzi@107.13.31.195) has joined #ceph
[15:13] * dgurtner (~dgurtner@178.197.232.164) Quit (Read error: Connection reset by peer)
[15:16] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[15:18] * vimal (~vikumar@121.244.87.116) Quit (Quit: Leaving)
[15:18] * vbellur (~vijay@nat-pool-bos-t.redhat.com) has joined #ceph
[15:19] * mattbenjamin (~mbenjamin@12.118.3.106) has joined #ceph
[15:20] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:26] <valeech> Hello! I have an existing ceph cluster with 3 monitors and 16 OSDs. Currently, the cluster network is on an isolated 10G network, while the public network is on a 1G production network. I have bought new 10G nics that I would like to use for the public network on all hosts involved. My question is, how much of a challenge is it to change the public network addresses on a live cluster? I need to keep the current IPs on the 1G network
[15:26] <valeech> where they are. So I need to add new addresses for the new 10G public network and add monitors and such.
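The client-facing subnet is declared in ceph.conf, but the monitor addresses themselves live in the monmap, so moving mons to the 10G network generally means adding new mons on the new addresses (or editing the monmap) rather than simply renumbering. A minimal sketch of the settings involved, with placeholder subnets:

    [global]
    public network  = 10.10.10.0/24    # new 10G client-facing subnet (placeholder)
    cluster network = 10.10.20.0/24    # existing isolated 10G replication subnet (placeholder)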
[15:27] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) Quit (Ping timeout: 480 seconds)
[15:28] * isti (~isti@fw.alvicom.hu) Quit (Ping timeout: 480 seconds)
[15:29] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) has joined #ceph
[15:30] * blynch (~blynch@vm-nat.msi.umn.edu) has joined #ceph
[15:34] * yanzheng1 (~zhyan@125.70.23.222) Quit (Quit: This computer has gone to sleep)
[15:35] * Rosenbluth (~Solvius@26XAAAGXY.tor-irc.dnsbl.oftc.net) Quit ()
[15:36] * Jeffrey4l (~Jeffrey@110.252.46.217) has joined #ceph
[15:36] * jargonmonk (jargonmonk@00022354.user.oftc.net) Quit (Quit: jargonmonk)
[15:40] * gregmark (~Adium@68.87.42.115) has joined #ceph
[15:42] * jargonmonk (jargonmonk@00022354.user.oftc.net) has joined #ceph
[15:43] * dgurtner (~dgurtner@5.31.144.159) has joined #ceph
[15:48] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) Quit (Ping timeout: 480 seconds)
[15:49] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) has joined #ceph
[15:52] * kawa2014 (~kawa@212.110.41.244) Quit (Ping timeout: 480 seconds)
[15:53] * kawa2014 (~kawa@31.159.227.183) has joined #ceph
[15:54] <derjohn_mob> Hello, I have an rbd with a broken filesystem, and I suspect one OSD is faulty. Can I initiate a scrub or repair immediately? Can I scrub one particular rbd ?
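Scrubs and repairs can be triggered per PG or per OSD from the standard CLI; mapping a suspect RBD data object to its PG first narrows the search (the pool and object names below are placeholders):

    ceph osd map rbd rbd_data.1234.0000000000000000   # which PG and OSDs hold this object
    ceph pg deep-scrub <pgid>                          # deep-scrub one PG immediately
    ceph pg repair <pgid>                              # ask the primary to repair inconsistencies found
    ceph osd deep-scrub <id>                           # deep-scrub everything on one OSD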
[15:54] * ngoswami` (~ngoswami@121.244.87.116) has joined #ceph
[15:54] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[15:55] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[15:55] * truan-wang (~truanwang@116.216.30.51) has joined #ceph
[15:59] * jargonmonk (jargonmonk@00022354.user.oftc.net) Quit (Ping timeout: 480 seconds)
[16:00] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[16:00] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[16:01] * dgurtner_ (~dgurtner@178.197.232.164) has joined #ceph
[16:01] * epicguy (~epicguy@41.164.8.42) Quit (Quit: Leaving)
[16:03] * dgurtner (~dgurtner@5.31.144.159) Quit (Ping timeout: 480 seconds)
[16:04] * jarrpa (~jarrpa@67-4-129-67.mpls.qwest.net) has joined #ceph
[16:05] * Sophie1 (~aleksag@5AEAAAF8O.tor-irc.dnsbl.oftc.net) has joined #ceph
[16:07] * kawa2014 (~kawa@31.159.227.183) Quit (Ping timeout: 480 seconds)
[16:09] * jargonmonk (jargonmonk@00022354.user.oftc.net) has joined #ceph
[16:10] * Italux (~Italux@186.202.97.3) Quit (Read error: Connection reset by peer)
[16:10] <rkeene> I'm trying to compile Ceph 10.2.2... seems to want to run "pip" (which I won't have) and "virtualenv" (which I don't currently have -- seems like a bad idea to have, anyway) -- any way to tell it to stop doing that?
[16:10] * Italux (~Italux@186.202.97.3) has joined #ceph
[16:12] * rwheeler (~rwheeler@nat-pool-bos-u.redhat.com) has joined #ceph
[16:13] <derjohn_mob> rkeene, apt-get install python-virtualenv python-pip ... on Debian/ubuntu. Which OS?
[16:14] * kefu (~kefu@45.32.49.168) has joined #ceph
[16:14] <rkeene> It's a Linux distribution for which I am the maintainer.
[16:14] * dgurtner_ (~dgurtner@178.197.232.164) Quit (Ping timeout: 480 seconds)
[16:14] <rkeene> And no, I'm not going to install pip. I might install virtualenv -- but it's a silly idea to do so.
[16:16] <TheSov> what do you guys think a sweet spot for cost is per osd?
[16:17] <devicenull> whatever gets you lowest $/GB!
[16:17] <TheSov> a server with 36 drives cost around 11k
[16:17] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:17] <TheSov> so about 300 dollars per osd
[16:17] * dgurtner (~dgurtner@178.197.234.53) has joined #ceph
[16:17] <TheSov> smaller servers tend to be less economical
[16:17] <devicenull> $/OSD is probably not a good metric
[16:17] <TheSov> oh? whys that?
[16:18] <devicenull> what if using bigger/smaller drives gives you a better overall price?
[16:18] <devicenull> $/OSD wouldn't show that
[16:18] <TheSov> well, the price of drives is very flexible isn't it? some will go for smaller faster drives, and others for larger slower drives
[16:19] <TheSov> need more iops, add osd. need more space, add osd. so in both scenarios increasing capacity is all about adding more osds
[16:19] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[16:20] <devicenull> if you're iops bound it's a different thing, yea
[16:20] <devicenull> we're mainly space bound, so that's what we look at
[16:20] <TheSov> so $/osd seems to be a good metric when thinking about total capacity
[16:21] <TheSov> you can add the cost of disk to your cost per osd metric to get your actual costs
[16:21] <TheSov> $300 per osd + $249 for a 6TB disk makes about $550 per osd with the disk included
[16:21] * truan-wang (~truanwang@116.216.30.51) Quit (Ping timeout: 480 seconds)
[16:22] <rkeene> I guess I'll just patch the Makefiles to remove all this pip/virtualenv non-sense
[16:22] <TheSov> divide that by your pool size and you get your usable space. so 6/3 = 2, so 2 TB for $550, which is still way cheaper than any SAN out there
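The same arithmetic in one place, using the prices quoted above (illustrative numbers only):

    osd_cost=550                           # $ per OSD including a 6 TB disk
    raw_tb=6
    replicas=3
    usable_tb=$((raw_tb / replicas))       # 2 TB usable per OSD at size=3
    echo "\$$((osd_cost / usable_tb)) per usable TB"   # -> $275 per usable TB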
[16:25] * swami1 (~swami@49.38.0.108) Quit (Quit: Leaving.)
[16:25] * oms101 (~oms101@176.6.118.136) Quit (Read error: Connection reset by peer)
[16:26] * dgurtner (~dgurtner@178.197.234.53) Quit (Remote host closed the connection)
[16:29] <TheSov> im trying to build/optimize a single osd server, to buy 1 at a time. it needs 4 gigs of ram, 3 nics, 2 sata ports- 1 ssd- 1 spinning disk or combination therein, x86_64 and use like an m.2 boot disk
[16:29] <TheSov> its coming out way higher per osd than 300
[16:31] <rkeene> Well, of course it does... why would you expect it to even be close with that amount of overhead ?
[16:31] <TheSov> i figured the sbc boards were getting cheap enough to make it worth it
[16:32] * kefu_ (~kefu@114.92.96.253) has joined #ceph
[16:32] <TheSov> that way i can wholesale you some of the finished product and you can retail it to consumers
[16:32] <rkeene> But you're still having to duplicate everything you're sharing in the big-box case
[16:32] <TheSov> not really
[16:32] <TheSov> if this works im going custom stackable case
[16:33] * kefu (~kefu@45.32.49.168) Quit (Read error: Connection reset by peer)
[16:33] <TheSov> with a possible custom UPS module
[16:33] * nass5 (~fred@l-p-dn-in-12a.lionnois.site.univ-lorraine.fr) has joined #ceph
[16:34] <TheSov> it just depends on how low i can drive these costs
[16:34] * devster (~devsterkn@2001:41d0:1:a3af::1) has joined #ceph
[16:34] <rkeene> Power, CPUs, I/O bus, all ancillary circuitry, enclosure, finish -- all shared in the big box case and duplicated in your case
[16:34] <TheSov> see with BlueStore you get more bang for your buck, but these traditional osd's are fucking it all up
[16:34] * kutija (~kutija@89.216.27.139) has joined #ceph
[16:35] * Sophie1 (~aleksag@5AEAAAF8O.tor-irc.dnsbl.oftc.net) Quit ()
[16:35] <TheSov> well my purposes are for modular builds, not everyone is going to buy 3 or more 36 drive super micro systems
[16:36] <TheSov> its more for soho / medium business deals
[16:36] * bene3 (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[16:37] <TheSov> i can sell you 3 monitor appliances, and piecemeal osd's better than i can sell you a rack full of servers
[16:37] * dgurtner (~dgurtner@178.197.236.25) has joined #ceph
[16:37] * bene2 (~bene@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[16:37] <TheSov> do you disagree?
[16:37] <rkeene> Right, they would probably buy 3 12-bay systems partially populated with disks, at a higher cost per disk but lower overall cost than the larger systems or single-osd system
[16:38] <TheSov> thats what im trying to drive the price down to 300 per osd
[16:38] <TheSov> which is what bigbox systems are
[16:38] <rkeene> But the medium-sized system isn't much more
[16:38] <TheSov> i agree
[16:39] <rkeene> So buying a much more expensive system just doesn't have any appeal
[16:39] <TheSov> now to tell the graphic artist guy he needs to get 3 loud 12 bay servers in his living room
[16:39] * darkid1 (~Snowman@atlantic850.dedicatedpanel.com) has joined #ceph
[16:40] <TheSov> also think of the flexibility
[16:40] <rkeene> He doesn't -- he just needs access to storage, not to be physically near it
[16:40] <TheSov> if i can get the price down to the same as the big box, would you still buy bigbox?
[16:40] * oms101 (~oms101@ipv6-c6d987fffe4339a1.clients.hamburg.freifunk.net) has joined #ceph
[16:40] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) Quit (Ping timeout: 480 seconds)
[16:41] <rkeene> The flexibility comes at a cost: 1. Higher real cost; 2. Higher maintenance cost (each physical unit duplicates all the common resources, so you have 36x PSUs that can fail instead of 3x); 3. Higher configuration cost since you need to manually create failure domains (or have additional hardware to work that out, which just adds material cost)
[16:42] <TheSov> so ill take that as a no
[16:42] <rkeene> Of course -- it would be cheaper and easier to maintain. A lot of systems with a tiny number of disks is just a hassle that nobody wants
[16:42] * ccourtaut (~ccourtaut@157.173.31.93.rev.sfr.net) has joined #ceph
[16:43] * xarses (~xarses@64.124.158.100) has joined #ceph
[16:43] <TheSov> I was just thinking it would be nice for small business to get a high end storage system without large up front costs
[16:43] <rkeene> They'll still have a lot of up-front cost since they need a bunch of OSDs to make Ceph perform well anyway
[16:44] <TheSov> if i just need some capacity, i just get a box throw in a 6tb disk plug it in and bam i got what i need without breaking the bank
[16:44] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[16:44] * jcsp (~jspray@fpc101952-sgyl38-2-0-cust21.18-2.static.cable.virginm.net) Quit (Quit: Ex-Chat)
[16:44] <rkeene> Well, no -- since that 6TB would be too slow to be useful on its own. You have to have the rest of the cluster there too.
[16:44] <TheSov> oh yeah dont get me wrong, this would be a supplemental device
[16:44] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) has joined #ceph
[16:45] <TheSov> but also a starter device
[16:45] <rkeene> And if you have to have the rest of the cluster, buying a bigger box with empty bays would be cheaper
[16:45] <rkeene> It's not useful as a starter device because it'd be too slow.
[16:45] <TheSov> well it has an ssd
[16:46] <TheSov> i dont know how slow it would be until i have my testbed
[16:46] <rkeene> It's real slow.
[16:46] * ira (~ira@nat-pool-bos-u.redhat.com) has joined #ceph
[16:46] <TheSov> i have a home cluster now, i get not bad speeds and im using laptop hard drives
[16:46] <TheSov> wd blues
[16:46] <TheSov> they are slow as shit but i get decent performance out of 3 boxes
[16:47] <rkeene> I try to fix these issues in my Ceph replacement, but it'll be a long time before it's usable (but at least it has a better design)
[16:47] <TheSov> 3 boxes with 3 osd's each and 3 ssds
[16:47] <rkeene> What are you using to test the performance ? rados bench is useless, so... mapping an rbd and putting a filesystem on it ?
[16:47] <TheSov> yes
[16:48] <TheSov> i got a proxmox cluster
[16:48] <TheSov> i put 1 vm per host
[16:48] <TheSov> and let them hammer it with fio
[16:48] <TheSov> read speeds are great, write is where performance dips
[16:49] <TheSov> write speeds i get about 65 megs per second
[16:49] <rkeene> Right, because reads can come from librbd cache.
[16:50] <rkeene> 65MB/sec is slower than a single disk, and certainly slower than your SSD.
[16:50] * vata1 (~vata@207.96.182.162) has joined #ceph
[16:50] <TheSov> well across the board
[16:50] <TheSov> 3 vm's all doing 65
[16:50] <TheSov> but its all connected via 1gig, so im not quite sure where the bottleneck lies
[16:51] <TheSov> oh and thats on a 2 way replication not 3
[16:51] <TheSov> the triple replication on write is something closer to 33 megs per second
[16:51] <TheSov> im pretty sure that one is bottlenecked by the network
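A reproducible way to get numbers like these is a direct fio run against the mounted RBD with a fixed block size and queue depth, so results stay comparable across replication settings (the path and sizes below are placeholders):

    fio --name=rbdwrite --filename=/mnt/rbdvol/fio.test \
        --rw=write --bs=4M --size=4G \
        --ioengine=libaio --direct=1 --iodepth=16 --numjobs=1

On a single 1 GbE link (roughly 110-115 MB/s of wire throughput), the write numbers quoted above are consistent with the network rather than the disks being the limit once replication traffic is added.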
[16:53] * MentalRay_ (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[16:58] * MentalRay_ (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit (Read error: Connection reset by peer)
[16:59] * MentalRay_ (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[16:59] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit (Ping timeout: 480 seconds)
[16:59] * joshd1 (~jdurgin@2602:30a:c089:2b0:ec32:9973:529b:736f) Quit (Quit: Leaving.)
[17:01] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[17:03] * vbellur (~vijay@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[17:03] * `10` (~10@69.169.91.14) Quit (Ping timeout: 480 seconds)
[17:04] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[17:05] * wushudoin (~wushudoin@38.99.12.237) has joined #ceph
[17:07] * kutija (~kutija@89.216.27.139) Quit (Quit: Textual IRC Client: www.textualapp.com)
[17:09] * rogst (~MonkeyJam@146.0.43.126) has joined #ceph
[17:09] * darkid1 (~Snowman@9YSAAAQ0N.tor-irc.dnsbl.oftc.net) Quit ()
[17:15] * vbellur (~vijay@nat-pool-bos-u.redhat.com) has joined #ceph
[17:17] * TMM (~hp@185.5.121.201) Quit (Quit: Ex-Chat)
[17:19] * mykola (~Mikolaj@91.245.74.212) has joined #ceph
[17:19] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) has joined #ceph
[17:21] * rakeshgm (~rakesh@121.244.87.117) Quit (Ping timeout: 480 seconds)
[17:26] * oms101 (~oms101@ipv6-c6d987fffe4339a1.clients.hamburg.freifunk.net) Quit (Ping timeout: 480 seconds)
[17:28] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) Quit (Ping timeout: 480 seconds)
[17:29] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) has joined #ceph
[17:30] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:31] * rakeshgm (~rakesh@121.244.87.118) has joined #ceph
[17:33] * hybrid512 (~walid@195.200.189.206) Quit (Remote host closed the connection)
[17:33] * reed (~reed@184-23-0-196.dsl.static.fusionbroadband.com) has joined #ceph
[17:34] * oms101 (~oms101@p20030057EA087A00C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[17:35] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Remote host closed the connection)
[17:36] * isti (~isti@BC06E559.dsl.pool.telekom.hu) has joined #ceph
[17:37] * cathode (~cathode@50.232.215.114) has joined #ceph
[17:38] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[17:39] * rogst (~MonkeyJam@61TAAAQF5.tor-irc.dnsbl.oftc.net) Quit ()
[17:39] * notarima (~Moriarty@static-ip-85-25-103-119.inaddr.ip-pool.com) has joined #ceph
[17:48] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Quit: Bye guys!)
[17:49] * InIMoeK (~InIMoeK@95.170.93.16) Quit (Ping timeout: 480 seconds)
[17:51] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[17:52] * sudocat (~dibarra@192.185.1.20) Quit (Remote host closed the connection)
[17:54] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[17:55] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[17:55] * skinnejo (~skinnejo@173-27-199-104.client.mchsi.com) has joined #ceph
[17:55] * skinnejo (~skinnejo@173-27-199-104.client.mchsi.com) Quit ()
[17:56] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[17:59] * rakeshgm (~rakesh@121.244.87.118) Quit (Ping timeout: 480 seconds)
[18:02] * davidzlap (~Adium@rrcs-74-87-213-28.west.biz.rr.com) has joined #ceph
[18:06] * squizzi (~squizzi@107.13.31.195) Quit (Ping timeout: 480 seconds)
[18:08] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[18:09] * notarima (~Moriarty@26XAAAG27.tor-irc.dnsbl.oftc.net) Quit ()
[18:09] * debian112 (~bcolbert@173-164-167-196-SFBA.hfc.comcastbusiness.net) Quit (Remote host closed the connection)
[18:11] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) has joined #ceph
[18:12] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[18:12] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[18:14] * dnunez (~dnunez@nat-pool-bos-t.redhat.com) has joined #ceph
[18:15] * ngoswami (~ngoswami@121.244.87.116) Quit ()
[18:17] * kefu_ is now known as kefu
[18:17] * EthanL (~lamberet@cce02cs4042-fa12-z.ams.hpecore.net) has joined #ceph
[18:19] * squizzi (~squizzi@107.13.31.195) has joined #ceph
[18:19] * jarrpa (~jarrpa@67-4-129-67.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[18:19] * blizzow (~jburns@50.243.148.102) has left #ceph
[18:20] * jarrpa (~jarrpa@2602:43:489:2100:eab1:fcff:fe47:f680) has joined #ceph
[18:21] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[18:22] * derjohn_mob (~aj@46.189.28.50) Quit (Ping timeout: 480 seconds)
[18:23] <TheSov> someone needs to do something about the cost of 10g ethernet. shit is expensive
[18:24] <zdzichu> 2.5G and 5G are supposed to be cheaper
[18:24] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[18:24] <zdzichu> BTW, an 8-port 10G switch can be had for around $700
[18:24] <zdzichu> is it expensive?
[18:27] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[18:27] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[18:28] * kefu is now known as kefu|afk
[18:28] * dgurtner (~dgurtner@178.197.236.25) Quit (Ping timeout: 480 seconds)
[18:29] * garphy is now known as garphy`aw
[18:29] * squizzi_ (~squizzi@107.13.31.195) has joined #ceph
[18:29] * MentalRa_ (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[18:30] * ngoswami` (~ngoswami@121.244.87.116) Quit (Quit: Coyote finally caught me)
[18:30] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) has joined #ceph
[18:30] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[18:31] <zdzichu> but it's starting to trickle down to consumer products, so I predict rapid price fall soon: http://www.anandtech.com/show/10501/consumer-10gbase-t-options-motherboards-with-10g-builtin
[18:31] * cathode (~cathode@50.232.215.114) Quit (Quit: Leaving)
[18:32] * MentalR__ (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[18:32] * rraja (~rraja@121.244.87.117) Quit (Quit: Leaving)
[18:32] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[18:33] <Sketch> consumer 10G seems a bit overkill to me
[18:33] <zdzichu> it's a bit pointless, true
[18:33] * MentalRay_ (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit (Ping timeout: 480 seconds)
[18:34] <zdzichu> but wifi exceeded gigabit speeds some time ago, that's why 2.5G was introduced
[18:34] * kefu|afk is now known as kefu
[18:34] <Sketch> i've never even heard of 2.5G
[18:35] <Sketch> wifi theoretically exceeded 1G, but in the real world i doubt anyone will see that
[18:36] * rakeshgm (~rakesh@121.244.87.117) Quit (Quit: Leaving)
[18:36] * EthanL (~lamberet@cce02cs4042-fa12-z.ams.hpecore.net) Quit (Ping timeout: 480 seconds)
[18:36] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Ping timeout: 480 seconds)
[18:38] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Coyote finally caught me)
[18:38] * MentalRa_ (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Ping timeout: 480 seconds)
[18:38] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[18:41] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[18:41] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:42] * ngoswami_ (~ngoswami@121.244.87.116) has joined #ceph
[18:44] * AotC (~Harryhy@37.59.72.135) has joined #ceph
[18:46] * MentalR__ (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Ping timeout: 480 seconds)
[18:47] <zdzichu> http://www.broadcom.com/blog/network-infrastructure/meet-mgbase-t-new-2-55-gbps-ethernet-standard-eases-bottlenecked-enterprise-wireless-networks/
[18:47] <zdzichu> 802.11ac wave3 is 10GBps, IIRC
[18:50] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) Quit (Quit: Leaving.)
[18:51] * kefu is now known as kefu|afk
[18:56] * i_m (~ivan.miro@deibp9eh1--blueice1n6.emea.ibm.com) Quit (Read error: Connection reset by peer)
[18:57] * DanFoster (~Daniel@2a00:1ee0:3:1337:19e5:8f55:d278:fbcf) Quit (Quit: Leaving)
[19:01] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[19:02] * jermudgeon_ (~jhaustin@31.207.56.59) has joined #ceph
[19:07] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Ping timeout: 480 seconds)
[19:07] * jermudgeon_ is now known as jermudgeon
[19:09] * squizzi_ (~squizzi@107.13.31.195) Quit (Quit: bye)
[19:11] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[19:14] * AotC (~Harryhy@26XAAAG5G.tor-irc.dnsbl.oftc.net) Quit ()
[19:14] * MKoR (~Kyso_@158.69.194.36) has joined #ceph
[19:17] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[19:19] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) has joined #ceph
[19:26] * jermudgeon (~jhaustin@31.207.56.59) Quit (Quit: jermudgeon)
[19:27] * EthanL (~lamberet@cce02cs4042-fa12-z.ams.hpecore.net) has joined #ceph
[19:30] * BrianA (~BrianA@fw-rw.shutterfly.com) has joined #ceph
[19:34] * gregmark (~Adium@68.87.42.115) has joined #ceph
[19:35] * ngoswami_ (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[19:35] * ngoswami_ (~ngoswami@121.244.87.116) has joined #ceph
[19:36] * debian112 (~bcolbert@64.235.157.198) has joined #ceph
[19:38] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[19:38] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[19:38] * EthanL (~lamberet@cce02cs4042-fa12-z.ams.hpecore.net) Quit (Ping timeout: 480 seconds)
[19:40] * kefu|afk is now known as kefu
[19:40] * zirpu (~zirpu@00013c46.user.oftc.net) Quit (Quit: leaving)
[19:40] * debian112 (~bcolbert@64.235.157.198) Quit ()
[19:41] * debian112 (~bcolbert@64.235.157.198) has joined #ceph
[19:42] * markl (~mark@knm.org) has joined #ceph
[19:44] * MKoR (~Kyso_@26XAAAG6B.tor-irc.dnsbl.oftc.net) Quit ()
[19:44] * Szernex (~Diablodoc@freedom.ip-eend.nl) has joined #ceph
[19:46] * blizzow (~jburns@50.243.148.102) has joined #ceph
[19:47] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[19:47] * ntpttr___ (~ntpttr@134.134.139.77) Quit (Remote host closed the connection)
[19:47] * ntpttr (~ntpttr@134.134.139.77) has joined #ceph
[19:47] * rwheeler (~rwheeler@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[19:48] <blizzow> I'm planning on running a small ceph (jewel) installation with just three OSDs, each OSD will have ~16TB storage. How much RAM/CPU should my MON servers have and what should my OSD servers have?
[19:49] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Quit: jermudgeon)
[19:49] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[19:54] * Jeffrey4l (~Jeffrey@110.252.46.217) Quit (Ping timeout: 480 seconds)
[19:55] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[19:56] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Quit: jermudgeon)
[19:56] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[19:57] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[19:59] * `10` (~10@69.169.91.14) has joined #ceph
[19:59] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit ()
[19:59] * valeech (~valeech@pool-108-44-162-111.clppva.fios.verizon.net) Quit (Quit: valeech)
[20:01] * swami1 (~swami@27.7.165.198) has joined #ceph
[20:01] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[20:03] * vata (~vata@cable-173.246.3-246.ebox.ca) Quit (Ping timeout: 480 seconds)
[20:05] * ngoswami_ (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[20:05] * kefu is now known as kefu|afk
[20:08] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) has joined #ceph
[20:10] * visbits (~textual@8.29.138.28) has joined #ceph
[20:13] * `10` (~10@69.169.91.14) Quit (Ping timeout: 480 seconds)
[20:14] * vata (~vata@cable-173.246.3-246.ebox.ca) has joined #ceph
[20:14] * Szernex (~Diablodoc@26XAAAG7J.tor-irc.dnsbl.oftc.net) Quit ()
[20:15] * ira (~ira@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[20:19] * sto (~sto@121.red-2-139-229.staticip.rima-tde.net) Quit (Quit: leaving)
[20:22] * sto (~sto@121.red-2-139-229.staticip.rima-tde.net) has joined #ceph
[20:22] <kklimonda> do read/write speeds and op/s, as reported by ceph status -w, take into account replication, or are they raw numbers, and have to be further processed?
[20:23] * jargonmonk (jargonmonk@00022354.user.oftc.net) has left #ceph
[20:35] <TheSov> raw number
[20:35] <TheSov> no replication
[20:38] * t4nk190 (~oftc-webi@mintzer.imp.fu-berlin.de) has joined #ceph
[20:39] * t4nk643 (~oftc-webi@mintzer.imp.fu-berlin.de) has joined #ceph
[20:43] * swami1 (~swami@27.7.165.198) Quit (Quit: Leaving.)
[20:44] * skney (~vegas3@176.10.99.204) has joined #ceph
[20:46] * t4nk190 (~oftc-webi@mintzer.imp.fu-berlin.de) Quit (Ping timeout: 480 seconds)
[20:47] * rendar (~I@host249-177-dynamic.37-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[20:55] <TheSov> there's just something so nice when you can resize your vm disks by 1tb and grow the filesystem without shutting down or rebooting. i love ceph, i love proxmox, and love not having partitions on my rbd's
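The online-resize workflow being described is roughly the following (the pool/image name and size are placeholders; --size is given in MB here, and resize2fs on the bare device works because there is no partition table):

    rbd resize --size 2097152 rbd/vm-disk-1    # grow the image to 2 TB
    # inside the guest, once the larger disk is visible:
    resize2fs /dev/vdb                         # grow an ext4 filesystem that spans the whole device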
[20:59] * kefu|afk (~kefu@114.92.96.253) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[21:00] * mykola (~Mikolaj@91.245.74.212) Quit (Read error: Connection reset by peer)
[21:01] * mykola (~Mikolaj@91.245.74.212) has joined #ceph
[21:06] * blizzow (~jburns@50.243.148.102) Quit (Ping timeout: 480 seconds)
[21:06] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) Quit (Ping timeout: 480 seconds)
[21:07] * karnan (~karnan@106.51.137.148) has joined #ceph
[21:11] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) has joined #ceph
[21:12] * rendar (~I@host249-177-dynamic.37-79-r.retail.telecomitalia.it) has joined #ceph
[21:14] * skney (~vegas3@9YSAAAQ8N.tor-irc.dnsbl.oftc.net) Quit ()
[21:14] * valeech (~valeech@static-71-171-91-48.clppva.fios.verizon.net) has joined #ceph
[21:15] <nils_> TheSov, I actually have partitions, it's a bit more elaborate but still works
[21:15] <TheSov> i have dumped partitions wherever i can
[21:15] <nils_> TheSov, it seems that grub in some cases didn't work without them.
[21:15] <TheSov> the only disk i have partitions on is the boot disk
[21:15] <TheSov> indeed nils
[21:15] <TheSov> they need to fix that
[21:16] <nils_> I had loads of trouble with grub-install being too smart for its own good
[21:16] * valeech (~valeech@static-71-171-91-48.clppva.fios.verizon.net) Quit ()
[21:16] * EthanL (~lamberet@cce02cs4045-fa12-z.ams.hpecore.net) has joined #ceph
[21:18] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[21:18] * blizzow (~jburns@50.243.148.102) has joined #ceph
[21:21] * EthanL (~lamberet@cce02cs4045-fa12-z.ams.hpecore.net) Quit (Read error: Connection reset by peer)
[21:21] * davidzlap (~Adium@rrcs-74-87-213-28.west.biz.rr.com) Quit (Quit: Leaving.)
[21:32] * bniver (~bniver@nat-pool-bos-u.redhat.com) Quit (Remote host closed the connection)
[21:41] * valeech (~valeech@static-71-171-91-48.clppva.fios.verizon.net) has joined #ceph
[21:47] * valeech (~valeech@static-71-171-91-48.clppva.fios.verizon.net) Quit (Quit: valeech)
[21:53] <nils_> I ended up writing an exact grub boot sector + the stage1 into partition number 1
[21:54] * karnan (~karnan@106.51.137.148) Quit (Quit: Leaving)
[22:02] * mykola (~Mikolaj@91.245.74.212) Quit (Quit: away)
[22:11] * borei1 (~dan@216.13.217.230) has joined #ceph
[22:11] <borei1> hi all
[22:11] <borei1> quick question
[22:12] <borei1> can I assign arbitrary indexes to OSDs
[22:12] <borei1> like osd.00 ?
[22:12] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[22:12] <borei1> or do I have to follow "ceph osd create"?
[22:12] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[22:13] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[22:16] <T1> blizzow: 16TB per OSD seems .. excessive
[22:16] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[22:22] * brians (~brian@80.111.114.175) Quit (Quit: Textual IRC Client: www.textualapp.com)
[22:25] * lx0 is now known as lxo
[22:28] * hellertime (~Adium@pool-71-174-154-26.bstnma.east.verizon.net) has joined #ceph
[22:29] * hellertime1 (~Adium@pool-71-174-154-26.bstnma.east.verizon.net) has joined #ceph
[22:29] * hellertime (~Adium@pool-71-174-154-26.bstnma.east.verizon.net) Quit (Read error: Connection reset by peer)
[22:30] <The_Ball> is this OSD crash of interest to anyone? http://pastebin.com/giS0wtmx
[22:31] * dnunez is now known as dnunez-gym
[22:32] <blizzow> T1: Why is that excessive?
[22:34] <T1> blizzow: an OSD should only have a single disk to control - 16TB seems like something along the lines of an LVM device or some SAN device of sorts
[22:34] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[22:35] <blizzow> Just a hardware RAID controller on the machine. It's exposed to the OSD as a single disk.
[22:35] <T1> why raid?
[22:36] <T1> it's not exactly along the lines of what's recommended
[22:36] <T1> .. and since you asked, it seems like you should stay within the recommendations.. :)
[22:36] * vbellur (~vijay@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[22:38] <blizzow> The RAID controller is set up as JBOD. I'm essentially offloading lots of disk control to hardware. Why set up an individual OSD for each disk? It seems silly to manage multiple OSDs per machine when I can let the hardware manage the disks.
[22:38] <BranchPredictor> because if a disk within the hardware array fails, you're screwed
[22:38] <T1> because it's counterproductive to what ceph does
[22:39] <blizzow> BranchPredictor: isn't that the point of the replication among separate OSDs?
[22:39] <BranchPredictor> when an osd (and/or its drive) fails, you'll be fine as long as you don't mark your pool(s) with size=1 (replica factor=1)
[22:39] <T1> .. it seems that you haven't read the first parts of why ceph is good at protecting its data
[22:40] <T1> hiding disks behind raid controllers is doable, but it is not recommended and you are in for a very rough ride
[22:43] <Sketch> one of the advantages of ceph is that you don't need raid controllers
[22:44] <blizzow> BranchPredictor says I'm hosed if a disk goes down in my JBOD and T1 says ceph protects data well. I'm not sure which I should take as advice.
[22:44] <BranchPredictor> both.
[22:44] <T1> yes, both..
[22:45] <T1> if you have a node (=server in ceph) with 16 disks available to store data you have 16 OSDs in a single node
[22:45] * valeech (~valeech@pool-108-44-162-111.clppva.fios.verizon.net) has joined #ceph
[22:45] <Sketch> if you have 2 JBODs and each one is an OSD, ceph will protect you if one fails.
[22:45] <T1> if you have a node with 32 disks, you have 32 OSDs
[22:45] <T1> etc etc etc..
[22:47] <Sketch> but for each disk you have in a JBOD, the chances of that JBOD failing are that much higher
[22:48] <BranchPredictor> exactly.
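The point about JBOD fragility is just probability: with N disks behind a single OSD, the chance that at least one member disk dies (taking the whole 16TB OSD with it) grows with N. A small illustrative sketch; the 3% annual per-disk failure rate is an assumed figure, not a measurement:

    # Chance that at least one of N disks fails within a year, given an
    # assumed per-disk annual failure rate. Illustrative numbers only.
    def p_any_disk_failure(per_disk_afr, disks):
        return 1.0 - (1.0 - per_disk_afr) ** disks

    if __name__ == "__main__":
        afr = 0.03  # assumed 3% annual failure rate per disk
        for n in (1, 10, 16):
            print("%2d disks: %.2f" % (n, p_any_disk_failure(afr, n)))
        # 1 disk ~0.03, 10 disks ~0.26, 16 disks ~0.39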
[22:51] <blizzow> Seems like a pain in the c*ck to manage 30 OSDs when I can have three and know that if one goes down, "ceph is good at protecting data".
[22:51] * analbeard (~shw@host109-149-32-128.range109-149.btcentralplus.com) has joined #ceph
[22:51] <T1> well that's the problem
[22:51] <T1> with 30 OSDs your data is reasonably safe
[22:51] * garphy`aw is now known as garphy
[22:52] <BranchPredictor> besides
[22:52] <T1> with just 3 you could just as well run a 3-disk raid5
[22:52] <blizzow> T1, not really, I'm hosting all the OSDs on the same box. Something blows on the motherboard, and I lose 10 OSDs all at once.
[22:52] <T1> only one node?
[22:52] <BranchPredictor> if one of those 30 osds fails for whatever reason, only data from that osd will get moved around.
[22:52] <T1> then don't use ceph at all
[22:52] <T1> use even
[22:53] <BranchPredictor> if you make it 3 osds and one fails, you'll face ceph attempting to shuffle the entire 16TB of data.
[22:54] <T1> if you do not need the HA-stuff, protection of data, multiple copies distributed across multiple servers and want to place everything within a single node, I cannot see a single reason to use ceph
[22:54] <blizzow> BranchPredictor: I do see the logic in the fact that if one drive fails, I'm looking at a huge amount of data shuffling.
[22:55] * noahw (~noahw@eduroam-169-233-225-12.ucsc.edu) has joined #ceph
[22:55] <BranchPredictor> blizzow: yeah, but you tell me which is better - to shuffle 1.6TB or 16TB?
[22:59] <borei1> sorry that I'm barging into your discussion - what is a reasonable number of OSDs per node? Too many OSDs will push CPU usage up; too few and there's no point in using ceph. I'm targeting 4-6 OSDs per box.
[23:00] <blizzow> BranchPredictor: 1.6TB over 10GbE = 22 minutes vs. 16TB over 10GbE = 4 hours and 22 minutes.
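A sketch of that transfer-time arithmetic, for anyone who wants to plug in their own numbers. At the 10GbE line rate of 1.25 GB/s the figures come out at roughly 21 minutes and 3.6 hours; the quoted 4 h 22 min corresponds to an assumed effective rate of about 1 GB/s. Real recovery also pulls from many OSDs in parallel and is throttled by the OSD backfill settings, so treat this as a rough estimate only:

    # Rough rebuild-time arithmetic: bytes to move divided by an assumed
    # network rate. Figures below are illustrative, not measurements.
    def rebuild_hours(data_tb, link_gbit=10.0, efficiency=1.0):
        bytes_total = data_tb * 1e12                   # decimal TB
        bytes_per_sec = link_gbit * 1e9 / 8.0 * efficiency
        return bytes_total / bytes_per_sec / 3600.0

    if __name__ == "__main__":
        for tb in (1.6, 16.0):
            print("%5.1f TB -> %4.2f h at line rate, %4.2f h at 80%% efficiency"
                  % (tb, rebuild_hours(tb), rebuild_hours(tb, efficiency=0.8)))
        # 1.6 TB -> 0.36 h (~21 min); 16 TB -> 3.56 h at line rate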
[23:01] * dgurtner (~dgurtner@5.32.72.140) has joined #ceph
[23:03] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) Quit (Ping timeout: 480 seconds)
[23:03] * garphy is now known as garphy`aw
[23:03] <blizzow> I'm not saying it's trivial, but it's not exactly a terrible downside.
[23:04] <T1> borei1: 4-6 is fine
[23:05] * T1 (~the_one@5.186.54.143) Quit (Read error: Connection reset by peer)
[23:11] * vbellur (~vijay@71.234.224.255) has joined #ceph
[23:12] * T1 (~the_one@5.186.54.143) has joined #ceph
[23:13] <T1> hmpf.. my machine decided to reboot after updates..
[23:14] * Kaervan (~Helleshin@176.10.99.209) has joined #ceph
[23:14] * squizzi (~squizzi@107.13.31.195) Quit (Ping timeout: 480 seconds)
[23:17] <BranchPredictor> blizzow: except that you'll have to wait twice
[23:17] <BranchPredictor> once for recovery, again when you fix the array/disk.
[23:17] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[23:17] <rkeene> rbd-mirror from Ceph 10.2.2 is failing to link for me
[23:19] <blizzow> BranchPredictor: Definitely understood. Just to be argumentative, if I lose a whole server (motherboard, RAM, ISIS...) I still have to wait the long round trip for a rebuild.
[23:19] <BranchPredictor> blizzow: that's another story. :)
[23:19] <blizzow> Do I get linearly better write speed with more OSDs?
[23:19] <BranchPredictor> still, failure of a disk is (imho) more common than of an entire server.
[23:20] * analbeard (~shw@host109-149-32-128.range109-149.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[23:20] * analbeard (~shw@support.memset.com) has joined #ceph
[23:20] <T1> if the entire server fails you should be able to plug all the disks into another machine, start the OSDs up, and have recovery complete without moving much data
[23:20] * davidzlap (~Adium@rrcs-74-87-213-28.west.biz.rr.com) has joined #ceph
[23:21] <T1> .. and yes, disk failure is not rare - it's to be expected
[23:21] <m0zes> the number of simultaneous clients scales fairly linearly with more osds; eventually you'll reach some sort of peak from a single client, though.
[23:21] <rkeene> http://www.rkeene.org/viewer/tmp/ceph-10.2.2-rbd-mirror-link-fails.txt.htm
[23:21] <BranchPredictor> not that I haven't experienced server failure (I did, a mobo decided to die on me)
[23:22] <T1> no no, I've had PSU, mobo, CPU, ramslot failures
[23:22] <m0zes> some combination of latency, bandwidth and cpu being the bottleneck.
[23:22] <BranchPredictor> and replacing just the mobo was fine; ceph didn't complain.
[23:22] <T1> in one server the replacement mobo also had a failed ramslot after a tech from the vendor replaced it
[23:23] <T1> that was a bit annoying, but there is nothing I can do
[23:23] <T1> so I just scheduled yet another downtime 24 hours later again
[23:23] <T1> I could do even
[23:24] <BranchPredictor> gotta go, good night!
[23:24] <T1> cya
[23:24] <blizzow> Bye. Thanks BranchPredictor
[23:24] * analbeard (~shw@support.memset.com) has left #ceph
[23:25] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Remote host closed the connection)
[23:27] * haplo37 (~haplo37@199.91.185.156) Quit (Remote host closed the connection)
[23:29] * isti (~isti@BC06E559.dsl.pool.telekom.hu) Quit (Ping timeout: 480 seconds)
[23:32] <rkeene> Compiled with --without-radosgw
[23:32] * MrHeavy (~MrHeavy@pool-108-46-175-58.nycmny.fios.verizon.net) has joined #ceph
[23:34] * dnunez-gym (~dnunez@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[23:35] * T1 (~the_one@5.186.54.143) Quit (Quit: Where did the client go?)
[23:38] * bniver (~bniver@pool-173-48-58-27.bstnma.fios.verizon.net) has joined #ceph
[23:38] * MrHeavy (~MrHeavy@pool-108-46-175-58.nycmny.fios.verizon.net) Quit (Read error: Connection reset by peer)
[23:39] * EthanL (~lamberet@cce02cs4040-fa12-z.ams.hpecore.net) has joined #ceph
[23:41] <rkeene> And the upstream for one of --with-radosgw's dependencies is gone (fcgi)
[23:41] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[23:44] * Kaervan (~Helleshin@26XAAAHDC.tor-irc.dnsbl.oftc.net) Quit ()
[23:44] * Zyn (~ChauffeR@195.40.181.35) has joined #ceph
[23:45] * T1 (~the_one@5.186.54.143) has joined #ceph
[23:46] * EthanL (~lamberet@cce02cs4040-fa12-z.ams.hpecore.net) Quit (Read error: Connection reset by peer)
[23:50] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.