#ceph IRC Log

IRC Log for 2016-10-09

Timestamps are in GMT/BST.

[0:05] * The_Ball (~pi@20.92-221-43.customer.lyse.net) Quit (Remote host closed the connection)
[0:06] * The_Ball (~pi@20.92-221-43.customer.lyse.net) has joined #ceph
[0:11] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Remote host closed the connection)
[0:11] * Vacuum__ (~Vacuum@88.130.208.18) has joined #ceph
[0:11] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[0:13] * Vacuum_ (~Vacuum@88.130.192.120) Quit (Ping timeout: 480 seconds)
[0:25] * om (~om@66.215.128.117) has joined #ceph
[0:33] * raphaelsc (~raphaelsc@2804:7f2:2180:2145:5e51:4fff:fe86:bbae) has joined #ceph
[0:37] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) Quit (Ping timeout: 480 seconds)
[0:37] <tessier_> Ceph is awesome. Now that I've got it working properly I'm amazed at how easily I can slice and dice storage and the details are all handled automatically. I should have been using this 2 years ago.
[0:53] * georgem (~Adium@69-165-135-139.dsl.teksavvy.com) has joined #ceph
[1:20] * oms101 (~oms101@p20030057EA48FD00C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:26] * mog_ (~Mousey@tsn109-201-154-139.dyn.nltelcom.net) has joined #ceph
[1:29] * oms101 (~oms101@p20030057EA096800C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:30] * markl_ (~mark@knm.org) Quit (Ping timeout: 480 seconds)
[1:47] * spgriffinjr (~spgriffin@66.46.246.206) Quit (Ping timeout: 480 seconds)
[1:47] * yanzheng1 (~zhyan@125.70.23.12) has joined #ceph
[1:51] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[1:56] * mog_ (~Mousey@tsn109-201-154-139.dyn.nltelcom.net) Quit ()
[2:45] * om (~om@66.215.128.117) Quit (Quit: This computer has gone to sleep)
[2:49] * spgriffinjr (~spgriffin@66.46.246.206) has joined #ceph
[2:56] * jermudgeon (~jermudgeo@southend.mdu.whitestone.link) has joined #ceph
[3:00] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) has joined #ceph
[3:11] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:29] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) Quit (Quit: Leaving.)
[3:31] * Eric1 (~Revo84@185.65.134.74) has joined #ceph
[4:00] * Eric1 (~Revo84@185.65.134.74) Quit ()
[4:02] * [arx] (~arx@the.kittypla.net) Quit (Quit: Your ideas are intriguing to me, and I wish to subscribe to your newsletter.)
[4:03] * jfaj (~jan@p20030084AD6F2D006AF728FFFE6777FF.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[4:04] * natarej (~natarej@101.188.54.14) Quit (Ping timeout: 480 seconds)
[4:12] * jfaj (~jan@p20030084AD264C006AF728FFFE6777FF.dip0.t-ipconnect.de) has joined #ceph
[4:41] * Linkshot (~Ian2128@tor-exit.squirrel.theremailer.net) has joined #ceph
[5:11] * Linkshot (~Ian2128@tor-exit.squirrel.theremailer.net) Quit ()
[5:14] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Quit: Leaving...)
[5:20] * georgem (~Adium@69-165-135-139.dsl.teksavvy.com) Quit (Quit: Leaving.)
[5:24] * Concubidated (~cube@c-73-12-218-131.hsd1.ca.comcast.net) has joined #ceph
[5:34] * kefu (~kefu@114.92.125.128) has joined #ceph
[5:38] * Morde (~SurfMaths@46.166.190.221) has joined #ceph
[5:45] * Vacuum_ (~Vacuum@i59F79058.versanet.de) has joined #ceph
[5:51] * Vacuum__ (~Vacuum@88.130.208.18) Quit (Ping timeout: 480 seconds)
[6:08] * Morde (~SurfMaths@46.166.190.221) Quit ()
[6:10] * ircuser-1 (~Johnny@158.183-62-69.ftth.swbr.surewest.net) Quit (Quit: because)
[6:12] * walcubi (~walcubi@p5797A886.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[6:12] * walcubi (~walcubi@p5797AECB.dip0.t-ipconnect.de) has joined #ceph
[7:35] * cyphase (~cyphase@000134f2.user.oftc.net) Quit (Quit: cyphase.com)
[7:35] * cyphase (~cyphase@000134f2.user.oftc.net) has joined #ceph
[7:36] * Nicho1as (~nicho1as@00022427.user.oftc.net) has joined #ceph
[7:37] * kefu (~kefu@114.92.125.128) Quit (Max SendQ exceeded)
[7:38] * kefu (~kefu@li1445-134.members.linode.com) has joined #ceph
[7:40] * kuku (~kuku@112.203.59.175) has joined #ceph
[7:54] * Sun7zu (~Wijk@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[8:08] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Remote host closed the connection)
[8:09] * dgurtner (~dgurtner@178.197.235.128) has joined #ceph
[8:12] * rendar (~I@host220-173-dynamic.116-80-r.retail.telecomitalia.it) has joined #ceph
[8:24] * mykola (~Mikolaj@91.245.74.58) has joined #ceph
[8:24] * jermudgeon (~jermudgeo@southend.mdu.whitestone.link) Quit (Quit: jermudgeon)
[8:24] * Sun7zu (~Wijk@torland1-this.is.a.tor.exit.server.torland.is) Quit ()
[8:25] * Drezil1 (~totalworm@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[8:29] * sickology (~root@vpn.bcs.hr) Quit (Read error: Connection reset by peer)
[8:29] * sickology (~root@vpn.bcs.hr) has joined #ceph
[8:37] * KindOne (kindone@0001a7db.user.oftc.net) has joined #ceph
[8:42] * kefu (~kefu@li1445-134.members.linode.com) Quit (Read error: Connection reset by peer)
[8:48] * efirs (~firs@98.207.153.155) Quit (Quit: Leaving.)
[8:48] * kefu (~kefu@114.92.125.128) has joined #ceph
[8:53] * haplo37 (~haplo37@107.190.42.94) Quit (Remote host closed the connection)
[8:54] * Drezil1 (~totalworm@torland1-this.is.a.tor.exit.server.torland.is) Quit ()
[9:01] * brians (~brian@80.111.114.175) Quit (Remote host closed the connection)
[9:02] * brians (~brian@80.111.114.175) has joined #ceph
[9:03] * SinZ|offline (~Vale@192.42.116.16) has joined #ceph
[9:23] * kuku (~kuku@112.203.59.175) Quit (Remote host closed the connection)
[9:33] * SinZ|offline (~Vale@192.42.116.16) Quit ()
[9:41] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[9:54] * rotbeard (~redbeard@2a02:908:df13:bb00:5c08:6fb9:5778:1840) has joined #ceph
[9:55] * dgurtner (~dgurtner@178.197.235.128) Quit (Read error: Connection reset by peer)
[10:01] * Hannes (~Hannes@hygeia.opentp.be) Quit (Remote host closed the connection)
[10:01] * Jeffrey4l (~Jeffrey@110.252.64.206) Quit (Remote host closed the connection)
[10:01] * Hannes (~Hannes@hygeia.opentp.be) has joined #ceph
[10:09] * rotbeard (~redbeard@2a02:908:df13:bb00:5c08:6fb9:5778:1840) Quit (Quit: Leaving)
[10:09] * Jeffrey4l (~Jeffrey@110.252.64.206) has joined #ceph
[10:17] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[10:18] * TheDoudou_a (~spidu_@5.61.34.63) has joined #ceph
[10:21] * rotbeard (~redbeard@aftr-109-90-233-215.unity-media.net) has joined #ceph
[10:29] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) has joined #ceph
[10:30] * kefu (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[10:42] * wkennington (~wak@0001bde8.user.oftc.net) Quit (Read error: Connection reset by peer)
[10:48] * TheDoudou_a (~spidu_@5.61.34.63) Quit ()
[10:56] * koollman (samson_t@78.47.248.51) Quit (Remote host closed the connection)
[10:56] * Hannes (~Hannes@hygeia.opentp.be) Quit (Remote host closed the connection)
[11:02] * Hannes (~Hannes@hygeia.opentp.be) has joined #ceph
[11:08] * nardial (~ls@p548942E5.dip0.t-ipconnect.de) has joined #ceph
[11:10] * rotbeard (~redbeard@aftr-109-90-233-215.unity-media.net) Quit (Ping timeout: 480 seconds)
[11:11] * koollman (samson_t@78.47.248.51) has joined #ceph
[11:12] * murmur (~murmur@zeeb.org) Quit (Read error: Connection reset by peer)
[11:12] * murmur (~murmur@zeeb.org) has joined #ceph
[11:19] * ggarg (~ggarg@host-82-135-29-34.customer.m-online.net) Quit (Ping timeout: 480 seconds)
[11:21] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:2038:c77a:4100:5349) has joined #ceph
[11:51] * sugoruyo (~textual@host81-151-155-205.range81-151.btcentralplus.com) has joined #ceph
[11:57] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[12:15] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[12:18] * sleinen1 (~Adium@2001:620:0:82::102) has joined #ceph
[12:23] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[12:30] * Concubidated (~cube@c-73-12-218-131.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[12:36] * [0x4A6F]_ (~ident@p508CDAF7.dip0.t-ipconnect.de) has joined #ceph
[12:39] * [0x4A6F] (~ident@0x4a6f.user.oftc.net) Quit (Ping timeout: 480 seconds)
[12:39] * [0x4A6F]_ is now known as [0x4A6F]
[12:43] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:2038:c77a:4100:5349) Quit (Ping timeout: 480 seconds)
[13:02] * sugoruyo (~textual@host81-151-155-205.range81-151.btcentralplus.com) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[13:04] * nardial (~ls@p548942E5.dip0.t-ipconnect.de) Quit (Quit: Leaving)
[13:05] * sugoruyo (~textual@host81-151-155-205.range81-151.btcentralplus.com) has joined #ceph
[13:07] * iamchrist (~aarcane@99-42-64-115.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[13:07] * aarcane (~aarcane@108-208-206-178.lightspeed.irvnca.sbcglobal.net) Quit (Read error: Connection reset by peer)
[13:11] * sugoruyo (~textual@host81-151-155-205.range81-151.btcentralplus.com) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[13:16] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[13:23] * Anticimex (~Anticimex@ec2-52-57-137-19.eu-central-1.compute.amazonaws.com) has joined #ceph
[13:23] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[13:25] * Anticimex (~Anticimex@ec2-52-57-137-19.eu-central-1.compute.amazonaws.com) Quit ()
[13:26] * Anticimex (~Anticimex@ec2-52-57-137-19.eu-central-1.compute.amazonaws.com) has joined #ceph
[13:27] * dgurtner (~dgurtner@178.197.228.84) has joined #ceph
[13:30] * andreww (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[13:38] * tallest_red (~redbeast1@exit0.liskov.tor-relays.net) has joined #ceph
[13:55] * Discovery (~Discovery@109.235.52.4) has joined #ceph
[13:59] * kefu (~kefu@114.92.125.128) has joined #ceph
[14:08] * tallest_red (~redbeast1@exit0.liskov.tor-relays.net) Quit ()
[14:17] * raphaelsc (~raphaelsc@2804:7f2:2180:2145:5e51:4fff:fe86:bbae) Quit (Remote host closed the connection)
[14:18] * dgurtner (~dgurtner@178.197.228.84) Quit (Read error: Connection reset by peer)
[14:51] * salwasser (~Adium@2601:197:101:5cc1:cc90:7445:13a8:e580) has joined #ceph
[15:05] * salwasser (~Adium@2601:197:101:5cc1:cc90:7445:13a8:e580) Quit (Quit: Leaving.)
[15:12] * salwasser (~Adium@c-73-219-86-22.hsd1.ma.comcast.net) has joined #ceph
[15:13] * salwasser (~Adium@c-73-219-86-22.hsd1.ma.comcast.net) Quit ()
[15:13] * salwasser (~Adium@2601:197:101:5cc1:2c54:686f:bdea:1d9d) has joined #ceph
[15:14] * salwasser (~Adium@2601:197:101:5cc1:2c54:686f:bdea:1d9d) Quit ()
[15:14] * salwasser (~Adium@2601:197:101:5cc1:2c54:686f:bdea:1d9d) has joined #ceph
[15:23] * georgem (~Adium@69-165-135-139.dsl.teksavvy.com) has joined #ceph
[15:24] * natarej (~natarej@101.188.54.14) has joined #ceph
[15:26] * salwasser (~Adium@2601:197:101:5cc1:2c54:686f:bdea:1d9d) Quit (Quit: Leaving.)
[15:27] * salwasser (~Adium@2601:197:101:5cc1:2c54:686f:bdea:1d9d) has joined #ceph
[15:35] * salwasser (~Adium@2601:197:101:5cc1:2c54:686f:bdea:1d9d) Quit (Quit: Leaving.)
[15:36] * salwasser (~Adium@2601:197:101:5cc1:2c54:686f:bdea:1d9d) has joined #ceph
[15:36] * salwasser (~Adium@2601:197:101:5cc1:2c54:686f:bdea:1d9d) Quit ()
[15:36] * salwasser (~Adium@2601:197:101:5cc1:2c54:686f:bdea:1d9d) has joined #ceph
[15:36] * salwasser (~Adium@2601:197:101:5cc1:2c54:686f:bdea:1d9d) Quit ()
[15:37] * salwasser (~Adium@2601:197:101:5cc1:2c54:686f:bdea:1d9d) has joined #ceph
[15:37] * salwasser (~Adium@2601:197:101:5cc1:2c54:686f:bdea:1d9d) Quit ()
[15:37] * salwasser (~Adium@2601:197:101:5cc1:2c54:686f:bdea:1d9d) has joined #ceph
[15:40] * salwasser (~Adium@2601:197:101:5cc1:2c54:686f:bdea:1d9d) Quit ()
[15:40] * salwasser (~Adium@2601:197:101:5cc1:2c54:686f:bdea:1d9d) has joined #ceph
[15:41] * georgem (~Adium@69-165-135-139.dsl.teksavvy.com) Quit (Quit: Leaving.)
[15:45] * Discovery (~Discovery@109.235.52.4) Quit (Ping timeout: 480 seconds)
[15:48] * salwasser (~Adium@2601:197:101:5cc1:2c54:686f:bdea:1d9d) Quit (Ping timeout: 480 seconds)
[15:54] * root________ (~aarcane@99-42-64-115.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[15:54] * iamchrist (~aarcane@99-42-64-115.lightspeed.irvnca.sbcglobal.net) Quit (Read error: Connection reset by peer)
[16:16] * foxxx0 (~fox@valhalla.nano-srv.net) Quit (Quit: WeeChat 1.5)
[16:17] * rotbeard (~redbeard@aftr-109-90-233-215.unity-media.net) has joined #ceph
[16:22] * foxxx0 (~fox@valhalla.nano-srv.net) has joined #ceph
[16:23] * root________ (~aarcane@99-42-64-115.lightspeed.irvnca.sbcglobal.net) Quit (Read error: Connection reset by peer)
[16:24] * root________ (~aarcane@99-42-64-115.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[16:24] * Nicho1as (~nicho1as@00022427.user.oftc.net) Quit (Quit: A man from the Far East; using WeeChat 1.5)
[16:25] * kefu (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[16:26] * dlan (~dennis@116.228.88.131) Quit (Remote host closed the connection)
[16:32] * dlan (~dennis@116.228.88.131) has joined #ceph
[16:46] * kefu (~kefu@114.92.125.128) has joined #ceph
[17:12] * Discovery (~Discovery@109.235.52.3) has joined #ceph
[17:18] * om (~om@66.215.128.117) has joined #ceph
[17:19] * om (~om@66.215.128.117) Quit ()
[17:20] * om (~om@66.215.128.117) has joined #ceph
[18:26] * wiebalck_ (~wiebalck@AAnnecy-653-1-50-224.w90-41.abo.wanadoo.fr) has joined #ceph
[18:39] * Concubidated (~cube@c-73-12-218-131.hsd1.ca.comcast.net) has joined #ceph
[19:00] * kefu (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[19:06] * atod (~atod@cpe-74-73-129-35.nyc.res.rr.com) has joined #ceph
[19:18] * erwan_taf (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) has joined #ceph
[19:19] * atod (~atod@cpe-74-73-129-35.nyc.res.rr.com) Quit (Quit: leaving)
[19:26] * atod (~atod@cpe-74-73-129-35.nyc.res.rr.com) has joined #ceph
[19:27] * wiebalck_ (~wiebalck@AAnnecy-653-1-50-224.w90-41.abo.wanadoo.fr) Quit (Quit: wiebalck_)
[19:31] * renthawn^ (~renthawn^@178-175-128-50.static.host) has joined #ceph
[19:31] <atod> Has anyone performed any benchmarks with Ceph to measure latency and data rate across different file sizes and r/w operation sizes?
[19:38] * dgurtner (~dgurtner@176.35.230.73) has joined #ceph
[19:40] * jermudgeon (~jermudgeo@tab.mdu.whitestone.link) has joined #ceph
[19:43] * iamchrist (~aarcane@99-42-64-115.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[19:43] * root________ (~aarcane@99-42-64-115.lightspeed.irvnca.sbcglobal.net) Quit (Read error: Connection reset by peer)
[19:55] * renthawn^ (~renthawn^@178-175-128-50.static.host) Quit (Ping timeout: 480 seconds)
[19:56] * erwan_taf (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) Quit (Ping timeout: 480 seconds)
[20:08] <T1> atod: probably a lot of people, but it really depends on your particular setup and hardware, so comparisons are not easy
[20:14] * rendar (~I@host220-173-dynamic.116-80-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[20:32] <atod> T1: yeah, it probably depends on the number of nodes the data is distributed across, the redundancy settings, and the hardware involved
[20:33] <atod> T1: I'd like to see some sample numbers showing things like latency on small-file I/O operations in a typical configuration. That's where some production workloads really get hit hard.
[20:37] * erwan_taf (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) has joined #ceph
[20:38] <iggy> I've seen plenty of benchmarks that have latency info
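
For readers who want to gather numbers like these themselves, a minimal sketch is to drive small writes with rados bench against a throwaway pool and read the average/max latency from its summary; the pool name and PG count below are arbitrary placeholders.

    # create a scratch pool purely for benchmarking
    ceph osd pool create bench 64
    # 60 seconds of 4 KiB writes, 16 in flight; the summary reports average and max latency
    rados bench -p bench 60 write -b 4096 -t 16 --no-cleanup
    # re-read the objects left behind by --no-cleanup to measure small-read latency
    rados bench -p bench 60 seq -t 16
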
[20:40] * rendar (~I@host220-173-dynamic.116-80-r.retail.telecomitalia.it) has joined #ceph
[20:40] * root________ (~aarcane@99-42-64-115.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[20:44] * iamchrist (~aarcane@99-42-64-115.lightspeed.irvnca.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[20:52] * erwan_taf (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) Quit (Ping timeout: 480 seconds)
[20:56] * mykola (~Mikolaj@91.245.74.58) Quit (Quit: away)
[21:05] * adamcrume (~quassel@2601:647:cb01:f890:c136:33db:27c5:a2dc) has joined #ceph
[21:14] * ircuser-1 (~Johnny@158.183-62-69.ftth.swbr.surewest.net) has joined #ceph
[21:15] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[21:17] * ahadi (~ahadi@static.151.64.243.136.clients.your-server.de) has joined #ceph
[21:17] <ahadi> hi, I am currently learning about Ceph and how to deploy a Ceph cluster. I want to build a small cluster of 5 fairly powerful machines, each capable of holding 8 disks. Two disks for the OS, two SSDs for journals and 4 for data. My initial thought was to create a RAID5 of the 4 storage disks. My question is: would I create only one partition and one OSD?
[21:18] <ahadi> Or would I create multiple OSDs using multiple partitions? The manual says one OSD per disk is best, but I have 4 data disks and want a RAID setup (for extra peace of mind)
[21:18] <ahadi> How would you build this cluster? 5 DELL servers with 8 disks each. Two disks will definitely be used for the OS (mirrored with RAID)
[21:18] <ben1> normally you don't use raid with ceph, you just create one osd per disk
[21:19] <ben1> with dell servers you may not be able to do straight jbod, and may need to create single disk raid0
[21:19] <ben1> but check how they show up, as that could be annoying when you have disk failures
[21:19] <T1> drop raid
[21:20] <T1> use 1 physical disk per OSD
[21:20] <ben1> normally would replace raid controller with sas controller
[21:20] <T1> if possible put the controller into HBA mode
[21:20] <T1> newer PERC can do that
[21:20] <T1> PERCs even
[21:20] <ben1> t1: really? old ones couldn't, it was annoying :)
[21:20] <T1> yeah
[21:21] <T1> I bricked one while trying to reflash it to IT firmware.. :)
[21:21] <T1> had to get it replaced
[21:21] <T1> whole machine didn't pass POST
[21:21] <ben1> i've used m1015 in IR mode and it was fine
[21:21] <ben1> but normally flash to IT mode
[21:21] <ahadi> okay, so no RAID. I thought it would give me some extra peace
[21:21] <ben1> m1015 being the ibm one
[21:22] <T1> for ceph I go with HBA mode
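
A rough sketch of the layout being recommended here, one OSD per physical data disk with its journal on an SSD partition, using ceph-disk (the standard tool of that era); every device name below is a placeholder.

    # one OSD per data disk; the second argument is the journal partition on an SSD
    ceph-disk prepare --fs-type xfs /dev/sdc /dev/sda5
    ceph-disk prepare --fs-type xfs /dev/sdd /dev/sdb5
    # activate the newly prepared data partitions
    ceph-disk activate /dev/sdc1
    ceph-disk activate /dev/sdd1
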
[21:22] <ben1> ahadi: you haven't had much experience with RAID have you?
[21:22] <ahadi> ben1: Nothing bad yet
[21:22] <T1> for normal application-server usage I use it in raid-mode for os and local storage in multiple mirrors
[21:22] <T1> one of the points of ceph is that you drop raid controllers
[21:23] <ben1> well you can pretty much choose to do straight or raid for OS
[21:23] <ben1> cos OS shouldn't matter
[21:23] <ben1> if you use mdadm then enable write intent bitmap
[21:23] <T1> I went with software raid (md-based) for OS and journals
[21:23] <T1> my tests didn't show any overhead for the journals
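
A minimal sketch of the md mirror with a write-intent bitmap that ben1 mentions, with placeholder partition names; the same command applies whether the mirror carries only the OS or the journals as well.

    # RAID1 across the two SSDs with an internal write-intent bitmap, so a resync after an
    # unclean shutdown only copies recently dirtied regions instead of the whole array
    mdadm --create /dev/md0 --level=1 --raid-devices=2 --bitmap=internal /dev/sda2 /dev/sdb2
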
[21:23] <ahadi> I know, I've read about the replication of Ceph and I did see it in action with a small test deployment on AWS. Seems pretty neat.
[21:24] <ben1> journals shouldn't have raid
[21:24] <ben1> striping will happen from ceph itself
[21:24] <ben1> if you use raid0 then you can lose more osd, if you use raid1 then you double your amount of total writes down to ssd.
[21:24] <T1> but I like to think that I'm pretty safe in regards to losing one of the 2 OS/journal SSDs in my nodes
[21:24] <ahadi> So no RAID for 'ceph disks', OS could be RAID
[21:24] <T1> yes
[21:25] <T1> journals could(!) also be raid
[21:25] <ben1> ahadi: yeh
[21:25] <ben1> journals shouldn't be raid
[21:25] <T1> it depends
[21:25] <ahadi> Ok thank you very much guys for the quick help / suggestions!
[21:25] <T1> :)
[21:25] <ben1> t1: what's your reasoning for raid journal?
[21:26] <T1> I use the same SSDs for OS and journal
[21:26] <ben1> t1: use mdadm on the beginning for OS
[21:26] <T1> if I lose one I will either lose the entire node or both OSDs
[21:26] <ben1> and have the journal partitions not be raid
[21:26] <T1> so.. raid for those two it is
[21:27] <T1> there is no write amplification
[21:27] <T1> performance is the same (I did multiple tests)
[21:27] <ben1> oh
[21:27] <ben1> are you doing raid 0?
[21:27] <ben1> and just assuming whole node can fail?
[21:28] <ben1> and you only have 4 disks?
[21:28] <T1> and given the amount of IOPS the SSDs can handle I'm not even close to the limits for a single SSD
[21:28] <ben1> 2 of them being OS/journal
[21:28] <T1> no, raid 1 for those 2
[21:28] <T1> but I've got some nodes with only 2 OSDs
[21:28] <T1> (4x 3.5" bays)
[21:29] <ben1> well you'll still do twice as many writes to the ssd
[21:29] <T1> no..
[21:29] <ben1> with raid1
[21:29] <T1> I've got 2 SSDs
[21:29] * wiebalck_ (~wiebalck@AAnnecy-653-1-50-224.w90-41.abo.wanadoo.fr) has joined #ceph
[21:29] <ben1> 2 ssd with raid 1 = 2x the writes
[21:29] <T1> put those two in a mirror
[21:29] <T1> 1 write = 1 write on each
[21:30] <T1> there is no amplification
[21:30] <ben1> yeah but not having them in raid = 1 write on one of them, alternating
[21:30] <ben1> so it's like .5 write vs 1 write
[21:30] <ben1> and depending on your network it could slow down some transactions
[21:31] <T1> but losing either one of them without raid1 means I will either lose the entire node (if I lose the one with the OS) or lose both OSDs (if I lose the one with the journal for both OSDs)
[21:31] <ben1> if it's gigabit i suspect there wouldn't be that much diff
[21:31] <ben1> t1: yeah, but what i was saying could be done is raid some partitions with mdadm
[21:31] <ben1> t1: then journal one osd per ssd with the other partition
[21:32] <ben1> rather than whole disk raid
[21:32] * puvo (~theghost9@77.109.139.87) has joined #ceph
[21:32] <ben1> so losing one ssd loses one disk
[21:32] <T1> and as I've said multiple times already - I've done multiple tests without seeing any performance penalty for raid1 versus raw disk using Sebastien's tools
[21:32] <T1> or.. using his "how to test if your SSDs are suitable for journals"
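
The test being referred to is the widely circulated fio check of single-threaded synchronous 4k writes, which is roughly the I/O pattern of a filestore journal. A minimal sketch follows; note that it writes straight to the device and destroys any data on it, and /dev/sdX is a placeholder.

    # DANGER: overwrites the target device
    fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=journal-test
    # rerun with a higher --numjobs to see how the drive copes with several journals on it
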
[21:33] <ben1> did you compare doing those tests on two disks at once vs doing on the raid?
[21:33] <T1> and no - losing just a single OSD due to journal failure is unacceptable
[21:33] <T1> no I did not
[21:34] <T1> I could not see any reason
[21:34] <ben1> ok well if you're happy with it, it's what it is
[21:34] <T1> a single SATA 7200RPM disk demands.. what.. 120 IOPS to be available from the journal
[21:34] <T1> that's nowhere close to the limits of the Intel S3710s that I've got
[21:35] <T1> it's the setup I'm scaling up with in the future
[21:35] * atod (~atod@cpe-74-73-129-35.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[21:35] <T1> we're looking at some 10x 2.5" bays chassis (Dell R something)
[21:36] <T1> 8 OSDs on 8x 4TB 2.5" and 2x SSDs for OS and journal in software raid1
[21:36] <T1> still _well_ within what a single S3710 can handle IOPS-wise
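
Spelled out, the arithmetic behind that claim, taking ~120 IOPS as the most a 7,200 RPM SATA drive will demand of its journal and the vendor's rough 4 KiB random-write rating for the S3710 (on the order of 40,000 IOPS):

    8 OSDs x ~120 IOPS ≈ ~1,000 IOPS of journal traffic per node
    ~1,000 / ~40,000   ≈ a few percent of what a single S3710 is rated to sustain
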
[21:37] <ben1> what are the TBW limits on those though?
[21:37] <T1> and thus, still not a problem to have software raid handle
[21:37] <T1> S3710s?
[21:37] <T1> you mean how much data can be written?
[21:37] <ben1> yeah
[21:37] <ben1> looks reasonably high
[21:37] <T1> I can't remember if it's 5 or 10 daily full writes for 5 years
[21:38] <T1> but it's pretty high
[21:38] <ben1> it's 10
[21:38] <T1> probably 5
[21:38] <T1> ah
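
Taking the figure of 10 drive writes per day over the 5-year warranty at face value, the endurance arithmetic for a hypothetical 400 GB journal SSD works out to:

    400 GB x 10 writes/day x 365 days x 5 years ≈ 7.3 PB written
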
[21:38] <T1> samsung has raised the bar on its DC SSDs, but I'm not going anywhere near those for the next 3 or 5 yers until they have a proven track record
[21:39] <T1> years even (damn VNC keeps cutting some characters)
[21:39] <T1> ((don't ask..))
[21:44] * puvo (~theghost9@77.109.139.87) Quit (Ping timeout: 480 seconds)
[21:47] * iamchrist (~aarcane@99-42-64-115.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[21:47] * root________ (~aarcane@99-42-64-115.lightspeed.irvnca.sbcglobal.net) Quit (Read error: Connection reset by peer)
[21:49] * T1 (~the_one@5.186.54.143) Quit (Read error: Connection reset by peer)
[21:51] * Discovery (~Discovery@109.235.52.3) Quit (Read error: Connection reset by peer)
[21:53] * sugoruyo (~textual@host81-151-155-205.range81-151.btcentralplus.com) has joined #ceph
[22:07] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) Quit (Quit: doppelgrau)
[22:08] * dgurtner (~dgurtner@176.35.230.73) Quit (Ping timeout: 480 seconds)
[22:15] * sugoruyo (~textual@host81-151-155-205.range81-151.btcentralplus.com) Quit (Quit: Textual IRC Client: www.textualapp.com)
[22:20] * anonymous (~anonymous@114.121.129.133) has joined #ceph
[22:21] <anonymous> ping
[22:23] * anonymous (~anonymous@114.121.129.133) has left #ceph
[22:35] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) has joined #ceph
[22:36] * Concubidated (~cube@c-73-12-218-131.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[22:38] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[22:40] * wiebalck_ (~wiebalck@AAnnecy-653-1-50-224.w90-41.abo.wanadoo.fr) Quit (Quit: wiebalck_)
[22:41] * T1 (~the_one@5.186.54.143) has joined #ceph
[22:41] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) has joined #ceph
[22:47] * rotbeard (~redbeard@aftr-109-90-233-215.unity-media.net) Quit (Quit: Leaving)
[22:54] * Jeffrey4l_ (~Jeffrey@110.252.73.52) has joined #ceph
[22:55] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Read error: Connection reset by peer)
[22:55] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[22:58] * derjohn_mob (~aj@x590c6c73.dyn.telefonica.de) has joined #ceph
[22:58] * Jeffrey4l (~Jeffrey@110.252.64.206) Quit (Ping timeout: 480 seconds)
[23:00] * root________ (~aarcane@99-42-64-115.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[23:00] * iamchrist (~aarcane@99-42-64-115.lightspeed.irvnca.sbcglobal.net) Quit (Read error: Connection reset by peer)
[23:02] * minnesotags (~herbgarci@c-50-137-242-97.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[23:03] * Concubidated (~cube@66.87.134.12) has joined #ceph
[23:05] * Grimhound (~CoZmicShR@108.61.123.67) has joined #ceph
[23:10] * root________ (~aarcane@99-42-64-115.lightspeed.irvnca.sbcglobal.net) Quit (Quit: Leaving)
[23:14] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Read error: Connection reset by peer)
[23:16] * atod (~atod@cpe-74-73-129-35.nyc.res.rr.com) has joined #ceph
[23:24] * atod (~atod@cpe-74-73-129-35.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[23:26] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) Quit (Ping timeout: 480 seconds)
[23:28] * Concubidated1 (~cube@104.220.228.114) has joined #ceph
[23:28] * Concubidated (~cube@66.87.134.12) Quit (Read error: No route to host)
[23:34] * Grimhound (~CoZmicShR@108.61.123.67) Quit (Ping timeout: 480 seconds)
[23:38] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) Quit (Quit: Leaving.)
[23:44] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[23:47] * tallest_red (~storage@108.61.122.72) has joined #ceph
[23:51] * KindOne (kindone@0001a7db.user.oftc.net) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.