#ceph IRC Log

IRC Log for 2016-01-03

Timestamps are in GMT/BST.

[0:00] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:ed0c:dd2f:18a6:e24d) has joined #ceph
[0:01] * haomaiwang (~haomaiwan@103.15.217.218) Quit (Remote host closed the connection)
[0:01] * haomaiwang (~haomaiwan@103.15.217.218) has joined #ceph
[0:04] * Hazmat (~Pulec@104.207.136.72) has joined #ceph
[0:09] * Geoffrey (~geoffrey@169-0-138-190.ip.afrihost.co.za) Quit (Quit: Nettalk6 - www.ntalk.de)
[0:14] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:ed0c:dd2f:18a6:e24d) Quit (Ping timeout: 480 seconds)
[0:16] * T1 (~the_one@87.104.212.66) has joined #ceph
[0:19] * bvi (~bastiaan@152-64-132-5.ftth.glasoperator.nl) has joined #ceph
[0:22] * Geoffrey (~geoffrey@169-0-138-190.ip.afrihost.co.za) has joined #ceph
[0:23] * T1 (~the_one@87.104.212.66) Quit (Read error: Connection reset by peer)
[0:25] * Geoffrey (~geoffrey@169-0-138-190.ip.afrihost.co.za) Quit (Quit: Nettalk6 - www.ntalk.de)
[0:27] * T1 (~the_one@87.104.212.66) has joined #ceph
[0:34] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[0:34] * Hazmat (~Pulec@104.207.136.72) Quit ()
[0:37] * olid1982 (~olid1982@aftr-185-17-206-143.dynamic.mnet-online.de) Quit (Ping timeout: 480 seconds)
[0:42] * Geoffrey (~geoffrey@169-0-138-190.ip.afrihost.co.za) has joined #ceph
[0:45] * Geoffrey (~geoffrey@169-0-138-190.ip.afrihost.co.za) Quit ()
[0:58] * Cybert1nus is now known as Cybertinus
[1:00] * haomaiwang (~haomaiwan@103.15.217.218) Quit (Remote host closed the connection)
[1:01] * haomaiwang (~haomaiwan@103.15.217.218) has joined #ceph
[1:18] * treenerd (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[1:19] * treenerd (~gsulzberg@cpe90-146-148-47.liwest.at) Quit ()
[1:19] * treenerd (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[1:20] * treenerd (~gsulzberg@cpe90-146-148-47.liwest.at) Quit ()
[1:21] * oms101 (~oms101@p20030057EA5E9700C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:26] * Geoffrey (~Geoffrey@169-0-138-190.ip.afrihost.co.za) has joined #ceph
[1:28] * Geoffrey (~Geoffrey@169-0-138-190.ip.afrihost.co.za) Quit ()
[1:29] * oms101 (~oms101@p20030057EA353800C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:58] * bvi (~bastiaan@152-64-132-5.ftth.glasoperator.nl) Quit (Ping timeout: 480 seconds)
[2:01] * haomaiwang (~haomaiwan@103.15.217.218) Quit (Remote host closed the connection)
[2:01] * haomaiwang (~haomaiwan@103.15.217.218) has joined #ceph
[2:17] * Discovery (~Discovery@178.239.49.69) Quit (Read error: Connection reset by peer)
[2:47] * codice (~toodles@75-128-34-237.static.mtpk.ca.charter.com) Quit (Quit: Lost terminal)
[2:48] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Ping timeout: 480 seconds)
[2:56] * rendar (~I@95.233.118.222) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[3:01] * haomaiwang (~haomaiwan@103.15.217.218) Quit (Remote host closed the connection)
[3:01] * haomaiwang (~haomaiwan@103.15.217.218) has joined #ceph
[3:03] * derjohn_mobi (~aj@x590d9365.dyn.telefonica.de) has joined #ceph
[3:10] * aj__ (~aj@x4db24b53.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[3:37] * Diablodoct0r (~Thononain@76GAAAXX6.tor-irc.dnsbl.oftc.net) has joined #ceph
[4:01] * haomaiwang (~haomaiwan@103.15.217.218) Quit (Remote host closed the connection)
[4:01] * haomaiwang (~haomaiwan@103.15.217.218) has joined #ceph
[4:07] * Diablodoct0r (~Thononain@76GAAAXX6.tor-irc.dnsbl.oftc.net) Quit ()
[4:31] <flaf> IcePic: just for information => http://paste.alacon.org/39214
[4:32] <flaf> I don't know what information we can pull from that...
[4:45] * yuriw1 (~Adium@2601:645:4380:112c:c9c3:216b:8a5a:217b) has joined #ceph
[4:51] * yuriw (~Adium@c-73-231-242-62.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:01] * haomaiwang (~haomaiwan@103.15.217.218) Quit (Remote host closed the connection)
[5:01] * haomaiwang (~haomaiwan@103.15.217.218) has joined #ceph
[5:20] * blank1 (~Keiya@tor2r.ins.tor.net.eu.org) has joined #ceph
[5:21] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[5:32] * yanzheng (~zhyan@182.139.23.32) has joined #ceph
[5:49] * blank1 (~Keiya@4MJAAAYSK.tor-irc.dnsbl.oftc.net) Quit ()
[5:53] * Vacuum__ (~Vacuum@88.130.210.132) has joined #ceph
[6:00] * Vacuum_ (~Vacuum@88.130.210.217) Quit (Ping timeout: 480 seconds)
[6:01] * haomaiwang (~haomaiwan@103.15.217.218) Quit (Remote host closed the connection)
[6:01] * haomaiwang (~haomaiwan@103.15.217.218) has joined #ceph
[6:01] * LeaChim (~LeaChim@host81-157-237-29.range81-157.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[6:07] * yanzheng (~zhyan@182.139.23.32) Quit (Quit: This computer has gone to sleep)
[6:11] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[6:16] * rogst (~Spikey@109.201.133.100) has joined #ceph
[6:46] * rogst (~Spikey@4MJAAAYTL.tor-irc.dnsbl.oftc.net) Quit ()
[7:01] * haomaiwang (~haomaiwan@103.15.217.218) Quit (Remote host closed the connection)
[7:01] * haomaiwang (~haomaiwan@103.15.217.218) has joined #ceph
[7:08] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[7:15] * vbellur (~vijay@2601:647:4f00:4960:5e51:4fff:fee8:6a5c) has joined #ceph
[7:39] * sdefede (~sdefede@75.160.178.100) has joined #ceph
[7:45] * linjan_ (~linjan@176.195.50.55) has joined #ceph
[7:52] * linjan (~linjan@176.193.218.216) Quit (Ping timeout: 480 seconds)
[8:01] * haomaiwang (~haomaiwan@103.15.217.218) Quit (Remote host closed the connection)
[8:01] * haomaiwang (~haomaiwan@103.15.217.218) has joined #ceph
[8:10] * dgbaley27 (~matt@75.148.118.217) Quit (Quit: Leaving.)
[8:38] * mykola (~Mikolaj@91.225.200.219) has joined #ceph
[8:42] * raindog (~MatthewH1@76GAAAX5S.tor-irc.dnsbl.oftc.net) has joined #ceph
[8:56] * haomaiwang (~haomaiwan@103.15.217.218) Quit (Remote host closed the connection)
[8:56] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) has joined #ceph
[9:01] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[9:01] * sdefede (~sdefede@75.160.178.100) Quit (Quit: sdefede)
[9:01] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) has joined #ceph
[9:12] * raindog (~MatthewH1@76GAAAX5S.tor-irc.dnsbl.oftc.net) Quit ()
[9:13] * i_m (~ivan.miro@88.206.113.199) has joined #ceph
[9:13] * alfredodeza (~alfredode@198.206.133.89) Quit (Read error: Connection reset by peer)
[9:28] * AndroUser (~androirc@static-5-103-131-129.seas-nve.net) has joined #ceph
[9:28] * AndroUser is now known as offerlam
[9:32] <offerlam> Hi cephers.. I was wondering about ceph replication. The recommendation is a 10Gb network, but what about redundancy? Would it be wise to use teaming-like features with two 10Gb interfaces? I don't see why it's not enough to just have two 10Gb network cards which are on the same network and subnet but with different IPs.. is this right
[9:33] <offerlam> Per server, I mean
[9:35] <iggy> bonding will also buy you improved speed
[9:35] <iggy> (although not between 2 servers generally)
[9:37] <offerlam> iggy, are you talking to me?
[9:37] <iggy> yes
[9:41] <offerlam> So I could have two 10GbE NICs in my servers with IPs on the same network, but connected to, say, two different switches, so that if one switch fails rebuilding would be done over the NICs on the other switch? Is Ceph ok with that? Could the two NICs be on different networks, both configured to be used for replication in Ceph?
[9:46] <offerlam> I'm trying to keep the network side down in cost, which means I'm trying to avoid stacking features and teaming
[9:46] <offerlam> If it can be done
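[For reference -- a minimal sketch of the bonding setup iggy suggests, assuming Debian-style ifupdown with ifenslave; interface names, addresses and subnets below are made up:]

    # /etc/network/interfaces -- active-backup bond of two 10GbE NICs,
    # so losing one NIC or one switch keeps the link up
    auto bond0
    iface bond0 inet static
        address 10.0.1.11
        netmask 255.255.255.0
        bond-slaves enp3s0 enp4s0
        bond-mode active-backup   # needs no switch support; 802.3ad/LACP would also add bandwidth
        bond-miimon 100
    # Ceph itself only needs this subnet listed as the "cluster network" in ceph.conf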
[10:01] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[10:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[10:04] * offerlam (~androirc@static-5-103-131-129.seas-nve.net) Quit (Remote host closed the connection)
[10:05] * offerlam (~androirc@static-5-103-131-129.seas-nve.net) has joined #ceph
[10:07] * offerlam (~androirc@static-5-103-131-129.seas-nve.net) Quit (Remote host closed the connection)
[10:10] * rogierm (~rogierm@a82-94-41-183.adsl.xs4all.nl) has joined #ceph
[10:12] * offerlam (~androirc@static-5-103-131-129.seas-nve.net) has joined #ceph
[10:17] * offerlam (~androirc@static-5-103-131-129.seas-nve.net) Quit (Remote host closed the connection)
[10:18] * offerlam (~androirc@static-5-103-131-129.seas-nve.net) has joined #ceph
[10:20] * rogierm (~rogierm@a82-94-41-183.adsl.xs4all.nl) Quit (Remote host closed the connection)
[10:22] * kanagaraj (~kanagaraj@61.3.123.160) has joined #ceph
[10:22] * kanagaraj (~kanagaraj@61.3.123.160) Quit ()
[10:26] * rogierm (~rogierm@2001:985:1c56:1:ecc4:fa72:e168:a618) has joined #ceph
[10:26] * rogierm (~rogierm@2001:985:1c56:1:ecc4:fa72:e168:a618) Quit (Remote host closed the connection)
[10:26] * rogierm (~rogierm@a82-94-41-183.adsl.xs4all.nl) has joined #ceph
[10:28] * rwheeler (~rwheeler@bzq-82-81-161-51.red.bezeqint.net) has joined #ceph
[10:35] * ad_jb (~Adium@64.19.195.213) has joined #ceph
[10:35] * rogierm (~rogierm@a82-94-41-183.adsl.xs4all.nl) Quit (Remote host closed the connection)
[10:36] <ad_jb> Hello, is anyone here? I am new to Ceph and have a question about the CRUSH Maps. It is my understanding that the crush map is used to read and write to the OSDs. However in the instance of having 2 data centers geographically separated with a higher latency, how can you access the same pools from the local devices at each side? I want to distribute across data centers, but read from only one. Is this possible?
[10:39] * rogierm (~rogierm@2001:985:1c56:1:dc23:ed5a:7346:5d76) has joined #ceph
[11:00] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[11:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[11:02] <IcePic> I'm not sure, but I think the crush map isn't split into read and write, so whatever rules are in the crush map go for both reads and writes, I guess.
[11:04] <[arx]> you could create the pool in each region and replicate snapshots to the second pool that you don't want to read from.
[11:05] <[arx]> or use federated gateways if you are using rgw
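[As an aside -- the "distribute across data centers" part of ad_jb's question is normally expressed as a CRUSH rule; a rough sketch, assuming the hosts are already grouped under two "datacenter" buckets in the CRUSH map (all names are illustrative):]

    # dump and decompile the current map, edit, recompile, inject
    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt

    # rule added to crush.txt: pick both datacenters, then one host in each
    rule replicated_two_dc {
            ruleset 1
            type replicated
            min_size 2
            max_size 4
            step take default
            step choose firstn 2 type datacenter
            step chooseleaf firstn 1 type host
            step emit
    }

    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new
    ceph osd pool set mypool crush_ruleset 1   # apply it to a pool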
[11:12] * offerlam (~androirc@static-5-103-131-129.seas-nve.net) Quit (Remote host closed the connection)
[11:13] * offerlam (~androirc@static-5-103-131-129.seas-nve.net) has joined #ceph
[11:18] * rogierm (~rogierm@2001:985:1c56:1:dc23:ed5a:7346:5d76) Quit (Remote host closed the connection)
[11:19] * ad_jb (~Adium@64.19.195.213) Quit (Quit: Leaving.)
[11:23] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[11:23] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[11:27] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[11:27] * ade (~abradshaw@dslb-092-078-134-013.092.078.pools.vodafone-ip.de) has joined #ceph
[11:30] * offerlam (~androirc@static-5-103-131-129.seas-nve.net) Quit (Remote host closed the connection)
[11:34] * offerlam (~androirc@static-5-103-131-129.seas-nve.net) has joined #ceph
[11:39] * rendar (~I@host65-178-dynamic.49-79-r.retail.telecomitalia.it) has joined #ceph
[11:45] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[11:46] * nardial (~ls@dslb-088-072-094-077.088.072.pools.vodafone-ip.de) has joined #ceph
[11:47] * olid1982 (~olid1982@aftr-185-17-206-52.dynamic.mnet-online.de) has joined #ceph
[11:57] * offerlam (~androirc@static-5-103-131-129.seas-nve.net) Quit (Remote host closed the connection)
[11:58] * offerlam (~androirc@static-5-103-131-129.seas-nve.net) has joined #ceph
[11:59] * Nacer (~Nacer@176.31.89.99) has joined #ceph
[12:00] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[12:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[12:05] * offerlam (~androirc@static-5-103-131-129.seas-nve.net) Quit (Remote host closed the connection)
[12:05] * offerlam (~androirc@static-5-103-131-129.seas-nve.net) has joined #ceph
[12:06] * Keiya (~mollstam@192.42.115.101) has joined #ceph
[12:11] * Nacer_ (~Nacer@vir78-1-82-232-38-190.fbx.proxad.net) has joined #ceph
[12:11] * Nacer_ (~Nacer@vir78-1-82-232-38-190.fbx.proxad.net) Quit (Remote host closed the connection)
[12:12] * Nacer_ (~Nacer@176.31.89.99) has joined #ceph
[12:14] * Nacer__ (~Nacer@vir78-1-82-232-38-190.fbx.proxad.net) has joined #ceph
[12:15] * Nacer___ (~Nacer@vir78-1-82-232-38-190.fbx.proxad.net) has joined #ceph
[12:16] * Nacer__ (~Nacer@vir78-1-82-232-38-190.fbx.proxad.net) Quit (Read error: Connection reset by peer)
[12:16] * Nacer___ (~Nacer@vir78-1-82-232-38-190.fbx.proxad.net) Quit (Remote host closed the connection)
[12:16] * Nacer_ (~Nacer@176.31.89.99) Quit (Read error: Connection reset by peer)
[12:17] * Nacer_ (~Nacer@176.31.89.99) has joined #ceph
[12:17] * Nacer (~Nacer@176.31.89.99) Quit (Ping timeout: 480 seconds)
[12:26] * Lego (~Geoffrey@169-0-138-190.ip.afrihost.co.za) has joined #ceph
[12:27] * Nacer_ (~Nacer@176.31.89.99) Quit (Remote host closed the connection)
[12:30] * Lego (~Geoffrey@169-0-138-190.ip.afrihost.co.za) Quit ()
[12:31] * Geo (~Geoffrey@169-0-138-190.ip.afrihost.co.za) has joined #ceph
[12:36] * Keiya (~mollstam@6YRAABVA5.tor-irc.dnsbl.oftc.net) Quit ()
[12:41] * olid1983 (~olid1982@p5484AB03.dip0.t-ipconnect.de) has joined #ceph
[12:41] * T1 (~the_one@87.104.212.66) Quit (Read error: Connection reset by peer)
[12:44] * offerlam (~androirc@static-5-103-131-129.seas-nve.net) Quit (Remote host closed the connection)
[12:44] * T1 (~the_one@87.104.212.66) has joined #ceph
[12:45] * offerlam (~androirc@static-5-103-131-129.seas-nve.net) has joined #ceph
[12:46] * whydidyoustealmynick (~shakamuny@c-67-180-191-38.hsd1.ca.comcast.net) has joined #ceph
[12:47] * LDA (~DM@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[12:48] * olid1982 (~olid1982@aftr-185-17-206-52.dynamic.mnet-online.de) Quit (Ping timeout: 480 seconds)
[12:48] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[12:49] * Vale (~smf68@uncle-enzo.mit.edu) has joined #ceph
[12:53] * offerlam (~androirc@static-5-103-131-129.seas-nve.net) Quit (Remote host closed the connection)
[12:53] * barra204 (~shakamuny@c-67-180-191-38.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[12:53] * ade (~abradshaw@dslb-092-078-134-013.092.078.pools.vodafone-ip.de) Quit (Quit: Too sexy for his shirt)
[12:55] * Geo (~Geoffrey@169-0-138-190.ip.afrihost.co.za) Quit (Quit: Going offline, see ya! (www.adiirc.com))
[12:57] * yanzheng (~zhyan@182.139.23.32) has joined #ceph
[13:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[13:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[13:05] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[13:08] * nardial (~ls@dslb-088-072-094-077.088.072.pools.vodafone-ip.de) Quit (Quit: Leaving)
[13:17] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[13:19] * Vale (~smf68@6YRAABVBT.tor-irc.dnsbl.oftc.net) Quit ()
[13:20] * bsukfh (~abcdef@41.46.195.204) has joined #ceph
[13:21] <bsukfh> ?
[13:21] <bsukfh> .
[13:21] <bsukfh> .
[13:21] <bsukfh> .
[13:21] <bsukfh> .
[13:21] <bsukfh> .did usa intel supply isis with weapons like they did with al-qaeda to justify creating wars?
[13:21] <bsukfh> does the breakout of wars and violence in the middle east represent creative chaos usa declared to make in the middle east?
[13:21] <bsukfh> iraq&syria suffered too much.plz,send others my qs ,help to limit usa&israel aggression against others.
[13:21] * bsukfh (~abcdef@41.46.195.204) has left #ceph
[13:26] * olid1984 (~olid1982@aftr-185-17-206-52.dynamic.mnet-online.de) has joined #ceph
[13:27] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[13:31] * rwheeler (~rwheeler@bzq-82-81-161-51.red.bezeqint.net) Quit (Quit: Leaving)
[13:32] * olid1983 (~olid1982@p5484AB03.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[13:39] * olid1985 (~olid1982@p5484AB03.dip0.t-ipconnect.de) has joined #ceph
[13:41] * yanzheng (~zhyan@182.139.23.32) Quit (Quit: This computer has gone to sleep)
[13:43] * yanzheng (~zhyan@182.139.23.32) has joined #ceph
[13:44] * amospalla (~amospalla@0001a39c.user.oftc.net) Quit (Quit: WeeChat 1.0.1)
[13:44] * olid1984 (~olid1982@aftr-185-17-206-52.dynamic.mnet-online.de) Quit (Ping timeout: 480 seconds)
[13:45] * amospalla (~amospalla@0001a39c.user.oftc.net) has joined #ceph
[13:48] * offerlam (~androirc@94.191.189.64.bredband.3.dk) has joined #ceph
[13:54] * offerlam (~androirc@94.191.189.64.bredband.3.dk) Quit (Remote host closed the connection)
[13:54] * offerlam (~androirc@94.191.189.64.bredband.3.dk) has joined #ceph
[13:54] * yanzheng (~zhyan@182.139.23.32) Quit (Quit: This computer has gone to sleep)
[13:58] * offerlam (~androirc@94.191.189.64.bredband.3.dk) Quit (Read error: Connection reset by peer)
[14:00] * bvi (~bastiaan@152-64-132-5.ftth.glasoperator.nl) has joined #ceph
[14:00] * LeaChim (~LeaChim@host81-157-237-29.range81-157.btcentralplus.com) has joined #ceph
[14:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[14:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[14:12] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Remote host closed the connection)
[14:21] * frox (~oftc-webi@pD957871A.dip0.t-ipconnect.de) has joined #ceph
[14:25] <frox> hi folks, I am looking to replace my NAS with a more scalable solution, and ceph looks promising. Since I am looking for a cheap solution, I am wondering if I can run ceph on a cheap ARM cluster, like the Banana Pi or something. Has anyone had any experience with this kind of setup?
[14:26] <frox> I am interested in using the Seagate archive2 disks with it, so the recommended 1GB of memory per TB of data will not be met. Will this be a problem for the cluster?
[14:27] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[14:27] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Quit: Leaving...)
[14:29] * linjan_ (~linjan@176.195.50.55) Quit (Ping timeout: 480 seconds)
[14:43] * LeaChim (~LeaChim@host81-157-237-29.range81-157.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[14:43] * danieagle (~Daniel@179.111.172.42) has joined #ceph
[14:51] * LeaChim (~LeaChim@host86-132-236-140.range86-132.btcentralplus.com) has joined #ceph
[14:59] <T1> ceph is not suited for a NAS replacement
[15:01] <frox> well, I don't want to replace the NAS as such, I want to move stuff that is written once, deleted close to never, and mainly read off the NAS
[15:01] <T1> that still seems like a very poor job for ceph
[15:01] <frox> can you elaborate?
[15:02] <T1> you'd be better off just using those Pis as regular machines with some storage that you provide to clients via smb, nfs or whatever
[15:05] <frox> okay, so what are the "proper" use cases for ceph? from what I have read, people are using it for exactly these kinds of scenarios, i.e. backups and static data storage
[15:06] <T1> yes, but that is with "proper" hardware: 10Gbit or better interfaces for the cluster network, 1Gbit or better for the client network, 10+GB RAM, 8+ cores etc etc etc
[15:07] <T1> hm, the start of that sentence was cut off..
[15:07] <T1> I think I meant to say something along these lines:
[15:09] <T1> the usage pattern is somewhat correct, but that is mainly for large-scale (100+ TB or a few PB of storage) usage and other use cases
[15:12] * xolotl (~legion@84ZAAA0XB.tor-irc.dnsbl.oftc.net) has joined #ceph
[15:22] <frox> from the throughput I am expecting, I can't really imagine the bandwidth being much of a bottleneck, so what does ceph do that it would need that much processing power and memory even when handling, by your numbers, small data volumes?
[15:24] <T1> during backfilling
[15:25] <T1> if a single 4TB disk fails
[15:25] <T1> then all the data on that disk needs to be replicated to new OSDs in order to fulfill the required number of replicas
[15:25] <T1> that can very easily saturate a 1Gbit link
[15:26] <T1> and saturate it so much that normal client I/O grinds to a halt and read-requests are stuck waiting for several minutes
[15:27] <T1> during backfill/recovery there is also a much higher ram and cpu requirement
[15:27] <T1> MONs are much busier
[15:27] <T1> all I am saying is that you are in for a bad bad experience
[15:28] <T1> you can do it
[15:28] <T1> but you are advised not to
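[For what it's worth -- if one runs on a thin network anyway, recovery can at least be throttled so client I/O is not starved; the options are real Ceph settings, the values below are only examples:]

    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'

    # or persistently in ceph.conf
    [osd]
    osd max backfills = 1
    osd recovery max active = 1
    osd recovery op priority = 1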
[15:35] <frox> I expect the cluster to be offline when an OSD goes down until it is restored, actually. my main focus is to have scalable storage (as in easily adding/extending capacity) and reliable data replication
[15:36] * shawniverson (~shawniver@199.66.65.7) Quit (Remote host closed the connection)
[15:36] <frox> but maybe gluster will be a better fit for that
[15:36] * Geo (~Geoffrey@169-0-138-190.ip.afrihost.co.za) has joined #ceph
[15:37] <frox> the main benefit I saw in ceph is the completely heterogeneous setup, which is what most others are lacking
[15:42] * xolotl (~legion@84ZAAA0XB.tor-irc.dnsbl.oftc.net) Quit ()
[16:24] * elder_ (~elder@c-24-61-14-142.hsd1.ma.comcast.net) has joined #ceph
[16:25] * derjohn_mobi (~aj@x590d9365.dyn.telefonica.de) Quit (Read error: Connection reset by peer)
[16:40] * derjohn_mob (~aj@x4db11e4b.dyn.telefonica.de) has joined #ceph
[16:56] * olid1986 (~olid1982@aftr-185-17-206-52.dynamic.mnet-online.de) has joined #ceph
[17:03] * olid1985 (~olid1982@p5484AB03.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[17:08] * amospalla (~amospalla@0001a39c.user.oftc.net) Quit (Quit: WeeChat 1.0.1)
[17:09] * amospalla (~amospalla@0001a39c.user.oftc.net) has joined #ceph
[17:17] * Rens2Sea (~LorenXo@c9.63.01a8.ip4.static.sl-reverse.com) has joined #ceph
[17:18] * flaf (~flaf@2001:41d0:1:7044::1) Quit (Remote host closed the connection)
[17:21] * flaf (~flaf@2001:41d0:1:7044::1) has joined #ceph
[17:28] <flaf> Hi @all.
[17:28] * via_ is now known as via
[17:28] <flaf> What a pity, ad_jb is not here anymore.
[17:29] <flaf> Because concerning his question, I think primary affinity was the right answer.
[17:34] * test1_ (~francois@bau91-1-82-239-246-157.fbx.proxad.net) has joined #ceph
[17:35] <flaf> With primary affinity == 0 for each osd in dc2, reads happen only in dc1.
[17:38] * test1_ (~francois@bau91-1-82-239-246-157.fbx.proxad.net) Quit (Quit: Leaving.)
[17:42] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:ed0c:dd2f:18a6:e24d) has joined #ceph
[17:43] * bvi (~bastiaan@152-64-132-5.ftth.glasoperator.nl) Quit (Quit: Ex-Chat)
[17:46] * Rens2Sea (~LorenXo@c9.63.01a8.ip4.static.sl-reverse.com) Quit ()
[17:53] * flaf (~flaf@2001:41d0:1:7044::1) Quit (Quit: WeeChat 1.0.1)
[17:53] * flaf (~flaf@2001:41d0:1:7044::1) has joined #ceph
[17:58] * The1_ (~the_one@87.104.212.66) has joined #ceph
[17:58] * flaf (~flaf@2001:41d0:1:7044::1) Quit (Remote host closed the connection)
[17:58] * flaf (~flaf@2001:41d0:1:7044::1) has joined #ceph
[18:04] * T1 (~the_one@87.104.212.66) Quit (Ping timeout: 480 seconds)
[18:13] * ade (~abradshaw@dslb-092-078-134-013.092.078.pools.vodafone-ip.de) has joined #ceph
[18:23] * olid1986 (~olid1982@aftr-185-17-206-52.dynamic.mnet-online.de) Quit (Ping timeout: 480 seconds)
[18:26] * rogierm_ (~rogierm@2001:985:1c56:1:c504:a796:ba60:9aaa) has joined #ceph
[18:33] * Geoff (~geoff@169-0-138-190.ip.afrihost.co.za) has joined #ceph
[18:39] * olid1986 (~olid1982@aftr-185-17-206-52.dynamic.mnet-online.de) has joined #ceph
[18:43] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:44] * Geo (~Geoffrey@169-0-138-190.ip.afrihost.co.za) Quit (Quit: Going offline, see ya! (www.adiirc.com))
[18:45] * Geoff (~geoff@169-0-138-190.ip.afrihost.co.za) Quit (Quit: Nettalk6 - www.ntalk.de)
[18:46] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:ed0c:dd2f:18a6:e24d) Quit (Ping timeout: 480 seconds)
[18:47] * rogierm_ (~rogierm@2001:985:1c56:1:c504:a796:ba60:9aaa) Quit (Remote host closed the connection)
[18:49] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[18:49] * rogierm (~rogierm@2001:985:1c56:1:24c0:40a2:9fb9:9eab) has joined #ceph
[18:50] * xarses_ (~xarses@rrcs-76-79-238-170.west.biz.rr.com) Quit (Ping timeout: 480 seconds)
[18:56] * rogierm (~rogierm@2001:985:1c56:1:24c0:40a2:9fb9:9eab) Quit (Remote host closed the connection)
[19:00] * Geoff (~geoff@169-0-138-190.ip.afrihost.co.za) has joined #ceph
[19:01] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:04] * Guest3520 (~Tracer@169-0-138-190.ip.afrihost.co.za) has joined #ceph
[19:05] * Guest3520 (~Tracer@169-0-138-190.ip.afrihost.co.za) has left #ceph
[19:07] * linjan_ (~linjan@176.195.50.55) has joined #ceph
[19:07] * GeoTracer (~Tracer@169-0-138-190.ip.afrihost.co.za) has joined #ceph
[19:08] * Geoff (~geoff@169-0-138-190.ip.afrihost.co.za) Quit (Quit: Nettalk6 - www.ntalk.de)
[19:11] * GeoTracer (~Tracer@169-0-138-190.ip.afrihost.co.za) Quit ()
[19:13] * rogierm (~rogierm@a82-94-41-183.adsl.xs4all.nl) has joined #ceph
[19:20] * frox (~oftc-webi@pD957871A.dip0.t-ipconnect.de) Quit (Quit: Page closed)
[19:21] * rogierm (~rogierm@a82-94-41-183.adsl.xs4all.nl) Quit (Remote host closed the connection)
[19:22] * rogierm_ (~rogierm@2001:985:1c56:1:5852:37ff:8ec6:51d6) has joined #ceph
[19:26] * rogierm_ (~rogierm@2001:985:1c56:1:5852:37ff:8ec6:51d6) Quit (Remote host closed the connection)
[19:27] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[19:41] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:42] * ade (~abradshaw@dslb-092-078-134-013.092.078.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[20:13] * dgbaley27 (~matt@75.148.118.217) has joined #ceph
[20:21] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[20:22] * mgolub (~Mikolaj@91.225.201.213) has joined #ceph
[20:23] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:ed0c:dd2f:18a6:e24d) has joined #ceph
[20:25] * mykola (~Mikolaj@91.225.200.219) Quit (Ping timeout: 480 seconds)
[20:27] * rogierm (~rogierm@2001:985:1c56:1:f0c1:dbe4:5457:f0e5) has joined #ceph
[20:35] * rogierm (~rogierm@2001:985:1c56:1:f0c1:dbe4:5457:f0e5) Quit (Ping timeout: 480 seconds)
[20:51] * danieagle (~Daniel@179.111.172.42) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[20:53] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[20:57] * rogierm (~rogierm@2001:985:1c56:1:74e1:8ef3:b570:1625) has joined #ceph
[21:00] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:ed0c:dd2f:18a6:e24d) Quit (Ping timeout: 480 seconds)
[21:04] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (Ping timeout: 480 seconds)
[21:10] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[21:10] * rogierm (~rogierm@2001:985:1c56:1:74e1:8ef3:b570:1625) Quit (Remote host closed the connection)
[21:11] * ad_jb (~Adium@64.19.195.251) has joined #ceph
[21:17] * rogierm_ (~rogierm@2001:985:1c56:1:f8dd:f69c:f047:605) has joined #ceph
[21:19] * doppelgrau (~doppelgra@p5DC06FD9.dip0.t-ipconnect.de) has joined #ceph
[21:20] <doppelgrau> Noirjour
[21:20] * linjan_ (~linjan@176.195.50.55) Quit (Ping timeout: 480 seconds)
[21:21] <ad_jb> What is the best way to replicate data across higher latency links, but still have that data accessible locally on the other side? Is there a way to keep pools synchronized outside of the CRUSH maps?
[21:25] <doppelgrau> ad_jb: Snapshots and rbd-diff
[21:25] <doppelgrau> ad_jb: but that's only an async, one-way replication
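[A rough sketch of the snapshot + rbd-diff replication doppelgrau describes; pool, image, snapshot and host names are all made up:]

    # one-time full copy of a baseline snapshot to the remote cluster,
    # then create a matching snapshot name there
    rbd snap create rbd/vm1@base
    rbd export rbd/vm1@base - | ssh dc2 rbd import - rbd/vm1
    ssh dc2 rbd snap create rbd/vm1@base

    # periodically: take a new snapshot and ship only the delta;
    # import-diff recreates the @t1 snapshot on the remote image
    rbd snap create rbd/vm1@t1
    rbd export-diff --from-snap base rbd/vm1@t1 - | ssh dc2 rbd import-diff - rbd/vm1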
[21:32] <ad_jb> no way to have a writable pool on each side?
[21:32] * linjan_ (~linjan@176.195.50.55) has joined #ceph
[21:33] <MACscr> anyone know how to make a ceph rbd image bootable? if I map it and then check it with parted, it's not bootable
[21:33] <MACscr> if I try to set the flag with parted, it says 'Error: The flag 'boot' is not available for loop disk labels.'
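[Nobody picked this up in channel, but the "loop disk labels" message usually means parted sees a bare filesystem with no partition table on the mapped device; a hedged sketch of giving an empty image one, with device and image names made up:]

    rbd map rbd/bootimg                              # maps to e.g. /dev/rbd0
    parted /dev/rbd0 mklabel msdos                   # write a partition table (destroys existing data)
    parted /dev/rbd0 mkpart primary ext4 1MiB 100%
    parted /dev/rbd0 set 1 boot on                   # the boot flag works once a real label exists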
[21:37] * elder_ (~elder@c-24-61-14-142.hsd1.ma.comcast.net) Quit (Read error: Connection reset by peer)
[21:45] * nardial (~ls@dslb-088-072-094-077.088.072.pools.vodafone-ip.de) has joined #ceph
[21:49] * rogierm_ (~rogierm@2001:985:1c56:1:f8dd:f69c:f047:605) Quit (Remote host closed the connection)
[21:49] * rogierm (~rogierm@a82-94-41-183.adsl.xs4all.nl) has joined #ceph
[21:52] * rogierm (~rogierm@a82-94-41-183.adsl.xs4all.nl) Quit (Remote host closed the connection)
[21:53] * linjan_ (~linjan@176.195.50.55) Quit (Ping timeout: 480 seconds)
[21:53] <The1_> ad_jb: short answer: no
[21:54] <The1_> ad_jb: long answer: no, not as things are now
[21:54] * dgbaley27 (~matt@75.148.118.217) Quit (Quit: Leaving.)
[21:54] <The1_> take a look at the mailinglist - there have been some threads about that subject and why it is not possible at the moment
[21:56] <The1_> MACscr: I would think that it's not possible without librados and dependencies inside the bootloader
[21:56] * rogierm (~rogierm@a82-94-41-183.adsl.xs4all.nl) has joined #ceph
[21:59] * nardial (~ls@dslb-088-072-094-077.088.072.pools.vodafone-ip.de) Quit (Quit: Leaving)
[22:02] * rogierm (~rogierm@a82-94-41-183.adsl.xs4all.nl) Quit (Remote host closed the connection)
[22:03] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:ed0c:dd2f:18a6:e24d) has joined #ceph
[22:05] <flaf> ad_jb: you can use primary affinity
[22:05] * linjan_ (~linjan@176.195.50.55) has joined #ceph
[22:07] <flaf> ad_jb: a thing which is possible in theory is to have reads served from only one datacenter (but for writes it's always all osds that are involved).
[22:09] <flaf> ad_jb: if you set primary affinity to 0 for each osd in dc2 (for instance), all your primary osds will be in dc1 and, for reading, only the osds of dc1 will be requested.
[22:11] <flaf> But for _writing_, it changes nothing because the primary osd always waits for the acknowledgment of the non-primary osds.
[22:11] * rogierm_ (~rogierm@a82-94-41-183.adsl.xs4all.nl) has joined #ceph
[22:15] <flaf> ad_jb: about primary affinity this paragraph is short and clear => http://docs.ceph.com/docs/master/rados/operations/crush-map/#primary-affinity
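[The commands behind what flaf describes, with illustrative OSD ids; on releases of that era the monitors may also need to allow the feature explicitly:]

    # ceph.conf on the monitors (if not already enabled)
    [mon]
    mon osd allow primary affinity = true

    # demote every OSD in dc2 so primaries -- and therefore reads -- land in dc1
    ceph osd primary-affinity osd.4 0
    ceph osd primary-affinity osd.5 0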
[22:15] * yuriw1 is now known as yuriw
[22:18] * rogierm_ (~rogierm@a82-94-41-183.adsl.xs4all.nl) Quit (Remote host closed the connection)
[22:28] * mgolub (~Mikolaj@91.225.201.213) Quit (Quit: away)
[22:31] <flaf> But I think it could be cool if there was an option so that, for writing, the primary osd sends an ack to the client just after its replica is written, _without_ waiting for the acks from the non-primary osds.
[22:41] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[22:42] * i_m (~ivan.miro@88.206.113.199) Quit (Ping timeout: 480 seconds)
[22:52] * Eduardo_ (~Eduardo@bl5-1-253.dsl.telepac.pt) has joined #ceph
[22:53] <Eduardo_> greetings everyone, not sure if I'm in the right place, but I am seeking some guidance on setting up a Ceph virtual cluster
[22:55] <flaf> Eduardo_: I think you are in the right place and should ask your question.
[22:55] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[22:56] <Eduardo_> thank you <flaf>
[22:56] * LDA (~DM@host217-114-156-249.pppoe.mark-itt.net) Quit (Quit: Nettalk6 - www.ntalk.de)
[22:57] <Eduardo_> I'm starting my master's thesis on object storage and would like to try to create a virtual cluster on VMware so I could explore Ceph
[23:01] <Eduardo_> first I have some doubts about the design of the infrastructure. I'd like to start with the simplest structure possible, so I was not sure if I'm getting this right, but I was thinking of setting up 4 VMs: one with an OSD and a storage disk, another with a monitor (not sure if I can use only one monitor), another with the RADOS layer
[23:01] <Eduardo_> and finally a client VM
[23:02] * elder_ (~elder@c-24-61-14-142.hsd1.ma.comcast.net) has joined #ceph
[23:02] <Eduardo_> am I thinking about this right or did I misunderstand the system?
[23:03] <flaf> Eduardo_: first, yes it's possible to have a cluster with just one monitor (no problem for a _testing_ cluster).
[23:05] <flaf> Personally, for just basic testing, I see => 2 cluster nodes ceph1 and ceph2 (ie 2 VMs), with 1 OSD each and 1 monitor on ceph1 (for instance), plus 1 ceph-client VM (and, if you want to test radosgw, 1 more VM).
[23:06] <flaf> For testing, you can put the single monitor on an OSD server.
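[One hypothetical way to stand up the small layout flaf describes with ceph-deploy, which was the usual tool at the time; hostnames and disk names are made up:]

    ceph-deploy new ceph1                        # ceph1 carries the single monitor
    ceph-deploy install ceph1 ceph2 client1
    ceph-deploy mon create-initial
    ceph-deploy osd create ceph1:sdb ceph2:sdb   # one OSD per node
    ceph-deploy admin ceph1 ceph2 client1        # push config + admin keyring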
[23:07] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[23:07] <Eduardo_> ok
[23:07] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[23:08] <flaf> Be careful: with only 2 OSDs, you need to set the replicated pool size to 2 (ie 2 replicas, one per OSD).
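[The corresponding knobs, with "rbd" standing in for whatever pool name is used:]

    ceph osd pool set rbd size 2       # keep 2 replicas in total
    ceph osd pool set rbd min_size 1   # still serve I/O with one replica left

    # or as defaults in ceph.conf before pools are created
    [global]
    osd pool default size = 2
    osd pool default min size = 1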
[23:10] <Eduardo_> ok, thank you
[23:10] <flaf> yw.
[23:13] <Eduardo_> for them to communicate, I'll have to configure a LAN segment, I assume, and some kind of virtual switch or router
[23:14] <flaf> Generally ceph uses 2 VLANs
[23:14] <flaf> a public (clients <-> cluster nodes) and a private (between cluster nodes only).
[23:15] <flaf> But I think it's possible to configure ceph with just one public network and have it use that for the private traffic too.
[23:20] * asalor (~asalor@0001ef37.user.oftc.net) Quit (Quit: leaving)
[23:20] <Eduardo_> maybe it's best to have the two, for isolation
[23:20] <flaf> yes and perfs.
[23:21] <flaf> But if it's for a basic testing cluster, no problem with just one network.
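[The network split flaf mentions is just two lines of ceph.conf; the subnets here are made up, and leaving "cluster network" out makes Ceph use the public network for replication as well:]

    [global]
    public network  = 192.168.10.0/24   # clients <-> cluster nodes
    cluster network = 192.168.20.0/24   # replication/heartbeats between OSD nodes only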
[23:21] * asalor (~asalor@2a00:1028:96c1:4f6a:feaa:14ff:fe7f:9be2) has joined #ceph
[23:22] * rendar (~I@host65-178-dynamic.49-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[23:23] <motk> are the missing mailing list archives ever to be restored?
[23:25] * rendar (~I@host65-178-dynamic.49-79-r.retail.telecomitalia.it) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.