#ceph IRC Log

IRC Log for 2015-10-26

Timestamps are in GMT/BST.

[0:10] * xarses_ (~xarses@zz2012427835719DFD82.userreverse.dion.ne.jp) Quit (Remote host closed the connection)
[0:10] * xarses_ (~xarses@zz2012427835719DFD82.userreverse.dion.ne.jp) has joined #ceph
[0:13] * jwilkins (~jowilkin@c-67-180-123-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[0:17] * phyphor (~AGaW@89.248.162.153) Quit ()
[0:22] * Nicola-1980 (~Nicola-19@dslb-178-005-028-154.178.005.pools.vodafone-ip.de) Quit (Remote host closed the connection)
[0:27] * Nicola-1980 (~Nicola-19@dslb-178-005-028-154.178.005.pools.vodafone-ip.de) has joined #ceph
[0:31] * xarses_ (~xarses@zz2012427835719DFD82.userreverse.dion.ne.jp) Quit (Ping timeout: 480 seconds)
[0:33] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[0:33] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[0:35] * Nicola-1980 (~Nicola-19@dslb-178-005-028-154.178.005.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[0:35] * haomaiwang (~haomaiwan@122.147.80.133) Quit (Remote host closed the connection)
[0:39] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[0:39] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[0:42] * jwilkins (~jowilkin@c-67-180-123-48.hsd1.ca.comcast.net) has joined #ceph
[0:47] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[0:47] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[0:50] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[0:51] * LeaChim (~LeaChim@host86-171-91-180.range86-171.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[0:59] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[0:59] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[1:02] * rendar (~I@host183-181-dynamic.49-79-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[1:05] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[1:05] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[1:23] * georgem (~Adium@69-196-163-180.dsl.teksavvy.com) has joined #ceph
[1:24] * pabluk_ (~pabluk@2a01:e34:ec16:93e0:adcc:5e3c:41fe:c11) Quit (Ping timeout: 480 seconds)
[1:26] * georgem (~Adium@69-196-163-180.dsl.teksavvy.com) Quit ()
[1:28] * Nicola-1980 (~Nicola-19@dslb-178-005-028-154.178.005.pools.vodafone-ip.de) has joined #ceph
[1:32] * pabluk_ (~pabluk@2a01:e34:ec16:93e0:1573:d6c6:4bb4:6a6b) has joined #ceph
[1:33] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:38] * oms101 (~oms101@p20030057EA74FE00C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:42] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[1:43] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[1:46] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Ping timeout: 480 seconds)
[1:47] * oms101 (~oms101@p20030057EA014600C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:54] * xarses_ (~xarses@zz2012427835719DFD82.userreverse.dion.ne.jp) has joined #ceph
[2:09] * georgem (~Adium@206.108.127.16) has joined #ceph
[2:13] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[2:13] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[2:14] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Ping timeout: 480 seconds)
[2:14] * georgem (~Adium@206.108.127.16) Quit (Read error: Connection reset by peer)
[2:14] * georgem (~Adium@69-196-163-180.dsl.teksavvy.com) has joined #ceph
[2:20] * evl (~chatzilla@39.138.216.139.sta.dodo.net.au) has joined #ceph
[2:29] * yanzheng (~zhyan@182.139.21.176) has joined #ceph
[2:36] * kefu (~kefu@114.86.210.253) has joined #ceph
[2:41] * w0lfeh (~nastidon@tor-exit.squirrel.theremailer.net) has joined #ceph
[2:49] * georgem (~Adium@69-196-163-180.dsl.teksavvy.com) Quit (Quit: Leaving.)
[2:54] * xarses_ (~xarses@zz2012427835719DFD82.userreverse.dion.ne.jp) Quit (Remote host closed the connection)
[2:54] * xarses_ (~xarses@zz2012427835719DFD82.userreverse.dion.ne.jp) has joined #ceph
[3:03] * jdillaman_afk (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman_afk)
[3:04] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[3:11] * w0lfeh (~nastidon@5P6AAADYE.tor-irc.dnsbl.oftc.net) Quit ()
[3:11] * naoto (~naotok@2401:bd00:b001:8920:27:131:11:254) has joined #ceph
[3:21] * kefu is now known as kefu|afk
[3:29] * Kyso_ (~Shadow386@94.242.228.43) has joined #ceph
[3:29] * zhaochao (~zhaochao@125.39.8.235) has joined #ceph
[3:35] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[3:35] * codice_ (~toodles@75-128-34-237.static.mtpk.ca.charter.com) has joined #ceph
[3:37] * codice (~toodles@75-128-34-237.static.mtpk.ca.charter.com) Quit (Ping timeout: 480 seconds)
[3:40] * BranchPredictor (branch@predictor.org.pl) Quit (Ping timeout: 480 seconds)
[3:41] * naoto_ (~naotok@27.131.11.254) has joined #ceph
[3:42] * swami1 (~swami@163.138.224.245) has joined #ceph
[3:43] * BranchPredictor (branch@predictor.org.pl) has joined #ceph
[3:47] * naoto (~naotok@2401:bd00:b001:8920:27:131:11:254) Quit (Ping timeout: 480 seconds)
[3:59] * Kyso_ (~Shadow386@7V7AAAQVP.tor-irc.dnsbl.oftc.net) Quit ()
[3:59] * SEBI1 (~raindog@4Z9AAADSL.tor-irc.dnsbl.oftc.net) has joined #ceph
[4:08] * kefu|afk is now known as kefu
[4:19] * wushudoin (~wushudoin@2601:646:8201:7769:2ab2:bdff:fe0b:a6ee) has joined #ceph
[4:19] * swami1 (~swami@163.138.224.245) Quit (Quit: Leaving.)
[4:26] * overclk (~overclk@121.244.87.117) has joined #ceph
[4:29] * SEBI1 (~raindog@4Z9AAADSL.tor-irc.dnsbl.oftc.net) Quit ()
[4:29] * Sokrates (~gh@130.193.246.164) has joined #ceph
[4:30] * haomaiwang (~haomaiwan@122.147.80.133) has joined #ceph
[4:30] * Sokrates (~gh@130.193.246.164) Quit ()
[4:34] * evl (~chatzilla@39.138.216.139.sta.dodo.net.au) Quit (Quit: ChatZilla 0.9.92 [Firefox 41.0.2/20151015125802])
[4:41] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[4:42] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[4:42] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[4:49] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[4:49] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[4:49] * haomaiwang (~haomaiwan@122.147.80.133) Quit (Ping timeout: 480 seconds)
[4:51] * evl (~chatzilla@39.138.216.139.sta.dodo.net.au) has joined #ceph
[4:54] * haomaiwang (~haomaiwan@122.147.80.133) has joined #ceph
[4:55] * wushudoin (~wushudoin@2601:646:8201:7769:2ab2:bdff:fe0b:a6ee) Quit (Quit: Leaving)
[5:02] * haomaiwang (~haomaiwan@122.147.80.133) Quit (Remote host closed the connection)
[5:10] * evl (~chatzilla@39.138.216.139.sta.dodo.net.au) Quit (Quit: ChatZilla 0.9.92 [Firefox 41.0.2/20151015125802])
[5:15] * Vacuum__ (~Vacuum@88.130.196.69) has joined #ceph
[5:22] * Vacuum_ (~Vacuum@88.130.215.251) Quit (Ping timeout: 480 seconds)
[5:27] * Jaska (~Deiz@tsn109-201-154-157.dyn.nltelcom.net) has joined #ceph
[5:37] * kefu (~kefu@114.86.210.253) Quit (Quit: My Mac has gone to sleep. ZZZzzz...)
[5:38] * vbellur (~vijay@122.172.223.120) Quit (Ping timeout: 480 seconds)
[5:51] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[5:51] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[5:57] * Jaska (~Deiz@4Z9AAADUW.tor-irc.dnsbl.oftc.net) Quit ()
[6:15] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:25] * adun153 (~ljtirazon@112.198.90.179) has joined #ceph
[6:26] <adun153> Hi, I have a ceph newbie question: Let's say I have a 3-node cluster running, which IP address should I point my client to, to communicate with the ceph storage cluster?
[6:27] <adun153> Is it preferable to have a virtual IP shared by the three?
[6:28] * rdas (~rdas@122.168.210.191) has joined #ceph
[6:33] * blip2 (~Kaervan@tsn109-201-154-157.dyn.nltelcom.net) has joined #ceph
[6:34] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[6:34] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[6:37] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[6:42] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[6:42] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Remote host closed the connection)
[6:53] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[6:53] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[6:55] * amote (~amote@121.244.87.116) has joined #ceph
[6:55] <m0zes> adun153: the ceph.conf should have references to the monitor(s). from there, the clients will get all information necessary.
[6:56] <m0zes> a floating ip isn't necessary.
[6:56] * rakeshgm (~rakesh@121.244.87.124) has joined #ceph
[6:56] * rakesh_ (~rakesh@121.244.87.124) has joined #ceph
[6:56] * rakeshgm (~rakesh@121.244.87.124) Quit ()
[6:56] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[6:57] <m0zes> unless you're talking radosgw, then the ceph client is radosgw, and you would have clients pointing to the radosgw server(s).
[6:57] <adun153> m0zes: So, the monitors sort of act as the "interface", right?
[6:58] <adun153> Any requests done by remote servers should be directed to where the monitor is?
[6:58] <m0zes> more or less, they keep the cluster consistent, and hand out information about other monitors, about the osds, crushmap and mds servers.
[6:58] <adun153> Or I guess I don't understand ceph yet.
[6:59] <m0zes> correct. the ceph.conf should reference all monitors, and the client will try them at random, until they find one that is up and in quorum. from there the client will get the rest of the information necessary to function.
[7:00] * nardial (~ls@dslb-088-072-094-085.088.072.pools.vodafone-ip.de) has joined #ceph
[7:00] <m0zes> (well, besides access tokens/keyrings)
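
A minimal sketch of the client-side ceph.conf that m0zes is describing — just the monitor addresses plus cephx settings; the fsid and IPs here are hypothetical, and the keyring is handled separately as he notes:

    [global]
        fsid = 11111111-2222-3333-4444-555555555555
        mon_host = 192.168.0.11,192.168.0.12,192.168.0.13
        auth_cluster_required = cephx
        auth_service_required = cephx
        auth_client_required = cephx
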
[7:00] <adun153> So, let's say that I have that 3-node cluster setup, and each node has a monitor on it as well. I have a server with KVM/libvirt on it, and I want that server to use the ceph cluster as a backing disk for VMs. How does the process work?
[7:01] <adun153> Just to help me understand the flow of information within the network.
[7:01] <adun153> m0zes
[7:03] * derjohn_mob (~aj@p54BF82B6.dip0.t-ipconnect.de) has joined #ceph
[7:03] <m0zes> libvirt starts a vm, qemu reads ceph.conf, and is handed a keyring, then it will connect to one of the monitors. it will grab an osdmap and a crushmap. it will read the pool information for the disk it is trying to access, grab an exclusive lock and start reading the individual objects that make up an rbd volume.
[7:03] * blip2 (~Kaervan@4Z9AAADWH.tor-irc.dnsbl.oftc.net) Quit ()
[7:03] <adun153> I see, so does this mean that I need to install a ceph client on the KVM server?
[7:04] * rakesh_ (~rakesh@121.244.87.124) Quit (Ping timeout: 480 seconds)
[7:04] <m0zes> if you are wanting a userspace connection to the rbd volumes, yes. it can be done with the kernel interface, but it isn't as fast (in my experience)
[7:05] <m0zes> qemu has to be built with rbd support for the userspace connections to work.
[7:05] <m0zes> qemu should already be built with rbd support in el7 and jessie, not sure about other distributions/versions
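
A hedged sketch of the libvirt disk definition behind the userspace (librbd) path m0zes describes; the pool/image name, monitor address, and secret UUID are all hypothetical:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <auth username='libvirt'>
        <secret type='ceph' uuid='11111111-2222-3333-4444-555555555555'/>
      </auth>
      <source protocol='rbd' name='rbd/vm-disk-1'>
        <host name='192.168.0.11' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>
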
[7:08] <adun153> I see, so each client has to have the IP address of the monitors in the ceph conf file, the client asks one of the monitors for a certain file/block and the monitor responds with a URI for the locations of the data. Then the client queries nodes of the cluster directly for the data. Did I get that right?
[7:09] <m0zes> more or less, afaict, the monitors don't hand back a uri. they give the client a copy of the crushmap (i.e. a distributed weighted hash table) that lets the clients calculate where the chunk of data is/should be.
[7:09] * naoto_ (~naotok@27.131.11.254) Quit (Quit: Leaving...)
[7:09] * trociny (~mgolub@93.183.239.2) Quit (Remote host closed the connection)
[7:10] * naoto (~naotok@27.131.11.254) has joined #ceph
[7:10] <adun153> So, no need at all for a virtual IP.
[7:10] <m0zes> correct.
[7:10] <adun153> I think I get it now. Thanks, m0zes!
[7:10] <adun153> :)
[7:10] * evl (~chatzilla@39.138.216.139.sta.dodo.net.au) has joined #ceph
[7:11] * kefu (~kefu@114.86.210.253) has joined #ceph
[7:12] <m0zes> they try to keep the monitors out of the data path, as they would quickly become overloaded with requests if *every* request had to go through them.
[7:13] * trociny (~mgolub@93.183.239.2) has joined #ceph
[7:13] <m0zes> anyway, I've got work in 6 hours. time for bed. This channel is usually much more active in about 8 hours.
[7:14] * evl (~chatzilla@39.138.216.139.sta.dodo.net.au) Quit ()
[7:14] <adun153> Thanks again!
[7:19] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[7:19] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[7:21] * HoboPickle (~TomyLobo@4Z9AAADYA.tor-irc.dnsbl.oftc.net) has joined #ceph
[7:24] * jcsp (~jspray@2402:c800:ff64:300:df3d:9e5b:2e44:13e7) has joined #ceph
[7:24] * evl (~chatzilla@39.138.216.139.sta.dodo.net.au) has joined #ceph
[7:24] * evl (~chatzilla@39.138.216.139.sta.dodo.net.au) Quit ()
[7:29] * karnan (~karnan@121.244.87.117) has joined #ceph
[7:33] <Guest6560> What are good ways to backup ceph? I'm planning to use it as an object storage (rados-gw) and looking for a data protection strategy.
[7:36] <lathiat> Guest6560: answer to that seems complicated from what i've read. i didn't find any good docs on generic ceph level backing up.. more on specific apps, mostly RBD
[7:38] <adun153> Guest6560: You want to back up a whole Ceph cluster?
[7:38] <Guest6560> Yes
[7:41] <Kvisle> Guest6560: I've made a solution for backing up radosgw
[7:42] <Guest6560> That seems useful. Can you share it?
[7:43] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[7:43] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[7:46] <Kvisle> Guest6560: it's a plugin to bareos-fd, so it's only compatible with bareos as of now ... it basically works as a bridge between the s3 api and bareos ... kind of like fuse, but it's fast enough to be useful for backup purposes, and we have support for incremental backups
[7:48] <Kvisle> to be honest, direct restore hasn't been solved yet - so we need to restore to an ordinary file system and sync files to radosgw to perform a restore
[7:49] <Kvisle> I intend to release the plugin at some point, when I get time to clean it up and perhaps solve the restore problem
[7:49] <Kvisle> but unless you're using bareos yourself, it's probably of little interest
[7:51] * HoboPickle (~TomyLobo@4Z9AAADYA.tor-irc.dnsbl.oftc.net) Quit ()
[7:53] * jcsp (~jspray@2402:c800:ff64:300:df3d:9e5b:2e44:13e7) Quit (Ping timeout: 480 seconds)
[7:56] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[7:56] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[7:56] * galaxyAbstractor (~Crisco@5P6AAAD9G.tor-irc.dnsbl.oftc.net) has joined #ceph
[8:01] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[8:01] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[8:06] * sileht (~sileht@sileht.net) Quit (Ping timeout: 480 seconds)
[8:07] * evl (~chatzilla@39.138.216.139.sta.dodo.net.au) has joined #ceph
[8:13] * enax (~enax@hq.ezit.hu) has joined #ceph
[8:13] * sileht (~sileht@sileht.net) has joined #ceph
[8:15] * amote (~amote@121.244.87.116) Quit (Quit: Leaving)
[8:15] * jrocha_ (~jrocha@AAnnecy-651-1-290-164.w90-27.abo.wanadoo.fr) has joined #ceph
[8:15] * amote (~amote@121.244.87.116) has joined #ceph
[8:16] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[8:21] * branto1 (~branto@ip-213-220-232-132.net.upcbroadband.cz) has joined #ceph
[8:23] * kefu (~kefu@114.86.210.253) Quit (Max SendQ exceeded)
[8:24] * kefu (~kefu@114.86.210.253) has joined #ceph
[8:26] * galaxyAbstractor (~Crisco@5P6AAAD9G.tor-irc.dnsbl.oftc.net) Quit ()
[8:26] * Nicola-1980 (~Nicola-19@dslb-178-005-028-154.178.005.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[8:30] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[8:35] * kefu (~kefu@114.86.210.253) Quit (Max SendQ exceeded)
[8:36] * kefu (~kefu@114.86.210.253) has joined #ceph
[8:37] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[8:38] <Be-El> hi
[8:38] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[8:44] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[8:45] * kawa2014 (~kawa@151.33.10.211) has joined #ceph
[8:46] <cetex> JarekO_: reposting again: Did you find a resolution to the rgw_obj_remove(): cls_cxx_remove returned -2?
[8:46] <cetex> i've done a full deep scrub :>
[8:47] <sep> stj, thanks ill try that one
[8:52] * derjohn_mob (~aj@p54BF82B6.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[8:52] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[8:53] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[8:54] * jrocha_ (~jrocha@AAnnecy-651-1-290-164.w90-27.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[8:57] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:cd4d:d92f:19ae:81de) has joined #ceph
[8:57] * adun153 (~ljtirazon@112.198.90.179) Quit (Read error: Connection reset by peer)
[8:59] * kefu (~kefu@114.86.210.253) Quit (Max SendQ exceeded)
[8:59] * kefu (~kefu@114.86.210.253) has joined #ceph
[9:02] * b0e (~aledermue@213.95.25.82) has joined #ceph
[9:09] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[9:10] * evl (~chatzilla@39.138.216.139.sta.dodo.net.au) Quit (Quit: ChatZilla 0.9.92 [Firefox 41.0.2/20151015125802])
[9:11] * evl (~chatzilla@39.138.216.139.sta.dodo.net.au) has joined #ceph
[9:11] * adun153 (~ljtirazon@112.198.90.179) has joined #ceph
[9:13] * kefu (~kefu@114.86.210.253) Quit (Max SendQ exceeded)
[9:13] * kefu (~kefu@114.86.210.253) has joined #ceph
[9:13] * analbeard (~shw@support.memset.com) has joined #ceph
[9:16] * pabluk_ is now known as pabluk
[9:16] * linjan_ (~linjan@176.195.92.125) has joined #ceph
[9:17] * jasuarez (~jasuarez@243.Red-81-39-64.dynamicIP.rima-tde.net) has joined #ceph
[9:17] * dan (~dan@pb-d-128-141-3-42.cern.ch) has joined #ceph
[9:19] * dan_ (~dan@dvanders-pro.cern.ch) has joined #ceph
[9:26] * dan (~dan@pb-d-128-141-3-42.cern.ch) Quit (Ping timeout: 480 seconds)
[9:27] * kefu (~kefu@114.86.210.253) Quit (Max SendQ exceeded)
[9:27] * kefu (~kefu@114.86.210.253) has joined #ceph
[9:31] * dgurtner (~dgurtner@178.197.231.233) has joined #ceph
[9:33] * shinobu (~oftc-webi@nat-pool-nrt-t1.redhat.com) Quit (Ping timeout: 480 seconds)
[9:34] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[9:34] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[9:37] * kawa2014 (~kawa@151.33.10.211) Quit (Read error: Connection reset by peer)
[9:38] * kawa2014 (~kawa@151.33.10.211) has joined #ceph
[9:42] * rendar (~I@95.234.183.170) has joined #ceph
[9:44] * Nicola-1980 (~Nicola-19@178.19.210.162) has joined #ceph
[9:46] * fsimonce (~simon@host30-173-dynamic.23-79-r.retail.telecomitalia.it) has joined #ceph
[9:46] * linjan_ (~linjan@176.195.92.125) Quit (Ping timeout: 480 seconds)
[9:48] * kefu (~kefu@114.86.210.253) Quit (Max SendQ exceeded)
[9:49] * kefu (~kefu@114.86.210.253) has joined #ceph
[9:53] * rotbeard (~redbeard@aftr-95-222-29-74.unity-media.net) has joined #ceph
[9:56] * fauxhawk (~Rens2Sea@62-210-77-149.rev.poneytelecom.eu) has joined #ceph
[9:56] * Nacer (~Nacer@cpc14-lewi15-2-0-cust249.2-4.cable.virginm.net) has joined #ceph
[10:04] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) has joined #ceph
[10:05] * Nacer (~Nacer@cpc14-lewi15-2-0-cust249.2-4.cable.virginm.net) Quit (Remote host closed the connection)
[10:06] * kefu (~kefu@114.86.210.253) Quit (Max SendQ exceeded)
[10:07] * kefu (~kefu@114.86.210.253) has joined #ceph
[10:09] * yanzheng1 (~zhyan@125.71.108.204) has joined #ceph
[10:11] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[10:11] * ade (~abradshaw@tmo-109-73.customers.d1-online.com) has joined #ceph
[10:12] * redf_ (~red@chello084112110034.11.11.vie.surfer.at) has joined #ceph
[10:12] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[10:13] * yanzheng (~zhyan@182.139.21.176) Quit (Ping timeout: 480 seconds)
[10:18] * adun153 (~ljtirazon@112.198.90.179) Quit (Quit: Leaving)
[10:18] * red (~red@chello084112110034.11.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[10:19] * LeaChim (~LeaChim@host86-143-17-156.range86-143.btcentralplus.com) has joined #ceph
[10:20] * Nicola-1_ (~Nicola-19@178.19.210.162) has joined #ceph
[10:20] * Nicola-1980 (~Nicola-19@178.19.210.162) Quit (Read error: Connection reset by peer)
[10:21] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:25] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[10:26] * fauxhawk (~Rens2Sea@4K6AAB9Q1.tor-irc.dnsbl.oftc.net) Quit ()
[10:29] * hroussea (~hroussea@000200d7.user.oftc.net) Quit (Ping timeout: 480 seconds)
[10:30] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[10:30] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[10:33] * kefu (~kefu@114.86.210.253) Quit (Max SendQ exceeded)
[10:33] * kefu (~kefu@114.86.210.253) has joined #ceph
[10:34] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[10:38] * haomaiwang (~haomaiwan@122.147.80.133) has joined #ceph
[10:39] * haomaiwang (~haomaiwan@122.147.80.133) Quit (Remote host closed the connection)
[10:44] * fen (~fen@HSI-KBW-217-008-056-240.hsi.kabelbw.de) has joined #ceph
[10:44] <fen> good morning
[10:46] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[10:47] <fen> is it enough to have two physical osd machines when i want to have simple redundancy with one replica? or do i need to have three osd nodes and two replicas as the minimum for production
[10:47] <T1w> it depends on how important you think your production data is
[10:48] <fen> simple-redundancy-important
[10:48] * adun153 (~ljtirazon@112.198.90.179) has joined #ceph
[10:48] <T1w> recent mail on the users mailinglist tells a story of multiple failures resulting in loss with just 2 replicas
[10:48] <T1w> .. and it happened more than once for this person
[10:48] <fen> ok - well then maybe it's not such a great idea..
[10:48] <fen> thank you
[10:49] <T1w> if one OSD with a 4TB drive fails, then you have just 1 other copy
[10:49] <T1w> 4TB data is a lot - and it takes time to rebalance
[10:49] <Gugge-47527> if you have size=2, and lose a disk on two different nodes, you lose data :)
[10:49] <fen> that's a good argument
[10:49] <T1w> during that window another "right" failure will cause you to lose data
[10:50] <T1w> someone wrote that at CERN they use 4 copies - believing that 3 copies is not enough to ensure no dataloss
[10:50] <kiranos> yes its the same as raid5
[10:50] <fen> it's the same with raid
[10:50] <kiranos> raid6 is like 3 replicas, raid5,raid10 2 replicas
[10:50] <fen> yeah :-)
[10:51] <fen> i'll try to push to one more physical node - the argument with the rebalancing time is probably a really good one
[10:51] <kiranos> but yes it "steals" a lot of production TB to have 3 replicas
[10:51] <fen> we have more than enough - so that's not the problem
[10:51] <fen> the issue is to buy one more node
[10:51] <T1w> a raid10 over "enough disks" should be able to tolerate 3 or even 4 failures at the same time - as long as it's the right disks that fail
[10:51] <fen> budget wise
[10:51] <T1w> it's just not possible to predict
[10:52] <kiranos> T1w: yes but its not as safe as raid6 where any 2 drives can fail
[10:52] <T1w> kiranos: indeed..
[10:52] <fen> that's right - and if one drive fails often the next one fails within days (if same lot)
[10:53] <T1w> I almost shat horses one time when I replaced a failed drive in a raid6 array - upon insert of the new drive another drive failed..
[10:54] <kiranos> an argument about raid5 etc is that there's a pretty high risk of a healthy drive failing if a raid5 array becomes degraded; the rebuilding causes a lot of stress/disk activity on the remaining drives, so there's a high % that another drive will crash during this period compared to normal production use
[10:56] <Gugge-47527> I use raid5 for osd's and size=2 ... i can lose a disk in each raidset, and a whole raidset on each node, without losing data :P
[10:57] * Heliwr (~Quatrokin@83.149.126.156) has joined #ceph
[10:58] * joao (~joao@8.184.114.89.rev.vodafone.pt) has joined #ceph
[10:58] * ChanServ sets mode +o joao
[11:00] <kiranos> Gugge-47527: how many disks do you "save" with this setup? and how many drives are in your raid5?
[11:00] <Gugge-47527> 6 drives in each
[11:00] <fen> wouldn't it be more efficient to have the disks each on their own mountpoint, running one osd daemon per mountpoint?
[11:01] <Gugge-47527> number chosen because my 24bay supermicro machines have 6 in each column :)
[11:01] <Gugge-47527> i would have to run with size 3 or 4 then, and lose a lot more space
[11:01] <kiranos> Gugge-47527: yes that would lower the cost
[11:01] <Gugge-47527> and performance is more than fine for this setup
[11:02] <Gugge-47527> i just need a lot of space, but not much io :)
[11:02] <kiranos> yes, and a pretty big cluster with lots of physical disks to spread the pg's
[11:02] <kiranos> as your osd's are so big
[11:02] <Gugge-47527> its around 350TB right now
[11:03] <Gugge-47527> each osd is 6x4TB disks
[11:03] <kiranos> nice, yes then you lower the cost drastically compared to raw replica = 3
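
Rough arithmetic behind that comparison, using the figures above (6 x 4TB drives per raidset, size=2, versus plain JBOD with size=3):

    RAID5 + size=2: (6 - 1) x 4TB = 20TB per OSD, 20TB / 2 = 10TB usable per 24TB raw (~42%)
    JBOD  + size=3: 24TB / 3 = 8TB usable per 24TB raw (~33%)
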
[11:04] <Gugge-47527> i really should test EC + cache tier some time :)
[11:04] <Gugge-47527> could lower it even more :)
[11:04] <kiranos> I've looked over it, EC is not recommended for anything other than object storage
[11:04] <kiranos> its supposedly very slow
[11:04] <Gugge-47527> my setup is not really recommended either :)
[11:04] * kefu is now known as kefu|afk
[11:06] <kiranos> Gugge-47527: :) but raid5 still seems like a valid choice to me, don't see any real downside, its a pretty low % that two raid5's will be unusable at the same time
[11:06] <kiranos> one can become unusable and the other degraded, but it will still be able to recover
[11:07] <Gugge-47527> yes
[11:08] <Gugge-47527> with size=2 you just have to make sure you never lose an osd on two nodes at the same time :)
[11:09] <fen> do the osd daemons know about their physical location?
[11:09] <fen> so that they know where to put the replicas?
[11:09] <kiranos> yes check ceph osd tree
[11:10] <kiranos> its handled by the crushmap
[11:10] <fen> ok - great..
[11:10] <kiranos> so it never places pg on the same physical server
[11:10] <fen> and it's best to have one osd daemon per physical drive right
[11:11] <kiranos> fen: yes
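
A sketch of the kind of default replicated CRUSH rule kiranos is referring to, as it appears in a stock decompiled crushmap; the chooseleaf step over the host bucket type is what keeps replicas off the same physical server:

    rule replicated_ruleset {
            ruleset 0
            type replicated
            min_size 1
            max_size 10
            step take default
            step chooseleaf firstn 0 type host
            step emit
    }
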
[11:13] * swami1 (~swami@zz2012427835719DFD82.userreverse.dion.ne.jp) has joined #ceph
[11:13] <fen> kiranos: so i do not need a raid-capable controller in the system, jbod is just fine
[11:14] * kefu|afk is now known as kefu
[11:15] * linjan_ (~linjan@86.62.112.22) has joined #ceph
[11:17] * daviddcc (~dcasier@LCaen-656-1-144-187.w217-128.abo.wanadoo.fr) has joined #ceph
[11:19] <kiranos> fen: yes jbod is the recommended approach
[11:20] <kiranos> with pool size 3
[11:20] <kiranos> min_size 2
[11:20] <kiranos> which defaults use
[11:20] <kiranos> min_size is pool size / 2, but goes to 2
[11:20] <fen> ok - i'll look up what pool size and min_size are (newbie) :)
[11:20] <kiranos> number of replicas
[11:21] <kiranos> and when the cluster should stall (not write anything, to minimize risk of corruption)
[11:21] <fen> ah ok
[11:21] <kiranos> if there is only one replica left (should never happen) if you have min_size 2 it will stall the read/writes
[11:21] <fen> at 2 replicas must be available to accept writes
[11:21] <kiranos> yes
[11:22] <fen> sounds like cassandra sometimes :-D
[11:22] <kiranos> never used :)
[11:23] <fen> i never used ceph - so we're even :-)
[11:25] <kiranos> ;)
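
A minimal sketch of checking and setting those values on an existing pool with the standard ceph CLI; the pool name rbd is just an example:

    ceph osd pool get rbd size
    ceph osd pool get rbd min_size
    ceph osd pool set rbd size 3
    ceph osd pool set rbd min_size 2
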
[11:27] * Heliwr (~Quatrokin@5P6AAAEFH.tor-irc.dnsbl.oftc.net) Quit ()
[11:30] * adun153 (~ljtirazon@112.198.90.179) Quit (Read error: Connection reset by peer)
[11:32] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[11:32] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[11:35] * vbellur (~vijay@121.244.87.117) has joined #ceph
[11:36] * georgem (~Adium@69-196-163-180.dsl.teksavvy.com) has joined #ceph
[11:38] * georgem (~Adium@69-196-163-180.dsl.teksavvy.com) Quit ()
[11:38] * kefu (~kefu@114.86.210.253) Quit (Max SendQ exceeded)
[11:38] * kefu (~kefu@114.86.210.253) has joined #ceph
[11:44] * shawniverson (~shawniver@192.69.183.61) Quit (Remote host closed the connection)
[11:44] * overclk (~overclk@121.244.87.117) Quit (Ping timeout: 480 seconds)
[11:52] * maxxware (~maxx@149.210.133.105) Quit (Remote host closed the connection)
[11:55] * rdas (~rdas@122.168.210.191) Quit (Quit: Leaving)
[11:58] * brutuscat (~brutuscat@41.Red-83-47-113.dynamicIP.rima-tde.net) has joined #ceph
[12:07] * naoto (~naotok@27.131.11.254) Quit (Quit: Leaving...)
[12:17] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[12:22] * bara (~bara@213.175.37.10) has joined #ceph
[12:23] * stewiem2000 (~Adium@185.80.132.129) Quit (Quit: Leaving.)
[12:24] * aldiyen (~Peaced@185.101.107.227) has joined #ceph
[12:26] * rakeshgm (~rakesh@121.244.87.117) Quit (Ping timeout: 480 seconds)
[12:26] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[12:27] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[12:27] * rakeshgm (~rakesh@121.244.87.124) has joined #ceph
[12:28] * stewiem2000 (~Adium@185.80.132.129) has joined #ceph
[12:31] * swami1 (~swami@zz2012427835719DFD82.userreverse.dion.ne.jp) Quit (Quit: Leaving.)
[12:43] * zhaochao (~zhaochao@125.39.8.235) Quit (Quit: ChatZilla 0.9.92 [Iceweasel 38.3.0/20150922225347])
[12:50] * rakeshgm (~rakesh@121.244.87.124) Quit (Ping timeout: 480 seconds)
[12:51] * Icey (~chris@0001bbad.user.oftc.net) has joined #ceph
[12:53] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[12:53] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[12:54] * aldiyen (~Peaced@7V7AAAQ6J.tor-irc.dnsbl.oftc.net) Quit ()
[12:58] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[12:59] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[12:59] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[12:59] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Remote host closed the connection)
[12:59] * overclk (~overclk@121.244.87.117) has joined #ceph
[13:04] * overclk (~overclk@121.244.87.117) Quit ()
[13:05] <mfa298> Is it the case that if replicas < min_size it also stalls reads? The documentation I've seen doesn't always seem that clear on whether everything is stalled or just writes.
[13:07] <cetex> mfa298: yeah. i think so.
[13:08] <cetex> mfa298: since replicas means replicas of the pg where the data is written, so if the number of replicas of the pg isn't >= min_size the pg is basically "locked" for writing.
[13:09] <cetex> i guess
[13:09] <cetex> :)
[13:09] * Nicola-1_ (~Nicola-19@178.19.210.162) Quit (Read error: No route to host)
[13:09] <mfa298> I'm more interested in the impact to reads.
[13:09] * Nicola-1980 (~Nicola-19@178.19.210.162) has joined #ceph
[13:11] * swami1 (~swami@zz2012427835719DFD82.userreverse.dion.ne.jp) has joined #ceph
[13:11] <cetex> aaah
[13:11] <cetex> not sure.. :)
[13:12] * Nicola-1_ (~Nicola-19@178.19.210.162) has joined #ceph
[13:12] * Nicola-1980 (~Nicola-19@178.19.210.162) Quit (Read error: Connection reset by peer)
[13:13] * georgem (~Adium@184.151.178.109) has joined #ceph
[13:14] <kiranos> mfa298: "This ensures that no object in the data pool will receive I/O with fewer than min_size replicas."
[13:14] <kiranos> seems to stall both read and write
[13:15] <kiranos> so it stalls the entire pool
[13:15] <kiranos> If I read it correct
[13:15] <cetex> hm hm..
[13:15] <cetex> that's actually annoying :D
[13:15] <cetex> but i guess we'll just set min_size = 1
[13:15] <cetex> :)
[13:15] <kiranos> you can set min_size = 1 yes, then if it reaches 0 and stalls you have bigger issues :)
[13:17] <cetex> yeah.
[13:17] <cetex> but i can live with that
[13:17] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[13:27] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[13:28] * Nicola-1980 (~Nicola-19@178.19.210.162) has joined #ceph
[13:28] * Nicola-1_ (~Nicola-19@178.19.210.162) Quit (Read error: Connection reset by peer)
[13:29] * rotbeard (~redbeard@aftr-95-222-29-74.unity-media.net) Quit (Quit: Leaving)
[13:30] * brutuscat (~brutuscat@41.Red-83-47-113.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[13:31] * rotbeard (~redbeard@aftr-95-222-29-74.unity-media.net) has joined #ceph
[13:32] * kefu (~kefu@114.86.210.253) Quit (Max SendQ exceeded)
[13:33] * kefu (~kefu@114.86.210.253) has joined #ceph
[13:36] <mfa298> that seems to match our observations, we've had stalled reads when it looks like only one replica is up
[13:36] <kiranos> mfa298: yes if you have default settings that will cause this
[13:41] * evl (~chatzilla@39.138.216.139.sta.dodo.net.au) Quit (Quit: ChatZilla 0.9.92 [Firefox 41.0.2/20151015125802])
[13:44] * georgem (~Adium@184.151.178.109) Quit (Quit: Leaving.)
[13:44] * bene2 (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[13:46] * fen (~fen@HSI-KBW-217-008-056-240.hsi.kabelbw.de) Quit (Quit: fen)
[13:46] * ira (~ira@nat-pool-bos-t.redhat.com) has joined #ceph
[13:47] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[13:49] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[13:51] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[13:52] * dyasny (~dyasny@104.158.35.250) Quit (Ping timeout: 480 seconds)
[13:57] * rakeshgm (~rakesh@121.244.87.117) Quit (Ping timeout: 480 seconds)
[13:58] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Quit: Konversation terminated!)
[14:01] * kefu (~kefu@114.86.210.253) Quit (Max SendQ exceeded)
[14:02] * kefu (~kefu@114.86.210.253) has joined #ceph
[14:04] <m0zes> from my perspective, that is a good thing. if your dataset is that close to disappearing, I would want all resources available to make it "safe" again. the clients don't matter as much as getting those pgs re-replicated.
[14:07] * rakeshgm (~rakesh@121.244.87.124) has joined #ceph
[14:12] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[14:13] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[14:15] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[14:15] * erhudy (uid89730@id-89730.ealing.irccloud.com) has joined #ceph
[14:17] * georgem (~Adium@206.108.127.16) has joined #ceph
[14:24] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[14:25] * yanzheng1 (~zhyan@125.71.108.204) Quit (Quit: This computer has gone to sleep)
[14:25] * yanzheng1 (~zhyan@125.71.108.204) has joined #ceph
[14:25] * TheSov (~TheSov@cip-248.trustwave.com) has joined #ceph
[14:26] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[14:27] * Arfed (~Teddybare@4Z9AAAEE9.tor-irc.dnsbl.oftc.net) has joined #ceph
[14:28] * danieagle (~Daniel@187.74.64.55) has joined #ceph
[14:29] * overclk (~overclk@59.93.66.173) has joined #ceph
[14:29] * brutuscat (~brutuscat@41.Red-83-47-113.dynamicIP.rima-tde.net) has joined #ceph
[14:30] * overclk_ (~overclk@59.93.66.173) has joined #ceph
[14:36] * overclk__ (~overclk@59.93.65.41) has joined #ceph
[14:37] * overclk (~overclk@59.93.66.173) Quit (Ping timeout: 480 seconds)
[14:38] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[14:39] * vbellur (~vijay@122.172.223.120) has joined #ceph
[14:40] * overclk_ (~overclk@59.93.66.173) Quit (Ping timeout: 480 seconds)
[14:44] * yanzheng1 (~zhyan@125.71.108.204) Quit (Quit: This computer has gone to sleep)
[14:46] * Wielebny (~Icedove@cl-927.waw-01.pl.sixxs.net) Quit (Quit: Wielebny)
[14:49] * shinobu_ (~oftc-webi@pdf874b16.tokynt01.ap.so-net.ne.jp) has joined #ceph
[14:55] * overclk__ (~overclk@59.93.65.41) Quit (Remote host closed the connection)
[14:57] * Arfed (~Teddybare@4Z9AAAEE9.tor-irc.dnsbl.oftc.net) Quit ()
[15:00] * overclk (~overclk@59.93.65.41) has joined #ceph
[15:06] * dyasny (~dyasny@38.108.87.20) has joined #ceph
[15:06] * kefu (~kefu@114.86.210.253) Quit (Max SendQ exceeded)
[15:07] * kefu (~kefu@114.86.210.253) has joined #ceph
[15:16] * nhm (~nhm@c-50-171-139-246.hsd1.mn.comcast.net) has joined #ceph
[15:16] * ChanServ sets mode +o nhm
[15:20] * pam (~mpalma@193.106.183.1) has joined #ceph
[15:21] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[15:21] * rdas (~rdas@122.168.210.191) has joined #ceph
[15:23] * overclk (~overclk@59.93.65.41) Quit (Remote host closed the connection)
[15:23] <pam> Hi, retrieving the release key from https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc results in a timeout. Does anybody have the same issue?
[15:24] <alfredodeza> yep same issue here
[15:24] * alfredodeza looks into it
[15:24] <pam> ok thanks alfredodeza
[15:25] <m0zes> http://download.ceph.com/keys/release.asc
[15:25] <m0zes> for those that need it now.
[15:26] <pam> but ceph-deploy goes out and fetches it from git.ceph.com
[15:26] <alfredodeza> you can certainly pass the url for ceph-deploy
[15:26] <alfredodeza> check `ceph-deploy install --help`
[15:26] <alfredodeza> iirc --gpg-url
[15:27] <pam> ok I missed that. sorry!
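
A hedged example of what alfredodeza suggests, pointing ceph-deploy at the download.ceph.com key rather than git.ceph.com; the node name is hypothetical and the flag is the one recalled above:

    ceph-deploy install --gpg-url http://download.ceph.com/keys/release.asc node1
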
[15:29] * swami1 (~swami@zz2012427835719DFD82.userreverse.dion.ne.jp) Quit (Quit: Leaving.)
[15:30] * overclk (~overclk@59.93.65.41) has joined #ceph
[15:32] * RayTracer (~RayTracer@pk-dyn-4-39.inf.ug.edu.pl) has joined #ceph
[15:35] * overclk (~overclk@59.93.65.41) Quit (Remote host closed the connection)
[15:36] * branto1 (~branto@ip-213-220-232-132.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[15:38] * jrankin (~jrankin@d53-64-170-236.nap.wideopenwest.com) has joined #ceph
[15:40] <Be-El> if you were to write the specs for new OSD hosts, how many hard disks would you like to have in a host (with host = failure domain)?
[15:40] <Be-El> 4? 6?
[15:41] * rdas (~rdas@122.168.210.191) Quit (Quit: Leaving)
[15:41] <Be-El> and how well do 6 TB hard disks perform as OSD disk?
[15:47] * rotbeard (~redbeard@aftr-95-222-29-74.unity-media.net) Quit (Ping timeout: 480 seconds)
[15:48] * rotbeard (~redbeard@aftr-95-222-29-74.unity-media.net) has joined #ceph
[15:48] * Nicola-1980 (~Nicola-19@178.19.210.162) Quit (Read error: Connection reset by peer)
[15:48] * Nicola-1980 (~Nicola-19@178.19.210.162) has joined #ceph
[15:49] * branto1 (~branto@213.175.37.10) has joined #ceph
[15:49] <m0zes> my 6TB disks are faster than my 4TB disks by exactly 1/3.
[15:49] * rakeshgm (~rakesh@121.244.87.124) Quit (Remote host closed the connection)
[15:51] * jrankin (~jrankin@d53-64-170-236.nap.wideopenwest.com) Quit (Remote host closed the connection)
[15:53] <m0zes> I'm fairly happy with my 16 disk osd boxes. I can rebuild a lost host in about 16-24 hours. faster if I really crank up the backfills.
[15:53] * ade (~abradshaw@tmo-109-73.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[15:56] * wushudoin (~wushudoin@2601:646:8201:7769:2ab2:bdff:fe0b:a6ee) has joined #ceph
[16:00] * jtw (~john@2601:644:4100:bfef:7c53:6fc3:6921:d6ee) has joined #ceph
[16:01] <Be-El> m0zes: how many ssds do you use with 16 disks? 4? or 3?
[16:01] <TheSov> m0zes, did you see i got an OSD down to about 250 bucks?
[16:02] * rdas (~rdas@122.168.210.191) has joined #ceph
[16:02] <m0zes> Be-El: We're purchasing 2 intel DC P3700 400GB ssds for each host, but at the moment, no ssds for journaling. the os ssds were too terrible to be used for ceph journals.
[16:03] * fretb (~fretb@n-3n.static-37-72-162.as30961.net) Quit (Ping timeout: 480 seconds)
[16:04] <m0zes> TheSov: no i didn't, but we're probably locked into our hardware platform for at least the next year.
[16:04] <boolman> m0zes: so you are only using rotating disks on your osd nodes?
[16:05] * dan (~dan@2001:1458:202:225::101:124a) has joined #ceph
[16:05] * nardial (~ls@dslb-088-072-094-085.088.072.pools.vodafone-ip.de) Quit (Quit: Leaving)
[16:06] <m0zes> boolman: we've got our os ssds serving up a small pool for metadata, but with a custom crush rule that keeps primary on ssd, and 3 other copies on rust. that is how much I don't trust these ssds.
[16:07] <m0zes> the rest is spinning rust, 4x 4TB disks, 12x 6TB disks. 24x hosts with the hardware.
[16:08] * davidz (~davidz@2605:e000:1313:8003:24ff:2a30:bad3:8cdf) has joined #ceph
[16:09] * mattbenjamin (~mbenjamin@aa2.linuxbox.com) has joined #ceph
[16:10] * dan_ (~dan@dvanders-pro.cern.ch) Quit (Ping timeout: 480 seconds)
[16:11] <boolman> m0zes: I guess you have at least 10G on the osd nodes with that many disks?
[16:11] <m0zes> 40Gb for the latency benefits.
[16:11] <m0zes> and because networking is cheap-ish.
[16:12] * bitserker1 (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[16:13] * mattch (~mattch@pcw3047.see.ed.ac.uk) Quit (Quit: Leaving.)
[16:13] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[16:18] <kiranos> alfredodeza: perhaps some monitoring of the repo? so we can see its status, not too long ago there was downtime of the entire repo over the weekend
[16:18] <kiranos> would be nice to have a green light if all is reporting ok :)
[16:18] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Ping timeout: 480 seconds)
[16:18] * fireD (~fireD@93-142-254-59.adsl.net.t-com.hr) has joined #ceph
[16:19] * enax (~enax@hq.ezit.hu) Quit (Ping timeout: 480 seconds)
[16:22] <debian112> for monitors: what is the recommended convention:
[16:22] <debian112> when naming them: mon.0 or mon.a or mon.servername1
[16:22] <debian112> ?
[16:23] <kiranos> I use hostname -s
[16:24] <m0zes> hostname, as the rank may change if you add new monitors.
[16:26] <debian112> kiranos, yeah I thought about that, but m0zes has a good point.
[16:27] <debian112> m0zes: what's the recommended? mon.0-anynumber or mon.a-z
[16:27] * linjan_ (~linjan@86.62.112.22) Quit (Ping timeout: 480 seconds)
[16:28] <m0zes> I use hostname.
[16:31] <kiranos> lol I read that twice just so I didn't misunderstand before :)
[16:32] <kiranos> "didnt we say the same"
[16:33] <debian112> I see, just trying to understand why the ceph doc is using (a-z), then I see some people using 0-*, and servernames
[16:34] <kiranos> I use hostname as its easy to instantly see which monitor is having issues
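
A sketch of hostname-based monitor sections in ceph.conf, along the lines kiranos and m0zes suggest; the hostnames and addresses here are hypothetical:

    [mon.cephmon1]
        host = cephmon1
        mon addr = 192.168.0.11:6789

    [mon.cephmon2]
        host = cephmon2
        mon addr = 192.168.0.12:6789
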
[16:36] * overclk (~overclk@59.93.65.41) has joined #ceph
[16:38] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[16:38] * kefu (~kefu@114.86.210.253) Quit (Max SendQ exceeded)
[16:38] <TheSov> I really want the guys that make freenas, IX systems to implement ceph in their freenas releases
[16:38] <TheSov> BSD is rock solid and has zfs native
[16:39] * kefu (~kefu@114.86.210.253) has joined #ceph
[16:39] * kefu is now known as kefu|afk
[16:41] * overclk (~overclk@59.93.65.41) Quit (Remote host closed the connection)
[16:41] * overclk (~overclk@59.93.65.41) has joined #ceph
[16:45] * dyasny (~dyasny@38.108.87.20) Quit (Ping timeout: 480 seconds)
[16:45] * Aeso_ (~AesoSpade@c-68-37-97-11.hsd1.mi.comcast.net) has joined #ceph
[16:49] * overclk (~overclk@59.93.65.41) Quit (Remote host closed the connection)
[16:50] * moore (~moore@64.202.160.88) has joined #ceph
[16:52] * ksperis (~laurent@46.218.42.103) has joined #ceph
[16:56] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[16:56] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[16:56] * kefu|afk is now known as kefu
[16:57] * jtw (~john@2601:644:4100:bfef:7c53:6fc3:6921:d6ee) Quit (Quit: Leaving)
[16:57] * RayTrace_ (~RayTracer@153.19.7.39) has joined #ceph
[16:58] * gregmark (~Adium@68.87.42.115) has joined #ceph
[17:01] * marrusl (~mark@209-150-46-243.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[17:01] * jwilkins (~jowilkin@c-67-180-123-48.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[17:03] * RayTracer (~RayTracer@pk-dyn-4-39.inf.ug.edu.pl) Quit (Ping timeout: 480 seconds)
[17:05] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:06] * kefu is now known as kefu|afk
[17:07] * kefu|afk is now known as kefu
[17:07] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[17:12] * kefu (~kefu@114.86.210.253) Quit (Quit: My Mac has gone to sleep. ZZZzzz...)
[17:13] * branto1 (~branto@213.175.37.10) Quit (Quit: Leaving.)
[17:15] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:21] * jwilkins (~jowilkin@c-67-180-123-48.hsd1.ca.comcast.net) has joined #ceph
[17:22] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[17:25] * sudocat (~dibarra@192.185.1.20) has left #ceph
[17:26] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[17:27] * xcezzz (~xcezzz@97-96-111-106.res.bhn.net) has joined #ceph
[17:35] * evilrob (~evilrob@2600:3c00::f03c:91ff:fedf:1d3d) has joined #ceph
[17:41] * kawa2014 (~kawa@151.33.10.211) Quit (Quit: Leaving)
[17:46] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit (Quit: Leaving.)
[17:53] <cetex> i just name my monitors mon.1, mon.2, mon.3 for now.. :>
[17:53] <cetex> and have them on dedicated ip, but not dedicated host yet.
[17:53] <cetex> so i can move them around in case a node goes down
[17:54] * brutuscat (~brutuscat@41.Red-83-47-113.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[17:55] * fretb (~fretb@n-3n.static-37-72-162.as30961.net) has joined #ceph
[17:56] * overclk_ (~overclk@59.93.65.41) has joined #ceph
[18:02] * RayTrace_ (~RayTracer@153.19.7.39) Quit (Remote host closed the connection)
[18:03] * overclk_ (~overclk@59.93.65.41) Quit (Remote host closed the connection)
[18:04] <TheSov> ok so for vmware right now i have a cephfs that im exporting via NFS, is there a better way to do this?
[18:05] * bitserker1 (~toni@63.pool85-52-240.static.orange.es) Quit (Ping timeout: 480 seconds)
[18:07] <Kvisle> you want to put vmdk files on cephfs?
[18:10] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[18:10] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[18:11] <TheSov> prefer rbd's
[18:11] * rotbeard (~redbeard@aftr-95-222-29-74.unity-media.net) Quit (Quit: Leaving)
[18:16] * pam (~mpalma@193.106.183.1) Quit (Ping timeout: 480 seconds)
[18:16] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[18:16] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[18:19] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[18:22] * jclm (~jclm@ip68-108-16-17.lv.lv.cox.net) has joined #ceph
[18:25] * ksperis (~laurent@46.218.42.103) has left #ceph
[18:25] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[18:25] * dgurtner (~dgurtner@178.197.231.233) Quit (Ping timeout: 480 seconds)
[18:26] * jasuarez (~jasuarez@243.Red-81-39-64.dynamicIP.rima-tde.net) Quit (Quit: WeeChat 1.2)
[18:26] * xarses_ (~xarses@zz2012427835719DFD82.userreverse.dion.ne.jp) Quit (Read error: Connection reset by peer)
[18:26] * xarses (~xarses@zz2012427835719DFD82.userreverse.dion.ne.jp) has joined #ceph
[18:26] * mykola (~Mikolaj@91.225.202.254) has joined #ceph
[18:27] * xarses (~xarses@zz2012427835719DFD82.userreverse.dion.ne.jp) Quit (Remote host closed the connection)
[18:27] * xarses (~xarses@zz2012427835719DFD82.userreverse.dion.ne.jp) has joined #ceph
[18:31] * overclk (~overclk@59.93.65.41) has joined #ceph
[18:32] * shawniverson (~shawniver@208.70.47.116) has joined #ceph
[18:32] * overclk (~overclk@59.93.65.41) Quit ()
[18:34] * derjohn_mob (~aj@p54BF82B6.dip0.t-ipconnect.de) has joined #ceph
[18:34] * jclm (~jclm@ip68-108-16-17.lv.lv.cox.net) Quit (Quit: Leaving.)
[18:34] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) has joined #ceph
[18:36] * Kupo1 (~tyler.wil@23.111.254.159) has joined #ceph
[18:46] * dyasny (~dyasny@104.158.35.250) has joined #ceph
[18:47] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[18:50] * shylesh (~shylesh@59.95.69.210) has joined #ceph
[18:50] * Nicola-1980 (~Nicola-19@178.19.210.162) Quit (Remote host closed the connection)
[18:51] * omar_m (~omar_m@G68-90-105-72.sbcis.sbc.com) has joined #ceph
[18:52] * omar_m (~omar_m@G68-90-105-72.sbcis.sbc.com) Quit ()
[18:58] * nardial (~ls@dslb-088-072-094-085.088.072.pools.vodafone-ip.de) has joined #ceph
[19:03] * bara (~bara@213.175.37.10) Quit (Quit: Bye guys!)
[19:07] * pdrakewe_ (~pdrakeweb@cpe-65-185-74-239.neo.res.rr.com) has joined #ceph
[19:07] * ira (~ira@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[19:09] <m0zes> in that case, iscsi re-exporting rbds is the way I've seen people go: http://www.sebastien-han.fr/blog/2014/07/07/start-with-the-rbd-support-for-tgt/
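
A sketch of the tgt-based export that post describes — a targets.conf stanza backed directly by an RBD image; the IQN, pool, and image names are hypothetical, and it assumes a tgt build with rbd support:

    <target iqn.2015-10.com.example:vmware-lun0>
        driver iscsi
        bs-type rbd
        backing-store rbd/vmware-lun0
    </target>
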
[19:11] * pdrakeweb (~pdrakeweb@cpe-65-185-74-239.neo.res.rr.com) Quit (Ping timeout: 480 seconds)
[19:13] * pabluk is now known as pabluk_
[19:14] <lincolnb> has anyone else seen significant OSD memory usage since 0.94.4?
[19:16] <lincolnb> one of my boxes w/ 12 OSDs is using ~100GB of RAM right now.
[19:16] * m0zes isn't seeing any more than normal. if anything, usage has dropped a bit. ~600MB per osd, maybe.
[19:17] <lincolnb> hrm. well, i'll give them a kick. some of my OSDs are using as much as 7GB resident
[19:19] * derjohn_mob (~aj@p54BF82B6.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[19:19] <m0zes> I meant, it dropped by about 600MB per osd. my osds use about 2GB per at the moment.
[19:19] <lincolnb> yeah
[19:22] <lincolnb> http://imgur.com/a/E3ZeI OSDs have been pretty hungry.
[19:22] * lincolnb shrug
[19:22] <m0zes> ooh ganglia!
[19:22] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) has joined #ceph
[19:23] * biGGer (~delcake@se6x.mullvad.net) has joined #ceph
[19:23] <lincolnb> haha
[19:23] <lincolnb> RRDs are not web 3.0 enough :( (or whatever web version we're up to now)
[19:24] <m0zes> http://bit.ly/1iadOon mine. the thin line between week 42+43 is when I updated.
[19:24] <magicrobotmonkey> does anyone know what '[hb in]` in `ceph pg dump` is referring to?
[19:27] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Ping timeout: 480 seconds)
[19:27] <lincolnb> hrm, thanks m0zes. ill just have to keep an eye on these
[19:28] <m0zes> guess I meant to link to the month view. oh well.
[19:28] <lincolnb> its ok, i'm well acquainted with ganglia :)
[19:33] <via> is there any literature on designing crush maps to reduce data movement on cluster expansion?
[19:42] * athompson (~oftc-webi@wnpgmb1154w-ds01-228-156.static.mtsallstream.net) has joined #ceph
[19:43] <athompson> Just noticed the CEPH apt repos for Debian Wheezy seem out of sync with the package index... known problem?
[19:44] <alfredodeza> athompson: I am syncing them at the moment
[19:44] <athompson> http://download.ceph.com/debian-hammer/dists/wheezy/main/binary-amd64/Packages points to 0.94.5, but only 0.94.3 exists in pool/ directory.
[19:44] <athompson> OK, I'll try again in a little while. Bad timing for me :-(
[19:45] <alfredodeza> sorry :(
[19:45] <alfredodeza> it is almost done though, I will ping you as soon as it completes athompson
[19:45] * biGGer (~delcake@se6x.mullvad.net) Quit (Remote host closed the connection)
[19:45] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[19:45] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[19:45] <athompson> aaaaaand here they come. *fingers crossed*
[19:45] <alfredodeza> athompson: it just completed
[19:45] <Kvisle> is ceph-extras removed?
[19:46] <alfredodeza> Kvisle: yes
[19:46] <alfredodeza> you should not need it anymore
[19:47] <alfredodeza> those packages should be available from your chosen distribution
[19:47] <Kvisle> even for el6?
[19:47] <TheSov> whats the difference between cephx auth and normal keyring?
[19:48] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[19:48] <alfredodeza> Kvisle: I believe so, yes.
[19:48] <ktdreyer> Kvisle: re: ceph-extras, see http://www.spinics.net/lists/ceph-users/msg22539.html
[19:49] <ktdreyer> you'll need to upgrade to CentOS 7 for librbd support in CentOS' qemu
[19:49] <Kvisle> ktdreyer: that's exactly what I was suspecting
[19:49] <Kvisle> thank you
[19:49] * daviddcc (~dcasier@LCaen-656-1-144-187.w217-128.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[19:49] <athompson> alfredodeza: Thanks, upgrade completed on this host. BTW, .deb pkgs don't restart OSDs automatically - by design?
[19:50] <ktdreyer> athompson: yeah, that's by design
[19:50] <athompson> OK, wasn't sure. Thanks.
[19:50] <m0zes> restarting the cluster suddenly could be extremely problematic.
[19:52] <alfredodeza> athompson: correct
[19:53] <TheSov> some of you guys run ceph osd's stateless is that correct?
[19:53] * danieagle (~Daniel@187.74.64.55) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[19:54] <via> can you have two writeback caches in front of the same backend pool?
[19:54] * dyasny (~dyasny@104.158.35.250) Quit (Ping timeout: 480 seconds)
[19:54] <via> with the understanding that it allows inconsistency
[19:55] <m0zes> I don't believe ceph allows multiple tiers (yet?) per backend pool.
[19:55] <via> damn. that would solve a lot of problems with multi-dc pools
[19:57] <Be-El> what should be the purpose of multiple cache tiers?
[19:57] <via> huh, ceph just let me do that
[19:57] <via> one backend pool, maybe EC, stretched between two DCs
[19:57] <via> and one SSD pool per DC
[19:58] <via> being a cache in front of the slower one
[19:58] <via> basically a way to get out of the synchronous write requirement for the lower latency connections
[19:58] <m0zes> I'm betting the second overwrote the first.
[19:58] <via> ceph osd dump shows them both
[19:58] <m0zes> or that it is going to get very ugly very fast.
[19:59] <via> why?
[19:59] <Be-El> you can setup multiple rados gateways with synchronization between them. that's something similar at least for object storage
[19:59] <via> yeah i know thats one choice
[19:59] <Be-El> in all other cases you have consistency requirements that will be rendered invalid by multiple caches
[19:59] <TheSov> basically Im trying to see if one can build an OSD distro that essentially is just a stripped down debian, with ceph osd installed, and assumes any disk that is xfs is part of the cluster and attempts to start an osd on it
[20:00] <via> Be-El: yeah, i realize that
[20:00] <via> if we wanted a client to switch to another DC, we would have to have a 'flush cache' procedure
[20:00] * nardial (~ls@dslb-088-072-094-085.088.072.pools.vodafone-ip.de) Quit (Quit: Leaving)
[20:00] <via> but that doesn't seem like the end of the world
[20:00] * b0e (~aledermue@p5083D2B3.dip0.t-ipconnect.de) has joined #ceph
[20:03] <Be-El> via: a complete solution would recognize that the data is currently stored in the other dc and attempt to transfer it to ensure data locality
[20:03] <via> yeah, a cache consistency protocol
[20:03] <Be-El> like a kind of proxy checking a local pool first
[20:03] <via> that'd be awesome
[20:03] <Be-El> feel free to implement it ;-)
[20:04] <via> oh man, ceph is a bit of a daunting codebase to me. probably not going to happen :p
[20:05] <Be-El> well, back to the cephfs debugging...
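
For reference, a sketch of the standard single cache-tier setup that this discussion builds on (hammer-era commands; the pool names are hypothetical):

    ceph osd tier add ecpool ssd-cache
    ceph osd tier cache-mode ssd-cache writeback
    ceph osd tier set-overlay ecpool ssd-cache
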
[20:09] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[20:09] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[20:09] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[20:09] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) Quit (Quit: Lost terminal)
[20:10] * neurodrone (~neurodron@162.243.191.67) has joined #ceph
[20:10] <via> is there a way to see cache status, like what percentage is dirty, writeback speeds, etc?
[20:12] * mykola (~Mikolaj@91.225.202.254) Quit (Quit: away)
[20:12] <neurodrone> Anyone have experience running cbt here?
[20:14] * shylesh (~shylesh@59.95.69.210) Quit (Remote host closed the connection)
[20:26] * jrocha_ (~jrocha@AAnnecy-651-1-290-164.w90-27.abo.wanadoo.fr) has joined #ceph
[20:30] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) has joined #ceph
[20:30] * athompson (~oftc-webi@wnpgmb1154w-ds01-228-156.static.mtsallstream.net) Quit (Quit: Page closed)
[20:33] <via> so i'm trying to undo what i did, trying to remove the cache tier gives 'Error EBUSY: pool 'data' is in use by CephFS via its tier' despite the fact that the cache mode is none and the cache pool has nothing in it
[20:33] <via> i never set it as an overlay
[20:33] <via> how do i delete these?
[20:34] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[20:34] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[20:53] * Helleshin (~Azerothia@94.242.228.43) has joined #ceph
[20:59] * shawniverson (~shawniver@208.70.47.116) Quit (Ping timeout: 480 seconds)
[20:59] <TheSov> so does anyone know why its bad to export cephfs to vmware?
[21:06] <TheSov> ne1?
[21:06] * dupont-y (~dupont-y@familledupont.org) has joined #ceph
[21:06] <Aeso_> the stability of cephfs is largely untested in production environments
[21:09] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[21:11] * DV (~veillard@2001:41d0:1:d478::1) Quit (Remote host closed the connection)
[21:11] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[21:12] <TheSov> odd
[21:12] <TheSov> i heard that it was stable with only 1 MDS
[21:12] <Guest6560> Is anyone using GIGABYTE D120-S3G (http://b2b.gigabyte.com/products/product-page.aspx?pid=5424#ov)? or any other machines with 16hdd per rack U?
[21:14] <TheSov> 16hdd per U? thats impressive
[21:15] * hroussea (~hroussea@goyavier-10ge-vl69-4-2.par01.moulticast.net) has joined #ceph
[21:16] <Guest6560> Or ASUS S1016P - https://www.asus.com/Commercial-Servers-Workstations/S1016P/
[21:17] <TheSov> uhh that's... there's no way that can run ceph well
[21:17] <TheSov> it has 1 ecc dism slot
[21:17] <TheSov> dimm
[21:17] <TheSov> its a cold storage server which means the disk controller on there is probably shite
[21:18] <TheSov> damnit i hate sites that need flash
[21:23] * Helleshin (~Azerothia@4K6AAB904.tor-irc.dnsbl.oftc.net) Quit ()
[21:23] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[21:33] * enax (~enax@94-21-125-156.pool.digikabel.hu) has joined #ceph
[21:34] * enax (~enax@94-21-125-156.pool.digikabel.hu) has left #ceph
[21:34] * TheSov2 (~TheSov@38.106.143.234) has joined #ceph
[21:35] * TheSov3 (~TheSov@204.13.200.248) has joined #ceph
[21:38] * dyasny (~dyasny@104.158.33.251) has joined #ceph
[21:41] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[21:41] * TheSov (~TheSov@cip-248.trustwave.com) Quit (Ping timeout: 480 seconds)
[21:42] * TheSov2 (~TheSov@38.106.143.234) Quit (Ping timeout: 480 seconds)
[21:46] * mattronix (~quassel@mail.mattronix.nl) Quit (Remote host closed the connection)
[21:47] * mattronix (~quassel@mail.mattronix.nl) has joined #ceph
[21:48] * rendar (~I@95.234.183.170) Quit (Ping timeout: 480 seconds)
[21:50] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[21:50] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[21:50] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[21:51] * rendar (~I@95.234.183.170) has joined #ceph
[21:55] * derjohn_mob (~aj@p54BF94E1.dip0.t-ipconnect.de) has joined #ceph
[21:55] * Jowtf (~JoHo@mail.dkv.lu) Quit (Read error: Connection reset by peer)
[21:55] * Jowtf (~JoHo@mail.dkv.lu) has joined #ceph
[22:00] * dyasny (~dyasny@104.158.33.251) Quit (Ping timeout: 480 seconds)
[22:04] * b0e (~aledermue@p5083D2B3.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[22:06] * TheSov3 (~TheSov@204.13.200.248) Quit (Read error: Connection reset by peer)
[22:07] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[22:09] * dyasny (~dyasny@198.251.59.55) has joined #ceph
[22:12] * Shadow386 (~kalmisto@195-154-191-67.rev.poneytelecom.eu) has joined #ceph
[22:20] * georgem (~Adium@184.151.178.109) has joined #ceph
[22:21] * linjan (~linjan@176.195.12.73) has joined #ceph
[22:23] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[22:27] * shinobu_ (~oftc-webi@pdf874b16.tokynt01.ap.so-net.ne.jp) Quit (Ping timeout: 480 seconds)
[22:35] * dupont-y (~dupont-y@familledupont.org) Quit (Quit: Ex-Chat)
[22:37] * linjan (~linjan@176.195.12.73) Quit (Ping timeout: 480 seconds)
[22:37] * swami1 (~swami@zz2012427835719DFD82.userreverse.dion.ne.jp) has joined #ceph
[22:38] * shawniverson (~shawniver@192.69.183.61) has joined #ceph
[22:42] * Shadow386 (~kalmisto@4Z9AAAE0K.tor-irc.dnsbl.oftc.net) Quit ()
[22:51] <georgem> any advice for choosing between 2 dual-port 10 Gb vs one quad-port 10 Gb?
[22:52] <monsted> georgem: some PCIe slots might have problems feeding dual-ports and there's always some redundancy in having two
[22:52] <georgem> I agree, although cost might not be linear
[22:53] * dan_ (~dan@dvanders-pro.cern.ch) has joined #ceph
[22:53] <monsted> heck, many new servers come with dualport 10G already
[22:53] <georgem> I've just read this https://communities.intel.com/community/wired/blog/2009/12/18/port-density-dual-vs-quad but it's pretty old
[22:55] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[22:55] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Remote host closed the connection)
[22:55] <monsted> *most* modern servers should happily run with dual ports, but it's worth checking. of course using dual port cards leaves an extra slot open for later expansion.
[22:56] <monsted> i don't look much at prices, but i know that in some cases we even put in two dual port cards and use one port on each, just for the heck of it
[22:57] <georgem> thanks
[22:58] * dan (~dan@2001:1458:202:225::101:124a) Quit (Ping timeout: 480 seconds)
[22:59] * georgem (~Adium@184.151.178.109) Quit (Quit: Leaving.)
[23:00] <monsted> wow, intel X710-DA2 is getting cheap.
[23:13] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[23:16] * swami1 (~swami@zz2012427835719DFD82.userreverse.dion.ne.jp) Quit (Quit: Leaving.)
[23:22] * pabluk_ is now known as pabluk
[23:25] * mattbenjamin (~mbenjamin@aa2.linuxbox.com) Quit (Quit: Leaving.)
[23:31] * jrocha_ (~jrocha@AAnnecy-651-1-290-164.w90-27.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[23:31] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[23:32] * lcurtis (~lcurtis@47.19.105.250) Quit (Remote host closed the connection)
[23:41] * clusterfudge (~mrapple@162.216.46.182) has joined #ceph
[23:42] * fireD (~fireD@93-142-254-59.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[23:44] * yguang11 (~yguang11@66.228.162.44) has joined #ceph
[23:48] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[23:51] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[23:51] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Write error: connection closed)
[23:56] * jcsp (~jspray@zz2012427835719DFD83.userreverse.dion.ne.jp) has joined #ceph
[23:57] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[23:57] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Remote host closed the connection)
[23:58] * fsimonce (~simon@host30-173-dynamic.23-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.