#ceph IRC Log

IRC Log for 2015-11-02

Timestamps are in GMT/BST.

[0:01] * haomaiwang (~haomaiwan@li414-102.members.linode.com) Quit (Remote host closed the connection)
[0:01] * haomaiwang (~haomaiwan@li414-102.members.linode.com) has joined #ceph
[0:13] * rendar (~I@87.18.177.146) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[0:14] * pabluk_ (~pabluk@2a01:e34:ec16:93e0:1573:d6c6:4bb4:6a6b) Quit (Ping timeout: 480 seconds)
[0:15] * mattronix (~quassel@2a02:898:162:1::1) Quit (Remote host closed the connection)
[0:16] * mattronix (~quassel@2a02:898:162:1::1) has joined #ceph
[0:19] * mattronix (~quassel@2a02:898:162:1::1) Quit (Remote host closed the connection)
[0:19] * Zyn (~MKoR@7V7AAAXOM.tor-irc.dnsbl.oftc.net) Quit ()
[0:22] * pabluk_ (~pabluk@2a01:e34:ec16:93e0:499e:138e:c835:1727) has joined #ceph
[0:32] * dgurtner_ (~dgurtner@178.197.229.31) Quit (Ping timeout: 480 seconds)
[0:36] * Gorazd (~Gorazd@89-212-99-37.dynamic.t-2.net) Quit (Ping timeout: 480 seconds)
[0:37] * DoDzy (~Dinnerbon@195-154-231-147.rev.poneytelecom.eu) has joined #ceph
[0:48] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[0:48] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[0:51] * stiopa (~stiopa@cpc73828-dals21-2-0-cust630.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[0:52] * beardo_ (~sma310@207-172-244-241.c3-0.atw-ubr5.atw.pa.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[0:54] * bene2 (~bene@2601:18c:8300:f3ae:ea2a:eaff:fe08:3c7a) has joined #ceph
[0:54] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[0:54] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[0:59] * mattronix (~quassel@2a02:898:162:1::1) has joined #ceph
[0:59] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[0:59] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[1:01] * haomaiwang (~haomaiwan@li414-102.members.linode.com) Quit (Remote host closed the connection)
[1:01] * haomaiwang (~haomaiwan@li414-102.members.linode.com) has joined #ceph
[1:03] * beardo_ (~sma310@207-172-244-241.c3-0.atw-ubr5.atw.pa.cable.rcn.com) has joined #ceph
[1:06] * DoDzy (~Dinnerbon@4K6AACCVN.tor-irc.dnsbl.oftc.net) Quit (Ping timeout: 480 seconds)
[1:07] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[1:07] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[1:13] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[1:13] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[1:21] * hgichon (~hgichon@220.90.135.162) has joined #ceph
[1:32] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[1:32] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[1:33] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[1:51] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[1:51] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[1:54] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) has joined #ceph
[1:57] * olid12 (~olid1982@aftr-185-17-204-254.dynamic.mnet-online.de) Quit (Ping timeout: 480 seconds)
[2:01] * haomaiwang (~haomaiwan@li414-102.members.linode.com) Quit (Remote host closed the connection)
[2:01] * haomaiwang (~haomaiwan@li414-102.members.linode.com) has joined #ceph
[2:05] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[2:05] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[2:21] * marrusl (~mark@209-150-46-243.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[2:28] * haomaiwang (~haomaiwan@li414-102.members.linode.com) Quit (Remote host closed the connection)
[2:44] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[2:44] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[2:49] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[2:49] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[2:51] * yanzheng (~zhyan@182.139.23.79) has joined #ceph
[2:51] * kefu (~kefu@114.92.106.70) has joined #ceph
[2:58] * sileht (~sileht@sileht.net) Quit (Ping timeout: 480 seconds)
[3:09] * sankarshan (~sankarsha@121.244.87.117) has joined #ceph
[3:10] * bene2 (~bene@2601:18c:8300:f3ae:ea2a:eaff:fe08:3c7a) Quit (Quit: Konversation terminated!)
[3:10] * haomaiwang (~haomaiwan@li414-102.members.linode.com) has joined #ceph
[3:11] * naoto (~naotok@2401:bd00:b001:8920:27:131:11:254) has joined #ceph
[3:15] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[3:17] * aj__ (~aj@x590d8ae8.dyn.telefonica.de) has joined #ceph
[3:20] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Ping timeout: 480 seconds)
[3:24] * derjohn_mobi (~aj@x590d5e73.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[3:36] * zhaochao (~zhaochao@125.39.8.237) has joined #ceph
[3:40] * djidis__ (~Tonux@162.216.46.77) has joined #ceph
[3:43] * kefu is now known as kefu|afk
[3:43] * naoto_ (~naotok@27.131.11.254) has joined #ceph
[3:48] * naoto (~naotok@2401:bd00:b001:8920:27:131:11:254) Quit (Ping timeout: 480 seconds)
[3:51] * kefu|afk (~kefu@114.92.106.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[4:00] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[4:00] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[4:01] * haomaiwang (~haomaiwan@li414-102.members.linode.com) Quit (Remote host closed the connection)
[4:01] * haomaiwang (~haomaiwan@li414-102.members.linode.com) has joined #ceph
[4:07] * beardo_ (~sma310@207-172-244-241.c3-0.atw-ubr5.atw.pa.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[4:07] * yguang11 (~yguang11@c-50-131-146-113.hsd1.ca.comcast.net) has joined #ceph
[4:10] * djidis__ (~Tonux@162.216.46.77) Quit ()
[4:11] * Kidlvr (~Blueraven@195-154-231-147.rev.poneytelecom.eu) has joined #ceph
[4:16] * overclk_ (~overclk@59.93.67.41) has joined #ceph
[4:17] * beardo_ (~sma310@207-172-244-241.c3-0.atw-ubr5.atw.pa.cable.rcn.com) has joined #ceph
[4:20] * adun153 (~ljtirazon@112.198.90.112) has joined #ceph
[4:24] * overclk_ (~overclk@59.93.67.41) Quit (Remote host closed the connection)
[4:25] * overclk (~overclk@59.93.67.41) has joined #ceph
[4:27] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[4:27] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[4:40] * Kidlvr (~Blueraven@7V7AAAXS6.tor-irc.dnsbl.oftc.net) Quit ()
[4:44] * yguang11 (~yguang11@c-50-131-146-113.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[4:46] * yuan (~yzhou67@shzdmzpr01-ext.sh.intel.com) Quit (Ping timeout: 480 seconds)
[4:46] * yguang11 (~yguang11@c-50-131-146-113.hsd1.ca.comcast.net) has joined #ceph
[4:49] * nastidon (~Tonux@tor-exit-node-nibbana.dson.org) has joined #ceph
[5:01] * haomaiwang (~haomaiwan@li414-102.members.linode.com) Quit (Remote host closed the connection)
[5:01] * haomaiwang (~haomaiwan@li414-102.members.linode.com) has joined #ceph
[5:03] * ibravo (~ibravo@72.198.142.104) has joined #ceph
[5:06] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[5:06] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[5:06] * kefu (~kefu@114.92.106.70) has joined #ceph
[5:07] * Vacuum_ (~Vacuum@i59F79F0F.versanet.de) has joined #ceph
[5:14] * Vacuum__ (~Vacuum@88.130.202.26) Quit (Ping timeout: 480 seconds)
[5:19] * nastidon (~Tonux@7V7AAAXT3.tor-irc.dnsbl.oftc.net) Quit ()
[5:20] * vbellur (~vijay@117.222.107.82) has joined #ceph
[5:20] * yguang11_ (~yguang11@2001:4998:effd:7804::10bd) has joined #ceph
[5:21] * ibravo (~ibravo@72.198.142.104) Quit (Quit: This computer has gone to sleep)
[5:27] * yguang11 (~yguang11@c-50-131-146-113.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:36] * overclk (~overclk@59.93.67.41) Quit (Remote host closed the connection)
[5:36] * overclk (~overclk@59.93.67.41) has joined #ceph
[5:45] * kefu is now known as kefu|afk
[5:47] * nihilifer (nihilifer@s6.mydevil.net) has joined #ceph
[5:47] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[5:53] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[5:57] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[5:57] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[6:01] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[6:01] * haomaiwang (~haomaiwan@li414-102.members.linode.com) Quit (Remote host closed the connection)
[6:01] * adun153 (~ljtirazon@112.198.90.112) Quit (Quit: Leaving)
[6:01] * haomaiwang (~haomaiwan@li414-102.members.linode.com) has joined #ceph
[6:02] * yguang11_ (~yguang11@2001:4998:effd:7804::10bd) Quit (Ping timeout: 480 seconds)
[6:17] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[6:17] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[6:17] * kefu|afk (~kefu@114.92.106.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:18] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[6:23] * rdas (~rdas@122.168.244.209) has joined #ceph
[6:24] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[6:28] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[6:36] * shylesh__ (~shylesh@121.244.87.124) has joined #ceph
[6:37] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[6:42] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[6:42] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[6:46] * vbellur (~vijay@117.222.107.82) Quit (Ping timeout: 480 seconds)
[6:56] * vbellur (~vijay@117.198.251.216) has joined #ceph
[6:57] * kefu (~kefu@114.92.106.70) has joined #ceph
[7:01] * haomaiwang (~haomaiwan@li414-102.members.linode.com) Quit (Remote host closed the connection)
[7:01] * haomaiwang (~haomaiwan@li414-102.members.linode.com) has joined #ceph
[7:07] * sileht (~sileht@sileht.net) has joined #ceph
[7:12] * karnan (~karnan@121.244.87.117) has joined #ceph
[7:15] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[7:15] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[7:22] * reed (~reed@i121-112-171-20.s41.a014.ap.plala.or.jp) has joined #ceph
[7:23] * nardial (~ls@dslb-088-072-094-085.088.072.pools.vodafone-ip.de) has joined #ceph
[7:25] * aj__ (~aj@x590d8ae8.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[7:27] * vbellur (~vijay@117.198.251.216) Quit (Ping timeout: 480 seconds)
[7:35] * amote (~amote@121.244.87.116) has joined #ceph
[7:41] * dotblank (~Yopi@23.105.134.170) has joined #ceph
[7:41] * branto1 (~branto@ip-213-220-232-132.net.upcbroadband.cz) has joined #ceph
[7:44] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[7:44] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[7:46] * vbellur (~vijay@117.198.246.209) has joined #ceph
[7:51] * jith (~chatzilla@14.139.180.40) has joined #ceph
[7:57] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[7:59] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[7:59] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[8:01] * haomaiwang (~haomaiwan@li414-102.members.linode.com) Quit (Remote host closed the connection)
[8:01] * haomaiwang (~haomaiwan@li414-102.members.linode.com) has joined #ceph
[8:01] * jclm (~jclm@ip-64-134-184-248.public.wayport.net) has joined #ceph
[8:03] * kefu is now known as kefu|afk
[8:08] * linjan_ (~linjan@176.195.172.161) has joined #ceph
[8:10] * kefu|afk is now known as kefu
[8:10] * dotblank (~Yopi@23.105.134.170) Quit ()
[8:13] * reed (~reed@i121-112-171-20.s41.a014.ap.plala.or.jp) Quit (Quit: Ex-Chat)
[8:14] * raso1 (~raso@deb-multimedia.org) has joined #ceph
[8:16] * remy1991 (~ravi@115.114.59.182) has joined #ceph
[8:20] * raso (~raso@deb-multimedia.org) Quit (Ping timeout: 480 seconds)
[8:44] * kefu (~kefu@114.92.106.70) Quit (Max SendQ exceeded)
[8:44] * kefu (~kefu@114.92.106.70) has joined #ceph
[8:46] * enax (~enax@hq.ezit.hu) has joined #ceph
[8:48] * hroussea (~hroussea@000200d7.user.oftc.net) Quit (Remote host closed the connection)
[8:48] * jclm (~jclm@ip-64-134-184-248.public.wayport.net) Quit (Quit: Leaving.)
[8:48] * dan (~dan@dvanders-pro.cern.ch) has joined #ceph
[8:56] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[8:56] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[9:01] * haomaiwang (~haomaiwan@li414-102.members.linode.com) Quit (Remote host closed the connection)
[9:01] * rakeshgm (~rakesh@121.244.87.117) Quit (Ping timeout: 480 seconds)
[9:01] * haomaiwang (~haomaiwan@li414-102.members.linode.com) has joined #ceph
[9:01] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) Quit (Quit: segutier)
[9:05] * MrBy (~MrBy@85.115.23.2) has joined #ceph
[9:08] * jasuarez (~jasuarez@243.Red-81-39-64.dynamicIP.rima-tde.net) has joined #ceph
[9:09] * Kurimus1 (~vegas3@216.17.99.183) has joined #ceph
[9:09] * rakeshgm (~rakesh@121.244.87.124) has joined #ceph
[9:10] * pabluk_ is now known as pabluk
[9:10] * pabluk is now known as pabluk_
[9:13] * pabluk_ is now known as pabluk
[9:13] <cetex> so, i have this issue http://pastebin.com/ANUQthij
[9:14] <cetex> ceph starts throwing 2015-11-01 23:39:55.708781 7fa651e8d700 0 -- 10.255.254.11:6805/1 >> xx.yy.zz.76:0/1 pipe(0x275c8000 sd=123 :6805 s=0 pgs=0 cs=0 l=1 c=0x27225340).accept replacing existing (lossy) channel (new one lossy=1)
[9:14] * haomaiwang (~haomaiwan@li414-102.members.linode.com) Quit (Remote host closed the connection)
[9:14] <cetex> i have a few GB's of log.
[9:14] * haomaiwang (~haomaiwan@li414-102.members.linode.com) has joined #ceph
[9:15] * haomaiwang (~haomaiwan@li414-102.members.linode.com) Quit (Remote host closed the connection)
[9:15] <cetex> the logfiles kinda grew by 500KB/s of those messages.
[9:15] * haomaiwang (~haomaiwan@li414-102.members.linode.com) has joined #ceph
[9:16] <cetex> (hm, rather 250KB/s)
[9:16] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[9:16] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[9:17] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[9:18] * overclk (~overclk@59.93.67.41) Quit (Remote host closed the connection)
[9:20] * b0e (~aledermue@213.95.25.82) has joined #ceph
[9:21] * rendar (~I@host114-183-dynamic.37-79-r.retail.telecomitalia.it) has joined #ceph
[9:22] * overclk (~overclk@59.93.67.41) has joined #ceph
[9:22] * analbeard (~shw@support.memset.com) has joined #ceph
[9:23] * garphy`aw is now known as garphy
[9:25] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[9:25] <Be-El> hi
[9:28] * thomnico (~thomnico@2a01:e35:8b41:120:546:2d4f:48d3:b4a1) has joined #ceph
[9:29] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[9:30] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[9:30] * fsimonce (~simon@host30-173-dynamic.23-79-r.retail.telecomitalia.it) has joined #ceph
[9:35] * adun153 (~ljtirazon@112.198.90.112) has joined #ceph
[9:39] * Kurimus1 (~vegas3@5P6AAAPNA.tor-irc.dnsbl.oftc.net) Quit ()
[9:46] * ade (~abradshaw@tmo-108-70.customers.d1-online.com) has joined #ceph
[9:48] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[9:48] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[9:53] * overclk (~overclk@59.93.67.41) Quit (Remote host closed the connection)
[9:55] * hroussea (~hroussea@goyavier-10ge-vl69-4-2.par01.moulticast.net) has joined #ceph
[9:56] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[10:00] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[10:00] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[10:00] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[10:01] * haomaiwang (~haomaiwan@li414-102.members.linode.com) Quit (Remote host closed the connection)
[10:01] * haomaiwang (~haomaiwan@li414-102.members.linode.com) has joined #ceph
[10:01] * brutuscat (~brutuscat@60.Red-193-152-185.dynamicIP.rima-tde.net) has joined #ceph
[10:01] * brutuscat (~brutuscat@60.Red-193-152-185.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[10:02] * brutuscat (~brutuscat@60.Red-193-152-185.dynamicIP.rima-tde.net) has joined #ceph
[10:07] * mattch (~mattch@pcw3047.see.ed.ac.uk) has joined #ceph
[10:09] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[10:09] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[10:10] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) has joined #ceph
[10:12] * kefu (~kefu@114.92.106.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[10:14] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[10:15] * jluis (~joao@217.149.135.180) has joined #ceph
[10:16] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[10:16] * krypto (~krypto@192.71.175.30) has joined #ceph
[10:23] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:25] * overclk (~overclk@59.93.67.41) has joined #ceph
[10:25] * kawa2014 (~kawa@89.184.114.246) Quit (Read error: Connection reset by peer)
[10:25] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[10:25] * harlequin (~loris@62-193-45-2.as16211.net) has joined #ceph
[10:28] * bitserker (~toni@88.87.194.130) has joined #ceph
[10:28] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[10:28] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[10:33] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[10:33] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[10:38] * kefu (~kefu@114.92.106.70) has joined #ceph
[10:44] * olid12 (~olid1982@aftr-185-17-204-121.dynamic.mnet-online.de) has joined #ceph
[10:45] * rakeshgm (~rakesh@121.244.87.124) Quit (Ping timeout: 480 seconds)
[10:47] * xarses (~xarses@118.103.8.153) has joined #ceph
[10:47] * derjohn_mob (~aj@88.128.80.246) has joined #ceph
[10:54] * DV__ (~veillard@2001:41d0:1:d478::1) has joined #ceph
[10:55] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[10:57] * RayTracer (~RayTracer@153.19.7.39) has joined #ceph
[11:00] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[11:00] * kawa2014 (~kawa@89.184.114.246) Quit (Read error: Connection reset by peer)
[11:00] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[11:01] * haomaiwang (~haomaiwan@li414-102.members.linode.com) Quit (Remote host closed the connection)
[11:01] * haomaiwang (~haomaiwan@li414-102.members.linode.com) has joined #ceph
[11:05] * kefu is now known as kefu|afk
[11:05] * kefu|afk (~kefu@114.92.106.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[11:06] * sankarshan (~sankarsha@121.244.87.117) Quit (Quit: Are you sure you want to quit this channel (Cancel/Ok) ?)
[11:09] * MatthewH121 (~notarima@131.ip-158-69-208.net) has joined #ceph
[11:09] * kefu (~kefu@114.92.106.70) has joined #ceph
[11:10] * adun153 (~ljtirazon@112.198.90.112) Quit (Quit: Leaving)
[11:10] * rdas (~rdas@122.168.244.209) Quit (Quit: Leaving)
[11:11] * adun153 (~ljtirazon@112.198.90.112) has joined #ceph
[11:13] * vbellur (~vijay@117.198.246.209) Quit (Ping timeout: 480 seconds)
[11:15] * jluis (~joao@217.149.135.180) Quit (Ping timeout: 480 seconds)
[11:16] * zhaochao (~zhaochao@125.39.8.237) Quit (Max SendQ exceeded)
[11:17] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[11:17] * kefu (~kefu@114.92.106.70) Quit (Max SendQ exceeded)
[11:17] * rakeshgm (~rakesh@121.244.87.117) Quit (Ping timeout: 480 seconds)
[11:18] * kefu (~kefu@114.92.106.70) has joined #ceph
[11:18] * zhaochao (~zhaochao@125.39.8.237) has joined #ceph
[11:19] * derjohn_mob (~aj@88.128.80.246) Quit (Ping timeout: 480 seconds)
[11:21] * jluis (~joao@217.149.135.180) has joined #ceph
[11:23] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[11:24] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[11:27] <hoonetorg> hi
[11:27] <hoonetorg> want to upgrade my prod servers to latest hammer release
[11:28] * rakeshgm (~rakesh@121.244.87.124) has joined #ceph
[11:28] <hoonetorg> hv seen there was some issue with 0.94.4
[11:28] <hoonetorg> 0.94.5: anybody issues with it???
[11:28] * ksperis (~laurent@46.218.42.103) has joined #ceph
[11:29] <hoonetorg> i want to add a new osd
[11:30] <hoonetorg> and before that i upgrade all existing to latest minor release
[11:30] <hoonetorg> 0.94.5 only 5 days old, mmmh
[11:31] * krypto (~krypto@192.71.175.30) Quit (Quit: Leaving)
[11:31] <hoonetorg> usually i wait longer before upgrading prod, but i need the new osd ...
[11:34] * jith (~chatzilla@14.139.180.40) Quit (Quit: ChatZilla 0.9.92 [Firefox 38.0/20150511103818])
[11:35] * shawniverson_ (~shawniver@208.38.236.111) Quit (Remote host closed the connection)
[11:36] * adun153 (~ljtirazon@112.198.90.112) Quit (Quit: Leaving)
[11:38] * haomaiwang (~haomaiwan@li414-102.members.linode.com) Quit (Remote host closed the connection)
[11:39] * haomaiwang (~haomaiwan@li414-102.members.linode.com) has joined #ceph
[11:39] * MatthewH121 (~notarima@4Z9AAAPB4.tor-irc.dnsbl.oftc.net) Quit ()
[11:40] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[11:41] <harlequin> Hi all! Is it possible to change the ID of entries in the CRUSH map?
[11:42] <harlequin> Whet would the consequences of such a *totally irresponsible* act be? :D
[11:43] <harlequin> *What
[11:43] <harlequin> (Just a BIG shuffle of data between hosts?)
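For reference, the edit cycle harlequin is asking about looks roughly like this (a sketch; the file names are placeholders, and changing bucket IDs generally makes CRUSH recompute placements, i.e. move data between hosts):

    # dump and decompile the current CRUSH map
    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # edit the IDs in crush.txt, then recompile and inject it
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new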
[11:45] * haomaiwang (~haomaiwan@li414-102.members.linode.com) Quit (Remote host closed the connection)
[11:45] * haomaiwang (~haomaiwan@li414-102.members.linode.com) has joined #ceph
[11:54] * kawa2014 (~kawa@89.184.114.246) Quit (Remote host closed the connection)
[11:54] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[11:56] * bryn1 (~Adium@host4.velocix.com) has joined #ceph
[11:56] * linjan__ (~linjan@176.195.239.174) has joined #ceph
[12:00] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[12:01] * haomaiwang (~haomaiwan@li414-102.members.linode.com) Quit (Remote host closed the connection)
[12:01] * haomaiwang (~haomaiwan@li414-102.members.linode.com) has joined #ceph
[12:03] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[12:03] * bryn (~Adium@host4.velocix.com) Quit (Ping timeout: 480 seconds)
[12:03] * linjan_ (~linjan@176.195.172.161) Quit (Ping timeout: 480 seconds)
[12:07] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[12:08] * shawniverson (~shawniver@208.38.236.111) has joined #ceph
[12:18] * naoto_ (~naotok@27.131.11.254) Quit (Quit: Leaving...)
[12:22] * shylesh__ (~shylesh@121.244.87.124) Quit (Remote host closed the connection)
[12:23] * thebevans (~bevans@109.69.234.234) has joined #ceph
[12:24] * haomaiwang (~haomaiwan@li414-102.members.linode.com) Quit (Remote host closed the connection)
[12:24] * thebevans (~bevans@109.69.234.234) Quit ()
[12:28] * kawa2014 (~kawa@89.184.114.246) Quit (Read error: Connection reset by peer)
[12:29] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[12:33] * xarses (~xarses@118.103.8.153) Quit (Ping timeout: 480 seconds)
[12:33] * overclk (~overclk@59.93.67.41) Quit (Remote host closed the connection)
[12:33] * cdelatte (~cdelatte@2606:a000:6e72:300:b5a5:752b:feb4:2a5e) has joined #ceph
[12:35] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[12:41] * jluis (~joao@217.149.135.180) Quit (Ping timeout: 480 seconds)
[12:41] * zhaochao (~zhaochao@125.39.8.237) Quit (Quit: ChatZilla 0.9.92 [Iceweasel 38.3.0/20150922225347])
[12:42] * jluis (~joao@217.149.135.180) has joined #ceph
[12:44] * jluis (~joao@217.149.135.180) Quit ()
[12:44] * jluis (~joao@217.149.135.180) has joined #ceph
[12:45] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[12:45] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[12:56] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[12:56] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[12:57] * shawniverson (~shawniver@208.38.236.111) Quit (Remote host closed the connection)
[12:58] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) has joined #ceph
[13:01] * bla_ (~bla@2001:67c:670:100:1d::c4) has joined #ceph
[13:03] * overclk (~overclk@59.93.67.41) has joined #ceph
[13:05] * rf`1 (~Bonzaii@7V7AAAX3B.tor-irc.dnsbl.oftc.net) has joined #ceph
[13:06] * overclk (~overclk@59.93.67.41) Quit (Remote host closed the connection)
[13:07] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Quit: Bye guys!)
[13:08] * brutuscat (~brutuscat@60.Red-193-152-185.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[13:08] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[13:08] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[13:09] * kefu is now known as kefu|afk
[13:09] * kefu|afk (~kefu@114.92.106.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[13:16] * overclk (~overclk@59.93.67.41) has joined #ceph
[13:17] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[13:20] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) has joined #ceph
[13:22] * xarses (~xarses@118.103.8.153) has joined #ceph
[13:23] * delattec (~cdelatte@cpe-71-75-20-42.carolina.res.rr.com) has joined #ceph
[13:25] * cdelatte (~cdelatte@2606:a000:6e72:300:b5a5:752b:feb4:2a5e) Quit (Ping timeout: 480 seconds)
[13:29] * ibravo (~ibravo@72.198.142.104) has joined #ceph
[13:30] * jluis (~joao@217.149.135.180) Quit (Quit: Lost terminal)
[13:31] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[13:31] * harlequin (~loris@62-193-45-2.as16211.net) Quit (Quit: leaving)
[13:34] * amote (~amote@121.244.87.116) Quit (Quit: Leaving)
[13:35] * rf`1 (~Bonzaii@7V7AAAX3B.tor-irc.dnsbl.oftc.net) Quit ()
[13:35] * thomnico (~thomnico@2a01:e35:8b41:120:546:2d4f:48d3:b4a1) Quit (Quit: Ex-Chat)
[13:38] * ibravo (~ibravo@72.198.142.104) Quit (Quit: Leaving)
[13:38] * thomnico (~thomnico@2a01:e35:8b41:120:546:2d4f:48d3:b4a1) has joined #ceph
[13:39] * overclk (~overclk@59.93.67.41) Quit (Remote host closed the connection)
[13:41] * olid12 (~olid1982@aftr-185-17-204-121.dynamic.mnet-online.de) Quit (Ping timeout: 480 seconds)
[13:43] * madkiss (~madkiss@2001:6f8:12c3:f00f:488c:c6f:71a9:3f32) Quit (Remote host closed the connection)
[13:43] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[13:47] * DV__ (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[13:47] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[13:48] * Icey (~IceyEC_@0001bbad.user.oftc.net) has joined #ceph
[13:48] * ibravo (~ibravo@72.83.69.64) has joined #ceph
[13:51] * kefu (~kefu@114.92.106.70) has joined #ceph
[13:53] * dan (~dan@dvanders-pro.cern.ch) Quit (Remote host closed the connection)
[13:54] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[13:54] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[13:56] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[14:01] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[14:01] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[14:01] * brutuscat (~brutuscat@60.Red-193-152-185.dynamicIP.rima-tde.net) has joined #ceph
[14:07] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[14:08] * rakeshgm (~rakesh@121.244.87.124) Quit (Ping timeout: 480 seconds)
[14:08] * ibravo (~ibravo@72.83.69.64) Quit (Quit: Leaving)
[14:11] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[14:14] * shaunm (~shaunm@208.102.161.229) has joined #ceph
[14:14] * overclk (~overclk@59.93.67.41) has joined #ceph
[14:15] * overclk (~overclk@59.93.67.41) Quit (Remote host closed the connection)
[14:17] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[14:18] * nhm (~nhm@c-50-171-139-246.hsd1.mn.comcast.net) has joined #ceph
[14:18] * ChanServ sets mode +o nhm
[14:19] * bara (~bara@nat-pool-brq-u.redhat.com) has joined #ceph
[14:21] * bene2 (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[14:21] * mattronix_ (~quassel@2a01:7c8:aab8:616:5054:ff:fe89:506b) has joined #ceph
[14:22] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[14:23] * kefu (~kefu@114.92.106.70) Quit (Max SendQ exceeded)
[14:24] * kefu (~kefu@114.92.106.70) has joined #ceph
[14:27] * yanzheng (~zhyan@182.139.23.79) Quit (Quit: This computer has gone to sleep)
[14:27] * ira (~ira@nat-pool-bos-t.redhat.com) has joined #ceph
[14:28] * yanzheng (~zhyan@182.139.23.79) has joined #ceph
[14:28] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[14:33] * Kurt (~Adium@2001:628:1:5:f5c5:104c:ce7a:27f8) has joined #ceph
[14:38] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[14:40] * ade (~abradshaw@tmo-108-70.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[14:42] * allenmelon (~Peaced@c-6d5ee255.02-9-65736b4.cust.bredbandsbolaget.se) has joined #ceph
[14:44] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[14:46] * overclk (~overclk@59.93.67.41) has joined #ceph
[14:48] * overclk (~overclk@59.93.67.41) Quit (Remote host closed the connection)
[14:51] * mancdaz (~mancdaz@2a00:1a48:7806:117:be76:4eff:fe08:7623) has joined #ceph
[14:56] * overclk (~overclk@59.93.67.41) has joined #ceph
[14:56] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) has joined #ceph
[14:57] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[15:00] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[15:00] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[15:02] * kefu is now known as kefu|afk
[15:03] * kefu|afk (~kefu@114.92.106.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[15:11] * dgurtner (~dgurtner@77.95.96.78) has joined #ceph
[15:12] * allenmelon (~Peaced@5P6AAAP0F.tor-irc.dnsbl.oftc.net) Quit ()
[15:12] * Coestar (~KapiteinK@5P6AAAP2D.tor-irc.dnsbl.oftc.net) has joined #ceph
[15:16] * olid12 (~olid1982@82.113.98.222) has joined #ceph
[15:17] * sileht (~sileht@sileht.net) Quit (Ping timeout: 480 seconds)
[15:23] * kefu (~kefu@114.92.106.70) has joined #ceph
[15:24] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[15:25] * overclk (~overclk@59.93.67.41) Quit (Remote host closed the connection)
[15:26] * olid13 (~olid1982@82.113.98.222) has joined #ceph
[15:26] * olid12 (~olid1982@82.113.98.222) Quit (Read error: Connection reset by peer)
[15:34] * terje (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[15:34] * s3an2 (~root@korn.s3an.me.uk) Quit (Quit: leaving)
[15:35] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[15:39] * Aeso (~AesoSpade@c-68-37-97-11.hsd1.mi.comcast.net) has joined #ceph
[15:39] * yanzheng (~zhyan@182.139.23.79) Quit (Quit: This computer has gone to sleep)
[15:42] * Coestar (~KapiteinK@5P6AAAP2D.tor-irc.dnsbl.oftc.net) Quit ()
[15:44] * Wielebny (~Icedove@cl-927.waw-01.pl.sixxs.net) Quit (Quit: Wielebny)
[15:44] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[15:44] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[15:46] * rwheeler (~rwheeler@pool-173-48-214-9.bstnma.fios.verizon.net) has joined #ceph
[15:48] * thomnico (~thomnico@2a01:e35:8b41:120:546:2d4f:48d3:b4a1) Quit (Quit: Ex-Chat)
[15:49] * kefu (~kefu@114.92.106.70) Quit (Max SendQ exceeded)
[15:49] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[15:51] * dgurtner (~dgurtner@77.95.96.78) Quit (Ping timeout: 480 seconds)
[15:52] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[15:53] * olid13 (~olid1982@82.113.98.222) Quit (Ping timeout: 480 seconds)
[15:53] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[15:53] * olid13 (~olid1982@82.113.98.222) has joined #ceph
[15:54] * rakeshgm (~rakesh@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:55] * mattbenjamin (~mbenjamin@aa2.linuxbox.com) has joined #ceph
[16:00] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[16:01] * kefu (~kefu@114.92.106.70) has joined #ceph
[16:01] <Heebie> How much SSD are people using for journaling on 3TB 7200RPM disks?
[16:03] * ircolle (~Adium@2601:285:201:2bf9:8919:8fda:7dbc:888c) has joined #ceph
[16:07] * haomaiwang (~haomaiwan@li414-102.members.linode.com) has joined #ceph
[16:09] <m0zes> 1GB should be enough. 100MB/s * 2 * $min_sync_interval; min_sync_interval is 5 by default. 100MB/s is about the fastest you can expect out of rust. 2 is padding... all of that is based on the journal sizing formula.
[16:10] * delatte (~cdelatte@2001:1998:2000:101::26) has joined #ceph
[16:11] <Heebie> There's a journal sizing formula? Hmm.. I'm guessing "rust" is referencing iron oxide, which further references spinning platters by implication?
[16:11] <m0zes> when I get my ssds in, I'm planning 2GB for spinners, 15GB for SSDs. mostly because my ssds are fast (~1GB/s write rates)
[16:12] <m0zes> yes. rust in this case is slang for spinning disks.
[16:12] * olid13 (~olid1982@82.113.98.222) Quit (Ping timeout: 480 seconds)
[16:13] <m0zes> "The journal size should be at least twice the product of the expected drive speed multiplied by filestore max sync interval." http://docs.ceph.com/docs/master/rados/configuration/osd-config-ref/
[16:13] * delattec (~cdelatte@cpe-71-75-20-42.carolina.res.rr.com) Quit (Ping timeout: 480 seconds)
[16:14] * Heebie has *SOOO* much documentation open! =O
[16:14] <m0zes> there is a lot of it, and sometimes it is very hard to find *just* the right bit of it.
[16:17] * gregmark (~Adium@68.87.42.115) has joined #ceph
[16:18] * vata1 (~vata@207.96.182.162) has joined #ceph
[16:18] * shaunm (~shaunm@208.102.161.229) Quit (Ping timeout: 480 seconds)
[16:19] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[16:22] * olid13 (~olid1982@82.113.98.222) has joined #ceph
[16:23] * dyasny (~dyasny@38.108.87.20) has joined #ceph
[16:23] * sudocat (~dibarra@2602:306:8bc7:4c50::1f) Quit (Quit: Leaving.)
[16:24] <Heebie> You're not kidding m0zes
[16:25] * overclk (~overclk@59.93.67.41) has joined #ceph
[16:26] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[16:29] * olid14 (~olid1982@82.113.98.222) has joined #ceph
[16:29] * olid13 (~olid1982@82.113.98.222) Quit (Read error: Connection reset by peer)
[16:33] * overclk (~overclk@59.93.67.41) Quit (Ping timeout: 480 seconds)
[16:34] * jole (~oftc-webi@ip-176-52-204-228.static.reverse.dsi.net) has joined #ceph
[16:34] * georgem (~Adium@206.108.127.16) has joined #ceph
[16:36] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[16:36] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[16:37] <jole> Hello! I've created a cluster with 72 OSDs, 3 monitors and 1 MDS running on Gentoo. Everything works fine. However, everytime I (re)start my ceph-mds.0 deamon, I get the following message: deprecation warning: "MDS id 'mds.0' is invalid and will be forbidden in a future version. MDS names may not start with a numeric digit." There's nothing about this in the documentation. Can you please tell me what could this warning mean?
[16:38] <jole> My MDS deamon does not start with a numeric value, so I'm a bit confused by this warning.
[16:38] * haomaiwang (~haomaiwan@li414-102.members.linode.com) Quit (Remote host closed the connection)
[16:41] <m0zes> jole: pastebin 'ceph mds dump' ?
[16:41] * jasuarez (~jasuarez@243.Red-81-39-64.dynamicIP.rima-tde.net) Quit (Quit: WeeChat 1.2)
[16:42] <m0zes> and it looks like your mds daemon name does start with a digit. the .0 in the init script name indicates it, at least.
[16:43] * dyasny (~dyasny@38.108.87.20) Quit (Ping timeout: 480 seconds)
[16:44] * olid14 (~olid1982@82.113.98.222) Quit (Ping timeout: 480 seconds)
[16:44] * dyasny (~dyasny@209.171.88.6) has joined #ceph
[16:45] <jole> m0zes: Ok, thanks. Here's the pastebin: http://pastebin.com/ASCzcNKq
[16:46] <m0zes> the '0' in "34798: <IPADDRESS>:<PORT>/XXX '0' mds.0.6 up:active seq 5" says the mds server is just named 0.
[16:46] <jole> m0zes: FYI, I didn't use ceph-deploy (it's doesn't support Gentoo), I configured the cluster manually.
[16:46] <m0zes> the standard naming convention would be $(hostname)
[16:47] <m0zes> right, I'm very familiar with gentoo. I don't run ceph on it, but everything else in my cluster is gentoo.
[16:47] <jole> Hm, ok. Probably I missed this somehow.
[16:48] <jole> I'll remove the current one and try to create a new one.
[16:48] * mhack (~mhack@66-168-117-78.dhcp.oxfr.ma.charter.com) has joined #ceph
[16:49] <jole> I thought that ceph-mds.0 will be the name, and that's why I was confused when it said "it starts with numeric value"
[16:49] <jole> m0zes: Thanks for your help! Cheers
[16:49] * jole (~oftc-webi@ip-176-52-204-228.static.reverse.dsi.net) Quit (Quit: Page closed)
[16:50] <m0zes> I'm not seeing any info on manually deploying an mds server. hrm. and the gentoo docs only reference ceph-mds.0 guess I should check out how to do it properly.
[16:52] <lincolnb> usually its just 'ceph mds newfs [metadata pool] [data pool] --yes-i-really-mean-it' but that gives a numeric name by default iirc
[16:53] * moore (~moore@64.202.160.88) has joined #ceph
[16:54] <m0zes> and that creates a new mds? because ceph --help says "make new filesystem using pools" about that command.
[16:54] <lincolnb> ah yes sorry, my bad
[16:54] <lincolnb> that's a new filesystem
[16:54] <lincolnb> hence newfs..
[16:55] <m0zes> right, which is why I was confused.
[16:55] <lincolnb> hrmm
[16:55] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[16:55] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[16:55] * kevinc (~kevinc__@client65-125.sdsc.edu) has joined #ceph
[16:57] * jwilkins (~jowilkin@67.204.149.211) has joined #ceph
[16:58] <lincolnb> http://www.sebastien-han.fr/blog/2013/05/13/deploy-a-ceph-mds-server/ from sebastien's blog, seems to be just defining it in /etc/ceph.conf, creating the key, and starting it
[16:58] * amote (~amote@1.39.96.51) has joined #ceph
[16:58] <lincolnb> in my ceph.conf i simply have
[16:58] <lincolnb> [mds.a]
[16:59] <lincolnb> host = ceph-mds01
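A minimal sketch of bringing up an MDS with a non-numeric name to go with that ceph.conf stanza (paths and caps follow the upstream defaults of the time; 'a' is a placeholder name, adjust for your init system):

    mkdir -p /var/lib/ceph/mds/ceph-a
    ceph auth get-or-create mds.a mon 'allow profile mds' osd 'allow rwx' mds 'allow' \
        -o /var/lib/ceph/mds/ceph-a/keyring
    ceph-mds -i a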
[17:00] * wushudoin (~wushudoin@2601:646:8201:7769:2ab2:bdff:fe0b:a6ee) has joined #ceph
[17:00] * kefu (~kefu@114.92.106.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[17:02] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:09] * delattec (~cdelatte@cpe-71-75-20-42.carolina.res.rr.com) has joined #ceph
[17:10] * stj (~stj@0001c20c.user.oftc.net) has joined #ceph
[17:10] <m0zes> hrm. I haven't defined any of that in ceph.conf. I wonder if it just has to have a calid keyring...
[17:10] <m0zes> s/calid/valid/
[17:12] * delatte (~cdelatte@2001:1998:2000:101::26) Quit (Read error: Connection reset by peer)
[17:13] <neurodrone> I want to create a new cluster but using the same fsid and keyring as the old one. Is there a guide to do this properly?
[17:14] * bryn1 (~Adium@host4.velocix.com) Quit (Quit: Leaving.)
[17:17] * shaunm (~shaunm@cpe-74-132-70-216.kya.res.rr.com) has joined #ceph
[17:17] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[17:18] <neurodrone> For some reason I keep on getting: "e0 unable to load initial keyring"
[17:18] <neurodrone> One of the locations it checks is /etc/ceph/ceph.keyring, which basically contains my keyring I copy-pasted into it.
[17:18] <neurodrone> Somehow it's unable to recognize it.
[17:24] * shylesh (~shylesh@59.95.71.120) has joined #ceph
[17:24] * shylesh (~shylesh@59.95.71.120) Quit (autokilled: This host may be infected. Mail support@oftc.net with questions. BOPM (2015-11-02 16:24:50))
[17:25] * davidzlap (~Adium@2605:e000:1313:8003:2128:c8c7:1de0:6989) has joined #ceph
[17:26] * markednmbr1 (~Diego@cpc1-lewi13-2-0-cust267.2-4.cable.virginm.net) has joined #ceph
[17:26] <markednmbr1> Hello all
[17:27] <markednmbr1> anyone got any thoughts on this http://tracker.ceph.com/issues/13568#change-60511
[17:28] * ibravo (~ibravo@72.198.142.104) has joined #ceph
[17:29] <m0zes> looks like the journal partitions were created manually? on an mbr disk. not a gpt disk. ceph's tools were designed around gpt.
[17:30] <m0zes> if the journal is symlinked to the device, it is working. and I do mean *to* the device. someone recently just had their journal symlinked to a non-existent file named 'sdaX'
[17:32] <m0zes> personally I'd change the symlink to point at the right /dev/disk/by-id/ entry.
[17:33] <m0zes> just in case disks get renumbered. (I've had hosts do this on *every* reboot)
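The re-pointing m0zes describes would look something like this (a sketch for osd.2 on a sysvinit host; the by-id path is a placeholder, and the journal is flushed and recreated around the switch):

    service ceph stop osd.2
    ceph-osd -i 2 --flush-journal
    ln -sf /dev/disk/by-id/ata-EXAMPLE-SSD-part1 /var/lib/ceph/osd/ceph-2/journal
    ceph-osd -i 2 --mkjournal
    service ceph start osd.2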
[17:34] <Anticimex> how do i debug what ceph has parsed from the config file?
[17:34] <Anticimex> i suspect my config statements aren't getting parsed properly
[17:35] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[17:35] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:35] <m0zes> Anticimex: ceph daemon osd.${x} config show
[17:35] <Anticimex> process is radosgw in this case
[17:35] <m0zes> ahh. not sure. is there a admin socket for rgw?
[17:36] <Anticimex> not sure
[17:36] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[17:37] <m0zes> ls -lah /var/run/ceph (you're looking for an .asok file with rgw or radosgw in the name, probably)
[17:37] * sileht (~sileht@sileht.net) has joined #ceph
[17:38] * dyasny (~dyasny@209.171.88.6) Quit (Ping timeout: 480 seconds)
[17:38] <Anticimex> found a asok file with the key name
[17:39] * stewiem2000 (~Adium@185.80.132.129) Quit (Quit: Leaving.)
[17:39] <m0zes> then 'ceph daemon ${socket} config show'; where socket is without the ceph- initial portion, and without the .asok
[17:39] <m0zes> hopefully that works.
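Spelled out, the admin-socket query for a radosgw instance looks like this (a sketch; client.rgw.gateway1 is a placeholder for whatever .asok name shows up under /var/run/ceph):

    ls /var/run/ceph/
    ceph daemon client.rgw.gateway1 config show
    # or point at the socket file directly
    ceph --admin-daemon /var/run/ceph/ceph-client.rgw.gateway1.asok config show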
[17:39] * overclk (~overclk@59.93.67.41) has joined #ceph
[17:40] * remy1991 (~ravi@115.114.59.182) Quit (Ping timeout: 480 seconds)
[17:40] <Anticimex> yeah, i get output
[17:40] <Anticimex> thx
[17:40] * overclk_ (~overclk@59.93.67.41) has joined #ceph
[17:40] <markednmbr1> m0zes, whats the correct way to prepare the journal? does it just have to be a normal fs?
[17:41] * zaitcev (~zaitcev@c-50-130-189-82.hsd1.nm.comcast.net) has joined #ceph
[17:42] <m0zes> the "correct" way involves gpt partitioning, and setting the parttype to a special uuid. if the journals are on your root fs disks, you've probably done the best you can.
[17:42] <Anticimex> ahh, i start to see..
[17:42] * zenpac (~zenpac3@66.55.33.66) Quit (Quit: Leaving)
[17:43] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[17:43] * amote (~amote@1.39.96.51) Quit (Quit: Leaving)
[17:43] <markednmbr1> yea, they are on the root fs... what I mean is do the partitions need to be formatted in any specific format?
[17:43] <m0zes> there is no filesystem on the journal, it is a (circular?) transaction log that can be replayed in the event things shut down unexpectedly.
[17:43] <markednmbr1> or is jsut ext4/xfs etc ok
[17:43] * zenpac (~zenpac3@66.55.33.66) has joined #ceph
[17:43] <markednmbr1> right, so you're not supposed to format the partition
[17:44] <m0zes> it should use the raw partition if you are using separate disks.
[17:44] <Anticimex> librados and radosgw seems to not agree on how some args should be used :]
[17:44] <markednmbr1> but ceph-disk list will not show that the journal is in use when it's set up like this?
[17:45] * overclk_ (~overclk@59.93.67.41) Quit ()
[17:45] <m0zes> markednmbr1: I don't think so. the tools were designed with a slightly different configuration in mind, primarily ceph having complete control over the disks, or at least gpt partitioning.
[17:45] * mhack (~mhack@66-168-117-78.dhcp.oxfr.ma.charter.com) Quit (Quit: Leaving)
[17:46] * mhack (~mhack@66-168-117-78.dhcp.oxfr.ma.charter.com) has joined #ceph
[17:46] <m0zes> and booting off gpt can be done, but it is more complicated and usually requires EFI. not worth it in the server space, usually.
[17:47] * yguang11 (~yguang11@66.228.162.44) has joined #ceph
[17:47] * overclk (~overclk@59.93.67.41) Quit (Ping timeout: 480 seconds)
[17:47] <markednmbr1> hmm I was hoping that the low performance was because the journals weren't being used but it sounds like they are..
[17:48] * yguang11 (~yguang11@66.228.162.44) Quit ()
[17:48] * dgurtner (~dgurtner@185.10.235.250) has joined #ceph
[17:50] <m0zes> how nice are the ssds the journals are on?
[17:50] <m0zes> http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
[17:51] <m0zes> I have some ssds for OS disks that were so terrible as journals it was actually *faster* to move the journals back onto the spinners.
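The test in that post is essentially a small synchronous dd write, roughly like the following (destructive if pointed at a raw device, so use a scratch partition; /dev/sdX is a placeholder):

    dd if=/dev/zero of=/dev/sdX bs=4k count=100000 oflag=direct,dsync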
[17:51] * stewiem2000 (~Adium@185.80.132.129) has joined #ceph
[17:53] * enax (~enax@hq.ezit.hu) Quit (Ping timeout: 480 seconds)
[17:54] <darkfader> m0zes: what types were that bad?
[17:54] <darkfader> $friend told me they have an ashtray filled with ssds that didn't pass the enhanceio load tests ;)
[17:55] <m0zes> Lite-ON ect-480n9s
[17:55] <m0zes> and they've gotten worse with time.
[17:55] <darkfader> yuck haha
[17:55] <m0zes> I've got a pair currently *read* at 2MB/s
[17:56] * vbellur (~vijay@122.172.210.192) has joined #ceph
[17:56] <darkfader> haha i also had burned through a samsung 830, it was 4MB/s write. the read is ... wow. how they even manage to break it so much
[17:56] <m0zes> $boss decided "read-optimized" ssds were a fine thing to buy
[17:57] <m0zes> now we're spending $45,000 on new pcie ssds for journaling.
[17:57] <darkfader> it's one of the best euphemisms in our industry, didn't ever read that till last week in some dell catalogue
[17:57] <darkfader> m0zes: probably the liteon crap wasn't too much in $$$
[17:58] <darkfader> so the "experience" wasn't too expensive
[17:58] <darkfader> which pcie will it be?
[17:58] <m0zes> Intel DC P3700 400GB. 2 per host, 24 hosts, and 2 cold spares.
[17:59] <darkfader> shiney.
[17:59] <m0zes> I'm hoping so.
[17:59] <darkfader> i recommended same thing to one place who aren't happy with performance atm
[17:59] <darkfader> but i dont know if they'll switch
[17:59] <markednmbr1> m0zes, this ssd is giving me 13.4 MB/s
[17:59] <m0zes> ended up buying them for $875 each. dell wanted $300 more per.
[18:00] <darkfader> they're kinda too big to define a custom server build and it hurts them
[18:00] <darkfader> thats a good price yes
[18:01] <m0zes> markednmbr1: 1 writer, or multiple? and is this while the cluster is under load?
[18:02] <markednmbr1> that was just a dd test from the bottom of that page
[18:02] <markednmbr1> no load on the cluster, its just testing
[18:03] * dgurtner (~dgurtner@185.10.235.250) Quit (Remote host closed the connection)
[18:03] <m0zes> what ssds are these?
[18:03] <markednmbr1> if I run the same test on the sata drive itself I get 12.6MB/s
[18:03] <markednmbr1> so not much difference
[18:03] * dgurtner (~dgurtner@185.10.235.250) has joined #ceph
[18:03] <markednmbr1> sandisk x100
[18:04] <markednmbr1> just a desktop class jobby
[18:04] * bara (~bara@nat-pool-brq-u.redhat.com) Quit (Ping timeout: 480 seconds)
[18:04] * dyasny (~dyasny@209-112-41-210.dedicated.allstream.net) has joined #ceph
[18:04] * olid14 (~olid1982@aftr-185-17-204-121.dynamic.mnet-online.de) has joined #ceph
[18:04] <m0zes> right, and if the ssd can't be 3x faster in parallel, it *could* slow down the write speed of everything.
[18:04] <markednmbr1> I see
[18:05] <markednmbr1> what sort of write speeds are you seeing on your setup
[18:05] <m0zes> since this is testing, I'd move the journals to the osd disk, and see if performance increases.
[18:07] <m0zes> I can write to my cache pool at ~1.5GB/s at the moment, with no ssd journaling. I'm expecting that to ~double with my ssds when I get them in, and reads should hopefully double as well, as the few iops these spinners have can then be dedicated to more useful things
[18:11] <markednmbr1> I've not looked in to this cache pool before
[18:12] <markednmbr1> whats the purpose of that vs journals?
[18:12] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[18:12] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[18:13] <markednmbr1> might I be better off trying these SSD's in a cache pool instead of as journals?
[18:13] <m0zes> I've got a cache tier sitting in front of an erasure coded pool for cephfs. this is because the cephfs clients can't talk directly to an erasure coded pool. cache tiering isn't as fast as it could be at the moment. I think there's quite a bit of work going on for it in infernalis and jewel.
[18:16] <m0zes> you can certainly try them in a cache tier, I doubt they'd help as you'd still have to journal them, and then you'd have some serious write amplification, potentially causing the ssds to wear out 5-10x faster than you would expect.
[18:16] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[18:17] * Loveboyme (~quaLity@130.193.201.104) has joined #ceph
[18:17] * brutuscat (~brutuscat@60.Red-193-152-185.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[18:17] * fabioFVZ (~fabiofvz@213.187.10.8) has joined #ceph
[18:17] * fabioFVZ (~fabiofvz@213.187.10.8) Quit ()
[18:19] * scuttlemonkey is now known as scuttle|afk
[18:20] * domurcuk (~selin@79349dc4.test.dnsbl.oftc.net) has joined #ceph
[18:20] * nevvare (~CigD]em@79349dc4.test.dnsbl.oftc.net) has joined #ceph
[18:20] * nizamettin (~Geveze@111-252-141-167.dynamic.hinet.net) has joined #ceph
[18:20] * duray (~Geveze@111-252-141-167.dynamic.hinet.net) has joined #ceph
[18:20] * domurcuk (~selin@79349dc4.test.dnsbl.oftc.net) has left #ceph
[18:20] * nevvare (~CigD]em@79349dc4.test.dnsbl.oftc.net) has left #ceph
[18:20] * duray (~Geveze@111-252-141-167.dynamic.hinet.net) has left #ceph
[18:20] * nizamettin (~Geveze@111-252-141-167.dynamic.hinet.net) has left #ceph
[18:20] * Loveboyme (~quaLity@130.193.201.104) Quit ()
[18:20] <markednmbr1> m0zes, what fs do you use?
[18:21] <m0zes> xfs
[18:21] * RayTracer (~RayTracer@153.19.7.39) Quit (Remote host closed the connection)
[18:23] * dustinm` (~dustinm`@105.ip-167-114-152.net) Quit (Ping timeout: 480 seconds)
[18:25] <markednmbr1> does the journal still come before the cache tier then?
[18:26] <markednmbr1> or does the cache tier still use a journal
[18:28] <darkfader> m0zes: yeah the new tiering will be nice, it's the first time any (open) software based caching makes a real detection of hot areas
[18:28] <darkfader> no more cache flush just by daring to do backups
[18:29] <m0zes> cache tier has its own journal, as does the underlying pool. therefore 3x write amplification already on the ssds. plus the promote/eviction that will happen on a full cache tier, adding more...
[18:30] * joshd (~jdurgin@206.169.83.146) has joined #ceph
[18:30] * remy1991 (~ravi@122.170.76.73) has joined #ceph
[18:31] * olid14 (~olid1982@aftr-185-17-204-121.dynamic.mnet-online.de) Quit (Ping timeout: 480 seconds)
[18:31] <m0zes> darkfader: indeed. I've churned my cache tier a couple times. trying to fix files hit by bug 12551
[18:31] <kraken> couldn't access that URL. response was 502: No JSON object could be decoded
[18:31] * bla_ (~bla@2001:67c:670:100:1d::c4) Quit (Remote host closed the connection)
[18:32] <m0zes> 50TB cache tier in front of 400TB of real data. thats a lot of promoting/evicting...
[18:34] <Kvisle> that would depend on how much of the data is hot, wouldn't it?
[18:35] <m0zes> detecting and fixing files hit by bug 12551, means you need to read in the file, truncate it to $size + 1, then truncate it to $size. thus huge churn, because every file has to be read.
[18:35] * olid14 (~olid1982@aftr-185-17-204-121.dynamic.mnet-online.de) has joined #ceph
[18:35] <kraken> couldn't access that URL. response was 502: No JSON object could be decoded
[18:36] <markednmbr1> m0zes do you think it would be worth opting for a pci-e nvm card or something just for ceph journal? would that provide a serious performance boost?
[18:36] <m0zes> my fix just reads 128 bytes of the first object, so less churn, but pain none-the-less.
[18:36] <darkfader> m0zes: yuck, ok i think i don't want to be on the bleeding edge in that case
[18:37] <darkfader> thanks for going into that detail
[18:37] <darkfader> might have to in 4 weeks but i'll savor the days
[18:38] <m0zes> markednmbr1: a pcie ssd for 3 osds in a host is probably overkill, maybe one 2.5" DC class ssds from intel... of course, it all depends on what you need vs what you can afford ;)
[18:39] <cetex> so, i have an issue with ceph :)
[18:39] <cetex> http://pastebin.com/ANUQthij
[18:40] * davidzlap (~Adium@2605:e000:1313:8003:2128:c8c7:1de0:6989) Quit (Read error: Connection reset by peer)
[18:40] <cetex> i get loads of ".accept replacing existing (lossy) channel (new one lossy=1)" both between osd's and from osds <-> mons
[18:40] * davidzlap (~Adium@2605:e000:1313:8003:2128:c8c7:1de0:6989) has joined #ceph
[18:40] <cetex> about 250KB/s of logs like that per osd..
[18:40] <m0zes> darkfader: yeah, that was an ugly bug. and it took a while for me to even realize we had an issue, since it turns out my users were just deleting the "corrupt" files and recreating them. not telling me and then getting angry at me in the process.
[18:41] * dwangoAC (~dwangoac@snowball.greenfly.net) has left #ceph
[18:43] * lcurtis_ (~lcurtis@47.19.105.250) has joined #ceph
[18:44] * branto1 (~branto@ip-213-220-232-132.net.upcbroadband.cz) Quit (Quit: Leaving.)
[18:46] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) Quit (Quit: Leaving)
[18:49] * dyasny (~dyasny@209-112-41-210.dedicated.allstream.net) Quit (Ping timeout: 480 seconds)
[18:50] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:56] * mykola (~Mikolaj@91.225.202.134) has joined #ceph
[18:56] <Heebie> Is it necessary to schedule regular scrubbing in ceph?
[18:56] <cetex> Heebie: as far as i know it's done regularly
[18:57] <Heebie> I just noticed there's a scrub command, so I didn't know if it was something automatic.
[18:58] <m0zes> yeah, there is regular scrubbing done by default. also deep-scrubbing.
[18:59] <m0zes> as long as you haven't set noscrub or nodeep-scrub
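For reference, those flags are cluster-wide and can be checked and toggled like so:

    ceph osd dump | grep flags
    ceph osd set noscrub
    ceph osd unset noscrub
    ceph osd set nodeep-scrub
    ceph osd unset nodeep-scrub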
[19:00] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[19:02] * terje (~root@135.109.216.239) has joined #ceph
[19:07] * nardial (~ls@dslb-088-072-094-085.088.072.pools.vodafone-ip.de) Quit (Quit: Leaving)
[19:10] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[19:12] * davidzlap (~Adium@2605:e000:1313:8003:2128:c8c7:1de0:6989) Quit (Read error: Connection reset by peer)
[19:12] * davidzlap (~Adium@2605:e000:1313:8003:2128:c8c7:1de0:6989) has joined #ceph
[19:14] * Quatroking (~sese_@185.101.107.227) has joined #ceph
[19:20] <markednmbr1> m0zes, looking at that link you sent me, the DC3710 didn't even get great results with that io test. only 37MB/s
[19:21] * ibravo (~ibravo@72.198.142.104) Quit (Quit: Leaving)
[19:22] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[19:22] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[19:24] * pabluk is now known as pabluk_
[19:25] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has left #ceph
[19:25] * dupont-y (~dupont-y@familledupont.org) has joined #ceph
[19:26] * Venturi (Venturi@93-103-91-169.dynamic.t-2.net) has joined #ceph
[19:26] * garphy is now known as garphy`aw
[19:26] <Venturi> what does ceph do in case of small write (aka 5KBs), how does data striping work here since 4MB is smallest unit to write object to rados, how does this work?
[19:27] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[19:28] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) has joined #ceph
[19:37] * madkiss (~madkiss@2001:6f8:12c3:f00f:b9a0:8ede:9404:feb0) has joined #ceph
[19:41] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[19:42] * kutija (~kutija@95.180.90.38) has joined #ceph
[19:42] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[19:42] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[19:42] * vbellur (~vijay@122.172.210.192) Quit (Ping timeout: 480 seconds)
[19:44] * bitserker (~toni@88.87.194.130) Quit (Ping timeout: 480 seconds)
[19:44] * Quatroking (~sese_@4Z9AAAPYN.tor-irc.dnsbl.oftc.net) Quit ()
[19:45] * stiopa (~stiopa@cpc73828-dals21-2-0-cust630.20-2.cable.virginm.net) has joined #ceph
[19:46] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[19:48] <markednmbr1> m0zes, whats your opinion on using a single pcie ssd (like Intel 750) partitioned up to be journal for sata disks, and as a cache tier?
[19:55] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:8929:c00d:eb9:354b) Quit (Ping timeout: 480 seconds)
[19:58] * dupont-y (~dupont-y@familledupont.org) Quit (Ping timeout: 480 seconds)
[19:58] * dgurtner (~dgurtner@185.10.235.250) Quit (Ping timeout: 480 seconds)
[20:01] * ira (~ira@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[20:02] * daviddcc (~dcasier@80.215.226.131) has joined #ceph
[20:04] * davidzlap1 (~Adium@2605:e000:1313:8003:2128:c8c7:1de0:6989) has joined #ceph
[20:05] * davidzlap (~Adium@2605:e000:1313:8003:2128:c8c7:1de0:6989) Quit (Read error: Connection reset by peer)
[20:08] * remy1991 (~ravi@122.170.76.73) Quit (Ping timeout: 480 seconds)
[20:10] * thomnico (~thomnico@2a01:e35:8b41:120:546:2d4f:48d3:b4a1) has joined #ceph
[20:16] <m0zes> markednmbr1: the 750 series is only rated for 70GB a day of writes.
[20:19] * ibravo (~ibravo@72.83.69.64) has joined #ceph
[20:20] <monsted> the P-series ones are a good fit, though
[20:22] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:c11e:ac88:f68b:f7e8) has joined #ceph
[20:22] <monsted> a bit large for a journal, but 4.8 TBW/day...
[20:23] <monsted> and holy hell, that performance :)
[20:23] <monsted> /Users/monsted/Downloads/ssd-dc-p3608-spec.pdf
[20:23] <monsted> er
[20:23] <monsted> i totally meant to do that...
[20:23] <monsted> http://www.intel.com/content/www/us/en/solid-state-drives/ssd-dc-p3608-spec.html
[20:23] <cetex> :)
[20:24] <cetex> i've been looking at p3600 / p3700
[20:24] <cetex> but we'll kill them too fast..
[20:24] <cetex> :<
[20:24] <cetex> so i'm investigating journal in ram instead.
[20:24] <monsted> you're doing more than 3 drive writes per day?
[20:24] <cetex> yeah.
[20:24] <cetex> we're looking at doing more than 3 drive writes per day.
[20:25] <cetex> depends though. i've been looking at the s3600 / s3700
[20:25] <cetex> 100GB variant.
[20:25] <monsted> yeah, this one is 1.6TB and 3 DWPD
[20:25] <monsted> (and you can have 4 TB)
[20:25] <cetex> if we go for p3700 instead we'll have at least 400GB, so should last considerably longer, but then we'd need more hdd's per host.
[20:26] <monsted> the NVMe speeds are sexy though :)
[20:26] <cetex> to make up for the expense.
[20:26] <cetex> yeah.
[20:28] <monsted> $5k each is a little steep :)
[20:28] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[20:28] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[20:29] <cetex> but we're looking at 0.75-1TBW/HDD/day, so the s3700/100GB for journal and 2 HDD's would get 15-20 DWPD.
[20:29] <monsted> but i could imagine stuffing two of them into something like the HP Apollo 4200
[20:30] <cetex> P3700 / 400GB is quite a bit more pricey but would only get 3.75 - 5DWPD though.
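As a sanity check on those endurance numbers, a quick back-of-the-envelope, assuming the journal absorbs every byte written to the HDDs behind it:

    journal DWPD  =  (write volume per HDD per day * HDDs per journal) / journal capacity
    S3700 100GB:     (0.75..1 TB/day * 2) / 0.1 TB   =  15..20 DWPD
    P3700 400GB:     (0.75..1 TB/day * 2) / 0.4 TB   =  3.75..5 DWPD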
[20:30] <monsted> damn, that's a lot
[20:30] <cetex> yeah. it's probably not a common use-case, we need a 200TB ringbuffer.
[20:30] <monsted> and you'll be rewriting that data over and over again?
[20:30] <cetex> yeah.
[20:31] <cetex> live video w/ timeshifting.
[20:31] <cetex> on average we store 1 weeks worth of video.
[20:31] <cetex> kinda.
[20:31] <cetex> for now.. :>
[20:33] <m0zes> the 400GB P3700 is rated at 10 DWPD
[20:33] * daviddcc (~dcasier@80.215.226.131) Quit (Read error: Connection reset by peer)
[20:33] <cetex> aaah. right. yeah.
[20:33] <cetex> so that should last us quite some time.
[20:33] <monsted> at some point you might be better off without the SSDs :)
[20:33] <cetex> but then we need new hosts ;D
[20:33] <m0zes> and it is only $875 (at least, that is what we just paid for 50 of them)
[20:33] * PcJamesy (~Qiasfah@tor-exit-node-nibbana.dson.org) has joined #ceph
[20:34] <cetex> i'd like to squeeze in 2-3 hdd's into our blades since we have a couple of hundred of them.
[20:34] <cetex> but then i can only (maybe) fit pci-e ssd's.
[20:36] * davidzlap (~Adium@2605:e000:1313:8003:2128:c8c7:1de0:6989) has joined #ceph
[20:36] * davidzlap1 (~Adium@2605:e000:1313:8003:2128:c8c7:1de0:6989) Quit (Read error: Connection reset by peer)
[20:38] * davidzlap1 (~Adium@2605:e000:1313:8003:2128:c8c7:1de0:6989) has joined #ceph
[20:38] * davidzlap (~Adium@2605:e000:1313:8003:2128:c8c7:1de0:6989) Quit (Read error: Connection reset by peer)
[20:40] <cetex> so i'm seriously contemplating just shoving more ram into them and running journal on ramdisk.
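A minimal sketch of what that could look like (paths and sizes here are hypothetical; only sensible for data you can afford to lose, since a reboot or power loss throws the journals away):

    # back the journal directory with tmpfs
    mount -t tmpfs -o size=6g tmpfs /var/lib/ceph/journals
    # then point the OSDs at it in ceph.conf:
    #   [osd]
    #   osd journal = /var/lib/ceph/journals/$cluster-$id/journal
    #   osd journal size = 5120        ; MB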
[20:40] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[20:40] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[20:42] <m0zes> if the data isn't important in the long term, that might be a decent idea ;-). would it be a fatal (or even painful) blow to the company if the data was lost?
[20:42] <cetex> nope.
[20:42] <cetex> would be annoying
[20:43] <m0zes> the other question would be "how much work would it be to get the cluster back into a usable state?" after a power outage.
[20:44] <cetex> yeah. i've done some tests.
[20:45] <monsted> isn't the point of the journal to ensure that the file system is consistent?
[20:45] <cetex> the problem doesn't seem to be loss of the journal; the problem is that ceph prints this (http://pastebin.com/ANUQthij) at roughly 500KB/s, which fills up the ramdisk everything is running in within 2-3 hours
[20:45] <cetex> yeah. it is.
[20:45] <monsted> how about just using ZFS and letting it figure out the consistency itself?
[20:46] <cetex> the nice thing with ceph is that radosgw works on top of it. not sure how well it will work with 300-500M objects stored though.
[20:46] <cetex> so we have s3'ish interface available (we use S3 today)
[20:46] <monsted> (i think Btrfs does the same, but lulz, btrfs...)
[20:46] <m0zes> the journal is still necessary for zfs and btrfs.
[20:48] <m0zes> I don't know enough about the logging to know what the heck those messages are talking about. would an option be to logrotate every hour?
[20:48] <cetex> yeah. i do that already
[20:48] <cetex> but it's very far from optimal :)
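A sketch of what hourly rotation might look like (assumes logrotate is driven from an hourly cron job rather than the stock daily run; the postrotate HUP is what makes the ceph daemons reopen their log files):

    # /etc/logrotate.d/ceph  (hypothetical)
    /var/log/ceph/*.log {
        rotate 24
        compress
        missingok
        notifempty
        sharedscripts
        postrotate
            killall -q -1 ceph-mon ceph-osd ceph-mds || true
        endscript
    }
    # driven from cron, e.g. in /etc/cron.d/ceph-logrotate:
    #   0 * * * * root /usr/sbin/logrotate -f /etc/logrotate.d/ceph

Turning the relevant debug_* levels down in ceph.conf would be the other half of the fix, if the messages turn out to come from a debug subsystem.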
[20:49] <cetex> ceph also uses all sockets available on the host
[20:49] <m0zes> have you increased the open file limit from the default?
[20:49] <cetex> hm, no. i haven't actually. one sec and i'll see what it's set at
[20:50] <cetex> 524288
[20:50] <cetex> that's the number of FD's set. (no limit, is root)
[20:50] <cetex> ulimit -n :)
[20:51] <m0zes> mine's set to 13093150
[20:52] <cetex> that's a lot of FD's..
[20:52] <cetex> if ceph needs that many i wonder where it went wrong ;D
[20:54] <monsted> you might consider logging to logstash or splunk on a different host
[20:54] <cetex> maybe, but no. i don't really care about the logs.
[20:54] <m0zes> I'm honestly not sure where that is being set from.
[20:54] <cetex> more important that ceph doesn't kill the host :)
[20:55] <cetex> https://github.com/ceph/ceph/blob/master/src/msg/simple/Pipe.cc
[20:56] <cetex> https://github.com/ceph/ceph/blob/master/src/msg/simple/Pipe.cc#L494
[20:56] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) has joined #ceph
[20:57] <m0zes> the default file-max seems to be ~10 * MB of memory in the host.
[20:58] <m0zes> s/M/K/
[20:58] <cetex> yeah. we have 24gb and 48gb on the hosts. the one with 524288fd's had 48GB.
[20:59] * rendar (~I@host114-183-dynamic.37-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[20:59] <cetex> oooh..
[20:59] <cetex> hm
[20:59] * mykola (~Mikolaj@91.225.202.134) Quit (Quit: away)
[21:00] <cetex> nope. same thing. 524k on that one as well, at least within the container. only 24GB ram though.
[21:01] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[21:01] * rendar (~I@host114-183-dynamic.37-79-r.retail.telecomitalia.it) has joined #ceph
[21:02] <cetex> hm.. maybe it's sysctl's fs.file-max that's blocking?
[21:03] * PcJamesy (~Qiasfah@4K6AACDGC.tor-irc.dnsbl.oftc.net) Quit ()
[21:04] <cetex> nope. 2.5M there..
[21:04] <cetex> baah
[21:04] <cetex> :>
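For anyone following along, a rough sketch of where each of those limits lives (the ceph.conf knob applies to the classic sysvinit scripts; systemd units use LimitNOFILE= instead):

    cat /proc/sys/fs/file-max               # kernel-wide cap; default is roughly RAM-in-kB / 10
    sysctl -w fs.file-max=2621440           # raise it (persist via /etc/sysctl.conf)
    ulimit -n                               # per-process soft limit for the current shell
    cat /proc/$(pidof -s ceph-osd)/limits   # what a running OSD actually inherited
    # ceph.conf equivalent used by the init scripts:
    #   [global]
    #   max open files = 131072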
[21:04] * mattronix (~quassel@2a02:898:162:1::1) Quit (Remote host closed the connection)
[21:05] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[21:10] * mattronix (~quassel@2a02:898:162:1::1) has joined #ceph
[21:12] <cetex> hm. found one issue with ramdisk only
[21:13] <cetex> rebooted all monitors simultaneously, and didn't realize they were only running on ramdisk. :)
[21:13] <cetex> no way to recover.
[21:18] * dgurtner (~dgurtner@185.10.235.250) has joined #ceph
[21:20] * davidzlap1 (~Adium@2605:e000:1313:8003:2128:c8c7:1de0:6989) Quit (Read error: Connection reset by peer)
[21:20] * davidzlap (~Adium@2605:e000:1313:8003:2128:c8c7:1de0:6989) has joined #ceph
[21:23] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:c11e:ac88:f68b:f7e8) Quit (Ping timeout: 480 seconds)
[21:39] <cetex> only a lab though :)
[21:39] <cetex> the intention was to kill ceph (i haven't succeeded in destroying data before, even though i've certainly tried)
[21:47] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[21:48] * thomnico (~thomnico@2a01:e35:8b41:120:546:2d4f:48d3:b4a1) Quit (Quit: Ex-Chat)
[21:52] * davidzlap1 (~Adium@2605:e000:1313:8003:2128:c8c7:1de0:6989) has joined #ceph
[21:52] * davidzlap (~Adium@2605:e000:1313:8003:2128:c8c7:1de0:6989) Quit (Read error: Connection reset by peer)
[21:52] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[21:58] * curtis864 (~capitalth@94.242.243.250) has joined #ceph
[21:59] * enax (~enax@94-21-125-223.pool.digikabel.hu) has joined #ceph
[22:11] <Venturi> how does ceph deal with small files? is it suitable for a non-VM environment, such as a storage backend for a build environment (linux compile environment), where objects smaller than 4MB would also be saved to RADOS? does Ceph pad smaller objects out to 4 MB chunks, and how does that influence the performance?
[22:12] <lurbs_> https://www.mail-archive.com/ceph-users@lists.ceph.com/msg24586.html
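For what it's worth, objects in RADOS are variable-sized; a 5 KB object takes roughly 5 KB on the OSDs rather than being padded to 4 MB (4 MB is just the default size RBD and CephFS stripe their data into). A quick way to see that, assuming a pool called rbd exists:

    dd if=/dev/urandom of=small.bin bs=1k count=5
    rados -p rbd put small-object small.bin
    rados -p rbd stat small-object      # reports the real size, 5120 bytes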
[22:12] * lurbs_ is now known as lurbs
[22:15] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:c11e:ac88:f68b:f7e8) has joined #ceph
[22:15] * davidzlap1 (~Adium@2605:e000:1313:8003:2128:c8c7:1de0:6989) Quit (Read error: Connection reset by peer)
[22:15] * davidzlap (~Adium@2605:e000:1313:8003:2128:c8c7:1de0:6989) has joined #ceph
[22:18] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[22:20] <Venturi> lurbs_: thank you for the link.
[22:26] <markednmbr1> m0zes, only 70G a day!
[22:26] <markednmbr1> that's no use ;)
[22:27] * georgem (~Adium@206.108.127.16) Quit (Ping timeout: 480 seconds)
[22:27] <m0zes> correct. which is why "enterprise" drives are recommended. that would be a "consumer" drive, but I'm sure intel would price it at the "prosumer" range.
[22:27] * curtis864 (~capitalth@4Z9AAAP5W.tor-irc.dnsbl.oftc.net) Quit ()
[22:28] * dyasny (~dyasny@198.251.57.99) has joined #ceph
[22:30] <markednmbr1> m0zes, what are you using for connectivity between nodes, 10Gb?
[22:30] <m0zes> I'm using 40GbE. mostly for latency, and making sure network is never the bottleneck.
[22:31] <m0zes> 65% of my clients are connected at 40GbE, the rest are 10GbE
[22:32] * chrisinajar (~Pommesgab@c-6d5ee255.02-9-65736b4.cust.bredbandsbolaget.se) has joined #ceph
[22:32] <markednmbr1> infiniband?
[22:32] <m0zes> nope, ethernet.
[22:33] <m0zes> some of the 10Gb clients have Infiniband as well, but not for ceph access.
[22:33] <markednmbr1> hah, didn't even know that was around
[22:33] * _28_ria (~kvirc@194.226.6.106) Quit (Read error: Connection reset by peer)
[22:33] <m0zes> we're getting some 100Gb ethernet to test any day now.
[22:34] * _28_ria (~kvirc@194.226.6.106) has joined #ceph
[22:34] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[22:35] <markednmbr1> nice
[22:35] <markednmbr1> is there anything in between the 750 and the p3700
[22:36] <markednmbr1> the p3600 and p3500 don't have the iops of the 3700
[22:39] <markednmbr1> tried any other brands other than intel?
[22:40] * segutier (~segutier@sfo-vpn1.shawnlower.net) has joined #ceph
[22:40] * bene2 (~bene@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[22:41] * shaunm (~shaunm@cpe-74-132-70-216.kya.res.rr.com) Quit (Ping timeout: 480 seconds)
[22:45] <debian112> can I run cache-tiering and regular data objects on the same ssd drives?
[22:47] <debian112> I am sure I can, but is it a good practice?
[22:47] <lurbs> By having two different pools (one for cache, one for data) that use the same CRUSH map that selects SSDs, yes.
[22:47] <lurbs> Really depends on your workload(s).
[22:49] <debian112> cool, I guess its worth a try to see how it performs
[22:52] <m0zes> I don't recommend it, especially if one pool depends on the other.
[22:53] <lurbs> Oh yes, putting the cache in front of the data pool on the same SSDs would be a very bad idea.
[22:54] <m0zes> I did something like that with spinners, cachepool and ecpool were on the same disks, slowness all around under load.
[22:54] * dustinm` (~dustinm`@2607:5300:100:200::160d) has joined #ceph
[22:54] <m0zes> moving the cachepool to the 4tb disks, ecpool to the 6tb disks and all of a sudden the cluster is 95% happier.
[22:55] <lurbs> Putting the cache in front of a pool of spinners might work, but you could very easily bottleneck on the SSDs if you need performance for both the cache and data pools simultaneously.
[22:55] <markednmbr1> anyone done a cluster with just ssds (no spinners?)
[22:56] <markednmbr1> whats performance like with that
[22:56] <debian112> not planning to do that
[22:56] * enax (~enax@94-21-125-223.pool.digikabel.hu) Quit (Ping timeout: 480 seconds)
[22:57] <debian112> the all-SSD cache tier will flush back to the slow sata disks
[22:58] <markednmbr1> yea that makes sense, just wondering if only doing SSD works really well without having to have cache/journals elsewhere
[23:01] * davidzlap1 (~Adium@2605:e000:1313:8003:2128:c8c7:1de0:6989) has joined #ceph
[23:01] * davidzlap (~Adium@2605:e000:1313:8003:2128:c8c7:1de0:6989) Quit (Read error: Connection reset by peer)
[23:01] * olid14 (~olid1982@aftr-185-17-204-121.dynamic.mnet-online.de) Quit (Ping timeout: 480 seconds)
[23:02] * chrisinajar (~Pommesgab@7V7AAAYI4.tor-irc.dnsbl.oftc.net) Quit ()
[23:06] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[23:08] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[23:09] <m0zes> there have been good and bad experiences put on the mailing list. https://drive.google.com/file/d/0B2gTBZrkrnpZS1Q4VktjZkhrNHc/view talks about getting some decent throughput with 4 nodes and 16 (total) SSDs.
[23:10] * brad- (~Brad@184-94-17-26.dedicated.allstream.net) Quit (Quit: Leaving)
[23:10] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[23:11] * cooey (~Sigma@185.101.107.227) has joined #ceph
[23:11] <m0zes> I'd expect on the order of 2-3GB/s for 128kb random writes, based on the differences between the 4k read and write tests.
[23:14] * georgem (~Adium@207.164.79.37) has joined #ceph
[23:14] * georgem (~Adium@207.164.79.37) Quit ()
[23:14] * georgem (~Adium@206.108.127.16) has joined #ceph
[23:15] <Venturi> Let's say I have a case with two VMs. I create a RADOS block device/image (rbd create ceph-client-rbd --size 102040) within the ceph rbd pool on a storage server. Is it possible to map the created image read-only to one of the VMs while the second VM also has write access?
[23:15] * georgem (~Adium@206.108.127.16) has left #ceph
[23:15] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:c11e:ac88:f68b:f7e8) Quit (Ping timeout: 480 seconds)
[23:17] <Aeso> Venturi, it depends entirely on the filesystem you put on the RBD
[23:18] <m0zes> you might be able to do it, but the "writer" may not flush data to disk, or rbd might cache it, or the metadata could be updated without the "read-only" mount knowing it.
[23:19] <m0zes> in my experience, you need a clustered filesystem to handle multi-mounting. whether or not one is read-only.
[23:19] <m0zes> i.e. ocfs or gfs.
[23:21] * davidzlap1 (~Adium@2605:e000:1313:8003:2128:c8c7:1de0:6989) Quit (Read error: Connection reset by peer)
[23:21] * davidzlap (~Adium@2605:e000:1313:8003:2128:c8c7:1de0:6989) has joined #ceph
[23:22] <Venturi> if xfs is put on RBD?
[23:23] <m0zes> xfs is not clustered, and it updates metadata on mount, whether or not the mount is going to be read-only. that will screw up your disk very very quickly.
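A sketch of the mapping Venturi described, for illustration (image name taken from above; the --read-only flag keeps the block device itself read-only, but does nothing for filesystem-level consistency):

    # VM 1: read-write mapping
    rbd map rbd/ceph-client-rbd
    # VM 2: read-only mapping of the same image
    rbd map rbd/ceph-client-rbd --read-only
    # without a cluster-aware filesystem (ocfs2/gfs2) on the image, the
    # read-only client will still see stale or inconsistent metadata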
[23:24] <markednmbr1> im just looking at trying out a cache tier
[23:25] <markednmbr1> I have an SSD with 100G free in each host that I want to try as the cache. Any pointers on how to create this? Do I have to make a separate bucket in the crush map, or should it go with the existing host?
[23:26] <Venturi> so you prefer to use some clustered fs on top of such mounts?
[23:27] <Aeso> Venturi, what you're trying to do isn't really the intent of rbd. It sounds like CephFS will do what you're looking for just fine.
[23:27] <Venturi> we have some legacy applications ported to VMs, and within two VMs there are two mounts intended for drbd data replication. now i want to move this out of the VMs to the ceph rados pool
[23:28] <m0zes> markednmbr1: http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
[23:29] <markednmbr1> thanks m0zes
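Roughly, the recipe from that post plus the tiering commands looks like this (pool names and sizes are made up; it assumes the crush map has already been split into an ssd root/hosts as the post describes, and uses the pre-Luminous crush_ruleset syntax of this era):

    # rule that places data only on the ssd part of the crush tree
    ceph osd crush rule create-simple ssd-rule ssd host
    # dedicated cache pool on the SSDs
    ceph osd pool create ssd-cache 128 128 replicated
    ceph osd pool set ssd-cache crush_ruleset 1        # id of ssd-rule
    # attach it as a writeback cache in front of the existing pool
    ceph osd tier add sata-pool ssd-cache
    ceph osd tier cache-mode ssd-cache writeback
    ceph osd tier set-overlay sata-pool ssd-cache
    # minimum sane tuning
    ceph osd pool set ssd-cache hit_set_type bloom
    ceph osd pool set ssd-cache target_max_bytes 100000000000   # keep well below usable SSD capacity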
[23:33] * dack (~darrelle@gateway.ola.bc.ca) has joined #ceph
[23:34] <dack> I know a lot of work has been going into cephfs - does anyone know if it is going to be considered "production ready" in infernalis?
[23:35] <gregsfortytwo> I think the upstream community will be marking it production ready in Jewel
[23:36] <gregsfortytwo> that's what Sage said at the OpenStack summit last week, and is when we'll have at least our first-run fsck stuff done :)
[23:36] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) has joined #ceph
[23:36] <dack> awesome, thanks!
[23:36] <gregsfortytwo> I'm not sure when Red Hat/SUSE/Canonical will be offering support in their downstreams
[23:36] <Venturi> Is CephFS already mature enough or should I go with this: http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/
[23:37] <Venturi> aha ok, so this production-ready cephfs would be in 2016?
[23:37] <gregsfortytwo> that's the expectation
[23:38] <dack> right now we are using a proprietary shared filesystem, and i'm dreaming of the day we can switch that over to ceph
[23:40] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Ping timeout: 480 seconds)
[23:41] * cooey (~Sigma@4Z9AAAP9B.tor-irc.dnsbl.oftc.net) Quit ()
[23:42] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[23:45] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[23:49] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[23:49] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[23:50] * jdillaman_ (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[23:51] * dgurtner (~dgurtner@185.10.235.250) Quit (Ping timeout: 480 seconds)
[23:53] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[23:53] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[23:55] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[23:55] * jdillaman_ is now known as jdillaman
[23:58] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[23:59] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.