#ceph IRC Log

Index

IRC Log for 2015-12-18

Timestamps are in GMT/BST.

[0:00] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[0:00] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[0:01] * leseb_ is now known as leseb_away
[0:07] * linuxkidd (~linuxkidd@49.sub-70-209-96.myvzw.com) Quit (Remote host closed the connection)
[0:08] * Szernex (~bildramer@76GAAACV3.tor-irc.dnsbl.oftc.net) Quit ()
[0:09] * mtb` (~mtb`@157.130.171.46) Quit (Ping timeout: 480 seconds)
[0:14] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[0:15] * fred`` (fred@2001:6f8:10c0:0:2010:abec:24d:2500) Quit (Quit: +++ATH0)
[0:19] * wjw-freebsd2 (~wjw@smtp.digiware.nl) has joined #ceph
[0:20] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Read error: Network is unreachable)
[0:23] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[0:23] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[0:25] * ira (~ira@c-73-238-173-100.hsd1.ma.comcast.net) Quit (Ping timeout: 480 seconds)
[0:32] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[0:34] * ircolle (~Adium@2601:285:201:2bf9:28f3:2196:58af:cd5a) has joined #ceph
[0:37] * Kingrat (~shiny@2605:a000:161a:c0f6:bc6e:d47:d89a:dc06) Quit (Remote host closed the connection)
[0:41] * fred`` (fred@earthli.ng) has joined #ceph
[0:43] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[0:46] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[0:47] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit ()
[0:53] * ira (~ira@c-73-238-173-100.hsd1.ma.comcast.net) has joined #ceph
[0:53] * ira (~ira@c-73-238-173-100.hsd1.ma.comcast.net) Quit ()
[0:56] * leseb_away is now known as leseb_
[0:56] * leseb_ is now known as leseb_away
[0:57] * ircolle (~Adium@2601:285:201:2bf9:28f3:2196:58af:cd5a) Quit (Quit: Leaving.)
[0:57] * leseb_away is now known as leseb_
[0:58] * leseb_ is now known as leseb_away
[0:58] * ircolle (~Adium@2601:285:201:2bf9:146b:a453:e2f4:90a1) has joined #ceph
[0:59] * cathode (~cathode@50-198-166-81-static.hfc.comcastbusiness.net) Quit (Quit: Leaving)
[1:03] * rendar (~I@host227-109-dynamic.49-79-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[1:03] * ircolle (~Adium@2601:285:201:2bf9:146b:a453:e2f4:90a1) Quit ()
[1:03] * hgichon (~hgichon@112.220.91.130) has joined #ceph
[1:07] * leseb_away is now known as leseb_
[1:07] * leseb_ is now known as leseb_away
[1:14] * zaitcev (~zaitcev@c-50-130-189-82.hsd1.nm.comcast.net) Quit (Ping timeout: 480 seconds)
[1:14] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) Quit (Remote host closed the connection)
[1:16] * fsimonce (~simon@host159-1-dynamic.54-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[1:16] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:25] * EinstCrazy (~EinstCraz@117.15.122.189) Quit (Remote host closed the connection)
[1:27] * krypto (~krypto@65.115.222.48) has joined #ceph
[1:39] * oms101 (~oms101@p20030057EA030600C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:47] * yanzheng (~zhyan@125.71.106.102) has joined #ceph
[1:47] * mattbenjamin (~mbenjamin@aa2.linuxbox.com) Quit (Quit: Leaving.)
[1:48] * oms101 (~oms101@p20030057EA012D00C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:51] <rdleblanc> https://github.com/ceph/ceph/pull/6964 - I can forward declare Formatter.h and make is happy and things are fine. However, building the test suite fails where Formatter has not been declared.
[1:51] <rdleblanc> It doesn't matter if I include the common/Formatter.h in all of the files, it is still undeclared. Any ideas?
[1:53] <rdleblanc> I didn't run into it I guess because I already had things built when I worked on the test suite.
[1:55] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[2:00] * leseb_away is now known as leseb_
[2:00] * leseb_ is now known as leseb_away
[2:02] * EinstCrazy (~EinstCraz@111.30.21.47) has joined #ceph
[2:02] <joshd> rdleblanc: not sure, but you tried including Formatter.h in OpQueue.h?
[2:02] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[2:04] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (Ping timeout: 480 seconds)
[2:05] <joshd> rdleblanc: oh, OpQueue isn't in the ceph namespace, you need to refer to it as ceph::Formatter in OpQueue.h
[2:11] <rdleblanc> joshd: I tried ceph::Formatter too, let me try again.
[2:16] * leseb_away is now known as leseb_
[2:16] <rdleblanc> Without namespace "'ceph' has not been declared"
[2:17] * garphy is now known as garphy`aw
[2:17] * leseb_ is now known as leseb_away
[2:17] <rdleblanc> with namespace "'Formatter' in namespace 'ceph' does not name a type."
[2:20] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:51c5:14c7:e846:667c) Quit (Ping timeout: 480 seconds)
[2:20] <joshd> include/types.h does the forward declaration, and refers to it as ceph::Formatter
[2:21] * KaneK (~kane@12.206.204.58) Quit (Quit: KaneK)
[2:23] * igoryonya (~kvirc@80.83.238.67) has joined #ceph
[2:24] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[2:25] * johnavp1989 (~jpetrini@pool-100-14-5-21.phlapa.fios.verizon.net) has joined #ceph
[2:28] <igoryonya> When I do the command: ceph-deploy mon create host-name, it errors out at the point: admin_socket: exception getting command descriptions: [Error 2] No such file or directory
[2:29] <igoryonya> Ubuntu version 15.10
[2:29] <igoryonya> ceph version 0.94.5
[2:29] <igoryonya> I've checked, on the remote host, where the command tries to deploy/create the mon, there is no file, named /var/run/ceph/ceph-mon.host-name.asok
[2:29] <igoryonya> It tried to do: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.host-name.asok mon_status
[2:30] * Rachana (~Rachana@2601:87:3:3601::65f2) Quit (Ping timeout: 480 seconds)
[2:31] <joshd> igoryonya: the admin socket is normally created when the monitor is running. if there are no ceph-mon processes, check for earlier errors from ceph-deploy or /var/log/ceph/ceph-mon.*.log
[2:32] * spidu_ (~ZombieL@46.166.186.215) has joined #ceph
[2:37] <igoryonya> I've created the main monitor, and it's running on the main host, using the command: ceph-deploy mon create-initial
[2:39] <igoryonya> joshd: ceph status shows that it's running, but health_warn, and it's understandable, because it is only a 1-host cluster yet.
[2:42] <joshd> igoryonya: so you're trying to add a second monitor now?
[2:44] <igoryonya> Yes
[2:44] <igoryonya> joshd: Yes
[2:44] <joshd> igoryonya: you'll want to check for issues with the 2nd monitor in its log in /var/log/ceph on the 2nd host then
[2:44] <igoryonya> joshd: OK, I'll check
[2:48] * LeaChim (~LeaChim@host86-185-146-193.range86-185.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:50] * zhaochao (~zhaochao@111.161.77.238) has joined #ceph
[2:52] <flaf> Hi, I have a ceph cluster with 5 nodes, each with 1 SSD and 3 SATA disks, so 3 OSDs per server (the OS and the 3 journals are on the SSD). I'm trying to bench cephfs and I get ~ 600 iops for 50%/50% rw. It's very low, isn't it?
[2:52] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[2:53] <flaf> (replica size == 3)
[2:56] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[2:56] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[2:56] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[3:00] * yanzheng (~zhyan@125.71.106.102) Quit (Quit: This computer has gone to sleep)
[3:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[3:01] <igoryonya> joshd: The 1st error, that appears from the last command launch, says that host-name does not exist in monmap, will attempt to join a working cluster,..., ahh, it's not an error, but I've typed so much it's a pity to erase :)
[3:02] <joshd> igoryonya: got to go, good luck
[3:02] * yanzheng (~zhyan@125.71.106.102) has joined #ceph
[3:02] * spidu_ (~ZombieL@46.166.186.215) Quit ()
[3:03] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[3:05] * leseb_away is now known as leseb_
[3:05] * leseb_ is now known as leseb_away
[3:06] <igoryonya> joshd: It actually was an error, because the next message says: no public_addr or public_network specified, and mon.host-name not present in monmap or ceph.conf and those messages repeated 4 times.
[3:09] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[3:10] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[3:11] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[3:18] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[3:25] * theengineer (~theengine@45-31-177-36.lightspeed.austtx.sbcglobal.net) Quit (Quit: This computer has gone to sleep)
[3:26] * KaneK (~kane@cpe-172-88-240-14.socal.res.rr.com) has joined #ceph
[3:46] * debian112 (~bcolbert@24.126.201.64) Quit (Quit: Leaving.)
[3:58] * overclk (~vshankar@59.93.68.132) has joined #ceph
[3:59] * Kupo1 (~tyler.wil@23.111.254.159) Quit (Read error: Connection reset by peer)
[3:59] * tenshi (~David@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Read error: Connection reset by peer)
[3:59] * marrusl (~mark@209-150-46-243.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Quit: bye!)
[4:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[4:01] * emsnyder (~emsnyder@65.170.86.132) has joined #ceph
[4:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[4:01] * lcurtis_ (~lcurtis@47.19.105.250) Quit (Remote host closed the connection)
[4:05] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[4:18] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[4:21] * angdraug (~angdraug@c-69-181-140-42.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[4:22] * naoto (~naotok@27.131.11.254) has joined #ceph
[4:24] * wjw-freebsd2 (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[4:25] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[4:40] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[4:48] * dyasny (~dyasny@dsl.198.58.153.172.ebox.ca) Quit (Ping timeout: 480 seconds)
[5:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[5:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[5:02] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[5:06] * lxo (~aoliva@185.101.107.136) has joined #ceph
[5:08] * CoZmicShReddeR (~rf`@85.17.25.22) has joined #ceph
[5:12] * vbellur (~vijay@121.244.87.124) has joined #ceph
[5:18] * Vacuum__ (~Vacuum@88.130.195.142) has joined #ceph
[5:21] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) has joined #ceph
[5:26] * Vacuum_ (~Vacuum@i59F79C7C.versanet.de) Quit (Ping timeout: 480 seconds)
[5:33] * leseb_away is now known as leseb_
[5:33] * leseb_ is now known as leseb_away
[5:38] * CoZmicShReddeR (~rf`@76GAAAC8N.tor-irc.dnsbl.oftc.net) Quit ()
[5:45] * shaunm (~shaunm@208.102.161.229) Quit (Ping timeout: 480 seconds)
[5:46] * overclk (~vshankar@59.93.68.132) Quit (Quit: Zzzz..)
[5:56] * offender (~Neon@ns316491.ip-37-187-129.eu) has joined #ceph
[5:58] * krypto (~krypto@65.115.222.48) Quit (Ping timeout: 480 seconds)
[6:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[6:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[6:11] * kefu (~kefu@114.92.107.250) has joined #ceph
[6:15] * leseb_away is now known as leseb_
[6:15] * leseb_ is now known as leseb_away
[6:18] * yanzheng1 (~zhyan@182.139.204.153) has joined #ceph
[6:18] * yanzheng (~zhyan@125.71.106.102) Quit (Ping timeout: 480 seconds)
[6:18] * kefu_ (~kefu@114.92.107.250) has joined #ceph
[6:20] * kefu (~kefu@114.92.107.250) Quit (Ping timeout: 480 seconds)
[6:22] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[6:24] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) Quit (Remote host closed the connection)
[6:26] * yanzheng1 (~zhyan@182.139.204.153) Quit (Ping timeout: 480 seconds)
[6:26] * offender (~Neon@4MJAAAJF4.tor-irc.dnsbl.oftc.net) Quit ()
[6:28] * yanzheng (~zhyan@182.149.64.193) has joined #ceph
[6:31] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[6:35] * overclk (~vshankar@121.244.87.117) has joined #ceph
[6:49] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[6:55] * kefu_ (~kefu@114.92.107.250) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[7:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[7:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[7:07] * Neon (~Tumm@195-154-69-88.rev.poneytelecom.eu) has joined #ceph
[7:16] * overclk (~vshankar@121.244.87.117) Quit (Ping timeout: 480 seconds)
[7:29] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[7:30] * kefu (~kefu@114.92.107.250) has joined #ceph
[7:30] * reed (~reed@75-101-54-18.dsl.static.fusionbroadband.com) Quit (Quit: Ex-Chat)
[7:32] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[7:32] * rdas (~rdas@6.snat-111-91-72.hns.net.in) has joined #ceph
[7:32] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[7:36] * haomaiwa_ (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[7:37] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[7:37] * Neon (~Tumm@4MJAAAJH7.tor-irc.dnsbl.oftc.net) Quit ()
[7:37] * leseb_away is now known as leseb_
[7:37] * leseb_ is now known as leseb_away
[7:41] * click (~ulterior@31.220.4.161) has joined #ceph
[7:41] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[7:44] * linjan (~linjan@86.62.112.22) has joined #ceph
[7:45] * leseb_away is now known as leseb_
[7:45] * leseb_ is now known as leseb_away
[7:46] * leseb_away is now known as leseb_
[7:47] * leseb_ is now known as leseb_away
[7:48] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[7:48] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[7:53] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[7:54] <Be-El> hi
[8:01] * haomaiwa_ (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[8:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[8:09] * kefu (~kefu@114.92.107.250) Quit (Max SendQ exceeded)
[8:10] * kefu (~kefu@114.92.107.250) has joined #ceph
[8:11] * click (~ulterior@84ZAAADDS.tor-irc.dnsbl.oftc.net) Quit ()
[8:21] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[8:23] * Xmd (~Xmd@78.85.35.236) has joined #ceph
[8:29] * kefu (~kefu@114.92.107.250) Quit (Max SendQ exceeded)
[8:30] * kefu (~kefu@114.92.107.250) has joined #ceph
[8:36] * naoto (~naotok@27.131.11.254) Quit (Read error: Connection reset by peer)
[8:36] * naoto (~naotok@2401:bd00:b001:8920:27:131:11:254) has joined #ceph
[8:37] * bvi (~bastiaan@185.56.32.1) has joined #ceph
[8:40] * bliu (~liub@203.192.156.9) Quit (Remote host closed the connection)
[8:40] * jasuarez (~jasuarez@237.Red-83-39-111.dynamicIP.rima-tde.net) has joined #ceph
[8:40] * shohn (~shohn@dslb-146-060-205-193.146.060.pools.vodafone-ip.de) has joined #ceph
[8:40] * leseb_away is now known as leseb_
[8:40] * leseb_ is now known as leseb_away
[8:43] * leseb_away is now known as leseb_
[8:43] * leseb_ is now known as leseb_away
[8:46] * leseb_away is now known as leseb_
[8:46] * leseb_ is now known as leseb_away
[8:46] * leseb_away is now known as leseb_
[8:46] * leseb_ is now known as leseb_away
[8:47] * leseb_away is now known as leseb_
[8:47] * leseb_ is now known as leseb_away
[8:47] * leseb_away is now known as leseb_
[8:47] * leseb_ is now known as leseb_away
[8:47] * i_m (~ivan.miro@deibp9eh1--blueice4n4.emea.ibm.com) has joined #ceph
[8:48] * leseb_away is now known as leseb_
[8:48] * bliu (~liub@203.192.156.9) has joined #ceph
[8:48] * leseb_ is now known as leseb_away
[8:48] * leseb_away is now known as leseb_
[8:48] * leseb_ is now known as leseb_away
[8:49] * leseb_away is now known as leseb_
[8:49] * garphy`aw is now known as garphy
[8:50] * leseb_ is now known as leseb_away
[8:50] * leseb_away is now known as leseb_
[8:50] * leseb_ is now known as leseb_away
[8:50] * leseb_away is now known as leseb_
[8:50] * leseb_ is now known as leseb_away
[8:52] * leseb_away is now known as leseb_
[8:52] * leseb_ is now known as leseb_away
[8:52] * leseb_away is now known as leseb_
[8:52] * leseb_ is now known as leseb_away
[8:52] * leseb_away is now known as leseb_
[8:53] * leseb_ is now known as leseb_away
[8:53] * leseb_away is now known as leseb_
[8:53] * leseb_ is now known as leseb_away
[8:53] * leseb_away is now known as leseb_
[8:53] * leseb_ is now known as leseb_away
[8:57] * enax (~enax@hq.ezit.hu) has joined #ceph
[8:57] * nardial (~ls@dslb-088-072-085-164.088.072.pools.vodafone-ip.de) has joined #ceph
[8:59] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[9:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[9:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[9:02] * garphy is now known as garphy`aw
[9:06] * naoto_ (~naotok@27.131.11.254) has joined #ceph
[9:08] * egonzalez (~egonzalez@72.Red-83-44-163.dynamicIP.rima-tde.net) has joined #ceph
[9:12] * naoto (~naotok@2401:bd00:b001:8920:27:131:11:254) Quit (Ping timeout: 480 seconds)
[9:16] * analbeard (~shw@support.memset.com) has joined #ceph
[9:22] * shylesh (~shylesh@121.244.87.124) has joined #ceph
[9:22] * bara (~bara@ip4-83-240-10-82.cust.nbox.cz) has joined #ceph
[9:24] * kefu (~kefu@114.92.107.250) Quit (Max SendQ exceeded)
[9:24] * jordanP (~jordan@41.140.144.77.rev.sfr.net) has joined #ceph
[9:24] * kefu (~kefu@114.92.107.250) has joined #ceph
[9:24] * b0e (~aledermue@213.95.25.82) has joined #ceph
[9:27] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) has joined #ceph
[9:27] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) Quit (Remote host closed the connection)
[9:27] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) has joined #ceph
[9:29] * vbellur (~vijay@121.244.87.124) has joined #ceph
[9:31] * garphy`aw is now known as garphy
[9:31] * vbellur (~vijay@121.244.87.124) Quit (Read error: Connection reset by peer)
[9:31] * vbellur (~vijay@121.244.87.124) has joined #ceph
[9:35] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[9:36] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[9:39] * fsimonce (~simon@host159-1-dynamic.54-79-r.retail.telecomitalia.it) has joined #ceph
[9:39] * naoto_ (~naotok@27.131.11.254) Quit (Remote host closed the connection)
[9:40] * garphy is now known as garphy`aw
[9:40] * daviddcc (~dcasier@80.12.63.88) has joined #ceph
[9:43] * naoto (~naotok@2401:bd00:b001:8920:27:131:11:254) has joined #ceph
[9:44] * naoto (~naotok@2401:bd00:b001:8920:27:131:11:254) Quit (Remote host closed the connection)
[9:44] * naoto (~naotok@27.131.11.254) has joined #ceph
[9:46] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[9:50] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[9:55] * kefu (~kefu@114.92.107.250) Quit (Max SendQ exceeded)
[9:56] * kefu (~kefu@114.92.107.250) has joined #ceph
[10:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[10:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[10:02] * garphy`aw is now known as garphy
[10:10] <IcePic> ..dont run scripts in your client that you cant control..
[10:10] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[10:13] * bliu (~liub@203.192.156.9) Quit (Quit: Leaving)
[10:13] * TiCPU (~jeromepou@190-130.cgocable.ca) Quit (Ping timeout: 480 seconds)
[10:18] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:21] * MinedAWAY (~Mined@c-f9c0e455.05-162-6c6b701.cust.bredbandsbolaget.se) Quit (Remote host closed the connection)
[10:22] * TiCPU (~jeromepou@c207.134.3-38.clta.globetrotter.net) has joined #ceph
[10:22] * daviddcc (~dcasier@80.12.63.88) Quit (Ping timeout: 480 seconds)
[10:24] * igoryonya (~kvirc@80.83.238.67) Quit (Ping timeout: 480 seconds)
[10:32] * MinedAWAY (~Mined@c-f9c0e455.05-162-6c6b701.cust.bredbandsbolaget.se) has joined #ceph
[10:36] * igoryonya (~kvirc@80.83.238.67) has joined #ceph
[10:36] * hgichon (~hgichon@112.220.91.130) Quit (Ping timeout: 480 seconds)
[10:36] * Daniel (~Daniel@office.34sp.com) has joined #ceph
[10:37] * Daniel is now known as Guest1716
[10:38] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[10:40] * EinstCrazy (~EinstCraz@111.30.21.47) Quit (Remote host closed the connection)
[10:40] * kefu (~kefu@114.92.107.250) Quit (Max SendQ exceeded)
[10:41] * kefu (~kefu@114.92.107.250) has joined #ceph
[10:42] * treenerd (~treenerd@85.193.140.98) has joined #ceph
[10:56] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:1d6f:c65d:8181:2e23) has joined #ceph
[11:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[11:01] * allaok (~allaok@machine107.orange-labs.com) has left #ceph
[11:01] * analbeard (~shw@support.memset.com) Quit (Ping timeout: 480 seconds)
[11:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[11:05] * rendar (~I@host68-58-dynamic.40-79-r.retail.telecomitalia.it) has joined #ceph
[11:08] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[11:11] * LeaChim (~LeaChim@host86-185-146-193.range86-185.btcentralplus.com) has joined #ceph
[11:14] * analbeard (~shw@5.153.255.226) has joined #ceph
[11:16] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[11:22] * Tarazed (~skrblr@176.123.6.153) has joined #ceph
[11:29] * garphy is now known as garphy`aw
[11:41] * EinstCrazy (~EinstCraz@117.15.122.189) has joined #ceph
[11:49] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[11:52] * Tarazed (~skrblr@4MJAAAJPO.tor-irc.dnsbl.oftc.net) Quit ()
[11:59] * jordanP (~jordan@41.140.144.77.rev.sfr.net) Quit (Quit: Leaving)
[12:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[12:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[12:01] <Kvisle> I wrote a blog post about how we keep our OSD nodes stateless: http://redpill-linpro.com/sysadvent/2015/12/18/stateless-osd-servers.html ... a long post about a really simple trick
[12:03] * kefu (~kefu@114.92.107.250) Quit (Max SendQ exceeded)
[12:03] * kefu (~kefu@211.22.145.245) has joined #ceph
[12:05] * badone (~badone@66.187.239.16) Quit (Remote host closed the connection)
[12:05] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[12:10] * badone (~badone@66.187.239.16) has joined #ceph
[12:14] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[12:18] * shylesh (~shylesh@121.244.87.124) Quit (Remote host closed the connection)
[12:19] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[12:20] * zhaochao (~zhaochao@111.161.77.238) Quit (Quit: ChatZilla 0.9.92 [Iceweasel 38.5.0/20151216011944])
[12:28] * elt (~Diablodoc@4MJAAAJRR.tor-irc.dnsbl.oftc.net) has joined #ceph
[12:32] * mortn (~mortn@217-215-219-69-no229.tbcn.telia.com) has joined #ceph
[12:32] * naoto (~naotok@27.131.11.254) Quit (Quit: Leaving...)
[12:36] <mortn> can I get rid of "mds0: Client [client host] failing to respond to cache pressure" without altering ceph.conf?
[12:43] * ade (~abradshaw@129.206.219.242) has joined #ceph
[12:44] <Be-El> mortn: most changes to ceph.conf can be injected into running daemons
[12:45] <mortn> yes, that's what i've seen as well. I guess my question was more the path under /proc where to inject what :-)
[12:45] <mortn> sorry for being unclear
[12:45] <T1w> don't do that
[12:45] <mortn> ok?
[12:45] <T1w> use ceph tell
[12:45] <Be-El> there's no path in /proc
[12:45] <mortn> ahh, ok
[12:46] <mortn> /sys maybe ?
[12:46] <mortn> had a friend showing some inject hacks the other day - can't remember the path though
[12:47] <mortn> and i just wanted to play around with the mds cache settings without restarting the service
[12:47] * rakeshgm (~rakesh@121.244.87.117) Quit (Ping timeout: 480 seconds)
[12:48] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[12:50] <T1w> using /proc or /sys seems to be the wrong way of doing things where you have multiple daemons on multiple servers
[12:53] <mortn> ceph mds tell \* injectargs '--mds-cache-size 1000000' gives "Invalid command: saw 0 of args..."
[12:58] * elt (~Diablodoc@4MJAAAJRR.tor-irc.dnsbl.oftc.net) Quit ()
[13:00] * overclk (~vshankar@121.244.87.117) has joined #ceph
[13:06] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[13:08] * kefu_ (~kefu@114.92.107.250) has joined #ceph
[13:10] * kefu (~kefu@211.22.145.245) Quit (Ping timeout: 480 seconds)
[13:15] * nardial (~ls@dslb-088-072-085-164.088.072.pools.vodafone-ip.de) Quit (Quit: Leaving)
[13:18] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Read error: Connection reset by peer)
[13:18] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[13:18] * ade (~abradshaw@129.206.219.242) Quit (Quit: Too sexy for his shirt)
[13:22] * ade (~abradshaw@129.206.219.242) has joined #ceph
[13:30] * ade (~abradshaw@129.206.219.242) Quit (Quit: Too sexy for his shirt)
[13:31] * kanagaraj (~kanagaraj@27.7.8.160) has joined #ceph
[13:32] * sc-rm (~rene@mail-outbound.microting.com) has joined #ceph
[13:33] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[13:33] <sc-rm> When I have a running cluster and want to increase the number of pg's for a pool, the number exceeds the per-OSD max, and ceph -w says "health HEALTH_WARN pool images has too few pgs"
[13:34] <sc-rm> I tried to just do ceph osd pool set images pg_num 1024, but it's not allowed to do so, because: Error E2BIG: specified pg_num 1024 is too large (creating 960 new PGs on ~15 OSDs exceeds per-OSD max of 32)
[13:34] * rdas (~rdas@6.snat-111-91-72.hns.net.in) Quit (Quit: Leaving)
[13:35] <sc-rm> I don't care if the cluster is slow for 1 or more hours - since it's not in any state other than getting filled
[13:36] <T1w> you need to grow the number of PGs in smaller increments
[13:36] <T1w> how many do you have now?
[13:36] <T1w> you cannot jump from eg. 32 PGs to 2014 in a single step
[13:36] <T1w> but you can go from 32 to 64
[13:36] <T1w> then from 64 to 128
[13:36] <T1w> etc etc etc..
[13:37] <T1w> err.. not 2014 - it should have been 1024
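
(For reference, the stepwise growth T1w describes might look like the following, using sc-rm's pool name; the intermediate sizes are only an illustration, and pgp_num has to be raised along with pg_num or the data will not actually rebalance onto the new PGs.)

    # grow in steps small enough to stay under the per-OSD split limit
    ceph osd pool set images pg_num 128
    ceph osd pool set images pgp_num 128
    # wait for the cluster to settle (watch ceph -s), then repeat
    ceph osd pool set images pg_num 256
    ceph osd pool set images pgp_num 256
    # ...and so on until the target of 1024 is reached
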
[13:38] * vikhyat (~vumrao@121.244.87.116) Quit (Ping timeout: 480 seconds)
[13:43] * vata (~vata@cable-21.246.173-197.electronicbox.net) Quit (Ping timeout: 480 seconds)
[13:45] * wjw-freebsd (~wjw@vpn.ecoracks.nl) has joined #ceph
[13:46] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[13:47] * Mousey (~Inuyasha@162.216.46.12) has joined #ceph
[13:47] * kanagaraj (~kanagaraj@27.7.8.160) Quit (Quit: Leaving)
[13:50] * analbeard (~shw@5.153.255.226) Quit (Quit: Leaving.)
[13:51] * bugzc_EC (~bugzc_EC@ec2-52-3-149-142.compute-1.amazonaws.com) Quit (Read error: Connection reset by peer)
[13:51] * bugzc_EC (~bugzc_EC@ec2-52-3-149-142.compute-1.amazonaws.com) has joined #ceph
[13:52] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[13:54] * ira (~ira@24.34.255.34) has joined #ceph
[13:56] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[13:59] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) Quit (Quit: No Ping reply in 180 seconds.)
[13:59] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) has joined #ceph
[13:59] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[14:00] * Cybert1nus (~Cybertinu@cybertinus.customer.cloud.nl) has joined #ceph
[14:02] * jamespage (~jamespage@2a00:1098:0:80:1000:42:0:1) Quit (Ping timeout: 480 seconds)
[14:03] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) Quit (Ping timeout: 480 seconds)
[14:03] * harmw (~harmw@chat.manbearpig.nl) Quit (Ping timeout: 480 seconds)
[14:03] * jamespage (~jamespage@2a00:1098:0:80:1000:42:0:1) has joined #ceph
[14:06] * harmw (~harmw@chat.manbearpig.nl) has joined #ceph
[14:10] * johnavp1989 (~jpetrini@pool-100-14-5-21.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[14:11] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Ping timeout: 480 seconds)
[14:15] * jordanP (~jordan@41.140.144.77.rev.sfr.net) has joined #ceph
[14:15] * overclk (~vshankar@121.244.87.117) Quit (Quit: Zzz...)
[14:17] * Mousey (~Inuyasha@162.216.46.12) Quit ()
[14:19] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[14:20] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[14:32] * fred`` (fred@earthli.ng) Quit (Read error: Connection reset by peer)
[14:36] * Rachana (~Rachana@2601:87:3:3601::65f2) has joined #ceph
[14:38] * Rachana (~Rachana@2601:87:3:3601::65f2) Quit ()
[14:43] * fred`` (fred@earthli.ng) has joined #ceph
[14:45] * Rachana (~Rachana@2601:87:3:3601::766) has joined #ceph
[14:46] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[14:46] * analbeard (~shw@support.memset.com) has joined #ceph
[14:47] * nardial (~ls@dslb-088-072-085-164.088.072.pools.vodafone-ip.de) has joined #ceph
[14:49] * Rachana (~Rachana@2601:87:3:3601::766) Quit ()
[14:49] * Rachana (~Rachana@2601:87:3:3601::766) has joined #ceph
[14:57] * cooldharma06 (~chatzilla@14.139.180.40) Quit (Quit: ChatZilla 0.9.92 [Iceweasel 21.0/20130515140136])
[14:57] * johnavp1989 (~jpetrini@64.94.196.252) has joined #ceph
[14:57] * kefu_ (~kefu@114.92.107.250) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[15:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[15:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[15:05] * visbits (~textual@8.29.138.28) has joined #ceph
[15:05] * moore (~moore@71-211-73-118.phnx.qwest.net) Quit (Remote host closed the connection)
[15:05] * garphy`aw is now known as garphy
[15:06] <treenerd> mortn: did you already get the solution, or do you need some help?
[15:08] * dyasny (~dyasny@dsl.198.58.153.94.ebox.ca) has joined #ceph
[15:11] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:12] <mortn> thank you for asking :-) I just modified ceph.conf and did a restart ceph-mds-all anyway
[15:12] <mortn> mds cache size = 100000
[15:12] <mortn> client cache size = 2048
[15:12] <mortn> under the [mds] section
[15:13] <treenerd> mortn: Just for the next time, with ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show you can get the running values on the node where the daemon is running.
[15:13] <treenerd> mortn: for example "osd_max_backfills": "1"
[15:13] <treenerd> mortn: the thing is, the names there use underscores
[15:15] * johnavp1989 (~jpetrini@64.94.196.252) Quit (Read error: Connection reset by peer)
[15:15] <treenerd> mortn: then you can inject the arguments via the monitor node; "ceph tell osd.* injectargs --osd_max_backfills=5" for example
[15:16] <mortn> Thank you, that command I'll have to remember!
[15:16] <treenerd> mortn: then you can check the setting again on a node with a running osd daemon in that cluster with "ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep backfill" for example
[15:17] <treenerd> mortn: same thing should work with mds daemons.
[15:17] <treenerd> mortn: if you want to test that on only one osd just use osd.0 instead of osd.*
[15:18] <treenerd> mortn: to make the changes persistent, you have to add the values to ceph.conf, otherwise they will be lost when the daemon restarts
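
(Putting treenerd's steps together, a rough sketch of the check/inject/persist cycle; osd.0 and the backfill value are just the examples used above, and the admin socket commands have to run on the node that hosts that daemon.)

    # 1. see what a running daemon currently uses
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep osd_max_backfills
    # 2. change it at runtime for all OSDs, from a monitor node (note the underscores)
    ceph tell osd.* injectargs '--osd_max_backfills=5'
    # 3. verify it took effect
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep osd_max_backfills
    # 4. persist it across restarts by also adding it to ceph.conf, e.g. under [osd]:
    #      osd max backfills = 5
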
[15:19] * johnavp1989 (~jpetrini@64.94.196.252) has joined #ceph
[15:20] <mortn> @treenerd: yes, that has had me confused a bit. since the mon's and osd's themselves seem to work fine (in Hammer/Trusty with a Wily LTS-enablementstack) without being added to ceph.conf at all
[15:20] <cephalobot> mortn: Error: "treenerd:" is not a valid command.
[15:22] * amote (~amote@121.244.87.116) Quit (Quit: Leaving)
[15:24] <mortn> treenerd: yes, that has had me confused a bit. since the mon's and osd's themselves seem to work fine (in Hammer/Trusty with a Wily LTS-enablementstack) without being added to ceph.conf at all
[15:31] * LDA (~DM@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[15:32] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) has joined #ceph
[15:32] * egonzalez (~egonzalez@72.Red-83-44-163.dynamicIP.rima-tde.net) Quit (Quit: Saliendo)
[15:32] * theengineer (~theengine@45-31-177-36.lightspeed.austtx.sbcglobal.net) has joined #ceph
[15:34] * johnavp1989 (~jpetrini@64.94.196.252) Quit (Read error: Connection reset by peer)
[15:36] * johnavp1989 (~jpetrini@mobile-166-171-057-205.mycingular.net) has joined #ceph
[15:37] <treenerd> mortn: Okay; Never tried trusty with 4.2 kernel in ceph environment yet. Sounds also interesting. Wish you a nice day; I'm afk for today;
[15:40] * jclm (~jclm@AVelizy-151-1-8-215.w82-120.abo.wanadoo.fr) has joined #ceph
[15:41] * kefu (~kefu@114.92.107.250) has joined #ceph
[15:47] * Moriarty (~Miho@104.238.176.106) has joined #ceph
[15:48] * treenerd (~treenerd@85.193.140.98) Quit (Ping timeout: 480 seconds)
[15:53] * kefu (~kefu@114.92.107.250) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[16:00] * kefu (~kefu@114.92.107.250) has joined #ceph
[16:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[16:01] * mattbenjamin1 (~mbenjamin@aa2.linuxbox.com) has joined #ceph
[16:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[16:02] * MannerMan (~oscar@user170.217-10-117.netatonce.net) Quit (Quit: Leaving)
[16:02] * linjan (~linjan@86.62.112.22) Quit (Ping timeout: 480 seconds)
[16:03] * yanzheng (~zhyan@182.149.64.193) Quit (Quit: This computer has gone to sleep)
[16:04] * vbellur (~vijay@122.171.93.166) has joined #ceph
[16:05] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) has joined #ceph
[16:06] * sc-rm (~rene@mail-outbound.microting.com) Quit (Quit: sc-rm)
[16:10] * i_m (~ivan.miro@deibp9eh1--blueice4n4.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[16:13] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[16:13] * johnavp19891 (~jpetrini@64.94.196.252) has joined #ceph
[16:14] * mtb` (~mtb`@157.130.171.46) has joined #ceph
[16:14] * mtb` (~mtb`@157.130.171.46) Quit ()
[16:16] * gregmark (~Adium@68.87.42.115) has joined #ceph
[16:16] * mtb` (~mtb`@157.130.171.46) has joined #ceph
[16:17] * garphy is now known as garphy`aw
[16:17] * Moriarty (~Miho@76GAAADXT.tor-irc.dnsbl.oftc.net) Quit ()
[16:17] * johnavp19891 (~jpetrini@64.94.196.252) Quit (Read error: Connection reset by peer)
[16:19] * mhack|lunch is now known as mhack|mtg
[16:19] * johnavp1989 (~jpetrini@mobile-166-171-057-205.mycingular.net) Quit (Ping timeout: 480 seconds)
[16:20] * johnavp1989 (~jpetrini@64.94.196.252) has joined #ceph
[16:20] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[16:21] <TiCPU> On Firefly 0.80.10, I'm having an active+clean+inconsistent PG, and I'm trying to repair / deep-scrub it, however, I just found out that any "ceph pg <deep-scrub|repair>" command is being completely ignored by the cluster, I tried restarting each affected OSD and electing a new monitor without success. Any idea? Automatic deep-scrubs are working, though.
[16:23] * johnavp1989 (~jpetrini@64.94.196.252) Quit (Read error: Connection reset by peer)
[16:23] * moore (~moore@64.202.160.88) has joined #ceph
[16:23] * johnavp1989 (~jpetrini@mobile-166-171-057-205.mycingular.net) has joined #ceph
[16:24] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[16:25] * kefu (~kefu@114.92.107.250) Quit (Max SendQ exceeded)
[16:26] * kefu (~kefu@211.22.145.245) has joined #ceph
[16:27] * TheSov2 (~TheSov@cip-248.trustwave.com) has joined #ceph
[16:27] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[16:28] * jclm (~jclm@AVelizy-151-1-8-215.w82-120.abo.wanadoo.fr) Quit (Quit: Leaving.)
[16:34] * tsg (~tgohad@134.134.139.74) has joined #ceph
[16:36] * shohn (~shohn@dslb-146-060-205-193.146.060.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[16:42] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[16:43] <flaf> Hi, I'm looking for a page which shows the ceph-fuse mount options. I have bad performance and I would like to run some tests.
[16:43] <flaf> Do you know where I can find this?
[16:45] <TheSov2> what is your performance experience?
[16:46] <flaf> TheSov2: I have explained my problem in a thread (recent) here http://www.mail-archive.com/ceph-users@lists.ceph.com/msg25710.html
[16:46] * pam (~pam@193.106.183.1) has joined #ceph
[16:46] <TheSov2> how many osd servers do you have
[16:46] <TheSov2> and how many osds in each?
[16:47] <TheSov2> and the size of the pool replication
[16:47] <pam> Hell
[16:47] <pam> o
[16:47] <TheSov2> hello
[16:47] <pam> maybe stupid question but is it possible to use one cache pool for multiple ec pools?
[16:48] <m0zes> no
[16:48] <m0zes> 1-1 ratio...
[16:48] <TheSov2> yeah, thats a negatory
[16:48] <pam> ok thanks for this quick response
[16:48] <TheSov2> top of the morning to you!
[16:48] <flaf> TheSov2: I have 5 servers, each with 3 4GB SATA disks (one OSD per disk, directly connected to the motherboard), and 1 SSD (which contains the OS and the journal of each OSD). The replication size is 3.
[16:49] <TheSov2> so you have 15 osds total am i correct?
[16:49] <flaf> Yes correct.
[16:49] <TheSov2> and you have applied all relevant kernel sysctl's to max out your speed?
[16:49] * krypto (~krypto@198.24.6.220) has joined #ceph
[16:50] <flaf> No, what parameters are you talking about?
[16:50] <TheSov2> echo kernel.pid_max = 4194303 >> /etc/sysctl.conf
[16:50] <TheSov2> echo net.ipv4.netfilter.ip_conntrack_max = 196608 >> /etc/sysctl.conf
[16:50] <TheSov2> echo net.netfilter.nf_conntrack_max = 1048576 >> /etc/sysctl.conf
[16:50] <TheSov2> echo net.core.rmem_max = 16777216 >> /etc/sysctl.conf
[16:50] <TheSov2> echo net.core.wmem_max = 16777216 >> /etc/sysctl.conf
[16:50] <TheSov2> echo net.core.wmem_default = 262144 >> /etc/sysctl.conf
[16:50] <TheSov2> echo net.core.rmem_default = 262144 >> /etc/sysctl.conf
[16:50] <TheSov2> echo net.ipv4.tcp_rmem = '8192 87380 16777216' >> /etc/sysctl.conf
[16:50] <TheSov2> echo net.ipv4.tcp_wmem = '4096 65536 16777216' >> /etc/sysctl.conf
[16:50] <TheSov2> echo net.core.netdev_max_backlog = 30000 >> /etc/sysctl.conf
[16:50] <TheSov2> echo net.ipv4.tcp_congestion_control = htcp >> /etc/sysctl.conf
[16:50] <TheSov2> echo net.ipv4.conf.all.rp_filter = 2 >> /etc/sysctl.conf
[16:51] <TheSov2> those
[16:51] <m0zes> those mem ones depend on your own tuning...
[16:51] <flaf> TheSov2: in the client side or cluster node side ?
[16:51] <TheSov2> true
[16:51] <TheSov2> on the osd servers
[16:51] <TheSov2> and also coulnt hurt on the monitors
[16:51] <m0zes> doesn't hurt on client side...
[16:51] <TheSov2> yeah
[16:51] <TheSov2> it would help
[16:52] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) Quit (Quit: Leaving)
[16:52] <flaf> Ok. No reboot is necessary, correct? I will test, currently the cluster is a test cluster, completely available for me.
[16:52] <TheSov2> no these are sysctl's you need to reboot
[16:52] <TheSov2> unless you change each manually
[16:52] <TheSov2> the thing i pasted adds it to the conf
[16:53] <TheSov2> don't use that directly unless you want it to be permanent
[16:53] * enax (~enax@hq.ezit.hu) Quit (Ping timeout: 480 seconds)
[16:53] <TheSov2> when you do that, it should go a bit faster
[16:53] <TheSov2> the fuse options for cephfs will not increase its speed tremendously
[16:54] <flaf> (yes, if I run sysctl -p /etc/sysctl.d/my-tuning.conf, no reboot)
[16:54] <TheSov2> correct
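
(A sketch of the no-reboot route just mentioned, assuming the tunables go into their own drop-in file; the filename is arbitrary and the values are the ones TheSov2 singles out as most important.)

    cat > /etc/sysctl.d/90-ceph-tuning.conf <<'EOF'
    kernel.pid_max = 4194303
    net.core.netdev_max_backlog = 30000
    net.ipv4.tcp_congestion_control = htcp
    net.ipv4.conf.all.rp_filter = 2
    EOF
    # load the file immediately, no reboot needed
    sysctl -p /etc/sysctl.d/90-ceph-tuning.conf
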
[16:54] <flaf> Ok so, I trying that now...
[16:54] * flaf is back in 5 minutes...
[16:54] <TheSov2> ceph with a low number of osds (15) is not going to be super fast. keep in mind overhead is high
[16:55] * LDA (~DM@host217-114-156-249.pppoe.mark-itt.net) Quit (Quit: Nettalk6 - www.ntalk.de)
[16:56] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[16:56] <flaf> TheSov2: yes but with an rbd for instance, I have better perf (~600 iops with cephfs against 1500 iops with rbd).
[16:57] <TheSov2> yes but rbd is very flat
[16:57] <TheSov2> cephfs is controlled via MDS
[16:57] <TheSov2> you are adding a whole other layer
[16:57] * bara (~bara@ip4-83-240-10-82.cust.nbox.cz) Quit (Quit: Bye guys!)
[16:58] * KungFuHamster (~ahmeni@89.36.211.79) has joined #ceph
[16:58] * krypto (~krypto@198.24.6.220) Quit (Ping timeout: 480 seconds)
[16:58] <TheSov2> rbd is simple, here is a data map, block 1-40 are on osd 0,1,3 block 41-80 are on osd 2,4,8 etc
[16:58] <TheSov2> cephfs is not that cut and dry
[16:59] * krypto (~krypto@G68-121-13-216.sbcis.sbc.com) has joined #ceph
[16:59] * lcurtis_ (~lcurtis@47.19.105.250) has joined #ceph
[17:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[17:01] * krypto (~krypto@G68-121-13-216.sbcis.sbc.com) Quit (Read error: Connection reset by peer)
[17:01] * krypto (~krypto@198.24.6.220) has joined #ceph
[17:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[17:02] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[17:02] <flaf> TheSov2: http://paste.alacon.org/39132 <= some parameters seem to not exist in my Ubuntu Trusty.
[17:07] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:09] * pam (~pam@193.106.183.1) Quit (Quit: pam)
[17:10] * johnavp1989 (~jpetrini@mobile-166-171-057-205.mycingular.net) Quit (Ping timeout: 480 seconds)
[17:11] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[17:11] * krypto (~krypto@198.24.6.220) Quit (Ping timeout: 480 seconds)
[17:15] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[17:15] <TheSov2> flaf, thats fine
[17:15] <TheSov2> the most important ones are kernel.pid_max and net.core.netdev_max_backlog = 30000
[17:15] <TheSov2> net.ipv4.tcp_congestion_control = htcp
[17:15] <TheSov2> net.ipv4.conf.all.rp_filter = 2
[17:16] * linuxkidd (~linuxkidd@49.sub-70-209-96.myvzw.com) has joined #ceph
[17:20] * johnavp1989 (~jpetrini@64.94.196.252) has joined #ceph
[17:21] <flaf> TheSov2: ok, but unfortunately I have no improvement, always ~500-600 iops.
[17:21] <flaf> It's very curious because the value of iops during the bench is not constant.
[17:21] <TheSov2> what system do you have cephfs mounted on?
[17:22] <flaf> My cluster nodes are Ubuntu Trusty and my client node (where cephfs is mounted) too.
[17:23] <flaf> In the client, I'm using cephfs-fuse (not the kernel cephfs module).
[17:25] <flaf> Currently I have 1 SSD for 3 OSDs, and the SSD also contains the OS and, for 3 servers only, the monitor working dir. Do you think it could be a good idea to put the cephfs metadata pool on the SSD disks too (via crushmap)?
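
(The kind of CRUSH split flaf is asking about could look roughly like this on hammer, assuming an 'ssd' root already exists in the CRUSH map with the SSD OSDs under it; the rule name, pool name and rule id are illustrative.)

    # rule that places replicas only under the 'ssd' root, one per host
    ceph osd crush rule create-simple ssd-rule ssd host
    # look up the new rule's numeric id
    ceph osd crush rule dump ssd-rule
    # point the cephfs metadata pool at it
    ceph osd pool set cephfs_metadata crush_ruleset <rule-id>
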
[17:28] * KungFuHamster (~ahmeni@76GAAAD1Z.tor-irc.dnsbl.oftc.net) Quit ()
[17:28] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[17:29] * johnavp1989 (~jpetrini@64.94.196.252) Quit (Read error: Connection reset by peer)
[17:30] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) has joined #ceph
[17:31] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:33] <TheSov2> wait
[17:33] <TheSov2> you are sharing osd disks with os?
[17:35] * jclm (~jclm@AVelizy-151-1-8-215.w82-120.abo.wanadoo.fr) has joined #ceph
[17:39] * johnavp1989 (~jpetrini@mobile-166-171-057-205.mycingular.net) has joined #ceph
[17:43] * kefu (~kefu@211.22.145.245) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[17:44] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[17:45] * kefu (~kefu@114.92.107.250) has joined #ceph
[17:46] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[17:48] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[17:49] * krypto (~krypto@129.192.176.66) has joined #ceph
[17:50] * johnavp1989 (~jpetrini@mobile-166-171-057-205.mycingular.net) Quit (Ping timeout: 480 seconds)
[17:52] <TheSov2> I wanted to know if anyone here has ceph osd servers running on dhcp
[17:52] <lookcrabs> I didn't think that was possible
[17:53] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) Quit (Quit: Leaving.)
[17:53] <TheSov2> in theory it is if you have a really good dns setup
[17:53] <TheSov2> well afaik
[17:53] <lookcrabs> wait can you explain?
[17:53] <TheSov2> how else can you do a stateless setup?
[17:53] <TheSov2> as long as monitors can resolve names to ip's and vice versa, it should be good
[17:54] <TheSov2> monitors do need to be static
[17:54] <lookcrabs> ah yeah I thought the monitors need static ips
[17:54] <lookcrabs> but yeah indeed that would be neat to have osd hosts in a dhcp pool instead
[17:54] <lookcrabs> just to see anyway
[17:54] <TheSov2> right now i have a script that converts their dhcp to static when they first boot
[17:55] <TheSov2> but im wondering if i can just keep my dhcp
[17:55] <TheSov2> #convert dhcp to static start
[17:55] <TheSov2> apt-get install -y moreutils
[17:55] <TheSov2> strip="$(ifdata -pa eth0)"
[17:55] <TheSov2> strsub="$(ifdata -pn eth0)"
[17:55] <TheSov2> strgw="$(ip route show 0.0.0.0/0 dev eth0 | cut -d\ -f3)"
[17:55] <TheSov2> rm /etc/network/interfaces
[17:55] <TheSov2> touch /etc/network/interfaces.d/eth0
[17:55] <TheSov2> touch /etc/network/interfaces
[17:55] <TheSov2> echo "auto lo" >> /etc/network/interfaces
[17:55] <TheSov2> echo "iface lo inet loopback" >> /etc/network/interfaces
[17:55] <TheSov2> echo "source-directory /etc/network/interfaces.d" >> /etc/network/interfaces
[17:55] <TheSov2> echo "auto eth0" >> /etc/network/interfaces.d/eth0
[17:55] <TheSov2> echo "iface eth0 inet static" >> /etc/network/interfaces.d/eth0
[17:55] <TheSov2> echo "address $strip" >> /etc/network/interfaces.d/eth0
[17:55] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[17:55] <TheSov2> echo "netmask $strsub" >> /etc/network/interfaces.d/eth0
[17:56] <TheSov2> echo "gateway $strgw" >> /etc/network/interfaces.d/eth0
[17:56] <TheSov2> #convert dhcp to static end
[17:57] * krypto (~krypto@129.192.176.66) Quit (Ping timeout: 480 seconds)
[17:57] <lookcrabs> I use chef here to provision the hosts so they are assigned static addresses from the get go.
[17:58] * krypto (~krypto@G68-121-13-95.sbcis.sbc.com) has joined #ceph
[17:58] * voxadam (~adam@2601:1c2:380:76:be5f:f4ff:fe55:a58f) has joined #ceph
[17:58] * reed (~reed@75-101-54-18.dsl.static.fusionbroadband.com) has joined #ceph
[18:00] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[18:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[18:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[18:02] * moore (~moore@64.202.160.88) Quit (Remote host closed the connection)
[18:04] * valeech (~valeech@pool-108-44-162-111.clppva.fios.verizon.net) Quit (Quit: valeech)
[18:05] * kefu (~kefu@114.92.107.250) Quit (Max SendQ exceeded)
[18:06] * kefu (~kefu@114.92.107.250) has joined #ceph
[18:06] * jordanP (~jordan@41.140.144.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[18:10] <voxadam> I was just wondering, if storage hardware manufacturers eventually do move towards hardware based object stores (e.g. Seagate Kinetic) would Ceph need to be modified to support using such an architecture or would it "simply" be viewed in the same way that btrfs/xfs/ext are now?
[18:10] <mtb`> TheSov2 we use consul and dnsmasq for node addressability
[18:13] * cathode (~cathode@50-198-166-81-static.hfc.comcastbusiness.net) has joined #ceph
[18:15] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:17] * marrusl (~mark@209-150-46-243.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[18:20] * jclm1 (~jclm@AVelizy-151-1-8-215.w82-120.abo.wanadoo.fr) has joined #ceph
[18:20] * Sophie1 (~rikai@5.79.242.126) has joined #ceph
[18:25] * jclm1 (~jclm@AVelizy-151-1-8-215.w82-120.abo.wanadoo.fr) Quit (Quit: Leaving.)
[18:25] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[18:26] * jclm (~jclm@AVelizy-151-1-8-215.w82-120.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[18:28] * wjw-freebsd (~wjw@vpn.ecoracks.nl) Quit (Ping timeout: 480 seconds)
[18:29] * linjan (~linjan@176.193.194.186) has joined #ceph
[18:31] * xarses_ (~xarses@64.124.158.100) has joined #ceph
[18:34] <flaf> 17:33 <TheSov2> you are sharing osd disks with os? <= no, the OS is on the SSD (which contains the 3 OSD journals and the monitor working dir).
[18:34] <flaf> .
[18:34] <TheSov2> yes
[18:34] <TheSov2> ok
[18:34] <TheSov2> thats ok i guess, but you really shouldnt do that
[18:35] * Guest1716 (~Daniel@office.34sp.com) Quit (Quit: Leaving)
[18:35] <flaf> And if I add new OSD in these SSD to put the cephfsmetadata pool, maybe it's lot things for just one SSD, no?
[18:36] <flaf> *lot of.
[18:36] <TheSov2> typically you put 3 to 6 journals per SSD
[18:36] <TheSov2> depending on the quality of the SSD and the speed of the spinning disks
[18:37] <TheSov2> for instance an Intel 500 series ssd will do like 20 journals
[18:37] <flaf> My SSD are Intel 3710 series (200GB).
[18:38] <TheSov2> eh, thats ok
[18:39] <TheSov2> sorry i meant 750 series
[18:39] <TheSov2> not 500
[18:39] <TheSov2> 500 are the 2.5 inch drives
[18:41] <flaf> Ah, 20 journals on just one Intel SSD 750 series, really? That seems very big to me. I thought that 1 SSD for 5 journals was the max.
[18:41] <TheSov2> well the journals by default are 5 gig
[18:41] <TheSov2> thats only 100 gig for 20
[18:41] <TheSov2> but the point is the wear leveling
[18:41] <TheSov2> and the speed
[18:41] <rkeene> Oh god, please do not add DHCP/DNS to Ceph
[18:41] <TheSov2> rkeene, when i deploy i have dhcp on and then convert the dhcp's to static
[18:42] <TheSov2> and remove that range from dhcp
[18:42] <TheSov2> just for ease of configuration
[18:43] <rkeene> I deploy via DHCP/PXE (for OSD nodes, monitors are static -- but there are considerably fewer of them) but the DHCP server remembers the MAC address and gives the node the same IP -- and even if it didn't, it'd still work
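
(One common way to get the fixed MAC-to-IP mapping rkeene describes is a static reservation in the DHCP server; an ISC dhcpd sketch with a made-up host name, MAC and address.)

    host osd-node-01 {
        hardware ethernet 52:54:00:ab:cd:01;
        fixed-address 10.0.10.21;
        option host-name "osd-node-01";
    }
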
[18:44] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[18:45] <TheSov2> so i saw this ipxe thing someone was talking about earlier in here
[18:45] <TheSov2> so im gonna see if i can do a stateless and bootdiskless osd server
[18:46] <TheSov2> the problem is the static ips
[18:46] <TheSov2> so gotta make a database of mac addresses and crap
[18:46] <TheSov2> and that puts me back in the same boat
[18:46] <cathode> you can deploy iscsi boot parameters via dhcp options
[18:46] <TheSov2> too much work, need to think more lazy
[18:46] <rkeene> TheSov2, The OSDs don't need static IPs, and the DHCP server automatically remembers the mapping of MAC=>IP
[18:47] <TheSov2> true
[18:47] <rkeene> TheSov2, I do exactly this, stateless OSD nodes
[18:47] <TheSov2> ubuntu?
[18:47] <rkeene> TheSov2, Also stateless Compute/KVM nodes
[18:47] <TheSov2> what do you use for server?
[18:47] <rkeene> No, I have my own Linux distribution (this is an appliance)
[18:48] <TheSov2> you have a ceph appliance?1?
[18:48] <rkeene> Yes
[18:48] <TheSov2> are you selling them?
[18:48] <rkeene> Yes
[18:48] <TheSov2> with support
[18:48] <TheSov2> ?
[18:49] <rkeene> Yes, both with Capacity Management/Capacity as a Service and with just the OS (bring your own hardware)
[18:49] <TheSov2> watcha chargin?
[18:50] <rkeene> http://knightpoint.com/what-we-do/offerings/on-premises/infrastructure-platform/ -- "Storage as a Service" describes the Ceph appliance running on your datacenter with us doing Capacity Management (you tell us how much boxes you want and how much disks and we handle it) -- I'm not sure where the web page is for just the OS
[18:50] * Sophie1 (~rikai@76GAAAD6A.tor-irc.dnsbl.oftc.net) Quit ()
[18:51] <rkeene> I'm not sure how much we charge... I'd have to click around a lot to try to figure that out
[18:52] <rkeene> I can give you a demo though !
[18:53] * fsimonce (~simon@host159-1-dynamic.54-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[18:54] <rkeene> I just got finished upgrading our commercial public cloud that uses it, now it's at ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43) on Linux 4.1.13
[18:54] <TheSov2> the place i work is all about being able to automate things, and im more of a hands on person
[18:54] <TheSov2> so we opt for supported solutions most of the time
[18:54] <TheSov2> which is annoying to me
[18:54] <TheSov2> but i just work here
[18:55] <TheSov2> so ceph was a difficult sell for them but if we get a supported deal, that means they may goto prod
[18:55] <rkeene> We sell support, but don't try to lock you out of the appliance
[18:55] <TheSov2> even better
[18:57] <rkeene> The stateless-ness of it does mean you can't do most things in the same way -- since it never installs an OS on the nodes every time you reboot you get back the base OS, so making config changes requires you to do things that happen at boot, there are several ways we do that AND that you can manage
[18:57] <TheSov2> well i mean its dedicated to ceph, i dont intend on many changes
[18:58] <cathode> well, you guys might be biased but given a choice between a pair of servers replicating data in real-time via HAST (similar to DRBD), vs a Ceph cluster with two nodes for OSDs only, which would you guys recommend?
[18:58] <TheSov2> I'm looking at these supermicro 32 drive systems but they scare me in that I can no longer provide 1 gigahertz per osd and 4 gigs of ram per osd
[18:58] <rkeene> Yeah, it's usually not a problem
[18:58] <TheSov2> so the 24 drive systems are better for performance but they cost more per osd
[18:59] <rkeene> We're seeing only about 1.5GB of RAM per OSD usage with 1TB OSDs
[18:59] <TheSov2> cathode dont do ceph on less than 3 nodes
[18:59] <cathode> ok lets say i used 3 nodes then.
[18:59] <TheSov2> rkeene, we are planning 6tb's per osd
[19:00] <TheSov2> cathode, with just 3 nodes, you are going to see the performance of 1 system
[19:00] <TheSov2> maybe less
[19:00] <TheSov2> ceph has overhead
[19:00] <rkeene> I haven't tested anything that large so I'm not sure how much RAM to expect -- are you guys seeing 4GB/OSD at that disk size ?
[19:00] <TheSov2> but it will be safe
[19:00] <cathode> which is what we have now (active/passive)
[19:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[19:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[19:01] * kefu (~kefu@114.92.107.250) Quit (Max SendQ exceeded)
[19:01] <rkeene> cathode, Will you ever grow or always be small ?
[19:02] * voxadam (~adam@2601:1c2:380:76:be5f:f4ff:fe55:a58f) Quit (Ping timeout: 480 seconds)
[19:02] <cathode> we'll probably grow
[19:02] * kefu (~kefu@211.22.145.245) has joined #ceph
[19:02] <rkeene> Ah, then probably something like Ceph is warranted -- otherwise if you were going to stay 2 or 3 nodes I'd do DRBD+GFS2
[19:03] <cathode> depends. we don't use much capacity at all, we have less than 1TB of "live" data
[19:03] <cathode> but it may grow faster as everything uses more data, you know. higher resolution pictures on newer cameras for example
[19:03] <TheSov2> then ceph is right for you
[19:04] <cathode> i gotta do a PoC in my home lab, right now my boss isn't keen on changing anything
[19:04] <TheSov2> if you intend on cephfs however you should make 2 mds servers, 1 active 1 passive and use heartbeat or something to control which one is on
[19:04] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) Quit (Quit: Ex-Chat)
[19:05] <cathode> we have two nodes running freebsd 10.2 and using ZFS. in an all-windows shop, it was hard enough to get approval to use non-windows
[19:05] <rkeene> TheSov2, Won't just running two MDS servers let Ceph make that fencing automatically ?
[19:05] <rkeene> TheSov2, https://rkeene.org/terminal/1
[19:05] <rkeene> err
[19:05] <rkeene> TheSov2, https://rkeene.org/terminal/1/
[19:05] <TheSov2> right now only single MDS cephfs is considered stable
[19:05] <TheSov2> Not Found
[19:05] <TheSov2> The requested URL /terminal/1 was not found on this server.
[19:06] <rkeene> Use the second one
[19:06] <rkeene> TheSov2, https://rkeene.org/terminal/1/
[19:06] <TheSov2> Service Unavailable
[19:06] <TheSov2> The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
[19:07] <rkeene> Try again
[19:07] <rkeene> The tunnel to my laptop dies occasionally
[19:07] <TheSov2> ok
[19:08] <TheSov2> wow nice
[19:08] <cathode> that's cool
[19:08] <TheSov2> i didnt know it did that
[19:08] <TheSov2> i thought it ran active/active
[19:08] <rkeene> Not by default
[19:08] <TheSov2> ahh maybe they will change it once it passes QA
[19:09] <TheSov2> so this whole time i setup this heartbeat deal for no reason :D
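For reference, the automatic failover rkeene just demonstrated doesn't need an external heartbeat: any ceph-mds daemon beyond the first registers as a standby, and the monitors promote it if the active one dies. A minimal sketch, assuming ceph-deploy is in use and with hypothetical hostnames mds1/mds2:

    # Two MDS daemons: the first becomes active, the second sits as a
    # standby that the monitors promote automatically on failure.
    ceph-deploy mds create mds1
    ceph-deploy mds create mds2
    # Verify which daemon is active and which is standby:
    ceph mds stat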
[19:09] * fred`` (fred@earthli.ng) Quit (Quit: +++ATH0)
[19:09] <TheSov2> what is this screen app that shows your terminal
[19:09] <TheSov2> over web
[19:09] <TheSov2> thats amazing
[19:09] <cathode> i would only be able to mount CephFS on a linux system right? there are no clients for windows?
[19:09] <rkeene> GoTTY
[19:09] * kefu (~kefu@211.22.145.245) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[19:10] <TheSov2> theres no clients for windows anywhere
[19:10] <TheSov2> LOL
[19:10] <rkeene> cathode, Right (but you can share it out via NFS/SMB)
[19:10] <TheSov2> its so sad
[19:11] <TheSov2> thats a sweet setup
[19:11] <TheSov2> so your stateless deal just registers any xfs drive as a osd eh?
[19:11] <rkeene> Yeah, it's very repeatable since it's almost entirely stateless
[19:11] * fred`` (fred@earthli.ng) has joined #ceph
[19:11] <rkeene> No, any drive you plug in that doesn't have a label gets labeled, XFS gets put on it, and it gets added to Ceph
[19:11] <rkeene> (Scans are every 10 minutes)
[19:11] <rkeene> Well.. not any drive
[19:12] <TheSov2> ahh
[19:12] <rkeene> It figures out (by benchmarking) if it's an SSD or HD
[19:12] <TheSov2> well here im trying to get it so it only does that to preformatted xfs disks
[19:12] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[19:12] <rkeene> Then you can tell it what to do for each of those (make it a journal, make it a data disk, make it journal+data)
[19:12] <TheSov2> getting the journals to be dynamic is the problem right now
[19:12] <cathode> so you guys typically run Ceph on systems with 3.5" drives in the multi-TB range? what about using higher numbers of low-capacity 2.5" drives
[19:13] <TheSov2> cathode, my current at home environment is 6TB, my work's "test" environment we use for backups is larger: 16 osd hosts and 3 monitors
[19:13] <rkeene> TheSov2, Whenever you tell it that it's going to be a "journal disk" it creates as many GPT partitions on it as possible (at 8GB/ea.) with the partition label "log.osd.NEW", then when a new data disk comes along it consumes one of the partitions named "log.osd.NEW"
[19:13] <TheSov2> wow
[19:13] <TheSov2> nice
[19:14] <cathode> hypothetically, for a lab environment.... if i have about a hundred 10k rpm sas drives at only 36GB, could i make a ton of ceph OSDs on those?
[19:14] <rkeene> TheSov2, Then whenever it sees the OSD again (once again, identified by disk label) it also sees the journal with a corresponding label and configures it to use that
[19:14] <TheSov2> cathode, yes
[19:14] <TheSov2> the more spindles the better
[19:14] <cathode> ok
[19:14] <TheSov2> rkeene, seriously thats pro
[19:15] <cathode> i just need two more supermicro SC216 chassis and i'm good to go at home :)
[19:15] <TheSov2> it renames it when matching to an osd?
[19:15] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[19:15] <rkeene> TheSov2, Right
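A hand-rolled version of what rkeene describes might look roughly like the following. The "log.osd.NEW" partition label and the 8 GB journal size come from the conversation; the device names, OSD id, and the sgdisk/mkfs usage are just one plausible way to do it, not his actual tooling.

    # Carve a journal disk into 8 GB GPT partitions labelled "log.osd.NEW"
    # (device names are examples; this is a sketch, not the appliance script).
    JOURNAL_DEV=/dev/sdb
    for i in 1 2 3 4; do
        sgdisk --new=${i}:0:+8G --change-name=${i}:log.osd.NEW "$JOURNAL_DEV"
    done

    # Later, when a blank data disk shows up, claim one free journal
    # partition and relabel it so it stays paired with that OSD across reboots.
    DATA_DEV=/dev/sdc
    OSD_ID=12                              # hypothetical OSD id
    mkfs.xfs -L "osd.${OSD_ID}" "$DATA_DEV"
    sgdisk --change-name=1:log.osd.${OSD_ID} "$JOURNAL_DEV"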
[19:16] * enax (~enax@94-21-125-67.pool.digikabel.hu) has joined #ceph
[19:16] <TheSov2> wow thats friggin A
[19:16] <TheSov2> going to lunch ill be back
[19:16] <TheSov2> should i close this?
[19:17] <rkeene> That's up to you, I can keep showing it off for hours -- I've spent about a year developing it
[19:17] <TheSov2> lol
[19:17] <rkeene> (It does more than Ceph, but that's a big feature)
[19:17] <TheSov2> later guys
[19:19] <cathode> rkeene - is that part of the appliance you are selling?
[19:22] <rkeene> cathode, Yes
[19:22] * vata (~vata@207.96.182.162) has joined #ceph
[19:24] * enax (~enax@94-21-125-67.pool.digikabel.hu) Quit (Ping timeout: 480 seconds)
[19:25] * johnavp19891 (~jpetrini@8.39.115.8) has joined #ceph
[19:26] * zaitcev (~zaitcev@c-50-130-189-82.hsd1.nm.comcast.net) has joined #ceph
[19:26] * krypto (~krypto@G68-121-13-95.sbcis.sbc.com) Quit (Read error: Connection reset by peer)
[19:27] * krypto (~krypto@198.24.6.220) has joined #ceph
[19:27] <rkeene> cathode, Let me know if you want a demo :-)
[19:28] <cathode> heh
[19:29] <rkeene> (via webex/gotomeeting -- of course, if you want one in person that can be arranged too :-D)
[19:30] <cathode> i will keep that in mind. i still want to set up a ceph cluster at home though
[19:30] <rkeene> It's relatively straight-forward, even if you compile from source and don't use any of the provided scripts
[19:31] * tenshi (~David@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[19:31] <tenshi> hey ppl
[19:31] <tenshi> Package ceph-deploy-1.5.30-0.noarch.rpm is not signed // is that normal on centos7 ? I've installed it one week ago on centos7 without issues, but today..
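One way to chase down an "is not signed" error like tenshi's is to check what the repo file demands and what the downloaded package actually carries. The repo path and key URL below are only the usual defaults for the upstream ceph.com repository, so treat them as assumptions.

    # See whether the repo insists on signed packages (gpgcheck=1)
    grep -i gpg /etc/yum.repos.d/ceph*.repo
    # Inspect the signature on the downloaded package itself
    rpm -K ceph-deploy-1.5.30-0.noarch.rpm
    # If the release key simply isn't imported yet, import it and retry
    rpm --import https://download.ceph.com/keys/release.asc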
[19:31] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[19:35] * magicrobotmonkey (~magicrobo@8.29.8.68) Quit (Quit: WeeChat 0.4.2)
[19:42] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[19:44] * EdGruberman (~cmrn@46.166.190.179) has joined #ceph
[19:49] * vbellur (~vijay@122.171.93.166) Quit (Ping timeout: 480 seconds)
[19:49] * voxadam (~adam@2601:1c2:380:76:be5f:f4ff:fe55:a58f) has joined #ceph
[19:49] * krypto (~krypto@198.24.6.220) Quit (Ping timeout: 480 seconds)
[19:50] * krypto (~krypto@G68-121-13-85.sbcis.sbc.com) has joined #ceph
[19:53] * mykola (~Mikolaj@91.225.202.195) has joined #ceph
[19:54] * daviddcc (~dcasier@80.12.35.214) has joined #ceph
[20:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[20:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[20:05] * johnavp19891 (~jpetrini@8.39.115.8) Quit (Quit: Leaving.)
[20:14] * EdGruberman (~cmrn@46.166.190.179) Quit ()
[20:15] * Meths_ is now known as Meths
[20:27] * nardial (~ls@dslb-088-072-085-164.088.072.pools.vodafone-ip.de) Quit (Quit: Leaving)
[20:27] * daviddcc (~dcasier@80.12.35.214) Quit (Ping timeout: 480 seconds)
[20:27] * igoryonya (~kvirc@80.83.238.67) Quit (Ping timeout: 480 seconds)
[20:27] * mortn (~mortn@217-215-219-69-no229.tbcn.telia.com) Quit (Read error: Connection reset by peer)
[20:28] * mortn (~mortn@217-215-219-69-no229.tbcn.telia.com) has joined #ceph
[20:32] * zaitcev (~zaitcev@c-50-130-189-82.hsd1.nm.comcast.net) Quit (Ping timeout: 480 seconds)
[20:33] * TheSov3 (~TheSov@204.13.200.248) has joined #ceph
[20:37] * zaitcev (~zaitcev@c-50-130-189-82.hsd1.nm.comcast.net) has joined #ceph
[20:39] * cathode (~cathode@50-198-166-81-static.hfc.comcastbusiness.net) Quit (Quit: Leaving)
[20:40] * TheSov2 (~TheSov@cip-248.trustwave.com) Quit (Ping timeout: 480 seconds)
[20:41] <mtb`> does anyone know where i can find the valid options for osd_objectstore?
[20:42] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[20:47] * theengineer (~theengine@45-31-177-36.lightspeed.austtx.sbcglobal.net) Quit (Quit: This computer has gone to sleep)
[20:53] <mtb`> looks like i found it in the code
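For anyone else hunting for the same thing: the accepted values for osd_objectstore aren't listed in the docs of that era, but they can be pulled straight out of the tree, roughly as below. The file paths reflect the source layout at the time and may have moved since; the value list (filestore, keyvaluestore, memstore, ...) is what the conversation's era shipped, so verify against your own checkout.

    # Default value and option definition
    git grep -n "osd_objectstore" src/common/config_opts.h
    # The factory that maps each string (filestore, keyvaluestore, memstore, ...)
    # to a backend lives in ObjectStore::create()
    git grep -n "ObjectStore::create" src/os/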
[20:55] * igoryonya (~kvirc@80.83.238.49) has joined #ceph
[20:56] * zaitcev (~zaitcev@c-50-130-189-82.hsd1.nm.comcast.net) Quit (Ping timeout: 480 seconds)
[20:58] * blip2 (~qable@85.17.25.22) has joined #ceph
[20:58] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[20:59] * xarses_ (~xarses@64.124.158.100) Quit (Ping timeout: 480 seconds)
[21:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[21:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[21:09] * zaitcev (~zaitcev@c-50-130-189-82.hsd1.nm.comcast.net) has joined #ceph
[21:10] * theengineer (~theengine@45-31-177-36.lightspeed.austtx.sbcglobal.net) has joined #ceph
[21:16] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[21:23] * garphy`aw is now known as garphy
[21:25] * tenshi (~David@MTRLPQ42-1176054809.sdsl.bell.ca) has left #ceph
[21:28] * blip2 (~qable@76GAAAEDD.tor-irc.dnsbl.oftc.net) Quit ()
[21:29] * zaitcev (~zaitcev@c-50-130-189-82.hsd1.nm.comcast.net) Quit (Ping timeout: 480 seconds)
[21:33] * kiasyn (~Sun7zu@195-154-69-88.rev.poneytelecom.eu) has joined #ceph
[21:33] * stiopa (~stiopa@cpc73828-dals21-2-0-cust630.20-2.cable.virginm.net) has joined #ceph
[21:34] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[21:36] * zaitcev (~zaitcev@c-50-130-189-82.hsd1.nm.comcast.net) has joined #ceph
[21:38] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[21:40] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[21:40] <igoryonya> joshd: I figured it out. I've added a public network = current.network.ip/mask, and mon then added without problem
[21:45] * emsnyder (~emsnyder@65.170.86.132) Quit (Ping timeout: 480 seconds)
[21:47] <joshd> igoryonya: cool, makes sense
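For the record, the setting igoryonya added is just a [global] entry telling the monitors and OSDs which subnet to bind to. The fragment below is a sketch; the subnet is a placeholder to be replaced with the cluster's own public network.

    # Fragment of /etc/ceph/ceph.conf (subnet is a placeholder):
    [global]
        public network = 192.168.10.0/24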
[21:48] * fractalirish (~fractal@skynet.skynet.ie) Quit (Quit: Lost terminal)
[21:54] * linuxkidd (~linuxkidd@49.sub-70-209-96.myvzw.com) Quit (Ping timeout: 480 seconds)
[21:55] * linuxkidd (~linuxkidd@49.sub-70-209-96.myvzw.com) has joined #ceph
[22:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[22:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[22:02] * kiasyn (~Sun7zu@76GAAAEEV.tor-irc.dnsbl.oftc.net) Quit ()
[22:03] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[22:20] * mason (~vegas3@84ZAAAEIH.tor-irc.dnsbl.oftc.net) has joined #ceph
[22:33] * mykola (~Mikolaj@91.225.202.195) Quit (Quit: away)
[22:48] * mortn (~mortn@217-215-219-69-no229.tbcn.telia.com) Quit (Read error: Connection reset by peer)
[22:48] * moore (~moore@64.202.160.88) has joined #ceph
[22:50] * mason (~vegas3@84ZAAAEIH.tor-irc.dnsbl.oftc.net) Quit ()
[22:57] * jasuarez (~jasuarez@237.Red-83-39-111.dynamicIP.rima-tde.net) Quit (Quit: WeeChat 1.3)
[22:57] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:1d6f:c65d:8181:2e23) Quit (Ping timeout: 480 seconds)
[23:00] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[23:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[23:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[23:02] * mattronix (~quassel@server1.mattronix.nl) Quit (Remote host closed the connection)
[23:03] * TheSov3 (~TheSov@204.13.200.248) Quit (Read error: Connection reset by peer)
[23:03] * mattronix (~quassel@server1.mattronix.nl) has joined #ceph
[23:05] * mortn (~mortn@217-215-219-69-no229.tbcn.telia.com) has joined #ceph
[23:06] * vata (~vata@207.96.182.162) Quit (Quit: Leaving.)
[23:07] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[23:09] * mattronix (~quassel@server1.mattronix.nl) Quit (Read error: Connection reset by peer)
[23:09] * mattronix (~quassel@server1.mattronix.nl) has joined #ceph
[23:10] * rendar (~I@host68-58-dynamic.40-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[23:13] * rendar (~I@host68-58-dynamic.40-79-r.retail.telecomitalia.it) has joined #ceph
[23:22] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[23:23] * xarses_ (~xarses@64.124.158.100) has joined #ceph
[23:31] * krypto (~krypto@G68-121-13-85.sbcis.sbc.com) Quit (Read error: Connection reset by peer)
[23:32] * krypto (~krypto@198.24.6.220) has joined #ceph
[23:41] * Rachana (~Rachana@2601:87:3:3601::766) Quit (Quit: Leaving)
[23:42] * Rachana (~Rachana@2601:87:3:3601::766) has joined #ceph
[23:43] * mtb` (~mtb`@157.130.171.46) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[23:43] * ira (~ira@24.34.255.34) Quit (Quit: Leaving)
[23:47] * krypto (~krypto@198.24.6.220) Quit (Quit: Leaving)
[23:53] <loicd> from time to time /usr/bin/systemctl stop ceph-osd@2 fails with "Job for ceph-osd@2.service canceled.". What does it mean ?
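As far as the systemd docs go, "Job for ceph-osd@2.service canceled." usually means the queued stop job was cancelled by a conflicting job (typically a start or restart request for the same unit arriving while the stop was still pending), not that the stop itself failed. A couple of generic commands help confirm what cancelled it; nothing Ceph-specific is assumed here.

    # Show jobs still queued at the moment the stop was cancelled
    systemctl list-jobs
    # Look for the competing start/restart request around the same time
    journalctl -u ceph-osd@2 --since=-10min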
[23:54] * lcurtis_ (~lcurtis@47.19.105.250) Quit (Remote host closed the connection)
[23:57] * linjan (~linjan@176.193.194.186) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.