#ceph IRC Log

Index

IRC Log for 2016-04-06

Timestamps are in GMT/BST.

[0:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[0:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[0:03] * Architect (~Xylios@06SAAAZQE.tor-irc.dnsbl.oftc.net) Quit ()
[0:03] * blank (~Bored@6AGAAAGXQ.tor-irc.dnsbl.oftc.net) has joined #ceph
[0:04] * gopher_49 (~gopher_49@host2.drexchem.com) has joined #ceph
[0:12] * gopher_49 (~gopher_49@host2.drexchem.com) Quit (Ping timeout: 480 seconds)
[0:13] * vicente (~vicente@1-163-219-137.dynamic.hinet.net) has joined #ceph
[0:13] * Kwen (~Sliker@4MJAADZRU.tor-irc.dnsbl.oftc.net) Quit ()
[0:13] * blank1 (~cheese^@4MJAADZTB.tor-irc.dnsbl.oftc.net) has joined #ceph
[0:18] * Razva (~Razva@00021622.user.oftc.net) Quit ()
[0:21] * vicente (~vicente@1-163-219-137.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[0:27] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) has joined #ceph
[0:33] * Concubidated (~cube@c-50-173-245-118.hsd1.ca.comcast.net) has joined #ceph
[0:33] * blank (~Bored@6AGAAAGXQ.tor-irc.dnsbl.oftc.net) Quit ()
[0:33] * roaet (~Aethis@95.47.156.249) has joined #ceph
[0:34] * dgurtner (~dgurtner@217.149.140.193) Quit (Ping timeout: 480 seconds)
[0:40] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[0:43] * blank1 (~cheese^@4MJAADZTB.tor-irc.dnsbl.oftc.net) Quit ()
[0:43] * thomnico (~thomnico@132-110.80-90.static-ip.oleane.fr) Quit (Quit: Ex-Chat)
[0:49] * ircolle1 (~Adium@2601:285:201:2bf9:c7e:39a4:eedd:431f) Quit (Quit: Leaving.)
[0:52] * fsimonce (~simon@host201-70-dynamic.26-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[0:53] * mattbenjamin1 (~mbenjamin@aa2.linuxbox.com) Quit (Ping timeout: 480 seconds)
[0:54] <debian112> if I have a cluster that is recovering, do I have to wait till it is finished before adding more OSDs?
[1:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[1:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[1:01] * rdias (~rdias@bl7-92-98.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[1:02] * terje (~joey@c-50-134-191-177.hsd1.co.comcast.net) has joined #ceph
[1:03] <terje> I have created a user in radosgw and can use the swift cmdline client to create buckets and upload files.
[1:03] * roaet (~Aethis@06SAAAZSS.tor-irc.dnsbl.oftc.net) Quit ()
[1:03] <terje> I do this using the -K <secret_key> command line option.
[1:04] * Szernex1 (~Eric@6AGAAAG0F.tor-irc.dnsbl.oftc.net) has joined #ceph
[1:04] <terje> I'd like to do it using the --os-username and --os-password command line switches
[1:04] <terje> is this possible?
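For context, the -K style is the Swift v1.0 auth that radosgw's native swift subusers use, while the --os-* switches normally imply Keystone-backed auth. A minimal sketch of both invocations; the endpoints, user names and keys below are placeholders, not values from this log, and the --os-* form only works if radosgw is configured to validate Keystone tokens:

    # v1.0 auth against radosgw's built-in swift subuser keys
    swift -A http://rgw.example.com/auth/1.0 -U johndoe:swift -K <secret_key> list

    # Keystone-style auth
    swift --auth-version 2.0 --os-auth-url http://keystone.example.com:5000/v2.0 \
          --os-username johndoe --os-password secret --os-tenant-name demo list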
[1:07] * rdias (~rdias@2001:8a0:749a:d01:9d:51d9:9389:4dc3) has joined #ceph
[1:09] * lcurtis (~lcurtis@47.19.105.250) Quit (Remote host closed the connection)
[1:13] * wyang (~wyang@59.45.74.43) has joined #ceph
[1:13] * vicente (~vicente@1-163-219-137.dynamic.hinet.net) has joined #ceph
[1:14] * pepzi1 (~KapiteinK@06SAAAZTJ.tor-irc.dnsbl.oftc.net) has joined #ceph
[1:15] * rendar (~I@host189-116-dynamic.53-82-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[1:23] * rdias (~rdias@2001:8a0:749a:d01:9d:51d9:9389:4dc3) Quit (Ping timeout: 480 seconds)
[1:23] * wyang (~wyang@59.45.74.43) Quit (Quit: This computer has gone to sleep)
[1:25] * rdias (~rdias@2001:8a0:749a:d01:9d:51d9:9389:4dc3) has joined #ceph
[1:27] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[1:33] * vicente (~vicente@1-163-219-137.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[1:33] * Szernex1 (~Eric@6AGAAAG0F.tor-irc.dnsbl.oftc.net) Quit ()
[1:36] <flaf> debian112: not completely sure (I'm not a ceph expert) but personally I think I would add OSDs before the recovery is finished, in order to minimize data balancing.
[1:36] * georgem (~Adium@24.114.58.126) has joined #ceph
[1:36] * georgem (~Adium@24.114.58.126) Quit ()
[1:36] * georgem (~Adium@206.108.127.16) has joined #ceph
[1:37] <flaf> If not, data will be put on an OSD A and afterwards moved to a new OSD B. So it seems better to me if the data goes to the new OSD B directly.
[1:37] <debian112> yeah, my thoughts too
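A short sketch of the commands typically used to watch recovery while deciding when to add the new OSDs (standard ceph CLI; nothing here is specific to debian112's cluster):

    ceph -s        # overall health plus recovery/backfill summary
    ceph -w        # follow recovery progress live
    ceph osd df    # per-OSD utilisation once the new OSDs are in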
[1:40] * olid19811115 (~olid1982@aftr-88-217-181-40.dynamic.mnet-online.de) Quit (Ping timeout: 480 seconds)
[1:42] * motk (~motk@2600:3c00::f03c:91ff:fe98:51ee) has joined #ceph
[1:43] * pepzi1 (~KapiteinK@06SAAAZTJ.tor-irc.dnsbl.oftc.net) Quit ()
[1:43] * redbeast121 (~pakman__@4MJAADZWF.tor-irc.dnsbl.oftc.net) has joined #ceph
[1:43] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit (Quit: Leaving.)
[1:45] * davidzlap (~Adium@2605:e000:1313:8003:6124:45f:ff05:e26b) has joined #ceph
[1:55] * oms101 (~oms101@p20030057EA04D300C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:56] * Kupo1 (~tyler.wil@23.111.254.159) has left #ceph
[1:57] * xarses_ (~xarses@64.124.158.100) Quit (Ping timeout: 480 seconds)
[1:58] <diq> TMM, I thought about your use case, and your PG calc is off
[1:58] <diq> if your data is confined on a pool per rack basis, then your data % for that pool is 100%
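A worked example of the PG-per-pool rule of thumb diq is applying, with the numbers chosen purely for illustration: for a pool expected to hold 100% of its rack's data on 12 OSDs with 3 replicas,

    PGs ≈ (12 OSDs × 100 × 1.00 data share) / 3 replicas ≈ 400, rounded up to the next power of two = 512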
[1:59] <flaf> When I install multiple radosgw servers, should I create only one ceph user (client.radosgw.gateway) for all the rgw or should I create one ceph user _per_ radosgw servers?
[2:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[2:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[2:03] * oms101 (~oms101@p20030057EA04B100C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[2:03] * mrapple (~luigiman@orion.enn.lu) has joined #ceph
[2:05] * wushudoin (~wushudoin@2601:646:8201:7769:2ab2:bdff:fe0b:a6ee) Quit (Ping timeout: 480 seconds)
[2:06] * Lea (~LeaChim@host86-168-120-216.range86-168.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:07] * linuxkidd (~linuxkidd@29.sub-70-193-113.myvzw.com) Quit (Quit: Leaving)
[2:08] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[2:09] <motk> man, I wish there was a sexy crushmap visualisation and editing tool
[2:12] <flaf> Furthermore (concerning radosgw), we can read here http://docs.ceph.com/docs/master/radosgw/config/ "Configuring a Ceph Object Gateway requires a running Ceph Storage Cluster, and an Apache web server with the FastCGI module." I thought that Apache was no longer recommended for radosgw and that civetweb was preferred. Am I wrong?
[2:13] * redbeast121 (~pakman__@4MJAADZWF.tor-irc.dnsbl.oftc.net) Quit ()
[2:13] * Jaska (~Borf@5.135.85.23) has joined #ceph
[2:14] <motk> flaf: no, for infernalis you can use native civetweb
[2:14] <motk> that said I was using civetweb in firefly with no issues
[2:15] * xarses_ (~xarses@50.141.35.103) has joined #ceph
[2:15] * wwdillingham (~LobsterRo@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[2:15] <flaf> ah ok. So the doc is not updated on this point.
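A minimal ceph.conf sketch of the civetweb frontend motk is describing; the section name matches flaf's client.radosgw.gateway user, while the host, port and keyring path are assumptions:

    [client.radosgw.gateway]
        host = rgw1
        keyring = /etc/ceph/ceph.client.radosgw.gateway.keyring
        rgw frontends = "civetweb port=8080"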
[2:16] <flaf> motk: with radosgw should I use one ceph user per radosgw or just one ceph user for all the radosgw servers?
[2:17] <motk> just the one I think; I do wish the docs were less ancient
[2:17] <motk> that's probably a good question for the list
[2:18] <flaf> ok, thx motk
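For the single shared user motk suggests, a sketch of creating it on the cluster, roughly following the radosgw documentation of that era (capabilities and keyring path are assumptions to double-check against the docs):

    ceph auth get-or-create client.radosgw.gateway \
        osd 'allow rwx' mon 'allow rwx' \
        -o /etc/ceph/ceph.client.radosgw.gateway.keyring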
[2:20] * j3roen (~j3roen@77.60.46.13) Quit (Ping timeout: 480 seconds)
[2:23] * xarses_ (~xarses@50.141.35.103) Quit (Ping timeout: 480 seconds)
[2:25] * RameshN (~rnachimu@101.222.183.198) has joined #ceph
[2:25] * bjornar_ (~bjornar@ti0099a430-1561.bb.online.no) Quit (Ping timeout: 480 seconds)
[2:26] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[2:33] * mrapple (~luigiman@06SAAAZU8.tor-irc.dnsbl.oftc.net) Quit ()
[2:33] * Atomizer (~Kidlvr@62.102.148.67) has joined #ceph
[2:34] * huangjun (~kvirc@113.57.168.154) has joined #ceph
[2:36] * sudocat (~dibarra@2602:306:8bc7:4c50:1dfc:2b7a:3650:8e0b) has joined #ceph
[2:43] * Jaska (~Borf@4MJAADZXF.tor-irc.dnsbl.oftc.net) Quit ()
[2:45] * xarses_ (~xarses@50.141.34.127) has joined #ceph
[2:47] * andreww (~xarses@50.141.34.127) has joined #ceph
[2:47] * xarses_ (~xarses@50.141.34.127) Quit (Read error: Connection reset by peer)
[2:53] * wwdillingham (~LobsterRo@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wwdillingham)
[2:59] * huangjun (~kvirc@113.57.168.154) Quit (Ping timeout: 480 seconds)
[3:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[3:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[3:03] * Atomizer (~Kidlvr@06SAAAZV2.tor-irc.dnsbl.oftc.net) Quit ()
[3:08] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:18] * bene2 (~bene@2601:18c:8501:25e4:ea2a:eaff:fe08:3c7a) Quit (Quit: Konversation terminated!)
[3:20] * wwdillingham (~LobsterRo@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[3:26] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Ping timeout: 480 seconds)
[3:28] * whatevsz_ (~quassel@ppp-46-244-224-202.dynamic.mnet-online.de) has joined #ceph
[3:34] * Eric1 (~Kayla@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[3:34] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:35] * whatevsz (~quassel@ppp-46-244-224-168.dynamic.mnet-online.de) Quit (Ping timeout: 480 seconds)
[3:39] * geli (~geli@geli-2015.its.utas.edu.au) has joined #ceph
[3:39] * georgem (~Adium@69-165-151-116.dsl.teksavvy.com) has joined #ceph
[3:41] * jclm (~jclm@ip68-96-198-45.lv.lv.cox.net) Quit (Quit: Leaving.)
[3:41] * huangjun (~kvirc@113.57.168.154) has joined #ceph
[3:48] * Lite (~Tenk@tor-exit.eecs.umich.edu) has joined #ceph
[3:51] * Mika_c (~quassel@122.146.93.152) has joined #ceph
[3:59] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) has joined #ceph
[3:59] * zhaochao (~zhaochao@125.39.9.156) has joined #ceph
[4:01] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[4:03] * Eric1 (~Kayla@06SAAAZX2.tor-irc.dnsbl.oftc.net) Quit ()
[4:04] * rapedex (~utugi____@88.79-160-125.customer.lyse.net) has joined #ceph
[4:04] * zhat (~zhat@115.79.36.242) has joined #ceph
[4:05] * zhat (~zhat@115.79.36.242) Quit ()
[4:05] <flaf> Hi, with a radosgw via infernalis, impossible to bind civetweb to port 80 because now radosgw is running under the ceph account. So if I set a port like 8080 (> 1024) no problem. Is it possible to use the port 80 and still use the ceph account (not root) to launch radosgw and civetweb?
[4:05] * zhat (~zhat@115.79.36.242) has joined #ceph
[4:07] * terje (~joey@c-50-134-191-177.hsd1.co.comcast.net) Quit (Read error: Connection reset by peer)
[4:07] <zhat> hi everyone! I have a question about the steps to install the calamari monitor for ceph.
[4:08] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[4:08] * terje (~joey@c-50-134-191-177.hsd1.co.comcast.net) has joined #ceph
[4:08] <lurbs> flaf: You can use capabilities to allow it, with something like: setcap 'cap_net_bind_service=+ep' /path/to/file
[4:08] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[4:09] <lurbs> Doesn't work if it /path/to/file is a script, though.
[4:09] <lurbs> s/it //
[4:09] <flaf> lurbs: where should I put this line? In which file?
[4:10] <zhat> I installed ceph following the manual steps on the CentOS website, but when I use the command "ceph-deploy calamari xxx" I get: ceph-deploy: error: argument COMMAND: invalid choice: 'calamari'
[4:10] <zhat> I know my ceph-deploy doesn't know about the calamari subcommand; what should I do?
[4:12] <flaf> lurbs: ah, do you mean I should launch this command on the shell: setcap cap_net_bind_service=+ep /usr/bin/radosgw, correct?
[4:12] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[4:13] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[4:13] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[4:13] <lurbs> Yeah.
[4:13] <lurbs> You'd need to do it again after an upgrade, or whenever /usr/bin/radosgw changed.
[4:14] <lurbs> You could also instead use tcpserver (https://cr.yp.to/ucspi-tcp/tcpserver.html) or local iptables rules to redirect the high port to low port.
[4:14] <flaf> Ok, but is this setting persistent after a reboot?
[4:14] <lurbs> Yes.
[4:15] <lurbs> Or a proxy (haproxy, nginx, whatever) in front of radosgw listening on port 80.
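A sketch of the iptables redirect lurbs mentioned, assuming civetweb keeps listening on 8080 as the unprivileged ceph user:

    iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
    # (locally generated connections to port 80 would additionally need a nat OUTPUT rule)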
[4:15] <flaf> Ok, thx lurbs. I see a ticket here. http://tracker.ceph.com/issues/13600 The fix is not released currently.
[4:18] * Lite (~Tenk@06SAAAZYJ.tor-irc.dnsbl.oftc.net) Quit ()
[4:18] * ricin (~ZombieL@tor-exit.eecs.umich.edu) has joined #ceph
[4:19] <flaf> lurbs: unfortunately, it seems not to work. "cannot bind to 80", but the cap is set according to "getcap /usr/bin/radosgw".
[4:20] * kefu (~kefu@183.193.163.144) has joined #ceph
[4:33] * rapedex (~utugi____@06SAAAZYW.tor-irc.dnsbl.oftc.net) Quit ()
[4:35] * yanzheng (~zhyan@118.116.113.6) has joined #ceph
[4:37] * kawa2014 (~kawa@94.166.188.199) has joined #ceph
[4:38] * xul (~Jamana@176.31.119.194) has joined #ceph
[4:45] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Quit: Leaving)
[4:47] * IvanJobs (~hardes@103.50.11.146) has joined #ceph
[4:47] * j3roen (~j3roen@77.60.46.13) has joined #ceph
[4:48] * ricin (~ZombieL@6AGAAAG8N.tor-irc.dnsbl.oftc.net) Quit ()
[4:48] * lmg (~Quackie@192.42.115.101) has joined #ceph
[4:51] * RameshN (~rnachimu@101.222.183.198) Quit (Ping timeout: 480 seconds)
[4:53] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) has joined #ceph
[5:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[5:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[5:01] * georgem (~Adium@69-165-151-116.dsl.teksavvy.com) Quit (Quit: Leaving.)
[5:08] * shyu (~Shanzhi@119.254.120.66) has joined #ceph
[5:08] * xul (~Jamana@06SAAAZZ4.tor-irc.dnsbl.oftc.net) Quit ()
[5:08] * murmur1 (~mog_@5.153.233.58) has joined #ceph
[5:18] * lmg (~Quackie@4MJAADZ11.tor-irc.dnsbl.oftc.net) Quit ()
[5:18] * Enikma (~Szernex@tor-exit.eecs.umich.edu) has joined #ceph
[5:19] * PaulCuzner (~paul@115-188-90-87.jetstream.xtra.co.nz) has left #ceph
[5:24] * rakeshgm (~rakesh@106.51.225.4) Quit (Quit: Leaving)
[5:27] * Vacuum_ (~Vacuum@88.130.217.241) has joined #ceph
[5:33] * Racpatel (~Racpatel@2601:87:3:3601::4edb) Quit (Ping timeout: 480 seconds)
[5:34] * Vacuum__ (~Vacuum@i59F79D27.versanet.de) Quit (Ping timeout: 480 seconds)
[5:34] * overclk (~quassel@121.244.87.117) has joined #ceph
[5:38] * murmur1 (~mog_@6AGAAAHAL.tor-irc.dnsbl.oftc.net) Quit ()
[5:38] * _303 (~Xeon06@tor-exit1-readme.dfri.se) has joined #ceph
[5:42] * davidzlap (~Adium@2605:e000:1313:8003:6124:45f:ff05:e26b) Quit (Quit: Leaving.)
[5:47] * davidzlap (~Adium@2605:e000:1313:8003:6124:45f:ff05:e26b) has joined #ceph
[5:48] * Enikma (~Szernex@4MJAADZ2X.tor-irc.dnsbl.oftc.net) Quit ()
[5:48] * Scaevolus (~EdGruberm@146.0.74.160) has joined #ceph
[6:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[6:01] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[6:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[6:05] * RameshN (~rnachimu@121.244.87.117) has joined #ceph
[6:08] * _303 (~Xeon06@06SAAAZ14.tor-irc.dnsbl.oftc.net) Quit ()
[6:08] * Vale (~Snowman@tor-exit.dhalgren.org) has joined #ceph
[6:12] * sudocat (~dibarra@2602:306:8bc7:4c50:1dfc:2b7a:3650:8e0b) Quit (Ping timeout: 480 seconds)
[6:12] * gopher_49 (~gopher_49@75.66.43.16) has joined #ceph
[6:18] * Scaevolus (~EdGruberm@4MJAADZ3O.tor-irc.dnsbl.oftc.net) Quit ()
[6:18] * maku (~Sophie@185.100.85.132) has joined #ceph
[6:21] * yanzheng (~zhyan@118.116.113.6) Quit (Read error: Connection reset by peer)
[6:23] * yanzheng (~zhyan@118.116.113.6) has joined #ceph
[6:27] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[6:28] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[6:29] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Quit: Leaving)
[6:37] * zhat (~zhat@115.79.36.242) Quit (Ping timeout: 480 seconds)
[6:38] * Vale (~Snowman@4MJAADZ39.tor-irc.dnsbl.oftc.net) Quit ()
[6:38] * Tenk (~Diablodoc@06SAAAZ3R.tor-irc.dnsbl.oftc.net) has joined #ceph
[6:43] * kefu is now known as kefu|afk
[6:48] * maku (~Sophie@06SAAAZ27.tor-irc.dnsbl.oftc.net) Quit ()
[6:48] * Misacorp (~click@93.115.95.202) has joined #ceph
[6:54] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[6:54] * kefu|afk (~kefu@183.193.163.144) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:56] * whatevsz_ (~quassel@ppp-46-244-224-202.dynamic.mnet-online.de) Quit (Ping timeout: 480 seconds)
[6:58] * tobiash (~quassel@mail.bmw-carit.de) Quit (Remote host closed the connection)
[7:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[7:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[7:03] * wwdillingham (~LobsterRo@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wwdillingham)
[7:08] * masteroman (~ivan@93-139-203-237.adsl.net.t-com.hr) has joined #ceph
[7:08] * Tenk (~Diablodoc@06SAAAZ3R.tor-irc.dnsbl.oftc.net) Quit ()
[7:10] * tobiash (~quassel@mail.bmw-carit.de) has joined #ceph
[7:13] * SweetGirl (~mr_flea@6AGAAAHE8.tor-irc.dnsbl.oftc.net) has joined #ceph
[7:14] * masterom1 (~ivan@93-142-193-60.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[7:15] * owlbot (~supybot@pct-empresas-50.uc3m.es) Quit (Ping timeout: 480 seconds)
[7:15] * karnan (~karnan@121.244.87.117) has joined #ceph
[7:18] * Misacorp (~click@06SAAAZ34.tor-irc.dnsbl.oftc.net) Quit ()
[7:18] * Wijk (~visored@95.211.169.35) has joined #ceph
[7:18] * gopher_49 (~gopher_49@75.66.43.16) Quit (Ping timeout: 480 seconds)
[7:20] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[7:22] * zhat (~zhat@14.161.35.227) has joined #ceph
[7:25] * evelu (~erwan@62.147.161.106) has joined #ceph
[7:31] * gopher_49 (~gopher_49@75.66.43.16) has joined #ceph
[7:33] * reed (~reed@75-101-54-18.dsl.static.fusionbroadband.com) Quit (Quit: Ex-Chat)
[7:38] * andreww (~xarses@50.141.34.127) Quit (Remote host closed the connection)
[7:39] * xarses (~xarses@50.141.34.127) has joined #ceph
[7:42] * SweetGirl (~mr_flea@6AGAAAHE8.tor-irc.dnsbl.oftc.net) Quit ()
[7:42] * KristopherBel (~Sami345@edwardsnowden2.torservers.net) has joined #ceph
[7:44] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[7:48] * Wijk (~visored@4MJAADZ6D.tor-irc.dnsbl.oftc.net) Quit ()
[7:48] * ZombieL (~murmur@6AGAAAHGK.tor-irc.dnsbl.oftc.net) has joined #ceph
[7:49] * davidzlap (~Adium@2605:e000:1313:8003:6124:45f:ff05:e26b) Quit (Quit: Leaving.)
[7:51] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[7:51] * xarses (~xarses@50.141.34.127) Quit (Ping timeout: 480 seconds)
[7:52] * gopher_49 (~gopher_49@75.66.43.16) Quit (Ping timeout: 480 seconds)
[7:58] * zaitcev (~zaitcev@c-50-130-189-82.hsd1.nm.comcast.net) Quit (Quit: Bye)
[8:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[8:01] * AndroUser (~androirc@103.51.75.14) has joined #ceph
[8:01] * AndroUser is now known as xyz
[8:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[8:04] <xyz> Hi! I have a non-retina MacBook Pro which is upgraded to 16 GB RAM. It has a 250 GB SSD + 500 GB HDD, runs a Core i5 @ 2.7 GHz and an Intel HD 4000 as its GPU. I would like to know if I can build ceph on this config if I have a Linux distro installed on a partition. Linux would be using all of this hardware as it is dual booted. Need help here, thanks.
[8:07] * kefu (~kefu@183.193.163.144) has joined #ceph
[8:08] * krobelus_ (~krobelus@178-191-110-232.adsl.highway.telekom.at) has joined #ceph
[8:08] * xyz (~androirc@103.51.75.14) Quit (Quit: AndroIRC - Android IRC Client ( http://www.androirc.com ))
[8:09] * madkiss (~madkiss@2001:6f8:12c3:f00f:1541:eaa8:e84e:1fb0) Quit (Quit: Leaving.)
[8:12] * KristopherBel (~Sami345@06SAAAZ50.tor-irc.dnsbl.oftc.net) Quit ()
[8:12] * dicko (~ricin@cloud.tor.ninja) has joined #ceph
[8:15] * krobelus (~krobelus@178-191-104-25.adsl.highway.telekom.at) Quit (Ping timeout: 480 seconds)
[8:15] * swami1 (~swami@49.32.0.56) has joined #ceph
[8:18] * ZombieL (~murmur@6AGAAAHGK.tor-irc.dnsbl.oftc.net) Quit ()
[8:23] * kefu (~kefu@183.193.163.144) Quit (Max SendQ exceeded)
[8:28] * kefu (~kefu@183.193.163.144) has joined #ceph
[8:29] * linjan__ (~linjan@176.193.197.8) Quit (Ping timeout: 480 seconds)
[8:30] * dgurtner (~dgurtner@82.199.64.68) has joined #ceph
[8:32] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) has joined #ceph
[8:42] * dicko (~ricin@06SAAAZ64.tor-irc.dnsbl.oftc.net) Quit ()
[8:42] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[8:43] * pepzi1 (~Catsceo@06SAAAZ8B.tor-irc.dnsbl.oftc.net) has joined #ceph
[8:46] * hyperbaba (~hyperbaba@private.neobee.net) has joined #ceph
[9:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[9:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[9:02] * dugravot61 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) has joined #ceph
[9:02] <TMM> diq, that is an excellent point
[9:04] * analbeard (~shw@support.memset.com) has joined #ceph
[9:07] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[9:07] * kefu is now known as kefu|afk
[9:12] * pepzi1 (~Catsceo@06SAAAZ8B.tor-irc.dnsbl.oftc.net) Quit ()
[9:12] * starcoder (~PcJamesy@strasbourg-tornode.eddai.su) has joined #ceph
[9:16] * Geph (~Geoffrey@41.77.153.99) has joined #ceph
[9:19] * kefu|afk (~kefu@183.193.163.144) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[9:22] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[9:29] * bjornar_ (~bjornar@ti0099a430-1561.bb.online.no) has joined #ceph
[9:29] * fsimonce (~simon@host201-70-dynamic.26-79-r.retail.telecomitalia.it) has joined #ceph
[9:31] * kefu (~kefu@183.193.163.144) has joined #ceph
[9:33] <mistur> Hello
[9:33] <mistur> I'm looking for a tool to benchmark a radosgw
[9:34] <mistur> I have 2 pools (replicated & erasure coded)
[9:35] <mistur> and I'd like to know the influence of replicated vs erasure code on the global performance
[9:39] * kefu_ (~kefu@114.92.120.83) has joined #ceph
[9:42] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[9:42] * starcoder (~PcJamesy@4MJAADZ96.tor-irc.dnsbl.oftc.net) Quit ()
[9:43] * Malcovent (~Salamande@hessel2.torservers.net) has joined #ceph
[9:45] * kefu (~kefu@183.193.163.144) Quit (Ping timeout: 480 seconds)
[9:46] <frickler> mistur: cosbench does that quite nicely
[9:48] * _s1gma (~Enikma@atlantic480.us.unmetered.com) has joined #ceph
[9:49] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) has joined #ceph
[9:51] * rendar (~I@host27-192-dynamic.181-80-r.retail.telecomitalia.it) has joined #ceph
[9:52] <mistur> frickler: ok thx, I'm gonna give it a look
[9:55] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[9:55] * goretoxo (~psilva@62.82.24.134.static.user.ono.com) has joined #ceph
[9:58] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) has joined #ceph
[9:58] * chengmao (~chengmao@113.57.168.154) has joined #ceph
[9:58] * allaok (~allaok@machine107.orange-labs.com) has left #ceph
[9:59] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[10:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[10:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[10:03] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[10:05] * bara (~bara@213.175.37.12) has joined #ceph
[10:05] * bara is now known as bara_XIA
[10:07] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[10:12] * Malcovent (~Salamande@06SAAA0AJ.tor-irc.dnsbl.oftc.net) Quit ()
[10:13] * chengmao (~chengmao@113.57.168.154) Quit (Quit: Going offline, see ya! (www.adiirc.com))
[10:15] * linjan__ (~linjan@86.62.112.22) has joined #ceph
[10:17] * Mattress (~Kidlvr@dsl-olubrasgw1-54fb5b-165.dhcp.inet.fi) has joined #ceph
[10:18] * olid19811115 (~olid1982@aftr-185-17-206-43.dynamic.mnet-online.de) has joined #ceph
[10:18] * _s1gma (~Enikma@4MJAAD0A6.tor-irc.dnsbl.oftc.net) Quit ()
[10:23] * dyasny (~dyasny@bzq-82-81-161-51.red.bezeqint.net) has joined #ceph
[10:23] * allaok (~allaok@machine107.orange-labs.com) Quit (Quit: Leaving.)
[10:23] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[10:23] * allaok (~allaok@machine107.orange-labs.com) Quit ()
[10:24] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[10:27] * owlbot (~supybot@pct-empresas-50.uc3m.es) has joined #ceph
[10:32] * Frank___ (~Frank@149.210.210.150) has joined #ceph
[10:37] * Frank_ (~Frank@149.210.210.150) Quit (Ping timeout: 480 seconds)
[10:39] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[10:44] * kawa2014 (~kawa@94.166.188.199) Quit (Ping timeout: 480 seconds)
[10:44] * kawa2014 (~kawa@94.162.169.103) has joined #ceph
[10:46] * rraja (~rraja@121.244.87.117) has joined #ceph
[10:47] * Mattress (~Kidlvr@6AGAAAHMI.tor-irc.dnsbl.oftc.net) Quit ()
[10:49] * bara_XIA (~bara@213.175.37.12) Quit (Quit: Bye guys!)
[10:51] * evelu (~erwan@62.147.161.106) Quit (Ping timeout: 480 seconds)
[10:52] * hassifa (~ain@198.50.200.139) has joined #ceph
[10:52] * shohn (~shohn@80.195.117.87) has joined #ceph
[10:53] * allaok (~allaok@machine107.orange-labs.com) Quit (Quit: Leaving.)
[10:54] * dugravot61 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) Quit (Quit: Leaving.)
[10:55] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[10:56] * Lea (~LeaChim@host86-168-120-216.range86-168.btcentralplus.com) has joined #ceph
[10:57] * rakeshgm (~rakesh@121.244.87.117) Quit (Ping timeout: 480 seconds)
[10:57] * bnrou (~oftc-webi@178.237.98.13) has joined #ceph
[11:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[11:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[11:01] * boogibugs (~boogibugs@gandalf.csc.fi) has joined #ceph
[11:02] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:60fc:c21d:c1c6:96e4) has joined #ceph
[11:03] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[11:07] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) has joined #ceph
[11:08] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) Quit ()
[11:09] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) has joined #ceph
[11:10] * TMM (~hp@185.5.122.2) has joined #ceph
[11:11] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[11:14] * swami2 (~swami@49.44.57.245) has joined #ceph
[11:21] * farblue (~farblue@wor.namesco.net) has joined #ceph
[11:21] * swami1 (~swami@49.32.0.56) Quit (Ping timeout: 480 seconds)
[11:22] * hassifa (~ain@6AGAAAHNZ.tor-irc.dnsbl.oftc.net) Quit ()
[11:23] * Sliker (~Architect@91.250.241.241) has joined #ceph
[11:24] * swami2 (~swami@49.44.57.245) Quit (Ping timeout: 480 seconds)
[11:26] * swami1 (~swami@49.44.57.245) has joined #ceph
[11:26] <farblue> hi all :) I'm just starting to learn about Ceph and I'm looking to build a small distributed storage configuration across 5 servers that I'm also using for compute. I want backing storage for docker containers so I can move containers between hosts. I figure the 2 simplest approaches are to mount an RBD device on the servers or to use CephFS. However, CephFS has a big red banner saying it isn't production ready and I've read in a few places that it's no
[11:26] <farblue> good idea to have an RBD mount on the same server as an OSD is running. Can someone possibly advise?
[11:27] * evelu (~erwan@62.147.161.106) has joined #ceph
[11:27] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Ping timeout: 480 seconds)
[11:31] * swami2 (~swami@49.44.57.245) has joined #ceph
[11:34] * olid19811115 (~olid1982@aftr-185-17-206-43.dynamic.mnet-online.de) Quit (Ping timeout: 480 seconds)
[11:34] <Be-El> farblue: the problem refers to kernel based rbd. afaik docker has native support for rbd via librbd (userspace).
[11:35] <bnrou> Hi all, I'm stuck with "0 librados: client.radosgw.gateway authentication error (1) Operation not permitted" and "-1 Couldn't init storage provider (RADOS)" when I try to start up the radosgw
[11:35] <Be-El> (and I'm not sure whether the problem still exists)
[11:36] * swami1 (~swami@49.44.57.245) Quit (Ping timeout: 480 seconds)
[11:36] <farblue> Be-El: ok, I wasn't aware there was a difference, but as I'm new to it all I think I'd feel happier to start with mounting the RBD on the hosts and using normal host mounts into the docker containers - at least to start with :)
[11:37] <farblue> Be-El: it does look from what I can see that the problem with RBD was fixed a couple of years ago or more - but I wanted to get confirmation!
[11:37] * shylesh__ (~shylesh@59.95.71.203) has joined #ceph
[11:37] <farblue> is it fair to say the CephFS service is still not production ready? or is it now considered stable?
[11:37] <Be-El> farblue: for a definite confirmation you may want to ask on the mailing list. I'm just another user[tm]
[11:38] <farblue> sure, I understand :)
[11:38] <Be-El> farblue: in jewel the big fat red warning was removed since they have introduced a number of fsck tools
[11:38] <farblue> ah, cool. Jewel being the one being released soon?
[11:38] <farblue> if we have RBD, what is the benefit of the CephFS stuff?
[11:38] <Be-El> yes
[11:39] * swami2 (~swami@49.44.57.245) Quit (Ping timeout: 480 seconds)
[11:39] * krypto (~krypto@125.16.137.146) has joined #ceph
[11:39] <Be-El> cephfs is a distributed filesystem vs rbd providing block devices
[11:39] <farblue> so you can???t mount an RBD on multiple servers?
[11:40] <farblue> (I did say I was new to a lot of this! ;) )
[11:40] <Be-El> if you want to access the same files from multiple hosts, you either need a shared filesystem (cephfs/nfs/afs/xtreemfs/gluster/whatever) or use a cluster-aware filesystem ontop of a shared device (e.g. ocfs2 on rbd)
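A rough sketch of what the two options look like in practice; mount points, image names and monitor addresses are placeholders, and the kernel CephFS client additionally needs a kernel new enough for the cluster's feature set:

    # shared filesystem: CephFS via the kernel client or FUSE
    mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
    ceph-fuse -m mon1:6789 /mnt/cephfs

    # cluster-aware filesystem on a shared block device, e.g. ocfs2 on rbd
    rbd create shared --size 102400
    rbd map shared                      # device name may differ, e.g. /dev/rbd0
    mkfs.ocfs2 /dev/rbd0                # then mount on each node with ocfs2/o2cb configured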
[11:40] * huangjun (~kvirc@113.57.168.154) Quit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
[11:40] <farblue> right, ok, makes sense
[11:40] <Be-El> you can mount it, but you will trash the data on the rbd
[11:41] <Be-El> compare rbd to iscsi
[11:41] <farblue> right, ok. Hence using RBD directly in docker containers - so each container is writing to its own block objects
[11:41] * swami1 (~swami@49.32.0.56) has joined #ceph
[11:41] <farblue> although I guess that makes backup a bit of a pain
[11:42] <farblue> I assume RBD is more performant than something like CephFS because it doesn't allow concurrent access
[11:43] * zhaochao_ (~zhaochao@125.39.9.144) has joined #ceph
[11:43] <Be-El> that's one of the problems with cephfs, yes....and with every other network-aware filesystem that tries to ensure consistency (as opposed to NFS ;-) )
[11:43] <farblue> heh, yes, makes sense :)
[11:44] <Be-El> and backup of rbd is ...well....complex. if you can ensure that no kernel cache is involved or the caches are completely flushed (-> consistent state of the rbd), you can make a snapshot and mount it independently for backup purposes
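A minimal sketch of that snapshot-based backup, assuming an image called rbd/myimage and that the writer has flushed or frozen its filesystem first:

    rbd snap create rbd/myimage@backup-20160406
    rbd export rbd/myimage@backup-20160406 /backups/myimage-20160406.img
    # or map the snapshot read-only and back it up at the file level
    rbd map rbd/myimage@backup-20160406 --read-only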
[11:44] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[11:44] <farblue> Be-El - or, I guess, I have to handle backup at the container level
[11:44] <farblue> so there???s only ever one mount of the RBD
[11:45] <bnrou> Hi all, I'm stuck with "0 librados: client.radosgw.gateway authentication error (1) Operation not permitted" and "-1 Couldn't init storage provider (RADOS)" when I try to startup the radosgw
[11:45] <Be-El> cephfs might definitely help in this case, but I'm not sure how to handle file permissions and synchronized users/groups within containers
[11:45] <bnrou> Does anyone know something about this? I can't find anything in the logs, nor anything online that I didn't already try
[11:46] <Be-El> bnrou: your ceph user either has incorrect permissions or a wrong key
[11:46] <bnrou> how can I check its permissions?
[11:46] <bnrou> I re-did the key 3 times, still the same message
[11:47] <Be-El> bnrou: use the rados command with the corresponding key to list the content of the rgw pools
[11:47] <bnrou> ok thanks
[11:47] <Be-El> bnrou: the problem might also be the file permissions of the keyring file
[11:48] <Be-El> bnrou: check that the user running the radosgw process is able to read the file's content
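A sketch of that check, assuming bnrou's user is client.radosgw.gateway and the keyring sits in the conventional location:

    rados --id radosgw.gateway \
          --keyring /etc/ceph/ceph.client.radosgw.gateway.keyring lspools
    ls -l /etc/ceph/ceph.client.radosgw.gateway.keyring   # readable by the user running radosgw?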
[11:48] * zhaochao (~zhaochao@125.39.9.156) Quit (Ping timeout: 480 seconds)
[11:48] * zhaochao_ is now known as zhaochao
[11:49] <bnrou> I'm using root
[11:49] * bjornar_ (~bjornar@ti0099a430-1561.bb.online.no) Quit (Ping timeout: 480 seconds)
[11:50] <bnrou> should it be "r" or "rw"?
[11:51] * swami2 (~swami@49.44.57.245) has joined #ceph
[11:51] * kefu_ is now known as kefu|afk
[11:51] <bnrou> I actually know how to use the basic rados commands to list the pools, but not how to use the key with them
[11:51] <Miouge> So I got in touch with a Software Defined Storage sales guy to check commercial alternatives to Ceph. When comparing their product to ceph they say: "Ceph is limited to 3 replicas max, 2 datacenters max, Openstack integration is Cinder+KVM only, etc..." Mmmm.. say what?
[11:52] * skney1 (~BlS@marylou.nos-oignons.net) has joined #ceph
[11:52] * Sliker (~Architect@6AGAAAHPA.tor-irc.dnsbl.oftc.net) Quit ()
[11:52] * rcfighter1 (~Rosenblut@06SAAA0FI.tor-irc.dnsbl.oftc.net) has joined #ceph
[11:53] <bnrou> Be-El: do you have an example for me please?
[11:54] * swami1 (~swami@49.32.0.56) Quit (Ping timeout: 480 seconds)
[11:55] <Be-El> bnrou: try the --id parameter. it is not listed explicitly, but may also be accepted by the rados command
[11:55] <Be-El> Miouge: they did not understand ceph
[11:57] * Mika_c (~quassel@122.146.93.152) Quit (Read error: No route to host)
[11:57] <Be-El> Miouge: you can have any level of replication (although more than 3 replicas are difficult to justify). datacenters are just a level in the crush hierarchy, so using more than 2 is a matter of configuring crush. and openstack usage depends on whether the hypervisor is able to talk to ceph. cinder and glance both support ceph as a backend
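To illustrate the datacenter point, a sketch of a CRUSH rule in the pre-Luminous syntax used by hammer/infernalis that places one replica per datacenter bucket; the rule name, numbers and bucket names are assumptions:

    rule replicated_across_dcs {
            ruleset 1
            type replicated
            min_size 2
            max_size 4
            step take default
            step chooseleaf firstn 0 type datacenter
            step emit
    }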
[11:58] * Mika_c (~quassel@122.146.93.152) has joined #ceph
[11:58] <bnrou> Be-El: it tells me " librados: client.clusterTestnew.client.radosgw.keyring initialization error (2) No such file or directory", which is weird, I never specified this "client.clusterTestnew.client.radosgw.keyring", just the "clusterTestnew.client.radosgw.keyring"
[11:58] <Miouge> Be-El: the funny thing is that we use RadosGW for the Object sotrage/Swift deployment
[11:59] * swami2 (~swami@49.44.57.245) Quit (Ping timeout: 480 seconds)
[11:59] <Be-El> bnrou: see http://docs.ceph.com/docs/master/rados/operations/user-management/ for more information about client names, ids, and files
[12:00] <bnrou> Be-El: I know the docs, but I can't link my error to any of that
[12:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[12:01] <Be-El> bnrou: what's the name of the keyring file?
[12:01] * tobiash (~quassel@mail.bmw-carit.de) Quit (Remote host closed the connection)
[12:02] <bnrou> Be-El: clusterTestnew.client.radosgw.keyring
[12:02] <bnrou> this is a new one I created with a different name to be sure there is no old weird things with the old key name
[12:02] * zhat (~zhat@14.161.35.227) Quit (Quit: Leaving)
[12:03] <Be-El> bnrou: and that's wrong. the file name has to start with "client"
[12:04] <bnrou> ha. that's actually a problem throughout the documentation: it tells you you can use a custom cluster name, but then it doesn't tell you how you should really name things
[12:04] <bnrou> thanks I'm trying that
[12:05] <bnrou> Be-El: does the same go for the admin keyring?
[12:05] * tobiash (~quassel@212.118.206.70) has joined #ceph
[12:06] <Be-El> bnrou: the admin keyring is just another client keyring, so the same rules apply, too
[12:06] <Be-El> bnrou: but I would backup that file before doing any changes
[12:06] * bjornar_ (~bjornar@ti0099a430-1561.bb.online.no) has joined #ceph
[12:07] <Be-El> bnrou: the commandline tools should all support the --cluster and --id parameters, which are used to construct the file name (afaik)
[12:07] <bnrou> Be-El: I keep the old file as I'm creating keyring files with new names
[12:08] * swami1 (~swami@49.44.57.245) has joined #ceph
[12:10] * Mika_c (~quassel@122.146.93.152) Quit (Remote host closed the connection)
[12:12] * stein (~stein@185.56.185.82) Quit (Ping timeout: 480 seconds)
[12:12] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[12:12] * kefu|afk is now known as kefu_
[12:14] <bnrou> Be-El: still errors...
[12:14] <bnrou> monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
[12:15] <Be-El> bnrou: use strace to check which filename rados is actually looking for
[12:15] * stein (~stein@185.56.185.82) has joined #ceph
[12:16] <Be-El> bnrou: oh, and I was wrong..the file name pattern is <cluster>.client.<name>.keyring
[12:17] <bnrou> Be-El: ha seemed fairly unusual to me ^^
[12:19] <bnrou> Be-El: not <cluster>.client.<name>.radosgw.keyring ?
[12:19] <bnrou> oh no ok
[12:19] <bnrou> nothing
[12:20] <bnrou> Be-El: "-1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication" "0 librados: client.clusterTest.client.radosgw.keyring initialization error (2) No such file or directory"
[12:21] <bnrou> I still have no idea why it keeps adding "client." at the beginning of the keyring client... Makes no sense!
[12:21] * skney1 (~BlS@06SAAA0FH.tor-irc.dnsbl.oftc.net) Quit ()
[12:21] * Bwana (~Eric@marcuse-1.nos-oignons.net) has joined #ceph
[12:22] * rcfighter1 (~Rosenblut@06SAAA0FI.tor-irc.dnsbl.oftc.net) Quit ()
[12:23] * tuhnis (~BlS@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[12:23] * Nicola-1980 (~Nicola-19@ip5b41fff1.dynamic.kabel-deutschland.de) has joined #ceph
[12:24] <farblue> Be-El: regarding RBD inside docker containers, do you have any experience of doing it?
[12:25] <Be-El> farblue: nope, I'm not a fan of docker at all
[12:25] <farblue> fair enough :)
[12:25] <Be-El> bnrou: try --name instead of --id
[12:26] <liiwi> lxc ftw! :)
[12:26] <Be-El> liiwi: yes, lxc is not that completely braindead security-wise compared to docker
[12:27] <Be-El> liiwi: but the container implementation in the linux kernel has a lot of rough edges which are present in both docker and lxc
[12:27] * kawa2014 (~kawa@94.162.169.103) Quit (Read error: Connection reset by peer)
[12:27] * kawa2014 (~kawa@94.162.169.103) has joined #ceph
[12:30] <bnrou> Be-El: didn't work either
[12:31] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[12:32] <bnrou> Be-El: as much as I read the rados man page, I still don't see any way to indicate the keyring file
[12:33] * IvanJobs (~hardes@103.50.11.146) Quit (Quit: Leaving)
[12:34] * bjornar_ (~bjornar@ti0099a430-1561.bb.online.no) Quit (Ping timeout: 480 seconds)
[12:41] <Be-El> bnrou: did you try --keyring?
[12:41] * bniver (~bniver@pool-173-48-58-27.bstnma.fios.verizon.net) Quit (Remote host closed the connection)
[12:45] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[12:51] * Bwana (~Eric@06SAAA0GJ.tor-irc.dnsbl.oftc.net) Quit ()
[12:52] * tuhnis (~BlS@6AGAAAHRN.tor-irc.dnsbl.oftc.net) Quit ()
[12:56] * kefu_ is now known as kefu|afk
[12:57] * Xeon06 (~Epi@static-83-41-68-212.sadecehosting.net) has joined #ceph
[13:00] <Kvisle> is there any way to make the ceph osd's utilize linux' filesystem cache to accelerate reads a teeny bit?
[13:00] * Kvisle will be looking at overlay pools soon
[13:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[13:01] <Gugge-47527> they should use the cache without you doing anything Kvisle
[13:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[13:02] * shylesh__ (~shylesh@59.95.71.203) Quit (Remote host closed the connection)
[13:02] * smerz (~ircircirc@37.74.194.90) Quit (Read error: No route to host)
[13:04] * wjw-freebsd (~wjw@vpn.ecoracks.nl) has joined #ceph
[13:12] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Read error: Connection reset by peer)
[13:17] * whatevsz (~quassel@ppp-46-244-224-202.dynamic.mnet-online.de) has joined #ceph
[13:17] * swami2 (~swami@49.32.0.4) has joined #ceph
[13:21] * swami1 (~swami@49.44.57.245) Quit (Ping timeout: 480 seconds)
[13:24] * bene2 (~bene@2601:18c:8501:25e4:ea2a:eaff:fe08:3c7a) has joined #ceph
[13:27] * Xeon06 (~Epi@6AGAAAHSV.tor-irc.dnsbl.oftc.net) Quit ()
[13:27] * adept256 (~Arfed@Relay-J.tor-exit.network) has joined #ceph
[13:28] * zhaochao (~zhaochao@125.39.9.144) Quit (Quit: ChatZilla 0.9.92 [Firefox 45.0.1/20160318172635])
[13:31] * Bosse (~bosse@erebus.klykken.com) has joined #ceph
[13:35] <Bosse> I have a cluster with hammer 0.94.3 with 270 osds, and made the mistake of adding two new servers with 60 osds running version 0.94.6. The rebalancing is ongoing, but clients are derping out - one server with an rbd mount gets messages like "osdc handle_map corrupt msg" and "osdxxx feature set mismatch".
[13:36] <Bosse> what is the best approach? allow cluster to finish rebalance and then upgrade MON->OSD, or try to abort the process by removing the newly added OSDs? they have been rebalancing for approx 24 hours now...
[13:38] <Bosse> or maybe downgrade the two new servers to 0.94.3 until it has finished the backfills?
[13:40] * olid19811115 (~olid1982@193.24.209.81) has joined #ceph
[13:43] * georgem (~Adium@24.114.68.175) has joined #ceph
[13:43] * georgem (~Adium@24.114.68.175) Quit ()
[13:43] * georgem (~Adium@206.108.127.16) has joined #ceph
[13:45] <sep> Bosse, I am just a user/learner, so I really do not know what I am talking about.... But I think I would have downgraded the osd's, since (for me) that's just an apt-get away. Should be fairly quick; then do the upgrade in the correct order once everything is stable.
[13:48] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: This computer has gone to sleep)
[13:48] <Bosse> sep: thanks for the suggestion. i was thinking about that for a little bit after writing the above, and realized that some metadata might have been changed. the safest approach would probably be to remove the newly added osds altogether, allow backfills to complete and then upgrade the "old" cluster before adding the new nodes. what do you think?
[13:51] <sep> probably safest. I just assumed point releases did not change the on-disk structure, but I could of course be horribly mistaken. Your way is safer
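For completeness, the usual sequence for backing one of the newly added OSDs out again (repeat per OSD id; the id below is only an example, and it is worth re-checking against the hammer docs before running):

    ceph osd out 270                 # let data drain off the OSD
    # stop the ceph-osd daemon for osd.270 on its host, then:
    ceph osd crush remove osd.270
    ceph auth del osd.270
    ceph osd rm 270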
[13:53] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[13:53] * analbeard (~shw@support.memset.com) has joined #ceph
[13:57] * adept256 (~Arfed@4MJAAD0JA.tor-irc.dnsbl.oftc.net) Quit ()
[14:00] <bnrou> Be-El: Seems better, just have a " 0 librados: client.admin authentication error (1) Operation not permitted"
[14:00] <bnrou> so it finds the key, but it doesn't like it
[14:00] <bnrou> should I restart something?
[14:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[14:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[14:02] * Kidlvr1 (~KungFuHam@6AGAAAHU7.tor-irc.dnsbl.oftc.net) has joined #ceph
[14:05] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[14:09] * rwheeler (~rwheeler@pool-173-48-195-215.bstnma.fios.verizon.net) Quit (Quit: Leaving)
[14:09] <Kvisle> Gugge-47527: doesn't look like it's used very efficiently -- most of my memory isn't utilized at all ..
[14:11] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: This computer has gone to sleep)
[14:13] * boichev (~boichev@213.169.56.130) Quit (Read error: Connection reset by peer)
[14:13] * boichev (~boichev@213.169.56.130) has joined #ceph
[14:16] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[14:17] * swami1 (~swami@106.206.140.187) has joined #ceph
[14:17] * ira (~ira@nat-pool-bos-t.redhat.com) has joined #ceph
[14:19] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[14:20] * Miouge_ (~Miouge@h-72-233.a163.priv.bahnhof.se) has joined #ceph
[14:20] * swami2 (~swami@49.32.0.4) Quit (Ping timeout: 480 seconds)
[14:22] * AG_Scott (~PeterRabb@65.19.167.131) has joined #ceph
[14:23] * georgem (~Adium@206.108.127.16) Quit (Ping timeout: 480 seconds)
[14:23] * krypto (~krypto@125.16.137.146) Quit (Ping timeout: 480 seconds)
[14:28] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[14:28] * evelu (~erwan@62.147.161.106) Quit (Ping timeout: 480 seconds)
[14:30] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: This computer has gone to sleep)
[14:31] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[14:31] * Kidlvr1 (~KungFuHam@6AGAAAHU7.tor-irc.dnsbl.oftc.net) Quit ()
[14:31] * xolotl (~Plesioth@chomsky.torservers.net) has joined #ceph
[14:32] * evelu (~erwan@62.147.161.106) has joined #ceph
[14:32] * evelu (~erwan@62.147.161.106) Quit ()
[14:33] * evelu (~erwan@62.147.161.106) has joined #ceph
[14:35] * Miouge_ (~Miouge@h-72-233.a163.priv.bahnhof.se) Quit (Quit: Miouge_)
[14:38] * krypto (~krypto@125.16.137.146) has joined #ceph
[14:38] * Racpatel (~Racpatel@2601:87:3:3601::4edb) has joined #ceph
[14:38] * olid19811115 (~olid1982@193.24.209.81) Quit (Ping timeout: 480 seconds)
[14:46] * krypto (~krypto@125.16.137.146) Quit (Ping timeout: 480 seconds)
[14:46] * krypto (~krypto@G68-90-105-38.sbcis.sbc.com) has joined #ceph
[14:49] * georgem (~Adium@206.108.127.16) has joined #ceph
[14:51] * AG_Scott (~PeterRabb@06SAAA0K7.tor-irc.dnsbl.oftc.net) Quit ()
[14:52] * cooey (~loft@korematsu.tor-exit.calyxinstitute.org) has joined #ceph
[14:52] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: This computer has gone to sleep)
[14:53] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[14:55] * Racpatel (~Racpatel@2601:87:3:3601::4edb) Quit (Quit: Leaving)
[14:55] * bniver (~bniver@nat-pool-bos-u.redhat.com) has joined #ceph
[14:55] * Racpatel (~Racpatel@2601:87:3:3601::4edb) has joined #ceph
[14:57] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[15:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[15:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[15:01] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[15:01] * xolotl (~Plesioth@4MJAAD0L0.tor-irc.dnsbl.oftc.net) Quit ()
[15:02] * Diablothein (~PuyoDead@tor-exit.ohdoom.net) has joined #ceph
[15:04] * Nicola-1_ (~Nicola-19@ip5b41fff1.dynamic.kabel-deutschland.de) has joined #ceph
[15:04] * Nicola-1980 (~Nicola-19@ip5b41fff1.dynamic.kabel-deutschland.de) Quit (Read error: Connection reset by peer)
[15:05] * krypto (~krypto@G68-90-105-38.sbcis.sbc.com) Quit (Quit: Leaving)
[15:06] * Hemanth (~hkumar_@121.244.87.117) has joined #ceph
[15:10] * mattbenjamin1 (~mbenjamin@aa2.linuxbox.com) has joined #ceph
[15:11] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: This computer has gone to sleep)
[15:12] * swami2 (~swami@49.32.0.16) has joined #ceph
[15:14] * vicente (~vicente@1-163-219-137.dynamic.hinet.net) has joined #ceph
[15:17] * Hemanth (~hkumar_@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:17] * swami1 (~swami@106.206.140.187) Quit (Ping timeout: 480 seconds)
[15:19] * whatevsz (~quassel@ppp-46-244-224-202.dynamic.mnet-online.de) Quit (Ping timeout: 480 seconds)
[15:20] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[15:21] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[15:21] * cooey (~loft@4MJAAD0M2.tor-irc.dnsbl.oftc.net) Quit ()
[15:22] * MKoR (~JohnO@62.102.148.67) has joined #ceph
[15:22] * ibravo (~ibravo@166.170.29.132) has joined #ceph
[15:22] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[15:23] * shyu_ (~shyu@111.197.116.113) has joined #ceph
[15:24] <hyperbaba> can someone help me please? I have a small ceph deployment with one monitor and a couple of osds on two servers
[15:24] <Kvisle> only if you say what you need help with
[15:24] * overclk_ (~quassel@121.244.87.124) has joined #ceph
[15:24] <hyperbaba> the root filesystem reached 100% on the monitor. I've made some space, but now the monitor refuses to start
[15:27] * smerz (~ircircirc@37.74.194.90) has joined #ceph
[15:27] * overclk (~quassel@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:28] * wyang (~wyang@116.216.30.3) has joined #ceph
[15:31] * Diablothein (~PuyoDead@4MJAAD0NM.tor-irc.dnsbl.oftc.net) Quit ()
[15:36] <hyperbaba> anyone? It's a really big issue for me
[15:36] * tZ (~Diablodoc@81.89.0.199) has joined #ceph
[15:38] * kefu|afk (~kefu@114.92.120.83) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[15:39] <smerz> hm irc gateway gives no history :(
[15:42] * Miouge (~Miouge@94.136.92.20) Quit (Ping timeout: 480 seconds)
[15:42] * wyang (~wyang@116.216.30.3) Quit (Quit: This computer has gone to sleep)
[15:46] * wyang (~wyang@116.216.30.3) has joined #ceph
[15:47] * alkaid_ (~root@freeshell.ustc.edu.cn) has joined #ceph
[15:47] <alkaid_> 
[15:48] * alkaid_ (~root@freeshell.ustc.edu.cn) has left #ceph
[15:50] * gregmark (~Adium@68.87.42.115) has joined #ceph
[15:51] * MKoR (~JohnO@06SAAA0OM.tor-irc.dnsbl.oftc.net) Quit ()
[15:52] * Borf (~Scaevolus@Relay-J.tor-exit.network) has joined #ceph
[15:53] * overclk_ (~quassel@121.244.87.124) Quit (Ping timeout: 480 seconds)
[15:54] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[15:55] * wjw-freebsd2 (~wjw@vpn.ecoracks.nl) has joined #ceph
[15:55] * wwdillingham (~LobsterRo@65.112.8.197) has joined #ceph
[15:57] * wjw-freebsd (~wjw@vpn.ecoracks.nl) Quit (Ping timeout: 480 seconds)
[15:58] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: This computer has gone to sleep)
[15:59] * EinstCrazy (~EinstCraz@180.152.103.247) has joined #ceph
[15:59] * yanzheng (~zhyan@118.116.113.6) Quit (Quit: This computer has gone to sleep)
[15:59] * EinstCrazy (~EinstCraz@180.152.103.247) Quit (Remote host closed the connection)
[15:59] * csoukup (~csoukup@159.140.254.102) has joined #ceph
[16:00] * EinstCrazy (~EinstCraz@180.152.103.247) has joined #ceph
[16:00] * hyperbaba (~hyperbaba@private.neobee.net) Quit ()
[16:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[16:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[16:04] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[16:04] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:05] * kefu (~kefu@183.193.163.144) has joined #ceph
[16:06] * tZ (~Diablodoc@4MJAAD0PC.tor-irc.dnsbl.oftc.net) Quit ()
[16:06] * ItsCriminalAFK (~TehZomB@tor-exit-4.all.de) has joined #ceph
[16:07] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Quit: Leaving...)
[16:07] * wjw-freebsd2 (~wjw@vpn.ecoracks.nl) Quit (Ping timeout: 480 seconds)
[16:09] * swami2 (~swami@49.32.0.16) Quit (Quit: Leaving.)
[16:09] * EinstCrazy (~EinstCraz@180.152.103.247) Quit (Remote host closed the connection)
[16:09] * EinstCrazy (~EinstCraz@180.152.103.247) has joined #ceph
[16:09] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: This computer has gone to sleep)
[16:10] * wjw-freebsd (~wjw@vpn.ecoracks.nl) has joined #ceph
[16:11] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[16:11] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[16:13] * wushudoin (~wushudoin@38.140.108.2) has joined #ceph
[16:15] * wjw-freebsd2 (~wjw@vpn.ecoracks.nl) has joined #ceph
[16:15] * RameshN (~rnachimu@121.244.87.117) Quit (Ping timeout: 480 seconds)
[16:15] * ngoswami (~ngoswami@121.244.87.116) Quit ()
[16:18] * wjw-freebsd (~wjw@vpn.ecoracks.nl) Quit (Ping timeout: 480 seconds)
[16:19] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[16:19] * wyang (~wyang@116.216.30.3) Quit (Quit: This computer has gone to sleep)
[16:20] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[16:21] * xarses_ (~xarses@64.124.158.100) has joined #ceph
[16:21] * Borf (~Scaevolus@06SAAA0P2.tor-irc.dnsbl.oftc.net) Quit ()
[16:22] * zapu (~geegeegee@justus.impium.de) has joined #ceph
[16:22] * Mikorc (~oftc-webi@129.102.242.53) has joined #ceph
[16:22] <Mikorc> Hello
[16:26] * wjw-freebsd2 (~wjw@vpn.ecoracks.nl) Quit (Ping timeout: 480 seconds)
[16:26] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[16:27] * georgem (~Adium@206.108.127.16) has joined #ceph
[16:27] <Mikorc> Hi guys, I'm running tests with ceph for my company, and I have an issue with my storage. I have one admin node, one monitor node, and one OSD node with 2 disks used by ceph. On the admin node, I have 2 VMs with the same storage mounted. When I add data on client1, I have to umount/mount on client2 to see this data on client2. I did a lot of research on the web but found nothing about this. Can you light my path please? Any ideas on what's going on?
[16:28] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[16:28] * ibravo (~ibravo@166.170.29.132) Quit (Read error: Connection reset by peer)
[16:29] * wjw-freebsd (~wjw@vpn.ecoracks.nl) has joined #ceph
[16:31] <farblue> Mikorc - when you say you have the same storage mount on both VMs do you mean you are using the same RBD in both VMs?
[16:31] * dgurtner_ (~dgurtner@87.215.61.26) has joined #ceph
[16:32] <wwdillingham> What is generally recommended for cephfs: kernel client or FUSE client?
[16:33] * dgurtner (~dgurtner@82.199.64.68) Quit (Ping timeout: 480 seconds)
[16:34] <Mikorc> farblue - Yes, I use the same RBD in both VMs, and I should not do that. RBD cannot be shared, that's right? I should use CephFS?
[16:35] <farblue> I asked exactly the same question earlier today :) RBDs cannot be shared
[16:35] <jdillaman> Mikorc: you would need to use a clustered file system like GFS2 on top of the RBD mount if you wanted to share a filesystem between two hosts (or use CephFS)
[16:36] <farblue> you can use CephFS - but I don't think it is considered production ready yet - or you can use GFS2 or OCFS2 or another filesystem that knows about being shared
[16:36] * ItsCriminalAFK (~TehZomB@06SAAA0QT.tor-irc.dnsbl.oftc.net) Quit ()
[16:36] * Kealper (~RaidSoft@freedom.ip-eend.nl) has joined #ceph
[16:39] <Mikorc> Hum okay, because we want to share a big storage full of images between multiple workers. CephFS is considered production ready in the Jewel release, maybe?
[16:45] * kawa2014 (~kawa@94.162.169.103) Quit (Ping timeout: 480 seconds)
[16:45] * kawa2014 (~kawa@94.167.0.100) has joined #ceph
[16:46] * davidzlap (~Adium@2605:e000:1313:8003:9c4e:b36e:c89a:79ca) has joined #ceph
[16:47] <m0zes> cephfs is production ready in jewel. I've been using it in production since hammer.
[16:47] <Mikorc> was hammer released after jewel?
[16:47] <m0zes> hammer->infernalis->jewel->kraken
[16:48] <m0zes> hammer is 0.94
[16:48] <Mikorc> Thank you
[16:48] <m0zes> infernalis is the current stable, hammer is the current long-term-stable.
[16:48] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: This computer has gone to sleep)
[16:50] * wjw-freebsd2 (~wjw@vpn.ecoracks.nl) has joined #ceph
[16:50] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[16:51] <Mikorc> Ok, this is why I use hammer, and jewel is currently the dev version?
[16:51] * zapu (~geegeegee@06SAAA0RC.tor-irc.dnsbl.oftc.net) Quit ()
[16:52] * JWilbur (~Helleshin@freedom.ip-eend.nl) has joined #ceph
[16:52] * ngoswami (~ngoswami@121.244.87.116) Quit ()
[16:53] <m0zes> jewel is in the release candidate stage now. it should be long-term-stable "soon"
[16:54] * wjw-freebsd (~wjw@vpn.ecoracks.nl) Quit (Ping timeout: 480 seconds)
[16:54] * jclm (~jclm@ip68-96-198-45.lv.lv.cox.net) has joined #ceph
[16:54] * wyang (~wyang@59.45.74.43) has joined #ceph
[16:55] * bara (~bara@213.175.37.12) has joined #ceph
[16:56] <etienneme> h->i->j->k :)
[16:56] <Mikorc> Hooo nice !
[16:57] <Mikorc> Soon ? 1-2 months or more ? we have a restrictive schedule.
[16:58] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[16:59] <m0zes> I can't set a date, but I'd expect within a month or so, based on the previous release schedules.
[17:00] <etienneme> Mikorc: http://docs.ceph.com/docs/master/releases/
[17:00] <etienneme> November
[17:00] <etienneme> My bad
[17:00] * Miouge (~Miouge@h-72-233.a163.priv.bahnhof.se) has joined #ceph
[17:00] <m0zes> That was the infernalis release
[17:00] <etienneme> I thought Jewel was the last one :p
[17:01] <etienneme> That's not even the right year x)
[17:02] <smerz> m0zes> cephfs is production ready in jewel. I've been using it in production since hammer.
[17:02] <smerz> is it true?
[17:03] <smerz> oh dammit wrong channel lol -.-
[17:03] <m0zes> smerz: http://docs.ceph.com/docs/master/release-notes/
[17:04] <m0zes> "This is the first release in which CephFS is declared stable and production ready!"
[17:04] <etienneme> great :)
[17:04] <smerz> omg
[17:06] * Kealper (~RaidSoft@6AGAAAH5C.tor-irc.dnsbl.oftc.net) Quit ()
[17:06] * offender (~Schaap@213.61.149.100) has joined #ceph
[17:09] * zaitcev (~zaitcev@c-50-130-189-82.hsd1.nm.comcast.net) has joined #ceph
[17:10] * reed (~reed@75-101-54-18.dsl.static.fusionbroadband.com) has joined #ceph
[17:11] * georgem1 (~Adium@206.108.127.16) has joined #ceph
[17:12] * wjw-freebsd2 (~wjw@vpn.ecoracks.nl) Quit (Ping timeout: 480 seconds)
[17:13] <wwdillingham> can I rename a cephfs filesystem?
[17:13] <Gugge-47527> rename what part?
[17:14] <wwdillingham> the name of the filesystem
[17:14] <jcsp> wwdillingham: not currently. It's not fundamentally forbidden but we didn't add a command to do it.
[17:14] <wwdillingham> i made the filesystem with the name cephfs_test
[17:14] * bnrou (~oftc-webi@178.237.98.13) Quit (Quit: Page closed)
[17:14] <wwdillingham> before i knew I could only have one
[17:14] <wwdillingham> now I want to rename it to be descriptive
[17:14] <wwdillingham> I could delete it entirely
[17:14] <wwdillingham> and make a new one... if that is supported
[17:15] <wwdillingham> I don't want to break anything, the cluster is in production with rbd, but the cephfs part is not in production (yet)
[17:15] * lcurtis_ (~lcurtis@47.19.105.250) has joined #ceph
[17:16] * kefu (~kefu@183.193.163.144) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[17:17] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[17:17] <m0zes> deleting and recreating is supported.
[17:17] <wwdillingham> ok I will proceed with deleting that fs
[17:18] <m0zes> stop all mds. ceph mds fail 0; ceph fs rm $name --yes-i-really-mean-it
[17:18] <wwdillingham> m0zes: can I reuse the same data and metadata pools, or should I blast those as well?
[17:18] * georgem (~Adium@206.108.127.16) Quit (Ping timeout: 480 seconds)
[17:18] <m0zes> I'd delete them and create new ones. I'm willing to bet funky things will happen if you re-use them without emptying them.
[17:19] <wwdillingham> sounds good
[17:19] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) Quit (Quit: Leaving.)
[17:20] * reed (~reed@75-101-54-18.dsl.static.fusionbroadband.com) Quit (Ping timeout: 480 seconds)
[17:20] <wwdillingham> m0zes: why ceph mds fail vs ceph mds stop ?
[17:20] <wwdillingham> or same difference
[17:21] <m0zes> you need to stop them, then fail the mds.
[17:21] <m0zes> otherwise ceph can't tell if the mds is really down vs just laggy.
[17:21] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[17:21] * JWilbur (~Helleshin@7V7AAD22X.tor-irc.dnsbl.oftc.net) Quit ()
[17:22] * FierceForm (~maku@213.61.149.100) has joined #ceph
[17:23] <wwdillingham> okay, and once I fail it, I remove the filesystem and the pools; then how do I un-fail the mds, with "ceph mds cluster_up"?
[17:23] * bjornar_ (~bjornar@ti0099a430-1561.bb.online.no) has joined #ceph
[17:23] <m0zes> by creating the pools, creating a new cephfs, and starting an mds.
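Putting the whole exchange together, the delete-and-recreate sequence looks roughly like the following; only the old filesystem name cephfs_test comes from the conversation, while the old pool names, new names and PG counts are assumed placeholders to adapt:

    # stop every ceph-mds daemon first (systemctl stop ceph-mds@<id>, or the init script)
    ceph mds fail 0                                   # mark the stopped MDS rank as failed
    ceph fs rm cephfs_test --yes-i-really-mean-it     # remove the old filesystem
    ceph osd pool delete cephfs_test_data cephfs_test_data --yes-i-really-really-mean-it
    ceph osd pool delete cephfs_test_metadata cephfs_test_metadata --yes-i-really-really-mean-it
    ceph osd pool create cephfs_metadata 64           # placeholder PG count
    ceph osd pool create cephfs_data 256              # placeholder PG count
    ceph fs new myfs cephfs_metadata cephfs_data      # pick a descriptive name here
    # then start the MDS daemon(s) again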
[17:24] * Bartek (~Bartek@78.10.236.140) has joined #ceph
[17:24] * kefu (~kefu@183.193.163.144) has joined #ceph
[17:24] <wwdillingham> okay, thanks, here i go
[17:27] * allaok (~allaok@machine107.orange-labs.com) has left #ceph
[17:28] * gopher_49 (~gopher_49@host2.drexchem.com) has joined #ceph
[17:31] * reed (~reed@75-101-54-18.dsl.static.fusionbroadband.com) has joined #ceph
[17:31] * EinstCrazy (~EinstCraz@180.152.103.247) Quit (Remote host closed the connection)
[17:32] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[17:32] * dgurtner (~dgurtner@82.199.64.68) has joined #ceph
[17:34] * dgurtner_ (~dgurtner@87.215.61.26) Quit (Ping timeout: 480 seconds)
[17:34] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[17:35] * shohn (~shohn@80.195.117.87) Quit (Quit: Leaving.)
[17:35] * overclk (~quassel@117.202.102.82) has joined #ceph
[17:36] * ade (~abradshaw@212.77.58.61) has joined #ceph
[17:36] * offender (~Schaap@4MJAAD0S9.tor-irc.dnsbl.oftc.net) Quit ()
[17:36] * PeterRabbit (~hifi@193.107.85.57) has joined #ceph
[17:38] * wjw-freebsd (~wjw@176.74.240.1) has joined #ceph
[17:44] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) has joined #ceph
[17:46] * PeterRabbit (~hifi@06SAAA0UI.tor-irc.dnsbl.oftc.net) Quit (Remote host closed the connection)
[17:46] * FierceForm (~maku@6AGAAAH7W.tor-irc.dnsbl.oftc.net) Quit (Remote host closed the connection)
[17:47] * Mikorc (~oftc-webi@129.102.242.53) Quit (Quit: Page closed)
[17:47] * olid19811115 (~olid1982@aftr-185-17-206-43.dynamic.mnet-online.de) has joined #ceph
[17:53] * jclm (~jclm@ip68-96-198-45.lv.lv.cox.net) Quit (Quit: Leaving.)
[17:56] * bara (~bara@213.175.37.12) Quit (Quit: Bye guys! (??????????????????? ?????????)
[17:59] * wjw-freebsd (~wjw@176.74.240.1) Quit (Ping timeout: 480 seconds)
[18:00] * dyasny (~dyasny@bzq-82-81-161-51.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[18:01] * Nicola-1_ (~Nicola-19@ip5b41fff1.dynamic.kabel-deutschland.de) Quit (Remote host closed the connection)
[18:02] * swami1 (~swami@27.7.172.84) has joined #ceph
[18:03] <wwdillingham> Question about user capabilities.... In the docs under modify user capabilities, the ceph auth caps command gives the following example:
[18:03] <wwdillingham> ceph auth caps USERTYPE.USERID {daemon} 'allow [r|w|x|*|...] [pool={pool-name}] [namespace={namespace-name}]'
[18:03] <wwdillingham> what is the namespace= ?
[18:04] <wwdillingham> I mean, a pool is its own namespace....
[18:04] * kefu is now known as kefu|afk
[18:04] * ade (~abradshaw@212.77.58.61) Quit (Ping timeout: 480 seconds)
[18:07] * TMM (~hp@185.5.122.2) Quit (Quit: Ex-Chat)
[18:08] * mykola (~Mikolaj@91.225.200.223) has joined #ceph
[18:08] <joshd1> wwdillingham: 'namespace' is another level of namespace that does not require significant resources, unlike pools
[18:09] * rdias (~rdias@2001:8a0:749a:d01:9d:51d9:9389:4dc3) Quit (Ping timeout: 480 seconds)
[18:10] * rdias (~rdias@2001:8a0:749a:d01:84a9:ad7:2a25:566d) has joined #ceph
[18:10] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) Quit (Quit: Ex-Chat)
[18:10] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) has joined #ceph
[18:10] * dgurtner (~dgurtner@82.199.64.68) Quit (Ping timeout: 480 seconds)
[18:12] <wwdillingham> Thanks joshd1 so a pool can contain multiple namespaces?
[18:13] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[18:13] * rakeshgm (~rakesh@121.244.87.124) has joined #ceph
[18:14] * wyang (~wyang@59.45.74.43) Quit (Quit: This computer has gone to sleep)
[18:15] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[18:16] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[18:16] <joshd1> wwdillingham: yes, a namespace is effectively like a prefix for object names
[18:17] * kefu|afk (~kefu@183.193.163.144) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[18:18] <wwdillingham> I see, so I can name objects prefix-a/object1 and prefix-b/object and then give clientA write access to namespace=prefix-a but only read access to prefix-b, something like that?
[18:19] <wwdillingham> ahh, rados -N, okay... I'm getting this, kinda cool
[18:20] * linjan__ (~linjan@86.62.112.22) Quit (Ping timeout: 480 seconds)
[18:20] * skrblr (~Corneliou@tor-exit-node--proxy.scalaire.com) has joined #ceph
[18:20] <joshd1> yeah, exactly
[18:20] * rdias (~rdias@2001:8a0:749a:d01:84a9:ad7:2a25:566d) Quit (Ping timeout: 480 seconds)
[18:21] <joshd1> the reason it's not just object_prefix is so things like cephfs with existing schemes for object names can use it
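As a hedged illustration of what that looks like in practice (the client name, pool name and namespaces below are invented):

    # let client.appA write in one namespace of the "images" pool and only read another
    ceph auth caps client.appA mon 'allow r' \
        osd 'allow rw pool=images namespace=prefix-a, allow r pool=images namespace=prefix-b'
    # the rados CLI selects a namespace with -N
    rados -p images -N prefix-a put object1 ./localfile
    rados -p images -N prefix-b ls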
[18:21] * rdias (~rdias@2001:8a0:749a:d01:84a9:ad7:2a25:566d) has joined #ceph
[18:22] * kefu (~kefu@183.193.163.144) has joined #ceph
[18:24] <farblue> I'm considering setting up a Ceph system and I know there are lots of ways you can improve performance, such as putting the journal on SSD etc. How easy is it to apply these performance improvements once the system is up and running? Can I start simple? Or is it best to go straight in with some of the obvious performance enhancements?
[18:27] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) Quit (Quit: Leaving.)
[18:27] * goretoxo (~psilva@62.82.24.134.static.user.ono.com) Quit (Remote host closed the connection)
[18:27] * Bartek (~Bartek@78.10.236.140) Quit (Ping timeout: 480 seconds)
[18:28] <Mosibi> farblue: it is very easy to remove an OSD from your cluster, let it rebalance and change that OSD to an OSD with the journal on SSD, or something like that.
[18:28] * bdeetz (~bdeetz@72.194.132.130) Quit (Quit: Leaving.)
[18:28] * bdeetz (~bdeetz@72.194.132.130) has joined #ceph
[18:29] <Mosibi> farblue: of course that extra space to rebalance must be present
[18:29] <farblue> sure
[18:30] <Mosibi> farblue: but you have to ask yourself if you really want to do it that way; the task is easy, but it costs time....
[18:30] <farblue> is it as simple, for instance, as bringing down an OSD, copying the journal data over to an SSD and mounting the SSD on the journal subfolder? or is there metadata, xattr etc. that would break if I did that?
[18:30] <farblue> and the same for the monitors and their data
[18:31] <Mosibi> farblue: easier.... stop the OSD (follow the procedure), and bring it back online with the journal on an SSD
[18:31] <m0zes> the journal is best used as a raw partition on a block device.
[18:31] <m0zes> stop the osd, flush the journal, create the new one, ceph-osd -i ${osdnum} --mkjournal; start the osd.
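Spelled out, that procedure looks roughly like the following; the OSD id, SSD partition and the symlink path (which assumes the default /var/lib/ceph/osd layout) are examples, and exact flags and service names vary by release:

    service ceph stop osd.3                  # or: systemctl stop ceph-osd@3
    ceph-osd -i 3 --flush-journal            # flush the old journal into the object store
    ln -sf /dev/sdg1 /var/lib/ceph/osd/ceph-3/journal   # point at the raw SSD partition
    ceph-osd -i 3 --mkjournal                # initialise the new journal
    service ceph start osd.3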
[18:32] <farblue> m0zes: if you had to choose between a journal on an SSD but where the SSD had a md0 mirror raid setup with lvm and ext4, or the same hdd partition used for the OSD data store, which would you pick?
[18:32] <farblue> oh, and where the SSD mirror/lvm/ext4 partition was the OS boot partition
[18:33] <m0zes> raid with ssds is silly. the ssds often fail at the same time, leaving you with nothing.
[18:33] <farblue> m0zes: maybe. But SSDs in RAID *often* fail together, while a non-raid SSD will *always* lose data when it fails
[18:34] <Mosibi> and RAID'ing boot disks is also silly; your config management (puppet/ansible/whatever) should reinstall and fix that
[18:34] <m0zes> sure, but with ceph, you've got multiple copies of the data.
[18:34] <farblue> I wasn't aware you could boot linux on bare metal from a Ceph filesystem
[18:35] <farblue> and you also assume my server cluster is worth the effort of puppet/ansible :)
[18:35] <Mosibi> farblue: your time is :)
[18:36] <farblue> actually, going off the point, I'm moving towards options like CoreOS, Project Atomic, Ubuntu Snappy etc. where the OS is indeed read-only.
[18:36] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) Quit (Quit: Leaving)
[18:37] <farblue> Mosibi: at the moment, the time taken to set up and maintain an ansible configuration will greatly outweigh the effort of doing things by hand, because the servers are in a state of transition
[18:37] <farblue> and once I get where I want to be, there will be no need for puppet or ansible, if I do things right :)
[18:37] <Mosibi> farblue: okay :)
[18:37] <farblue> at the moment I have 5 servers and all of them have a different configuration :(
[18:37] <Mosibi> farblue: but ceph is flexible enough to change when you need it
[18:38] <farblue> ok, cool :)
[18:38] * Mosibi afk
[18:38] * rakeshgm (~rakesh@121.244.87.124) Quit (Quit: Leaving)
[18:38] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[18:38] <farblue> and what about the journal - is it better sharing I/O with the OS on a mirrored LVM/ext4 SSD, or better left on the OSD data drive? :)
[18:41] <m0zes> depends on the quality of the ssd. my journals are on ssds, separate from the os disks and osds.
[18:41] <farblue> well, yeah, if I had that option it would be clear
[18:41] <farblue> I'll not bother for the moment. Start simple and all that :)
[18:42] * rotbeard (~redbeard@aftr-95-222-30-121.unity-media.net) has joined #ceph
[18:42] <farblue> thanks for everyone's help through the day - getting a much better understanding of Ceph now :)
[18:42] * farblue (~farblue@wor.namesco.net) Quit (Quit: farblue)
[18:45] * Miouge (~Miouge@h-72-233.a163.priv.bahnhof.se) Quit (Quit: Miouge)
[18:47] * DanFoster (~Daniel@2a00:1ee0:3:1337:7962:6f0c:73a:30c1) Quit (Quit: Leaving)
[18:50] * skrblr (~Corneliou@6AGAAAIAY.tor-irc.dnsbl.oftc.net) Quit ()
[18:53] * csoukup (~csoukup@159.140.254.102) Quit (Ping timeout: 480 seconds)
[18:55] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:59] * bjornar__ (~bjornar@ti0099a430-1561.bb.online.no) has joined #ceph
[18:59] * bjornar_ (~bjornar@ti0099a430-1561.bb.online.no) Quit (Read error: Connection reset by peer)
[19:01] * BrianA (~BrianA@fw-rw.shutterfly.com) has joined #ceph
[19:02] * vicente (~vicente@1-163-219-137.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[19:08] * angdraug (~angdraug@64.124.158.100) has joined #ceph
[19:08] * jclm (~jclm@ip68-96-198-45.lv.lv.cox.net) has joined #ceph
[19:09] * kefu (~kefu@183.193.163.144) Quit (Ping timeout: 480 seconds)
[19:10] * kefu (~kefu@114.92.120.83) has joined #ceph
[19:11] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) Quit (Quit: Leaving.)
[19:16] * swami1 (~swami@27.7.172.84) Quit (Quit: Leaving.)
[19:17] * BrianA1 (~BrianA@fw-rw.shutterfly.com) has joined #ceph
[19:18] * Bartek (~Bartek@78.10.236.140) has joined #ceph
[19:19] * Kupo1 (~tyler.wil@23.111.254.159) has joined #ceph
[19:19] <wwdillingham> so I'm noticing the mount -t ceph command for mounting cephfs doesn't require you to specify the filesystem name anywhere; presumably this works because only one filesystem is allowed per cluster. I presume this will change for jewel?
[19:20] * lobstar (~Hidendra@perry.fellwock.tor-exit.calyxinstitute.org) has joined #ceph
[19:20] * LDA (~lda@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[19:21] <gregsfortytwo1> the kernel is getting support added for multiple FSes but it's not in yet
[19:21] <gregsfortytwo1> and the final Jewel release is going to lock out enabling multiple FSes behind some big scary config changes and flags, it's not necessarily ready for users yet
[19:23] <wwdillingham> so multiple fs's will be allowed but it will not be "encouraged", let's say?
[19:23] * BrianA (~BrianA@fw-rw.shutterfly.com) Quit (Ping timeout: 480 seconds)
[19:24] * jclm1 (~jclm@ip68-96-198-45.lv.lv.cox.net) has joined #ceph
[19:26] * kefu (~kefu@114.92.120.83) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[19:29] * jclm (~jclm@ip68-96-198-45.lv.lv.cox.net) Quit (Ping timeout: 480 seconds)
[19:30] <gregsfortytwo1> yeah
[19:31] <gregsfortytwo1> you'll have to set some flags which irrevocably mark your monitor maps saying you've enabled these features ;)
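For reference, the two common ways of mounting CephFS in this era, neither of which takes a filesystem name (monitor address, credentials and mount point below are placeholders):

    # kernel client
    mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
    # FUSE client
    ceph-fuse -m 192.168.0.1:6789 /mnt/cephfs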
[19:32] * Miouge (~Miouge@h-72-233.a163.priv.bahnhof.se) has joined #ceph
[19:33] * jclm1 (~jclm@ip68-96-198-45.lv.lv.cox.net) Quit (Ping timeout: 480 seconds)
[19:35] * Geph (~Geoffrey@41.77.153.99) Quit (Ping timeout: 480 seconds)
[19:35] * jclm (~jclm@ip68-96-198-45.lv.lv.cox.net) has joined #ceph
[19:35] * derjohn_mob (~aj@88.128.81.196) has joined #ceph
[19:49] * kawa2014 (~kawa@94.167.0.100) Quit (Quit: Leaving)
[19:49] * Miouge (~Miouge@h-72-233.a163.priv.bahnhof.se) Quit (Quit: Miouge)
[19:50] * lobstar (~Hidendra@4MJAAD0YE.tor-irc.dnsbl.oftc.net) Quit ()
[19:53] * Miouge (~Miouge@h-72-233.a163.priv.bahnhof.se) has joined #ceph
[19:54] * wwdillingham (~LobsterRo@65.112.8.197) Quit (Quit: wwdillingham)
[20:00] * hroussea (~hroussea@000200d7.user.oftc.net) has joined #ceph
[20:03] * rotbeard (~redbeard@aftr-95-222-30-121.unity-media.net) Quit (Ping timeout: 480 seconds)
[20:04] * bdeetz (~bdeetz@72.194.132.130) Quit (Quit: Leaving.)
[20:04] * bdeetz (~bdeetz@72.194.132.130) has joined #ceph
[20:04] * bdeetz (~bdeetz@72.194.132.130) Quit ()
[20:05] * bdeetz (~bdeetz@72.194.132.130) has joined #ceph
[20:05] * vicente (~vicente@1-163-219-137.dynamic.hinet.net) has joined #ceph
[20:09] * overclk (~quassel@117.202.102.82) Quit (Remote host closed the connection)
[20:13] * vicente (~vicente@1-163-219-137.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[20:15] <aarontc> I have two OSDs that crash with "hit suicide timeout" regularly. I captured some logging that seems relevant. Can anyone suggest debugging steps? http://pastebin.com/BKqVz5Sa
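A common first step for chasing "hit suicide timeout" aborts (a sketch, not a diagnosis; osd.12 is an example id) is to raise the debug levels on the affected OSD and look at its slowest recent operations before the next crash:

    # logs grow quickly at these levels; revert afterwards
    ceph tell osd.12 injectargs '--debug-osd 20 --debug-filestore 20 --debug-ms 1'
    # run on the host where osd.12 lives (admin socket)
    ceph daemon osd.12 dump_historic_ops | less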
[20:16] * rotbeard (~redbeard@aftr-95-222-30-121.unity-media.net) has joined #ceph
[20:20] * cmrn (~Kwen@46.183.218.199) has joined #ceph
[20:24] * Miouge (~Miouge@h-72-233.a163.priv.bahnhof.se) Quit (Quit: Miouge)
[20:24] * Miouge (~Miouge@h-72-233.a163.priv.bahnhof.se) has joined #ceph
[20:27] * dkrdkr1 (~oftc-webi@72.163.220.16) has joined #ceph
[20:28] <dkrdkr1> Hello.. I have a question regarding the ceph/ceph-ansible github repo.. Does anyone have any experience with it?
[20:29] * gopher_49 (~gopher_49@host2.drexchem.com) Quit (Quit: Leaving)
[20:29] * derjohn_mob (~aj@88.128.81.196) Quit (Ping timeout: 480 seconds)
[20:30] * wwdillingham (~LobsterRo@140.247.242.44) has joined #ceph
[20:32] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) Quit (Quit: Leaving.)
[20:36] * jclm1 (~jclm@ip68-96-198-45.lv.lv.cox.net) has joined #ceph
[20:38] * angdraug (~angdraug@64.124.158.100) Quit (Ping timeout: 480 seconds)
[20:42] * jclm (~jclm@ip68-96-198-45.lv.lv.cox.net) Quit (Ping timeout: 480 seconds)
[20:44] * jclm (~jclm@ip68-96-198-45.lv.lv.cox.net) has joined #ceph
[20:44] * jclm1 (~jclm@ip68-96-198-45.lv.lv.cox.net) Quit (Ping timeout: 480 seconds)
[20:50] * cmrn (~Kwen@6AGAAAIHX.tor-irc.dnsbl.oftc.net) Quit ()
[20:51] * rraja (~rraja@121.244.87.117) Quit (Quit: Leaving)
[20:52] * rdias (~rdias@2001:8a0:749a:d01:84a9:ad7:2a25:566d) Quit (Ping timeout: 480 seconds)
[20:52] * bniver (~bniver@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[20:53] * rdias (~rdias@2001:8a0:749a:d01:84a9:ad7:2a25:566d) has joined #ceph
[20:56] * mykola (~Mikolaj@91.225.200.223) Quit (Remote host closed the connection)
[20:56] * angdraug (~angdraug@64.124.158.100) has joined #ceph
[20:57] <BlaXpirit> dkrdkr1, in IRC it is best to just write your question
[20:58] * masteroman (~ivan@93-139-203-237.adsl.net.t-com.hr) Quit (Quit: WeeChat 1.4)
[20:58] * gopher_49 (~gopher_49@host2.drexchem.com) has joined #ceph
[21:01] <dkrdkr1> from github.com/ceph/ceph-ansible, I see the ceph-mon and ceph-osd roles.. When I specify an inventory file with 3 mons and 3 osds, things work great.. But I have a requirement where the 3 mons need to be configured incrementally one after another.. I don't see that happening correctly via the present workflows..
[21:01] * rdias (~rdias@2001:8a0:749a:d01:84a9:ad7:2a25:566d) Quit (Ping timeout: 480 seconds)
[21:02] * gopher_49 (~gopher_49@host2.drexchem.com) Quit ()
[21:02] <dkrdkr1> each mon comes up and thinks it is the only mon node there is.
[21:02] <dkrdkr1> the cluster ID etc. is the same across them.
[21:04] <dkrdkr1> Has anyone tried such a thing with the ceph-ansible repo?
[21:06] <wwdillingham> what does your mon_initial_members entry in ceph.conf list?
[21:07] <dkrdkr1> the ceph/ceph-ansible repo removed specifying mon_initial_members in ceph.conf a long time ago
[21:08] <dkrdkr1> this PR exactly - https://github.com/ceph/ceph-ansible/pull/46
[21:08] <wwdillingham> I've never used ceph-ansible, but from what I understand specifying initial members is important for initial quorum formation, to prevent split brain
[21:09] <dkrdkr1> that PR makes me think that these ansible work-flows were written with the assumption that the inventory will always specify all the mon nodes
[21:10] * rdias (~rdias@2001:8a0:749a:d01:84a9:ad7:2a25:566d) has joined #ceph
[21:10] <dkrdkr1> and even when adding a new node, all the old nodes should still be present in the inventory.
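For context, the kind of ceph.conf stanza being discussed looks roughly like this (hostnames and addresses are invented, and whether it is written by hand or by an ansible template is a separate question):

    cat >> /etc/ceph/ceph.conf <<'EOF'
    [global]
    mon_initial_members = mon1, mon2, mon3
    mon_host = 10.0.0.1, 10.0.0.2, 10.0.0.3
    EOF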
[21:11] * yguang11 (~yguang11@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) has joined #ceph
[21:11] * jclm (~jclm@ip68-96-198-45.lv.lv.cox.net) Quit (Quit: Leaving.)
[21:17] <aarontc> has anyone seen "leveldb: Current memtable full; waiting..." before from OSDs?
[21:20] * rraja (~rraja@121.244.87.117) has joined #ceph
[21:24] * rakeshgm (~rakesh@106.51.225.4) has joined #ceph
[21:25] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[21:25] * rotbeard (~redbeard@aftr-95-222-30-121.unity-media.net) Quit (Quit: Leaving)
[21:27] * derjohn_mob (~aj@x4db29d58.dyn.telefonica.de) has joined #ceph
[21:28] <dkrdkr1> @wwdillingham, thanks. I will maybe ask the question again in some time.. hoping to find someone who has experience with ceph/ceph-ansible
[21:30] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[21:35] <dkrdkr1> meanwhile, I started going through the documentation on adding a monitor manually.. which is precisely my use-case.. It is well documented at http://docs.ceph.com/docs/master/rados/operations/add-or-rm-mons/ Can anyone tell me what step #3 does: `ceph auth get mon. -o {tmp}/{key-filename}`? What I am failing to grasp is where it will fetch this info from. I keep getting a fault message at this stage while trying it manually.
[21:35] <dkrdkr1> Could it be because of the missing "mon_initial_members" on the other mon node?
[21:37] <dkrdkr1> Maybe my questions are too basic :-( In that case, can you guys point me to some documentation? I have gone through the relevant parts of the ceph documentation and I am still confused
[21:40] <wwdillingham> dkrdkr1: I always follow this method for adding a new monitor to an existing quorum http://docs.ceph.com/docs/hammer/dev/mon-bootstrap/#initially-peerless-expansion
[21:41] <wwdillingham> the "initially peerless expansion"
[21:42] * rraja (~rraja@121.244.87.117) Quit (Quit: Leaving)
[21:46] <dkrdkr1> wwdillingham: that helps. thanks. going through the link
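The manual procedure being discussed boils down to something like the following (mon id "c", the IP and the temp paths are placeholders). A "fault" at the ceph auth get step usually just means the command cannot reach or authenticate to any existing monitor, since it has to ask the running cluster for the mon. key:

    mkdir -p /var/lib/ceph/mon/ceph-c            # data dir for the new monitor
    ceph auth get mon. -o /tmp/mon.keyring       # fetch the mon. key from an existing mon
    ceph mon getmap -o /tmp/monmap               # fetch the current monmap
    ceph-mon -i c --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
    ceph-mon -i c --public-addr 10.0.0.3:6789    # start the new monitor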
[21:50] * offender (~elt@06SAAA041.tor-irc.dnsbl.oftc.net) has joined #ceph
[21:50] * Discovery (~Discovery@178.239.49.70) has joined #ceph
[21:55] * Long_yanG (~long@15255.s.time4vps.eu) has joined #ceph
[21:58] * rendar (~I@host27-192-dynamic.181-80-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[22:00] * rendar (~I@host27-192-dynamic.181-80-r.retail.telecomitalia.it) has joined #ceph
[22:02] * LongyanG (~long@15255.s.time4vps.eu) Quit (Ping timeout: 480 seconds)
[22:08] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) has joined #ceph
[22:09] * wwdillingham_ (~LobsterRo@65.112.8.197) has joined #ceph
[22:11] * wwdillingham_ (~LobsterRo@65.112.8.197) Quit ()
[22:16] * wwdillingham (~LobsterRo@140.247.242.44) Quit (Ping timeout: 480 seconds)
[22:20] * offender (~elt@06SAAA041.tor-irc.dnsbl.oftc.net) Quit ()
[22:20] * jclm (~jclm@ip68-96-198-45.lv.lv.cox.net) has joined #ceph
[22:20] * Vale1 (~Zyn@6AGAAAIOF.tor-irc.dnsbl.oftc.net) has joined #ceph
[22:20] * wwdillingham (~LobsterRo@65.112.8.205) has joined #ceph
[22:21] * Nicola-1980 (~Nicola-19@x4e37ef42.dyn.telefonica.de) has joined #ceph
[22:25] * mattbenjamin1 (~mbenjamin@aa2.linuxbox.com) Quit (Ping timeout: 480 seconds)
[22:26] * Bartek (~Bartek@78.10.236.140) Quit (Ping timeout: 480 seconds)
[22:28] * georgem1 (~Adium@206.108.127.16) Quit (Ping timeout: 480 seconds)
[22:29] * Bartek (~Bartek@78.10.236.140) has joined #ceph
[22:38] * Skaag (~lunix@65.200.54.234) has joined #ceph
[22:40] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) has joined #ceph
[22:46] * LDA (~lda@host217-114-156-249.pppoe.mark-itt.net) Quit (Quit: LDA)
[22:46] * rwheeler (~rwheeler@pool-173-48-195-215.bstnma.fios.verizon.net) has joined #ceph
[22:47] * dgurtner (~dgurtner@217.149.140.193) has joined #ceph
[22:49] * georgem (~Adium@24.114.53.123) has joined #ceph
[22:49] * georgem (~Adium@24.114.53.123) Quit (Read error: Connection reset by peer)
[22:49] * georgem (~Adium@206.108.127.16) has joined #ceph
[22:50] * Vale1 (~Zyn@6AGAAAIOF.tor-irc.dnsbl.oftc.net) Quit ()
[22:50] * Kaervan (~skney@tor.thd.ninja) has joined #ceph
[22:52] * Miouge (~Miouge@h-72-233.a163.priv.bahnhof.se) Quit (Quit: Miouge)
[22:56] * georgem1 (~Adium@24.114.73.204) has joined #ceph
[22:58] * wwdillingham (~LobsterRo@65.112.8.205) Quit (Quit: wwdillingham)
[22:59] * georgem (~Adium@206.108.127.16) Quit (Ping timeout: 480 seconds)
[23:01] * Nicola-1980 (~Nicola-19@x4e37ef42.dyn.telefonica.de) Quit (Quit: Leaving...)
[23:02] * evelu (~erwan@62.147.161.106) Quit (Ping timeout: 480 seconds)
[23:05] * georgem1 (~Adium@24.114.73.204) Quit (Quit: Leaving.)
[23:05] * georgem (~Adium@206.108.127.16) has joined #ceph
[23:18] * linjan__ (~linjan@176.193.197.8) has joined #ceph
[23:18] * zaitcev (~zaitcev@c-50-130-189-82.hsd1.nm.comcast.net) Quit (Quit: Bye)
[23:20] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[23:20] * Kaervan (~skney@4MJAAD06A.tor-irc.dnsbl.oftc.net) Quit ()
[23:20] * Qiasfah (~kalleeen@6AGAAAIRD.tor-irc.dnsbl.oftc.net) has joined #ceph
[23:20] * jclm (~jclm@ip68-96-198-45.lv.lv.cox.net) Quit (Quit: Leaving.)
[23:27] * espeer (~quassel@phobos.isoho.st) Quit (Remote host closed the connection)
[23:30] * espeer (~quassel@phobos.isoho.st) has joined #ceph
[23:35] * Skaag (~lunix@65.200.54.234) Quit (Quit: Leaving.)
[23:35] * ibravo (~ibravo@72.83.69.64) has joined #ceph
[23:36] * ibravo (~ibravo@72.83.69.64) Quit ()
[23:37] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:40] * georgem (~Adium@206.108.127.16) Quit (Read error: Connection reset by peer)
[23:45] * Skaag (~lunix@65.200.54.234) has joined #ceph
[23:47] * Skaag (~lunix@65.200.54.234) Quit ()
[23:50] * Qiasfah (~kalleeen@6AGAAAIRD.tor-irc.dnsbl.oftc.net) Quit ()
[23:50] * Kizzi (~Wizeon@06SAAA09K.tor-irc.dnsbl.oftc.net) has joined #ceph
[23:50] * Skaag (~lunix@65.200.54.234) has joined #ceph
[23:54] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) Quit (Remote host closed the connection)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.