#ceph IRC Log

IRC Log for 2014-03-14

Timestamps are in GMT/BST.

[0:02] * sarob (~sarob@mobile-166-137-185-196.mycingular.net) Quit (Ping timeout: 480 seconds)
[0:05] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[0:08] * thomnico (~thomnico@2a01:e35:8b41:120:143e:8474:afc2:c69b) Quit (Quit: Ex-Chat)
[0:12] * xmltok (~xmltok@216.103.134.250) Quit (Quit: Leaving...)
[0:16] * ChrisNBlum (~Adium@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[0:28] * philips (~philips@ec2-54-196-103-51.compute-1.amazonaws.com) has joined #ceph
[0:30] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[0:30] * Cnidus (~cnidus@2601:9:7b80:8c7:2527:ae00:d3d3:f4b1) has joined #ceph
[0:36] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[0:38] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:41] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) Quit (Quit: ZNC - http://znc.sourceforge.net)
[0:45] * Cnidus (~cnidus@2601:9:7b80:8c7:2527:ae00:d3d3:f4b1) Quit (Read error: Connection reset by peer)
[0:47] * Cnidus (~cnidus@2601:9:7b80:8c7:2527:ae00:d3d3:f4b1) has joined #ceph
[0:48] * xarses (~andreww@12.164.168.117) has joined #ceph
[0:52] * piezo (~piezo@108-88-37-13.lightspeed.iplsin.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[0:54] * ircolle (~Adium@2601:1:8380:2d9:24aa:2715:602:7d0) Quit (Quit: Leaving.)
[0:55] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[0:56] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[0:57] * sarob (~sarob@2601:9:7080:13a:741a:aa39:d2c3:e1b1) has joined #ceph
[0:57] * philips (~philips@ec2-54-196-103-51.compute-1.amazonaws.com) Quit (Quit: http://ifup.org)
[0:59] * philips (~philips@ec2-54-196-103-51.compute-1.amazonaws.com) has joined #ceph
[1:05] * BillK (~BillK-OFT@106-68-37-241.dyn.iinet.net.au) has joined #ceph
[1:05] * sarob (~sarob@2601:9:7080:13a:741a:aa39:d2c3:e1b1) Quit (Ping timeout: 480 seconds)
[1:14] * piezo (~piezo@108-88-37-13.lightspeed.iplsin.sbcglobal.net) has joined #ceph
[1:14] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:16] * xarses (~andreww@12.164.168.117) Quit (Read error: Operation timed out)
[1:17] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) has joined #ceph
[1:25] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) Quit (Quit: Leaving.)
[1:26] * zerick (~eocrospom@190.114.248.34) Quit (Read error: No route to host)
[1:29] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[1:31] * ChrisNBlum (~Adium@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Quit: Leaving.)
[1:32] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) has joined #ceph
[1:32] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has left #ceph
[1:33] * sleinen (~Adium@2001:620:1000:3:8149:81a1:cb82:9ed7) has joined #ceph
[1:41] * sleinen (~Adium@2001:620:1000:3:8149:81a1:cb82:9ed7) Quit (Ping timeout: 480 seconds)
[1:44] * zerick (~eocrospom@190.114.248.34) has joined #ceph
[1:56] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[1:56] * reed (~reed@50-0-92-79.dsl.dynamic.sonic.net) Quit (Quit: Ex-Chat)
[1:57] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[1:58] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[2:00] * zerick (~eocrospom@190.114.248.34) Quit (Remote host closed the connection)
[2:00] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[2:03] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[2:06] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[2:07] * rmoe_ (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:11] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:11] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[2:15] * yanzheng (~zhyan@134.134.139.76) has joined #ceph
[2:16] * piezo (~piezo@108-88-37-13.lightspeed.iplsin.sbcglobal.net) Quit (Read error: Operation timed out)
[2:16] * alram (~alram@cpe-76-167-50-51.socal.res.rr.com) Quit (Quit: leaving)
[2:18] * sjusthm (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[2:20] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) Quit (Quit: Leaving.)
[2:21] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) has joined #ceph
[2:22] * AfC (~andrew@215.114.154.202.sta.commander.net.au) has joined #ceph
[2:23] * glzhao (~glzhao@220.181.11.232) has joined #ceph
[2:24] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[2:26] * LeaChim (~LeaChim@host86-166-182-74.range86-166.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:26] * carter (~carter@2600:3c03::f03c:91ff:fe6e:6c01) Quit (Quit: ZNC - http://znc.in)
[2:26] * carter (~carter@2600:3c03::f03c:91ff:fe6e:6c01) has joined #ceph
[2:30] * bitblt (~don@128-107-239-233.cisco.com) has joined #ceph
[2:30] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[2:30] * bitblt (~don@128-107-239-233.cisco.com) Quit ()
[2:31] * AfC (~andrew@215.114.154.202.sta.commander.net.au) has left #ceph
[2:33] * Cnidus (~cnidus@2601:9:7b80:8c7:2527:ae00:d3d3:f4b1) Quit (Quit: Leaving.)
[2:33] * carter (~carter@2600:3c03::f03c:91ff:fe6e:6c01) Quit (Quit: ZNC - http://znc.in)
[2:34] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[2:37] * erkules (~erkules@port-92-193-25-15.dynamic.qsc.de) has joined #ceph
[2:41] * carter (~carter@2600:3c03::f03c:91ff:fe6e:6c01) has joined #ceph
[2:42] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[2:44] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[2:44] * erkules_ (~erkules@port-92-193-81-189.dynamic.qsc.de) Quit (Ping timeout: 480 seconds)
[2:51] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) Quit (Quit: Leaving.)
[2:51] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) has joined #ceph
[2:53] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[2:54] * carter (~carter@2600:3c03::f03c:91ff:fe6e:6c01) Quit (Quit: ZNC - http://znc.in)
[2:55] * carter (~carter@2600:3c03::f03c:91ff:fe6e:6c01) has joined #ceph
[2:56] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[3:00] * Cnidus (~cnidus@c-67-164-72-195.hsd1.ca.comcast.net) has joined #ceph
[3:00] * sarob (~sarob@2601:9:7080:13a:cc73:17f5:c34:d2ec) has joined #ceph
[3:04] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:05] * sleinen (~Adium@2001:620:1000:3:64a2:e7c7:8189:74a5) has joined #ceph
[3:08] * sarob (~sarob@2601:9:7080:13a:cc73:17f5:c34:d2ec) Quit (Ping timeout: 480 seconds)
[3:11] * Cnidus (~cnidus@c-67-164-72-195.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[3:11] * Cnidus (~cnidus@c-67-164-72-195.hsd1.ca.comcast.net) has joined #ceph
[3:13] * sleinen (~Adium@2001:620:1000:3:64a2:e7c7:8189:74a5) Quit (Ping timeout: 480 seconds)
[3:17] * haomaiwa_ (~haomaiwan@118.187.35.10) Quit (Ping timeout: 480 seconds)
[3:18] * dmick (~dmick@2607:f298:a:607:5c0b:1a49:a0c7:4abc) has left #ceph
[3:20] * dmick (~dmick@2607:f298:a:607:d55a:4245:c90e:d3e4) has joined #ceph
[3:20] * erkules_ (~erkules@port-92-193-70-78.dynamic.qsc.de) has joined #ceph
[3:27] * erkules (~erkules@port-92-193-25-15.dynamic.qsc.de) Quit (Ping timeout: 480 seconds)
[3:29] * Vacum_ (~vovo@88.130.198.96) Quit (Ping timeout: 480 seconds)
[3:36] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[3:39] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[3:44] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[3:44] * Vacum (~vovo@i59F79249.versanet.de) has joined #ceph
[3:47] * markbby (~Adium@168.94.245.1) has joined #ceph
[3:49] * dalgaaf (uid15138@id-15138.ealing.irccloud.com) Quit (Quit: Connection closed for inactivity)
[3:54] * Cube1 (~Cube@66-87-67-145.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[3:56] * Cube (~Cube@66-87-64-169.pools.spcsdns.net) has joined #ceph
[3:57] * AfC (~andrew@215.114.154.202.sta.commander.net.au) has joined #ceph
[3:59] * haomaiwang (~haomaiwan@117.79.232.172) has joined #ceph
[4:00] * Boltsky (~textual@office.deviantart.net) Quit (Ping timeout: 480 seconds)
[4:01] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[4:09] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:22] * bandrus (~Adium@75.5.250.197) Quit (Quit: Leaving.)
[4:28] * gregmark1 (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[4:30] * shang (~ShangWu@111-252-2-229.dynamic.hinet.net) has joined #ceph
[4:30] * shang (~ShangWu@111-252-2-229.dynamic.hinet.net) Quit ()
[4:35] * Boltsky (~textual@cpe-198-72-138-106.socal.res.rr.com) has joined #ceph
[4:38] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[4:43] * cerealkillr (~neil@c-24-5-71-242.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:46] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[4:51] * Cnidus (~cnidus@c-67-164-72-195.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[4:51] * scuttlemonkey_ (~scuttlemo@c-107-5-193-244.hsd1.mi.comcast.net) has joined #ceph
[4:53] * scuttlemonkey (~scuttlemo@c-107-5-193-244.hsd1.mi.comcast.net) Quit (Ping timeout: 480 seconds)
[5:04] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[5:04] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[5:05] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[5:06] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[5:12] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:15] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:18] * Cnidus (~cnidus@2601:9:7b80:8c7:4058:513:7e3b:ea52) has joined #ceph
[5:18] * Cnidus (~cnidus@2601:9:7b80:8c7:4058:513:7e3b:ea52) Quit ()
[5:19] * Vacum_ (~vovo@88.130.223.207) has joined #ceph
[5:26] * Vacum (~vovo@i59F79249.versanet.de) Quit (Ping timeout: 480 seconds)
[5:32] * yguang11 (~yguang11@2406:2000:ef96:e:a9de:fe4a:df44:7e7b) Quit ()
[5:35] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) has joined #ceph
[5:41] * markbby (~Adium@168.94.245.1) Quit (Remote host closed the connection)
[5:41] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[5:44] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[5:49] * Cnidus (~cnidus@2601:9:7b80:8c7:b87a:33ae:d1f7:fc3f) has joined #ceph
[5:51] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[5:57] * AfC (~andrew@215.114.154.202.sta.commander.net.au) Quit (Quit: Leaving.)
[6:00] * jaitd (~jait@213.144.144.135) Quit (Remote host closed the connection)
[6:01] * Cnidus (~cnidus@2601:9:7b80:8c7:b87a:33ae:d1f7:fc3f) Quit (Ping timeout: 480 seconds)
[6:04] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[6:05] * sarob (~sarob@2601:9:7080:13a:b993:caf7:9381:7898) has joined #ceph
[6:08] * jks (~jks@3e6b5724.rev.stofanet.dk) Quit (Read error: Operation timed out)
[6:09] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:10] * princeholla (~princehol@pD9F60689.dip0.t-ipconnect.de) has joined #ceph
[6:13] * sarob (~sarob@2601:9:7080:13a:b993:caf7:9381:7898) Quit (Ping timeout: 480 seconds)
[6:20] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) Quit (Quit: Leaving.)
[6:20] * shengjiemin (~shengjiem@221.237.158.191) has joined #ceph
[6:21] <shengjiemin> hello guys
[6:21] <shengjiemin> quick question -
[6:28] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[6:31] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[6:50] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) has joined #ceph
[6:55] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[7:02] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) Quit (Quit: Computer has gone to sleep.)
[7:03] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[7:06] * Cube (~Cube@66-87-64-169.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[7:08] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[7:16] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[7:30] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[7:32] * Cube (~Cube@66-87-64-169.pools.spcsdns.net) has joined #ceph
[7:34] * Cube (~Cube@66-87-64-169.pools.spcsdns.net) Quit ()
[7:51] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) Quit (Ping timeout: 480 seconds)
[8:09] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[8:10] * simulx (~simulx@vpn.expressionanalysis.com) Quit (Ping timeout: 480 seconds)
[8:10] * mnash (~chatzilla@vpn.expressionanalysis.com) Quit (Ping timeout: 480 seconds)
[8:10] * shengjiemin (~shengjiem@221.237.158.191) Quit (Ping timeout: 480 seconds)
[8:17] * thb (~me@2a02:2028:6b:a9d0:6267:20ff:fec9:4e40) has joined #ceph
[8:17] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:19] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[8:22] * Cube (~Cube@netblock-75-79-17-189.dslextreme.com) has joined #ceph
[8:27] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:30] * Cube1 (~Cube@66-87-130-66.pools.spcsdns.net) has joined #ceph
[8:32] * Cube1 (~Cube@66-87-130-66.pools.spcsdns.net) Quit ()
[8:33] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[8:37] * Cube (~Cube@netblock-75-79-17-189.dslextreme.com) Quit (Ping timeout: 480 seconds)
[8:38] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) has joined #ceph
[8:39] * mattt (~textual@94.236.7.190) has joined #ceph
[8:40] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) has joined #ceph
[8:41] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) has joined #ceph
[8:53] * Psi-Jack (~Psi-Jack@psi-jack.user.oftc.net) Quit (Ping timeout: 480 seconds)
[8:54] * Psi-Jack (~Psi-Jack@psi-jack.user.oftc.net) has joined #ceph
[8:57] * princeholla (~princehol@pD9F60689.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[9:00] * Sysadmin88 (~IceChat77@176.254.32.31) Quit (Quit: Not that there is anything wrong with that)
[9:01] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:04] * rendar (~s@95.234.176.202) has joined #ceph
[9:05] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[9:05] * ChanServ sets mode +v andreask
[9:07] * analbeard (~shw@141.0.32.124) has joined #ceph
[9:08] * thomnico (~thomnico@37.163.250.111) has joined #ceph
[9:12] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[9:13] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[9:21] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[9:22] * thomnico (~thomnico@37.163.250.111) Quit (Ping timeout: 480 seconds)
[9:45] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[9:47] * bitblt (~don@128-107-239-233.cisco.com) has joined #ceph
[9:48] * sth35 (~oftc-webi@193.252.138.241) has joined #ceph
[9:59] * yanzheng (~zhyan@134.134.139.76) Quit (Quit: Leaving)
[10:01] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) has joined #ceph
[10:03] * dalegaard (~dalegaard@vps.devrandom.dk) Quit (Remote host closed the connection)
[10:09] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[10:09] * allsystemsarego (~allsystem@188.26.167.156) has joined #ceph
[10:13] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[10:21] * LeaChim (~LeaChim@host86-166-182-74.range86-166.btcentralplus.com) has joined #ceph
[10:21] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[10:26] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[10:26] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) Quit (Quit: Leaving.)
[10:35] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[10:36] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[10:36] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit ()
[10:43] * garphy`aw is now known as garphy
[10:45] * simulx (~simulx@66-194-114-178.static.twtelecom.net) has joined #ceph
[10:52] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:54] <jtangwk> leseb: +1 on the radosgw role
[10:55] <jtangwk> i just started sharing the haproxy configs i have for our radosgw deployment
[10:55] <leseb> jtangwk: thanks :)
[10:55] <jtangwk> the haproxy role is lifted from my existing playbooks
[10:55] <jtangwk> but it shouldn't take much work to make it function for debian and el based systems
[10:56] <jtangwk> im going to see about replacing the dodgy roles i have with the ceph-ansible ones for dev/prod at our site
[10:56] <jtangwk> :)
[10:56] <leseb> jtangwk: great :) looks promising :)
[10:59] <leseb> jtangwk: btw what's the status of this? https://github.com/ceph/ceph-ansible/pull/8
[10:59] <jtangwk> there might be some tweaks needed in the radosgw role to get it to work properly if you want to use wildcard dns entries
[11:00] <jtangwk> leseb: it works for me, but it should really be tested and used by others before pulling it in
[11:00] <jtangwk> i kinda feel just having a facts module is a bit of a waste though, some plans should really be made to get it to generate keys and other cool things
[11:00] <jtangwk> brb
[11:00] <leseb> jtangwk: ok I'll try to have a look
[11:00] <jtangwk> got a 10am meeting!
[11:00] <leseb> jtangwk: ok :)
[11:10] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[11:13] * BillK (~BillK-OFT@106-68-37-241.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[11:14] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[11:15] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[11:20] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[11:22] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[11:38] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[11:47] <hybrid512> Hi everyone
[11:47] <hybrid512> I have a few questions concerning Ceph best practices
[11:49] * `10` (~10@69.169.91.14) Quit (Ping timeout: 480 seconds)
[11:49] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[11:49] <hybrid512> I see here and there that people are building POCs with dozens of OSDs but in the real world, when you have, let's say, 1 hard drive or partition to dedicate to an OSD per node, is it advisable to have 1 OSD per available drive or is it better to split the drive into sub-partitions and multiply the OSD count using these partitions ?
[11:50] <hybrid512> For now, I have 3 nodes with 1 hard drive each dedicated to an OSD, so I have a cluster with 3 OSDs ... good or bad ?
[11:50] <hybrid512> (I should mention I'm new to Ceph ...)
[11:51] <hybrid512> each OSD is 2TB and I also have 1 monitor per node so 3 monitors
[11:52] * tryggvil (~tryggvil@178.19.53.254) Quit (Ping timeout: 480 seconds)
[11:53] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[11:53] <singler> best practice is one OSD per drive
[11:54] <singler> there is no use for multiple OSDs per drive
[11:58] <hybrid512> That's what I thought too ...
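For reference, preparing one OSD per whole drive with ceph-disk might look like the sketch below; the device names are assumptions, and on most installs udev will auto-activate the OSD right after prepare:

    # one OSD per data drive, journal colocated on the same disk
    for dev in /dev/sdb /dev/sdc /dev/sdd; do
        ceph-disk prepare --fs-type xfs "$dev"
        ceph-disk activate "${dev}1"    # data partition created by prepare
    done
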
[12:00] <hybrid512> How can I debug monitor problems ? Yesterday, one of my mons stopped for no apparent reason and I don't know why
[12:00] <hybrid512> I just restarted it and it got back normally
[12:01] <hybrid512> but at the same moment, one of my KVM virtual machines that is stored on Ceph got trashed, I don't know why
[12:01] <hybrid512> io errors on the virtual hard drive, had to restore a backup
[12:02] * `10` (~10@69.169.91.14) has joined #ceph
[12:02] <hybrid512> I have a dozen virtual machines on this storage, only this one got trashed
[12:02] <aeropuerto> question: do i need 1 journal per OSD or per NODE?
[12:02] <hybrid512> I suppose the mon failing and the object trash are related but I can't find any information about this
[12:04] <hybrid512> I never had any trouble with Sheepdog in weeks of testing, while I ran into many more difficulties with Ceph in just a few days, so I'm quite wary of it :/
[12:04] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[12:04] * JCL (~JCL@2601:9:5980:39b:4017:fe3f:f872:23d1) Quit (Quit: Leaving.)
[12:04] * The_Bishop (~bishop@g229161027.adsl.alicedsl.de) has joined #ceph
[12:05] <hybrid512> either I'm not playing well with Ceph and doing bad things (which might be the case since I'm new to it), or it needs much more monitoring or stuff to work properly ... anyway, it's a bit concerning to me
[12:05] <singler> aeropuerto: every OSD has its own journal. How many SSDs for journals per node depends on your use case (load, expectations, etc)
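To illustrate the one-journal-per-OSD layout singler describes: a single SSD is usually split into one journal partition per OSD on the node. A minimal sketch, assuming /dev/sda is the SSD and sda5-sda7 are pre-created journal partitions:

    ceph-disk prepare --fs-type xfs /dev/sdb /dev/sda5   # OSD data on sdb, its journal on sda5
    ceph-disk prepare --fs-type xfs /dev/sdc /dev/sda6
    ceph-disk prepare --fs-type xfs /dev/sdd /dev/sda7
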
[12:06] <singler> hybrid512: you could try increasing verbosity for mon logs, but I doubt that crashed mon could corrupt your data
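Raising monitor verbosity, as suggested, can be done persistently in ceph.conf or at runtime through the admin socket (a sketch; the levels and socket path are typical defaults, adjust to taste):

    # in ceph.conf, then restart the mon:
    [mon]
        debug mon = 10
        debug paxos = 10
    # or at runtime, via the monitor's admin socket:
    ceph --admin-daemon /var/run/ceph/ceph-mon.<id>.asok config set debug_mon 10
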
[12:07] * Koma (~Koma@0001c112.user.oftc.net) Quit (Quit: ups I did it again!)
[12:08] * Koma (~Koma@0001c112.user.oftc.net) has joined #ceph
[12:10] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) has joined #ceph
[12:10] <hybrid512> singler: well, with 3 mons, I would say that one failing node shouldn't crash anything, but I really don't know why this object was corrupted .... and only this one, which was not even busy; it was an idle VM which was up but not doing anything, with no activity
[12:11] <hybrid512> I have much bigger VMs that are much more active and those are still operating normally
[12:12] <singler> maybe VM got corrupted not because of ceph? (Or maybe you used same image for multiple VMs?)
[12:12] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[12:13] <singler> I had some corruption (not using ceph) when I tried booting VM with multiple disks which use same VG name (e.g. attaching image of another VM for recovery)
[12:14] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[12:15] * glambert (~glambert@37.157.50.80) has joined #ceph
[12:18] * tryggvil (~tryggvil@178.19.53.254) Quit (Ping timeout: 480 seconds)
[12:23] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[12:24] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[12:25] * lianghaoshen (~slhhust@175.8.105.93) has joined #ceph
[12:29] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[12:33] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[12:36] * awaay_ (~ircap@90.174.1.169) has joined #ceph
[12:37] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[12:38] * lianghaoshen (~slhhust@175.8.105.93) Quit (Quit: ??????)
[12:51] * glzhao (~glzhao@220.181.11.232) Quit (Quit: leaving)
[12:59] * i_m (~ivan.miro@gbibp9ph1--blueice4n1.emea.ibm.com) has joined #ceph
[13:00] * i_m (~ivan.miro@gbibp9ph1--blueice4n1.emea.ibm.com) Quit ()
[13:00] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) has joined #ceph
[13:00] * i_m (~ivan.miro@gbibp9ph1--blueice4n1.emea.ibm.com) has joined #ceph
[13:06] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) Quit (Read error: Operation timed out)
[13:10] * Kioob (~kioob@2a01:e34:ec0a:c0f0:21e:8cff:fe07:45b6) has joined #ceph
[13:10] <Kioob> Hi
[13:10] <Kioob> I have a problem after upgrading some OSDs from 0.67.5 to 0.67.7 : one of the OSDs can't recover.
[13:11] <Kioob> in logs of this OSD, I see a lot of " 1 heartbeat_map is_healthy 'OSD::op_tp thread 0x7f5c629c1700' had timed out after 15"
[13:12] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) Quit (Quit: Siva)
[13:13] <Kioob> So it slows down part of the cluster, then this OSD is ejected from the map ([WRN] : map e297454 wrongly marked me down), and the cluster is fast again.
[13:14] <Kioob> Then the OSD retries to recover... slows down the cluster, times out, etc etc
[13:15] <Kioob> For about 10 minutes now, this OSD has been in an UP/DOWN loop
[13:15] <Kioob> (which blocks requests)
[13:15] * stewiem2000 (~stewiem20@195.10.250.233) Quit (Quit: Leaving.)
[13:16] * stewiem2000 (~stewiem20@195.10.250.233) has joined #ceph
[13:17] <Kioob> what should I do ?
[13:18] * sarob (~sarob@2601:9:7080:13a:8443:b742:2006:6c77) has joined #ceph
[13:19] * fdmanana (~fdmanana@bl10-252-34.dsl.telepac.pt) has joined #ceph
[13:19] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[13:26] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) has joined #ceph
[13:26] * sarob (~sarob@2601:9:7080:13a:8443:b742:2006:6c77) Quit (Ping timeout: 480 seconds)
[13:31] <singler> Kioob: is that OSD reachable from all mons via public/private networks?
[13:32] <Kioob> Yes : here I have 12 OSDs on 2 hosts. I upgraded 24 OSDs, only one has problems
[13:32] <Kioob> (12 OSDs per host, and 2 hosts)
[13:33] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[13:35] <singler> is there any backend traffic?
[13:35] <Kioob> what do you call "backend traffic" ?
[13:36] <singler> cluster internal traffic (refilling, recovery, etc)
[13:37] <Kioob> When I start this OSD, yes. If I shut it down, no
[13:38] <singler> maybe it starts lagging due to traffic and does not manage to respond to heartbeats?
[13:39] <Kioob> I check that, thanks
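For a flapping OSD like this, one common damage-limitation approach (a sketch, not a guaranteed fix) is to stop the map churn while investigating and throttle recovery so heartbeats aren't starved:

    ceph osd set noout                  # don't re-balance while the OSD bounces
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
    # restart the affected OSD, watch its log, then:
    ceph osd unset noout
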
[13:39] <singler> can someone kick out awaay_ ? He sends spam or something like that
[13:41] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has left #ceph
[13:43] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[13:43] * mic (~oftc-webi@226.201-247-81.adsl-dyn.isp.belgacom.be) has joined #ceph
[13:43] * mic (~oftc-webi@226.201-247-81.adsl-dyn.isp.belgacom.be) Quit ()
[13:44] * mica (~oftc-webi@226.201-247-81.adsl-dyn.isp.belgacom.be) has joined #ceph
[13:44] <Kioob> it's strange : at start, after "crush map has features 262144, adjusting msgr requires for osds" the OSD seems to do nothing. No IO, no socket opening, no logs.
[13:44] <Kioob> after 3 minutes, I see : 2014-03-14 13:44:04.938984 7f2f25524700 0 -- 10.0.0.17:6814/5218 >> 10.0.0.18:0/544228930 pipe(0x1254b400 sd=64 :6814 s=0 pgs=0 cs=0 l=0 c=0x124c3760).accept peer addr is really 10.0.0.18:0/544228930 (socket is 10.0.0.18:3393/0)
[13:45] <Kioob> then about 15 seconds after : 2014-03-14 13:44:22.391711 7f2f26f92700 1 heartbeat_map is_healthy 'OSD::op_tp thread 0x7f2f1c94d700' had timed out after 15
[13:47] * awaay_ (~ircap@90.174.1.169) Quit (autokilled: Do not spam. Mail support@oftc.net with questions. (2014-03-14 12:47:02))
[13:47] <Kioob> now I see sockets opened yes
[13:47] <Kioob> but not much traffic (about 100 Mbps, on a 10 Gbps network)
[13:48] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[13:50] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) Quit (Quit: Siva)
[13:51] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[13:53] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) has joined #ceph
[13:58] * BillK (~BillK-OFT@124-148-208-225.dyn.iinet.net.au) has joined #ceph
[14:12] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[14:13] * mnash (~chatzilla@vpn.expressionanalysis.com) has joined #ceph
[14:13] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[14:18] * jeff-YF (~jeffyf@67.23.117.122) has joined #ceph
[14:18] * piezo (~piezo@108-88-37-13.lightspeed.iplsin.sbcglobal.net) has joined #ceph
[14:19] * sarob (~sarob@2601:9:7080:13a:2d1a:7094:ad21:8095) has joined #ceph
[14:24] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[14:27] * sarob (~sarob@2601:9:7080:13a:2d1a:7094:ad21:8095) Quit (Ping timeout: 480 seconds)
[14:28] * markbby (~Adium@168.94.245.4) has joined #ceph
[14:29] * markbby (~Adium@168.94.245.4) Quit ()
[14:31] * markbby (~Adium@168.94.245.4) has joined #ceph
[14:40] * mnash_ (~chatzilla@66-194-114-178.static.twtelecom.net) has joined #ceph
[14:41] * simulx2 (~simulx@vpn.expressionanalysis.com) has joined #ceph
[14:42] <mica> I have a little problem activating my OSDs. I have an SSD that contains the OS and partitions for the journals (one partition for each OSD). The partitions are created using lvm. For testing I have 3 OSD disks.
[14:42] <mica> When I prepare my disks using ceph-disk prepare --cluster {cluster-name} --cluster-uuid {uuid} --fs-type xfs /dev/sdb /dev/pve/journal1, everything runs successfully
[14:42] <mica> But when I try to activate the osd with ceph-disk activate /dev/sdb1 I get the following error message:
[14:43] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[14:43] <mica> got latest monmap 2014-03-14 14:19:52.263589 7f4dcca11780 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
[14:43] <mica> 2014-03-14 14:19:52.263616 7f4dcca11780 -1 journal check: ondisk fsid 00000000-0000-0000-0000-000000000000 doesn't match expected be04225d-29fb-40c9-a37a-e9b531fbe559, invalid (someone else's?) journal
[14:44] <mica> 2014-03-14 14:19:52.263633 7f4dcca11780 -1 filestore(/var/lib/ceph/tmp/mnt.y1xvI9) mkjournal error creating journal on /var/lib/ceph/tmp/mnt.y1xvI9/journal: (22) Invalid argument 2014-03-14 14:19:52.263644 7f4dcca11780 -1 OSD::mkfs: FileStore::mkfs failed with error -22 2014-03-14 14:19:52.263689 7f4dcca11780 -1 ** ERROR: error creating empty object store in
[14:44] * yanzheng (~zhyan@134.134.139.74) has joined #ceph
[14:44] <mica> ERROR: error creating empty object store in /var/lib/ceph/tmp/mnt.y1xvI9: (22) Invalid argument ERROR:ceph-disk:Failed to activate
[14:44] <mica> Could someone tell me what I'm doing wrong? Or what I'm missing? Thanks
[14:44] * simulx (~simulx@66-194-114-178.static.twtelecom.net) Quit (Ping timeout: 480 seconds)
[14:46] * mnash (~chatzilla@vpn.expressionanalysis.com) Quit (Ping timeout: 480 seconds)
[14:46] * mnash_ is now known as mnash
[14:51] * sroy (~sroy@207.96.182.162) has joined #ceph
[14:52] * dalegaard (~dalegaard@vps.devrandom.dk) has joined #ceph
[14:53] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[14:56] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[14:56] * yanzheng (~zhyan@134.134.139.74) Quit (Remote host closed the connection)
[15:00] * BillK (~BillK-OFT@124-148-208-225.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[15:00] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) has joined #ceph
[15:00] <gregsfortytwo1> mica: I've got to run, but you should search the mailing list archives; that looks like an issue keeping the journal partitions matched up with the right OSD that I think people have discussed there
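The "ondisk fsid ... doesn't match expected" error above typically means the journal partition still holds a journal header from a previous OSD. One hedged way out, assuming /dev/pve/journal1 is really the journal you just prepared and contains nothing you need (destructive):

    dd if=/dev/zero of=/dev/pve/journal1 bs=1M count=10   # wipe the stale journal header
    ceph-disk activate /dev/sdb1                          # retry; mkjournal should now succeed
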
[15:00] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) Quit (Quit: Leaving.)
[15:02] * mattt_ (~textual@94.236.7.190) has joined #ceph
[15:02] <mica> ok I'll check if i can find something over there
[15:03] * mattt (~textual@94.236.7.190) Quit (Ping timeout: 480 seconds)
[15:03] * mattt_ is now known as mattt
[15:05] * mattt (~textual@94.236.7.190) Quit ()
[15:06] * sroy (~sroy@207.96.182.162) Quit (Read error: Operation timed out)
[15:07] * princeholla (~princehol@p5DE96CA6.dip0.t-ipconnect.de) has joined #ceph
[15:08] * mattt (~textual@94.236.7.190) has joined #ceph
[15:09] * linuxkidd (~linuxkidd@2001:420:2100:2258:39d3:de25:be2d:1e03) has joined #ceph
[15:11] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[15:14] * dmsimard (~Adium@69-165-206-93.cable.teksavvy.com) has joined #ceph
[15:16] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[15:17] <aeropuerto> question: health HEALTH_WARN clock skew detected on mon.CEPHNODE02, mon.CEPHNODE03
[15:17] <aeropuerto> clocks are synced
[15:19] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[15:19] <aeropuerto> mon.CEPHNODE02 addr 192.168.112.222:6789/0 clock skew 0.218785s > max 0.05s (latency 0.00930092s)
[15:19] <aeropuerto> mon.CEPHNODE03 addr 192.168.112.223:6789/0 clock skew 0.218509s > max 0.05s (latency 0.00909546s)
[15:19] <saturnine> aeropuerto: Try setting mon_clock_drift_allowed = .3 in ceph.conf
[15:19] <aeropuerto> where to put this line?
[15:20] <aeropuerto> [global] ?
[15:20] <saturnine> .05s (default) is really hard to accomplish without low latency internal ntp servers
[15:20] <saturnine> yes under [global]
[15:20] <saturnine> add that, restart your monitors, and it should show health_ok
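As a ceph.conf sketch of that suggestion (the value comes from the discussion above; fixing ntp is usually the better long-term answer):

    [global]
        mon clock drift allowed = .3
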
[15:22] * sroy (~sroy@207.96.182.162) has joined #ceph
[15:22] * sarob (~sarob@2601:9:7080:13a:18ab:f408:edf3:234f) has joined #ceph
[15:22] <aeropuerto> mon.CEPHNODE02 addr 192.168.112.222:6789/0 clock skew 0.0509591s > max 0.05s (latency 0.001787s)
[15:22] <aeropuerto> still an issue
[15:23] <aeropuerto> after several 'stop/start ceph-mon-all' health is ok
[15:23] <aeropuerto> thanks a lot saturnine
[15:24] <saturnine> aeropuerto: np
[15:24] <aeropuerto> mh
[15:24] <saturnine> I hope someone can answer one of my questions one day too. :)
[15:25] <aeropuerto> edited the ceph.conf on the admin node - then pushed the config files via 'ceph-deploy --overwrite-conf admin CEPHNODE02 CEPHNODE03'
[15:25] <aeropuerto> how come nothing changed?
[15:25] <aeropuerto> :D
[15:26] <aeropuerto> oh noes
[15:26] <aeropuerto> :x
[15:26] <aeropuerto> figured out myself
[15:27] <aeropuerto> sudo nano
[15:27] <aeropuerto> :D
[15:28] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) Quit (Remote host closed the connection)
[15:30] * sarob (~sarob@2601:9:7080:13a:18ab:f408:edf3:234f) Quit (Ping timeout: 480 seconds)
[15:33] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[15:35] * jcsp (~Adium@0001bf3a.user.oftc.net) has joined #ceph
[15:36] <Kioob> I have a lot of these in the mon logs (about 10 per second) : mon.faude@0(leader).paxos(paxos active c 39390652..39391391) is_readable now=2014-03-14 15:34:03.725717 lease_expire=2014-03-14 15:34:08.703315 has v0 lc 39391391
[15:36] <Kioob> is it "normal" ?
[15:43] * JCL (~JCL@2601:9:5980:39b:40e2:ed93:5546:ad4e) has joined #ceph
[15:44] * dmsimard (~Adium@69-165-206-93.cable.teksavvy.com) Quit (Quit: Leaving.)
[15:45] <Kioob> So, no idea on a problem upgrading from 0.67.5 to 0.67.7 ? One OSD (out of 24, over 2 hosts) can't recover.
[15:45] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Read error: No route to host)
[15:45] <Kioob> I don't see any network problem
[15:45] * princeholla (~princehol@p5DE96CA6.dip0.t-ipconnect.de) Quit (Quit: Verlassend)
[15:47] * bandrus (~Adium@75.5.250.197) has joined #ceph
[15:50] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[15:51] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:53] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[15:56] * glambert (~glambert@37.157.50.80) Quit (Quit: <?php exit(); ?>)
[15:58] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[16:02] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[16:06] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:07] * davidzlap1 (~Adium@cpe-23-242-31-175.socal.res.rr.com) has joined #ceph
[16:07] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[16:12] * dmsimard (~Adium@69-165-206-93.cable.teksavvy.com) has joined #ceph
[16:13] * dmsimard (~Adium@69-165-206-93.cable.teksavvy.com) Quit ()
[16:15] <Kioob> (ok for my logs, I found an answer from Joao http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-August/003902.html )
[16:15] * ShaunR (~ShaunR@staff.ndchost.com) has joined #ceph
[16:17] * stus (~keny@163.117.148.65) has joined #ceph
[16:17] <stus> hello
[16:18] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[16:20] <stus> I am running Emperor on Ubuntu server, and after expanding my cluster with more OSDs and increasing pg_num, the cluster stays at HEALTH_WARN: pool data pg_num 1024 > pgp_num 64 ..
[16:20] <stus> Also, I notice that ceph-mon consumes a lot of CPU and writes tons of logs every second to disc: mon.controller@0(leader).paxos(paxos active c 8284..8872) is_readable now=2014-03-14 16:16:48.087950 lease_expire=0.000000 has v0 lc 8872
[16:20] <stus> I can't tell what's going on
[16:21] <pmatulis_> stus: increasing PGs does take a toll on the cluster
[16:22] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[16:23] * stewiem2000 (~stewiem20@195.10.250.233) Quit (Quit: Leaving.)
[16:23] <Kioob> stus: for the "is_readable" logs, you can read http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-August/003902.html
[16:23] <stus> pmatulis_ I expected a lot of I/O, but now ceph -w is almost quiet, all the data seems to be active+clean, and ceph-mon is writing a couple of MB/s of logs...
[16:24] * yanzheng (~zhyan@jfdmzpr04-ext.jf.intel.com) has joined #ceph
[16:25] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[16:27] <stus> Kioob thanks, I see.. but is the log expected to grow that fast?
[16:28] <Kioob> don't know, I have 10 lines of this per second, on each monitor
[16:28] <stus> maybe it is still doing something with the crush map, but I did the pg_num change a couple hours ago and it has been growing fast since
[16:28] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[16:28] <pmatulis_> stus: have you ever changed the log level of any subsystem? how much data is in your cluster?
[16:30] <stus> I just checked, mine is logging about 9k/s
[16:30] <stus> I did not change the log level
[16:30] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:30] <stus> It is not a very big cluster, ceph reports about 10TB
[16:31] <stus> I expanded from 4TB to almost 10TB and adjusted the pg_num accordingly
[16:31] <stus> but that is the capacity, there is very few actual data
[16:32] <stus> 62725 MB used, 9634 GB / 9696 GB avail 7168 active+clean
[16:33] * lofejndif (~lsqavnbok@72.52.91.30) has joined #ceph
[16:33] * mica (~oftc-webi@226.201-247-81.adsl-dyn.isp.belgacom.be) Quit (Quit: Page closed)
[16:33] <stus> my ceph.conf is very simple http://pastebin.com/ixhcKbng
[16:37] * Siva (~sivat@117.192.40.148) has joined #ceph
[16:40] * lofejndif (~lsqavnbok@1RHAACRV3.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[16:42] * Siva_ (~sivat@vpnnat.eglbp.corp.yahoo.com) has joined #ceph
[16:43] * valeech (~valeech@ip72-205-7-86.dc.dc.cox.net) has joined #ceph
[16:44] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[16:46] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:47] * Siva (~sivat@117.192.40.148) Quit (Ping timeout: 480 seconds)
[16:47] * Siva_ is now known as Siva
[16:49] * funnel (~funnel@0001c7d4.user.oftc.net) Quit (Remote host closed the connection)
[16:50] * funnel (~funnel@0001c7d4.user.oftc.net) has joined #ceph
[16:50] * markbby (~Adium@168.94.245.4) Quit (Quit: Leaving.)
[16:51] * markbby (~Adium@168.94.245.4) has joined #ceph
[16:52] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:52] * Cnidus (~cnidus@216.129.126.126) has joined #ceph
[16:54] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) Quit (Remote host closed the connection)
[16:57] * rmoe (~quassel@12.164.168.117) has joined #ceph
[16:57] * alram (~alram@38.122.20.226) has joined #ceph
[16:57] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[17:01] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[17:02] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) has joined #ceph
[17:05] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[17:05] * scuttlemonkey_ is now known as scuttlemonkey
[17:11] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[17:12] * stewiem2000 (~stewiem20@195.10.250.233) has joined #ceph
[17:14] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[17:14] <stus> Turns out many of my nodes were not properly time synchronized
[17:14] <stus> I just fixed time skews and paxos log messages are back to normal :)
[17:14] * sth35 (~oftc-webi@193.252.138.241) Quit (Quit: Page closed)
[17:16] <stus> however, my cluster is still in health_warn and says pool blah pg_num 1024 > 64 for every pool I have
[17:19] * analbeard (~shw@141.0.32.124) Quit (Quit: Leaving.)
[17:20] <bandrus> stus: sounds like your pgp_num does not equal your pg_num
[17:21] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) has joined #ceph
[17:23] * sarob (~sarob@2601:9:7080:13a:a181:5ef6:a81a:126b) has joined #ceph
[17:27] * xarses (~andreww@12.164.168.117) has joined #ceph
[17:27] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) has joined #ceph
[17:29] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[17:29] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[17:29] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[17:30] <stus> bandrus that could very well be the case
[17:30] <stus> I didn't properly read the output, I was reading pg_num > pg_num :O
[17:31] * sarob (~sarob@2601:9:7080:13a:a181:5ef6:a81a:126b) Quit (Ping timeout: 481 seconds)
[17:31] <bandrus> great, so just increase your pgp_num to match and you'll be good to go, keeping in mind that it will kick off some data movement. You can gradually increase it if you're worried about production availability.
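A sketch of that fix, assuming the pool named data from the warning above (repeat per pool):

    ceph osd pool set data pgp_num 1024   # match pgp_num to pg_num; kicks off data movement
    ceph -w                               # watch the rebalancing
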
[17:32] <stus> bandrus thanks, indeed I now see all the I/O I was expecting, thanks :)
[17:32] * Cnidus (~cnidus@216.129.126.126) Quit (Quit: Leaving.)
[17:33] * Cnidus (~cnidus@216.129.126.126) has joined #ceph
[17:37] * cerealkillr (~neil@c-24-5-71-242.hsd1.ca.comcast.net) has joined #ceph
[17:39] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[17:44] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[17:44] * zerick (~eocrospom@190.114.248.34) has joined #ceph
[17:45] * garphy is now known as garphy`aw
[17:45] * garphy`aw is now known as garphy
[17:48] * oblu (~o@62.109.134.112) Quit (Quit: ~)
[17:48] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[17:50] * Cube (~Cube@12.248.40.138) has joined #ceph
[17:50] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[17:51] * stewiem2000 (~stewiem20@195.10.250.233) Quit (Ping timeout: 480 seconds)
[17:56] * i_m (~ivan.miro@gbibp9ph1--blueice4n1.emea.ibm.com) Quit (Quit: Leaving.)
[17:59] * oblu (~o@62.109.134.112) has joined #ceph
[18:02] * mattt (~textual@94.236.7.190) Quit (Quit: Computer has gone to sleep.)
[18:03] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[18:03] * yanzheng (~zhyan@jfdmzpr04-ext.jf.intel.com) Quit (Remote host closed the connection)
[18:05] * garphy is now known as garphy`aw
[18:06] * stewiem2000 (~stewiem20@195.10.250.233) has joined #ceph
[18:07] * nwat (~textual@eduroam-245-10.ucsc.edu) has joined #ceph
[18:10] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[18:11] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[18:11] * ZyTer (~ZyTer@ghostbusters.apinnet.fr) Quit (Ping timeout: 480 seconds)
[18:12] * stewiem2000 (~stewiem20@195.10.250.233) Quit (Quit: Leaving.)
[18:13] * sjustwork (~sam@2607:f298:a:607:45ad:a4fc:7bbd:392c) has joined #ceph
[18:15] * dmsimard (~Adium@69-165-206-93.cable.teksavvy.com) has joined #ceph
[18:16] * stewiem2000 (~stewiem20@195.10.250.233) has joined #ceph
[18:17] * Cnidus (~cnidus@216.129.126.126) Quit (Quit: Leaving.)
[18:21] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[18:24] * dpippenger (~riven@cpe-198-72-157-189.socal.res.rr.com) Quit (Quit: Leaving.)
[18:26] * Cnidus (~cnidus@216.129.126.126) has joined #ceph
[18:32] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[18:37] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[18:40] * Cnidus (~cnidus@216.129.126.126) has left #ceph
[18:41] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) has joined #ceph
[18:42] * wrale_ (~wrale@cpe-107-9-20-3.woh.res.rr.com) has joined #ceph
[18:44] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) Quit ()
[18:45] <ponyofdeath> hi, i have changed to a new private / public network setup and want to set up a new monitor in the public ip space. i am following this: http://ceph.com/docs/master/rados/operations/add-or-rm-mons/#adding-a-monitor-manual but not sure it's right, as when i did ceph-deploy to set up my first mon the directory in /var/lib/ceph/mon/ is my hostname, not ceph-mon.{mon-id}
[18:45] <ponyofdeath> any thoughts on which way i should follow
[18:46] <ponyofdeath> does not seem i have any [mon.XX] in my /etc/ceph/ceph.conf too
[18:46] <ponyofdeath> ceph mon dump gives me 0: 10.248.5.151:6789/0 mon.prod-ent-ceph01
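For reference, the manual procedure in the linked doc boils down to roughly the following (placeholders in angle brackets; with ceph-deploy the mon id is simply the hostname, which is why the directory is /var/lib/ceph/mon/<cluster>-<hostname>):

    mkdir /var/lib/ceph/mon/ceph-<mon-id>
    ceph auth get mon. -o /tmp/mon.keyring
    ceph mon getmap -o /tmp/monmap
    ceph-mon -i <mon-id> --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
    ceph mon add <mon-id> <public-ip>:6789
    ceph-mon -i <mon-id> --public-addr <public-ip>:6789
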
[18:50] * dmsimard (~Adium@69-165-206-93.cable.teksavvy.com) Quit (Quit: Leaving.)
[18:51] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[18:51] * ChanServ sets mode +v andreask
[18:53] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[18:55] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:56] * sarob (~sarob@mobile-166-137-185-196.mycingular.net) has joined #ceph
[18:57] * sarob_ (~sarob@50.242.75.3) has joined #ceph
[18:57] * sarob (~sarob@mobile-166-137-185-196.mycingular.net) Quit (Read error: Connection reset by peer)
[18:58] * mtanski (~mtanski@69.193.178.202) Quit (Ping timeout: 480 seconds)
[18:58] * sarob_ (~sarob@50.242.75.3) Quit (Read error: Connection reset by peer)
[19:03] * Sysadmin88 (~IceChat77@176.254.32.31) has joined #ceph
[19:10] * stus (~keny@163.117.148.65) Quit (Quit: Leaving)
[19:14] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[19:18] * analbeard (~shw@host86-155-197-65.range86-155.btcentralplus.com) has joined #ceph
[19:20] * Boltsky (~textual@cpe-198-72-138-106.socal.res.rr.com) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[19:20] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) has joined #ceph
[19:26] * BManojlovic (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[19:27] * tryggvil (~tryggvil@178.19.53.254) Quit (Quit: tryggvil)
[19:28] * sarob (~sarob@50.242.75.3) has joined #ceph
[19:33] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) has joined #ceph
[19:35] * lavi (~lavi@eth-seco21th2-46-193-64-32.wb.wifirst.net) has joined #ceph
[19:36] * analbeard (~shw@host86-155-197-65.range86-155.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[19:36] * Cnidus (~cnidus@216.129.126.126) has joined #ceph
[19:42] * joshuay04 (~joshuay04@rrcs-74-218-204-10.central.biz.rr.com) has joined #ceph
[19:43] * xmltok (~xmltok@216.103.134.250) has joined #ceph
[19:44] <joshuay04> Just wanted to share a "win" I found today. If anyone is having issues with kvm machines running really slow on ceph, install the newish Virtio 1-74 driver. From within the Windows VM I am now getting speeds that match the ceph benchmarks.
[19:44] * joao|lap (~JL@bl14-168-37.dsl.telepac.pt) has joined #ceph
[19:44] * ChanServ sets mode +o joao|lap
[19:44] * Boltsky (~textual@office.deviantart.net) has joined #ceph
[19:44] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[19:48] <janos_> joshuay04, i was just downloading those virtio drivers
[19:48] <janos_> they're decent eh?
[19:49] <janos_> for a win7 pro VM
[19:49] <joshuay04> They work well, blew me away, big jump from .65
[19:49] <janos_> cool, glad to hear that
[19:49] <joshuay04> Yes all windows
[19:49] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) has joined #ceph
[19:49] <janos_> i hope the directories are more clear than before ;)
[19:50] <joshuay04> on 1-65 I was getting 50MB/s write 30MB/s read (using DiskSpeedTest). Put these new drivers in and it jumped to 110 write 60 read
[19:50] <janos_> cool
[19:55] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[19:56] * markbby (~Adium@168.94.245.4) Quit (Quit: Leaving.)
[19:57] * Cnidus (~cnidus@216.129.126.126) Quit (Quit: Leaving.)
[19:59] * linuxkidd (~linuxkidd@2001:420:2100:2258:39d3:de25:be2d:1e03) Quit (Quit: Leaving)
[20:02] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[20:03] * cerealkillr (~neil@c-24-5-71-242.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[20:03] * Cnidus (~cnidus@216.129.126.126) has joined #ceph
[20:04] * markbby (~Adium@168.94.245.4) has joined #ceph
[20:05] <pmatulis_> joshuay04: windows virtio drivers?
[20:05] <joshuay04> Yes
[20:05] <pmatulis_> joshuay04: ok, nothing wrong with linux then
[20:06] <joshuay04> No, never had issues with Linux
[20:07] * diegows (~diegows@190.190.5.238) has joined #ceph
[20:09] * dalgaaf (uid15138@id-15138.ealing.irccloud.com) has joined #ceph
[20:17] * joshuay04 (~joshuay04@rrcs-74-218-204-10.central.biz.rr.com) Quit ()
[20:19] * Cnidus (~cnidus@216.129.126.126) Quit (Quit: Leaving.)
[20:25] * Cnidus (~cnidus@216.129.126.126) has joined #ceph
[20:26] * joao|lap (~JL@bl14-168-37.dsl.telepac.pt) Quit (Quit: Leaving)
[20:28] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[20:29] * wrale__ (~wrale@cpe-107-9-20-3.woh.res.rr.com) has joined #ceph
[20:34] * wrale_ (~wrale@cpe-107-9-20-3.woh.res.rr.com) Quit (Read error: Operation timed out)
[20:38] * Cnidus (~cnidus@216.129.126.126) Quit (Quit: Leaving.)
[20:40] * jeff-YF (~jeffyf@67.23.117.122) Quit (Quit: jeff-YF)
[20:44] * Cnidus (~cnidus@216.129.126.126) has joined #ceph
[20:45] * jeff-YF (~jeffyf@216.14.83.26) has joined #ceph
[20:47] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[20:49] * valeech (~valeech@ip72-205-7-86.dc.dc.cox.net) Quit (Quit: valeech)
[20:55] * paul_ (~quassel@S0106c8fb267c0b17.ok.shawcable.net) has joined #ceph
[20:56] <paul_> Hi guys, wonder if someone can help me clarify a few things. I have never set up a storage cluster like ceph before; I've only worked with openfiler/freenas for my SANs.
[20:57] <paul_> I am wanting to set up a ceph cluster with 2 nodes for now, 3 hdds in each node, and export the storage as RBD. Do I need to have an OSD for each block device (sdb, sdc, sdd)?
[20:57] <dmick> paul_: that's a good rule of thumb, yes
[20:59] <paul_> dmick: thanks for your quick response. I'm assuming (I haven't looked for this yet) that there is documentation on what to do when one of those drives fails and how to re-sync it once it's replaced.
[21:00] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[21:01] <paul_> One more noob question. Let's say I have 6 drives (3 on each server) and one of them goes down, does ceph distribute the data in such a way that my cluster is now down, or should I still be able to operate with 1 server up (3 out of 6 drives present in the pool)?
[21:01] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Quit: Leaving.)
[21:04] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[21:04] <janos_> you should still be fully operational
[21:04] * xcrracer (~xcrracer@fw-ext-v-1.kvcc.edu) has joined #ceph
[21:05] <janos_> the cluster speed may be less since it'll be redistributing objects
[21:05] <janos_> but it should be usable
[21:06] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[21:07] <fghaas> paul_: if you're talking about just two nodes, and both of those nodes are also MONs, then the cluster will be down on account of lost MON quorum.
[21:07] <fghaas> so no, if you have only two nodes, the cluster wouldn't be fully operational at all, sorry
[21:07] <paul_> Ahh good point!
[21:08] <janos_> yeah i was assuming the mons were elsewhere. just OSDs down
[21:08] <janos_> actually in that scenario, it won't be shuffling anything. you'll just be 50% degraded
[21:08] <janos_> i would think
[21:08] <paul_> janos_: that's the most common scenario in a drive failure though; under that condition, you're right as well.
[21:12] * jeff-YF_ (~jeffyf@67.23.123.228) has joined #ceph
[21:13] <paul_> so if I have 2 nodes, mons on both, osds on both, say 1 of my hdds fails on node1, cluster should still be ok correct? vs same scenario but say my power supply fails on node1?
[21:14] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) has joined #ceph
[21:15] * jeff-YF (~jeffyf@216.14.83.26) Quit (Ping timeout: 480 seconds)
[21:15] * jeff-YF_ is now known as jeff-YF
[21:16] <janos_> if one hdd fails, you should be fine
[21:16] <janos_> though it is not recommended to have even number of mons
[21:17] <janos_> they need an odd number to gain quorum
[21:17] <janos_> 2 mons = a tie
[21:23] * funnel (~funnel@0001c7d4.user.oftc.net) Quit (Remote host closed the connection)
[21:23] * funnel (~funnel@0001c7d4.user.oftc.net) has joined #ceph
[21:24] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) has left #ceph
[21:25] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[21:27] * Cnidus (~cnidus@216.129.126.126) Quit (Quit: Leaving.)
[21:27] * nwat (~textual@eduroam-245-10.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[21:27] * Cnidus (~cnidus@216.129.126.126) has joined #ceph
[21:36] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[21:45] * sroy (~sroy@207.96.182.162) Quit (Ping timeout: 480 seconds)
[21:47] * sarob (~sarob@50.242.75.3) Quit (Remote host closed the connection)
[21:48] * sarob (~sarob@50.242.75.3) has joined #ceph
[21:53] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[21:53] * markbby (~Adium@168.94.245.4) Quit (Quit: Leaving.)
[21:54] * cerealkillr (~neil@c-24-5-71-242.hsd1.ca.comcast.net) has joined #ceph
[21:54] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[21:56] * sarob (~sarob@50.242.75.3) Quit (Ping timeout: 480 seconds)
[21:56] * sroy (~sroy@207.96.182.162) has joined #ceph
[21:57] * xcrracer (~xcrracer@fw-ext-v-1.kvcc.edu) Quit (Quit: quit)
[21:58] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[22:06] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[22:08] <paul_> janos_: I'm planning on starting with 2 (test, then production) and hopefully moving to 3+ ... is it hard to add nodes later on? Does the entire pool need to resync?
[22:09] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[22:09] <janos_> i don't recall a resync needed, but i haven't added a mon since pre-bobtail days
[22:10] * aeropuerto (~nnscript@dslb-092-072-189-096.pools.arcor-ip.net) Quit (Quit: ( www.nnscript.com :: NoNameScript 4.2 :: www.regroup-esports.com ))
[22:14] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[22:14] * blahnana (~bman@us1.blahnana.com) Quit (Ping timeout: 480 seconds)
[22:15] * jack_ (~alex@amp80-4-78-214-132-184.fbx.proxad.net) has joined #ceph
[22:17] * blahnana (~bman@us1.blahnana.com) has joined #ceph
[22:19] <jack_> Hi guys
[22:19] <jack_> I wonder if there is a clever way to handle rbd-based file systems. RBDs are thin-provisioned, so I cannot reserve space for my fs (which will, indeed, crash if all OSDs are full). Is there a trick to handle rbd size, more cleverly than a sheet of paper and a calculator ?
[22:20] <dmick> if you really want to, you can fill the rbd images with data to actually consume the space they've got reserved
[22:21] * mkoderer (uid11949@id-11949.ealing.irccloud.com) Quit (Quit: Connection closed for inactivity)
[22:24] <ponyofdeath> hi, after i remove an osd from crush / cluster do i also remove its directory under /var/lib/ceph/osd/osd.# ?
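For the question above: once the OSD is gone from the CRUSH map and the cluster, that directory is just leftover state and can be removed. The usual teardown sequence, as a sketch for OSD id N (the directory name may be <cluster>-N rather than osd.N depending on how it was created):

    ceph osd crush remove osd.N
    ceph auth del osd.N
    ceph osd rm N
    umount /var/lib/ceph/osd/ceph-N    # if still mounted
    rm -rf /var/lib/ceph/osd/ceph-N
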
[22:25] <jack_> dmick: you suggest that there is another way ? Or do most people not bother about that, and just watch their OSDs' status and add space before the crash ?
[22:26] <dmick> provision enough space for the VMs you have; perhaps put a check in your provisioning?...I dunno; it's fundamentally no different from other provisioning problems, is it?
[22:26] * andrein (~andrein@188.27.121.224) has joined #ceph
[22:28] <jack_> other systems tell you when you are trying to use non-existent space, rbd does not and lets things crash, but you're right, I guess I just have some provisioning features to add (like checks)
[22:28] <jack_> thanks
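One hedged way to build such a check: compare an image's provisioned size against what it has actually allocated (the pool and image names here are placeholders):

    rbd info rbd/myimage | grep size     # provisioned size
    rbd diff rbd/myimage | awk '{ sum += $2 } END { print sum/1024/1024 " MB allocated" }'
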
[22:30] <Kioob> well try btrfs. It will crash or send "ENOSPC" errors when you have 50% of space used :)
[22:30] <Kioob> :p
[22:31] <paul_> fghaas: I could install a third MON node on one of my VMs just to have proper quorum, any pitfalls there that you know of?
[22:31] <dmick> jack_: the linux kernel, for example, doesn't tell processes when they've allocated more memory than exists
[22:31] * sroy (~sroy@207.96.182.162) Quit (Quit: Quitte)
[22:32] <dmick> only when they try to go use it
[22:33] <Kioob> In my last test, LVM "thin" behaved like RBD : when it's full, all writes are frozen
[22:34] * Cnidus (~cnidus@216.129.126.126) has left #ceph
[22:34] * Cnidus (~cnidus@216.129.126.126) has joined #ceph
[22:34] * Cnidus (~cnidus@216.129.126.126) Quit ()
[22:40] * allsystemsarego (~allsystem@188.26.167.156) Quit (Quit: Leaving)
[22:40] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[22:42] * joef1 (~Adium@2620:79:0:131:e0a8:77b7:bebf:e5a6) has joined #ceph
[22:47] * joef (~Adium@2620:79:0:131:f5cb:7a5e:4ba6:1a6e) Quit (Ping timeout: 480 seconds)
[22:47] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[22:49] * jack_ (~alex@amp80-4-78-214-132-184.fbx.proxad.net) Quit (Quit: leaving)
[22:49] * jdmason (~jon@192.55.54.38) Quit (Remote host closed the connection)
[22:49] * jdmason (~jon@jfdmzpr06-ext.jf.intel.com) has joined #ceph
[22:51] * JCL1 (~JCL@2601:9:5980:39b:d040:8876:5c07:6db3) has joined #ceph
[22:52] * The_Bishop_ (~bishop@g229096164.adsl.alicedsl.de) has joined #ceph
[22:54] * JCL (~JCL@2601:9:5980:39b:40e2:ed93:5546:ad4e) Quit (Ping timeout: 480 seconds)
[22:57] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[22:59] * The_Bishop (~bishop@g229161027.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[23:09] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) Quit (Quit: Computer has gone to sleep.)
[23:12] <bitblt> what is the right way to wipe an old OSD disk for reuse?
[23:12] <bitblt> i've tried sgdisk -z and ceph-disk zap, but then i have to reboot due to the partition changes
[23:13] <bitblt> but then after prepping and adding the osd, it goes away again after another reboot
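A re-use sequence that usually avoids the reboot, assuming /dev/sdX is the disk to recycle (destructive):

    ceph-disk zap /dev/sdX      # wipe GPT and MBR partition structures
    partprobe /dev/sdX          # have the kernel re-read the partition table, no reboot
    ceph-disk prepare --fs-type xfs /dev/sdX
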
[23:16] * rotbeard (~redbeard@2a02:908:df10:6f00:76f0:6dff:fe3b:994d) has joined #ceph
[23:23] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[23:26] * sarob (~sarob@ip-64-134-225-149.public.wayport.net) has joined #ceph
[23:32] * reed (~reed@75-101-54-131.dsl.static.sonic.net) has joined #ceph
[23:38] <bitblt> but then I can mount the drive manually and start the osd daemon and it works
[23:43] * jeff-YF (~jeffyf@67.23.123.228) Quit (Quit: jeff-YF)
[23:44] <gregsfortytwo> either the drive's not being attached when you reboot (that should be easy to check), or else it's got the wrong disk type for the udev triggers to turn the notification into an OSD daemon startup
[23:48] * jeff-YF (~jeffyf@67.23.117.122) has joined #ceph
[23:48] <bitblt> it's there, I can manually mount it
[23:48] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[23:49] <bitblt> parted shows it as gpt
[23:49] <bitblt> i see the journal and the fs
[23:51] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) Quit (Quit: Leaving.)
[23:52] * elyograg (~oftc-webi@client175.mainstreamdata.com) has joined #ceph
[23:52] * elyograg (~oftc-webi@client175.mainstreamdata.com) has left #ceph
[23:52] * elyograg (~oftc-webi@client175.mainstreamdata.com) has joined #ceph
[23:52] <bitblt> so this should recognize and mount it: udevadm trigger --subsystem-match=block --action=add?
[23:53] * ircolle (~Adium@2601:1:8380:2d9:b9d1:af93:7aba:738f) has joined #ceph
[23:56] * reed (~reed@75-101-54-131.dsl.static.sonic.net) Quit (Quit: Ex-Chat)
[23:57] * thb (~me@2a02:2028:6b:a9d0:6267:20ff:fec9:4e40) has joined #ceph
[23:58] <elyograg> I think I already know the answer to this, just looking for confirmation. Emperor was supposed to fix the problems re-sharing ceph via NFS. Has that actually happened?
[23:58] * thb is now known as Guest3354
[23:58] * lavi (~lavi@eth-seco21th2-46-193-64-32.wb.wifirst.net) Quit (Quit: Leaving)
[23:59] * rendar (~s@95.234.176.202) Quit ()

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.