#ceph IRC Log

IRC Log for 2014-01-09

Timestamps are in GMT/BST.

[0:02] * sarob (~sarob@2001:4998:effd:600:582b:c8da:d90a:b1f5) Quit (Remote host closed the connection)
[0:03] * sarob (~sarob@2001:4998:effd:600:582b:c8da:d90a:b1f5) has joined #ceph
[0:04] * sarob_ (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) has joined #ceph
[0:07] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) Quit (Quit: Leaving.)
[0:07] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) has joined #ceph
[0:11] * sarob (~sarob@2001:4998:effd:600:582b:c8da:d90a:b1f5) Quit (Ping timeout: 480 seconds)
[0:11] <sherry> is this document outdated > http://www.ssrc.ucsc.edu/Papers/ssrctr-06-01.pdf
[0:12] <zidarsk8> Is there a list of any "bigger players" using ceph in a production environment?
[0:17] <bandrus> http://www.inktank.com/customers/
[0:17] <bandrus> zidarsk8: ^
[0:17] <bandrus> i think that's about the best I know of
[0:18] <bandrus> there are mentions in various seminars etc of plenty more… but not sure of a centralized list containing more than that
[0:19] * ScOut3R (~scout3r@54009895.dsl.pool.telekom.hu) Quit ()
[0:19] <zidarsk8> okay. Thanks :) most I found was "numerous clients, too many to count"
[0:21] * dis (~dis@109.110.66.132) Quit (Ping timeout: 480 seconds)
[0:21] * sarob_ (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[0:21] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) has joined #ceph
[0:24] * allsystemsarego (~allsystem@188.26.167.66) Quit (Quit: Leaving)
[0:25] * dis (~dis@109.110.66.17) has joined #ceph
[0:26] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) Quit (Read error: Operation timed out)
[0:29] * dmsimard (~Adium@108.163.152.2) Quit (Ping timeout: 480 seconds)
[0:41] * thomnico (~thomnico@92.103.156.154) has joined #ceph
[0:44] * sarob (~sarob@2001:4998:effd:600:21d8:8062:9529:e037) has joined #ceph
[0:45] * JoeGruher (~JoeGruher@jfdmzpr01-ext.jf.intel.com) Quit (Remote host closed the connection)
[0:46] * markbby (~Adium@168.94.245.2) Quit (Quit: Leaving.)
[0:46] <Freeaqingme> until cephfs is deemed stable I want to expose some RBDs using NFS. Is it possible to install and use the client drivers on the same server as the OSDs?
[0:48] <andreask> it's not recommended to use the rbd kernel module on the OSDs
[0:48] <Freeaqingme> okay
[0:49] <Freeaqingme> can I ask why not?
[0:51] <lurbs> The short answer is deadlocks, but I'm not familiar with the details.
[0:52] <Freeaqingme> k
[0:52] <Freeaqingme> gives me a bit of an idea
[0:54] * thomnico (~thomnico@92.103.156.154) Quit (Ping timeout: 480 seconds)
[0:54] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[0:55] * psomas (~psomas@inferno.cc.ece.ntua.gr) has joined #ceph
[0:58] * Pedras1 (~Adium@216.207.42.134) has joined #ceph
[0:59] * Pedras1 (~Adium@216.207.42.134) Quit ()
[1:03] <bandrus> Freeaqingme: i do it with no issues on a home media server, but definitely not recommended in a critical environment.
[1:05] * i_m (~ivan.miro@pool-109-191-69-34.is74.ru) has joined #ceph
[1:05] * Pedras (~Adium@216.207.42.132) Quit (Ping timeout: 480 seconds)
[1:05] * clayb (~kvirc@69.191.241.59) Quit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
[1:11] * psomas (~psomas@inferno.cc.ece.ntua.gr) Quit (Read error: Connection reset by peer)
[1:13] <dmick> Freeaqingme: because of the general rule "don't mount, in the kernel, a filesystem served by a daemon running on that same kernel"
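A common workaround for the deadlock risk discussed above is to map the RBD on a separate gateway host (not an OSD node) and export it from there over NFS. A rough sketch; the pool/image names, mount point, and client subnet below are invented for illustration:

```shell
# On a dedicated gateway host, NOT an OSD node:
rbd map rbd/nfsimage                # kernel-maps the image, e.g. to /dev/rbd0
mkfs.ext4 /dev/rbd0                 # first use only
mkdir -p /export/nfsimage
mount /dev/rbd0 /export/nfsimage

# /etc/exports entry (example subnet):
#   /export/nfsimage 192.168.0.0/24(rw,sync,no_subtree_check)
exportfs -ra                        # re-read /etc/exports
```

This keeps the kernel RBD client and the OSD daemons on different machines, which is the configuration andreask and lurbs are steering toward.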
[1:16] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[1:17] * sarob (~sarob@2001:4998:effd:600:21d8:8062:9529:e037) Quit (Remote host closed the connection)
[1:17] * sarob (~sarob@2001:4998:effd:600:21d8:8062:9529:e037) has joined #ceph
[1:22] * yanzheng (~zhyan@jfdmzpr02-ext.jf.intel.com) has joined #ceph
[1:24] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:25] * sarob (~sarob@2001:4998:effd:600:21d8:8062:9529:e037) Quit (Ping timeout: 480 seconds)
[1:27] * sjustlaptop (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) has joined #ceph
[1:33] * berngp (~bernardo@50-76-54-131-ip-static.hfc.comcastbusiness.net) has joined #ceph
[1:41] * bandrus1 (~Adium@63.192.141.3) has joined #ceph
[1:44] * Shmouel (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) has joined #ceph
[1:45] * sagelap (~sage@2001:388:a098:120:c685:8ff:fe59:d486) Quit (Ping timeout: 480 seconds)
[1:46] <pmatulis> huh?
[1:48] * sagelap (~sage@182.255.122.239) has joined #ceph
[1:48] * bandrus (~Adium@63.192.141.3) Quit (Ping timeout: 480 seconds)
[1:53] * clfh (~clfh@pool-173-61-103-147.cmdnnj.fios.verizon.net) has joined #ceph
[1:54] * berngp (~bernardo@50-76-54-131-ip-static.hfc.comcastbusiness.net) Quit (Quit: berngp)
[1:56] * clfh (~clfh@pool-173-61-103-147.cmdnnj.fios.verizon.net) Quit ()
[1:56] * sagelap (~sage@182.255.122.239) Quit (Ping timeout: 480 seconds)
[1:58] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[1:59] * AfC (~andrew@182.255.122.166) has joined #ceph
[1:59] * mwarwick (~mwarwick@2407:7800:400:1011:3e97:eff:fe91:d9bf) has joined #ceph
[2:00] * sagelap (~sage@182.255.123.109) has joined #ceph
[2:02] * cofol1986 (~xwrj@110.90.119.113) Quit (Read error: Connection reset by peer)
[2:07] * AfC (~andrew@182.255.122.166) Quit (Quit: Leaving.)
[2:16] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[2:17] * zerick (~eocrospom@190.187.21.53) Quit (Remote host closed the connection)
[2:22] * JoeGruher (~JoeGruher@jfdmzpr01-ext.jf.intel.com) has joined #ceph
[2:24] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has left #ceph
[2:24] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[2:26] * nerdtron (~oftc-webi@202.60.8.250) has joined #ceph
[2:27] * alram (~alram@38.122.20.226) Quit (Ping timeout: 480 seconds)
[2:31] * AfC (~andrew@182.255.122.166) has joined #ceph
[2:33] * LeaChim (~LeaChim@host86-174-30-7.range86-174.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:34] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[2:35] * leotreasure (~leotreasu@124-148-97-102.dyn.iinet.net.au) has left #ceph
[2:36] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) Quit (Quit: Leaving.)
[2:40] * xarses (~andreww@12.164.168.115) Quit (Ping timeout: 480 seconds)
[2:41] * iaXe (~axe@223.223.202.194) has joined #ceph
[2:45] * AfC (~andrew@182.255.122.166) Quit (Quit: Leaving.)
[2:45] * dmsimard (~Adium@108.163.152.66) has joined #ceph
[2:45] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) has joined #ceph
[2:54] * dmsimard (~Adium@108.163.152.66) Quit (Quit: Leaving.)
[2:57] * clfh (~clfh@pool-173-61-103-147.cmdnnj.fios.verizon.net) has joined #ceph
[2:58] * haomaiwa_ (~haomaiwan@117.79.232.250) has joined #ceph
[2:59] * clfh (~clfh@pool-173-61-103-147.cmdnnj.fios.verizon.net) Quit ()
[3:04] * haomaiwang (~haomaiwan@117.79.232.238) Quit (Ping timeout: 480 seconds)
[3:04] * sagelap (~sage@182.255.123.109) Quit (Ping timeout: 480 seconds)
[3:06] * ard1t (~ad1@109.69.5.22) Quit (Ping timeout: 480 seconds)
[3:06] * ard1t (~ad1@109.69.5.22) has joined #ceph
[3:09] * shimo (~A13032@122x212x216x66.ap122.ftth.ucom.ne.jp) Quit (Quit: shimo)
[3:11] * clfh (~clfh@pool-173-61-103-147.cmdnnj.fios.verizon.net) has joined #ceph
[3:11] * shimo (~A13032@122x212x216x66.ap122.ftth.ucom.ne.jp) has joined #ceph
[3:12] * bandrus1 (~Adium@63.192.141.3) Quit (Quit: Leaving.)
[3:14] * clfh (~clfh@pool-173-61-103-147.cmdnnj.fios.verizon.net) Quit ()
[3:16] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[3:16] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[3:18] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) Quit (Quit: Leaving.)
[3:18] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Read error: Connection reset by peer)
[3:19] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[3:24] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[3:26] * sjustlaptop (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[3:32] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) has joined #ceph
[3:39] * AfC (~andrew@182.255.122.166) has joined #ceph
[3:46] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) Quit (Quit: Leaving.)
[3:46] * angdraug (~angdraug@12.164.168.115) Quit (Quit: Leaving)
[3:47] * neary (~neary@def92-9-82-243-243-185.fbx.proxad.net) Quit (Read error: Operation timed out)
[3:48] * zidarsk8 (~zidar@84-255-203-33.static.t-2.net) has left #ceph
[3:49] * diegows (~diegows@190.190.17.57) Quit (Ping timeout: 480 seconds)
[3:50] * shimo (~A13032@122x212x216x66.ap122.ftth.ucom.ne.jp) Quit (Quit: shimo)
[3:52] * cofol1986 (~xwrj@110.90.119.113) has joined #ceph
[3:54] * sjustlaptop (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) has joined #ceph
[3:59] <cofol1986> Hey guys, I deployed my ceph and found that the monitor ip changes to another ip on the host, not the dedicated one written in the config file.
[3:59] <cofol1986> Has anyone seen this situation?
[4:00] * mkoo (~mkoo@chello213047069005.1.13.vie.surfer.at) Quit (Quit: mkoo)
[4:02] * sarob (~sarob@2601:9:7080:13a:45ed:c88a:516a:d2d1) has joined #ceph
[4:02] * sarob (~sarob@2601:9:7080:13a:45ed:c88a:516a:d2d1) Quit (Remote host closed the connection)
[4:02] * sarob (~sarob@2001:4998:effd:7801::1081) has joined #ceph
[4:04] * glambert_ (~glambert@ptr-22.204.219.82.rev.exa.net.uk) has joined #ceph
[4:09] * caius (~caius@pool-173-61-103-147.cmdnnj.fios.verizon.net) has joined #ceph
[4:09] * glambert (~glambert@37.157.50.80) Quit (Ping timeout: 480 seconds)
[4:13] * haomaiwa_ (~haomaiwan@117.79.232.250) Quit (Remote host closed the connection)
[4:13] * haomaiwang (~haomaiwan@101.78.195.61) has joined #ceph
[4:17] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[4:19] * haomaiwa_ (~haomaiwan@211.155.113.185) has joined #ceph
[4:22] * haomaiwang (~haomaiwan@101.78.195.61) Quit (Ping timeout: 480 seconds)
[4:25] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[4:25] * sarob_ (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[4:27] * sarob (~sarob@2001:4998:effd:7801::1081) Quit (Ping timeout: 480 seconds)
[4:30] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Read error: Connection reset by peer)
[4:30] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[4:34] * sagelap (~sage@182.255.123.109) has joined #ceph
[4:36] * sagelap (~sage@182.255.123.109) Quit ()
[4:36] * xsun (~xsun@187.107.0.22) has joined #ceph
[4:37] * sagelap (~sage@182.255.123.109) has joined #ceph
[4:38] * sarob_ (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[4:39] * sarob (~sarob@2001:4998:effd:7801::115c) has joined #ceph
[4:39] * sagelap1 (~sage@182.255.123.109) has joined #ceph
[4:39] * sagelap (~sage@182.255.123.109) Quit ()
[4:42] * AfC (~andrew@182.255.122.166) Quit (Quit: Leaving.)
[4:49] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) has joined #ceph
[4:55] * sarob (~sarob@2001:4998:effd:7801::115c) Quit (Ping timeout: 480 seconds)
[5:02] * sarob (~sarob@nat-dip6.cfw-a-gci.corp.yahoo.com) has joined #ceph
[5:05] * fireD (~fireD@93-139-141-183.adsl.net.t-com.hr) has joined #ceph
[5:07] * fireD_ (~fireD@93-142-213-144.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[5:07] * JC (~JC@71-94-44-243.static.trlk.ca.charter.com) Quit (Quit: Leaving.)
[5:10] * sarob (~sarob@nat-dip6.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[5:17] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[5:18] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[5:18] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[5:18] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) Quit (Remote host closed the connection)
[5:18] * sagelap1 (~sage@182.255.123.109) Quit (Quit: Leaving.)
[5:20] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) has joined #ceph
[5:20] * sjustlaptop (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[5:25] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[5:32] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) Quit (Quit: Leaving.)
[5:42] * Vacum (~vovo@i59F7A87D.versanet.de) has joined #ceph
[5:46] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[5:48] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[5:48] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) Quit (Remote host closed the connection)
[5:49] * Vacum_ (~vovo@i59F79F21.versanet.de) Quit (Ping timeout: 480 seconds)
[5:50] * nhm_ (~nhm@65-128-184-39.mpls.qwest.net) has joined #ceph
[5:51] * mattbenjamin (~matt@76-206-42-105.lightspeed.livnmi.sbcglobal.net) Quit (Quit: Leaving.)
[5:54] * nhm (~nhm@184-97-148-136.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[5:57] * nwat (~textual@74-92-218-113-Colorado.hfc.comcastbusiness.net) has joined #ceph
[6:03] * grepory (foopy@lasziv.reprehensible.net) Quit (Read error: Connection reset by peer)
[6:03] * grepory (foopy@lasziv.reprehensible.net) has joined #ceph
[6:04] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[6:06] <loicd> kbader: ping
[6:08] * zackc (~zackc@0001ba60.user.oftc.net) Quit (Read error: Operation timed out)
[6:09] * zackc (~zackc@0001ba60.user.oftc.net) has joined #ceph
[6:17] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) has joined #ceph
[6:17] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[6:25] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[6:29] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:31] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[6:33] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[6:35] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[6:39] * AfC (~andrew@182.255.122.166) has joined #ceph
[6:45] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[6:46] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Ping timeout: 480 seconds)
[6:47] * AfC (~andrew@182.255.122.166) Quit (Quit: Leaving.)
[6:48] * mattt_ (~textual@cpc25-rdng20-2-0-cust162.15-3.cable.virginm.net) has joined #ceph
[6:48] * i_m (~ivan.miro@pool-109-191-69-34.is74.ru) Quit (Ping timeout: 480 seconds)
[6:48] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[6:48] * shang (~ShangWu@175.41.48.77) has joined #ceph
[7:00] * mattt_ (~textual@cpc25-rdng20-2-0-cust162.15-3.cable.virginm.net) Quit (Quit: Computer has gone to sleep.)
[7:06] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) Quit (Quit: Leaving.)
[7:08] * dzianis__ (~dzianis@86.57.255.91) Quit (Ping timeout: 480 seconds)
[7:11] * zidarsk8 (~zidar@84-255-203-33.static.t-2.net) has joined #ceph
[7:11] * nwat (~textual@74-92-218-113-Colorado.hfc.comcastbusiness.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[7:12] * i_m (~ivan.miro@2001:470:1f0b:1a08:3e97:eff:febd:a80c) has joined #ceph
[7:12] * zidarsk8 (~zidar@84-255-203-33.static.t-2.net) has left #ceph
[7:13] * dzianis__ (~dzianis@86.57.255.91) has joined #ceph
[7:17] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[7:25] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[7:28] * alfredodeza (~alfredode@198.206.133.89) Quit (Read error: Connection reset by peer)
[7:31] <nwf_> Hey channel: any contraindication to running a mon process on a FreeBSD 9 box?
[7:33] <mwarwick> hey, have a bunch of pgs stuck in down+peering state, tried repair but they are still stuck, anything I can do to recover?
[7:33] * JoeGruher (~JoeGruher@jfdmzpr01-ext.jf.intel.com) Quit (Remote host closed the connection)
[7:35] <mwarwick> ceph pg dump_stuck
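For reference, the stuck-PG triage that mwarwick is doing usually involves these commands; the PG id below is a placeholder:

```shell
ceph pg dump_stuck inactive   # PGs that are not active (e.g. down+peering)
ceph pg dump_stuck unclean    # PGs not fully replicated
ceph pg dump_stuck stale      # PGs whose OSDs have stopped reporting
ceph pg 2.5 query             # detailed peering state for one PG ("2.5" is a placeholder id)
ceph pg repair 2.5            # ask the primary OSD to repair that PG
```

`ceph pg <pgid> query` is usually the most useful of these for down+peering PGs, since it shows which OSDs the PG is waiting on.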
[7:38] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[7:56] * mattt_ (~textual@94.236.7.190) has joined #ceph
[8:18] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[8:21] * mwarwick (~mwarwick@2407:7800:400:1011:3e97:eff:fe91:d9bf) Quit (Quit: Leaving.)
[8:22] * Siva (~sivat@generalnat.eglbp.corp.yahoo.com) has joined #ceph
[8:26] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[8:27] * Cube1 (~Cube@66-87-77-8.pools.spcsdns.net) Quit (Quit: Leaving.)
[8:34] * sleinen (~Adium@2001:620:0:25:5924:71c2:197b:7513) has joined #ceph
[8:37] * Sysadmin88 (~IceChat77@2.218.8.40) Quit (Quit: If your not living on the edge, you're taking up too much space)
[8:38] * mattch (~mattch@pcw3047.see.ed.ac.uk) Quit (Ping timeout: 480 seconds)
[8:39] * sleinen (~Adium@2001:620:0:25:5924:71c2:197b:7513) Quit ()
[8:39] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[8:40] * nerdtron (~oftc-webi@202.60.8.250) Quit (Remote host closed the connection)
[8:42] * mattch (~mattch@pcw3047.see.ed.ac.uk) has joined #ceph
[8:43] * tobru (~quassel@2a02:41a:3999::94) Quit (Remote host closed the connection)
[8:43] * duma (duma@d.clients.kiwiirc.com) has joined #ceph
[8:45] * neary (~neary@def92-9-82-243-243-185.fbx.proxad.net) has joined #ceph
[8:45] <duma> hi all, I'm wondering how rbd snapshot is implemented, I see a "nosnap" under the osd "current" directory which is mounted as ext4.
[8:46] * tobru (~quassel@2a02:41a:3999::94) has joined #ceph
[8:46] <duma> So does that mean rbd snapshots won't be supported with ext4?
[8:47] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[8:48] <duma> Checked that my kernel is 2.6.37, which probably doesn't contain the ext4 snapshot feature.
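For what it's worth, rbd snapshots are implemented at the RADOS layer, not via the OSD's backing filesystem, so they work on ext4-backed OSDs too; the "nosnap"/"current" markers only relate to btrfs-based FileStore internals. The snapshot commands look like this (pool and image names are examples):

```shell
rbd snap create rbd/myimage@snap1    # take a snapshot of the image
rbd snap ls rbd/myimage              # list its snapshots
rbd snap rollback rbd/myimage@snap1  # roll the image back to the snapshot
rbd snap rm rbd/myimage@snap1        # delete the snapshot
```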
[8:48] * ksingh (~Adium@2001:708:10:91:55f:2bdb:d10e:b04e) has joined #ceph
[8:48] * renhc (renhc@d.clients.kiwiirc.com) has joined #ceph
[9:05] * DarkAceZ (~BillyMays@50-32-13-33.drr01.hrbg.pa.frontiernet.net) Quit (Ping timeout: 480 seconds)
[9:08] * ksingh1 (~Adium@2001:708:10:4008:b::11) has joined #ceph
[9:14] * ksingh (~Adium@2001:708:10:91:55f:2bdb:d10e:b04e) Quit (Ping timeout: 480 seconds)
[9:15] * mschiff (~mschiff@ppp-94-64-127-185.home.otenet.gr) has joined #ceph
[9:17] * mattch (~mattch@pcw3047.see.ed.ac.uk) Quit (Ping timeout: 480 seconds)
[9:21] * mattch (~mattch@pcw3047.see.ed.ac.uk) has joined #ceph
[9:24] * mschiff (~mschiff@ppp-94-64-127-185.home.otenet.gr) Quit (Remote host closed the connection)
[9:24] * neary (~neary@def92-9-82-243-243-185.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[9:25] * DarkAceZ (~BillyMays@50-32-13-33.drr01.hrbg.pa.frontiernet.net) has joined #ceph
[9:35] * Siva (~sivat@generalnat.eglbp.corp.yahoo.com) Quit (Quit: Siva)
[9:38] * AfC (~andrew@182.255.122.166) has joined #ceph
[9:38] * mancdaz_away is now known as mancdaz
[9:39] * mancdaz is now known as mancdaz_away
[9:40] * mancdaz_away is now known as mancdaz
[9:42] * mkoo (~mkoo@chello213047069005.1.13.vie.surfer.at) has joined #ceph
[9:43] * Siva (~sivat@generalnat.eglbp.corp.yahoo.com) has joined #ceph
[9:45] * ldurnez (~ldurnez@proxy.ovh.net) has joined #ceph
[9:57] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) has joined #ceph
[10:04] * Siva_ (~sivat@generalnat.eglbp.corp.yahoo.com) has joined #ceph
[10:04] * Siva (~sivat@generalnat.eglbp.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[10:04] * Siva_ is now known as Siva
[10:10] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[10:10] * ChanServ sets mode +v andreask
[10:12] * neary (~neary@62.129.6.2) has joined #ceph
[10:17] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[10:18] * neary (~neary@62.129.6.2) Quit (Quit: Leaving)
[10:19] * AfC (~andrew@182.255.122.166) Quit (Quit: Leaving.)
[10:19] * sleinen (~Adium@2001:620:0:25:5c3d:5f76:29a6:8ca2) has joined #ceph
[10:23] * BManojlovic (~steki@217-162-200-111.dynamic.hispeed.ch) has joined #ceph
[10:26] * xdeller (~xdeller@91.218.144.129) has joined #ceph
[10:30] * allsystemsarego (~allsystem@188.26.167.66) has joined #ceph
[10:33] * garphy`aw is now known as garphy
[10:41] * ksingh (~Adium@193.167.168.54) has joined #ceph
[10:42] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[10:44] * nos (nos@188.24.207.111) has joined #ceph
[10:45] * nos (nos@188.24.207.111) Quit ()
[10:46] * ksingh1 (~Adium@2001:708:10:4008:b::11) Quit (Ping timeout: 480 seconds)
[10:47] * LeaChim (~LeaChim@host86-174-30-7.range86-174.btcentralplus.com) has joined #ceph
[10:48] * ksingh1 (~Adium@2001:708:10:10:d969:1849:330e:bd7a) has joined #ceph
[10:49] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:52] * yanzheng (~zhyan@jfdmzpr02-ext.jf.intel.com) Quit (Remote host closed the connection)
[10:54] * ksingh (~Adium@193.167.168.54) Quit (Ping timeout: 480 seconds)
[10:56] * acalvo (~acalvo@208.Red-83-61-6.staticIP.rima-tde.net) has joined #ceph
[10:57] <acalvo> Hello
[10:57] <acalvo> I'd like to know what is the best architecture for cluster with 5 nodes
[10:57] <acalvo> right now we have: node a: mon0, node b: mon1, node c: osd0, node d: osd1,node e: osd2,mds1
[10:58] <acalvo> is it efficient?
[10:58] * ksingh1 (~Adium@2001:708:10:10:d969:1849:330e:bd7a) has left #ceph
[11:01] <duma> what's the cluster for? if not for CephFS, then mds is not required.
[11:07] * renhc (renhc@d.clients.kiwiirc.com) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[11:12] <acalvo> yes, clients are using ceph-fuse to retrieve and store files
[11:13] <Vacum> acalvo: an even number of mon-nodes doesn't make much sense IIRC
[11:14] <Vacum> acalvo: as the osds will only run if they reach more than half of the mon-nodes. so if you have 2 mons and one dies, the osds will suicide as they do not see _more_ than 1 mon
[11:14] <acalvo> what would be a good architecture?
[11:14] <acalvo> ok, good thing to know
[11:14] <acalvo> also, only having one MDS seems a SPOF
[11:15] <acalvo> even if data is not lost, clients wouldn't be able to retrieve it
[11:16] <duma> for 5 nodes, i guess "3 monitors, 2 mds (active/standby), X osds (X = number of disks)" is fine.
[11:17] <duma> if you want better performance, you should use an ssd to store the osd journal.
[11:17] <acalvo> any special distribution of roles across nodes?
[11:18] <acalvo> like having mds and mon mixed up or osd and mds
[11:18] <acalvo> ?
[11:19] <duma> you can mix them up as long as the resources (cpu+mem) are enough, as far as i know
[11:20] <duma> of course, 3 monitors should be put on different hosts in case of host failure.
[11:21] <acalvo> good
[11:21] <acalvo> thanks Vacum and duma
[11:21] <acalvo> last question: any tutorial about setting up 2 MDS?
[11:22] <acalvo> per the official docs, no more than 1 MDS should be used (and there's no doc on multi-MDS setups there)
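A layout like the one being discussed (3 mons on separate hosts, an active/standby MDS pair) can be sketched as a ceph.conf fragment. Host names and addresses here are invented; with two ceph-mds daemons running against the same filesystem, one becomes active and the other goes standby automatically:

```
[global]
mon initial members = a, b, c

[mon.a]
host = node-a
mon addr = 10.0.0.1:6789
[mon.b]
host = node-b
mon addr = 10.0.0.2:6789
[mon.c]
host = node-c
mon addr = 10.0.0.3:6789

[mds.a]
host = node-d
[mds.b]
host = node-e
```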
[11:22] * tobru (~quassel@2a02:41a:3999::94) Quit (Remote host closed the connection)
[11:25] * rtek (~sjaak@rxj.nl) has left #ceph
[11:25] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[11:36] * tryggvil (~tryggvil@178.19.53.254) Quit (Quit: tryggvil)
[11:37] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[11:45] * Zethrok (~martin@95.154.26.34) has joined #ceph
[11:54] * duma (duma@d.clients.kiwiirc.com) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[11:56] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[12:07] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[12:09] * garphy is now known as garphy`aw
[12:10] * garphy`aw is now known as garphy
[12:11] * Siva (~sivat@generalnat.eglbp.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[12:12] * yanzheng (~zhyan@jfdmzpr05-ext.jf.intel.com) has joined #ceph
[12:14] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[12:17] <joao> <Vacum> acalvo: as the osds will only run if they reach more than half of the mon-nodes. so if you have 2 mons and one dies, the osds will suicide as they do not see _more_ than 1 mon
[12:17] <joao> osds will not suicide
[12:17] <joao> they'll just stop handling requests
[12:18] <acalvo> Vacum's point was right, even number of OSD should not be used
[12:18] <joao> mons you mean
[12:18] * flaxy (~afx@78.130.171.68) Quit (Quit: WeeChat 0.4.2)
[12:18] * flaxy (~afx@78.130.171.68) has joined #ceph
[12:19] <joao> and again, that's a misconception; from a HA standpoint, they should not be used as 2 mons offer no more HA than 1 mon with twice the infrastructure
[12:19] <joao> from a replication standpoint, 2 mons is better than 1
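joao's HA point, that 2 mons tolerate no more failures than 1, falls out of the majority-quorum rule: the reachable monitors must form a strict majority of the full monitor set. A quick generic illustration (not ceph code):

```shell
# majority quorum check: reachable monitors must exceed total/2
has_quorum() { [ "$2" -gt $(($1 / 2)) ] && echo yes || echo no; }

has_quorum 2 1   # no  : with 2 mons, losing 1 leaves no strict majority
has_quorum 3 2   # yes : with 3 mons, 2 of 3 is still a majority
has_quorum 4 2   # no  : even counts waste a mon; 4 mons tolerate only 1 failure
```

So an even monitor count never buys extra availability over the next-lower odd count.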
[12:27] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[12:27] * tryggvil (~tryggvil@178.19.53.254) Quit (Quit: tryggvil)
[12:30] <acalvo> joao, for 5 nodes, I'll deploy 3 mon, 2 mds and 3 osd
[12:30] * shang (~ShangWu@175.41.48.77) Quit (Read error: Operation timed out)
[12:32] * AndreyGrebennikov (~Andrey@91.207.132.67) Quit (Quit: Leaving)
[12:35] <joao> seems reasonable
[12:35] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[12:35] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) has joined #ceph
[12:38] * KindOne (KindOne@0001a7db.user.oftc.net) has joined #ceph
[12:48] * BManojlovic (~steki@217-162-200-111.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[12:50] * flaxy (~afx@78.130.171.68) Quit (Quit: WeeChat 0.4.2)
[12:51] * flaxy (~afx@78.130.171.68) has joined #ceph
[12:54] * dalchand (~dalchand@106.51.144.169) has joined #ceph
[12:55] <dalchand> anyone using swift rest api ?
[12:56] * xsun (~xsun@187.107.0.22) Quit (Ping timeout: 480 seconds)
[12:59] * caius (~caius@pool-173-61-103-147.cmdnnj.fios.verizon.net) Quit (Read error: Operation timed out)
[12:59] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[13:03] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) Quit (Quit: Siva)
[13:25] * glambert_ (~glambert@ptr-22.204.219.82.rev.exa.net.uk) Quit (Ping timeout: 480 seconds)
[13:30] * Siva (~sivat@115.244.233.104) has joined #ceph
[13:33] * garphy is now known as garphy`aw
[13:33] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[13:38] * iaXe (~axe@223.223.202.194) Quit (Quit: Leaving)
[13:44] * ard1t (~ad1@109.69.5.22) Quit (Ping timeout: 480 seconds)
[13:44] * hjjg (~hg@p3EE332CA.dip0.t-ipconnect.de) has joined #ceph
[13:45] * sagelap (~sage@perth.vba.iseek.com.au) has joined #ceph
[13:48] * ard1t (~ad1@109.69.5.22) has joined #ceph
[13:51] * glambert (~glambert@ptr-22.204.219.82.rev.exa.net.uk) has joined #ceph
[14:05] * getup (~getup@gw.office.cyso.net) has joined #ceph
[14:13] <dalchand> anyone using swift rest api ?
[14:13] * garphy`aw is now known as garphy
[14:14] * garphy is now known as garphy`aw
[14:15] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[14:19] * nwat (~textual@74-92-218-113-Colorado.hfc.comcastbusiness.net) has joined #ceph
[14:28] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) has joined #ceph
[14:47] * thomnico (~thomnico@37.160.197.138) has joined #ceph
[14:59] * JC (~JC@71-94-44-243.static.trlk.ca.charter.com) has joined #ceph
[15:01] * sagelap (~sage@perth.vba.iseek.com.au) Quit (Ping timeout: 480 seconds)
[15:08] * thomnico (~thomnico@37.160.197.138) Quit (Ping timeout: 480 seconds)
[15:08] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) has joined #ceph
[15:09] * dalchand (~dalchand@106.51.144.169) Quit (Quit: Leaving)
[15:09] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[15:13] * elder_ (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[15:20] * elder_ (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Quit: This computer has gone to sleep)
[15:23] * elder_ (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[15:25] * elder_ (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit ()
[15:25] * markbby (~Adium@168.94.245.2) has joined #ceph
[15:25] * i_m (~ivan.miro@2001:470:1f0b:1a08:3e97:eff:febd:a80c) Quit (Ping timeout: 480 seconds)
[15:35] * diegows (~diegows@mail.bittanimation.com) has joined #ceph
[15:39] * elder_ (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[15:46] * pressureman (~pressurem@62.217.45.26) has joined #ceph
[15:51] <pressureman> I am about to set up another small ceph cluster, two hosts, each with 10 disks on an LSI megaraid. Is it preferable to use hardware raid and a single OSD per host, or configure the controller to export each disk individually to the OS, and run one OSD per disk?
[15:52] * i_m (~ivan.miro@pool-109-191-69-34.is74.ru) has joined #ceph
[15:56] * nwat (~textual@74-92-218-113-Colorado.hfc.comcastbusiness.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[15:59] * BillK (~BillK-OFT@124-148-103-108.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[16:01] * asmaps (~quassel@2a03:4000:2:3c5::80) Quit (Remote host closed the connection)
[16:03] * asmaps (~quassel@2a03:4000:2:3c5::80) has joined #ceph
[16:03] * elder_ (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Quit: This computer has gone to sleep)
[16:05] <mattch> pressureman: the latter, usually. You don't gain anything from raid that you don't get from ceph with replication, and raid will probably only serve to slow things down unnecessarily. JBOD is what most folk do.
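With the controller in JBOD mode, the one-OSD-per-disk layout mattch describes was typically set up with ceph-deploy at the time; the host name and device names below are examples:

```shell
# on the admin node; 'ceph-node1' and the devices are examples
ceph-deploy disk zap ceph-node1:sdb          # wipe the disk first
ceph-deploy osd create ceph-node1:sdb        # one OSD per raw disk
ceph-deploy osd create ceph-node1:sdc
# common variant: put the journal on a shared SSD partition
ceph-deploy osd create ceph-node1:sdd:/dev/ssd1
```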
[16:05] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[16:05] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[16:06] * elder_ (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[16:06] <glambert> think i have every error under the sun under "ceph health"
[16:06] <glambert> HEALTH_WARN 16 pgs backfill; 10 pgs backfilling; 21 pgs degraded; 1 pgs down; 1 pgs incomplete; 1 pgs stuck inactive; 27 pgs stuck unclean; 100 requests are blocked > 32 sec; recovery 557946/2117303 objects degraded (26.352%); pool cloudstack has too few pgs; pool .rgw.buckets has too few pgs; clock skew detected on mon.st003, mon.st001
[16:07] <glambert> guessing the degraded and backfilling stuff is because I've rebooted some of the nodes
[16:07] <mattch> glambert: Start by fixing the clock skew... you might be lucky and that'll tidy up some other issues
[16:08] <mattch> glambert: At least it's only WARN, not ERROR, which implies that while the pool is degraded, it is taking steps to resolve that back to being HEALTH_OK
[16:08] * markbby1 (~Adium@168.94.245.3) has joined #ceph
[16:08] <pressureman> mattch, how does the OSD react to disk hardware failures, which would normally be masked by the hardware raid?
[16:09] <mattch> pressureman: It's usual to put one osd per disk, and replicate across different disks in different servers, giving you redundancy against single disk or whole-server failure
[16:10] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[16:12] * markbby (~Adium@168.94.245.2) Quit (Remote host closed the connection)
[16:12] <pressureman> mattch, what i mean is, how does the OSD behave if things start going wonky on the disk? e.g. IO timeouts, or it just craps itself entirely and no longer appears on the bus
[16:12] <pressureman> does the OSD recognize a hardware failure and mark itself as down/out?
[16:13] <Gugge-47527> that would depend on the hardware failure
[16:13] <mattch> pressureman: The osd just relies on the kernel/mount/fs to handle those kind of things... but yes, if the disk fails in some way that it stops being accessible, the osd should go out of service
[16:13] * Siva (~sivat@115.244.233.104) Quit (Quit: Siva)
[16:13] <Gugge-47527> if the PSU fails, it does not mark itself out, as it's not running anymore :)
[16:13] <Gugge-47527> But the other parts of ceph mark the osd out :)
[16:14] <pressureman> i just have bad memories of mdraid hanging the whole system when a SATA disk dies....
[16:14] <mattch> as Gugge says, same applies to other failures that make an osd or server inaccessible - power failure, network outage etc
[16:15] <Gugge-47527> well, bad sata disks without tler can cause looooong hangs
[16:15] * markbby1 (~Adium@168.94.245.3) Quit (Remote host closed the connection)
[16:15] <pressureman> and i remember even older systems where timeouts on an IDE bus would lock up the whole thing
[16:15] <mattch> pressureman: Some config opts relating to this here: http://ceph.com/docs/master/rados/configuration/mon-osd-interaction/
[16:15] <pressureman> i think modern AHCI controllers fixed a lot of that
[16:16] <Vacum> pressureman: the osd will go down then. we had [sdc] Unhandled sense code [sdc] Medium Error Sense: Unrecovered read error on a disk. the osd went down after 3 of those (after a few minutes). SAS disk btw
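The down/out behaviour discussed above is governed by the options documented at the mon-osd-interaction page mattch linked; a minimal sketch with the usual defaults of that era (values in seconds, shown here for illustration):

```ini
[global]
; peers report an OSD down after ~20s of missed heartbeats
osd heartbeat grace = 20
; monitors mark a down OSD "out" (triggering backfill) after 5 minutes
mon osd down out interval = 300
```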
[16:16] * elder_ (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Quit: This computer has gone to sleep)
[16:16] * thomnico (~thomnico@2a01:e35:8b41:120:6d36:75da:9b91:b91) has joined #ceph
[16:17] <pressureman> Vacum, and no ill effects during those few minutes?
[16:18] <Vacum> pressureman: hard to tell. we were doing some load tests, but they didn't break by it. could be that the request(s) trying to access those sectors were hanging
[16:18] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Read error: Operation timed out)
[16:18] * markbby (~Adium@168.94.245.2) has joined #ceph
[16:20] <Vacum> once the osd was down, it went out after a few minutes, then backfilling started
[16:22] <mattch> pressureman: fwiw, I accidentally firewalled off a ceph test pool from the libvirt (rados) hosts for 24 hours... all the guests just blocked on disk access and carried on fine once I removed the firewall. Felt like when an nfs mount hangs :)
[16:23] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) Quit (Remote host closed the connection)
[16:26] <glambert> mattch, I'm a bit confused. I've installed ntp on all three nodes and one of them is still 2 mins ahead. configured /etc/ntp.conf to use 0.uk.pool.ntp.org, 1.uk.pool.ntp.org, 2.uk.pool.ntp.org and 3.uk.pool.ntp.org
[16:26] <tnt> ntp only tweaks by small amounts so it can take some time for them to go into sync
[16:27] <mattch> glambert: Large time jumps sometimes take an age to drift back. You can do something like ntpdate -s to force an update I think...
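A minimal sketch of forcing a one-off clock step as mattch suggests (assumes a Debian-style init script name; ntpd must be stopped first so ntpdate can bind the NTP port):

```console
service ntp stop
ntpdate -s 0.uk.pool.ntp.org   # -s logs via syslog; steps the clock once
service ntp start
```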
[16:30] * mcbyte (~mcbyte@p5DCB324A.dip0.t-ipconnect.de) has joined #ceph
[16:35] * bandrus (~Adium@12.111.91.2) has joined #ceph
[16:36] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) Quit (Quit: Leaving.)
[16:38] * gregsfortytwo (~Adium@cpe-172-250-69-138.socal.res.rr.com) has joined #ceph
[16:41] * diegows (~diegows@mail.bittanimation.com) Quit (Ping timeout: 480 seconds)
[16:42] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[16:43] * Cube (~Cube@199.168.44.193) has joined #ceph
[16:47] * paveraware (~tomc@216.51.73.42) has joined #ceph
[16:47] * nwat (~textual@128.117.152.27) has joined #ceph
[16:47] * mattbenjamin (~matt@aa2.linuxbox.com) has joined #ceph
[16:48] * bandrus (~Adium@12.111.91.2) Quit (Quit: Leaving.)
[16:49] <pressureman> another question about multiple OSDs on a raid controller... if possible, is it preferable to export the disks as JBOD, or individual raid-0 disks?
[16:49] * bandrus (~Adium@12.111.91.2) has joined #ceph
[16:50] <pressureman> this is an LSI SAS 9280 btw
[16:50] * nwat (~textual@128.117.152.27) Quit ()
[16:51] * diegows (~diegows@mail.bittanimation.com) has joined #ceph
[16:52] <saturnin1> question, when doing 'ceph pg repair' how do you let ceph know which replica is good?
[16:52] * nwat (~textual@128.117.152.27) has joined #ceph
[16:54] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[16:57] * jhurlbert (~jhurlbert@216.57.209.252) Quit (Quit: jhurlbert)
[16:57] * bandrus (~Adium@12.111.91.2) Quit (Ping timeout: 480 seconds)
[16:59] * mcbyte (~mcbyte@p5DCB324A.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[16:59] <paveraware> I'm seeing a deadlock in the kernel driver it looks like, I think it's a regression from dumpling to emperor, but it's pretty easy to reproduce... create rbd, map it, format, mount, write some data, unmount, snap, mount, write, unmount, snap.... after the eighth snapshot, when you write for the ninth time the rbd becomes unmounted, and the machine must be hard power-cycled
[17:02] * Cube (~Cube@199.168.44.193) Quit (Quit: Leaving.)
[17:06] * nwf_ (~nwf@67.62.51.95) Quit (Ping timeout: 480 seconds)
[17:08] * paveraware (~tomc@216.51.73.42) Quit (Quit: paveraware)
[17:09] * bandrus (~Adium@12.111.91.2) has joined #ceph
[17:15] * bandrus (~Adium@12.111.91.2) Quit (Quit: Leaving.)
[17:20] * i_m (~ivan.miro@pool-109-191-69-34.is74.ru) Quit (Ping timeout: 480 seconds)
[17:27] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[17:27] * alram (~alram@38.122.20.226) has joined #ceph
[17:27] * pressureman (~pressurem@62.217.45.26) Quit (Quit: Ex-Chat)
[17:30] * ircolle (~Adium@2601:1:8380:2d9:a4eb:56cb:982:9c34) has joined #ceph
[17:35] <loicd> scuttlemonkey: I'm working on organizational affiliation for the ceph repository. Do you know who is anwleung <anwleung@29311d96-e01e-0410-9327-a35deaab8ce9> and carlosm <carlosm@29311d96-e01e-0410-9327-a35deaab8ce9> ?
[17:38] * yanzheng (~zhyan@jfdmzpr05-ext.jf.intel.com) Quit (Remote host closed the connection)
[17:44] <scuttlemonkey> loicd: I'm not sure...sagewk might know
[17:47] * bandrus (~Adium@63.192.141.3) has joined #ceph
[17:47] * thomnico (~thomnico@2a01:e35:8b41:120:6d36:75da:9b91:b91) Quit (Quit: Ex-Chat)
[17:48] * getup (~getup@gw.office.cyso.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[17:49] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:50] <diegows> hi
[17:50] <diegows> are there packages for Ubuntu Saucy (13.10)?
[17:50] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) has joined #ceph
[17:52] <dmsimard> loicd: organizational affiliation ?
[17:52] <loicd> dmsimard: IWeb <contact@iweb.com> David Moreau Simard <dmsimard@iweb.com>
[17:52] <loicd> for instance
[17:52] <dmsimard> Ah..
[17:52] <loicd> so commits can be associated to organizations
[17:52] <dmsimard> Yeah, my git account is under my personal address :)
[17:53] <loicd> http://metrics.ceph.com/scm-companies.html
[17:53] <loicd> that's why there are map files ;-)
[17:53] <dmsimard> That's nice, so a bit like http://www.stackalytics.com/ then
[17:54] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has left #ceph
[18:02] * mattt_ (~textual@94.236.7.190) Quit (Quit: Computer has gone to sleep.)
[18:04] * JC (~JC@71-94-44-243.static.trlk.ca.charter.com) Quit (Quit: Leaving.)
[18:06] * JC (~JC@71-94-44-243.static.trlk.ca.charter.com) has joined #ceph
[18:10] * hjjg (~hg@p3EE332CA.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[18:14] * Cube (~Cube@199.168.44.193) has joined #ceph
[18:14] * Sysadmin88 (~IceChat77@2.218.8.40) has joined #ceph
[18:17] * angdraug (~angdraug@12.164.168.115) has joined #ceph
[18:18] * xdeller (~xdeller@91.218.144.129) Quit (Quit: Leaving)
[18:20] * JoeGruher (~JoeGruher@jfdmzpr03-ext.jf.intel.com) has joined #ceph
[18:22] * nwf_ (~nwf@67.62.51.95) has joined #ceph
[18:28] * diegows (~diegows@mail.bittanimation.com) Quit (Read error: Operation timed out)
[18:29] * Sysadmin88 (~IceChat77@2.218.8.40) Quit (Quit: OUCH!!!)
[18:29] * Sysadmin88 (~IceChat77@2.218.8.40) has joined #ceph
[18:31] * fireD_ (~fireD@93-142-208-157.adsl.net.t-com.hr) has joined #ceph
[18:32] * fireD (~fireD@93-139-141-183.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[18:36] * root__ (~root@mordur.lhi.is) has joined #ceph
[18:37] * root__ (~root@mordur.lhi.is) Quit ()
[18:37] * mancdaz is now known as mancdaz_away
[18:38] * fcoj (~root@mordur.lhi.is) has joined #ceph
[18:39] * bjornar (~bjornar@ti0099a430-0436.bb.online.no) has joined #ceph
[18:40] <glambert> connecting to ceph rbd via libvirt but it's timing out
[18:40] <glambert> ceph is rebalancing or whatever it does
[18:41] <glambert> got objects degraded and getting messages like active+degraded+remapped+backfilling in ceph -w
[18:41] * scalability-junk (uid6422@id-6422.ealing.irccloud.com) has joined #ceph
[18:41] <glambert> any idea why rbd is timing out though?
[18:41] * fireD (~fireD@93-142-199-231.adsl.net.t-com.hr) has joined #ceph
[18:41] * mattch (~mattch@pcw3047.see.ed.ac.uk) Quit (Quit: Leaving.)
[18:43] * mattch (~mattch@pcw3047.see.ed.ac.uk) has joined #ceph
[18:43] * fireD_ (~fireD@93-142-208-157.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[18:43] * xarses (~andreww@12.164.168.115) has joined #ceph
[18:44] <glambert> seems very similar to this: http://tracker.ceph.com/issues/6333
[18:44] * diegows (~diegows@mail.bittanimation.com) has joined #ceph
[18:53] * fireD_ (~fireD@78-0-221-95.adsl.net.t-com.hr) has joined #ceph
[18:53] <davidzlap> saturnin1: If you know which replica(s) of an object are bad, remove them manually and then repair will replace them with good copies.
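A hedged sketch of that workflow with the standard ceph CLI (pgid is a placeholder; the on-disk removal step depends on your OSD layout):

```console
ceph health detail      # lists the inconsistent PGs
ceph pg map <pgid>      # shows which OSDs hold that PG
# on the OSD holding the bad replica: stop the osd daemon, delete the bad
# object's file from the PG's directory, start the daemon again, then:
ceph pg repair <pgid>
```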
[18:54] * fireD (~fireD@93-142-199-231.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[18:56] * Haksoldier (~isLamatta@88.234.107.14) has joined #ceph
[18:56] <Haksoldier> EUZUBILLAHIMINEŞŞEYTANIRRACIM BISMILLAHIRRAHMANIRRAHIM
[18:56] <Haksoldier> ALLAHU EKBERRRRR! LA İLAHE İLLALLAH MUHAMMEDEN RESULULLAH!!
[18:56] * Haksoldier (~isLamatta@88.234.107.14) has left #ceph
[18:56] <janos> again?
[18:56] <Sysadmin88> lol
[18:57] <janos> he signed me up just yesterday! man once you give they keep coming back to the well!
[18:57] <Sysadmin88> anyone tried running Ceph as an alternative to the vsphere storage appliance?
[18:57] <janos> no, HakSoldier, i won't buy any more of your inspirational CDs!!!
[18:58] <janos> i haven't tried with anything in the vmware realm
[18:58] <janos> some may have though
[18:58] <Sysadmin88> in case my client refuses to buy the windows server 2012 licenses i recommended for cluster shared volumes...
[18:59] <janos> doh
[18:59] * fireD (~fireD@78-1-225-250.adsl.net.t-com.hr) has joined #ceph
[19:01] * mattbenjamin1 (~matt@aa2.linuxbox.com) has joined #ceph
[19:01] * nwat (~textual@128.117.152.27) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[19:01] * fireD_ (~fireD@78-0-221-95.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[19:04] * mattbenjamin (~matt@aa2.linuxbox.com) Quit (Read error: Operation timed out)
[19:05] * Sysadmin88 (~IceChat77@2.218.8.40) Quit (Quit: Not that there is anything wrong with that)
[19:05] * ldurnez (~ldurnez@proxy.ovh.net) Quit (Quit: Leaving.)
[19:06] * fireD_ (~fireD@93-139-161-91.adsl.net.t-com.hr) has joined #ceph
[19:06] * fireD (~fireD@78-1-225-250.adsl.net.t-com.hr) Quit (Read error: Operation timed out)
[19:10] * sjustlaptop (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) has joined #ceph
[19:12] * joshd1 (~jdurgin@2602:306:c5db:310:f9b6:f38b:27a3:e72) has joined #ceph
[19:14] * sleinen (~Adium@2001:620:0:25:5c3d:5f76:29a6:8ca2) Quit (Quit: Leaving.)
[19:14] * sleinen (~Adium@130.59.94.67) has joined #ceph
[19:15] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[19:17] <sjustlaptop> joao: standup?
[19:21] * Cube (~Cube@199.168.44.193) Quit (Quit: Leaving.)
[19:22] * sleinen (~Adium@130.59.94.67) Quit (Ping timeout: 480 seconds)
[19:27] <Zethrok> Just curious - what happens if I abort a rbd rm of a large image? Will it resume, do I have to start all over or will I mess something up?
[19:28] <Zethrok> I kinda forgot to screen it....
[19:29] * rmoe (~quassel@12.164.168.116) has joined #ceph
[19:37] * Cube (~Cube@199.168.44.193) has joined #ceph
[19:39] <joshd1> Zethrok: it'll start over since rbd doesn't keep track of which objects exist, but it's idempotent so safe to run again
[19:40] <Zethrok> joshd1: great, thanks!
[19:42] * pvsa (~philipp@95-91-88-117-dynip.superkabel.de) has joined #ceph
[19:43] <pvsa> hi
[19:44] <pvsa> i have a question on topic ceph+gentoo+pg stuck
[19:44] <pvsa> is there anybody who can help ?
[19:49] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) has joined #ceph
[19:50] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) has joined #ceph
[19:57] * tryggvil (~tryggvil@178.19.53.254) Quit (Quit: tryggvil)
[20:00] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) Quit (Remote host closed the connection)
[20:00] * bandrus1 (~Adium@63.192.141.3) has joined #ceph
[20:01] <loicd> does someone have contact information for Andrew Leung? he has >100 commits under a strange alias anwleung <anwleung@29311d96-e01e-0410-9327-a35deaab8ce9> :-)
[20:04] * ScOut3R (~scout3r@54009895.dsl.pool.telekom.hu) has joined #ceph
[20:05] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) has joined #ceph
[20:06] * bandrus (~Adium@63.192.141.3) Quit (Ping timeout: 480 seconds)
[20:08] * capri (~capri@p54A5790E.dip0.t-ipconnect.de) has joined #ceph
[20:10] * capri (~capri@p54A5790E.dip0.t-ipconnect.de) Quit ()
[20:10] * capri (~capri@p54A5790E.dip0.t-ipconnect.de) has joined #ceph
[20:10] * nwat (~textual@128.117.152.27) has joined #ceph
[20:11] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) Quit (Ping timeout: 480 seconds)
[20:16] * ScOut3R (~scout3r@54009895.dsl.pool.telekom.hu) Quit ()
[20:20] * sroy (~sroy@ip-208-88-110-45.savoirfairelinux.net) has joined #ceph
[20:25] * sroy (~sroy@ip-208-88-110-45.savoirfairelinux.net) Quit (Read error: Operation timed out)
[20:26] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) has joined #ceph
[20:33] * danieagle (~Daniel@179.176.55.111.dynamic.adsl.gvt.net.br) has joined #ceph
[20:38] * kyann (~oftc-webi@did75-15-88-160-187-237.fbx.proxad.net) has joined #ceph
[20:43] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[20:46] * kyann (~oftc-webi@did75-15-88-160-187-237.fbx.proxad.net) Quit (Quit: Page closed)
[20:50] * BManojlovic (~steki@217-162-200-111.dynamic.hispeed.ch) has joined #ceph
[20:50] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[20:50] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) has joined #ceph
[20:53] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[20:53] * sarob (~sarob@2001:4998:effd:600:401b:9b86:54d:2799) has joined #ceph
[20:57] * Henson_D (~kvirc@lord.uwaterloo.ca) has joined #ceph
[20:59] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[20:59] * japuzzo_ (~japuzzo@ool-4570886e.dyn.optonline.net) has joined #ceph
[21:06] * Cube (~Cube@199.168.44.193) Quit (Quit: Leaving.)
[21:06] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) Quit (Ping timeout: 480 seconds)
[21:07] * Cube (~Cube@199.168.44.193) has joined #ceph
[21:10] * Cube (~Cube@199.168.44.193) Quit ()
[21:11] * BManojlovic (~steki@217-162-200-111.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[21:12] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[21:13] * danieagle (~Daniel@179.176.55.111.dynamic.adsl.gvt.net.br) Quit (Quit: inte+ e Obrigado Por tudo mesmo! :-D)
[21:14] <pvsa> /exec -o echo bla
[21:14] <pvsa> bla
[21:16] <pvsa> sorry
[21:18] * kyann (~oftc-webi@did75-15-88-160-187-237.fbx.proxad.net) has joined #ceph
[21:18] * sagelap (~sage@119.225.29.62) has joined #ceph
[21:22] <pvsa> rbd/mount -c ceph doesn't work
[21:22] <pvsa> and the health says: HEALTH_WARN 292 pgs stuck inactive; 292 pgs stuck unclean; clock skew detected
[21:23] <pvsa> is the stuck or the clock skew more important ?
[21:25] * briancline_ is now known as briancline
[21:27] * capri (~capri@p54A5790E.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[21:30] * chris_lu (~ccc2@128.180.82.4) Quit (Read error: Connection reset by peer)
[21:31] * nwat (~textual@128.117.152.27) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[21:34] * capri (~capri@p54A55129.dip0.t-ipconnect.de) has joined #ceph
[21:35] * diegows (~diegows@mail.bittanimation.com) Quit (Ping timeout: 480 seconds)
[21:36] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[21:36] * capri_wk (~capri@p579F9ACB.dip0.t-ipconnect.de) has joined #ceph
[21:37] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) has joined #ceph
[21:38] * chris_lu (~ccc2@bolin.Lib.lehigh.EDU) has joined #ceph
[21:38] * Cube (~Cube@199.168.44.193) has joined #ceph
[21:39] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[21:41] * nwat (~textual@martini.fl-guest.ucar.edu) has joined #ceph
[21:42] * capri (~capri@p54A55129.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[21:44] * diegows (~diegows@mail.bittanimation.com) has joined #ceph
[21:44] * bandrus1 (~Adium@63.192.141.3) Quit (Quit: Leaving.)
[21:45] * mattt_ (~textual@cpc25-rdng20-2-0-cust162.15-3.cable.virginm.net) has joined #ceph
[21:49] * nwat (~textual@martini.fl-guest.ucar.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[21:58] * nwat (~textual@martini.fl-guest.ucar.edu) has joined #ceph
[21:58] * glambert_ (~glambert@37.157.50.243) has joined #ceph
[21:58] <glambert_> hi, I've got 537 pgs: 534 active+clean and 3 incomplete. How do I get them to be active+clean?
[22:00] * nwat (~textual@martini.fl-guest.ucar.edu) Quit ()
[22:01] <JoeGruher> anyone deployed Ceph with Fuel? it seems to be saying it will use 0MB for the Ceph journal, does it run without a journal? is that supported?
[22:01] <dmick> no, wrong, and no
[22:01] * allsystemsarego (~allsystem@188.26.167.66) Quit (Quit: Leaving)
[22:01] <JoeGruher> heh k
[22:01] <janos> you seem unsure, dmick
[22:01] <janos> ;)
[22:01] <JoeGruher> i guess i'll deploy with the default and see what it actually does
[22:02] <JoeGruher> maybe it just co-locates a default size (1GB?) journal on each spindle
[22:02] <dmick> it may be trying to tell you that when you haven't allocated a separate journal, in which case you may end up with a file in your OSD data path
[22:02] <dmick> which is not really what you want
[22:03] <JoeGruher> yeah
[22:03] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) Quit (Quit: Quitte)
[22:07] <JoeGruher> it does let you allocate space for ceph journal at the node level but it is kind of mysterious how it then makes use of that space... i dont see where to specify which OSDs have journals in that space or how large the journals should be
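dmick's point above (without a separate journal device, the journal ends up as a file in the OSD data path) maps to these ceph.conf options; the values and device path here are illustrative, not Fuel's defaults:

```ini
[osd]
; journal size in MB; 1024 is an example value
osd journal size = 1024

[osd.0]
; point the journal at a dedicated partition instead of a file
; inside the OSD data path (device path is an example)
osd journal = /dev/sdb1
```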
[22:07] * sleinen (~Adium@2001:620:0:26:d122:c32a:fad:9a4a) has joined #ceph
[22:09] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[22:10] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[22:10] * nwat (~textual@martini.fl-guest.ucar.edu) has joined #ceph
[22:12] * nwat (~textual@martini.fl-guest.ucar.edu) Quit ()
[22:21] <JoeGruher> would deleting a bunch of objects (4M objects across 4 pools) trigger a scrub? i'm running some performance testing and every time the cleanup stage finishes deleting all the objects Ceph starts scrubbing PGs
[22:22] <glambert_> anyone?
[22:24] * sagelap (~sage@119.225.29.62) Quit (Ping timeout: 480 seconds)
[22:25] <davidzlap> JoeGruher: No, a regular scrub happens every 24 hours by default. Deep scrub happens every week by default. The only other way is to trigger one manually with the ceph command.
[22:26] * nwat (~textual@martini.fl-guest.ucar.edu) has joined #ceph
[22:26] <JoeGruher> OK... my test automation probably just takes 24 hours to get to this point then... thanks for the input davidzlap. Any idea if a regular scrub is going to impact performance much?
[22:29] <davidzlap> JoeGruher: The regular scrub is not too stressful.
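The intervals davidzlap quotes correspond to these OSD options; the values shown are, to the best of my knowledge, the defaults of that era (in seconds):

```ini
[osd]
osd scrub min interval = 86400     ; light scrub at most once per day
osd scrub max interval = 604800    ; force a light scrub after a week regardless of load
osd deep scrub interval = 604800   ; deep scrub weekly
```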
[22:33] * markbby (~Adium@168.94.245.2) Quit (Quit: Leaving.)
[22:35] * nwat (~textual@martini.fl-guest.ucar.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[22:37] * ScOut3R (~scout3r@54009895.dsl.pool.telekom.hu) has joined #ceph
[22:37] * sagelap (~sage@119.225.29.62) has joined #ceph
[22:38] <bjornar> When using omap/leveldb.. the keys are sorted, right?
[22:38] * diegows (~diegows@mail.bittanimation.com) Quit (Ping timeout: 480 seconds)
[22:39] * mkoo (~mkoo@chello213047069005.1.13.vie.surfer.at) Quit (Quit: mkoo)
[22:40] * Cube (~Cube@199.168.44.193) Quit (Quit: Leaving.)
[22:42] <joshd1> bjornar: yes
[22:43] * vata (~vata@2607:fad8:4:6:a891:c01c:3849:bc26) has joined #ceph
[22:43] <bjornar> joshd1, what happens if I use a nonexistent "start-key"?
[22:43] <bjornar> say I have keys a, b, d ... and ask to start from c ..
[22:44] <joshd1> you'll get d
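joshd1's answer (omap keys are sorted, and a nonexistent start key yields the next key after it) can be illustrated with a pure-Python sketch; this is a standalone model of the semantics, not the librados API itself:

```python
import bisect

def omap_lower_bound(keys, start_after):
    """Return keys strictly greater than start_after, in sorted order.

    Models the omap behaviour described above: asking to start from a
    nonexistent key "c" among {a, b, d} yields "d".
    """
    keys = sorted(keys)
    i = bisect.bisect_right(keys, start_after)
    return keys[i:]

print(omap_lower_bound(["a", "b", "d"], "c"))  # → ['d']
```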
[22:44] <bjornar> nice
[22:45] <bjornar> and no real limitation on value size?
[22:45] * pvsa (~philipp@95-91-88-117-dynip.superkabel.de) Quit (Quit: Ex-Chat)
[22:46] * nwat (~textual@martini.fl-guest.ucar.edu) has joined #ceph
[22:46] * Tamil2 (~Adium@cpe-76-168-18-224.socal.res.rr.com) has joined #ceph
[22:47] <bjornar> joshd1, another question... when using btrfs and snapshots... what happens when data is moved?
[22:47] <joshd1> you might have performance issues with larger values (like multiple mb) but I'm not sure anyone's tried anything too large with it
[22:47] <joshd1> rados snapshots will use the clone ioctl on btrfs, which isn't the same as a btrfs snapshot
[22:50] <bjornar> joshd1, ok... so that means it will not affect moving to other osd? .. just wondered if there was some send/receive trickery
[22:50] <saturnin1> anyone know of a good script for generating signed admin ops api requests?
[22:50] * sarob (~sarob@2001:4998:effd:600:401b:9b86:54d:2799) Quit (Remote host closed the connection)
[22:50] <joshd1> no, nothing like that (afaik btrfs send/recv isn't ready yet anyway)
[22:51] * sarob (~sarob@2001:4998:effd:600:401b:9b86:54d:2799) has joined #ceph
[22:51] * diegows (~diegows@mail.bittanimation.com) has joined #ceph
[22:51] <bjornar> joshd1, there were some issues with it this week, it seems...
[22:52] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[22:52] * JoeGruher (~JoeGruher@jfdmzpr03-ext.jf.intel.com) Quit (Remote host closed the connection)
[22:52] * sarob_ (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) has joined #ceph
[22:53] * Henson_D (~kvirc@lord.uwaterloo.ca) Quit (Quit: KVIrc KVIrc Equilibrium 4.1.3, revision: 5988, sources date: 20110830, built on: 2012-10-09 20:23:01 UTC http://www.kvirc.net/)
[22:56] <joshd1> saturnin1: if you know python you could adapt https://github.com/ceph/radosgw-agent/blob/master/radosgw_agent/client.py
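For context, radosgw's admin ops API uses S3-style (AWS signature v2) request signing, which the linked client.py implements; a minimal standalone sketch (the key, resource, and date values are illustrative, and the canonical string is abbreviated to the common fields):

```python
import base64
import hashlib
import hmac

def sign_request(secret_key, method, resource, date, content_type="", content_md5=""):
    """Compute an S3-style (AWS v2) signature for an admin ops request."""
    # Canonical string: method, MD5, type, date, resource, newline-joined.
    string_to_sign = "\n".join([method, content_md5, content_type, date, resource])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# The request would then carry: Authorization: AWS <access_key>:<signature>
sig = sign_request("secret", "GET", "/admin/usage", "Thu, 09 Jan 2014 22:50:00 +0000")
```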
[22:56] <bjornar> joshd1, btw ... is anyone considering looking at the librados docs? eventually just replace the api docs with the sourcecode ;)
[22:57] * BManojlovic (~steki@217-162-200-111.dynamic.hispeed.ch) has joined #ceph
[22:58] <saturnin1> joshd1: cool, thanks
[22:58] <joshd1> bjornar: yeah, actually there's some more work going on there lately https://github.com/ceph/ceph/commits/wip-doc-librados-intro
[22:58] <joshd1> bjornar: patches always welcome, of course
[22:59] * sarob (~sarob@2001:4998:effd:600:401b:9b86:54d:2799) Quit (Ping timeout: 480 seconds)
[23:00] * diegows (~diegows@mail.bittanimation.com) Quit (Ping timeout: 480 seconds)
[23:01] <xarses> hmm rados bench doesn't want to do a seq op after I did write
[23:02] <xarses> http://paste.openstack.org/show/60918/
[23:03] * kaizh (~oftc-webi@128-107-239-235.cisco.com) has joined #ceph
[23:04] <bjornar> xarses, dont delete the written files..
[23:04] <bjornar> --keep-foo
[23:04] * rmoe_ (~quassel@12.164.168.115) has joined #ceph
[23:04] <xarses> on the end of write ?
[23:05] <xarses> bjornar: rados bench -p rbd 300 write --keep-foo
[23:08] * rmoe (~quassel@12.164.168.116) Quit (Ping timeout: 480 seconds)
[23:09] * bandrus (~Adium@63.192.141.3) has joined #ceph
[23:10] <bjornar> it is --no-cleanup
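xarses' failing seq pass above follows from the write pass deleting its objects on exit; a sketch using the flag bjornar names (the cleanup subcommand may depend on the rados version):

```console
rados bench -p rbd 300 write --no-cleanup   # keep the benchmark objects
rados bench -p rbd 300 seq                  # read them back sequentially
rados -p rbd cleanup                        # remove the benchmark objects afterwards
```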
[23:11] <bjornar> joshd1, does the librados cluster handle correctly handle lost connections and crushmap updates?
[23:12] * nwat (~textual@martini.fl-guest.ucar.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[23:13] * nwat (~textual@martini.fl-guest.ucar.edu) has joined #ceph
[23:13] * nwat (~textual@martini.fl-guest.ucar.edu) Quit ()
[23:15] <dmick> bjornar: it should
[23:16] <joshd1> yeah, all retries are handled for you
[23:22] * nwat (~textual@martini.fl-guest.ucar.edu) has joined #ceph
[23:27] * sleinen (~Adium@2001:620:0:26:d122:c32a:fad:9a4a) Quit (Quit: Leaving.)
[23:27] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[23:29] * kyann (~oftc-webi@did75-15-88-160-187-237.fbx.proxad.net) Quit (Remote host closed the connection)
[23:33] <bjornar> So when I have the connection, I should always be able to get ioctx's?
[23:35] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[23:35] <joshd1> yeah, assuming you've got free memory
[23:36] <joshd1> and the pool exists
[23:40] * nwat (~textual@martini.fl-guest.ucar.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[23:40] * sagelap (~sage@119.225.29.62) Quit (Read error: Operation timed out)
[23:42] * sagelap (~sage@119.225.29.62) has joined #ceph
[23:43] * nwat (~textual@martini.fl-guest.ucar.edu) has joined #ceph
[23:43] * Sysadmin88 (~IceChat77@2.218.8.40) has joined #ceph
[23:43] * bjornar (~bjornar@ti0099a430-0436.bb.online.no) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.