#ceph IRC Log

IRC Log for 2013-08-08

Timestamps are in GMT/BST.

[0:04] * tnt (~tnt@109.130.80.16) Quit (Ping timeout: 480 seconds)
[0:07] * buck (~buck@bender.soe.ucsc.edu) has left #ceph
[0:17] * sleinen (~Adium@2001:620:0:25:8ce0:88a9:4629:accd) Quit (Quit: Leaving.)
[0:21] * danieagle (~Daniel@177.205.183.226.dynamic.adsl.gvt.net.br) has joined #ceph
[0:22] * shoosah (~ssha637@en-279303.engad.foe.auckland.ac.nz) has joined #ceph
[0:22] * BillK (~BillK-OFT@124-148-246-233.dyn.iinet.net.au) has joined #ceph
[0:23] <shoosah> what is the latest version of ceph to install?!
[0:25] <lurbs> Latest stable is 0.61.7 (aka cuttlefish), although I believe that the next stable release (0.67.x, aka dumpling) is due out soon.
[0:25] <shoosah> alright thanks buddy
[0:26] <shoosah> do u have any link?!
[0:26] <lurbs> To what? A download link?
[0:27] * tziOm (~bjornar@ti0099a340-dhcp0395.bb.online.no) has joined #ceph
[0:27] <shoosah> yup and how to follow the steps to create osds, mds, bla bla bla
[0:27] <lurbs> http://ceph.com/resources/downloads/ has various downloads, for either packages or source.
[0:28] <lurbs> Deployment guidelines are at: http://ceph.com/docs/master/rados/deployment/
[0:28] * diegows (~diegows@200.68.116.185) Quit (Ping timeout: 480 seconds)
[0:28] <shoosah> great cheers
[0:30] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:31] <shoosah> is this command right > sudo chmod 0440 /etc/sudoers.d/ceph
[0:31] <shoosah> or is it supposed to be 644
[0:31] <shoosah> ?
[0:32] <lurbs> Nope, /etc/sudoers.d/* files can't be world readable or your sudo won't work.
[0:33] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) has joined #ceph
[0:35] * tziOm (~bjornar@ti0099a340-dhcp0395.bb.online.no) Quit (Remote host closed the connection)
[0:36] <shoosah> this one > ceph ALL = (root) NOPASSWD:ALL, here does the "ceph" refer to ceph itself or to the user that I created?
[0:38] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[0:39] <shoosah> I just tried to add this >
[0:39] <shoosah> echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
[0:39] <shoosah> sudo chmod 0440 /etc/sudoers.d/ceph
[0:40] <shoosah> to the /etc/sudoers.d/*
[0:40] <shoosah> but I'm unable to open it up anymore!
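
The "ceph" in that rule is the username the rule applies to, so it should name whatever deploy user was actually created. As a minimal sketch of a safer way to add such a rule, assuming the user really is called "ceph": visudo refuses to save a file with invalid syntax, which avoids exactly the lockout shoosah hit.

    # as root; visudo validates the file before saving
    visudo -f /etc/sudoers.d/ceph
    #   contents: ceph ALL = (root) NOPASSWD:ALL
    chmod 0440 /etc/sudoers.d/ceph
    # if sudo is already broken, check all sudoers files from a root shell (e.g. via su):
    visudo -c
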
[0:45] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[0:51] * devoid (~devoid@130.202.135.223) Quit (Quit: Leaving.)
[0:54] * grepory1 (~Adium@50-115-70-146.static-ip.telepacific.net) has joined #ceph
[0:54] * grepory (~Adium@50-115-70-146.static-ip.telepacific.net) Quit (Read error: Connection reset by peer)
[0:54] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) Quit (Ping timeout: 480 seconds)
[0:56] * Kioob (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) Quit (Ping timeout: 480 seconds)
[1:00] * Kioob (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) has joined #ceph
[1:00] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[1:10] * indeed (~indeed@206.124.126.33) Quit (Remote host closed the connection)
[1:14] * mschiff (~mschiff@85.182.236.82) Quit (Remote host closed the connection)
[1:19] * danieagle (~Daniel@177.205.183.226.dynamic.adsl.gvt.net.br) Quit (Quit: See you later, and thanks for everything, really! :-D)
[1:24] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: I'm off, and you do what you want...)
[1:27] * mtanski (~mtanski@69.193.178.202) Quit (Ping timeout: 480 seconds)
[1:27] * indeed (~indeed@206.124.126.33) has joined #ceph
[1:27] * indeed (~indeed@206.124.126.33) Quit (Remote host closed the connection)
[2:10] * shoosah (~ssha637@en-279303.engad.foe.auckland.ac.nz) Quit (Quit: Konversation terminated!)
[2:32] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[2:39] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[2:39] * AfC (~andrew@2407:7800:200:100a:7583:72be:a637:55b8) has joined #ceph
[2:50] * The_Bishop (~bishop@2001:470:50b6:0:b90d:9906:f15e:d46b) Quit (Ping timeout: 480 seconds)
[2:55] * nerdtron (~kenneth@202.60.8.252) has joined #ceph
[2:59] * xmltok (~xmltok@pool101.bizrate.com) Quit (Quit: Bye!)
[2:59] * The_Bishop (~bishop@2001:470:50b6:0:c991:8a4e:626f:4c75) has joined #ceph
[3:04] * AfC (~andrew@2407:7800:200:100a:7583:72be:a637:55b8) Quit (Ping timeout: 480 seconds)
[3:07] * yy-nm (~chatzilla@115.196.74.105) has joined #ceph
[3:12] * AfC (~andrew@2407:7800:200:1011:f4e1:b1cd:48bf:d89c) has joined #ceph
[3:13] * LeaChim (~LeaChim@97e00998.skybroadband.com) Quit (Ping timeout: 480 seconds)
[3:18] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[3:47] * jeff-YF (~jeffyf@pool-173-66-21-43.washdc.fios.verizon.net) has joined #ceph
[3:57] * The_Bishop (~bishop@2001:470:50b6:0:c991:8a4e:626f:4c75) Quit (Quit: Who the hell is this Peer? If I catch him, I'll reset his connection!)
[4:01] * jaydee (~jeandanie@124x35x46x15.ap124.ftth.ucom.ne.jp) has joined #ceph
[4:04] * silversurfer (~jeandanie@124x35x46x12.ap124.ftth.ucom.ne.jp) Quit (Ping timeout: 480 seconds)
[4:08] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) Quit (Remote host closed the connection)
[4:08] * troug (~troug@c-50-140-187-64.hsd1.il.comcast.net) Quit (Remote host closed the connection)
[4:09] * silversurfer (~jeandanie@124x35x46x12.ap124.ftth.ucom.ne.jp) has joined #ceph
[4:11] * jaydee (~jeandanie@124x35x46x15.ap124.ftth.ucom.ne.jp) Quit (Read error: Operation timed out)
[4:17] * julian (~julianwa@125.69.104.58) has joined #ceph
[4:46] * john_barbee_ (~jbarbee@c-98-220-74-174.hsd1.in.comcast.net) has joined #ceph
[5:05] * fireD (~fireD@93-139-160-151.adsl.net.t-com.hr) has joined #ceph
[5:07] * fireD_ (~fireD@93-139-175-22.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[5:26] * john_barbee__ (~jbarbee@c-98-220-74-174.hsd1.in.comcast.net) has joined #ceph
[5:26] * john_barbee_ (~jbarbee@c-98-220-74-174.hsd1.in.comcast.net) Quit (Read error: Connection reset by peer)
[5:39] * smiley (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) has joined #ceph
[5:45] * AfC (~andrew@2407:7800:200:1011:f4e1:b1cd:48bf:d89c) Quit (Quit: Leaving.)
[5:49] * smiley (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) Quit (Quit: smiley)
[5:49] * The_Bishop (~bishop@e179017183.adsl.alicedsl.de) has joined #ceph
[6:09] * yanzheng (~zhyan@134.134.139.70) has joined #ceph
[6:16] * gentleben (~sseveranc@c-98-207-40-73.hsd1.ca.comcast.net) Quit (Quit: gentleben)
[6:21] * loopy (~torment@pool-96-228-147-185.tampfl.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[6:25] * yy-nm (~chatzilla@115.196.74.105) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 22.0/20130618035212])
[6:38] * loopy (~torment@pool-72-64-182-94.tampfl.fios.verizon.net) has joined #ceph
[6:39] * jeff-YF (~jeffyf@pool-173-66-21-43.washdc.fios.verizon.net) Quit (Quit: jeff-YF)
[6:39] * _Tass4dar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[6:40] * grepory1 (~Adium@50-115-70-146.static-ip.telepacific.net) Quit (Quit: Leaving.)
[6:44] * _Tassadar (~tassadar@tassadar.xs4all.nl) Quit (Ping timeout: 480 seconds)
[6:44] * _Tass4da1 (~tassadar@tassadar.xs4all.nl) has joined #ceph
[6:48] * _Tass4dar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Ping timeout: 480 seconds)
[6:49] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) Quit (Quit: Leaving.)
[6:49] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) has joined #ceph
[6:56] * nwat (~nwat@216.1.187.162) has joined #ceph
[6:58] * dlan (~dennis@116.228.88.131) has joined #ceph
[7:02] <athrift> So, trying to follow the ceph-deploy guide from the wiki, it's not clear if I am meant to run ceph-deploy as the newly created user that is in the sudoers group, or as root using sudo. It fails when running as the user, but the root user does not have the SSH keys that the created user does....
[7:03] <athrift> seems like something in the guide is missing
[7:03] <athrift> will go back to the "old" way ;)
[7:04] <mikedawson_> athrift: Inktank has a new employee working on ceph-deploy full-time, please report the issues you have so Alfredo can work through them
[7:07] * yy-nm (~chatzilla@115.196.74.105) has joined #ceph
[7:17] <athrift> mikedawson_: will do, I think it's more that the method described in the docs is not right
[7:17] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) has joined #ceph
[7:19] * john_barbee__ (~jbarbee@c-98-220-74-174.hsd1.in.comcast.net) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 22.0/20130618035212])
[7:19] * nwat (~nwat@216.1.187.162) Quit (Ping timeout: 480 seconds)
[7:19] <mikedawson_> athrift: I think John Wilkins handles the documentation
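
A hedged sketch of the workflow the preflight guide seems to intend (the hostnames and the user name "ceph" are illustrative): the deploy user exists on every node with passwordless sudo, SSH keys are generated and distributed as that user, and ceph-deploy is then run as that same user rather than as root.

    # on the admin node, logged in as the deploy user (not root)
    ssh-keygen
    ssh-copy-id ceph@node1      # repeat for each target node
    ceph-deploy new node1 node2 node3
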
[7:30] * madkiss (~madkiss@2001:6f8:12c3:f00f:15b6:17ff:bb27:feb6) has joined #ceph
[7:33] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[7:38] * Machske (~Bram@d5152D87C.static.telenet.be) Quit ()
[7:59] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[8:02] * yanzheng (~zhyan@134.134.139.70) Quit (Remote host closed the connection)
[8:04] * mschiff (~mschiff@85.182.236.82) has joined #ceph
[8:08] * tnt (~tnt@109.130.80.16) has joined #ceph
[8:11] * AfC (~andrew@2407:7800:200:1011:f0a3:241:15ee:11ca) has joined #ceph
[8:27] * huangjun (~kvirc@221.234.156.126) has joined #ceph
[8:27] * Vincent_Valentine (~Vincent_V@49.206.158.155) has joined #ceph
[8:29] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[8:29] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[8:32] * AfC (~andrew@2407:7800:200:1011:f0a3:241:15ee:11ca) Quit (Ping timeout: 480 seconds)
[8:39] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[8:39] <huangjun> our cluster was set to auto-start, and after I pulled out an osd disk the system cannot start; it hangs on mounting the disk to the data dir.
[8:40] <huangjun> so can I add a timeout if the mount takes too long?
[8:41] * sleinen1 (~Adium@2001:620:0:26:34b2:d430:845c:f38) has joined #ceph
[8:44] * mschiff (~mschiff@85.182.236.82) Quit (Remote host closed the connection)
[8:46] * Vincent_Valentine (~Vincent_V@49.206.158.155) Quit (Ping timeout: 480 seconds)
[8:47] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[8:50] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[8:50] <nerdtron> you just pulled out an osd and didn't mark it as down first?
[8:56] <huangjun> no, I just powered off the host and pulled out a disk, then turned the host back on; it got stuck mounting that osd
[8:56] <huangjun> that is not friendly, so I need to time out the mount process
[9:00] <nerdtron> that is bad... that method is wrong. can't you just plug the disk back in, mark the osd as down on the ceph cluster, and then safely remove it?
[9:00] <nerdtron> BTW how many hosts, mons and osds does the cluster have?
[9:02] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[9:03] <joelio> that doesn't really help in a fault situation though, you should be able to boot and have the osd mount time out (so the rest of the node and osds can come up)
[9:03] <joelio> I've experienced this myself, had remote access though, so skipped it
[9:04] <joelio> huangjun: what distro? I'm on ubuntu (wondering if it's something OS-specific that can be overridden)
[9:05] <huangjun> we're on centos 6.4
[9:06] <joelio> ok, so maybe it's the way Ceph is adding mounts?
[9:06] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[9:06] <joelio> did you use ceph-deploy?
[9:06] <huangjun> so if we push the new disk into the host, the osd daemon can then start up?
[9:06] <huangjun> yes, we use ceph-deploy
[9:06] <joelio> I was just able to skip the mount when booting by pressing 'S'
[9:07] <joelio> when booting
[9:07] <joelio> but not sure what CentOS does
[9:07] <joelio> adding a disk won't necessarily work as you'd need to add it as an osd, which is problematic if the host/cluster is down
[9:08] <joelio> do you have just the one node actually?
[9:09] <huangjun> no, we have many hosts
[9:09] <nerdtron> how many mons?
[9:09] <huangjun> joelio: what's your resolution if you run ceph as an auto-run service?
[9:09] * Midnightmyth_ (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[9:10] <joelio> huangjun: as I said, when I booted once with a degraded osd disk (it had been removed for testing) - the server got stuck at boot waiting to mount the osd
[9:10] <joelio> I had remote console, so just pressed 'S' to skip when booting - so the rest of the services came up
[9:11] <huangjun> because the osd, mds, and kclient mounts all depend on the mon; if the mon daemon is down (say, not running yet), the other services cannot come up and may even hang
[9:11] <joelio> the OSD was down obviously, but it meant I could re-add the disk... if it was faulty, that's where I would add a fresh one and remake it as an OSD
[9:11] <huangjun> joelio: ok, I'll try it
[9:13] <joelio> it does sound like a bug to me, I'll do some digging when I get to work (it'd be important to ensure hosts can boot degraded!)
[9:14] <joelio> I'll also see what changes have been put in ceph-deploy, as I built my cluster a while back, maybe that bug has been fixed
[9:15] * mschiff (~mschiff@p4FD7C98F.dip0.t-ipconnect.de) has joined #ceph
[9:15] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[9:16] <huangjun> thanks for the reply
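
One way to get the boot behaviour huangjun wants, assuming the osd data directories are mounted via /etc/fstab (the device path and mount point below are placeholders): the nofail mount option lets boot continue when the device is absent, so a pulled disk no longer blocks the rest of the node from coming up.

    # /etc/fstab -- illustrative entry only
    /dev/sdb1  /var/lib/ceph/osd/ceph-0  xfs  defaults,noatime,nofail  0 0
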
[9:22] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:23] * KrisK (~krzysztof@213.17.226.11) has joined #ceph
[9:23] <KrisK> Hello
[9:24] <KrisK> did anyone get errors like this when deploying ceph:
[9:24] <KrisK> W: Failed to fetch http://ceph.com/debian-cuttlefish/dists/precise/main/binary-amd64/Packages 403 Forbidden
[9:24] <KrisK> E: Some index files failed to download. They have been ignored, or old ones used instead.
[9:24] <KrisK> I'm using http://ceph.com/docs/master/start/quick-start-preflight/
[9:25] * saabylaptop (~saabylapt@2a02:2350:18:1010:ac3d:3d15:d18:c34e) has joined #ceph
[9:27] * julian (~julianwa@125.69.104.58) Quit (Read error: Connection reset by peer)
[9:31] <nerdtron> using ubuntu? I just ignore that... the installation still works
[9:31] * julian (~julianwa@125.69.104.58) has joined #ceph
[9:31] * julian (~julianwa@125.69.104.58) Quit (Read error: Connection reset by peer)
[9:32] * julian (~julianwa@125.69.104.58) has joined #ceph
[9:32] * julian (~julianwa@125.69.104.58) Quit (Read error: Connection reset by peer)
[9:34] * tnt (~tnt@109.130.80.16) Quit (Ping timeout: 480 seconds)
[9:34] <Kioob`Taff> KrisK: and have you got access to that file (http://ceph.com/debian-cuttlefish/dists/precise/main/binary-amd64/Packages) ?
[9:35] <Kioob`Taff> does a wget from your server work?
[9:35] <KrisK> sure I have
[9:36] <KrisK> and yes it is working
[9:36] * CliMz (~CliMz@194.88.193.33) has joined #ceph
[9:36] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[9:36] * joelio finds eu server a little better
[9:38] <KrisK> the same
[9:38] <KrisK> W: Failed to fetch http://eu.ceph.com/d
[9:39] <KrisK> 403 Forbidden
[9:40] <Kioob`Taff> does your APT configuration use a proxy?
[9:41] <joelio> yea, fine for me too
[9:41] <KrisK> oops
[9:41] <KrisK> I just saw proxy lines in apt.conf
[9:41] <KrisK> I built a massive number of servers using MAAS
[9:42] <KrisK> now all is fine
[9:42] <KrisK> just removed line with proxy
[9:42] <KrisK> thanks
[9:42] <KrisK> great idea
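
For anyone hitting the same 403: the culprit here was an apt proxy written by the provisioning tool. A quick sketch for finding it, plus an alternative that keeps the proxy but bypasses it for ceph.com only (the drop-in file name is an assumption):

    grep -ri proxy /etc/apt/apt.conf /etc/apt/apt.conf.d/ 2>/dev/null
    # or override the proxy just for the ceph.com origin instead of removing it:
    echo 'Acquire::http::Proxy::ceph.com "DIRECT";' | sudo tee /etc/apt/apt.conf.d/99-no-proxy-ceph
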
[9:42] <loicd> KrisK: how massive ?
[9:43] <KrisK> 20+ at begining
[9:43] <KrisK> but there will be more
[9:43] <KrisK> now I will test ceph
[9:43] <loicd> nice :-)
[9:43] * tnt (~tnt@212-166-48-236.win.be) has joined #ceph
[9:44] <joelio> KrisK: if you're using ceph-deploy, I'd look at getting the latest version in gitbuilder (if it's not been pushed yet) - lots of nice bug fixes gone in
[9:44] <joelio> if not, ignore me :)
[9:45] <KrisK> I assume that you are using this product
[9:45] <KrisK> joelio: are you happy with it ?
[9:45] <joelio> ceph, for sure!
[9:45] <KrisK> openstack or something else?
[9:46] <joelio> not got a large cluster, only 6 hosts and 36 osds - but it works great
[9:46] <joelio> opennebula
[9:46] <joelio> lots of middleware supported though
[9:46] * ScOut3R (~ScOut3R@catv-89-133-25-52.catv.broadband.hu) has joined #ceph
[9:47] <KrisK> nice
[9:47] <joelio> 2, soon to be 3 hypervisors running on it - plus we're building the same at another site
[9:48] * julian (~julianwa@125.69.104.58) has joined #ceph
[9:48] <joelio> used for engineers internally to spin up vms, test ideas.. also for systems (ci, orchestration tools etc.)
[9:48] * julian (~julianwa@125.69.104.58) Quit (Read error: Connection reset by peer)
[9:50] <joelio> we're looking to push out further if the next few months go well - quite exciting being involved actually :)
[9:50] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has left #ceph
[9:53] <KrisK> nice, we have openstack with a lot of vms
[9:54] <KrisK> 1k+
[9:54] <KrisK> and now planning a new env
[9:54] <KrisK> so test test test
[9:55] <joelio> yep, couldn't agree more
[10:00] * smiley (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) has joined #ceph
[10:07] * Midnightmyth_ (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[10:07] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[10:07] * ChanServ sets mode +v andreask
[10:09] * root (~chatzilla@180.111.186.53) has joined #ceph
[10:10] * allsystemsarego (~allsystem@188.25.130.190) has joined #ceph
[10:14] * odyssey4me (~odyssey4m@165.233.71.2) has joined #ceph
[10:23] <loicd> dmick: hi
[10:34] * LeaChim (~LeaChim@97e00998.skybroadband.com) has joined #ceph
[10:34] * KindTwo (KindOne@h194.33.186.173.dynamic.ip.windstream.net) has joined #ceph
[10:37] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[10:37] * KindTwo is now known as KindOne
[10:40] * bwesemann (~bwesemann@2001:1b30:0:6:c829:66ef:93a9:b43) Quit (Remote host closed the connection)
[10:40] * bwesemann (~bwesemann@2001:1b30:0:6:dad:9187:5580:7bf0) has joined #ceph
[10:57] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[10:58] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[11:16] * Vincent_Valentine (~Vincent_V@115.119.113.218) has joined #ceph
[11:23] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[11:32] * yy-nm (~chatzilla@115.196.74.105) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 22.0/20130618035212])
[11:41] <huangjun> mounting the fuse client returns 95, operation not supported
[11:42] <huangjun> i'm using centos 6.4
[11:42] <huangjun> is this related to selinux
[11:42] <huangjun> ?
[11:47] <Kioob`Taff> is fuse enabled under centos ?
[11:56] * BillK (~BillK-OFT@124-148-246-233.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[11:56] * BManojlovic (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[12:00] <joelio> huangjun: check the tunables - try and set to default and remount
[12:05] * leseb (~leseb@88-190-214-97.rev.dedibox.fr) Quit (Killed (NickServ (Too many failed password attempts.)))
[12:06] * leseb (~leseb@88-190-214-97.rev.dedibox.fr) has joined #ceph
[12:20] * smiley (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) Quit (Quit: smiley)
[12:22] <huangjun> joelio: how ?
[12:24] <joelio> huangjun: ceph osd crush tunables default
[12:28] * VincentValentine (~Vincent_V@115.119.113.218) has joined #ceph
[12:28] <huangjun> joelio: thanks, but I don't know what the tunables do here
[12:29] <huangjun> and does ceph-fuse mounting need a config file?
[12:29] * Vincent_Valentine (~Vincent_V@115.119.113.218) Quit (Read error: Connection reset by peer)
[12:41] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[12:43] <joelio> huangjun: if you're using ceph-fuse as an admin user, it should just mount without auth - otherwise you need to set auth
[12:43] <joelio> I had issues due to having optimal tunables set
[12:43] <joelio> I had to set to default and then it all worked fine
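
Putting joelio's fix together as a sketch (the mount point and monitor address are placeholders; ceph-fuse picks up /etc/ceph/ceph.conf and the admin keyring by default, so -m is only needed when the conf file isn't in place):

    ceph osd crush tunables default
    sudo mkdir -p /mnt/ceph
    sudo ceph-fuse -m mon-host:6789 /mnt/ceph
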
[12:44] * The_Bishop (~bishop@e179017183.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[12:45] * The_Bishop (~bishop@e179017183.adsl.alicedsl.de) has joined #ceph
[12:51] * huangjun (~kvirc@221.234.156.126) Quit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
[13:14] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[13:14] * ChanServ sets mode +v andreask
[13:16] * sagewk (~sage@38.122.20.226) Quit (Ping timeout: 480 seconds)
[13:16] * yehudasa (~yehudasa@38.122.20.226) Quit (Ping timeout: 480 seconds)
[13:16] * yehudasa (~yehudasa@2607:f298:a:607:d6be:d9ff:fe8e:174c) has joined #ceph
[13:16] * sagewk (~sage@2607:f298:a:607:219:b9ff:fe40:55fe) has joined #ceph
[13:17] * nerdtron (~kenneth@202.60.8.252) Quit (Ping timeout: 480 seconds)
[13:48] * VincentValentine (~Vincent_V@115.119.113.218) Quit (Ping timeout: 480 seconds)
[13:49] * Vincent_Valentine (~Vincent_V@115.119.113.218) has joined #ceph
[14:09] * Vincent_Valentine (~Vincent_V@115.119.113.218) Quit (Ping timeout: 480 seconds)
[14:10] * Vincent_Valentine (~Vincent_V@115.119.113.218) has joined #ceph
[14:17] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[14:22] * KindTwo (KindOne@h216.29.131.174.dynamic.ip.windstream.net) has joined #ceph
[14:24] * ScOut3R (~ScOut3R@catv-89-133-25-52.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[14:26] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[14:26] * KindTwo is now known as KindOne
[14:28] <loicd> zackc: ping ?
[14:35] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) has joined #ceph
[14:38] * AfC (~andrew@2001:44b8:31cb:d400:2451:8906:8899:b611) has joined #ceph
[14:43] * oliver1 (~oliver@p4FD06BA9.dip0.t-ipconnect.de) has joined #ceph
[14:43] * VincentValentine (~Vincent_V@183.82.2.214) has joined #ceph
[14:50] * Vincent_Valentine (~Vincent_V@115.119.113.218) Quit (Ping timeout: 480 seconds)
[14:57] * saabylaptop (~saabylapt@2a02:2350:18:1010:ac3d:3d15:d18:c34e) Quit (Quit: Leaving.)
[14:59] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) has joined #ceph
[15:02] * rongze_ (~quassel@117.79.232.201) has joined #ceph
[15:09] * rongze (~quassel@117.79.232.191) Quit (Ping timeout: 480 seconds)
[15:15] * haomaiwa_ (~haomaiwan@117.79.232.201) has joined #ceph
[15:15] * haomaiwang (~haomaiwan@117.79.232.191) Quit (Read error: Connection reset by peer)
[15:18] * haomaiwang (~haomaiwan@117.79.232.201) has joined #ceph
[15:18] * haomaiwa_ (~haomaiwan@117.79.232.201) Quit (Read error: Connection reset by peer)
[15:20] * rongze (~quassel@li565-182.members.linode.com) has joined #ceph
[15:20] * haomaiwang (~haomaiwan@117.79.232.201) Quit (Read error: Connection reset by peer)
[15:20] * haomaiwang (~haomaiwan@123.151.28.79) has joined #ceph
[15:23] * jeff-YF (~jeffyf@67.23.117.122) has joined #ceph
[15:23] * haomaiwang (~haomaiwan@123.151.28.79) Quit (Read error: Connection reset by peer)
[15:23] * VincentValentine (~Vincent_V@183.82.2.214) Quit (Ping timeout: 480 seconds)
[15:23] * haomaiwang (~haomaiwan@notes4.com) has joined #ceph
[15:24] * mrkl (~mrkl@88-149-179-73.v4.ngi.it) has joined #ceph
[15:27] * haomaiwa_ (~haomaiwan@117.79.232.201) has joined #ceph
[15:27] * haomaiwang (~haomaiwan@notes4.com) Quit (Read error: Connection reset by peer)
[15:27] * rongze_ (~quassel@117.79.232.201) Quit (Ping timeout: 480 seconds)
[15:28] * rongze_ (~quassel@117.79.232.201) has joined #ceph
[15:31] * mozg (~andrei@host217-46-236-49.in-addr.btopenworld.com) has joined #ceph
[15:31] <mozg> hello guys
[15:31] <mozg> is there a limit to the size of the vm disk image that you can store in rbd?
[15:32] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:33] * rongze (~quassel@li565-182.members.linode.com) Quit (Ping timeout: 480 seconds)
[15:44] * Vincent_Valentine (Vincent_Va@49.206.158.155) has joined #ceph
[15:50] * The_Bishop_ (~bishop@e179008060.adsl.alicedsl.de) has joined #ceph
[15:55] * nwat (~nwat@216.1.187.162) has joined #ceph
[15:56] * The_Bishop (~bishop@e179017183.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[15:59] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Remote host closed the connection)
[16:03] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[16:08] * KrisK (~krzysztof@213.17.226.11) Quit (Quit: KrisK)
[16:10] * sprachgenerator (~sprachgen@130.202.135.172) has joined #ceph
[16:11] * mbjorling (~SilverWol@130.226.133.120) Quit (Remote host closed the connection)
[16:11] * The_Bishop__ (~bishop@f052100183.adsl.alicedsl.de) has joined #ceph
[16:14] <mozg> does anyone know if you can use compression with vm images stored in rbd?
[16:15] * The_Bishop_ (~bishop@e179008060.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[16:17] <ron-slc> mozg: No size limitation that I know of. I have a 1TB volume in use. I'd say the rule of thumb is if the rbd command lets you create it, you're good to go.
[16:17] <mozg> thanks
[16:17] <guppy> mozg: compression where, above rbd? should be fine.
[16:18] <ron-slc> mozg: ceph/rbd doesn't currently include compression features, so any compression would have to be done at the RBD/Guest level, or on the btrfs filesystem.. But for any data you remotely care about, currently you should be using XFS.
[16:18] <mozg> yeah, I am using xfs at the moment
[16:19] <ron-slc> cool. btrfs will be cool "some day" soon. ;)
[16:19] <mozg> i've noticed that when I am migrating vm images from nfs (qcow2) to rbd the disk size in rbd is larger
[16:20] <mozg> while looking at it i've noticed that qcow2 has the same size as your used data
[16:20] <mozg> but rbd disk size is the same as the specified disk volume
[16:20] <guppy> I believe qcow2 only allocates space as it's used
[16:20] <ron-slc> maybe, maybe not. all RBD images are created in a sparse manner. They will report for example 300GB, but if you only ever write 1GB to the volume, the actual use will be around 1GB
[16:21] <mozg> so if I have 1GB of data saved on a 500GB volume - the qcow disk size would be around 1GB
[16:21] <mozg> but rbd will be 500GB
[16:21] <ron-slc> But the RBD ls/list command doesn't give any real vs. sparse numbers
[16:21] <guppy> mozg: rbd will report 500GB because that's the size of the block device
[16:21] <guppy> but as ron-slc says, in the back, it's only allocating what's actually used
[16:22] <mozg> guppy: but what happens with the rbd image disk if you export it or copy to another pool
[16:22] <guppy> now, depending on how you copy it in, it could allocate it all right away if you are using dd for example.
[16:22] <mozg> i think it will operate with 500GB size
[16:22] <ron-slc> The only "catch" may be in your import process. If the qcow2 to rbd conversion doesn't obey "sparseness" rules, it may have taken the full size.
[16:22] <mozg> and it would take a while to copy/export it
[16:22] <mozg> compared with the qcow images
[16:22] <mozg> or am i not right here?
[16:22] <guppy> mozg: yeah, if you just imported it by copying the data, all 500GB would have been copied.
[16:23] <ron-slc> mozg: I haven't verified in real life. But if you copy an rbd image between pools, it will still report the example 300GB, but only allocate the used blocks.
[16:23] <mozg> okay
[16:23] <mozg> thanks
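
A hedged sketch of an import path that preserves sparseness, assuming a qemu-img built with rbd support (image and pool names are placeholders); a plain dd-style copy would, as guppy notes, allocate the full size up front:

    qemu-img convert -f qcow2 -O raw vm.qcow2 rbd:rbd/vm-disk
    rbd info rbd/vm-disk     # reports the provisioned size, not the bytes actually stored
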
[16:25] <ron-slc> Just don't let your rados cluster fill up any one OSD; things aren't very graceful in this regard at the moment.
[16:25] <mozg> i did read about it
[16:25] <ron-slc> cool!
[16:26] <mozg> would ceph not automatically redistribute the data pretty equally between the osds if their weight is the same?
[16:26] * huangjun (~kvirc@106.120.176.54) has joined #ceph
[16:26] <ron-slc> But the beauty is, adding capacity is beyond easy, and fast. :)
[16:26] <mozg> if I add a new osd, would it automatically reallocate the data from the old osds onto the new one?
[16:27] <ron-slc> well... the problem is... In theory one OSD filling up will kill it.. Then this will trigger a rebalance for this OSD's data. Thus filling other OSDs. Then you have a totally full / crashed cluster.
[16:27] <mozg> or do I need to manually do that?
[16:27] <ron-slc> yes, if you do the "add osd" procedure, on the default crush-rule the data will balance very well.
[16:27] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) Quit (Remote host closed the connection)
[16:28] <mozg> nice
[16:28] <ron-slc> There are some non-default crush-rules, which don't rebalance perfectly, but they are only for very specific use cases; and VERY-huge clusters.
[16:29] <ron-slc> yea beats the crap out of a SAN... I sold our corporate SAN... GONE.....
[16:32] <huangjun> in some cases, does the whole cluster capacity depend on the smallest disk?
[16:33] <huangjun> if the crush leaf node is host, and we have two hosts, host1 and host2, with osd.0 (1TB), osd.1 (1TB), osd.2 (1TB) on host1, and osd.3 (500GB) on host2, then we can only use 500GB
[16:34] <ron-slc> huangjun: For example. If you have 2TB drives, these should be weighted "2" in crush. Then a 1TB drive should be weighted "1", thus balancing based upon disk capacity.
[16:35] <ron-slc> ron-slc: true, choose-leaf based upon host can definitely alter balancing as well. I have a rule on a development cluster, which chooses based on disk, not host. The cluster has 2 hosts, but I wanted 3x replication for some tests.
[16:37] <huangjun> ron-slc: yes, that will decrease data safety a little; if a host goes down and that host holds 3 (or more) disks, then some data becomes inaccessible
[16:38] * devoid (~devoid@107-219-204-197.lightspeed.cicril.sbcglobal.net) has joined #ceph
[16:38] * mrkl (~mrkl@88-149-179-73.v4.ngi.it) Quit (Quit: KVIrc 4.1.3 Equilibrium http://www.kvirc.net/)
[16:38] <huangjun> so you must make a choice between availability and stability
[16:38] <ron-slc> yes, correct it can. My choosing algorithm (in development) chooses 3 disks, on 2 hosts, with 2 disks each. so a host-down is safe, but this is definitely a case-specific non-recommended cluster config. ;)
[16:39] * mschiff_ (~mschiff@p4FD7C98F.dip0.t-ipconnect.de) has joined #ceph
[16:39] * devoid (~devoid@107-219-204-197.lightspeed.cicril.sbcglobal.net) Quit ()
[16:39] <ron-slc> in production I'm all about choose-leaf "host"
[16:39] * mschiff (~mschiff@p4FD7C98F.dip0.t-ipconnect.de) Quit (Read error: Connection reset by peer)
[16:42] <huangjun> and you can also set up the crushmap by mixing disks across different hosts to get a compromise solution
[16:42] <ron-slc> indeed.
[16:43] <huangjun> what's the usual crushmap setting in production clusters using ceph?
[16:44] <ron-slc> huangjun: by default?? The crushmap is choose-leaf host, so rep-size 3 on a 2-host cluster will always be degraded.
[16:45] <huangjun> yes, i think so
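
For reference, the two placement policies being discussed differ in one line of the decompiled crushmap, and capacity weighting is set per osd; the rule fragment and numbers below are illustrative:

    # default rule: replicas land on distinct hosts
    step chooseleaf firstn 0 type host
    # ron-slc's dev-cluster variant: replicas land on distinct osds, hosts ignored
    step chooseleaf firstn 0 type osd

    # weight convention is roughly 1.0 per TB, e.g. for huangjun's 500GB osd.3:
    ceph osd crush reweight osd.3 0.5
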
[16:47] * AfC (~andrew@2001:44b8:31cb:d400:2451:8906:8899:b611) Quit (Quit: Leaving.)
[16:48] * diegows (~diegows@190.190.11.42) has joined #ceph
[16:55] * rongze (~quassel@117.79.232.201) has joined #ceph
[16:57] * aliguori (~anthony@32.97.110.51) has joined #ceph
[17:01] <loicd> is there a way to ask teuthology to not cleanup in case there is an error ? to help investigating on the target machine, that is :-)
[17:02] * oliver1 (~oliver@p4FD06BA9.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[17:02] * rongze_ (~quassel@117.79.232.201) Quit (Ping timeout: 480 seconds)
[17:04] * leseb (~leseb@88-190-214-97.rev.dedibox.fr) Quit (Killed (NickServ (Too many failed password attempts.)))
[17:04] * leseb (~leseb@88-190-214-97.rev.dedibox.fr) has joined #ceph
[17:10] * aliguori (~anthony@32.97.110.51) Quit (Ping timeout: 480 seconds)
[17:11] * Georg (~georg_hoe@bs.xidrasservice.com) has joined #ceph
[17:11] * oliver1 (~oliver@jump.filoo.de) has joined #ceph
[17:14] * grepory (~Adium@108-218-234-162.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[17:18] * leseb (~leseb@88-190-214-97.rev.dedibox.fr) Quit (Killed (NickServ (Too many failed password attempts.)))
[17:19] * leseb (~leseb@88-190-214-97.rev.dedibox.fr) has joined #ceph
[17:22] <Georg> Hello
[17:22] <Georg> can anybody help me with why I'm not able to set a layout with cephfs?
[17:22] <Georg> # cephfs mailstore/ set_layout -p 3
[17:22] <Georg> Error setting layout: Invalid argument
[17:23] * jeff-YF (~jeffyf@67.23.117.122) Quit (Quit: jeff-YF)
[17:24] * gregmark (~Adium@68.87.42.115) has joined #ceph
[17:24] * gregmark (~Adium@68.87.42.115) has left #ceph
[17:27] * BManojlovic (~steki@91.195.39.5) Quit (Quit: I'm off, and you do what you want...)
[17:29] * leseb (~leseb@88-190-214-97.rev.dedibox.fr) has left #ceph
[17:29] * leseb (~leseb@88-190-214-97.rev.dedibox.fr) has joined #ceph
[17:29] * ChanServ sets mode +v leseb
[17:29] * aliguori (~anthony@32.97.110.51) has joined #ceph
[17:33] * Georg (~georg_hoe@bs.xidrasservice.com) has left #ceph
[17:33] * mnash (~chatzilla@66-194-114-178.static.twtelecom.net) Quit (Ping timeout: 480 seconds)
[17:37] * mnash (~chatzilla@vpn.expressionanalysis.com) has joined #ceph
[17:38] * CliMz (~CliMz@194.88.193.33) Quit (Ping timeout: 480 seconds)
[17:42] * scuttlemonkey (~scuttlemo@mb52036d0.tmodns.net) has joined #ceph
[17:42] * ChanServ sets mode +o scuttlemonkey
[17:43] * grepory (~Adium@108-218-234-162.lightspeed.sntcca.sbcglobal.net) Quit (Quit: Leaving.)
[17:45] <huangjun> seems you set the wrong pool; use "-p your-pool-name" instead
[17:45] <huangjun> and try to specify the absolute path of the file
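
Another possible cause of the EINVAL, as a hedged guess: in cuttlefish-era releases a pool must be registered as a cephfs data pool before set_layout will accept it. Roughly (pool id 3 is from Georg's paste; the mount prefix is an assumption):

    ceph mds add_data_pool 3
    cephfs /mnt/cephfs/mailstore set_layout -p 3    # absolute path, per huangjun's advice
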
[17:49] * oliver1 (~oliver@jump.filoo.de) has left #ceph
[17:49] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[17:52] * odyssey4me (~odyssey4m@165.233.71.2) Quit (Ping timeout: 480 seconds)
[17:56] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) has joined #ceph
[18:00] * nwat (~nwat@216.1.187.162) Quit (Ping timeout: 480 seconds)
[18:03] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[18:05] * nwat (~nwat@99-119-181-1.uvs.irvnca.sbcglobal.net) has joined #ceph
[18:08] * tnt (~tnt@212-166-48-236.win.be) Quit (Ping timeout: 480 seconds)
[18:11] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) has joined #ceph
[18:11] * Cube (~Cube@66-87-67-247.pools.spcsdns.net) has joined #ceph
[18:16] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[18:24] * xdeller (~xdeller@91.218.144.129) Quit (Read error: Connection reset by peer)
[18:24] * huangjun (~kvirc@106.120.176.54) Quit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
[18:25] * bandrus (~Adium@99-119-181-1.uvs.irvnca.sbcglobal.net) has joined #ceph
[18:29] * tnt (~tnt@109.130.80.16) has joined #ceph
[18:30] * psieklFH (psiekl@wombat.eu.org) Quit (Quit: leaving)
[18:32] * danieagle (~Daniel@177.205.183.226.dynamic.adsl.gvt.net.br) has joined #ceph
[18:32] * ShaunR (~ShaunR@staff.ndchost.com) Quit ()
[18:33] * diegows (~diegows@190.190.11.42) Quit (Read error: Operation timed out)
[18:47] * bandrus1 (~Adium@99-119-181-1.uvs.irvnca.sbcglobal.net) has joined #ceph
[18:47] * bandrus (~Adium@99-119-181-1.uvs.irvnca.sbcglobal.net) Quit (Read error: Connection reset by peer)
[18:49] * bandrus (~Adium@99-119-181-1.uvs.irvnca.sbcglobal.net) has joined #ceph
[18:49] * bandrus1 (~Adium@99-119-181-1.uvs.irvnca.sbcglobal.net) Quit (Read error: Connection reset by peer)
[18:51] * rturk-away is now known as rturk
[18:52] * bandrus1 (~Adium@99-119-181-1.uvs.irvnca.sbcglobal.net) has joined #ceph
[18:52] * bandrus (~Adium@99-119-181-1.uvs.irvnca.sbcglobal.net) Quit (Read error: Connection reset by peer)
[18:59] * netsrob (~thorsten@212.224.79.27) has joined #ceph
[19:01] * amatter (~oftc-webi@209.63.136.134) has joined #ceph
[19:02] <netsrob> hi, i want to set up a radosgw on centos and get '405 Method Not Allowed' as the reply. somehow nothing is logged either :X any hints?
[19:05] * jeff-YF (~jeffyf@50.59.139.161) has joined #ceph
[19:05] * Machske (~Bram@81.82.216.124) has joined #ceph
[19:06] <amatter> hi guys. I have a cluster with a large cephfs file system. Now my mds servers keep crashing when they start up, the log is here: http://pastebin.com/FGsQL6Xr seems to crash in handle_osd_op_reply. Any ideas?
[19:09] * mozg (~andrei@host217-46-236-49.in-addr.btopenworld.com) Quit (Ping timeout: 480 seconds)
[19:11] <scuttlemonkey> netsrob: which version and which API are you using?
[19:16] <netsrob> scuttlemonkey: Cuttlefish and s3
[19:17] <scuttlemonkey> hmm
[19:17] <scuttlemonkey> ok, the only thing that immediately sprang to mind was a once-upon-a-time swift bug
[19:17] <scuttlemonkey> http://tracker.ceph.com/issues/2650
[19:18] <scuttlemonkey> your ceph cluster health ok?
[19:18] <scuttlemonkey> could also turn up logging a bit
[19:18] <netsrob> ok, i want to use swift later, too
[19:18] * The_Bishop__ (~bishop@f052100183.adsl.alicedsl.de) Quit (Quit: Who the hell is this Peer? If I catch him, I'll reset his connection!)
[19:18] <scuttlemonkey> the problem is most of the inktank folks are in a room with questionable internet
[19:19] * xmltok (~xmltok@pool101.bizrate.com) has joined #ceph
[19:19] <scuttlemonkey> I'd drop a note on the list maybe so one of the rgw guys can snag it async
[19:19] <netsrob> health is ok
[19:20] * Cube (~Cube@66-87-67-247.pools.spcsdns.net) Quit (Quit: Leaving.)
[19:20] <netsrob> scuttlemonkey: maybe my setup is faulty, i'm trying to use rados for the first time
[19:21] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) has joined #ceph
[19:21] <netsrob> i'm only following the man page http://ceph.com/docs/next/man/8/radosgw/
[19:23] <scuttlemonkey> wonder if there are any changes on the doc
[19:23] <scuttlemonkey> they are versioned with the code fwiw
[19:23] <scuttlemonkey> so: http://ceph.com/docs/master/radosgw
[19:23] <scuttlemonkey> rather than next
[19:23] <scuttlemonkey> but I don't think anything major has changed, so you should still be ok
[19:23] <scuttlemonkey> how far did you get?
[19:24] <netsrob> i've got a user, subuser and secrets for both of them
[19:25] <netsrob> i'm currently working on the apache setup
[19:25] <scuttlemonkey> ahh
[19:26] <scuttlemonkey> yeah, I wont have much insight beyond what's in the doc
[19:26] * bandrus1 (~Adium@99-119-181-1.uvs.irvnca.sbcglobal.net) Quit (Quit: Leaving.)
[19:26] <scuttlemonkey> only times I stand up a gateway are with an orchestration layer
[19:26] <scuttlemonkey> a la http://ceph.com/dev-notes/deploying-ceph-with-juju/
[19:27] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[19:27] * ChanServ sets mode +v andreask
[19:32] * jeff-YF (~jeffyf@50.59.139.161) Quit (Quit: jeff-YF)
[19:36] <netsrob> is the setup with juju easier than setting up radosgw manually in general?
[19:38] <scuttlemonkey> I found it to be so
[19:38] <scuttlemonkey> mostly I liked it b/c it was repeatable
[19:38] <scuttlemonkey> so I could spin clusters up and down quickly
[19:39] * mikedawson_ (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 22.0/20130618035212])
[19:39] <netsrob> ok, maybe i'll try that one :)
[19:40] <scuttlemonkey> there are recipes for all of the major orchestration flavors
[19:40] * xmltok (~xmltok@pool101.bizrate.com) Quit (Remote host closed the connection)
[19:40] * xmltok (~xmltok@relay.els4.ticketmaster.com) has joined #ceph
[19:40] <netsrob> oh, ubuntu again ^^
[19:40] <scuttlemonkey> so if you prefer Chef, Puppet, Ansible, etc those are available
[19:40] * nwat (~nwat@99-119-181-1.uvs.irvnca.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[19:41] <scuttlemonkey> yeah, that was illustrative rather than instructive
[19:43] <netsrob> one problem for me is that i'm already running a webservice on port 80, so i'd need to reconfigure things to make it work first
[19:44] <netsrob> ok, now i've got logging, but no webserver-requests reach radosgw :X
[19:44] * xmltok_ (~xmltok@pool101.bizrate.com) has joined #ceph
[19:47] <netsrob> Swift tells me "Auth GET failed: http://<host>:8081/auth/1.0 200 OK"
[19:49] * alfredodeza (~alfredode@99-119-181-1.uvs.irvnca.sbcglobal.net) has joined #ceph
[19:50] * tnt_ (~tnt@91.177.243.62) has joined #ceph
[19:50] * xmltok (~xmltok@relay.els4.ticketmaster.com) Quit (Ping timeout: 480 seconds)
[19:52] * tnt (~tnt@109.130.80.16) Quit (Ping timeout: 480 seconds)
[19:53] * nwat (~nwat@99-119-181-1.uvs.irvnca.sbcglobal.net) has joined #ceph
[19:55] * diegows (~diegows@190.190.11.42) has joined #ceph
[19:58] * danieagle (~Daniel@177.205.183.226.dynamic.adsl.gvt.net.br) Quit (Quit: See you later, and thanks for everything, really! :-D)
[19:58] * rturk is now known as rturk-away
[19:58] * rturk-away is now known as rturk
[20:03] * bandrus (~Adium@99-119-181-1.uvs.irvnca.sbcglobal.net) has joined #ceph
[20:03] * rturk is now known as rturk-away
[20:07] <netsrob> dumb question: apache should not send me the content of the fcgi on a request via telnet, right?
[20:08] * nwat (~nwat@99-119-181-1.uvs.irvnca.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[20:08] * scuttlemonkey (~scuttlemo@mb52036d0.tmodns.net) Quit (Read error: Connection reset by peer)
[20:08] * rturk-away is now known as rturk
[20:11] * bandrus (~Adium@99-119-181-1.uvs.irvnca.sbcglobal.net) Quit (Quit: Leaving.)
[20:14] * alfredod_ (~alfredode@99-119-181-1.uvs.irvnca.sbcglobal.net) has joined #ceph
[20:14] * alfredodeza (~alfredode@99-119-181-1.uvs.irvnca.sbcglobal.net) Quit (Read error: Connection reset by peer)
[20:18] * bandrus (~Adium@99-119-181-1.uvs.irvnca.sbcglobal.net) has joined #ceph
[20:19] * AaronSchulz (~chatzilla@192.195.83.36) Quit (Ping timeout: 480 seconds)
[20:21] * alfredodeza (~alfredode@99-119-181-1.uvs.irvnca.sbcglobal.net) has joined #ceph
[20:21] * alfredod_ (~alfredode@99-119-181-1.uvs.irvnca.sbcglobal.net) Quit (Read error: Connection reset by peer)
[20:23] * scuttlemonkey (~scuttlemo@38.122.20.226) has joined #ceph
[20:23] * ChanServ sets mode +o scuttlemonkey
[20:26] * bandrus (~Adium@99-119-181-1.uvs.irvnca.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[20:31] * sagelap (~sage@2600:1012:b01d:1be6:d9b2:5c3a:7386:c6ff) has joined #ceph
[20:32] * sagelap (~sage@2600:1012:b01d:1be6:d9b2:5c3a:7386:c6ff) has left #ceph
[20:33] * scuttlemonkey changes topic to 'Latest stable (v0.61.7 "Cuttlefish") -- http://ceph.com/get || CDS Vids and IRC logs posted http://ceph.com/cds/'
[20:34] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) Quit (Remote host closed the connection)
[20:35] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) has joined #ceph
[20:38] <scuttlemonkey> netsrob: sry, net connection via phone is a bit touch-and-go :(
[20:39] * jeff-YF (~jeffyf@67.23.117.122) has joined #ceph
[20:44] <netsrob> scuttlemonkey: no prob ;) had the same issues before, too ;)
[20:45] <netsrob> scuttlemonkey: i'm currently debugging the swift-response from apache
[20:45] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[20:46] <scuttlemonkey> cool
[20:47] <netsrob> would be cooler if it were working; currently apache sends me the content of the fcgi script
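
Apache returning the script body usually means no FastCGI module is handling the request. A sketch of the moving parts on CentOS, in the style of the radosgw docs of this era (package availability and paths are assumptions):

    sudo yum install mod_fastcgi      # assumption: pulled from a third-party repo
    # the vhost then needs, per the docs:
    #   FastCgiExternalServer /var/www/s3gw.fcgi -socket /tmp/radosgw.sock
    #   RewriteRule ^/(.*) /s3gw.fcgi?%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
    #   SetHandler fastcgi-script (inside the DocumentRoot <Directory>, with Options +ExecCGI)
    sudo service httpd restart
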
[20:50] * lx0 is now known as lxo
[20:56] * mschiff_ (~mschiff@p4FD7C98F.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[20:57] * mschiff (~mschiff@p4FD7C98F.dip0.t-ipconnect.de) has joined #ceph
[20:59] * scuttlemonkey (~scuttlemo@38.122.20.226) Quit (Ping timeout: 480 seconds)
[20:59] * scuttlemonkey_ (~scuttlemo@2607:f298:a:607:b006:1184:a707:834e) has joined #ceph
[20:59] * diegows (~diegows@190.190.11.42) Quit (Ping timeout: 480 seconds)
[21:01] * AaronSchulz (~chatzilla@216.38.130.164) has joined #ceph
[21:02] * rturk is now known as rturk-away
[21:03] * mschiff (~mschiff@p4FD7C98F.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[21:10] * bandrus (~Adium@99-119-181-1.uvs.irvnca.sbcglobal.net) has joined #ceph
[21:10] * scuttlemonkey_ (~scuttlemo@2607:f298:a:607:b006:1184:a707:834e) Quit (Read error: Connection reset by peer)
[21:11] * scuttlemonkey_ (~scuttlemo@38.122.20.226) has joined #ceph
[21:12] * bandrus (~Adium@99-119-181-1.uvs.irvnca.sbcglobal.net) Quit ()
[21:13] * bandrus (~Adium@99-119-181-1.uvs.irvnca.sbcglobal.net) has joined #ceph
[21:16] * rturk-away is now known as rturk
[21:29] * bandrus (~Adium@99-119-181-1.uvs.irvnca.sbcglobal.net) Quit (Quit: Leaving.)
[21:33] * rturk is now known as rturk-away
[21:41] * mozg (~andrei@host109-151-35-94.range109-151.btcentralplus.com) has joined #ceph
[21:42] * alfredod_ (~alfredode@99-119-181-1.uvs.irvnca.sbcglobal.net) has joined #ceph
[21:43] * alfredodeza (~alfredode@99-119-181-1.uvs.irvnca.sbcglobal.net) Quit (Remote host closed the connection)
[21:48] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[21:52] * Vjarjadian (~IceChat77@90.214.208.5) has joined #ceph
[21:52] * nwat (~nwat@99-119-181-1.uvs.irvnca.sbcglobal.net) has joined #ceph
[21:53] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[21:54] * bandrus (~Adium@99-119-181-1.uvs.irvnca.sbcglobal.net) has joined #ceph
[22:02] <netsrob> i'm looking for a howto for using radosgw with apache2/fcgid
[22:02] * Cube (~Cube@99-119-181-1.uvs.irvnca.sbcglobal.net) has joined #ceph
[22:09] * allsystemsarego (~allsystem@188.25.130.190) Quit (Quit: Leaving)
[22:09] * ishkabob (~c7a82cc0@webuser.thegrebs.com) has joined #ceph
[22:09] * diegows (~diegows@200.68.116.185) has joined #ceph
[22:09] <ishkabob> hey guys, i'm trying to create a new ceph cluster with ceph-deploy. When I do so, it seems to hang at the "ceph-create-keys" step
[22:10] <ishkabob> the monitors appear to be running fine, but it's not creating an admin key for me
[22:10] <ishkabob> anyone know where i can look to troubleshoot? Can't find a log or anything
[22:12] * scuttlemonkey_ (~scuttlemo@38.122.20.226) Quit (Ping timeout: 480 seconds)
[22:13] <joelio> ishkabob: get the latest version of ceph-deploy from gitbuilder/master - there's a lot of fixes
[22:13] <joelio> you may be hitting a bug
[22:13] * bandrus1 (~Adium@99-119-181-1.uvs.irvnca.sbcglobal.net) has joined #ceph
[22:13] * Cube1 (~Cube@99-119-181-1.uvs.irvnca.sbcglobal.net) has joined #ceph
[22:13] * scuttlemonkey_ (~scuttlemo@mb52036d0.tmodns.net) has joined #ceph
[22:13] * alfredodeza (~alfredode@99-119-181-1.uvs.irvnca.sbcglobal.net) has joined #ceph
[22:14] * bandrus (~Adium@99-119-181-1.uvs.irvnca.sbcglobal.net) Quit (Read error: Connection reset by peer)
[22:14] * Cube (~Cube@99-119-181-1.uvs.irvnca.sbcglobal.net) Quit (Read error: Connection reset by peer)
[22:15] * alfredod_ (~alfredode@99-119-181-1.uvs.irvnca.sbcglobal.net) Quit (Read error: Connection reset by peer)
[22:16] <ishkabob> joelio: thanks, i think i know what's going on. I have two ethernet cards on my ceph boxes. for example, one of the boxes has a 1gig card with hostname camelot, and the other has a 10gig card with hostname camelot0-dstor
[22:16] <ishkabob> sry, camelot-dstor
[22:16] <ishkabob> i'm running - ceph-deploy new create camelot-dstor entourage-dstor roots-dstore
[22:16] <ishkabob> and then
[22:17] <ishkabob> ceph-deploy mon create camelot-dstor entourage-dstor roots-dstore
[22:17] <ishkabob> and then it tries to generate keys, and its getting this
[22:17] <ishkabob> # /usr/bin/python /usr/sbin/ceph-create-keys -i camelot INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'
[22:17] <ishkabob> so for some reason it's using "camelot" as the hostname to create the keys instead of "camelot-dstor"
[22:17] <ishkabob> you think that might be fixed in a bugfix?
[22:19] <joelio> I'd give it a go, either pull from git or dig it out from gitbuilder - I'm not sure of the link tbh
[22:19] * KindTwo (KindOne@h220.53.186.173.dynamic.ip.windstream.net) has joined #ceph
[22:19] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[22:20] <ishkabob> joelio: thanks. I'm using the package version for sl, but i'll try that and see if it works
[22:20] * KindTwo is now known as KindOne
[22:23] <joelio> ishkabob: http://gitbuilder.ceph.com/cdep-rpm-rhel6-x86_64-basic/ref/master/noarch/ceph-deploy-1.1-0.noarch.rpm at a guess then
[22:23] <joelio> alfredodeza: that correct mate ^?
[22:23] * alfredodeza looks
[22:24] <alfredodeza> ishkabob: there is a bug right now with the monitors
[22:24] <alfredodeza> so this is not related to ceph-deploy directly
[22:24] <alfredodeza> it is actually a problem with the monitors not seeing enough quorum and hanging
[22:24] <alfredodeza> that prevents the gatherkeys to fully work
[22:24] * Vincent_Valentine (Vincent_Va@49.206.158.155) Quit (Ping timeout: 480 seconds)
[22:24] <joelio> ahh, ok
[22:24] <alfredodeza> :(
[22:24] <alfredodeza> sorry
[22:24] <alfredodeza> but! we are working on it!
[22:24] <joelio> I got mine working by only defining the initial one mon
[22:25] <ishkabob> alfredodeza: I don't think that's actually the problem here though, because if I deploy using the regular hostnames, everything works fine
[22:25] <joelio> yea, my hosts have the scheme vm-ds-{$n}
[22:25] <alfredodeza> ishkabob: that is exactly the problem :)
[22:25] <alfredodeza> it fails with other vanilla options as well
[22:25] <alfredodeza> really, we are working on it
[22:26] <alfredodeza> what do you mean by 'regular hostnames' ?
[22:26] <ishkabob> alfredodeza: i mean, this works:
[22:26] <ishkabob> ceph-deploy new create camelot-dstor entourage-dstor roots-dstore
[22:26] <ishkabob> ack
[22:26] <ishkabob> hold on
[22:26] <ishkabob> sry bad paste
[22:26] * joao (~JL@99-119-181-1.uvs.irvnca.sbcglobal.net) has joined #ceph
[22:26] * ChanServ sets mode +o joao
[22:26] <ishkabob> lets start over
[22:27] <ishkabob> THIS WORKS:
[22:27] <ishkabob> ceph-deploy new create camelot entourage roots
[22:27] <ishkabob> ceph-deploy mon create camelot entourage roots
[22:27] * rturk-away is now known as rturk
[22:27] <ishkabob> however, if i replace my hostnames with {hostname}-dstor, which maps to a different interface on the box, it doesn't work
[22:28] <joelio> interesting, I was able to make it work by adding the one mon initially with the new command.. and then adding the other mons in the next step
[22:28] <joelio> my hostnames have a - in and also a .
[22:28] <alfredodeza> ishkabob: can you define what you mean by regular hostnames?
[22:28] <alfredodeza> is that like a FQDN ?
[22:29] <ishkabob> it's not, its just the short hostname, but it exists in our DNS
[22:29] * mozg (~andrei@host109-151-35-94.range109-151.btcentralplus.com) Quit (Quit: Ex-Chat)
[22:29] <ishkabob> it seems like ceph-deploy is somehow resolving camelot-dstor to camelot
[22:29] <alfredodeza> ah ok
[22:29] <ishkabob> which sort of makes sense, they're the same box, but not the same IP
[22:29] <joao> ishkabob, so you have multiple interfaces; are you specifying 'public addr' or 'public network'?
[22:29] <joao> oh
[22:29] <joao> nevermind
[22:29] <joao> duh
[22:30] <ishkabob> joao: I am not, i didn't really want the normal interfaces used for anything except administration
[22:30] <joao> yeah, ishkabob, can you just check if those options are on your ceph.conf though?
[22:30] <joao> unlikely
[22:30] <ishkabob> joao: no they are not, i just checked
[22:30] <joao> kay, thanks
[22:30] <ishkabob> also, ceph-deploy doesn't create them
[22:30] <ishkabob> surely :)
[22:31] <joao> just covering all bases
[22:32] <ishkabob> i'll try this with the version from git master and if it doesn't work I'll submit a bug ticket
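
A possible workaround sketch for the two-NIC layout while the bug is open: point the generated ceph.conf at the storage network before running "mon create" (the subnet and addresses below are assumptions; substitute the -dstor network):

    [global]
    public network = 10.10.0.0/24                # the -dstor subnet
    mon host = 10.10.0.1,10.10.0.2,10.10.0.3     # storage-NIC addresses of the three mons
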
[22:34] * bandrus (~Adium@99-119-181-1.uvs.irvnca.sbcglobal.net) has joined #ceph
[22:34] * Cube1 (~Cube@99-119-181-1.uvs.irvnca.sbcglobal.net) Quit (Read error: Connection reset by peer)
[22:34] * bandrus1 (~Adium@99-119-181-1.uvs.irvnca.sbcglobal.net) Quit (Read error: Connection reset by peer)
[22:34] * alfredodeza (~alfredode@99-119-181-1.uvs.irvnca.sbcglobal.net) Quit (Read error: Connection reset by peer)
[22:34] * nwat (~nwat@99-119-181-1.uvs.irvnca.sbcglobal.net) Quit (Read error: Connection reset by peer)
[22:34] * Cube (~Cube@99-119-181-1.uvs.irvnca.sbcglobal.net) has joined #ceph
[22:35] * alfredodeza (~alfredode@99-119-181-1.uvs.irvnca.sbcglobal.net) has joined #ceph
[22:38] * Cube (~Cube@99-119-181-1.uvs.irvnca.sbcglobal.net) Quit ()
[22:39] * buck1 (~buck@c-24-6-91-4.hsd1.ca.comcast.net) has joined #ceph
[22:41] * bandrus (~Adium@99-119-181-1.uvs.irvnca.sbcglobal.net) Quit (Quit: Leaving.)
[22:41] * rturk is now known as rturk-away
[22:43] <ishkabob> yeah, so this is still a problem with the latest ceph-deploy
[22:43] <ishkabob> i'll write a ticket
[22:43] * sleinen1 (~Adium@2001:620:0:26:34b2:d430:845c:f38) Quit (Quit: Leaving.)
[22:43] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[22:43] <amatter> hi guys. I have a cluster with a large cephfs file system. Now my mds servers keep crashing when they start up, the log is here: http://pastebin.com/FGsQL6Xr seems to crash in handle_osd_op_reply. Any ideas?
[22:46] * scuttlemonkey_ is now known as scuttlemonkey
[22:47] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Read error: Operation timed out)
[22:50] * alfredodeza (~alfredode@99-119-181-1.uvs.irvnca.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[22:53] <joao> amatter, first step would be to reproduce that with debug mds = 10 and debug monc = 10
[22:54] <joao> also, it looks a lot like #5104
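
The debug levels joao asks for can be set in ceph.conf on the mds hosts before restarting the daemon, e.g.:

    [mds]
        debug mds = 10
        debug monc = 10
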
[22:57] <amatter> joao: here's the same error with increased verbosity: http://pastebin.com/gEFD53pP
[23:01] * aliguori (~anthony@32.97.110.51) Quit (Remote host closed the connection)
[23:06] * The_Bishop (~bishop@f052100183.adsl.alicedsl.de) has joined #ceph
[23:09] * alfredodeza (~alfredode@99-119-181-1.uvs.irvnca.sbcglobal.net) has joined #ceph
[23:11] * mschiff (~mschiff@port-28851.pppoe.wtnet.de) has joined #ceph
[23:12] * sagelap (~sage@2600:1012:b01d:1be6:f904:5d2c:6572:3201) has joined #ceph
[23:13] * sprachgenerator_ (~sprachgen@130.202.135.172) has joined #ceph
[23:14] * sprachgenerator (~sprachgen@130.202.135.172) Quit (Read error: Connection reset by peer)
[23:14] * sprachgenerator_ is now known as sprachgenerator
[23:16] * The_Bishop_ (~bishop@f052103091.adsl.alicedsl.de) has joined #ceph
[23:18] * rturk-away is now known as rturk
[23:18] * rturk is now known as rturk-away
[23:19] * The_Bishop (~bishop@f052100183.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[23:21] <joao> amatter, under which circumstances does this happen?
[23:21] * jf-jenni (~jf-jenni@stallman.cse.ohio-state.edu) Quit (Ping timeout: 480 seconds)
[23:23] * amatter (~oftc-webi@209.63.136.134) Quit (Remote host closed the connection)
[23:25] * amatter (~oftc-webi@209.63.136.134) has joined #ceph
[23:25] <amatter> joao: every time an MDS tries to start on a production cluster with 4 servers, each with 4 3TB osds
[23:25] <amatter> the cephfs is down because no mds can fully start
[23:26] <joao> I'd say you should file a ticket for this
[23:26] <amatter> only one running at a time, three servers running mds in standby, but the same error on each
[23:26] <amatter> ok, will do. thanks
[23:28] * amatter (~oftc-webi@209.63.136.134) Quit (Remote host closed the connection)
[23:32] * sagelap (~sage@2600:1012:b01d:1be6:f904:5d2c:6572:3201) Quit (Ping timeout: 480 seconds)
[23:33] * sagelap (~sage@2600:1012:b01d:1be6:f904:5d2c:6572:3201) has joined #ceph
[23:39] * mschiff_ (~mschiff@port-28851.pppoe.wtnet.de) has joined #ceph
[23:44] * mschiff (~mschiff@port-28851.pppoe.wtnet.de) Quit (Ping timeout: 480 seconds)
[23:46] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[23:51] * rturk-away is now known as rturk
[23:52] * mozg (~andrei@host109-151-35-94.range109-151.btcentralplus.com) has joined #ceph
[23:53] * mschiff_ (~mschiff@port-28851.pppoe.wtnet.de) Quit (Read error: Operation timed out)
[23:53] * mschiff (~mschiff@port-28851.pppoe.wtnet.de) has joined #ceph
[23:55] * sagelap1 (~sage@99-119-181-1.uvs.irvnca.sbcglobal.net) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.