#ceph IRC Log


IRC Log for 2013-07-15

Timestamps are in GMT/BST.

[0:04] * xmltok_ (~xmltok@relay.els4.ticketmaster.com) Quit (Read error: Operation timed out)
[0:09] * BManojlovic (~steki@237-231.197-178.cust.bluewin.ch) has joined #ceph
[0:11] * danieagle (~Daniel@ has joined #ceph
[0:20] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[0:28] * tremendous (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Bye!)
[0:28] * BillK (~BillK-OFT@124-148-212-240.dyn.iinet.net.au) has joined #ceph
[0:30] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[0:32] * leseb1 (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) Quit (Quit: Leaving.)
[0:37] * BManojlovic (~steki@237-231.197-178.cust.bluewin.ch) Quit (Quit: Ja odoh a vi sta 'ocete...)
[0:41] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) has joined #ceph
[1:06] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) Quit (Quit: Leaving.)
[1:18] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Leaving...)
[1:23] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[1:44] * LeaChim (~LeaChim@ Quit (Ping timeout: 480 seconds)
[1:45] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[1:48] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Leaving...)
[2:02] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[2:05] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[2:05] * mschiff (~mschiff@port-1321.pppoe.wtnet.de) Quit (Remote host closed the connection)
[2:07] <mtanski> Gugge-47527: bs=4048
[2:37] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit (Quit: mtanski)
[2:40] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[2:41] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[2:48] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[2:55] * danieagle (~Daniel@ Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[2:57] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Leaving...)
[3:04] * Henson_D (~kvirc@ Quit (Quit: KVIrc 4.1.3 Equilibrium http://www.kvirc.net/)
[3:08] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[3:11] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Remote host closed the connection)
[3:11] * xmltok_ (~xmltok@relay.els4.ticketmaster.com) has joined #ceph
[3:21] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit (Quit: mtanski)
[3:24] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[3:24] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit ()
[3:25] * julian (~julianwa@ has joined #ceph
[3:35] * diegows (~diegows@ Quit (Ping timeout: 480 seconds)
[3:55] * zhangjf_zz2 (~zjfhappy@ has joined #ceph
[3:59] * yy (~michealyx@ has joined #ceph
[4:01] * haomaiwa_ (~haomaiwan@ Quit (Ping timeout: 480 seconds)
[4:01] * yy (~michealyx@ has left #ceph
[4:02] * fuzz_ (~pi@c-76-30-9-9.hsd1.tx.comcast.net) Quit (Read error: Connection reset by peer)
[4:05] * yy (~michealyx@ has joined #ceph
[4:12] * yy (~michealyx@ has left #ceph
[4:22] * haomaiwang (~haomaiwan@notes4.com) has joined #ceph
[4:31] * haomaiwa_ (~haomaiwan@notes4.com) has joined #ceph
[4:33] * haomaiwa_ (~haomaiwan@notes4.com) Quit (Remote host closed the connection)
[4:38] * haomaiwang (~haomaiwan@notes4.com) Quit (Ping timeout: 480 seconds)
[4:39] * haomaiwang (~haomaiwan@notes4.com) has joined #ceph
[4:53] * nwat (~oftc-webi@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[5:00] * fireD1 (~fireD@93-142-200-96.adsl.net.t-com.hr) has joined #ceph
[5:06] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) has joined #ceph
[5:07] * fireD (~fireD@93-142-243-73.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[5:10] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[5:10] * ChanServ sets mode +o elder
[5:19] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[5:28] * nwat (~oftc-webi@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[5:36] * john_barbee (~jbarbee@c-50-165-106-164.hsd1.in.comcast.net) has joined #ceph
[5:45] * yy (~michealyx@ has joined #ceph
[5:56] * tremendous (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[5:58] * tremendous (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit ()
[5:59] * tremendous (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[6:00] * tremendous (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit ()
[6:03] * xmltok_ (~xmltok@relay.els4.ticketmaster.com) Quit (Ping timeout: 480 seconds)
[6:17] * john_barbee (~jbarbee@c-50-165-106-164.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[6:33] * yy (~michealyx@ has left #ceph
[6:37] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[6:41] * AfC (~andrew@2001:44b8:31cb:d400:e8db:5954:42fa:287b) has joined #ceph
[6:51] * haomaiwang (~haomaiwan@notes4.com) Quit (Read error: Connection reset by peer)
[6:52] * haomaiwang (~haomaiwan@ has joined #ceph
[6:57] * hujifeng (~hujifeng@ has joined #ceph
[7:10] * danieagle (~Daniel@ has joined #ceph
[7:14] <hujifeng> anyone use ceph-cookbook deploy ceph cluster?
[7:20] * smiley (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) Quit (Quit: smiley)
[7:27] * erwan_taf (~erwan@lns-bzn-48f-62-147-157-222.adsl.proxad.net) Quit (Ping timeout: 480 seconds)
[7:30] * erwan_taf (~erwan@lns-bzn-48f-62-147-157-222.adsl.proxad.net) has joined #ceph
[7:36] <hujifeng> monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
[7:41] * yy (~michealyx@ has joined #ceph
[7:47] * yy (~michealyx@ Quit (Read error: Connection reset by peer)
[7:51] * yy (~michealyx@ has joined #ceph
[7:53] * hujifeng (~hujifeng@ Quit (Read error: Operation timed out)
[7:54] * hujifeng (~hujifeng@ has joined #ceph
[8:02] * erwan_taf (~erwan@lns-bzn-48f-62-147-157-222.adsl.proxad.net) Quit (Ping timeout: 480 seconds)
[8:08] * yy (~michealyx@ has left #ceph
[8:12] * erwan_taf (~erwan@lns-bzn-48f-62-147-157-222.adsl.proxad.net) has joined #ceph
[8:12] * Cube (~Cube@173-8-221-113-Oregon.hfc.comcastbusiness.net) has joined #ceph
[8:13] <hujifeng> ?
[8:18] * hujifeng (~hujifeng@ Quit (Read error: Connection timed out)
[8:19] * hujifeng (~hujifeng@ has joined #ceph
[8:20] * erwan_taf (~erwan@lns-bzn-48f-62-147-157-222.adsl.proxad.net) Quit (Ping timeout: 480 seconds)
[8:21] * fridudad (~oftc-webi@fw-office.allied-internet.ag) has joined #ceph
[8:21] * Volture (~quassel@office.meganet.ru) has joined #ceph
[8:22] * yy (~michealyx@ has joined #ceph
[8:23] * Cube1 (~Cube@173-8-221-113-Oregon.hfc.comcastbusiness.net) has joined #ceph
[8:23] * Cube (~Cube@173-8-221-113-Oregon.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[8:23] * erwan_taf (~erwan@lns-bzn-48f-62-147-157-222.adsl.proxad.net) has joined #ceph
[8:35] * erwan_taf (~erwan@lns-bzn-48f-62-147-157-222.adsl.proxad.net) Quit (Ping timeout: 480 seconds)
[8:49] * topro (~topro@host-62-245-142-50.customer.m-online.net) Quit (Read error: Operation timed out)
[9:00] * topro (~topro@host-62-245-142-50.customer.m-online.net) has joined #ceph
[9:05] * hujifeng_ (~hujifeng@ has joined #ceph
[9:06] * hujifeng (~hujifeng@ Quit (Read error: Connection reset by peer)
[9:08] * hybrid512 (~walid@106-171-static.pacwan.net) has joined #ceph
[9:15] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has joined #ceph
[9:21] * erwan_taf (~erwan@lns-bzn-48f-62-147-157-222.adsl.proxad.net) has joined #ceph
[9:23] * infinitytrapdoor (~infinityt@ has joined #ceph
[9:25] * n3c8-35575 (~mhattersl@pix.office.vaioni.com) has joined #ceph
[9:28] * odyssey4me (~odyssey4m@ has joined #ceph
[9:30] * andreask (~andreask@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[9:30] * ChanServ sets mode +v andreask
[9:32] * matt (~matt@220-245-1-152.static.tpgi.com.au) has joined #ceph
[9:35] <matt> Random question of the day, my read/write latency on RBD volumes increases tenfold during backfill/recovery to the point where some things crash. Will lowering 'osd recovery op priority' below the default value help with this?
[9:37] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) has joined #ceph
[9:37] <Gugge-47527> Do you have journal on ssd?
[9:37] <ccourtaut> morning
[9:37] <matt> Gugge-47527, Yep. journal is on a OCZ PCI-e SSD
[9:38] <Gugge-47527> how busy are the osd disks while recovering?
[9:39] <Gugge-47527> run iostat -x 1 for a bit, and check the latency
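For reference, a sketch of the iostat check suggested above; the device list is illustrative and column names vary slightly across sysstat versions:

```shell
# Extended per-device stats, refreshed every second, for the OSD data disks.
# High "await" (average I/O latency, ms) together with "%util" near 100
# indicates the spindles are saturated during recovery.
iostat -x 1 /dev/sd[b-f]
```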
[9:39] * bergerx_ (~bekir@ has joined #ceph
[9:39] * leseb (~Adium@ has joined #ceph
[9:40] * infinitytrapdoor (~infinityt@ Quit (Ping timeout: 480 seconds)
[9:41] <matt> Gugge-47527, it's a reasonably sized cluster, so the ceph load doesn't usually drop below 400 op/s while recovery is going. It's not running a backfill at the moment, but last night the wait% was pushing 50%.
[9:41] <matt> I didn't check latency sorry
[9:42] <matt> The drives are 7200rpm SATA so it would be pretty safe to assume the latency would be huge
[9:45] <Gugge-47527> what do you have recovery max active set to?
[9:45] * erwan_taf (~erwan@lns-bzn-48f-62-147-157-222.adsl.proxad.net) Quit (Ping timeout: 480 seconds)
[9:50] <matt> Gugge-47527, 5
[9:50] <matt> I forgot that was set actually... that's probably the problem
[9:50] * infinitytrapdoor (~infinityt@ has joined #ceph
[9:51] <Gugge-47527> i would try lowering that, and the op priority, one at a time, and see what helps :)
[9:52] <matt> I'll give it a shot tonight, thanks for the help
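For reference, the two settings discussed above can be lowered at runtime with injectargs; this is a sketch using the cuttlefish-era CLI form (the defaults at the time were osd_recovery_max_active = 5 and osd_recovery_op_priority = 10):

```shell
# Throttle recovery on all OSDs at runtime; no daemon restart needed.
ceph osd tell \* injectargs '--osd-recovery-max-active 1'
ceph osd tell \* injectargs '--osd-recovery-op-priority 1'
```

To make the change persistent, the same options go in the [osd] section of ceph.conf as `osd recovery max active = 1` and `osd recovery op priority = 1`.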
[10:03] * erwan_taf (~erwan@lns-bzn-48f-62-147-157-222.adsl.proxad.net) has joined #ceph
[10:03] * stacker666 (~stacker66@ has joined #ceph
[10:13] * danieagle (~Daniel@ Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[10:21] * infinitytrapdoor (~infinityt@ Quit (Ping timeout: 480 seconds)
[10:28] * mschiff (~mschiff@port-49445.pppoe.wtnet.de) has joined #ceph
[10:33] * Cube1 (~Cube@173-8-221-113-Oregon.hfc.comcastbusiness.net) Quit (Quit: Leaving.)
[10:36] * julian (~julianwa@ Quit (Quit: afk)
[10:36] * LeaChim (~LeaChim@ has joined #ceph
[10:42] * tnt (~tnt@212-166-48-236.win.be) has joined #ceph
[10:43] * infinitytrapdoor (~infinityt@ has joined #ceph
[10:48] * s2r2 (~s2r2@ has joined #ceph
[10:53] * infinitytrapdoor (~infinityt@ Quit (Ping timeout: 480 seconds)
[10:57] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[11:00] * Anticimex (anticimex@ Quit (Ping timeout: 480 seconds)
[11:04] * infinitytrapdoor (~infinityt@ has joined #ceph
[11:36] * yy (~michealyx@ has left #ceph
[11:42] <odyssey4me> odd, I'm getting 100% utilisation on an rbd block device but no disk activity... the block device queue is full and the cluster is healthy. It seems that the disk queue is not processing... what now?
[11:48] * X3NQ (~X3NQ@ Quit (Remote host closed the connection)
[11:50] * haomaiwa_ (~haomaiwan@notes4.com) has joined #ceph
[11:50] * haomaiwang (~haomaiwan@ Quit (Read error: Connection reset by peer)
[11:55] * infinitytrapdoor (~infinityt@ Quit (Ping timeout: 480 seconds)
[12:01] * Anticimex (anticimex@ has joined #ceph
[12:04] * infinitytrapdoor (~infinityt@ has joined #ceph
[12:10] <andreask> odyssey4me: hmm ... any kernel messages?
[12:15] * leseb (~Adium@ Quit (Quit: Leaving.)
[12:23] <odyssey4me> andreask: yeah "INFO: task flush-251:0:8273 blocked for more than 120 seconds."
[12:24] <odyssey4me> there's a trace that follows
[12:25] <andreask> but no errors from disk or controller?
[12:25] <andreask> sorry ... mismatched irc windows ;-)
[12:26] <andreask> odyssey4me: no errors from the rbd kernel module?
[12:26] <odyssey4me> no controller issues - just the osd's timing out as a result of the block
[12:27] * s2r2 (~s2r2@ Quit (Quit: s2r2)
[12:27] <andreask> the osds? ... maybe I misunderstood your question ... you access a rbd block device on a client and there i/o is stalled?
[12:28] <odyssey4me> http://pastebin.com/2RRT2Qgv
[12:28] <odyssey4me> yes - let me explain a little
[12:29] <odyssey4me> I have three servers - one acting as client/mon/osd's, the other two only acting as osd's. It's a test environment. I've created an rbd disk and mapped it, then mounted it as a block device
[12:30] <andreask> oh ... I see
[12:30] <odyssey4me> I create a 20G qemu-img in the mapped folder and that works fine... when I do a 50G qemu-img the IO stalls immediately
[12:31] <andreask> but you use the rbd kernel module?
[12:31] <odyssey4me> iostat on the other servers shows nothing happening, and on the client it shows 137 avgqu-sz on the rbd1 device with 100% util
[12:33] <andreask> using the rbd kernel module on an osd is not supported and really not recommended ... can freeze the client and the osd
[12:33] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) has joined #ceph
[12:33] <odyssey4me> rbd create --size 102400 perftest1; rbd map perftest1 --pool rbd; mkfs.xfs /dev/rbd/rbd/perftest1; mkdir /srv/rbdperftest1; mount /dev/rbd/rbd/perftest1 /srv/rbdperftest1
[12:34] <odyssey4me> hmm, ok so a client cannot also be one of the servers in the cluster?
[12:34] <andreask> not if you are using the rbd kernel module
[12:35] <odyssey4me> Is there another way that I can achieve this in a supported manner?
[12:35] <andreask> for virtualization you can use qemu with librados support
[12:35] <andreask> in combination with kvm, works great
[12:36] <odyssey4me> OK, so if I have instances configured as described here, this should work fine? http://ceph.com/docs/next/rbd/libvirt/
[12:36] <phantomcircuit> works great if disk latency is low
[12:37] <andreask> odyssey4me: yes, that should work fine
[12:38] <odyssey4me> andreask - thanks, that was going to be my next test anyway... good to know about this limitation
[12:38] <andreask> yw
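For reference, the qemu/librbd path recommended above bypasses the kernel rbd module entirely; a sketch (pool and image names are illustrative, syntax per the qemu-rbd docs of the time):

```shell
# Create a 50G image directly in the rbd pool via librbd:
qemu-img create -f rbd rbd:rbd/vm1disk 50G

# Boot a KVM guest straight off the image, again via librbd:
# no "rbd map" and no kernel client needed on the OSD host.
qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=rbd:rbd/vm1disk,if=virtio,cache=writeback
```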
[12:38] * infinitytrapdoor (~infinityt@ Quit (Read error: Operation timed out)
[12:45] * leseb (~Adium@ has joined #ceph
[12:54] * infinitytrapdoor (~infinityt@ has joined #ceph
[12:55] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[13:03] * Meths (rift@ Quit (Remote host closed the connection)
[13:13] * diegows (~diegows@ has joined #ceph
[13:18] * infinitytrapdoor (~infinityt@ Quit ()
[13:21] * s2r2 (~s2r2@ has joined #ceph
[13:25] * infinitytrapdoor (~infinityt@ has joined #ceph
[13:45] * allsystemsarego (~allsystem@ has joined #ceph
[13:50] * haomaiwang (~haomaiwan@ has joined #ceph
[13:50] * haomaiwa_ (~haomaiwan@notes4.com) Quit (Read error: Connection reset by peer)
[13:54] * hujifeng_ (~hujifeng@ Quit (Ping timeout: 480 seconds)
[13:54] * Meths_ (rift@ has joined #ceph
[14:03] * zhangjf_zz2 (~zjfhappy@ Quit (Quit: 离开)
[14:07] * infinitytrapdoor (~infinityt@ Quit (Ping timeout: 480 seconds)
[14:16] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[14:26] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[14:31] <joelio> are config stanzas valid with both spaces and underscores?
[14:32] <joelio> i.e. rbd_cache == rbd cache
[14:32] <odyssey4me> I've seen quite a bit of comment around cephfs not being good for production usage. Is this still a current state, or a historical state?
[14:34] <andreask> still current
[14:34] <andreask> but a lot of work to change this is ongoing
[14:36] <odyssey4me> so essentially for any ceph cluster access from qemu, the librados drivers are the best bet for reliability and performance?
[14:36] <andreask> yes
[14:38] <andreask> joelio: in configuration files, yes
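Concretely, the two spellings asked about above are parsed identically in ceph.conf; a minimal illustrative fragment:

```ini
[client]
    ; the following two lines are equivalent; spaces and
    ; underscores are interchangeable in configuration files:
    rbd cache = true
    rbd_cache = true
```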
[14:38] * frank9999 (~frank@kantoor.transip.nl) Quit (Ping timeout: 480 seconds)
[14:42] * markit (~marco@ has joined #ceph
[14:43] <markit> hi, I run # for i in $(seq 0 5); do ceph osd tell $i bench ; done but I get only 6 'ok', where is the benchmark result?
[14:43] <joelio> odyssey4me: I use librbd using libvirt (OpenNebula middleware) - works great
[14:43] <joelio> markit: check logs
[14:43] <joelio> markit: it backgrounds the process and reports via logging iirc
[14:45] <markit> joelio: oh, opennebula... I'm using proxmox as a virtual environment, I'm confused about the difference from a "cloud solution" like opennebula
[14:46] * infinitytrapdoor (~infinityt@ has joined #ceph
[14:47] <joelio> markit: sure, I guess it's a case of whatever works for you. Proxmox afaik doesn't use libvirt. I have used it in the past for smaller scale stuff. This is for $WORK though, where we need something more suited to our requirements
[14:47] <markit> joelio: great, thanks, grep bench /var/log/ceph/ceph.log did the trick
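Putting that bench workflow together (OSD ids 0-5 as in markit's cluster; the results are logged asynchronously rather than printed):

```shell
# Ask each OSD to run its built-in write benchmark; each replies only "ok".
for i in $(seq 0 5); do ceph osd tell $i bench; done

# The throughput results appear later in the cluster log:
grep bench /var/log/ceph/ceph.log
```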
[14:48] * Volture (~quassel@office.meganet.ru) Quit (Remote host closed the connection)
[14:48] * Volture (~quassel@office.meganet.ru) has joined #ceph
[14:48] <markit> you are right, proxmox does NOT use libvirt
[14:48] <joelio> and something that's extensible. I found OpenNebula the easiest to grok and work out what going on (simple interfaces)
[14:48] <joelio> it's very powerful though - http://opennebula.org/documentation:rel4.0:intro
[14:48] <markit> joelio: I'm OT, but I'm wondering why openebula is not "a virtualization platform" and instead is described as "cloud"
[14:48] <joelio> YMMV, other middleware available etc.. :)
[14:49] <joelio> markit: Cloud is just a buzz word
[14:50] <jks> as far as I understood it (and I might be wrong) - one of the main differences in philosophy is that with proxmox you create virtual servers directly on hosts, whereas with opennebula you create them "in the cloud" - and then they will be deployed to hosts according to an algorithm of choice
[14:50] <jks> so one is host-oriented where the other is cluster-oriented
[14:51] <markit> jks: we call it "proxmox cluster" ;P (joking)
[14:51] <joelio> no, you can create host placement policies as you wish.. you can create virtual datacentres or zones and provision completely how you want it to be defined
[14:51] <jks> joelio, with proxmox?
[14:51] <joelio> no, ONE
[14:52] <jks> joelio, okay, yes - that was what I meant by algorithm of your choice
[14:52] <joelio> understood
[14:53] <markit> I've read that canonical has Juju or something like that... a lot of technologies that I don't understand :(
[14:53] <markit> but thanks for the (try of) clarifications :)
[14:53] <joelio> Juju is a config management tool, like Puppet/Chef/Ansible etc..
[14:53] <jks> I have just been trying various system in order to make an informed choice... and it seems difficult to find something that works well, with "persistent servers" (i.e. one disk image for one virtual machine that you want to keep around forever), as well as being able to deploy on multiple hosts dynamically
[14:54] <jks> Proxmox seemed to work very well for the "persistent server" use case (don't know what to call this)... but not so good on being able to dynamically spread out the virtual machines to various hosts and handle failing hosts
[14:55] <markit> jks: cluster? HA? fencing? Aren't those concepts needed in opennebula too?
[14:55] <jks> OpenNebula doesn't seem by default to be perfectly suited for "persistent servers", but probably works alright... but it is very good at spreading out the virtual machines according to your policy, handles failed hosts very well, etc. - and it is easy to use with Ceph
[14:56] <jks> markit, I don't think we're talking about the same thing... I wasn't talking about the frontend node failing, but merely VM Hosts failing
[14:56] <markit> proxmox supports ceph too
[14:56] <markit> ah, I see
[14:56] <joelio> http://opennebula.org/documentation:rel4.0:ftguide
[14:56] <markit> I call them "guests"
[14:56] <jks> I have also tried other systems like CloudStack, but found it very difficult to get them working with Ceph
[14:57] <jks> markit, hmm, by "guests" I mean virtual machines
[14:57] <markit> mmm me too
[14:57] <jks> markit, hosts = physical servers that run virtual machines
[14:57] <markit> let me re-read then
[14:57] <markit> jks: ok, I was talking about hosts, so a cluster of proxmox hosts
[14:57] <jks> markit, frontend = special server that runs the web interface, placement algorithm, monitoring, etc. (might be run on multiple servers in practice)
[14:57] <markit> and for HA etc. you need fencing and other setups
[14:58] <markit> jks: oh, ok, but what about "hosts" failing in OpenNebula?
[14:58] <jks> markit, I don't know what you mean by "HA" as such... it requires a more precise definition :-)
[14:58] <markit> sorry, High Availability
[14:58] <jks> I'm not talking about HA in the sense that a host can fail and virtual machines live on without anyone noticing it
[14:58] <jks> markit, yes, but High Availability is many things :-)
[14:59] <jks> I was talking about the ability to deal with a failed host by automatically starting up those virtual machines on a different host
[14:59] <jks> so for the virtual machine it would simply look like a power failure
[14:59] <markit> jks: yes, is the kind of HA I mean too
[14:59] <jks> markit, okay, then you don't need special fencing mechanism for that to work with opennebula
[15:00] <markit> but at least with proxmox, you need "quorum", be sure the "dead" node is really dead (so "fencing" is required), you have to setup redundant paths etc..
[15:00] <jks> markit, well okay, if you want it to be fully automatic and neat, you would need some kind of STONITH mechanism, yes
[15:01] <jks> I don't need it to be that sophisticated
[15:01] <markit> mmm how can you be sure that the node (host) is really down, and not that someone unplugged the ethernet cable and is going to plug it in again, while you have already started that node's VMs on another node?
[15:01] <markit> exactly, STONITH is a must
[15:01] * matt is now known as Guest160
[15:01] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[15:01] <jks> markit, I'm handling it manually... i.e. when a node fails, I know that the node has failed... and then I manually ask the system to boot up those failed vms on new hosts
[15:02] * KindOne (KindOne@0001a7db.user.oftc.net) has joined #ceph
[15:03] <markit> I see
[15:03] <jks> is that easy to setup with proxmox? - I might have overlooked it when I tried proxmox
[15:05] * AfC (~andrew@2001:44b8:31cb:d400:e8db:5954:42fa:287b) Quit (Quit: Leaving.)
[15:12] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) Quit (Quit: Leaving.)
[15:18] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) has joined #ceph
[15:19] * andreask (~andreask@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[15:26] * markbby (~Adium@ has joined #ceph
[15:27] * PerlStalker (~PerlStalk@ has joined #ceph
[15:31] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit (Quit: mtanski)
[15:40] * BillK (~BillK-OFT@124-148-212-240.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[15:41] * nhm (~nhm@184-97-193-106.mpls.qwest.net) has joined #ceph
[15:41] * ChanServ sets mode +o nhm
[15:43] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) has joined #ceph
[15:43] <xdeller> Hi, I raised this question a week ago on ceph-users, but no answer to date - why might turning a monitor off and then on cause some peering?
[15:52] * leseb (~Adium@ Quit (Quit: Leaving.)
[15:54] * ismell (~ismell@host-24-56-171-198.beyondbb.com) has joined #ceph
[16:06] * leseb (~Adium@ has joined #ceph
[16:15] * Volture (~quassel@office.meganet.ru) Quit (Ping timeout: 480 seconds)
[16:19] * infinitytrapdoor (~infinityt@ Quit (Ping timeout: 480 seconds)
[16:22] * odyssey4me (~odyssey4m@ Quit (Ping timeout: 480 seconds)
[16:22] * infinitytrapdoor (~infinityt@ has joined #ceph
[16:27] * Cube (~Cube@173-8-221-113-Oregon.hfc.comcastbusiness.net) has joined #ceph
[16:28] * Cube (~Cube@173-8-221-113-Oregon.hfc.comcastbusiness.net) Quit ()
[16:30] <markit> jks: back.. yes, it is, the ceph client is already there (or you install the package) and you add it (through the web interface) as storage
[16:31] <jks> markit: but does it easily support that if a host fails, you can click a button (or two) and the affected vms will be started up on other hosts?
[16:33] <markit> jks: you have to log in and copy the dead node's VM configs to the current (alive) node's path, if you have quorum; otherwise you first have to issue pvecm expected 1
[16:34] <markit> jks: for that you don't want to "just click a button", otherwise you are probably going to cause a disaster
[16:35] <jks> how much is involved in copying the dead node vm config? - just trying to get a feel of the practical aspects?
[16:35] <markit> something like mv /etc/pve/nodes/prox02/qemu-server/*.conf /etc/pve/nodes/prox01/qemu-server/
[16:35] <jks> I don't care if it is a button or two or three commands on the command line... but more than that, it becomes cumbersome :-)
[16:35] <markit> if prox02 node is dead and you are on prox01
[16:35] <jks> markit, and that would automatically update the web interface with the new vm placements, etc?
[16:35] <markit> yes
[16:35] <jks> nice!
[16:36] <jks> and this works with more than 2 hosts, right? :-)
[16:36] <markit> 3 is recommended for quorum reasons
[16:36] <markit> but I've a 2 node test setup
[16:36] <markit> so.. the more the better ;P
[16:36] <jks> interesting! I like the proxmox interface, so perhaps I should look more into this
[16:37] <jks> markit, by the looks of that cp command... does it require shared file storage?
[16:38] <jks> one of my criterias was to avoid having for example a single, shared NFS mount or similar... and only rely on Ceph storage
[16:38] <markit> jks: the configuration is kept in sync by their own stuff
[16:38] <markit> for vm storage you need shared storage (ceph)
[16:38] <jks> nice!
[16:39] <markit> in fact, when you lose quorum the config is frozen to avoid accidental modifications; that's why you have to issue 'expected 1' if you have a 2 node setup and one dies
[16:40] <markit> (when prox02 is up again, looks around the cluster, finds that prox01 has quorum, and then updates ITS config)
[16:40] <jks> sounds quite neat!
[16:40] <jks> is the frontend a SPOF or how does that work?
[16:41] <markit> no SPOF, every node can be used to manage the whole cluster
[16:41] <jks> every node provides the web interface also?
[16:41] <markit> sure
[16:41] <markit> you don't even know where you are logged in (kidding ;P)
[16:41] <jks> hmm, I'm interested! - I think I'll grab a copy and get it installed today ;-)
[16:42] <jks> do you know if it works with qemu 1.5.x?
[16:42] <markit> jks: beware only that it is a 'bare metal' installation; it will erase the destination disk
[16:42] <markit> jks: proxmox 3.0 has 1.4, proxmox 3.1 probably will have qemu 1.6
[16:43] <jks> markit, is it possible to install it on top of an Ubuntu installation or similar? .. if not, do you know what the bare metal installation is based on? (i.e. to ensure it's something I can keep updated)
[16:43] <jks> markit, do you know which version of 1.4 it has? - and have you got it to work with Ceph Cuttlefish?
[16:43] <markit> jks: http://pve.proxmox.com/wiki/Proxmox_VE_2.0_Cluster
[16:43] <markit> jks: bare metal is based upon Debian Wheezy, and uses the RH kernel
[16:44] <markit> you can install over Wheezy, but it is not supported (even if described in the wiki)
[16:44] <markit> really you'd better use a spare disk and go with the default installer
[16:44] <jks> okay, not a huge deal... I have 4 nodes to install this on, but testing all sorts of system on them - but I'll clear one for testing this
[16:44] <markit> (disconnect other disks you don't want to risk being affected)
[16:45] <jks> but I was testing creating PXE-booted VM hosts, which seemed quite easy with OpenNebula - I guess that would be out of the question with proxmox?
[16:45] <jks> (i.e. to avoid having disks in the VM hosts entirely, to avoid having to handle failed disks)
[16:45] <markit> mmm I think so, you need a real installation; I've read about an iSCSI one, maybe
[16:46] <markit> use http://pve.proxmox.com/wiki/ :)
[16:46] <jks> thanks :-)
[16:46] * odyssey4me (~odyssey4m@41-133-58-101.dsl.mweb.co.za) has joined #ceph
[16:46] <jks> do you know about the qemu version? it matters a lot for performance
[16:46] <markit> http://pve.proxmox.com/wiki/Proxmox_ISCSI_installation
[16:46] <markit> beware that some doc is outdated
[16:46] <markit> you can use the very helpful forum also
[16:47] <markit> jks: pve-qemu-kvm: 1.4-13
[16:47] <jks> just read about the pmxcfs on the wiki - seems to be backed by corosync, which I know pretty well... so that's good news for me :-)
[16:47] <jks> markit, oh, darn! before 1.4.2 it had odd performance issues with ceph
[16:48] <markit> jks: mmmm like?
[16:49] <jks> markit, if I ran something disk I/O intensive inside the VM (could be something like running rsync) - the VM would "stutter", meaning it would pause/lag
[16:50] <jks> markit, for example if you were pinging the VM while it was running rsync, the ping time would jump all over the place
[16:50] <markit> so test current proxmox, and become ready for the forthcoming (don't know when) 3.1
[16:51] * Wolff_John (~jwolff@ftp.monarch-beverage.com) has joined #ceph
[16:51] <markit> you tried with cuttlefish?
[16:51] <jks> markit, do you know when 3.1 will be out approx.?
[16:51] <jks> markit, yes, also tried with cuttlefish - same problem... different solution
[16:52] <markit> "when is ready" mostly ;P Probably they will wait for qemu 1.6. If you want you can check git repo and also update from "pvetest" (tech preview) repositories (but still with qemu 1.4-13 at the moment)
[16:53] <markit> jks: we can move in ##proxmox on freenode
[16:53] <jks> okay, I need to get my system testing done and into production within the next month or two, so probably can't wait that long :-)
[16:54] * mtanski (~mtanski@ has joined #ceph
[16:54] * odyssey4me (~odyssey4m@41-133-58-101.dsl.mweb.co.za) Quit (Ping timeout: 480 seconds)
[16:55] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[16:55] <jks> markit, joined that channel now ;-)
[16:57] * infinitytrapdoor (~infinityt@ Quit (Ping timeout: 480 seconds)
[16:58] * odyssey4me (~odyssey4m@ has joined #ceph
[17:04] * mtanski_ (~mtanski@ has joined #ceph
[17:10] * mtanski (~mtanski@ Quit (Ping timeout: 480 seconds)
[17:10] * mtanski_ is now known as mtanski
[17:44] * tnt (~tnt@212-166-48-236.win.be) Quit (Ping timeout: 480 seconds)
[17:44] * jlogan1 (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph
[17:56] * tnt (~tnt@92.203-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[17:59] * sagelap (~sage@2600:1012:b024:79c5:f04e:a572:8b02:865b) has joined #ceph
[18:00] <odyssey4me> odd, for some reason I can't seem to get a vm running using an rbd device when using libvirt - but I can if using a direct command-line via kvm... it keeps coming back with 'could not open disk image'... any thoughts?
[18:02] * ScOut3R_ (~ScOut3R@rock.adverticum.com) has joined #ceph
[18:05] * gregaf (~Adium@ Quit (Quit: Leaving.)
[18:05] * stacker666 (~stacker66@ Quit (Read error: Operation timed out)
[18:06] * gregaf (~Adium@2607:f298:a:607:e44a:4714:6b0f:b2a7) has joined #ceph
[18:07] * ScOut3R__ (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) has joined #ceph
[18:08] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[18:09] * Wolff_John (~jwolff@ftp.monarch-beverage.com) Quit (Ping timeout: 480 seconds)
[18:10] <loicd> Does anyone know where is the repository matching this project http://tracker.ceph.com/projects/calamari ?
[18:13] * sagelap (~sage@2600:1012:b024:79c5:f04e:a572:8b02:865b) Quit (Quit: Leaving.)
[18:13] * sagelap (~sage@73.sub-70-197-76.myvzw.com) has joined #ceph
[18:13] <loicd> sorry, my mistake
[18:14] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[18:14] * ScOut3R_ (~ScOut3R@rock.adverticum.com) Quit (Ping timeout: 480 seconds)
[18:15] * ntranger (~ntranger@proxy2.wolfram.com) has joined #ceph
[18:17] * markit (~marco@ Quit (Quit: Konversation terminated!)
[18:19] * diegows (~diegows@ Quit (Ping timeout: 480 seconds)
[18:19] * ScOut3R__ (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[18:21] <scheuk> I'm currently running bobtail. When a scrub happens, I see all of my clients' rbd drive latency spike while the backend osd disks don't experience any increased latency.
[18:21] <scheuk> what could be causing this?
[18:22] <scheuk> and is there a way to tweak how scrubbing affects client performance, just like recovery operations?
[18:22] <scheuk> or is this a known problem in bobtail and we should upgrade to cuttlefish?
[18:22] <xdeller> odyssey4me - escape symbols properly and you'll get the desired result
[18:26] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[18:27] * s2r2 (~s2r2@ Quit (Quit: s2r2)
[18:27] * sagelap (~sage@73.sub-70-197-76.myvzw.com) Quit (Read error: Connection reset by peer)
[18:30] * sagelap (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) has joined #ceph
[18:31] * sagelap (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) Quit ()
[18:31] * sagelap (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) has joined #ceph
[18:33] <odyssey4me> xdeller - interestingly, it appears that apparmor is blocking access to /tmp and /var/tmp - why would using rbd require access there?
[18:33] * hybrid512 (~walid@106-171-static.pacwan.net) Quit (Quit: Leaving.)
[18:35] <odyssey4me> xdeller - by the way, this is the config I'm trying to use in virsh: http://pastebin.com/V36QGiVL
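(The pastebin above is no longer available; a libvirt RBD disk stanza of the kind being discussed typically looks like the sketch below. The pool/image name is taken from the kvm command quoted later in the log; the monitor address is an assumption, and with cephx enabled an `<auth>` element would also be needed.)

```xml
<disk type='network' device='disk'>
  <!-- qemu opens the image through librbd; cache mode matches the CLI example -->
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='rbd' name='rbd/perftest2-system'>
    <!-- hypothetical monitor address; list each mon host here -->
    <host name='' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```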
[18:36] <xdeller> it's probably qemu or libvirt requesting such access
[18:36] * jjgalvez (~jjgalvez@ip72-193-215-88.lv.lv.cox.net) has joined #ceph
[18:36] * xdeller (~xdeller@ has left #ceph
[18:36] * xdeller (~xdeller@ has joined #ceph
[18:37] * leseb (~Adium@ Quit (Quit: Leaving.)
[18:38] <odyssey4me> xdeller - ok, edited the apparmor profile and it's no longer blocking access... but it still won't start
[18:39] <xdeller> are you trying to start the vm without libvirt?
[18:39] * grepory (~Adium@50-115-70-146.static-ip.telepacific.net) has joined #ceph
[18:39] <odyssey4me> xdeller - yes, that works fine - as long as I leave out the mon address
[18:39] <odyssey4me> I've taken the mon line out of the virsh config and I still see the same result
[18:39] <xdeller> not sure if I understood you right
[18:40] <odyssey4me> if I start it like this: kvm -m 2048 -smp 2 -drive file=rbd:rbd/perftest2-system,cache=none,if=virtio -net nic,model=virtio -net user -nographic -usbdevice tablet -balloon virtio -vnc :10
[18:40] <xdeller> you had removed the mon reference in the libvirt config?
[18:40] <odyssey4me> then it works fine
[18:40] <xdeller> wow
[18:41] <odyssey4me> it appears that virsh is doing this though: -drive file=rbd:rbd/perftest2-system,if=none,id=drive-virtio-disk0,format=raw,cache=none
[18:41] <xdeller> hope ceph devs can explain it, seemingly qemu works with the ceph.conf
[18:41] <xdeller> yep
[18:41] <xdeller> what's the error? ENOFILE?
[18:41] <odyssey4me> notice that 'if=none' with virsh, whereas my working start command uses if=virtio
[18:42] <odyssey4me> how do I get more debug to check that out?
[18:42] * n3c8-35575 (~mhattersl@pix.office.vaioni.com) Quit (Ping timeout: 480 seconds)
[18:42] <xdeller> strace probably
[18:42] <xdeller> as I remember there were some problems with moving mon strings to the bare cli; they needed a double slash before the port definition or so
[18:47] * Wolff_John (~jwolff@ftp.monarch-beverage.com) has joined #ceph
[18:47] <odyssey4me> xdeller - aha, got a debug log and it appears that cephx is required
[18:48] <odyssey4me> is it not possible to not use cephx?
[18:49] <xdeller> set none for auth cluster required, auth service required and auth client required
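(The settings xdeller lists go in the `[global]` section of ceph.conf; a sketch, for test environments only — as noted just below, cephx should stay enabled in production:)

```ini
[global]
# disable cephx entirely -- test clusters only
auth cluster required = none
auth service required = none
auth client required = none
```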
[18:59] <odyssey4me> aha, that works - this is just a test environment to familiarise myself... it would seem that it's best to use cephx for production?
[18:59] <odyssey4me> thanks for the help
[18:59] * oddomatik (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[19:04] <mtanski> Is there a way when using libcephfs to make a lower-level rados call to the object? In case I'd like to be able to do async read/write requests
[19:06] * oddomatik (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[19:13] * joshd1 (~jdurgin@2602:306:c5db:310:2cf4:aa79:959:e048) has joined #ceph
[19:20] * bergerx_ (~bekir@ Quit (Quit: Leaving.)
[19:24] * smiley (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) has joined #ceph
[19:26] * Tamil (~tamil@ has joined #ceph
[19:27] * vata (~vata@2607:fad8:4:6:d8c4:2e21:3c7e:cf30) has joined #ceph
[19:30] * Wolff_John_ (~jwolff@ftp.monarch-beverage.com) has joined #ceph
[19:30] * Wolff_John (~jwolff@ftp.monarch-beverage.com) Quit (Read error: Connection reset by peer)
[19:30] * Wolff_John_ is now known as Wolff_John
[19:36] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has left #ceph
[19:38] * sagelap1 (~sage@ has joined #ceph
[19:38] * sagelap (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) Quit (Ping timeout: 480 seconds)
[19:42] * xmltok (~xmltok@pool101.bizrate.com) Quit (Remote host closed the connection)
[19:42] * xmltok (~xmltok@relay.els4.ticketmaster.com) has joined #ceph
[19:51] * oddomatik (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[19:57] * xmltok_ (~xmltok@pool101.bizrate.com) has joined #ceph
[19:58] * xmltok (~xmltok@relay.els4.ticketmaster.com) Quit (Read error: Operation timed out)
[20:05] * diegows (~diegows@ has joined #ceph
[20:09] * dmick (~dmick@2607:f298:a:607:391e:4fd:d328:a6ee) has joined #ceph
[20:20] * goldfish (~goldfish@ has joined #ceph
[20:22] * s2r2 (~s2r2@f049030082.adsl.alicedsl.de) has joined #ceph
[20:39] <infernix> are there any recommended kernel packages for ubuntu 12.04?
[20:39] <infernix> i'm running with 3.5.0 on one box and it's giving me a lot of memory issues whereas an old 3.8.0 package i picked up a while ago seems to run fine
[20:39] <infernix> but there aren't any kernels in the bobtail repos i think
[20:41] <mtanski> you can use backported kernels from 13.04
[20:41] <mtanski> https://wiki.ubuntu.com/Kernel/LTSEnablementStack
[20:42] <infernix> that'd be 3.5.0 and that gives me problems
[20:42] <infernix> [172862.887645] ceph-osd: page allocation failure: order:5, mode:0x40d0
[20:43] <mtanski> I think you are running 12.10 (3.5) and not 13.04 kernel (3.8)
[20:45] <mtanski> http://www.ubuntuupdates.org/package/canonical_kernel_team/precise/main/base/linux-meta-lts-raring
[20:45] * odyssey4me (~odyssey4m@ Quit (Ping timeout: 480 seconds)
[20:47] <infernix> ah
[20:47] <infernix> i have linux-image-3.8.0-ceph_3.8.0-ceph-1_amd64.deb
[20:47] <infernix> that works well
[20:50] * dpippenger (~riven@tenant.pas.idealab.com) has joined #ceph
[20:53] <nhm> I'm using the ubuntu 3.8 kernel in 12.04 for testing and it's working quite well.
[20:54] <nhm> The debug kernels we have on the gitbuilder site work ok but may be slow.
[20:54] <infernix> k
[21:00] * Meths (rift@ has joined #ceph
[21:02] * Meths (rift@ Quit ()
[21:06] * Meths_ (rift@ Quit (Ping timeout: 480 seconds)
[21:07] * oddomatik (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[21:23] <infernix> nhm, how slow is slow on the gitbuilder debug kernel?
[21:23] <infernix> does it only affect kernel rbd and cephfs?
[21:24] * s2r2 (~s2r2@f049030082.adsl.alicedsl.de) Quit (Quit: s2r2)
[21:24] <nhm> infernix: Don't know for sure. I was doing some rados bench tests and in really high performance cases it was like 2/3rd the speed.
[21:31] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) has joined #ceph
[21:31] * s2r2 (~s2r2@f049030082.adsl.alicedsl.de) has joined #ceph
[21:32] * oddomatik (~Adium@pool-71-106-149-194.lsanca.dsl-w.verizon.net) has joined #ceph
[21:33] * dosaboy_ (~dosaboy@host86-161-206-191.range86-161.btcentralplus.com) has joined #ceph
[21:34] * allsystemsarego (~allsystem@ Quit (Quit: Leaving)
[21:37] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[21:39] <dmick> so, monitor down: I can see that it's not present in 'quorum' in mon_status output; it's also not in 'outside_quorum'; it is in 'extra probe peers', and there's an entry in monmap, but its IP addr there is All of that together doesn't seem like a super-crisp "I think I should know this mon but I can't reach him". Is there a better solid indication of that?
[21:40] * dosaboy (~dosaboy@host86-164-137-144.range86-164.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[21:42] <dmick> ...or Noah's email :)
[21:43] <dmick> sagewk: what's "outside_quorum" mean, then?
[21:43] <gregaf> dmick: outside_quorum is monitors which are known to be up, but which say they aren't in a quorum
[21:44] <dmick> ah, ok
[21:44] <gregaf> sounds like the monitor was in the initial list when the cluster was created, but has never communicated with anybody that the monitor you're looking at has
[21:44] <dmick> is 'extra probe peer' state where uncontactable monitors in the initial list stay?
[21:45] <gregaf> the extra probe peer list is composed of the mon_initial_members plus any probe peers you've added via the admin socket
[21:45] <dmick> ...presumably if they haven't been sucked into the quorum
[21:45] <gregaf> or maybe it's mon_initial_members + "admin_socket additions" - "already contacted"
[21:45] <dmick> (there's only the dead one on it, and they all were in ceph.conf)
[21:45] <gregaf> yeah
[21:46] <gregaf> so that's only those they haven't yet contacted, I guess
[21:49] <joao> it is my belief that gregaf is right about the extra probe peers
[21:49] * sagelap1 (~sage@ Quit (Ping timeout: 480 seconds)
[21:49] <joao> those should become part of the monmap with an updated ip as soon as they probe the other monitors for the first time, iirc
[21:51] <joao> the ip means that they were declared in the mon initial members, and not as part of an initial monmap; other monitors will wait for the first probe from a monitor claiming to be a monitor in that list to update their ip
[21:52] <joao> dmick, as far as I can remember (I'll have to check the code to make sure), you will only get an 'outside_quorum' monitor when you probe the monitor that hasn't been able to get into quorum yet
[21:53] <joao> and yeah, it should be safe to assume that a monitor is down if it's not present either in the quorum or the outside_quorum of any monitor in the quorum
[21:53] <dmick> would one expect the state to look similar if one had had contact, but lost it? (except probably not in extra_probe_peers anymore?)
[21:55] <joao> if a monitor in the quorum were to lose contact with another monitor M, then M would not be present on the quorum or the outside_quorum list
[21:55] <dmick> right
[21:55] <joao> if M were to be up however, you'd probably end up seeing his last view of the quorum (until some timeout was triggered and quorum reset), or an empty quorum and the monitor would be in state probing
[21:56] <joao> if M were to be unable to contact the other monitors in the quorum, then at some point it would not show any quorum members
[21:57] <joao> a monitor only updates its vision of what the current quorum looks like when it probes (and the other monitors reply)
[21:57] <joao> does this shed any light on what you were looking for?
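(The quorum behaviour joao describes rests on simple majority voting over the monmap; a toy sketch of that rule — the function names are illustrative, not Ceph API:)

```python
def majority(num_mons: int) -> int:
    """Smallest number of monitors that constitutes a majority of the monmap."""
    return num_mons // 2 + 1

def can_form_quorum(total_mons: int, reachable_mons: int) -> bool:
    """A quorum can exist only while a majority of monitors are reachable."""
    return reachable_mons >= majority(total_mons)

# e.g. a 3-mon cluster survives one mon going down, but not two
```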
[21:59] * sagelap (~sage@ has joined #ceph
[22:01] * dosaboy (~dosaboy@host86-150-246-156.range86-150.btcentralplus.com) has joined #ceph
[22:02] * joshd1 (~jdurgin@2602:306:c5db:310:2cf4:aa79:959:e048) Quit (Quit: Leaving.)
[22:03] * dosaboy_ (~dosaboy@host86-161-206-191.range86-161.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[22:04] * BManojlovic (~steki@237-231.197-178.cust.bluewin.ch) has joined #ceph
[22:04] * s2r2_ (~s2r2@f049030082.adsl.alicedsl.de) has joined #ceph
[22:04] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[22:06] * tnt_ (~tnt@92.203-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[22:06] * tchmnkyz_ (~jeremy@ip23.67-202-99.static.steadfastdns.net) has joined #ceph
[22:06] * terje_ (~joey@97-118-115-214.hlrn.qwest.net) has joined #ceph
[22:06] * sagelap (~sage@ Quit (reticulum.oftc.net oxygen.oftc.net)
[22:06] * s2r2 (~s2r2@f049030082.adsl.alicedsl.de) Quit (reticulum.oftc.net oxygen.oftc.net)
[22:06] * dpippenger (~riven@tenant.pas.idealab.com) Quit (reticulum.oftc.net oxygen.oftc.net)
[22:06] * tnt (~tnt@92.203-67-87.adsl-dyn.isp.belgacom.be) Quit (reticulum.oftc.net oxygen.oftc.net)
[22:06] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) Quit (reticulum.oftc.net oxygen.oftc.net)
[22:06] * terje (~joey@97-118-115-214.hlrn.qwest.net) Quit (reticulum.oftc.net oxygen.oftc.net)
[22:06] * sjust (~sam@ Quit (reticulum.oftc.net oxygen.oftc.net)
[22:06] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) Quit (reticulum.oftc.net oxygen.oftc.net)
[22:06] * tchmnkyz (~jeremy@0001638b.user.oftc.net) Quit (reticulum.oftc.net oxygen.oftc.net)
[22:06] * Guest2843 (~coyo@thinks.outside.theb0x.org) Quit (reticulum.oftc.net oxygen.oftc.net)
[22:06] * s2r2_ is now known as s2r2
[22:09] * Coyo (~coyo@thinks.outside.theb0x.org) has joined #ceph
[22:09] * Coyo is now known as Guest257
[22:17] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) has joined #ceph
[22:18] * dpippenger (~riven@tenant.pas.idealab.com) has joined #ceph
[22:19] * sagelap (~sage@2607:f298:a:607:61dd:2b6f:b08f:b063) has joined #ceph
[22:19] * sjust (~sam@2607:f298:a:607:baac:6fff:fe83:5a02) has joined #ceph
[22:21] * stacker666 (~stacker66@ has joined #ceph
[22:25] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) has joined #ceph
[22:29] <ntranger> Hey all! I'm new to Ceph, and I just had a question about a multiple disk configuration. Do I create an OSD per disk, or is there a way to configure, say, 4 disks as one OSD? (which is what I'm wanting to do, with a 12 disk server)
[22:30] <dignus> you can
[22:30] <dignus> but why? :)
[22:30] * joshd1 (~joshd@2602:306:c5db:310:b0be:ab79:188d:4fc9) has joined #ceph
[22:30] <dmick> better to have one osd per disk. keep in mind they're just daemons with storage attached, so multiple per host is fine
[22:32] <ntranger> ok. We have 3 servers we're setting up, and I was just trying to wrap my mind around how I should set this up.
[22:35] * WarrenUsui (~WarrenUsu@ has joined #ceph
[22:38] * nwat (~oftc-webi@eduroam-251-132.ucsc.edu) has joined #ceph
[22:41] <joelio> ntranger: I've tested various combos of numbers of disks per osd and it's just, on the whole, easier to manage with an OSD per disk.
[22:43] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[22:45] * oddomatik (~Adium@pool-71-106-149-194.lsanca.dsl-w.verizon.net) Quit (Quit: Leaving.)
[22:48] * s2r2 (~s2r2@f049030082.adsl.alicedsl.de) Quit (Quit: s2r2)
[22:50] * fridudad_ (~oftc-webi@p5B09DB91.dip0.t-ipconnect.de) has joined #ceph
[22:50] <scheuk> ntranger: 1 osd per disk is the usual setup. You could do a hardware raid setup with an OSD per raid set; that's what we have. It all depends on what you are using your ceph cluster for and what storage performance your clients need
[23:06] * jakes (~oftc-webi@dhcp-171-71-119-30.cisco.com) has joined #ceph
[23:12] <grepory> if i need to change all of the ip addresses that my mons listen on… what's the best way to go about doing that?
[23:12] <grepory> mons and osds, actually
[23:12] <grepory> also, will probably be setting up a cluster network in the coming days as well… should i do all of this at the same time? will it cause a headache to setup the cluster network later?
[23:17] <ntranger> Thanks guys for the help! This is just a file dump for the most part, so speed isn't really all that crucial. I was going to raid it, but was told not to, so I'll just OSD each disk. I greatly appreciate the help! Thanks again! :)
[23:19] <davidz> grepory: You should set up the cluster network beforehand. Switching networking around afterwards is going to bring everything down at least at the point when a majority of mons aren't communicating.
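(The cluster network davidz recommends setting up beforehand is a ceph.conf change; a sketch, with the subnets as placeholder assumptions:)

```ini
[global]
# client <-> daemon traffic
public network =
# OSD replication and heartbeat traffic
cluster network =
```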
[23:20] <grepory> davidz: that's actually okay at this point. i could even go so far as to reprovision the whole cluster. tbh.
[23:20] <grepory> we're getting ready to put ceph into production after testing
[23:21] <grepory> so that means putting it on the appropriate vlans, etc.
[23:21] * vata (~vata@2607:fad8:4:6:d8c4:2e21:3c7e:cf30) Quit (Quit: Leaving.)
[23:23] * stacker666 (~stacker66@ Quit (Ping timeout: 480 seconds)
[23:23] * andreask (~andreask@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[23:23] * ChanServ sets mode +v andreask
[23:24] * DarkAce-Z is now known as DarkAceZ
[23:24] <scheuk> I'm currently running bobtail. When a scrub happens, I see all of my clients' rbd drive latency spike while the backend osd disks don't experience any increased latency.
[23:24] <scheuk> what could be causing this?
[23:24] <scheuk> and is there a way to tweak how scrubbing affects client performance, just like recovery operations?
[23:25] <scheuk> or is this a known problem in bobtail and we should upgrade to cuttlefish?
[23:25] * sagelap (~sage@2607:f298:a:607:61dd:2b6f:b08f:b063) Quit (Quit: Leaving.)
[23:25] <nhm> scheuk: how's CPU usage on the mons and OSDs when it happens?
[23:26] <scheuk> the osd's that are scrubbing are a little high
[23:26] <scheuk> mons are low
[23:27] <scheuk> also it seems like the master osd usually consumes a lot of memory as well
[23:27] <scheuk> not enough to swap the machine though
[23:28] <nhm> Probably best to get Sam's opinion
[23:28] <jakes> I was trying to install ceph as in http://ceph.com/docs/next/start/quick-ceph-deploy/ . I have a three-node cluster. I could run ceph -w on only one node; the other two nodes give me a cephx authentication error. ceph -w on the first node gives me the right status
[23:29] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[23:29] * Wolff_John (~jwolff@ftp.monarch-beverage.com) Quit (Quit: ChatZilla 0.9.90 [Firefox 22.0/20130618035212])
[23:29] <scheuk> jakes: you need to make sure you have the ceph.client.admin.keyring in /etc/ceph
[23:30] <scheuk> that's the default keyring file used for the ceph command to authenticate to the monitors
[23:31] <jakes> yeah. it is not there.. Why is it not created for the two nodes? Do we need to manually do it, or did I miss something in my installation?
[23:31] <joelio> there is a ceph-deploy admin command for that
[23:31] * oddomatik (~Adium@ has joined #ceph
[23:31] <joelio> that turns a node into an admin node, so you can run ceph based commands
[23:33] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) has joined #ceph
[23:33] <joelio> jakes: http://ceph.com/docs/master/rados/deployment/ceph-deploy-admin/
[23:35] <jakes> Thanks. What does the ceph-deploy new command do? Can it be executed on any random node in the cluster?
[23:36] <joelio> sets up initial mon members afaik
[23:36] <joelio> that command caught me out a few times, as I assumed it didn't add a mon
[23:36] <joelio> it does
[23:36] <jakes> So, if we have multiple monitor nodes, do we need to do ceph-deploy new for other nodes?..
[23:37] <joelio> you want to be careful on how many mons you have
[23:37] <jakes> But I saw that if we re-execute it, ceph.conf gets rewritten with new values
[23:37] <joelio> more mons create more traffic, so getting the number right is prudent. I have 6 nodes, 3 mons
[23:38] <nwat> ceph-deploy new sets up a ceph.conf
[23:39] <jakes> If we have two monitor nodes, how do we setup using ceph-deploy?
[23:39] <dmick> ceph-deploy mon
[23:39] <dmick> not new
[23:39] <dmick> new is new cluster
[23:40] <dmick> "To create a cluster with ceph-deploy, use the new command"
[23:40] <jakes> so, are we executing ceph-deploy new only for first monitor node?
[23:41] <joelio> dmick: sure, but the text after it is misleading. To me it looked like I had to add all the hosts that would be part of the cluster
[23:42] <joelio> it's just the initial mon, which could just be the one host, with mons added in the next step.. or the initial list of mon hosts?
[23:42] <sjust> scheuk: you might consider adjusting the osd_scrub_chunk_max
[23:42] <jakes> yup..same question
[23:42] <sjust> to 10 or so
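(sjust's suggestion would go in the `[osd]` section of ceph.conf; a sketch:)

```ini
[osd]
# scrub fewer objects per chunk so client I/O interleaves more often
osd scrub chunk max = 10
```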
[23:43] <joelio> jakes: I did the 3 mons and then added them again in the next step. I think it's fine just to use the initial host (probably the one you're working on) and then add mons in the next step
[23:43] <joelio> but I'm not entirely sure, both ways I guess work
[23:44] <jakes> After ceph-deploy new, the mon initial members and mon host variables are set in the ceph.conf file. Only that mon is updated in the conf file
[23:44] <jakes> this is the confusing part
[23:45] <joelio> you're not alone in this sentiment with ceph-deploy, haha
[23:45] <jakes> :)
[23:45] <dmick> joelio: you can specify all the mon hosts there or not
[23:45] <dmick> but they'll need to be ceph-deploy mon create'd regardless
[23:46] <joelio> dmick: appreciate that; it just feels like if there are multiple ways of doing it, you may have done it wrong in a particular step
[23:47] <jakes> dmick: So, it just means that one initial mon is needed to start a cluster. Later, we can add monitors using ceph-deploy mon create. right?
[23:51] * Tamil (~tamil@ Quit (Quit: Leaving.)
[23:52] <dmick> you can run a cluster with one mon, yes.
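(The overall ceph-deploy flow the discussion converges on can be sketched as below; hostnames and device paths are placeholders, and these commands assume reachable cluster nodes — they aren't runnable in isolation:)

```shell
ceph-deploy new mon1                      # write ceph.conf with mon1 as the initial member
ceph-deploy mon create mon1 mon2 mon3     # deploy the monitor daemons
ceph-deploy osd create node1:/dev/sdb     # one OSD per disk, as discussed above
ceph-deploy admin node1 node2 node3       # push ceph.client.admin.keyring to /etc/ceph
```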
[23:54] * Tamil (~tamil@ has joined #ceph
[23:55] <scheuk> sjust: I'll take a look at that one
[23:56] * jakes (~oftc-webi@dhcp-171-71-119-30.cisco.com) Quit (Remote host closed the connection)
[23:56] * johnu (~oftc-webi@dhcp-171-71-119-30.cisco.com) has joined #ceph
[23:58] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) Quit (Quit: Leaving.)
[23:58] <johnu> dmick: My cluster status says OK, but my ceph.conf file has only one mon node's info, which was set up using the ceph-deploy new command. Is that fine? I read that ceph-deploy automatically fills in cluster information ( http://ceph.com/docs/master/rados/configuration/ceph-conf/)
[23:59] * fridudad_ (~oftc-webi@p5B09DB91.dip0.t-ipconnect.de) Quit (Remote host closed the connection)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.