#ceph IRC Log


IRC Log for 2013-05-13

Timestamps are in GMT/BST.

[0:05] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[0:05] * loicd (~loic@magenta.dachary.org) has joined #ceph
[0:07] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[0:09] * jjgalvez (~jjgalvez@cpe-76-175-30-67.socal.res.rr.com) has joined #ceph
[0:12] * b1tbkt_ (~Peekaboo@24-216-67-250.dhcp.stls.mo.charter.com) Quit (Remote host closed the connection)
[0:37] * danieagle (~Daniel@186.214.76.12) has joined #ceph
[0:39] * diegows (~diegows@190.190.2.126) has joined #ceph
[0:48] * tnt (~tnt@91.177.214.32) Quit (Ping timeout: 480 seconds)
[1:37] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:44] * LeaChim (~LeaChim@176.250.188.136) Quit (Ping timeout: 480 seconds)
[1:52] * danieagle (~Daniel@186.214.76.12) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[1:53] * The_Bishop (~bishop@2001:470:50b6:0:d59f:b451:2b64:16b6) Quit (Quit: Wer zum Teufel ist dieser Peer? Wenn ich den erwische dann werde ich ihm mal die Verbindung resetten!)
[2:28] * yehuda_hm (~yehuda@2602:306:330b:1410:942f:17b1:c111:4865) Quit (Ping timeout: 480 seconds)
[2:32] * yehuda_hm (~yehuda@2602:306:330b:1410:942f:17b1:c111:4865) has joined #ceph
[2:39] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[2:59] * yehuda_hm (~yehuda@2602:306:330b:1410:942f:17b1:c111:4865) Quit (Ping timeout: 480 seconds)
[3:07] * themgt (~themgt@24-177-232-33.dhcp.gnvl.sc.charter.com) Quit (Quit: themgt)
[3:21] * Vjarjadian_ (~IceChat77@90.214.208.5) has joined #ceph
[3:27] * Vjarjadian (~IceChat77@90.214.208.5) Quit (Ping timeout: 480 seconds)
[3:49] * themgt (~themgt@96-37-28-221.dhcp.gnvl.sc.charter.com) has joined #ceph
[4:02] * treaki_ (85601c43c3@p4FF4A1B4.dip0.t-ipconnect.de) has joined #ceph
[4:06] * treaki__ (db70d26d90@p4FDF6EBA.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[4:17] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[4:17] * loicd (~loic@magenta.dachary.org) has joined #ceph
[4:18] * diegows (~diegows@190.190.2.126) Quit (Ping timeout: 480 seconds)
[4:38] * Muhlemmer (~kvirc@cable-88-137.zeelandnet.nl) Quit (Quit: KVIrc 4.3.1 Aria http://www.kvirc.net/)
[5:07] * rustam (~rustam@94.15.91.30) has joined #ceph
[5:08] * yehuda_hm (~yehuda@2602:306:330b:1410:942f:17b1:c111:4865) has joined #ceph
[5:08] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[5:09] * themgt (~themgt@96-37-28-221.dhcp.gnvl.sc.charter.com) Quit (Quit: themgt)
[5:19] * themgt (~themgt@24-177-232-33.dhcp.gnvl.sc.charter.com) has joined #ceph
[5:24] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[5:24] * loicd (~loic@magenta.dachary.org) has joined #ceph
[6:25] * rustam (~rustam@94.15.91.30) has joined #ceph
[6:27] * john_barbee_ (~jbarbee@c-98-226-73-253.hsd1.in.comcast.net) Quit (Quit: ChatZilla 0.9.90 [Firefox 21.0/20130506154904])
[6:27] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[6:45] * sjusthm (~sam@71-83-191-116.dhcp.gldl.ca.charter.com) has joined #ceph
[6:52] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[7:11] * ken1 (~quanta@14.160.47.146) has joined #ceph
[7:22] <ken1> ceph 0.56 and 0.61: anyone getting the `ls` hanging problem on random folders?
[7:31] * sjusthm (~sam@71-83-191-116.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[7:34] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[7:36] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[7:45] * NyanDog (~q@103.29.151.3) Quit (Quit: Lost terminal)
[7:48] * themgt (~themgt@24-177-232-33.dhcp.gnvl.sc.charter.com) Quit (Quit: themgt)
[8:11] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[8:18] * ShaunR- (~ShaunR@staff.ndchost.com) Quit (Ping timeout: 480 seconds)
[8:20] * tnt (~tnt@91.177.214.32) has joined #ceph
[8:40] * Vjarjadian_ (~IceChat77@90.214.208.5) Quit (Quit: Easy as 3.14159265358979323846... )
[8:43] * rustam (~rustam@94.15.91.30) has joined #ceph
[8:45] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[8:51] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) Quit (Quit: Leaving.)
[8:54] * rustam (~rustam@94.15.91.30) has joined #ceph
[8:55] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[9:00] * Zethrok_ (~martin@95.154.26.34) Quit (Read error: Connection reset by peer)
[9:01] * Zethrok (~martin@95.154.26.34) has joined #ceph
[9:04] * eschnou (~eschnou@85.234.217.115.static.edpnet.net) has joined #ceph
[9:15] * topro (~topro@host-62-245-142-50.customer.m-online.net) has joined #ceph
[9:17] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) has joined #ceph
[9:19] <ken1> I have submitted a bug here: http://tracker.ceph.com/issues/5036
[9:21] * tnt (~tnt@91.177.214.32) Quit (Ping timeout: 480 seconds)
[9:22] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[9:25] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) Quit (Ping timeout: 480 seconds)
[9:26] * ScOut3R (~ScOut3R@212.96.47.215) has joined #ceph
[9:28] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Quit: Leaving.)
[9:28] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:31] * jgallard (~jgallard@gw-aql-129.aql.fr) has joined #ceph
[9:35] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[9:36] * tnt (~tnt@212-166-48-236.win.be) has joined #ceph
[9:37] * dignus_ (~dignus@bastion.jkit.nl) has joined #ceph
[9:37] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Quit: Konversation terminated!)
[9:38] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) has joined #ceph
[9:38] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[9:39] * dignus (~dignus@bastion.jkit.nl) Quit (Ping timeout: 480 seconds)
[9:40] * rustam (~rustam@94.15.91.30) has joined #ceph
[9:42] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[9:49] * leseb (~Adium@83.167.43.235) has joined #ceph
[9:50] * rustam (~rustam@94.15.91.30) has joined #ceph
[9:51] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[9:56] * bergerx_ (~bekir@78.188.101.175) has joined #ceph
[9:58] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[10:05] * jjgalvez (~jjgalvez@cpe-76-175-30-67.socal.res.rr.com) Quit (Quit: Leaving.)
[10:05] * jjgalvez (~jjgalvez@cpe-76-175-30-67.socal.res.rr.com) has joined #ceph
[10:05] * dxd828 (~dxd828@195.191.107.205) Quit (Quit: Leaving)
[10:05] * dxd828 (~dxd828@195.191.107.205) has joined #ceph
[10:07] * alo (~al.o@79.59.209.97) has joined #ceph
[10:07] * jjgalvez1 (~jjgalvez@cpe-76-175-30-67.socal.res.rr.com) has joined #ceph
[10:08] * rustam (~rustam@94.15.91.30) has joined #ceph
[10:09] * Rocky (~r.nap@188.205.52.204) has joined #ceph
[10:09] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[10:10] * agh (~oftc-webi@gw-to-666.outscale.net) has joined #ceph
[10:11] <agh> Hello to all
[10:11] <agh> I'm looking for some info on Openstack over Ceph
[10:11] <agh> Is live migration possible with RBD only ? (without CephFS), like in Proxmox ?
[10:13] * jjgalvez (~jjgalvez@cpe-76-175-30-67.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[10:14] * jgallard (~jgallard@gw-aql-129.aql.fr) Quit (Remote host closed the connection)
[10:15] * jgallard (~jgallard@gw-aql-129.aql.fr) has joined #ceph
[10:15] <tnt> I'm not sure how it works in openstack but I do Xen live migration over RBD without issues.
[10:17] * fridudad (~oftc-webi@fw-office.allied-internet.ag) has joined #ceph
[10:20] <agh> tnt: mmm... but i want to use OpenStack with KVM...
[10:23] * jjgalvez (~jjgalvez@cpe-76-175-30-67.socal.res.rr.com) has joined #ceph
[10:24] <andreask> agh: if you setup your instances to boot from volume that should work fine in grizzly
[10:24] <tnt> well, you can just try :) The only thing that could be an issue is if KVM doesn't send a flush command to the driver before restarting on the other end, but if they didn't, other drivers would have issues as well I think.
[10:24] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Quit: Leaving.)
[10:25] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[10:25] <agh> andreask: ok. Did you try yourself ? (I have no Openstack installation yet, I use Proxmox)
[10:28] * jjgalvez1 (~jjgalvez@cpe-76-175-30-67.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[10:29] * LeaChim (~LeaChim@176.250.188.136) has joined #ceph
[10:30] <ken1> Am I the only one getting this problem: http://tracker.ceph.com/issues/5036
[10:30] <ken1> as you can see, the `ls` process is in D state
[10:30] <andreask> agh: I have read some openstack mailing list posts that it works with some limitations in usability
[10:31] <agh> andreask: ok.. thanks for your answer
[10:31] <andreask> agh: yw
[10:32] * alexxy[home] (~alexxy@2001:470:1f14:106::2) Quit (Read error: Connection reset by peer)
[10:32] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[10:34] <ay> Hi. When starting a cluster of three nodes by running service ceph -a start from node01, it starts node01 and node02, but node03 says global_init: unable to open config file from search list
[10:34] <andreask> ken1: hmm ... really old kernel in that bugreport
[10:34] <ay> (and a nonexistent /tmp/ceph.conf.<hash>) after
[10:34] <ay> Starting node03 manually works.
[10:34] <ay> Any ideas?
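A quick sanity check for the kind of failure ay describes: `service ceph -a start` reaches the other nodes over ssh and matches the host entries in each [mon.X]/[osd.X] section of ceph.conf against the node's short hostname, so verifying both is a reasonable first step (node names are the ones from the conversation):

    ssh node03 hostname -s                 # should match the host = entries for node03 in ceph.conf
    ssh node03 ls -l /etc/ceph/ceph.conf   # confirm the conf really is in place on the remote node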
[10:42] * jjgalvez (~jjgalvez@cpe-76-175-30-67.socal.res.rr.com) Quit (Quit: Leaving.)
[10:42] * jjgalvez (~jjgalvez@cpe-76-175-30-67.socal.res.rr.com) has joined #ceph
[10:42] * rustam (~rustam@94.15.91.30) has joined #ceph
[10:43] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[10:49] <andreask> ay: ceph.conf is in place on node03?
[10:50] * jjgalvez (~jjgalvez@cpe-76-175-30-67.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[10:50] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[10:52] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[10:54] <andreask> ken1: you read these recommendations for using Cephfs? http://ceph.com/docs/master/install/os-recommendations/
[11:01] <tnt> Are the text logs stored in the mon store.db cleaned up after a while? Over the weekend it seems to have quadrupled in size, in a small 1mon+2osd test cluster. It's already 5 times larger than on our prod cluster ...
[11:03] <ay> andreask: It's in its place (in /etc/ceph/)
[11:10] <andreask> tnt: you have compact on trim enabled for the mon?
[11:12] <ken1> andreask: yes, I already read that link. But do you have any idea about this problem, I mean the root cause.
[11:13] <andreask> ken1: no, sorry
[11:14] <ken1> andreask: Are you running Ceph in production? If so, what kernel version and which distro?
[11:15] <tnt> andreask: isn't that the default ?
[11:16] <andreask> ken1: no, I don't run Cephfs in production ... only RBD and rgw and it's all Ubuntu 12.04
[11:17] <andreask> tnt: yes but IIRC there have already been people reporting problems if it's enabled
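For reference, the option being discussed here is a ceph.conf setting for the monitors; a minimal sketch (the value shown is just an example, and whether you want it on or off is exactly the trade-off being debated):

    [mon]
        mon compact on trim = false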
[11:25] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[11:25] <alo> Hi, I'm looking to set up a Ceph RBD production backend for Cloudstack. I've set up a lab environment with commodity hardware and now I'm sizing the production environment. We are thinking to start with 3 DELL 320 (Perc H300) with 1 SSD for O.S. and journal and two 1TB sata. Is it better to use SAS? Is it better to use one SSD per journal? Do you have any other suggestions?
[11:27] <tnt> for 2 spinners, a single SSD will do fine I think.
[11:32] <tnt> Who's running cuttlefish in prod here ?
[11:34] * fabioFVZ (~fabiofvz@213.187.20.119) has joined #ceph
[11:36] * coyo (~unf@00017955.user.oftc.net) Quit (Quit: F*ck you, I'm a daemon.)
[11:42] <tnt> andreask: restarting the mon cleared it ...
[11:54] <loicd> alo: "1 SSD for O.S. and journal", IIRC this can lead to problematic situations if the journal is full ( or very active ) because the OS will lag so badly that the OSD will timeout. fghaas wrote a blog post about this, I think.
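A hedged sketch of the alternative loicd is hinting at: pointing the OSD journal at a raw partition of its own rather than sharing the OS SSD. The option name is a standard ceph.conf setting, the device path is a placeholder:

    [osd.0]
        ; raw partition on a dedicated SSD, not shared with the OS
        osd journal = /dev/sda5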
[11:55] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (Remote host closed the connection)
[12:00] <wogri_risc> alo: also the ssd for the OS doesn't speed up anything concerning ceph. It's better to boot from the network in this case.
[12:01] <tnt> Or you can also boot from an internal SDCard IIRC.
[12:01] <wogri_risc> yeah, or from floppy disks :)
[12:05] <loicd> wogri_risc: :-D
[12:07] <alo> wogri_risc: I hadn't considered the possibility to make a network boot... I like it!
[12:09] <alo> floppy disk... I think I would have some difficulties to install a reader on the server. I'll ask Dell :D
[12:16] * br1 (~br1@79.59.209.97) has joined #ceph
[12:19] <mrjack> hm
[12:19] <mrjack> i am unhappy :(
[12:20] <mrjack> i cannot upgrade to 0.61.1
[12:27] <andreask> mrjack: why?
[12:27] <mrjack> andreask: my monitors fail to convert 0.56.6 to 0.61.1
[12:28] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[12:31] <mrjack> andreask: 0 _convert_machines mdsmap gv 31535406 already exists
[13:04] * diegows (~diegows@190.190.2.126) has joined #ceph
[13:08] * tziOm (~bjornar@194.19.106.242) has joined #ceph
[13:09] * vipr (~vipr@78-23-113-37.access.telenet.be) has joined #ceph
[13:16] * vipr_ (~vipr@78-23-112-130.access.telenet.be) Quit (Ping timeout: 480 seconds)
[13:24] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[13:25] * agh (~oftc-webi@gw-to-666.outscale.net) Quit (Quit: Page closed)
[13:25] * agh (~oftc-webi@gw-to-666.outscale.net) has joined #ceph
[13:26] <wogri_risc> mrjack: you seem to have hit this bug: http://tracker.ceph.com/issues/4974 (unless you're smart webapplicatoins on the mailinglist, then joao gave you the answer anyways)
[13:26] <joao> currently working on the fix
[13:27] <joao> should be straightforward enough to have it in the next couple of hours
[13:27] <joao> (I hope)
[13:27] <wogri_risc> very optimistic :)
[13:28] * yehuda_hm (~yehuda@2602:306:330b:1410:942f:17b1:c111:4865) Quit (Read error: Connection timed out)
[13:29] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[13:33] <absynth> wogri_risc: mrjack is smart weblications
[13:34] <wogri_risc> oh yeah, read too fast. weblications. right. sorry :)
[13:35] <tnt> Ok, I think I'll delay my upgrade then :p
[13:35] <wogri_risc> I've delayed mine also :)
[13:36] <absynth> we are just glad we "only" upgraded to 0.56.6
[13:36] * yehuda_hm (~yehuda@2602:306:330b:1410:942f:17b1:c111:4865) has joined #ceph
[13:36] <tnt> alo: my dell servers have an internal SDCard reader by default :p
[13:40] * wido__ is now known as wido
[13:40] * ken1 (~quanta@14.160.47.146) Quit (Quit: Leaving.)
[13:48] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[13:50] * med (~medberry@00012b50.user.oftc.net) Quit (Quit: Coyote finally caught me)
[13:52] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[13:52] * treaki_ (85601c43c3@p4FF4A1B4.dip0.t-ipconnect.de) Quit (Quit: Verlassend)
[13:54] <alo> tnt: that's right... but I think I'll give a try to bootp first.
[13:54] <wogri_risc> alo: just don't forget that the mon's need to store their data somewhere.
[13:56] * rustam (~rustam@94.15.91.30) has joined #ceph
[13:57] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[14:04] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) Quit (Quit: Leaving.)
[14:04] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) has joined #ceph
[14:07] * fabioFVZ (~fabiofvz@213.187.20.119) Quit (Remote host closed the connection)
[14:09] <alo> wogri_risc: I would like to install mon on a Ganeti DRBD Xen cluster... with a 1Gbps dedicated eth. This should be ok, shouldn't it?
[14:20] <absynth> uh...
[14:20] <absynth> why don't you trust the redundancy that multiple mons bring with them?
[14:22] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[14:30] <BillK> just realised that I can rbd map a format 1 image, but not a format 2 ... seems odd? I know you can only clone a format 2.
[14:30] <wogri_risc> alo: yeah, what absynth said. no need to make the mon HA, just deploy more of them. and make sure you have an odd number of mon's.
[14:31] <wogri_risc> BillK: this is due to the format 2 code not being in your kernel yet
[14:31] <wogri_risc> I'm not sure if it's in 3.9, but sage has been working on it.
[14:31] <wogri_risc> the workaround is to virt-attach it to a VM.
[14:32] <BillK> ah, that's right ... I'm on 3.8.13 ... openvswitch wouldn't build against 3.9
[14:32] <tnt> BillK: and even then, layering will only be in 3.10 AFAIR.
[14:32] <wogri_risc> hm... openvswitch still not in the mainline kernel? damn it.
[14:33] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) has joined #ceph
[14:35] <alo> absynth: I trust it, and I'm evaluating 3 MON VMs. I understood that I have to put the MONs on servers other than the OSD servers, so I'd need extra rack space for them. Am I wrong? Could I run MON on OSD hardware?
[14:38] <absynth> no you cant
[14:38] <absynth> and you want three or more mons
[14:38] <absynth> don't introduce more complexity and error prone redundancy by spreading one mon on some kind of external HA solution
[14:38] <absynth> really, don't.
[14:38] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[14:39] <absynth> if you can't afford to set aside three small boxes for mons, you should start thinking why you want a distributed filesystem
[14:39] <wogri_risc> absynth: you can run MON servers on OSD servers.
[14:39] <absynth> (and you are setting aside two or more boxes for that ganeti thing anyway)
[14:39] <absynth> wogri_risc: technically you can, but last i checked, it was very, very discouraged
[14:40] <wogri_risc> I have 2 productive setups doing exactly this. I think you can do this "these days".
[14:40] <wogri_risc> but you unsettle my certainty.
[14:41] <absynth> Since monitors are light-weight, it is possible to run them on the same host as an OSD; however, we recommend running them on separate hosts.
[14:41] <absynth> http://ceph.com/docs/master/rados/operations/add-or-rm-mons/
[14:41] <wogri_risc> was searching for just this :)
[14:42] * beardo (~sma310@beardo.cc.lehigh.edu) has joined #ceph
[14:42] <wogri_risc> so this is the warning for 'in case your load on the OSD's is too high your mon's might freak out'
[14:42] <absynth> yeah, or in case your osd logs fill up the / partition... etc.
[14:43] <wogri_risc> you name it.
[15:07] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) Quit (Quit: wogri_risc)
[15:08] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[15:08] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) Quit (Remote host closed the connection)
[15:09] <alo> absynth: I have a ganeti cluster, so deploying a mon doesn't require any additional effort from me. On the other side I was wrong: I can't use Ceph HA MON because Cloudstack isn't able (for now) to use multiple monitors, only one.
[15:09] <alo> I'm deploying ceph as primary storage backend of cloudstack.
[15:17] <absynth> i don't use cloudstack but not supporting more than 1 monitor server sounds like a really silly idea
[15:17] * markbby (~Adium@168.94.245.4) has joined #ceph
[15:18] * rustam (~rustam@94.15.91.30) has joined #ceph
[15:19] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[15:21] * dgollub (~dgollub@p5DCA3735.dip0.t-ipconnect.de) has joined #ceph
[15:21] <andreask> Cloudstack only allows to bind to one mon-IP ... so some sort of load-balancing is needed
[15:22] <alo> Support for multiple MONs will be present in a future release. Initially CS worked with NFS, so the code of the storage layer is only able to manage one address for the storage
[15:24] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[15:24] <alo> andreask: that's right but how? Any idea?
[15:24] <andreask> alo: the easiest one is DNS round-robin
[15:25] <andreask> alo: I'd say ha-proxy would be a good idea to detect dead mons
[15:25] <tnt> that ip is only used for the initial contact AFAIK (i.e. when connected, it gets ip for the other mons)
[15:27] <andreask> yeah ... would be a rare case ... so rrdns should be fine most times
[15:28] * netmass (~netmass@69.199.86.242) has joined #ceph
[15:29] <netmass> GM! Anybody around this early?
[15:30] <absynth> what do you mean, early?
[15:30] <absynth> it's past 3pm
[15:30] <jmlowe> 9:30 EDT here
[15:30] <netmass> 3PM is quite early if you were hacking till 7AM... ;)
[15:31] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (Quit: Leaving.)
[15:31] <netmass> Have a quick question... may be obvious knowledge but a quick search didn't find any answers.
[15:32] * drokita1 (~drokita@24-107-180-86.dhcp.stls.mo.charter.com) Quit (Ping timeout: 480 seconds)
[15:32] * ccourtaut (~kri5@sd-20759.dedibox.fr) has joined #ceph
[15:33] <netmass> Did a fresh install of .61.1 on Centos 6. Upgraded the Centos Kernel to (3.9.1-1.el6.elrepo.x86_64). If I mount cephfs with the kernel driver, I do not have permissions to make any changes. Almost like the file system is read only. If I mount with FUSE... all works fine.
[15:33] <netmass> An obvious blunder?
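For context, the two mount paths netmass is comparing look roughly like this (monitor address and key file are placeholders):

    # kernel client
    mount -t ceph 192.168.0.1:6789:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret
    # FUSE client
    ceph-fuse -m 192.168.0.1:6789 /mnt/ceph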
[15:34] <andreask> selinux?
[15:34] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) has joined #ceph
[15:34] <netmass> Awesome question... lemme check / disable if needed. Hang on...
[15:35] <netmass> It was set to "Permissive". Rebooting now... Forgot about SELinux since I've recently switched to Ubuntu for most testing.
[15:36] * uli (~uli@p5493E117.dip0.t-ipconnect.de) has joined #ceph
[15:36] * itamar_ (~itamar@82.166.185.149) has joined #ceph
[15:37] <loicd> ccourtaut: \o
[15:38] <netmass> andreask: You rocked it!! Many thanks for the super quick solution!
[15:38] <andreask> yw
[15:39] <andreask> netmass: though "permissive" should not be a problem ;-)
[15:40] <netmass> Hmmm.... well... that seems to be the only change that I made... this is a pretty stock install except for the upgraded kernel. I will play around and see if the issue comes back.
[15:41] <netmass> I'll set up an automount and do a few reboots to see how it goes.
[15:46] * capri_wk (~capri@212.218.127.222) Quit (Read error: Connection reset by peer)
[15:46] * capri_wk (~capri@212.218.127.222) has joined #ceph
[15:47] * netmass (~netmass@69.199.86.242) Quit (Quit: Try HydraIRC -> http://www.hydrairc.com <-)
[15:51] * uli (~uli@p5493E117.dip0.t-ipconnect.de) Quit (Quit: Verlassend)
[15:53] * itamar_ (~itamar@82.166.185.149) Quit (Remote host closed the connection)
[15:54] * drokita (~drokita@199.255.228.128) has joined #ceph
[16:00] * ccourtaut (~kri5@sd-20759.dedibox.fr) Quit (Quit: Lost terminal)
[16:06] * PerlStalker (~PerlStalk@72.166.192.70) has joined #ceph
[16:08] * kyle_ (~kyle@216.183.64.10) has joined #ceph
[16:09] <joelio> does ceph-deploy support setting http_proxy vars?
[16:10] <joelio> can manage via automation, but wondered if there's an 'in the box' solution
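One workaround sketch, not an 'in the box' ceph-deploy feature: have your automation drop an apt proxy config on each target node so the package installs ceph-deploy triggers go through the proxy (proxy URL is a placeholder):

    # on each target node
    echo 'Acquire::http::Proxy "http://proxy.example.com:3128";' > /etc/apt/apt.conf.d/95proxy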
[16:13] <joelio> c/wi19
[16:13] <joelio> doh
[16:13] * kyle__ (~kyle@216.183.64.10) Quit (Ping timeout: 480 seconds)
[16:13] <Azrael> aarrrggg
[16:14] <Azrael> it looks like ceph's chef cookbook assumes a big layer 2 domain for its networking
[16:17] <tnt> you have a routed ceph network ? damn, how many nodes do you have ?
[16:17] <tnt> (and what router do you use :p)
[16:18] <Azrael> hehe
[16:18] * jgallard (~jgallard@gw-aql-129.aql.fr) Quit (Remote host closed the connection)
[16:18] <Azrael> starting with 60 osd nodes
[16:18] <Azrael> each rack is routed
[16:18] * jgallard (~jgallard@gw-aql-129.aql.fr) has joined #ceph
[16:19] <Azrael> each rack has two /24's
[16:19] * kyle_ (~kyle@216.183.64.10) Quit (Ping timeout: 480 seconds)
[16:19] <Azrael> one for ceph public and one for ceph cluster (in terms of ceph.conf)
[16:19] <Azrael> aka one for communicating with clients and one for replication
[16:20] <Azrael> tnt: top of rack is juniper ex4200's
[16:20] <tnt> Nice.
[16:21] <Azrael> its no matter though
[16:21] <Azrael> 0.61.1 osd's keep crashing and wont startup again
[16:21] <tnt> http://tracker.ceph.com/issues/4974 /
[16:21] <tnt> >
[16:21] <tnt> ?
[16:21] <Azrael> not sure how much longer ceph will remain the solution going forward
[16:21] <Azrael> nah not that bug
[16:22] <Azrael> tnt: http://article.gmane.org/gmane.comp.file-systems.ceph.devel/15016
[16:22] <Azrael> tnt: that
[16:27] <joelio> another one, how do I specify journal in RAM using the ceph-deploy tool?
[16:32] <andreask> Azrael: ouch ... you use xfs filesystem and the journal is on the osd disk?
[16:35] <Azrael> andreask: yes
[16:36] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Remote host closed the connection)
[16:36] <Azrael> we weren't sure about playing with dedicated ssd's for journals yet
[16:36] <andreask> any special settings for the mount?
[16:36] <tnt> even without a dedicated ssd, I just created a couple-GB partition on the same disk
[16:36] * berant (~blemmenes@vpn-main.ussignal.co) has joined #ceph
[16:37] <Azrael> (rw,noatime,attr2,noquota)
[16:37] <Azrael> so nothing special
[16:37] <Azrael> btw andreask and tnt
[16:37] <Azrael> i'm using the chef cookbook for ceph, slightly modified (to make it actually work)
[16:37] <Azrael> its process is to partition the osd disks
[16:38] <Azrael> with sdX1 for data and sdX2 for journal
[16:39] * rustam (~rustam@94.15.91.30) has joined #ceph
[16:40] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[16:40] * wschulze (~wschulze@38.98.115.249) has joined #ceph
[16:40] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[16:42] <andreask> what's your os?
[16:43] <andreask> Azrael: ^^^
[16:44] <Azrael> andreask: debian wheezy 64bit
[16:45] * vata (~vata@2607:fad8:4:6:6c9f:efe8:22e5:53c4) has joined #ceph
[16:45] <Azrael> andreask: this is using the cuttlefish packages from ceph.com/debian
[16:47] * ken1 (~quanta@117.0.249.36) has joined #ceph
[16:47] * tziOm (~bjornar@194.19.106.242) Quit (Remote host closed the connection)
[16:49] <ken1> Has anyone run into this problem: http://tracker.ceph.com/issues/5036?
[16:51] <andreask> Azrael: ok, so you use an extra partition for the journal?
[16:51] <Azrael> andreask: yep
[16:52] <Azrael> if an osd's backing device is 'sdk' (example), then /dev/sdk1 is for data (xfs filesystem) and /dev/sdk2 is for journal (no filesystem; direct)
[16:56] * ShaunR (~ShaunR@staff.ndchost.com) has joined #ceph
[16:57] * gmason (~gmason@hpcc-fw.net.msu.edu) has joined #ceph
[16:57] * ken1 (~quanta@117.0.249.36) Quit (Ping timeout: 480 seconds)
[16:59] * ccourtaut (~ccourtaut@2a01:e0b:1:119:88e6:75e4:2c00:3af4) has joined #ceph
[17:03] * sagelap (~sage@2600:1012:b01d:c1d4:418:a85b:6e9:4607) has joined #ceph
[17:07] * dgollub (~dgollub@p5DCA3735.dip0.t-ipconnect.de) has left #ceph
[17:09] * ken1 (~quanta@117.0.249.36) has joined #ceph
[17:10] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) Quit (Quit: Leaving.)
[17:10] * yehuda_hm (~yehuda@2602:306:330b:1410:942f:17b1:c111:4865) Quit (Read error: Connection timed out)
[17:11] <Azrael> is kyle bader here?
[17:15] <sagelap> doesn't look like it.. he's usually kbader
[17:16] * yehuda_hm (~yehuda@2602:306:330b:1410:7849:6691:3662:529c) has joined #ceph
[17:16] <paravoid> sagelap: I had a look (but didn't try) wip-suppress
[17:16] <Azrael> ahh ok. questions on the chef cookbook. looks like it doesn't support having more than one network for the public addresses and cluster addresses.
[17:16] <paravoid> sagelap: udev events are asynchronous, so this can still be racy
[17:17] <paravoid> i.e. you have to suppress, prepare, sleep N, unsuppress
[17:17] * Vjarjadian (~IceChat77@90.214.208.5) has joined #ceph
[17:17] <paravoid> not a huge deal obviously
[17:18] <paravoid> sagelap: on an unrelated note, I see Debian unstable still has 0.48; are you in contact with the maintainer? need any help with that?
[17:19] * wschulze (~wschulze@38.98.115.249) Quit (Quit: Leaving.)
[17:19] <paravoid> sagelap: you seem to be doing an excellent job on packaging yourselves, maybe either the current maintainer or myself should just sponsor your packages
[17:20] <sagelap> paravoid: laszlo is the current maintainer. should probably get 0.61.x uploaded soon.
[17:20] * eschnou (~eschnou@85.234.217.115.static.edpnet.net) Quit (Remote host closed the connection)
[17:20] <paravoid> okay, let me know if you need any help
[17:20] <paravoid> maybe I should tell Laszlo that
[17:21] <sagelap> he's not actually a user, though, so it tends to slip
[17:22] * Volture (~Volture@office.meganet.ru) Quit (Remote host closed the connection)
[17:23] <Azrael> sagelap: do you have [time for] any thoughts on http://article.gmane.org/gmane.comp.file-systems.ceph.devel/15016 ?
[17:25] * tkensiski (~tkensiski@209.66.64.134) has joined #ceph
[17:25] * tkensiski (~tkensiski@209.66.64.134) has left #ceph
[17:25] * scuttlemonkey_ (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (Quit: my troubles seem so far away, now yours are too...)
[17:25] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[17:25] * ChanServ sets mode +o scuttlemonkey
[17:32] * sagelap (~sage@2600:1012:b01d:c1d4:418:a85b:6e9:4607) Quit (Ping timeout: 480 seconds)
[17:34] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[17:34] * stxShadow (~jens@jump.filoo.de) has joined #ceph
[17:38] * ScOut3R (~ScOut3R@212.96.47.215) Quit (Ping timeout: 480 seconds)
[17:41] * loicd (~loic@magenta.dachary.org) has joined #ceph
[17:41] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[17:43] * loicd (~loic@magenta.dachary.org) Quit (Remote host closed the connection)
[17:44] * sagelap (~sage@2600:1012:b010:46f4:215:ffff:fe36:60) has joined #ceph
[17:45] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:51] * sagelap1 (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) has joined #ceph
[17:52] * kyle_ (~kyle@216.183.64.10) has joined #ceph
[17:52] * sagelap (~sage@2600:1012:b010:46f4:215:ffff:fe36:60) Quit (Ping timeout: 480 seconds)
[17:53] <sagewk> elder: ping
[17:54] <sagewk> paravoid: re wip-suppress... i think a 'ceph-disk prepare --no-active /dev/foo' that does the requisite udevadm settle before removing the suppress would be ideal. (and fwiw in your example you can s/sleep 1/udevadm settle/ and it'll be reliable)
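Spelled out, the sequence being discussed looks roughly like this; the suppress/unsuppress steps come from the wip-suppress branch under review and are left as placeholders, and the --no-active flag is a proposal in this conversation, not a shipped option:

    # suppress udev handling for the device (wip-suppress mechanism, placeholder)
    ceph-disk prepare /dev/foo
    udevadm settle          # wait for queued udev events instead of 'sleep N'
    # unsuppress (placeholder)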
[17:58] * jgallard (~jgallard@gw-aql-129.aql.fr) Quit (Quit: Leaving)
[18:00] <elder> Yes
[18:01] <sagewk> elder: no -rc1 yet.. how is the flatten stuff looking?
[18:01] <sagewk> paravoid: can i ask your opinion on https://github.com/ceph/ceph/pull/268/files ? :)
[18:02] * bergerx_ (~bekir@78.188.101.175) Quit (Quit: Leaving.)
[18:02] * tnt (~tnt@212-166-48-236.win.be) Quit (Ping timeout: 480 seconds)
[18:02] <elder> I still need to address a concern Josh had, but it may be able to wait if he's comfortable with that.
[18:03] <sagewk> elder: the delta will likely be smaller/ish, right? i'm thinking its better to get the bulk in now
[18:03] <elder> Correct.
[18:03] <sagewk> which branch is it?
[18:04] <sagewk> oh, nevermind.. there is an -rc1. :)
[18:04] <elder> OK. I have to push it in a few minutes.
[18:04] <elder> Damn.
[18:04] <sagewk> friday at 5:14pm :)
[18:04] <sagewk> oh, saturday.. nm
[18:05] <sagewk> whatever, hopefully won't matter.
[18:06] * dikkjo (~dikkjo@46-126-128-50.dynamic.hispeed.ch) has joined #ceph
[18:07] <elder> I'll still get my changes out. Maybe we can get special dispensation. Or maybe it's better not to ask. There are going to be a number of niggling regressions, but the flatten thing is really a feature.
[18:09] * scuttlemonkey_ (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[18:09] <sagewk> it's worth a shot. well, we'll see what the final series looks like!
[18:12] * gregaf1 (~Adium@2607:f298:a:607:c8b1:de29:9e01:d804) Quit (Quit: Leaving.)
[18:13] * markl_ (~mark@tpsit.com) Quit (Quit: leaving)
[18:13] * markl (~mark@tpsit.com) has joined #ceph
[18:13] * gregaf (~Adium@2607:f298:a:607:e538:5598:e131:ae4b) has joined #ceph
[18:15] * FroMaster (~DM@static-98-119-19-146.lsanca.fios.verizon.net) has joined #ceph
[18:15] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (Ping timeout: 480 seconds)
[18:16] * scuttlemonkey_ is now known as scuttlemonkey
[18:22] * loicd (~loic@2a01:e35:2eba:db10:c08a:a402:6f70:6654) has joined #ceph
[18:23] <FroMaster> Looking for guidance on a new Ceph install. I was planning to use ubuntu 12.04 LTS but the Ceph docs recommend upgrading the kernel to 3.6. What's the best path for me to take so I don't spend all day trying to figure this out?
[18:23] <kyle_> i used this..
[18:23] <kyle_> http://www.upubuntu.com/2012/12/installupgrade-to-linux-kernel-369-in.html?m=0
[18:25] <sagewk> fromaster: only need new kernel if you'll be using the kernel fs/rbd client(s)
[18:25] <infernix> * rbd: incremental backups
[18:25] <infernix> what does that mean?
[18:26] <jmlowe> you can have ceph spit out the delta's between snapshots
[18:26] <infernix> aha
[18:26] <infernix> in what format?
[18:27] <jmlowe> http://ceph.com/docs/master/dev/rbd-diff/
[18:28] <infernix> thanks
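A hedged sketch of what the incremental-backup feature looks like in practice, going by the rbd-diff doc linked above (pool, image, and snapshot names are made up; flags are as I recall them and may differ slightly by version):

    rbd snap create mgmt/vm-image@day1
    # ... a day of writes later ...
    rbd snap create mgmt/vm-image@day2
    rbd export-diff --from-snap day1 mgmt/vm-image@day2 day2.diff
    rbd import-diff day2.diff backup/vm-image    # replay the delta onto a copy elsewhere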
[18:28] <FroMaster> My goal is to setup Ceph as Virtual Machine on VMware vSphere 5.1 and point a few S3 Apps/Gateways at it to see/test functionality (not performance). I'm looking for the easiest way to accomplish this without spending all day trying to figure out os/app dependencies :)
[18:30] * yehudasa (~yehudasa@2607:f298:a:607:b1ff:b5ec:8c2f:bffd) Quit (Remote host closed the connection)
[18:31] <dikkjo> fromaster, I'm doing the same as you - I'm using ubuntu 13.04 for that
[18:31] <loicd> FroMaster: I've had success this week-end installing ceph from scratch on ubuntu 13.04 as described here http://dachary.org/?p=1971
[18:32] * leseb (~Adium@83.167.43.235) Quit (Quit: Leaving.)
[18:33] * tnt (~tnt@91.177.214.32) has joined #ceph
[18:34] <kyle_> i'm having an issue with my mds servers crashing pretty much right after the service is started. does anyone know where to start troubleshooting this? running 0.61.1.
[18:35] <kyle_> cluster is healthy and i managed to get 200GB copied to it before this started happening. i believe the upgrade to 0.61.1 is when it started happening
[18:37] <kyle_> this is from the mds server's log:
[18:37] <kyle_> 0> 2013-05-13 09:36:15.916062 7f6858f53700 -1 mds/journal.cc: In function 'void EMetaBlob::replay(MDS*, LogSegment*, MDSlaveUpdate*)' thread 7f6858f53700 time 2013-05-13 09:36:15.915433
[18:37] <kyle_> mds/journal.cc: 1408: FAILED assert(i == used_preallocated_ino)
[18:37] * sjust (~sam@2607:f298:a:607:baac:6fff:fe83:5a02) has joined #ceph
[18:37] <FroMaster> loicd: I'll give it a read and try it ou
[18:38] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Quit: Leaving.)
[18:38] <loicd> FroMaster: let me know if you find typos / mistakes
[18:40] * LeaChim (~LeaChim@176.250.188.136) Quit (Ping timeout: 480 seconds)
[18:40] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[18:40] * markbby (~Adium@168.94.245.4) has joined #ceph
[18:41] <loicd> glowell: would you be so kind as to review my pull request https://github.com/ceph/ceph-deploy/pull/10 ?
[18:42] <glowell> ok
[18:43] <loicd> thanks :-)
[18:44] * markbby1 (~Adium@168.94.245.4) has joined #ceph
[18:44] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[18:44] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[18:45] * markbby (~Adium@168.94.245.4) has joined #ceph
[18:45] * markbby1 (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[18:46] * markbby1 (~Adium@168.94.245.4) has joined #ceph
[18:46] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[18:49] * markbby (~Adium@168.94.245.2) has joined #ceph
[18:50] * markbby1 (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[18:50] * sileht (~sileht@2a01:6600:8081:d6ff::feed:cafe) Quit (Ping timeout: 480 seconds)
[18:51] * LeaChim (~LeaChim@176.250.188.136) has joined #ceph
[18:53] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[18:58] * Vjarjadian (~IceChat77@90.214.208.5) Quit (Quit: Copywight 2007 Elmer Fudd. All wights wesewved.)
[19:01] <kyle_> can someone please help me with the "ceph mds newfs" syntax? I removed an mds server and want to remove the mds map.
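No one answers this in-channel. For what it's worth, the syntax at the time was roughly the following, with numeric pool IDs; treat it as a recollection to verify against the built-in help, not gospel:

    ceph mds newfs <metadata-pool-id> <data-pool-id> --yes-i-really-mean-it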
[19:01] <alo> FroMaster: i used 13.04 too. really plain, no issue.
[19:03] * sileht (~sileht@gizmo.sileht.net) has joined #ceph
[19:06] * Tamil (~tamil@38.122.20.226) has joined #ceph
[19:06] * Cube (~Cube@12.248.40.138) has joined #ceph
[19:07] * eternaleye (~eternaley@2607:f878:fe00:802a::1) Quit (Remote host closed the connection)
[19:09] * eternaleye (~eternaley@2607:f878:fe00:802a::1) has joined #ceph
[19:11] * stxShadow (~jens@jump.filoo.de) Quit (Quit: Ex-Chat)
[19:12] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[19:13] * BillK (~BillK@124-169-231-135.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[19:13] * gmason (~gmason@hpcc-fw.net.msu.edu) Quit (Quit: Textual IRC Client: www.textualapp.com)
[19:13] <paravoid> sagewk: hey, just saw that
[19:14] <paravoid> sagewk: I don't see the point tbh
[19:14] <paravoid> what's the problem with "cryptsetup-bin | cryptsetup" ?
[19:15] <sagewk> yeah dunno :)
[19:15] <sagewk> can you leave a comment? as a non-DD i'm just trusting others here
[19:17] * LeaChim (~LeaChim@176.250.188.136) Quit (Ping timeout: 480 seconds)
[19:18] <paravoid> done
[19:20] * gmason (~gmason@hpcc-fw.net.msu.edu) has joined #ceph
[19:21] * gmason (~gmason@hpcc-fw.net.msu.edu) Quit ()
[19:21] * juuva_ (~juuva@dsl-hkibrasgw5-58c05e-231.dhcp.inet.fi) Quit (Ping timeout: 480 seconds)
[19:25] * eegiks (~quassel@2a01:e35:8a2c:b230:b981:9397:6cc3:f108) Quit (Remote host closed the connection)
[19:27] * jjgalvez (~jjgalvez@cpe-76-175-30-67.socal.res.rr.com) has joined #ceph
[19:27] <loicd> sjust: here is my wip attempt on PG/ReplicatedPG : http://tracker.ceph.com/issues/4928 and the corresponding branch https://github.com/dachary/ceph/commits/wip-4928
[19:28] * kyle__ (~kyle@216.183.64.10) has joined #ceph
[19:29] * DarkAceZ (~BillyMays@50.107.54.92) Quit (Ping timeout: 480 seconds)
[19:29] <loicd> it does compile but when I read https://github.com/dachary/ceph/blob/wip-4928/src/osd/IPG.h ... there are only a few functions that are not used by code that's outside of ReplicatedPG / PG
[19:30] * mega_au_ (~chatzilla@84.244.21.218) has joined #ceph
[19:30] <loicd> i've just finished fixing it so that it compiles and I'm not sure I should try that first. I'm open to suggestions ;-)
[19:31] * ShaunR- (~ShaunR@staff.ndchost.com) has joined #ceph
[19:32] * KindTwo (KindOne@h184.178.130.174.dynamic.ip.windstream.net) has joined #ceph
[19:32] * masACC (maswan@kennedy.acc.umu.se) has joined #ceph
[19:32] * dignus (~dignus@bastion.jkit.nl) has joined #ceph
[19:33] * rturk-away is now known as rturk
[19:33] * tkensiski1 (~tkensiski@209.66.64.134) has joined #ceph
[19:33] * Tamil1 (~tamil@38.122.20.226) has joined #ceph
[19:34] * tkensiski1 (~tkensiski@209.66.64.134) has left #ceph
[19:34] * LeaChim (~LeaChim@176.250.188.136) has joined #ceph
[19:34] * vipr_ (~vipr@78-23-113-37.access.telenet.be) has joined #ceph
[19:35] * scuttlemonkey_ (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[19:35] * Cube (~Cube@12.248.40.138) Quit (synthon.oftc.net larich.oftc.net)
[19:35] * Tamil (~tamil@38.122.20.226) Quit (synthon.oftc.net larich.oftc.net)
[19:35] * markbby (~Adium@168.94.245.2) Quit (synthon.oftc.net larich.oftc.net)
[19:35] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) Quit (synthon.oftc.net larich.oftc.net)
[19:35] * FroMaster (~DM@static-98-119-19-146.lsanca.fios.verizon.net) Quit (synthon.oftc.net larich.oftc.net)
[19:35] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (synthon.oftc.net larich.oftc.net)
[19:35] * kyle_ (~kyle@216.183.64.10) Quit (synthon.oftc.net larich.oftc.net)
[19:35] * ken1 (~quanta@117.0.249.36) Quit (synthon.oftc.net larich.oftc.net)
[19:35] * ShaunR (~ShaunR@staff.ndchost.com) Quit (synthon.oftc.net larich.oftc.net)
[19:35] * vipr (~vipr@78-23-113-37.access.telenet.be) Quit (synthon.oftc.net larich.oftc.net)
[19:35] * diegows (~diegows@190.190.2.126) Quit (synthon.oftc.net larich.oftc.net)
[19:35] * br1 (~br1@79.59.209.97) Quit (synthon.oftc.net larich.oftc.net)
[19:35] * Rocky (~r.nap@188.205.52.204) Quit (synthon.oftc.net larich.oftc.net)
[19:35] * dignus_ (~dignus@bastion.jkit.nl) Quit (synthon.oftc.net larich.oftc.net)
[19:35] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) Quit (synthon.oftc.net larich.oftc.net)
[19:35] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (synthon.oftc.net larich.oftc.net)
[19:35] * maswan (maswan@kennedy.acc.umu.se) Quit (synthon.oftc.net larich.oftc.net)
[19:35] * portante` (~user@66.187.233.206) Quit (synthon.oftc.net larich.oftc.net)
[19:35] * mega_au (~chatzilla@84.244.21.218) Quit (synthon.oftc.net larich.oftc.net)
[19:35] * KindTwo is now known as KindOne
[19:35] * mega_au_ is now known as mega_au
[19:35] * DarkAceZ (~BillyMays@50.107.54.92) has joined #ceph
[19:36] * FroMaster (~DM@static-98-119-19-146.lsanca.fios.verizon.net) has joined #ceph
[19:37] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[19:37] * Steki (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[19:37] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Read error: Connection reset by peer)
[19:37] * sagelap1 (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) Quit (Ping timeout: 480 seconds)
[19:37] * Rocky (~r.nap@188.205.52.204) has joined #ceph
[19:38] * br1 (~br1@79.59.209.97) has joined #ceph
[19:39] * BillK (~BillK@124-169-231-135.dyn.iinet.net.au) has joined #ceph
[19:40] <loicd> "Tests are written for the API to cover 100% of the LOC and most of the expected functionalities implemented by PG/ReplicatedPG." http://tracker.ceph.com/issues/4928 is going to be difficult, because the number of functions in https://github.com/dachary/ceph/blob/wip-4928/src/osd/IPG.h is fairly large
[19:41] * diegows (~diegows@190.190.2.126) has joined #ceph
[19:42] * markbby (~Adium@168.94.245.2) has joined #ceph
[19:42] <terje-> I'd like to create an rbd block device for use with a VM managed by libvirt
[19:42] <terje-> I see that I can run: qemu-img create -f rbd ...
[19:43] <terje-> however my qemu-img isn't patched with rbd support. Is there another way to create this block-device?
[19:43] <terje-> using something other than qemu-img
[19:43] * loicd (~loic@2a01:e35:2eba:db10:c08a:a402:6f70:6654) Quit (Quit: Leaving.)
[19:43] * loicd (~loic@magenta.dachary.org) has joined #ceph
[19:45] * ken1 (~quanta@117.0.249.36) has joined #ceph
[19:45] <jmlowe> rbd create
[19:46] * sagelap (~sage@2607:f298:a:607:50b9:53a2:27df:b1d1) has joined #ceph
[19:46] <terje-> great - thanks.
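A minimal example of what jmlowe is pointing at (the pool and image name here match the ones terje- uses later in the log; size is in MB):

    rbd create mgmt/my-new-vm --size 20480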
[19:48] * rustam (~rustam@94.15.91.30) has joined #ceph
[19:48] * dpippenger (~riven@206-169-78-213.static.twtelecom.net) has joined #ceph
[19:50] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[19:52] * TMM (~hp@535240C7.cm-6-3b.dynamic.ziggo.nl) Quit (Remote host closed the connection)
[19:53] * TMM (~hp@535240C7.cm-6-3b.dynamic.ziggo.nl) has joined #ceph
[19:56] * dmick (~dmick@2607:f298:a:607:9067:2df2:f863:6490) has joined #ceph
[19:57] * stp (~stp@188-193-209-221-dynip.superkabel.de) has joined #ceph
[19:59] * stp (~stp@188-193-209-221-dynip.superkabel.de) Quit ()
[20:00] * themgt (~themgt@24-177-232-33.dhcp.gnvl.sc.charter.com) has joined #ceph
[20:02] * buck (~buck@bender.soe.ucsc.edu) has joined #ceph
[20:04] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[20:06] * Muhlemmer (~kvirc@86.127.208.243) has joined #ceph
[20:07] * ken1 (~quanta@117.0.249.36) has left #ceph
[20:10] * br1 (~br1@79.59.209.97) Quit (Ping timeout: 480 seconds)
[20:13] * dwt (~dwt@128-107-239-234.cisco.com) has joined #ceph
[20:13] * rustam (~rustam@94.15.91.30) has joined #ceph
[20:13] * alram (~alram@38.122.20.226) has joined #ceph
[20:14] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[20:18] <dmick> davidz: opinion on 3552?
[20:25] <davidz> dmick: let me look
[20:25] * uli (~uli@p5493E117.dip0.t-ipconnect.de) has joined #ceph
[20:26] <uli> hey there, if i mount cephfs (mount.ceph ) , do i have to have special options to enable user_xattr, osd-filesystems are xfs
[20:27] <uli> my problem is, that im not able to set rights from windows via samba4, i can do a setfattr -n user.test -v test file.txt, but i cant set any rights via windows-samba
[20:29] * yehudasa (~yehudasa@2607:f298:a:607:c1fc:1433:ca04:dc9e) has joined #ceph
[20:30] <davidz> dmick: 3552 was seen a long time ago, but I would try to reproduce unless you know of a change that would have fixed it.
[20:31] <dmick> not offhand, but I do know Tamil's been testing the heck out of ceph-deploy
[20:31] * rturk is now known as rturk-away
[20:33] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Remote host closed the connection)
[20:33] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[20:36] <terje-> after I have successfully created a block device (rbd create ...) for a VM. How do I tell it to use a particular qcow image file?
[20:36] * dwt (~dwt@128-107-239-234.cisco.com) Quit (Read error: Connection reset by peer)
[20:37] <terje-> I see that it can be done in the graphical virt-manager but how is that done from the command line?
[20:37] * sagelap (~sage@2607:f298:a:607:50b9:53a2:27df:b1d1) Quit (Ping timeout: 480 seconds)
[20:37] * sagelap (~sage@38.122.20.226) has joined #ceph
[20:39] <dmick> terje-: not sure I understand. If you rbd create an image file, that *is* the image you use
[20:39] <dmick> if you want to use a preexisting file, you can import that into an rbd image
[20:40] * rturk-away is now known as rturk
[20:40] * scuttlemonkey_ is now known as scuttlemonkey
[20:41] * rturk is now known as rturk-away
[20:41] * rturk-away is now known as rturk
[20:42] <terje-> ah, that's what I want to do
[20:42] <dmick> regardless, to hook it up, see http://ceph.com/docs/master/rbd/libvirt/
[20:42] <terje-> I have been following that actually
[20:43] <terje-> I guess I'm confused on how I 'import' an existing image to an rbd volume.
[20:43] <Tamil1> davidz: not required, i already retested that particular issue some time last week and its already fixed
[20:43] * med (~medberry@ec2-50-17-21-207.compute-1.amazonaws.com) has joined #ceph
[20:46] * fridudad_ (~oftc-webi@p4FC2DF68.dip0.t-ipconnect.de) has joined #ceph
[20:46] <jmlowe> terje-: well, you can always use dd
[20:47] * Cube (~Cube@12.248.40.138) has joined #ceph
[20:47] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[20:47] <terje-> I think I see how it's done.
[20:48] <terje-> the docs recommend qemu-img convert ... but seems like you can simply rbd import < img.qcow2
[20:48] * BillK (~BillK@124-169-231-135.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[20:50] <jmlowe> nope, just did this a couple of hours ago with qemu-img convert, next best thing would be to use nbd and dd to copy everything over to a rbd device
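For comparison, the qemu-img route jmlowe mentions needs an rbd-enabled qemu-img and looks roughly like this (the source path and pool/image name are taken from later messages in this conversation):

    qemu-img convert -f qcow2 -O raw /export/defaultTemplate.qcow2 rbd:mgmt/my-new-vm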
[20:50] <terje-> nbd and dd, aye?
[20:50] <terje-> can you provide an example?
[20:51] <jmlowe> also you could use a vm and attach img.qcow2 as let's say sdb and rbd_image as sdc and dd if=/dev/sdb of=/dev/sdc bs=1M
[20:52] <terje-> there must be a better way
[20:52] <terje-> than that
[20:52] <jmlowe> qemu-nbd -c /path/to/img.qcow2; rbd map rbd_image_name; dd if=/dev/nbd0 of=/dev/rbd1 bs=1M
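Note that `qemu-nbd -c` also takes the nbd device to bind to; a slightly fuller version of the same sequence, with the device names assumed:

    modprobe nbd
    qemu-nbd -c /dev/nbd0 /path/to/img.qcow2
    rbd map rbd_image_name                      # assumed to show up as /dev/rbd1; check `rbd showmapped`
    dd if=/dev/nbd0 of=/dev/rbd1 bs=1M
    qemu-nbd -d /dev/nbd0                       # disconnect when done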
[20:52] <jmlowe> well, get a patched qemu-img, that would be the non brute force way
[20:53] <terje-> yea, I guess I'll have to do that.
[20:54] <jmlowe> if your qemu isn't patched for rbd then you are going and you are talking about qcow2 then you probably want to use rbd to back a vm which means you are going to have to get a patched qemu anyway
[20:54] <jmlowe> let me try that again: if your qemu isn't patched for rbd and you are talking about qcow2 then you probably want to use rbd to back a vm which means you are going to have to get a patched qemu anyway
[20:55] <fridudad_> Does anybody know why qemu-img 1.4.1 crashes with format rbd but works fine if i specify format raw?
[20:57] <terje-> jmlowe: well, I was able to use openstack cinder to create volumes using rbd as the backend.
[20:57] <terje-> now, I'm trying to create a few VM's outside of openstack, just directly using libvirt and I've noticed that qemu-img doesn't speak rbd
[20:57] <terje-> so I have no idea what's going on.
[20:58] <terje-> I'll poke around a bit more.
[21:07] <jmlowe> fridudad_: I know I had to change the name in libvirt from rbd to raw when I went to qemu 1.4
[21:07] <jmlowe> terje-: that would be highly unusual if you had qemu patched for rbd and working with openstack and not qemu-img
[21:08] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[21:09] * ShaunR- (~ShaunR@staff.ndchost.com) Quit (Ping timeout: 480 seconds)
[21:10] <fridudad_> jmlowe: but why? qemu-img code still contains rbd
[21:10] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) Quit (Remote host closed the connection)
[21:11] * ShaunR (~ShaunR@staff.ndchost.com) has joined #ceph
[21:11] * jerker (jerker@Psilocybe.Update.UU.SE) Quit (Quit: Lost terminal)
[21:15] <mrjack> what could it be that a rbd rm <image> always makes my ceph cluster mark one osd down falsely?
[21:16] <mrjack> is rbd rm so io expensive?
[21:17] <jmlowe> fridudad_: no idea, just was suprised when libvirt said rbd was an invalid type
[21:21] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) has joined #ceph
[21:21] * mnash (~chatzilla@vpn.expressionanalysis.com) Quit (Max SendQ exceeded)
[21:24] <fridudad_> jmlowe: still strange - even though I noticed the same
[21:25] * mega_au (~chatzilla@84.244.21.218) Quit (Quit: ChatZilla 0.9.90 [Firefox 20.0.1/20130409194949])
[21:26] * mnash (~chatzilla@66-194-114-178.static.twtelecom.net) has joined #ceph
[21:27] * eschnou (~eschnou@175.93-201-80.adsl-dyn.isp.belgacom.be) has joined #ceph
[21:28] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[21:28] * Tamil1 (~tamil@38.122.20.226) Quit (Quit: Leaving.)
[21:30] * Steki (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[21:30] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[21:31] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) has joined #ceph
[21:32] * vipr_ (~vipr@78-23-113-37.access.telenet.be) Quit (Remote host closed the connection)
[21:39] <mrjack> http://tracker.ceph.com/issues/4974 - i have at least seen one where this happens on osdmap, so not only on mdsmap!!! https://gist.github.com/anonymous/5559212 - 2013-05-11 09:28:35.072836 7f0687994780 0 _convert_machines osdmap gv 583265 already exists
[21:39] * mnash (~chatzilla@66-194-114-178.static.twtelecom.net) Quit (Max SendQ exceeded)
[21:40] <sagewk> mrjack: what version were you updating from?
[21:40] <sagewk> and do you have a copy of the mon data directory we can look at?
[21:41] * mnash (~chatzilla@vpn.expressionanalysis.com) has joined #ceph
[21:41] * loicd reading http://ceph.com/docs/master/dev/peering/ again
[21:41] * Vjarjadian (~IceChat77@90.214.208.5) has joined #ceph
[21:48] * Tamil (~tamil@38.122.20.226) has joined #ceph
[21:49] <iggy> fridudad_: format isn't rbd, format is one of raw, qcow2, qed, etc.
[21:49] * uli (~uli@p5493E117.dip0.t-ipconnect.de) Quit (Quit: Verlassend)
[21:50] <iggy> rbd isn't a format, it's a method for accessing a disk image
[21:51] <fridudad_> iggy: but it was rbd in the past and the qemu-img code still contains references to it
[21:51] <fridudad_> iggy: it also still accepts rbd but segfaults
[21:53] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[21:54] <iggy> yeah, I don't know why they decided to use rbd as a format
[21:55] <iggy> that breaks layering (now you can't have a qcow2 image on an rbd volume, etc.)
[21:55] <iggy> fridudad_: I got nothing, that shouldn't be like that, but I see that it is
[21:55] <iggy> the only thing I can suggest is bisecting to see where it broke
[21:56] <fridudad_> iggy: yeah i just hoped that somebody knew why it was changed and if this was intended
[21:56] * coyo (~unf@pool-71-170-191-140.dllstx.fios.verizon.net) has joined #ceph
[22:00] * fridudad_ (~oftc-webi@p4FC2DF68.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[22:03] <PerlStalker> Can mon ids be a string, e.g. the host name?
[22:04] <iggy> fridudad: generally speaking, segfaults are _not_ expected
[22:04] <iggy> fridudad: so you should definitely follow up on it
[22:08] * dikkjo (~dikkjo@46-126-128-50.dynamic.hispeed.ch) Quit (Quit: Leaving)
[22:08] <davidz> PerlStalker: I think they can be the hostname, but it is less confusing if mon.0 is id 0, mon.1 is id 1 etc. If you are putting OSDs on the same node, what are you calling them?
[22:11] <sagewk> perlstalker: naming them after the hostname is actaully the recommended option
[22:12] <sagewk> davidz: the problem with mon.0 is that adding/removing mons will reorder the numeric ranks and they'll get out of sync with the names
[22:12] <paravoid> davidz: hey, since you're around
[22:12] <paravoid> wanna talk a bit about #4967?
[22:12] <paravoid> I'm still confused :)
[22:13] <davidz> paravoid: The log issue is still perplexing
[22:13] <terje-> this appears to be working: rbd import /export/defaultTemplate.qcow2 my-new-vm --size 20480 --pool mgmt
[22:14] <davidz> But you really need to make the configuration change to the monitors.
[22:14] <terje-> can I expect to boot that thing and be happy?
[22:14] <paravoid> davidz: == docs are wrong
[22:14] <davidz> yup
[22:16] <davidz> paravoid: In the next release we will name the configs mon_osd_min_down_reporters and mon_osd_min_down_reports to make it clear what daemon they control.
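A hedged ceph.conf sketch of the setting under discussion, using the names in force at the time of this log (the values are only examples; per the ticket, the options are read by the monitors even though they are named osd_*, which is the source of the confusion):

    [mon]
        osd min down reporters = 2
        osd min down reports = 3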
[22:20] <sagewk> davidz: updating that branch to also fix the docs
[22:20] * LeaChim (~LeaChim@176.250.188.136) Quit (Read error: Connection reset by peer)
[22:21] <sagewk> and merging
[22:21] * eegiks (~quassel@2a01:e35:8a2c:b230:6d37:4c4c:b170:8dff) has joined #ceph
[22:21] * LeaChim (~LeaChim@176.250.188.136) has joined #ceph
[22:23] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[22:24] <cjh_> are there any docs on radosgw-admin user modify? the wiki says it exists but not what it can do or what you call it with
[22:29] * LeaChim (~LeaChim@176.250.188.136) Quit (Read error: Connection reset by peer)
[22:30] * LeaChim (~LeaChim@176.250.188.136) has joined #ceph
[22:31] * sagelap (~sage@38.122.20.226) Quit (Quit: Leaving.)
[22:32] <cjh_> how does one suspend a user in the radosgw with the admin tool? it seems like you have to use the admin api to do that? is that true
[22:37] <alo> hi, i'm sizing my little production cluster... I would like to use SSD for journal and SATA for OSD with 2x1Gbps eth for osd and 2x1Gbps for "client" network. Any suggestion on whether it's useful to use SATA3 instead of SATA2 spinners?
[22:37] * rustam (~rustam@94.15.91.30) has joined #ceph
[22:38] <gregaf> it's stupendously unlikely to make a difference on a standard spinning disk ;)
[22:39] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[22:43] <alo> gregaf: sorry... probably it's a stupid question, so I could use the cheaper? :)
[22:43] <gregaf> yeah, SATA2 is…3Gbps of bandwidth, aka 250MB/s, which no spinner can hit
[22:43] <gregaf> I guess you can't do so much fast access into the cache, but Ceph is going to be forcing it all to disk anyway
[22:47] <alo> absolutely clear and reasonable, thanks... I'll stay with my WD RE
[22:51] * sbreck (~oftc-webi@static-71-126-149-35.washdc.fios.verizon.net) has joined #ceph
[22:57] * berant (~blemmenes@vpn-main.ussignal.co) Quit (Quit: berant)
[23:01] * markbby (~Adium@168.94.245.2) Quit (Quit: Leaving.)
[23:01] * dwt (~dwt@128-107-239-235.cisco.com) has joined #ceph
[23:04] * eschnou (~eschnou@175.93-201-80.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[23:07] * leseb1 (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) has joined #ceph
[23:07] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) Quit (Read error: Connection reset by peer)
[23:08] <davidz> cjh_: Looking into radosgw-admin command
[23:10] <PerlStalker> sagewk: Cool. I don't remember seeing that recommendation when I set up my cluster a few months ago.
[23:10] <loicd> sjust: http://ceph.com/docs/master/dev/peering/ was last updated a year ago, do you think I should be careful about things that changed since then ? It looks accurate to me but ... I'm learning ;-)
[23:11] <cjh_> davidz: thanks :)
[23:14] <sjust> loicd: it is somewhat accurate
[23:14] <sjust> the better information is in doc/dev/osd_internals
[23:14] <sjust> loicd: what you linked is accurate at a high level
[23:17] * yehuda_hm (~yehuda@2602:306:330b:1410:7849:6691:3662:529c) Quit (Ping timeout: 480 seconds)
[23:17] <loicd> ok
[23:17] <loicd> thanks
[23:17] * yehuda_hm (~yehuda@2602:306:330b:1410:7849:6691:3662:529c) has joined #ceph
[23:27] <sagewk> sjust: oh good, it's a bug in the scrub itself too?
[23:27] <sjust> correct
[23:27] <sagewk> changelog needs to be updated :)
[23:27] <sjust> my favorite kind of bug
[23:30] <nhm_> which bug?
[23:35] * rustam (~rustam@94.15.91.30) has joined #ceph
[23:35] * sbreck (~oftc-webi@static-71-126-149-35.washdc.fios.verizon.net) Quit (Quit: Page closed)
[23:35] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[23:41] <cjh_> davidz: any luck?
[23:42] <davidz> cjh_; I'm trying to find the source code, since I didn't see anything in the docs like you.
[23:45] <davidz> cjh_: BTW, there is a "user suspend" in the usage for the command.
[23:46] <sjust> nhm_: 5020
[23:46] <cjh_> that might be what i'm looking for. i just want to stop a user from writing to the gateway if he exceeds a certain usage
[23:46] <cjh_> maybe take away his write privs
[23:47] * rustam (~rustam@94.15.91.30) has joined #ceph
[23:48] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[23:48] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[23:53] * mnash (~chatzilla@vpn.expressionanalysis.com) Quit (Max SendQ exceeded)
[23:54] <davidz> cjh_: Check out radosgw-admin --help. Specifying permission:
[23:54] <davidz> --access=<access>         Set access permissions for sub-user, should be one of read, write, readwrite, full
[23:56] * mnash (~chatzilla@66-194-114-178.static.twtelecom.net) has joined #ceph
[23:56] <cjh_> davidz: sweet. i think that'll do it :)
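Putting davidz's two pointers together, a hedged sketch (the uid and subuser are placeholders; the --access switch applies to sub-users, which is what prompts the S3 question below):

    radosgw-admin user suspend --uid=johndoe
    radosgw-admin user enable --uid=johndoe
    radosgw-admin subuser modify --uid=johndoe --subuser=johndoe:swift --access=read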
[23:58] * alo (~al.o@79.59.209.97) Quit ()
[23:59] <cjh_> davidz: is that only for swift or does that work for s3 also?

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.