#ceph IRC Log

IRC Log for 2012-10-23

Timestamps are in GMT/BST.

[0:01] <elder> sagewk, you didn't happen to open another bug for this xfstests 13 issue did you? It's different from the bio_pair leak.
[0:06] * mkampe (~markk@2607:f298:a:607:222:19ff:fe31:b5d3) has joined #ceph
[0:25] <Leseb> joshd: do you also use the raw format within glance?
[0:26] <joshd> Leseb: if you want to clone it, yes
[0:26] <joshd> Leseb: OpenStack doesn't convert image formats for you
[0:27] <Leseb> joshd: yes I know, I use qemu-img -o for this
[0:28] <Leseb> joshd: now I'm stuck with a "IOError: [Errno 28] No space left on device"
[0:28] * aliguori (~anthony@cpe-70-123-146-246.austin.res.rr.com) has joined #ceph
[0:29] <joshd> Leseb: from glance? is your ceph cluster full?
[0:30] <Leseb> joshd: from the cinder-volume logs, and no my ceph cluster isn't full
[0:30] * mkampe (~markk@2607:f298:a:607:222:19ff:fe31:b5d3) Quit (Remote host closed the connection)
[0:31] <joshd> Leseb: what command is producing that error?
[0:31] <Leseb> cinder create --image-id 6f0ba2c7-0c72-4d1a-b35c-6e833ebbadaa --display-name boot-from-rbd 30
[0:33] <joshd> what's in the cinder logs before the error?
[0:33] <Leseb> joshd: ok I got it, /tmp is too small...
[0:33] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[0:33] <joshd> ah, so just doing a full copy for now
[0:33] <Leseb> joshd: 2G :/
[0:34] <Leseb> without using cinder? the old trick you mean?
[0:34] <joshd> Leseb: you can change that path with the volume_tmp_dir cinder-volume flag
[0:34] <Leseb> I'm gonna change that :)
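
For reference, a minimal sketch of the change Leseb is about to make, assuming a Folsom-era cinder setup; the flag name comes from joshd above, while the config path and replacement directory are placeholders:

    # /etc/cinder/cinder.conf
    [DEFAULT]
    # put temporary image copies on a filesystem with enough free space;
    # the default /tmp was only 2G here and caused ENOSPC
    volume_tmp_dir = /var/lib/cinder/tmp
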
[0:35] * sage (~sage@76.89.177.113) has joined #ceph
[0:41] * Q310 (~Q@ip-121-0-1-110.static.dsl.onqcomms.net) Quit (Read error: Connection reset by peer)
[0:41] <Leseb> joshd: AttributeError: 'module' object has no attribute 'exists'
[0:43] <joshd> Leseb: you have found a typo. s/os.exists/os.path.exists/
[0:44] <joshd> Leseb: in your cinder/volume/driver.py
[0:45] <Leseb> joshd: what should I change?
[0:45] <Leseb> joshd: sorry os.path.exists :)
[0:47] <Leseb> joshd: re-importing
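
The fix joshd describes, expressed as a shell one-liner against the file he names (run on the cinder-volume host; the restart command is an assumption about the init setup):

    # correct the typo'd call in the cinder volume driver
    sed -i 's/os\.exists/os.path.exists/' cinder/volume/driver.py
    # restart so the corrected module is re-imported
    service cinder-volume restart
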
[0:49] <Leseb> joshd: but why doesn't cinder import directly into ceph?
[0:50] <joshd> Leseb: not enough time to change cinder to use librbd instead of the command line tool. plus cloning is better anyway :)
[0:50] <Leseb> joshd: ok ok I see :)
[0:51] <Leseb> joshd: did you report the typo?
[0:52] <joshd> not yet
[0:54] <Leseb> joshd: why do we need raw instead of qcow2 for the boot from volume? qemu limitation?
[0:57] <joshd> Leseb: you can technically do qcow2 on top of rbd, but there are some funny things qemu does, and it would need changes in nova to support it
[0:58] <joshd> Leseb: qcow2 doesn't give you much that rbd by itself can't, it just adds more overhead
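
For anyone following the raw-format requirement: getting a qcow2 image into glance as raw might look like the following (image names are placeholders, and the glance flags are an assumption about the CLI of this era):

    # convert qcow2 to raw, since OpenStack won't convert formats for you
    qemu-img convert -f qcow2 -O raw precise.qcow2 precise.raw
    # upload the raw image to glance so cinder/rbd can clone or copy it
    glance image-create --name precise --disk-format raw \
        --container-format bare --file precise.raw
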
[0:59] <Leseb> joshd: IT WORKS! Finally :D
[1:00] <tziOm> Are you going to use boost forever?
[1:00] <tziOm> adds quite a size overhead
[1:00] <tziOm> and what is libedit? .. readline?
[1:01] <Leseb> joshd: thanks for the clarification about image format :)
[1:01] <joshd> Leseb: yw, glad you got it working... if you want to improve http://ceph.com/docs/master/rbd/rbd-openstack/ that'd be awesome
[1:03] <Leseb> joshd: Yes! I will :), I wrote everything step by step. We keep in touch, now I'm going to sleep :)
[1:04] <joshd> Leseb: cool, have a good night
[1:04] <Leseb> joshd: thanks, speak to you soon ;)
[1:06] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[1:09] * tziOm (~bjornar@ti0099a340-dhcp0778.bb.online.no) Quit (Remote host closed the connection)
[1:11] * lofejndif (~lsqavnbok@82VAAHDC9.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[1:11] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Quit: Leseb)
[1:12] * cdblack (8686894b@ircip3.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[1:13] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[1:16] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[1:18] <elder> sagewk, nevermind. http://tracker.newdream.net/issues/3385
[1:23] * Tv_ (~tv@2607:f298:a:607:190f:ecf7:102b:da8f) Quit (Quit: Tv_)
[1:26] <sagewk> elder: nice!
[1:29] <joshd> elder: that's not checking the results of fiemap, is it?
[1:31] <elder> No
[1:31] <elder> I've narrowed it down to two operations.
[1:31] <elder> I'm about to start figuring out what block ops are getting performed now.
[1:33] * pentabular (~sean@70.231.129.172) has left #ceph
[1:38] * The_Bishop (~bishop@p4FCDE74C.dip.t-dialin.net) Quit (Ping timeout: 480 seconds)
[1:40] * LarsFronius (~LarsFroni@2a02:8108:3c0:79:e1f6:193e:7e16:23ea) has joined #ceph
[1:41] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[1:41] * loicd (~loic@magenta.dachary.org) has joined #ceph
[1:48] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[1:56] * nwatkins1 (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[1:57] * buck (~buck@bender.soe.ucsc.edu) Quit (Quit: Leaving.)
[1:57] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[2:01] * imjustmatthew (~imjustmat@pool-74-110-201-156.rcmdva.fios.verizon.net) has joined #ceph
[2:07] * benner (~benner@193.200.124.63) Quit (Remote host closed the connection)
[2:07] * benner (~benner@193.200.124.63) has joined #ceph
[2:13] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) Quit (Ping timeout: 480 seconds)
[2:19] * sjust (~sam@2607:f298:a:607:baac:6fff:fe83:5a02) Quit (Ping timeout: 480 seconds)
[2:19] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) Quit (Ping timeout: 480 seconds)
[2:19] * Tamil1 (~Adium@38.122.20.226) Quit (Quit: Leaving.)
[2:19] * LarsFronius (~LarsFroni@2a02:8108:3c0:79:e1f6:193e:7e16:23ea) Quit (Quit: LarsFronius)
[2:21] * sagelap (~sage@38.122.20.226) Quit (Ping timeout: 480 seconds)
[2:22] * sagelap (~sage@99.sub-70-197-143.myvzw.com) has joined #ceph
[2:24] * sagelap1 (~sage@58.sub-70-197-144.myvzw.com) has joined #ceph
[2:26] * sagelap2 (~sage@90.sub-70-197-140.myvzw.com) has joined #ceph
[2:29] * sjust (~sam@2607:f298:a:607:baac:6fff:fe83:5a02) has joined #ceph
[2:30] * sagelap (~sage@99.sub-70-197-143.myvzw.com) Quit (Ping timeout: 480 seconds)
[2:31] * joshd (~joshd@38.122.20.226) has joined #ceph
[2:32] * sagelap (~sage@58.sub-70-197-144.myvzw.com) has joined #ceph
[2:32] * sagelap1 (~sage@58.sub-70-197-144.myvzw.com) Quit (Ping timeout: 480 seconds)
[2:33] * nwatkins1 (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[2:34] * sagelap2 (~sage@90.sub-70-197-140.myvzw.com) Quit (Ping timeout: 480 seconds)
[2:36] * adjohn (~adjohn@69.170.166.146) Quit (Quit: adjohn)
[2:37] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[2:38] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit ()
[2:38] * imjustmatthew (~imjustmat@pool-74-110-201-156.rcmdva.fios.verizon.net) Quit (Remote host closed the connection)
[2:45] * imjustmatthew (~imjustmat@pool-74-110-201-156.rcmdva.fios.verizon.net) has joined #ceph
[2:50] * yoshi (~yoshi@p37219-ipngn1701marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[2:55] * nwatkins1 (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[2:55] * nwatkins1 (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) has left #ceph
[2:56] * sage (~sage@76.89.177.113) Quit (Ping timeout: 480 seconds)
[2:57] * BManojlovic (~steki@bojanka.net) has joined #ceph
[2:58] <sagelap> joshd: wip-assert-exists-2 pushed. passing fsx, way simpler!
[3:00] <imjustmatthew> after deleting and creating pools and running 'newfs' the filesystem is acting like it's storing data, but isn't actually writing it anywhere. Directories have files, but the files themselves have no data. Am I missing something obvious?
[3:00] <sagelap> joshd: i think we should still implement the low-level assert_exists op.. someday in the future we can use that instead and make things a bit cleaner
[3:01] <sagelap> imjustmatthew: did you specify the numeric pool ids for the newly created data and metadata pools?
[3:01] <joshd> sagelap: cool, I'll take a look.
[3:01] <imjustmatthew> yes
[3:02] <imjustmatthew> sagelap: they happened to be 10 and 11 as in 'mds newfs 10 11 --yes-i-really-mean-it'
[3:02] <imjustmatthew> it's weird since the OSDs are writing a little bit of data out as if they're storing metadata but not the data
[3:02] <imjustmatthew> and the network I/O goes up as if the data is getting to the OSDs
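
As an aside, the numeric pool ids sagelap is asking about can be read out of the osdmap, e.g. (output format is argonaut-era and may differ):

    # each pool line shows its numeric id; 'mds newfs' wants the ids, not the names
    ceph osd dump | grep pool
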
[3:06] * sage (~sage@76.89.177.113) has joined #ceph
[3:09] * sagelap (~sage@58.sub-70-197-144.myvzw.com) Quit (Ping timeout: 480 seconds)
[3:23] * slang (~slang@ace.ops.newdream.net) Quit (Quit: slang)
[3:24] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[3:26] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit ()
[3:37] * baobrien (~baobrien@76.77.236.0) has joined #ceph
[3:37] * markkampe (~markk@2607:f298:a:607:a5c8:e65b:2686:fc15) has joined #ceph
[3:40] * Cube (~Cube@12.248.40.138) Quit (Quit: Leaving.)
[3:42] * markkampe is now known as mkampe
[3:43] * mkampe (~markk@2607:f298:a:607:a5c8:e65b:2686:fc15) Quit (Quit: Leaving.)
[3:44] * mkampe (~markk@2607:f298:a:607:a5c8:e65b:2686:fc15) has joined #ceph
[3:47] <imjustmatthew> sage: okay, this is a permissions problem. Giving the client the cap 'osd rw' without a pool specification makes it work. Does the auth list actually use pool IDs internally? Or could a permissions problem have been introduced in the upgrade from 0.52 to 0.53, and the data was never really trashed by the MDS in the first place?
[3:50] * BManojlovic (~steki@bojanka.net) Quit (Ping timeout: 480 seconds)
[3:51] <joshd> sagewk: wip-assert-exists-2 looks good, it even makes things cleaner than before
[3:52] * joshd (~joshd@38.122.20.226) Quit (Quit: Leaving.)
[3:55] * deepsa (~deepsa@122.167.173.220) Quit (Ping timeout: 480 seconds)
[3:59] * mkampe (~markk@2607:f298:a:607:a5c8:e65b:2686:fc15) has left #ceph
[4:00] * chutzpah (~chutz@199.21.234.7) Quit (Quit: Leaving)
[4:05] * deepsa (~deepsa@115.242.169.179) has joined #ceph
[4:12] * deepsa_ (~deepsa@122.172.33.116) has joined #ceph
[4:13] * deepsa (~deepsa@115.242.169.179) Quit (Ping timeout: 480 seconds)
[4:13] * deepsa_ is now known as deepsa
[4:17] * aliguori (~anthony@cpe-70-123-146-246.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[4:26] * baobrien (~baobrien@76.77.236.0) Quit (Read error: Connection reset by peer)
[4:32] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[4:54] <gregaf> imjustmatthew: ah, you've run into a combination of access permissions and a capability parsing bug
[4:54] <gregaf> you need to grant the client access to the pool you're using for filesystem data
[4:55] <gregaf> and some portion of the dev releases had a bug whereby incomplete permissions were parsing out to be much more generous than they should have been
[4:55] <gregaf> I think joshd was the one who found and fixed that
[4:55] <imjustmatthew> gregaf: it's good to know it wasn't the MDS losing everything after all :)
[4:56] <imjustmatthew> The clients had access as 'osd rw pool=data', is that consistent with that bug?
[4:59] <gregaf> umm, that actually looks like the correct grant to me but I'm not certain just now
[4:59] <gregaf> not entirely awake, sorry ;)
[5:02] <gregaf> imjustmatthew: try
[5:03] <gregaf> osd = "allow rw pool data"
[5:04] <gregaf> the ceph-authtool manpage has some examples, and the grammar did change a little bit
[5:04] <gregaf> off now, later! :)
[5:08] * Q310 (~Q@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[5:16] * sjustlaptop (~sam@68-119-138-53.dhcp.ahvl.nc.charter.com) has joined #ceph
[5:23] <imjustmatthew> gregaf: That worked, thanks for your help!
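
The grant gregaf suggests, spelled out with ceph-authtool as the manpage he mentions does; the keyring path and client name are placeholders:

    # give client.foo read access to the mons and rw access to the 'data' pool
    # (use --gen-key first if the entry doesn't exist in the keyring yet)
    ceph-authtool /etc/ceph/keyring -n client.foo \
        --cap mon 'allow r' --cap osd 'allow rw pool data'
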
[5:39] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) Quit (Quit: Leaving.)
[6:39] * Cube (~Cube@12.248.40.138) has joined #ceph
[6:45] * deepsa_ (~deepsa@122.167.175.54) has joined #ceph
[6:46] * deepsa (~deepsa@122.172.33.116) Quit (Read error: Connection reset by peer)
[6:46] * deepsa_ is now known as deepsa
[7:13] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[7:41] * dmick is now known as dmick_away
[7:52] * sjustlaptop (~sam@68-119-138-53.dhcp.ahvl.nc.charter.com) Quit (Ping timeout: 480 seconds)
[8:00] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) Quit (Quit: Leaving.)
[8:04] * pingfan (~pfliu@202.108.130.138) has joined #ceph
[8:09] * loicd (~loic@magenta.dachary.org) has joined #ceph
[8:14] * Cube (~Cube@12.248.40.138) Quit (Quit: Leaving.)
[8:17] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[8:42] * Qu310 (Q@qten.qnet.net.au) has joined #ceph
[8:46] * AaronSchulz_ (~chatzilla@216.38.130.166) has joined #ceph
[8:47] * Qten (Q@qten.qnet.net.au) Quit (Ping timeout: 480 seconds)
[8:51] * AaronSchulz (~chatzilla@216.38.130.166) Quit (Ping timeout: 480 seconds)
[8:51] * AaronSchulz_ is now known as AaronSchulz
[9:15] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) Quit (Ping timeout: 480 seconds)
[9:21] * LarsFronius (~LarsFroni@95-91-242-165-dynip.superkabel.de) has joined #ceph
[9:21] * gohko (~gohko@natter.interq.or.jp) Quit (Read error: Connection reset by peer)
[9:24] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[9:25] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Quit: adjohn)
[9:31] * masterpe (~masterpe@2001:990:0:1674::1:82) Quit (Remote host closed the connection)
[9:33] * robert (~pfliu@202.108.130.138) has joined #ceph
[9:34] * pingfan (~pfliu@202.108.130.138) Quit (Ping timeout: 480 seconds)
[9:35] * robert (~pfliu@202.108.130.138) Quit (Remote host closed the connection)
[9:45] * Leseb (~Leseb@193.172.124.196) has joined #ceph
[9:47] * loicd (~loic@207.209-31-46.rdns.acropolistelecom.net) has joined #ceph
[9:50] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) Quit (Quit: tryggvil)
[9:58] * LarsFronius (~LarsFroni@95-91-242-165-dynip.superkabel.de) Quit (Quit: LarsFronius)
[9:59] * gohko (~gohko@natter.interq.or.jp) has joined #ceph
[9:59] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[10:03] * masterpe (~masterpe@2001:990:0:1674::1:82) has joined #ceph
[10:16] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has joined #ceph
[10:21] * Kioob`Taff1 (~plug-oliv@local.plusdinfo.com) has joined #ceph
[10:23] * tziOm (~bjornar@194.19.106.242) has joined #ceph
[10:25] * BManojlovic (~steki@bojanka.net) has joined #ceph
[10:42] * verwilst (~verwilst@d5152FEFB.static.telenet.be) has joined #ceph
[10:53] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) has joined #ceph
[11:19] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[11:25] <todin> morning, ceph.com is down?
[11:28] <liiwi> works for me
[11:30] <todin> liiwi: works here again as well.
[11:32] * yoshi (~yoshi@p37219-ipngn1701marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[11:36] * Morg (d4438402@ircip3.mibbit.com) has joined #ceph
[11:54] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[11:55] <madkiss> Hello there
[11:55] <madkiss> I am trying to create an RBD image with qemu-img and wonder whether it is possible to use cephx with qemu-img
[12:05] * ninkotech (~duplo@89.177.137.231) Quit (Read error: Connection reset by peer)
[12:05] * ninkotech (~duplo@89.177.137.231) has joined #ceph
[12:08] * steki-BLAH (~steki@bojanka.net) has joined #ceph
[12:12] * loicd (~loic@207.209-31-46.rdns.acropolistelecom.net) Quit (Quit: Leaving.)
[12:13] * BManojlovic (~steki@bojanka.net) Quit (Ping timeout: 480 seconds)
[12:38] <Qu310> madkiss: try this http://ceph.com/docs/master/rbd/rbd-openstack/
[12:42] <madkiss> ?!
[12:43] <madkiss> while we are at it, when did "ceph auth get-or-create" become available?
[12:44] <madkiss> my 0.48.2 argonaut packages don't have it
[12:49] <madkiss> wait, this is some sort of screw up with the ubuntu packaging system
[12:54] * tryggvil_ (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[12:59] * tryggvil_ (~tryggvil@rtr1.tolvusky.sip.is) Quit (Read error: No route to host)
[12:59] * tryggvil_ (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[13:01] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Ping timeout: 480 seconds)
[13:01] * tryggvil_ is now known as tryggvil
[13:03] * match (~mrichar1@pcw3047.see.ed.ac.uk) has joined #ceph
[13:09] * Morg (d4438402@ircip3.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[13:15] * steki-BLAH (~steki@bojanka.net) Quit (Ping timeout: 480 seconds)
[13:23] * tziOm (~bjornar@194.19.106.242) Quit (Remote host closed the connection)
[13:28] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[13:30] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[13:31] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Quit: slang)
[13:32] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[13:36] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[13:36] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Read error: Connection reset by peer)
[13:37] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[13:37] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[13:39] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[13:40] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[13:43] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[13:45] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[13:51] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) has joined #ceph
[13:54] * lofejndif (~lsqavnbok@28IAAIKSA.tor-irc.dnsbl.oftc.net) has joined #ceph
[13:58] * dabeowulf (dabeowulf@free.blinkenshell.org) Quit (Ping timeout: 480 seconds)
[14:16] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Quit: tryggvil)
[14:23] * ninkotech (~duplo@89.177.137.231) Quit (Read error: Connection reset by peer)
[14:23] * ninkotech (~duplo@89.177.137.231) has joined #ceph
[14:25] * sagelap (~sage@m8a0f36d0.tmodns.net) has joined #ceph
[14:40] <sagelap> good morning!
[14:40] <slang> good morning
[14:41] <elder> sagelap, you are early.
[14:48] * loicd (~loic@78.250.162.182) has joined #ceph
[14:52] <todin> good afternoon ;-)
[14:54] <madkiss> If somebody ever asks, I wrote a nifty little how-to on how to remove the "D" from your DRBD cluster … http://www.hastexo.com/resources/hints-and-kinks/migrating-virtual-machines-block-based-storage-radosceph
[14:55] * sagelap (~sage@m8a0f36d0.tmodns.net) Quit (Ping timeout: 480 seconds)
[14:59] * loicd (~loic@78.250.162.182) Quit (Quit: Leaving.)
[15:01] * psomas_ (~psomas@inferno.cc.ece.ntua.gr) has left #ceph
[15:06] * aliguori (~anthony@cpe-70-123-146-246.austin.res.rr.com) has joined #ceph
[15:10] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) has joined #ceph
[15:11] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[15:11] * loicd (~loic@90.84.144.246) has joined #ceph
[15:14] <zynzel> madkiss: what is RB? ;)
[15:14] <madkiss> zynzel: s/d//, not s/d//g :)
[15:16] <zynzel> s/D// or s/d//i! :)
[15:16] <madkiss> zynzel: did I mention that I'm active in Waste Management Consulting?
[15:18] <madkiss> ;P
[15:18] <zynzel> ;)
[15:24] <Fruit> can't you just suspend the vm, dd the backend devices and resume it?
[15:26] * calebamiles1 (~caleb@c-24-128-194-192.hsd1.vt.comcast.net) Quit (Ping timeout: 480 seconds)
[15:31] * loicd (~loic@90.84.144.246) Quit (Ping timeout: 480 seconds)
[15:36] <scalability-junk> madkiss, you are from hastexo?
[15:38] <madkiss> scalability-junk: I am
[15:38] <madkiss> http://www.hastexo.com/who/martin
[15:38] <madkiss> that is me
[15:39] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[15:40] * loicd (~loic@magenta.dachary.org) has joined #ceph
[15:41] <scalability-junk> any estimate on whether hastexo would help with an all-in-one "production" setup with openstack + ceph on top of a hetzner server or colocation...?
[15:42] <madkiss> scalability-junk: could you write a short email to sales@hastexo.com with some more details? :)
[15:43] <scalability-junk> I could :) just wanted to make sure you do tiny setups too :P
[15:43] <madkiss> we do almost everything as long as we're paid for it
[15:45] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[15:45] <scalability-junk> why does that sound expensive :P
[15:46] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[15:52] * mtk (~mtk@ool-44c35bb4.dyn.optonline.net) Quit (Remote host closed the connection)
[15:54] * mtk (~mtk@ool-44c35bb4.dyn.optonline.net) has joined #ceph
[15:59] * MikeMcClurg (~mike@cpc10-cmbg15-2-0-cust205.5-4.cable.virginmedia.com) Quit (Ping timeout: 480 seconds)
[16:01] * noob2 (a5a00214@ircip2.mibbit.com) has joined #ceph
[16:06] * vata (~vata@208.88.110.46) has joined #ceph
[16:37] * MikeMcClurg (~mike@firewall.ctxuk.citrix.com) has joined #ceph
[16:50] * sagelap (~sage@185.sub-70-197-0.myvzw.com) has joined #ceph
[16:52] * cdblack (86868b4a@ircip2.mibbit.com) has joined #ceph
[17:02] * calebamiles (~caleb@c-24-128-194-192.hsd1.vt.comcast.net) has joined #ceph
[17:07] * sagelap (~sage@185.sub-70-197-0.myvzw.com) Quit (Ping timeout: 480 seconds)
[17:11] * aliguori (~anthony@cpe-70-123-146-246.austin.res.rr.com) Quit (Remote host closed the connection)
[17:20] * mtk (~mtk@ool-44c35bb4.dyn.optonline.net) Quit (Remote host closed the connection)
[17:21] * justinwarner1 (~ceg442049@osis111.cs.wright.edu) has joined #ceph
[17:22] <justinwarner1> I'm setting up ceph on two machines (For now) and I did the password-less ssh through su, but when I run mkcephfs with my ceph.conf, it goes on to the other machine: pushing conf and monmap to Cocke:/tmp/mkfs.ceph.11402
[17:23] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[17:23] <justinwarner1> Here it asks for root@cocke's password. I don't have the root user's password, but I have my professor's password, which is a superuser account; using it does not work.
[17:24] <justinwarner1> Was wondering if I can put conf and monmap on the other system manually? I believe I read you could, but I can't find the document again.
[17:24] <madkiss> mkcephfs really doesn't do anything other than scp things around
[17:25] <madkiss> you will, however, need root capabilities in one way or another to put stuff into /etc/ceph i think
[17:25] <justinwarner1> I have a su account that I can use, but when I run mkcephfs, it wants the root account, which I don't have.
[17:25] <justinwarner1> Unless there is an option to use a different su user instead of root; then I can move things manually.
[17:25] <madkiss> i'm fairly sure you have it, it just doesn't have a password set.
[17:26] <justinwarner1> It's on a university, can't imagine them not setting the password.
[17:26] <madkiss> justinwarner: if you do the su thing and do "id" afterwards, what UID do you see?
[17:27] * verwilst (~verwilst@d5152FEFB.static.telenet.be) Quit (Quit: Ex-Chat)
[17:27] <justinwarner1> ceg442049@Wilkinson:~$ su id
[17:27] <justinwarner1> Unknown id: id
[17:27] <justinwarner1> [or]
[17:27] <justinwarner1> # id
[17:27] <justinwarner1> uid=0(root) gid=0(root) groups=0(root)
[17:27] <madkiss> so you're root anyway
[17:27] <justinwarner1> So it does say I'm root, but it won't accept my password to do anything with it.
[17:28] <justinwarner1> Well, anything through the prompt with mkcephfs
[17:28] <madkiss> can you deploy SSH user keys on these machines?
[17:28] <justinwarner1> I believe so, yes.
[17:28] <madkiss> just deploy your own key on every machine that's gonna be part of the cluster, then
[17:29] <justinwarner1> To /root/.ssh? Or to my users home .ssh?
[17:30] <madkiss> well, if you want it for root, you need to copy it to /root/.ssh/authorized_keys
[17:31] <madkiss> if you can not log in as root, copy the file to $YOURUSERHOME first, then do the su magic and append it to /root/.ssh/authorized_keys afterwards
[17:31] <justinwarner1> Alright, I'll try that now. Thanks.
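
What madkiss is describing, roughly (assuming an existing RSA keypair; ssh-copy-id only works where direct root logins are accepted, hence the manual variant, with YOURUSER as a placeholder):

    # if you can log in as root directly:
    ssh-copy-id root@Cocke
    # otherwise: copy the public key over as your own user...
    scp ~/.ssh/id_rsa.pub Cocke:
    # ...then on Cocke, after su, append it to root's authorized keys:
    cat /home/YOURUSER/id_rsa.pub >> /root/.ssh/authorized_keys
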
[17:38] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[17:38] <justinwarner1> That did allow the monmap and conf push to work.
[17:39] <justinwarner1> failed: 'ssh root@Cocke /sbin/mkcephfs -d /tmp/mkfs.ceph.12584 --init-daemon osd.1'
[17:39] <justinwarner1> That's where it is now
[17:39] <madkiss> unfortunately you didn't paste the part of the output containing the actual error message. ;)
[17:40] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) Quit (Quit: tryggvil)
[17:40] <justinwarner1> Um
[17:40] <justinwarner1> Is there a preferred way? It's several lines.
[17:40] <madkiss> http://paste.debian.net/
[17:41] <justinwarner1> Thank you. http://paste.debian.net/203003/
[17:43] <madkiss> justinwarner: is that suse or redhat or something?
[17:44] <justinwarner1> Debian/Ubuntu/KDE
[17:44] <justinwarner1> Debian
[17:44] <justinwarner1> Lol
[17:44] * Tv_ (~tv@2607:f298:a:607:190f:ecf7:102b:da8f) has joined #ceph
[17:45] <madkiss> justinwarner: Squeeze? Wheezy?
[17:46] <justinwarner1> Um, I'm not too sure.
[17:47] <justinwarner1> # uname -a
[17:47] <justinwarner1> Linux Wilkinson.osis.cs.wright.edu 3.2.0-29-generic #46-Ubuntu SMP Fri Jul 27 17:03:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
[17:47] <justinwarner1> Wilkinson:/home/ceg4420/ceg442049
[17:47] <justinwarner1> # cat /proc/version
[17:47] <justinwarner1> Linux version 3.2.0-29-generic (buildd@allspice) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #46-Ubuntu SMP Fri Jul 27 17:03:23 UTC 2012
[17:47] <madkiss> use pastebin, please.
[17:47] <madkiss> That''s Ubuntu
[17:47] <madkiss> obviously.
[17:47] <madkiss> do you have the same version of ceph installed on all machines?
[17:48] <madkiss> "dpkg —list | grep ceph" from all machines, and compare then to find out if ceph has the same version on all machines
[17:48] <madkiss> which it should
[17:48] <justinwarner1> Thought Ubuntu was debian based.
[17:48] <madkiss> it is
[17:48] <madkiss> but it's still different from plain debian
[17:48] * aliguori (~anthony@32.97.110.59) has joined #ceph
[17:48] <madkiss> I'm madkiss@debian.org, I know that part of the project's history ;-)
[17:48] <justinwarner1> Lol
[17:49] <justinwarner1> Gotya
[17:50] <justinwarner1> One has 0.48.2argonaut-1precise and the other has 0.41-1ubuntu2.1.
[17:50] <justinwarner1> I'm guessing that's the problem?
[17:50] <madkiss> by all means, yes.
[17:50] <madkiss> make sure you have the same ceph version on all systems
[17:50] <madkiss> otherwise, you'll be entering a world full of pain
[17:50] <Tv_> "Linaro" in the kernel version?
[17:51] <Tv_> "lsb_release -a" tends to be the best way to ask "what am i running"
[17:51] <Tv_> oh huh, the gcc version does have Linaro on it
[17:51] <Tv_> funky
[17:52] <justinwarner1> Alright, thanks a lot. I'll try this and hopefully it works.
[17:52] <madkiss> Tv_: voodoo, y'know ;)
[17:53] * MikeMcClurg (~mike@firewall.ctxuk.citrix.com) Quit (Ping timeout: 480 seconds)
[18:01] * MikeMcClurg (~mike@firewall.ctxuk.citrix.com) has joined #ceph
[18:03] * Kioob`Taff1 (~plug-oliv@local.plusdinfo.com) Quit (Quit: Leaving.)
[18:04] <justinwarner1> madkiss: that worked great (I think). Upon starting and then doing a ceph -s, it outputs "2012-10-23 12:03:31.492022 7fd8f7ebc700 monclient: hunting for new mon" a lot on both of the machines I set up. Is this supposed to display?
[18:08] <madkiss> it might take a little time until the mons are up
[18:08] <madkiss> you could try "/etc/init.d/ceph -a start"
[18:09] <justinwarner1> Said it's starting mon.a, then starting the other things which were already running (mds and the two osd's).
[18:10] <madkiss> you're running one mon only?
[18:10] <justinwarner1> Only doing this on two machines right now, more once I get the initial two set up.
[18:10] <madkiss> you'll need two mons anyway. your cluster needs a quorum.
[18:10] <madkiss> and 1 actually isn't a qurum.
[18:10] <madkiss> ;)
[18:11] <justinwarner1> Qurum (also written Qurm) is an upmarket suburb of Muscat in Oman. Its main attraction is the Qurum Natural Park,
[18:11] <justinwarner1> What's a qurum?
[18:11] <madkiss> you're more looking for http://en.wikipedia.org/wiki/Quorum_(distributed_computing) i think.
[18:11] <justinwarner1> That's better.
[18:11] <justinwarner1> Lol
[18:12] * Cube (~Cube@12.248.40.138) has joined #ceph
[18:12] <justinwarner1> So if I'm to make changes (Now and in the future), I change the ceph.conf file, push it out to cluster nodes, then I can do the same process on each machine, and that should work?
[18:13] <justinwarner1> And restart it and what not.
[18:13] <madkiss> you'll have to manually create the files needed for a mon. there's documentation out on the web on how to do that :)
[18:15] <justinwarner1> Does the monmap and mkcephfs not do that for you? I thought it did?
[18:16] <madkiss> well, how would mkcephfs create a mon that wasn't present in ceph.conf when you ran it?
[18:17] <justinwarner1> Can you not rerun mkcephfs?
[18:17] <madkiss> i think it will bark if you let it create the mon/mds/osd-structure on a host where the files are in place already.
[18:18] <justinwarner1> http://ceph.com/wiki/Designing_a_cluster#Ceph_Monitor_.28Ceph-MON.29
[18:18] <justinwarner1> According to that, 1 is okay, 2 is not. So should I just get three computers set up on the cluster then?
[18:18] * Leseb (~Leseb@193.172.124.196) Quit (Quit: Leseb)
[18:18] <justinwarner1> And I see where that problem is; I assumed it would just ignore it or overwrite what was needed.
[18:20] <gregaf> one monitor is fine as long as that's all the cluster ever had
[18:20] <madkiss> ah
[18:20] <madkiss> gregaf knows more about it than me, then
[18:20] <madkiss> :)
[18:20] <justinwarner1> Lol
[18:22] <justinwarner1> So then, if I have 30 machines I want to set up, they all should be osd's, but how many mon's and mds's should I have?
[18:23] <justinwarner1> I can't find anything that says how big clusters should be or what the proportion of each node type should be.
[18:24] <gregaf> in a real setup you should have 3 monitors
[18:25] <gregaf> and if you're using CephFS (the POSIX filesystem) then you'll need one MDS
[18:25] <gregaf> you probably shouldn't run more than that; it's pretty unstable
[18:25] <justinwarner1> Alright.
[18:26] <justinwarner1> Sounds great.
[18:26] <justinwarner1> Thank you both, =)
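
A skeleton of the layout gregaf describes, in mkcephfs-era ceph.conf form; hostnames and addresses are placeholders:

    [mon.a]
        host = node01
        mon addr = 10.0.0.1:6789
    [mon.b]
        host = node02
        mon addr = 10.0.0.2:6789
    [mon.c]
        host = node03
        mon addr = 10.0.0.3:6789
    [mds.a]
        host = node01
    [osd.0]
        host = node01
    [osd.1]
        host = node02
    ; ...and so on, one [osd.N] section per data disk across the 30 machines
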
[18:30] * sagelap (~sage@59.sub-70-197-0.myvzw.com) has joined #ceph
[18:31] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[18:31] * justinwarner1 (~ceg442049@osis111.cs.wright.edu) Quit (Quit: Leaving.)
[18:33] * loicd (~loic@magenta.dachary.org) has joined #ceph
[18:35] <sagelap> slang, davidz, gregaf: missing standup. i fixed the deb (and rpm) builds after the libcephfs-java merge. other than that, nothing new.
[18:35] <sagelap> joshd: working on negative caching in ObjectCacher
[18:35] <jks> gregaf: I have 3 monitors in my setup currently - if one goes down am I then supposed to manually down the second monitor to get to a situation with 1 running monitor?
[18:35] <gregaf> jks: no, no!
[18:36] <gregaf> you've got a cluster with three defined monitors
[18:36] * rosco (~r.nap@188.205.52.204) Quit (Quit: *Poof*)
[18:36] <jks> just checking :-)
[18:36] * rosco (~r.nap@188.205.52.204) has joined #ceph
[18:36] * mtk (~mtk@ool-44c35bb4.dyn.optonline.net) has joined #ceph
[18:36] <gregaf> in order to make any progress you need a strict majority of your monitors (ie, 2)
[18:36] <jks> ah, okay - so majority is defined in terms of the number of monitors in ceph.conf, and not the number of running monitors?
[18:36] <gregaf> we just don't recommend that you set up a cluster with an even number of monitors, because the extra one doesn't add any more failure resistance
[18:36] * rweeks (~rweeks@12.25.190.226) has joined #ceph
[18:36] <gregaf> jks: well, it's defined in terms of the monitors defined in your monitor map
[18:37] <jks> (I assume because the remaining monitors cannot know if the down monitors are really down, or they themselves have been cut off from the network)
[18:37] <gregaf> if you used mkcephfs, then the monitors in your ceph.conf, yes
[18:37] <gregaf> yeah, that's right
[18:37] <jks> gregaf: I did! - Thanks for the explanation
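
gregaf's "strict majority" rule in numbers, counted against the monmap rather than against whoever is currently up:

    3 defined mons -> quorum needs 2 -> tolerates 1 mon failure
    4 defined mons -> quorum needs 3 -> still tolerates only 1 failure
    5 defined mons -> quorum needs 3 -> tolerates 2 failures

i.e. quorum = floor(n/2) + 1, which is why even monitor counts buy nothing extra.
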
[18:46] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) Quit (Ping timeout: 480 seconds)
[18:52] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[18:55] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) has joined #ceph
[18:55] * deepsa (~deepsa@122.167.175.54) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[18:57] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[19:03] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has left #ceph
[19:04] * mtk (~mtk@ool-44c35bb4.dyn.optonline.net) Quit (Remote host closed the connection)
[19:05] * sagelap1 (~sage@soenat3.cse.ucsc.edu) has joined #ceph
[19:05] * mtk (~mtk@ool-44c35bb4.dyn.optonline.net) has joined #ceph
[19:10] * sagelap (~sage@59.sub-70-197-0.myvzw.com) Quit (Ping timeout: 480 seconds)
[19:12] * houkouonchi-work (~linux@12.248.40.138) has joined #ceph
[19:12] * mtk (~mtk@ool-44c35bb4.dyn.optonline.net) Quit (Remote host closed the connection)
[19:12] * houkouonchi-work (~linux@12.248.40.138) Quit (Remote host closed the connection)
[19:13] * houkouonchi-work (~linux@12.248.40.138) has joined #ceph
[19:14] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) has joined #ceph
[19:19] * mtk (~mtk@ool-44c35bb4.dyn.optonline.net) has joined #ceph
[19:20] * adjohn (~adjohn@69.170.166.146) has joined #ceph
[19:21] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[19:38] * loicd (~loic@90.84.146.196) has joined #ceph
[19:42] <PerlStalker> Will it cause problems if I try to mount a cephfs on a node that is part of the ceph cluster?
[19:44] <joshd> if you use a kernel client on a node with an osd, it can lead to a deadlock just like a loopback nfs mount
[19:45] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Ping timeout: 480 seconds)
[19:45] <PerlStalker> Interesting.
[19:45] * scalability-junk (~stp@188-193-208-44-dynip.superkabel.de) has joined #ceph
[19:45] <joshd> it's a generic issue with kernel clients and userspace storage
[19:46] <PerlStalker> Fair enough
[19:48] * rweeks (~rweeks@12.25.190.226) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[19:54] * chutzpah (~chutz@199.21.234.7) has joined #ceph
[19:57] * mtk (~mtk@ool-44c35bb4.dyn.optonline.net) Quit (Remote host closed the connection)
[19:58] * mtk (~mtk@ool-44c35bb4.dyn.optonline.net) has joined #ceph
[19:58] * MikeMcClurg (~mike@firewall.ctxuk.citrix.com) Quit (Ping timeout: 480 seconds)
[20:00] <madkiss> PerlStalker: http://h10025.www1.hp.com/ewfrf/wc/document?cc=us&lc=en&dlc=en&tmp_geoLoc=true&docname=c02073470 if you want more information
[20:00] <madkiss> (it's the same for ceph as it is for nfs)
[20:00] * mtk (~mtk@ool-44c35bb4.dyn.optonline.net) Quit (Remote host closed the connection)
[20:10] * mtk (~mtk@ool-44c35bb4.dyn.optonline.net) has joined #ceph
[20:12] * LarsFronius (~LarsFroni@2a02:8108:3c0:79:e847:d1c:63db:6c06) has joined #ceph
[20:27] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[20:36] * slang (~slang@ace.ops.newdream.net) has joined #ceph
[20:39] * loicd (~loic@90.84.146.196) Quit (Ping timeout: 480 seconds)
[20:44] * rweeks (~rweeks@12.25.190.226) has joined #ceph
[20:54] * aliguori (~anthony@32.97.110.59) Quit (Quit: Ex-Chat)
[20:55] * aliguori (~anthony@32.97.110.59) has joined #ceph
[20:57] * loicd (~loic@magenta.dachary.org) has joined #ceph
[20:58] * rweeks (~rweeks@12.25.190.226) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[20:59] * ninkotech (~duplo@89.177.137.231) Quit (Read error: Connection reset by peer)
[21:00] * ninkotech (~duplo@89.177.137.231) has joined #ceph
[21:11] <jtang> will ross turk be at the amsterdam workshop for ceph?
[21:11] <rturk> yes, I'll be there
[21:11] <jtang> cool, we're planning a few things here in relation to things we might want to do with ceph
[21:12] <jtang> i just had a discussion with one of my co-workers, we might want to throw a few ideas towards you guys when the workshop happens
[21:12] <jtang> and thats in relation to doing something with ceph
[21:12] <jtang> and academia
[21:14] <jtang> so rturk you might get hit with some questions and ideas in a week's time in amsterdam
[21:14] <rturk> Very cool
[21:14] <rturk> I look forward to it :)
[21:15] <jtang> two of my co-workers are going to this workshop
[21:15] <jtang> from two different perspectives and projects
[21:15] <jtang> :P
[21:15] <jtang> sadly im not going :P
[21:16] <rturk> too bad, amsterdam is a fun spot!
[21:16] <rturk> I think it's going to be a really cool event.
[21:16] <rturk> but there will be more in the future :)
[21:16] <jtang> do you have any indicators on the size of the event?
[21:16] <rturk> I don't have the latest registration numbers at the moment
[21:16] <jtang> im curious as to what the uptake on the ceph product is like
[21:17] <jtang> we're thinking that ceph is a pretty good fit for a few of our use cases
[21:17] <jtang> and we have a few things we would like to do and see being done for our projects
[21:17] <rturk> from what I've seen, uptake is very good
[21:18] <jtang> btw i work in a HPC facility, but im allocated to a preservation and archiving project
[21:18] <rturk> ah, gotcha. how will I know your colleagues when I meet them?
[21:18] <jtang> they will ask questions, they will be from trinity college dublin
[21:19] <jtang> we're still in the middle of cooking up a few proposals and ideas in relation to ceph though
[21:20] <jtang> though im at no liberty to discuss the details of what we're cooking up
[21:20] <jtang> or thinking about
[21:21] * jtang is resisting the urge to spew out ideas which might not happen or be proposed
[21:21] <rturk> :)
[21:21] <rturk> I will keep an eye out for them in AMS
[21:22] <jtang> i'll be making sure i talk to you guys at SC12 next month as well
[21:22] <rturk> I may not be there myself, but a whole crew of us will be
[21:23] <rturk> alex and mark will be, I think
[21:23] * ninkotech (~duplo@89.177.137.231) Quit (Read error: Connection reset by peer)
[21:24] <jtang> are they engineers/developers? or sales/marketing/etc ??
[21:24] * ninkotech (~duplo@89.177.137.231) has joined #ceph
[21:27] <rturk> alex and mark are both engineers. we're sending a few other folks from sales/marketing too
[21:28] <jtang> excellent
[21:28] <jtang> i can quiz them about some features and stuff then
[21:29] <jtang> im looking forward to sc12
[21:30] <rturk> I hear it's a great show
[21:31] <jtang> yea its pretty much cutting edge computing i guess
[21:31] <jtang> the tech from it eventually filters down to the general public
[21:31] <jtang> its like the formula1 of the computing world
[21:32] * LarsFronius (~LarsFroni@2a02:8108:3c0:79:e847:d1c:63db:6c06) Quit (Quit: LarsFronius)
[21:32] <jtang> i wonder who will be #1 in the top500 list (i suspect llnl will be)
[21:32] <jtang> i hope you guys wont be in with the disruptive tech area
[21:34] <rturk> looking at the floor plan now
[21:34] <rturk> pretty large conference :)
[21:38] * dmick_away is now known as dmick
[21:42] <jtang> yea there's about 14k people there usually
[21:42] <jtang> anyway its late in ireland
[21:43] <jtang> will talk again tomorrow
[21:48] <rturk> ok - night
[21:49] * MikeMcClurg (~mike@cpc10-cmbg15-2-0-cust205.5-4.cable.virginmedia.com) has joined #ceph
[21:52] * johnl (~johnl@2a02:1348:14c:1720:c83c:98a3:cc38:a950) Quit (Remote host closed the connection)
[21:52] * johnl (~johnl@2a02:1348:14c:1720:69e7:15b0:da5:1ec0) has joined #ceph
[21:54] * steki-BLAH (~steki@69.164.220.50) has joined #ceph
[21:55] * sagelap (~sage@soenat3.cse.ucsc.edu) has joined #ceph
[21:55] * sagelap1 (~sage@soenat3.cse.ucsc.edu) Quit (Read error: Connection reset by peer)
[21:56] * sagelap (~sage@soenat3.cse.ucsc.edu) Quit ()
[21:56] * sagelap (~sage@soenat3.cse.ucsc.edu) has joined #ceph
[22:13] <dmick> Thanks yehuda, indeed, to run without cephx one now needs to explicitly specify "none" for auth cluster required, auth service required, and auth supported
[22:13] <dmick> following up for extra doc and/or syntax checking for 'blank'
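
The three settings dmick lists, as they would sit in ceph.conf; placing them in [global] is an assumption about layout:

    [global]
        auth cluster required = none
        auth service required = none
        auth supported = none
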
[22:26] * rweeks (~rweeks@12.25.190.226) has joined #ceph
[22:31] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[22:40] * rweeks (~rweeks@12.25.190.226) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[22:53] * aliguori (~anthony@32.97.110.59) Quit (Quit: Ex-Chat)
[23:01] <scuttlemonkey> Tv_: do you have a few minutes to chat about ceph-deploy stuff at some point here?
[23:01] <scuttlemonkey> having a few difficulties while trying to get a handle on all the moving pieces
[23:01] <Tv_> scuttlemonkey: an interview starts in <15 min
[23:01] * nhmlap (~nhm@38.122.20.226) has joined #ceph
[23:01] <madkiss> Which reminds me.
[23:01] <scuttlemonkey> yah, I'm gonna chat w/ Ross for the next 30-50 mins
[23:01] <madkiss> Can I use cephx with qemu-img somehow?
[23:01] <scuttlemonkey> later today or tomorrow?
[23:02] <scuttlemonkey> (tomorrow is yer last day, right?)
[23:02] <Tv_> scuttlemonkey: tomorrow, then
[23:02] <Tv_> yup
[23:02] <scuttlemonkey> cool, you PDT?
[23:02] <Tv_> yup
[23:02] <scuttlemonkey> right on, poke me when you roll in, I'm EDT so should be around
[23:02] <scuttlemonkey> thanks :)
[23:02] <joshd> madkiss: yes
[23:02] <nhmlap> good afternoon #ceph
[23:03] <madkiss> joshd: what would be the correct syntax for that? :)
[23:03] <joshd> madkiss: it's just like using cephx with qemu normally I think
[23:04] <joshd> madkiss: so you could specify rbd:pool/image:conf=/etc/ceph/ceph.conf, or manually do rbd:pool/image:auth_supported=cephx:key=BASE64key:mon_addr=...
[23:04] <madkiss> ahum
[23:05] <madkiss> I tried
[23:05] <madkiss> qemu-img create -f rbd rbd:libvirt/ubuntu-amd64-alice:id=client.rbd:key=AQB0Q4ZQYDB2MBAAYzWmHvpg7t1MzV1E0jkBww==:auth_supported=cephx 1G
[23:05] <madkiss> and failed terribly ;)
[23:06] <dmick> mon_addr?
[23:06] <madkiss> it does try to connect to 192.168.133.111, which it must have from ceph.conf actually
[23:08] <madkiss> "error while creating rbd: Input/output error!" it tells me.
[23:10] <joshd> hmm, well qemu-img is using the bdrv api directly
[23:10] <joshd> it may be doing things slightly differently than normal qemu
[23:10] * rweeks (~rweeks@12.25.190.226) has joined #ceph
[23:11] <madkiss> I stumbled across this when trying to set up rbd via qemu-rbd as non client.admin
[23:11] <madkiss> (for the hints-and-kinks thing I wrote earlier)
[23:12] <joshd> madkiss: in general I'd suggest using the rbd cli tool rather than qemu-img
[23:12] <joshd> the only thing qemu-img is really needed for is converting from e.g. qcow2 to rbd, but that could be a qcow2-to-raw convert plus an rbd import instead
[23:12] <madkiss> joshd: that's good to know. My first version of the document *was* using rbd instead of qemu-img, but florian — quite correctly — pointed out that when using qemu-img to do the actual image conversion, it'd be nifty to do the img creation itself with qemu-img, too
[23:13] * noob2 (a5a00214@ircip2.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[23:13] <joshd> as more options are added for image creation, it's hard to squeeze them into qemu-img
[23:13] <dmick> yeah, just wondering how to specify format=2 :)
[23:13] <madkiss> I see
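
One hedged guess at why the qemu-img command above failed, plus the rbd-CLI route joshd recommends: qemu's rbd driver expects the id without the "client." prefix, so id=rbd rather than id=client.rbd (this is an assumption; the chat never confirms the root cause):

    # corrected qemu-img attempt (id without the "client." prefix)
    qemu-img create -f rbd "rbd:libvirt/ubuntu-amd64-alice:id=rbd:key=AQB0Q4ZQYDB2MBAAYzWmHvpg7t1MzV1E0jkBww==:auth_supported=cephx" 1G
    # or the same thing with the rbd tool (size is in MB)
    rbd --id rbd create libvirt/ubuntu-amd64-alice --size 1024
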
[23:14] <slang> dmick: can you make that change to vstart.sh that uses ./init-ceph instead of CEPH_BIN/init-ceph?
[23:14] <madkiss> well, it'd be really nifty if rbd had some way to convert raw images into rbd
[23:14] <slang> dmick: since you're going to be working on it...
[23:14] * steki-BLAH (~steki@69.164.220.50) Quit (Quit: Ja odoh a vi sta 'ocete...)
[23:14] <joshd> madkiss: you can just import them directly with 'rbd import'
[23:14] <dmick> I suppose, although I don't know much about it slang
[23:14] * steki-BLAH (~steki@bojanka.net) has joined #ceph
[23:15] <madkiss> joshd: so i could just do "rbd import /dev/drbd/by-res/no-more-drbd libvirt/rbd-please"?
[23:15] <slang> dmick: ok I can do it I just didn't want you to hit a conflict on merge
[23:16] <dmick> yeah, probably better, and I'm good at pull --rebase ;)
[23:16] <joshd> madkiss: yeah, I think that would work, I haven't tried with a block device directly
[23:16] <madkiss> if it works
[23:16] <madkiss> it's cool as fuck
[23:17] <madkiss> i mean, this would actually make raw=>rbd conversion straightforward, and I could extend my howto-entry to cover qcow2-to-rbd conversion, too
[23:18] <joshd> worst case, it needs to be updated to support block devices, but you could cat /dev/drbd/etc | rbd import - libvirt/image
[23:18] <madkiss> this is ubuntu, so argonaut actually
[23:20] * sagelap (~sage@soenat3.cse.ucsc.edu) Quit (Quit: Leaving.)
[23:20] * sagelap (~sage@soenat3.cse.ucsc.edu) has joined #ceph
[23:21] <joshd> yeah, looks like it works from straight block devices
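
Putting the thread together, the two import paths discussed; device and image names are the ones from the chat, and the qcow2 leg is the convert-then-import variant joshd mentions:

    # a raw block device goes straight into an rbd image
    rbd import /dev/drbd/by-res/no-more-drbd libvirt/rbd-please
    # a qcow2 image needs converting to raw first
    qemu-img convert -f qcow2 -O raw vm.qcow2 vm.raw
    rbd import vm.raw libvirt/vm
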
[23:27] * sagelap1 (~sage@soenat3.cse.ucsc.edu) has joined #ceph
[23:27] * sagelap (~sage@soenat3.cse.ucsc.edu) Quit (Read error: Connection reset by peer)
[23:28] * aliguori (~anthony@cpe-70-123-146-246.austin.res.rr.com) has joined #ceph
[23:30] * sagelap (~sage@soenat3.cse.ucsc.edu) has joined #ceph
[23:31] * sagelap1 (~sage@soenat3.cse.ucsc.edu) Quit ()
[23:32] * sagelap1 (~sage@soenat3.cse.ucsc.edu) has joined #ceph
[23:32] * sagelap (~sage@soenat3.cse.ucsc.edu) Quit (Read error: Connection reset by peer)
[23:36] * rweeks (~rweeks@12.25.190.226) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[23:45] * sagelap1 (~sage@soenat3.cse.ucsc.edu) Quit (Read error: Connection reset by peer)
[23:45] * sagelap (~sage@soenat3.cse.ucsc.edu) has joined #ceph
[23:47] * nhmlap (~nhm@38.122.20.226) Quit (Remote host closed the connection)
[23:51] * vata (~vata@208.88.110.46) Quit (Quit: Leaving.)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.