#ceph IRC Log


IRC Log for 2012-10-30

Timestamps are in GMT/BST.

[0:01] * MikeMcClurg (~mike@91.224.174.75) Quit (Ping timeout: 480 seconds)
[0:04] <joao> does anyone know if it's possible for a crush map rule to have more than one 'step take foo'?
[0:06] <joshd> joao: yes
[0:06] * elder (~elder@c-24-118-242-216.hsd1.mn.comcast.net) has joined #ceph
[0:06] <joao> joshd, thanks
[0:07] <joshd> joao: why all the crush questions today?
[0:07] <elder> I'm online again. Ran an Ethernet cable to my neighbor's house...
[0:07] <joao> joshd, I've been fiddling with the crush map for the mon/osd workload generator
[0:08] <joao> and ended up fiddling with it for far too long trying to solve something that bummed me out
[0:09] <joao> namely that 'root' buckets can't be defined after the rules, which makes it impossible to append a test root to a decompiled crush map and recompile it
[0:10] <joao> figured it was due to the grammar, but after fiddling with the grammar too, I just realized that the order the grammar imposes must have to do with rules being allowed to use more than one 'step take foo'
[0:11] <joshd> some day that parser (and probably language) will be a lot cleaner
[0:12] * nwatkins (~Adium@soenat3.cse.ucsc.edu) Quit (Quit: Leaving.)
[0:13] <joao> the good thing I got out of this was learning that boost's spirit kinda rocks :p
[0:18] <joshd> yeah it's pretty nice once you learn the syntax
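For reference, a hedged sketch of a crush map rule that uses more than one 'step take', as joao and joshd discuss above; the rule and bucket names ('ssd', 'platter') are hypothetical and exact step syntax can differ between versions:

    rule mixed_example {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 1 type host    # first replica from the 'ssd' tree
        step emit
        step take platter
        step chooseleaf firstn -1 type host   # remaining replicas from 'platter'
        step emit
    }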
[0:25] * jlogan (~Thunderbi@72.5.59.176) has joined #ceph
[0:26] <dmick> elder: did you lose a router?
[0:26] <elder> No. The next-door neighbor had a fence installed today. One of the posts severed my cable line as it got pounded in.
[0:26] <elder> That same neighbor is the one now supplying my Internet service.
[0:27] * sedwards_ (~sedwards@99-72-217-92.lightspeed.stlsmo.sbcglobal.net) has joined #ceph
[0:29] * jlogan1 (~Thunderbi@2600:c00:3010:1:852f:a2dd:c540:fa16) Quit (Ping timeout: 480 seconds)
[0:33] * sedwards_ (~sedwards@99-72-217-92.lightspeed.stlsmo.sbcglobal.net) Quit (Remote host closed the connection)
[0:34] * tanato (~Adium@c-98-234-186-68.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[0:37] <sagelap1> elder: nice!
[0:37] * sagelap1 is now known as sagelap
[0:38] <elder> I was doing fine without it, frankly, thanks to UML working nicely on the problem I'm working on. But it is good to be able to interact again.
[0:38] <dmick> I'm amazed it didn't happen during the hangout. That would have been par for the course
[0:39] * rweeks (~rweeks@c-98-234-186-68.hsd1.ca.comcast.net) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[0:40] * BManojlovic (~steki@212.200.243.179) Quit (Quit: I'm off, and you do what you want...)
[0:40] <elder> I think it was just shortly afterward, but yes, that would have been fitting.
[0:58] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) has joined #ceph
[1:02] * tziOm (~bjornar@ti0099a340-dhcp0778.bb.online.no) Quit (Remote host closed the connection)
[1:02] * andreask (~andreas@93-189-29-152.rev.ipax.at) Quit (Quit: Leaving.)
[1:03] <buck> Teuthology question. I've specified a path for the client to pull code from but it still seems to be pulling the workunits from HEAD. Is that expected?
[1:03] <buck> I see this in the output: INFO:teuthology.task.workunit:Pulling workunits from ref HEAD
[1:03] <buck> but my yaml looks like this (snippet):
[1:03] <buck> - workunit:
[1:03] <buck> clients:
[1:03] <buck> client.0: [libcephfs-java]
[1:03] <buck> path: /home/buck/git/ceph
[1:04] <joshd> buck: yeah, only the ceph task knows how to get stuff from a local path. the workunit task can only change which github branch/tag/sha1 it uses
[1:05] <buck> okay. I can deal with that. Just wanted to make sure I wasn't missing an obvious fix. Thanks josh.
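Since IRC strips leading whitespace, buck's snippet above presumably looked something like the following (indentation restored as an assumption):

    - workunit:
        clients:
          client.0: [libcephfs-java]
        path: /home/buck/git/ceph

Per joshd's answer, the workunit task ignores a local path and only honors a github branch/tag/sha1 key; 'path' is only meaningful for the ceph task itself.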
[1:18] * sagelap (~sage@38.122.20.226) Quit (Ping timeout: 480 seconds)
[1:20] * sagelap (~sage@45.sub-70-197-151.myvzw.com) has joined #ceph
[1:28] * sagelap (~sage@45.sub-70-197-151.myvzw.com) Quit (Ping timeout: 480 seconds)
[1:32] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[1:35] * loicd (~loic@magenta.dachary.org) has joined #ceph
[1:40] * buck (~buck@bender.soe.ucsc.edu) has left #ceph
[1:41] * sjusthm (~sam@68-119-138-53.dhcp.ahvl.nc.charter.com) has joined #ceph
[1:42] * sagelap (~sage@143.sub-70-197-144.myvzw.com) has joined #ceph
[1:44] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[2:00] * sagelap1 (~sage@235.sub-70-197-141.myvzw.com) has joined #ceph
[2:04] * LarsFronius (~LarsFroni@95-91-242-169-dynip.superkabel.de) Quit (Quit: LarsFronius)
[2:05] * sagelap (~sage@143.sub-70-197-144.myvzw.com) Quit (Ping timeout: 480 seconds)
[2:06] * LarsFronius (~LarsFroni@2a02:8108:3c0:79:f83c:f550:958c:d32f) has joined #ceph
[2:07] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Quit: Leseb)
[2:13] * yeming (~user@180.168.36.70) has joined #ceph
[2:16] * LarsFronius (~LarsFroni@2a02:8108:3c0:79:f83c:f550:958c:d32f) Quit (Quit: LarsFronius)
[2:19] * sagelap1 (~sage@235.sub-70-197-141.myvzw.com) Quit (Read error: Connection reset by peer)
[2:20] * dmick (~dmick@2607:f298:a:607:cdea:4965:cd42:6c7) Quit (Quit: Leaving.)
[2:29] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) Quit (Ping timeout: 480 seconds)
[2:31] * jlogan (~Thunderbi@72.5.59.176) Quit (Ping timeout: 480 seconds)
[2:32] * sagelap (~sage@235.sub-70-197-141.myvzw.com) has joined #ceph
[2:53] * maxim (~pfliu@202.108.130.138) has joined #ceph
[2:59] * maxim (~pfliu@202.108.130.138) Quit (Quit: Ex-Chat)
[3:03] * maxim (~pfliu@202.108.130.138) has joined #ceph
[3:07] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[3:08] * miroslav (~miroslav@c-98-234-186-68.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:08] * sagelap (~sage@235.sub-70-197-141.myvzw.com) Quit (Ping timeout: 480 seconds)
[3:10] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[3:12] * loicd (~loic@magenta.dachary.org) has joined #ceph
[3:34] * adjohn (~adjohn@69.170.166.146) Quit (Quit: adjohn)
[4:01] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[4:03] * loicd (~loic@magenta.dachary.org) has joined #ceph
[4:23] * maxim (~pfliu@202.108.130.138) Quit (Remote host closed the connection)
[4:26] * maxim (~pfliu@202.108.130.138) has joined #ceph
[4:33] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[4:34] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[4:47] * gohko (~gohko@natter.interq.or.jp) Quit (Read error: Connection reset by peer)
[4:49] * gohko (~gohko@natter.interq.or.jp) has joined #ceph
[4:52] * nazarianin|2 (~kvirc@mg01.apsenergia.ru) Quit (Quit: KVIrc 4.0.4 Insomnia http://www.kvirc.net/)
[5:09] * Q310 (~qgrasso@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[5:13] * yoshi (~yoshi@p37219-ipngn1701marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[5:33] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) has joined #ceph
[5:44] * chutzpah (~chutz@199.21.234.7) Quit (Quit: Leaving)
[5:44] * ceph (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) has joined #ceph
[5:45] * ceph (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) has left #ceph
[5:46] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) has joined #ceph
[5:54] * synapsr1 (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) has joined #ceph
[5:56] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[5:56] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[5:58] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:59] * sagelap (~sage@76.89.177.113) has joined #ceph
[6:35] * sagelap (~sage@76.89.177.113) Quit (Ping timeout: 480 seconds)
[6:36] * synapsr1 (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[6:36] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) has joined #ceph
[7:01] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[7:03] * loicd (~loic@magenta.dachary.org) has joined #ceph
[7:32] * gaveen (~gaveen@175.157.59.93) has joined #ceph
[7:59] * pixel (~pixel@81.195.203.34) has joined #ceph
[8:05] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) Quit (Quit: Leaving.)
[8:08] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[8:08] * sjusthm (~sam@68-119-138-53.dhcp.ahvl.nc.charter.com) Quit (Ping timeout: 480 seconds)
[8:10] * loicd (~loic@magenta.dachary.org) has joined #ceph
[8:12] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[8:18] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) Quit (Quit: Leaving.)
[8:24] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[8:24] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[8:55] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[8:57] * loicd (~loic@magenta.dachary.org) has joined #ceph
[9:11] * scuttlemonkey_ (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[9:15] * MikeMcClurg (~mike@91.224.175.20) has joined #ceph
[9:15] * MikeMcClurg (~mike@91.224.175.20) Quit ()
[9:16] * MikeMcClurg (~mike@91.224.175.20) has joined #ceph
[9:18] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (Ping timeout: 480 seconds)
[9:22] * gregorg (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[9:22] * gregorg (~Greg@78.155.152.6) has joined #ceph
[9:32] * Leseb (~Leseb@193.172.124.196) has joined #ceph
[9:40] * gaveen (~gaveen@175.157.59.93) Quit (Quit: Leaving)
[9:43] * morse (~morse@supercomputing.univpm.it) Quit (Read error: Connection reset by peer)
[9:44] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[10:03] * mib_k967go (ca037809@ircip1.mibbit.com) has joined #ceph
[10:03] <mib_k967go> Hi Folks .
[10:03] <mib_k967go> can anyone guide me about how to build ceph from source ?
[10:04] <mib_k967go> i followed some instructions given on the official site
[10:04] <mib_k967go> but having some issues to move further
[10:04] <iltisanni> That was my problem also... didn't understand some of the points on the website
[10:04] <mib_k967go> :(
[10:05] <iltisanni> now I have my cluster with health ok
[10:05] * maxim (~pfliu@202.108.130.138) Quit (Read error: Connection reset by peer)
[10:05] <mib_k967go> oww.. good
[10:05] <mib_k967go> can you please guide me
[10:05] <mib_k967go> ?
[10:05] <iltisanni> 3 osds, 3 monitors and 1 mds
[10:05] <iltisanni> i did the following things:
[10:06] <iltisanni> I installed ubuntu server on 3 VMs -> Then I installed the package ceph
[10:06] <iltisanni> sudo apt-get install ceph
[10:07] <iltisanni> after that I had to install ceph mds
[10:07] <iltisanni> because it was not in the package
[10:07] <mib_k967go> sorry I conveyed wrong info
[10:07] <mib_k967go> I need to build it from source code
[10:07] <iltisanni> ahhhhh ok
[10:07] <iltisanni> sorry then
[10:07] <iltisanni> cant help you then
[10:07] <iltisanni> :-(
[10:07] <iltisanni> i thought from bottom up
[10:08] <mib_k967go> because I also have the same cluster as yours from a direct setup
[10:08] <mib_k967go> but right now I was trying to have a stable setup
[10:08] <mib_k967go> so I needed to build from the tarball
[10:08] <mib_k967go> anyway, most of the developers are in a different time zone
[10:08] <mib_k967go> so I need to contact the developer community only
[10:09] * MikeMcClurg (~mike@91.224.175.20) Quit (Read error: No route to host)
[10:10] <iltisanni> y.. kk
[10:11] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[10:12] * jlogan (~Thunderbi@2600:c00:3010:1:79cf:65b4:570d:7cdd) has joined #ceph
[10:15] * MikeMcClurg (~mike@91.224.175.20) has joined #ceph
[10:23] * Cube1 (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[10:33] <todin> morning
[10:36] <mib_k967go> Can someone please guide me about how to build ceph from source code
[10:36] <mib_k967go> ?
[10:37] <mib_k967go> i followed instructions from official site
[10:37] <mib_k967go> but unable to start the ceph service
[10:38] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[10:42] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[10:46] * tziOm (~bjornar@194.19.106.242) has joined #ceph
[10:48] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) Quit (Quit: tryggvil)
[10:56] * yoshi (~yoshi@p37219-ipngn1701marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[10:58] <iltisanni> I still have one silly question.. I just didn't get it. Sorry. I installed the ceph cluster. ceph health is OK. I have 3 VMs = 3 OSDs, 3 monitors, 1 mds. How can I test the function of the cluster now?? what should I do next to test the cluster. any idea?
[10:58] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[10:59] * LarsFronius_ (~LarsFroni@p578b21b6.dip0.t-ipconnect.de) has joined #ceph
[11:01] <mib_k967go> use rados cmds to put
[11:01] <mib_k967go> data
[11:01] <mib_k967go> get that data
[11:02] <mib_k967go> if you want specific cmds , u can ask
[11:03] <iltisanni> ok. where can I find information about that.. some commands
[11:03] <mib_k967go> 1 sec
[11:04] <mib_k967go> http://ceph.com/docs/master/man/8/rados/?highlight=lspools
[11:05] * LarsFronius__ (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[11:05] <iltisanni> OK and with those rados commands I can write data or get data and so on
[11:06] <mib_k967go> yeah
[11:06] <mib_k967go> http://ceph.com/docs/master/
[11:06] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Ping timeout: 480 seconds)
[11:06] * LarsFronius__ is now known as LarsFronius
[11:06] <iltisanni> all right.. but those commands can only be used by VMs which are in the cluster right?
[11:06] <mib_k967go> the diagram in the right frame will tell you what exactly you're using
[11:07] <mib_k967go> yeah, as I don't think you have a separate app to access ceph fs
[11:07] <mib_k967go> so use rados directly
[11:07] <mib_k967go> :)
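A minimal sketch of the kind of rados commands iltisanni could run from one of the cluster VMs to exercise the object store (pool and object names here are just examples):

    rados lspools                         # list existing pools
    rados mkpool foo                      # create a test pool
    rados -p foo put myobject blah.txt    # store a local file as an object
    rados -p foo ls                       # list objects in the pool
    rados -p foo get myobject /tmp/out    # read the object back
    rados -p foo rm myobject              # delete it again
    rados df                              # per-pool usage statistics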
[11:08] * loicd (~loic@83.167.43.235) has joined #ceph
[11:09] <iltisanni> ok. so lets say I have 5 VMs. 3 have ceph installed and function as osds and mons the other 2 VMs are naked (only ceph package is installed, but not in cluster yet). Now I can use some rados commands from one osd VM but not from the naked one.
[11:11] * LarsFronius_ (~LarsFroni@p578b21b6.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[11:12] <iltisanni> so writing data and getting data directly in the ceph cluster itself can only be done by VMs that are in the cluster themselves. If I want to mount data to other clients around I have to use cephfs and use the mount -t ceph command? Is that right?
[11:13] <iltisanni> So I could configure the other 2 naked vms as cephfs clients to test this feature and use the other 3 ceph VMs to manage the cluster and write/get data directly in the cluster
[11:14] <iltisanni> :-) do you understand my question?
[11:16] * match (~mrichar1@pcw3047.see.ed.ac.uk) has joined #ceph
[11:23] <mib_k967go> ummm
[11:23] <mib_k967go> nope
[11:23] <mib_k967go> :)
[11:23] <iltisanni> well I can use ceph without ceph fs can't I ?
[11:23] <mib_k967go> yeah, you can directly use the ceph storage without a client app
[11:24] <mib_k967go> with the terminal you can directly access the storage layer
[11:24] <iltisanni> y but then I have to use one of the cluster VMs
[11:24] <mib_k967go> yeah .
[11:24] <iltisanni> so.. the rados commands
[11:24] <mib_k967go> :)
[11:25] <iltisanni> like rados -p foo put myobject blah.txt
[11:25] <iltisanni> can be used then
[11:25] <mib_k967go> hmm
[11:25] <iltisanni> but to use cephfs I need an additional VM as client where I install ceph
[11:25] <iltisanni> and then use the mount method ?
[11:26] <iltisanni> or am I just confused
[11:26] <iltisanni> ?
[11:26] <iltisanni> :-)
[11:26] <mib_k967go> cephfs can be used even on 1 machine as well
[11:26] <iltisanni> I find this ceph cluster without cephfs and with cephfs and so on really hard to understand.. sorry :-)
[11:27] <mib_k967go> it's like a software layer, right
[11:27] <mib_k967go> see
[11:27] <mib_k967go> the storage layer is the lowest thing
[11:27] <mib_k967go> right?
[11:27] <iltisanni> y
[11:28] <mib_k967go> logically storage is at the lowest layer, as low-level data placement is carried out at the storage layer
[11:28] <iltisanni> ok
[11:29] <mib_k967go> then on top of that we have the storage access layer .. where we have handlers to put/get data and other APIs
[11:29] <mib_k967go> on top of that is the application layer, which utilizes the storage services
[11:29] <iltisanni> y ok
[11:30] <mib_k967go> so at the storage access layer we can access the storage directly, bypassing the application layer
[11:30] <mib_k967go> got it?
[11:30] <iltisanni> y
[11:31] <iltisanni> that makes sense
[11:32] <mib_k967go> so you can use rados to test the cluster
[11:33] <mib_k967go> whether it's storing and retrieving the data accurately... sure it's going to do that, but experiment yourself
[11:33] <mib_k967go> :)
[11:33] <iltisanni> ok
[11:33] <iltisanni> and when I want to test that from a client
[11:33] <iltisanni> I have to go through application Layer
[11:33] <mib_k967go> yep.. write an application and all
[11:34] <iltisanni> hm.. ok how can I get a client to do so
[11:34] <mib_k967go> didn't get you..
[11:35] * stxShadow (~jens@p4FD06E95.dip.t-dialin.net) has joined #ceph
[11:35] <stxShadow> Hi all
[11:35] <iltisanni> The Client has to go through the application Layer... So I have to tell the client how to do this right?
[11:36] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) Quit (Quit: Leaving.)
[11:36] <iltisanni> unlike the rados commands on the cluster nodes themselves. Because they can access the storage directly
[11:37] <mib_k967go> then you need to learn the API thing
[11:37] <mib_k967go> there the developers show short code snippets to do your task
[11:38] <iltisanni> ai caramba
[11:38] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[11:38] <iltisanni> so client server connection isn't that easy?
[11:38] <iltisanni> client - cluster
[11:39] <mib_k967go> http://ceph.com/docs/master/api/librados/
[11:40] <mib_k967go> Can someone please guide me about how to build ceph from source code ?
[11:40] <mib_k967go> i followed instructions from official site
[11:40] <mib_k967go> but unable to start the ceph service
[11:41] <iltisanni> ok, doesn't look that simple. But maybe I'll try this.. So when I need this api thing for writing and getting data from/to the cluster storage, what is the cephfs for?
[11:41] * stingray (~stingray@stingr.net) has joined #ceph
[11:41] <iltisanni> and Thank you mib so far.. You really helped me understand some things
[11:41] <stingray> good morning everyone
[11:41] <mib_k967go> The Ceph FS file system is a POSIX-compliant file system that uses a RADOS cluster to store its data. Ceph FS uses the same RADOS object storage device system as RADOS block devices and RADOS object stores such as the RADOS gateway with its S3 and Swift APIs, or native bindings. Using Ceph FS requires at least one metadata server in your ceph.conf configuration file.
[11:41] <mib_k967go> Good morning Sir
[11:42] <mib_k967go> http://ceph.com/docs/master/cephfs/
[11:42] <stingray> maybe you can help me and tell me the magic command which will unstick a placement group from active+remapped
[11:43] <mib_k967go> that's a hi-fi question for me ..... I'm a beginner
[11:43] <mib_k967go> :(
[11:43] <stingray> :(
[11:45] <iltisanni> hmmm.. well OK... thank you very much mib_k967go. Im gonna do some Rados commands now to test the cluster. After that I try to understand these cephfs things...
[11:45] <iltisanni> :-)
[11:45] <mib_k967go> :)
[11:48] * verwilst (~verwilst@91.183.54.28) has joined #ceph
[11:51] * loicd (~loic@83.167.43.235) Quit (Ping timeout: 480 seconds)
[12:00] * LarsFronius_ (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[12:00] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Read error: Connection reset by peer)
[12:00] * LarsFronius_ is now known as LarsFronius
[12:15] * jlogan (~Thunderbi@2600:c00:3010:1:79cf:65b4:570d:7cdd) Quit (Ping timeout: 480 seconds)
[12:24] * LarsFronius_ (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[12:30] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Ping timeout: 480 seconds)
[12:30] * LarsFronius_ is now known as LarsFronius
[12:33] * tziOm (~bjornar@194.19.106.242) Quit (Ping timeout: 480 seconds)
[12:41] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Quit: LarsFronius)
[12:42] * tziOm (~bjornar@194.19.106.242) has joined #ceph
[12:44] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[12:55] <stxShadow> is there any possibility to tell which one of three monitors is the active monitor ?
[12:56] <stxShadow> we have mon0 to mon2
[12:56] <stxShadow> mon1 should be the active mon .... quorum election always elects mon0 ..... but mon1 is better on the hardware side
[12:58] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Quit: LarsFronius)
[13:01] <joao> stxShadow, all monitors in the quorum are always 'active' in the sense that they always answer to the client (although they may forward requests to the leader)
[13:01] <joao> you do however have one leader and the remaining monitors will be the peons, if that's what you mean by 'active'
[13:02] <iltisanni> how can I check what mon is leader?
[13:02] <joao> in that case, I think the ranks are attributed to each monitor on a 'lowest ip:port' basis
[13:02] <joao> iltisanni, I think the quorum status will tell you that
[13:03] <joao> let me check in a sec
[13:03] <stxShadow> 2012-10-30 13:03:21.639943 mon.0 [INF] pgmap v7455639: 10440 pgs: 10440 active+clean; 3776 GB data, 7428 GB used, 14908 GB / 22336 GB avail
[13:03] <stxShadow> -> ceph -w
[13:03] <stxShadow> mon.0 is active
[13:04] <joao> ?
[13:04] <joao> mon.0 is reporting that you have 10440 pgs active and clean
[13:04] <stxShadow> yes .... all messages in "ceph -w" output come from mon.0
[13:04] <stxShadow> so i think he is the leader
[13:05] <stxShadow> -> not active ... sorry :)
[13:05] <joao> yes, I think the leader will be the one reporting that to the ceph tool
[13:05] <joao> not sure, but it's a fair assumption
[13:06] <stxShadow> i want mon.1 to be the leader but cannot find a solution for that
[13:06] <stxShadow> maybe i have to change the hardware
[13:06] <joao> the election is based on ranks, which are based on the lowest ip:port combination
[13:06] <stxShadow> i hoped that there was another way
[13:07] * MikeMcClurg (~mike@91.224.175.20) Quit (Ping timeout: 480 seconds)
[13:07] <joao> stxShadow, if you check 'ceph quorum_status' you will see what I mean
[13:07] <stxShadow> hmmm .... then I should renumber mon0
[13:07] <joao> the leader is usually the lowest rank available under 'mons' on the 'monmap'
[13:08] <stxShadow> yes ... thats mon0 on my side
[13:08] <joao> alright
[13:08] <stxShadow> thanks a lot
[13:09] <joao> np
[13:09] <joao> let us know if things go bad while trying to make mon.1 the leader ;)
[13:09] <iltisanni> y thx :-) I learned while reading
[13:10] <stxShadow> hehe .... will test it in our lab first .... the main cluster is in production
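A quick way to check which monitor is the leader, based on the command joao mentions above (the exact JSON field names vary between versions):

    ceph quorum_status
    # inspect the "quorum" ranks and the monitor list under "monmap";
    # as joao notes, the leader is normally the monitor with the lowest
    # rank, i.e. the lowest ip:port combination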
[13:13] * gregorg (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[13:13] * gregorg (~Greg@78.155.152.6) has joined #ceph
[13:15] * scuttlemonkey_ (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (Quit: This computer has gone to sleep)
[13:16] * scuttlemonkey_ (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[13:25] * long (~chatzilla@118.195.65.95) has joined #ceph
[13:31] <match> iltisanni: cephfs is a mountable fs on top of a ceph object store (via the mds daemon). It might be what you want, but it's not fully stable for production yet
[13:32] <match> (oops - reading yesterdays posts - ignore me)
[13:32] <iltisanni> no its ok. Thank You
[13:32] <stxShadow> agree with that ... it's not stable ;)
[13:32] <iltisanni> the question was not answered. thx
[13:33] <stxShadow> we tried to use it but had to go back to nfs
[13:33] <long> hi,all
[13:33] <iltisanni> good to know
[13:33] <stxShadow> cause sometimes the mount "hangs"
[13:33] <iltisanni> thats bad
[13:33] <long> i want to ask about some C-language APIs
[13:33] <long> who can answer me
[13:34] * LarsFronius (~LarsFroni@ip-109-47-0-85.web.vodafone.de) has joined #ceph
[13:35] * Fruit has a mysterious hang during directory traversal of a ceph fs (on a toy system) as well :)
[13:36] <stxShadow> i hope the devs will focus on cephfs in the near future
[13:37] <tziOm> they will
[13:37] * andreask (~andreas@ATuileries-153-1-62-234.w83-202.abo.wanadoo.fr) has joined #ceph
[13:38] * andreask (~andreas@ATuileries-153-1-62-234.w83-202.abo.wanadoo.fr) Quit ()
[13:38] <long> ok, what is the difference between rados_cluster_stat and rados_ioctx_pool_stat?
[13:39] <long> which is the right way to get pool stats: used, available, total?
[13:40] <long> libvirt uses the 2 functions to get stats, but it seems wrong, because allocation has exceeded capacity?
[13:41] <stxShadow> sorry ... im not able to answer any of your questions ...... you have to wait for the devs
[13:41] <match> long: I use 'rados df <poolname>'
[13:41] <match> long: sorry - missed the 'api' comment there :)
[13:42] <long> rados is right, but for libvirt, i think they misunderstand the 2 functions
[13:45] <match> long: not entirely sure, but the docs imply that 1 is for the whole cluster, while 2 is for a specific pool in that cluster
[13:46] <match> long: pool size is (I think) a 'quota' rather than any hard limit, so it might be possible for a pool to be larger than its size, as long as it's still smaller than the cluster total?
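A minimal C sketch of the two librados calls long asks about, showing their different scope (whole cluster vs one pool); it assumes a readable default ceph.conf and a pool named "rbd", and error handling is abbreviated. Build with something like: cc stat_example.c -lrados

    #include <stdio.h>
    #include <rados/librados.h>

    int main(void)
    {
        rados_t cluster;
        rados_ioctx_t io;
        struct rados_cluster_stat_t cs;   /* whole-cluster usage   */
        struct rados_pool_stat_t ps;      /* usage of a single pool */

        rados_create(&cluster, NULL);         /* connect as client.admin */
        rados_conf_read_file(cluster, NULL);  /* default ceph.conf       */
        rados_connect(cluster);

        rados_cluster_stat(cluster, &cs);
        printf("cluster: %llu KB used of %llu KB\n",
               (unsigned long long)cs.kb_used, (unsigned long long)cs.kb);

        rados_ioctx_create(cluster, "rbd", &io);   /* pool name is an example */
        rados_ioctx_pool_stat(io, &ps);
        printf("pool rbd: %llu bytes in %llu objects\n",
               (unsigned long long)ps.num_bytes,
               (unsigned long long)ps.num_objects);

        rados_ioctx_destroy(io);
        rados_shutdown(cluster);
        return 0;
    }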
[13:46] * loicd (~loic@magenta.dachary.org) has joined #ceph
[13:46] * pixel (~pixel@81.195.203.34) Quit (Quit: I'm leaving you (xchat 2.4.5 or higher))
[13:48] <long> does it distinguish a normal pool from an rbd pool?
[13:48] <long> it computes the rbd pool's stats
[13:48] <long> >> #virsh pool-info 2361a6d4-0edc-3534-87ae-e7ee09199921
[13:48] <long> >> Name: 2361a6d4-0edc-3534-87ae-e7ee09199921
[13:48] <long> >> UUID: 2361a6d4-0edc-3534-87ae-e7ee09199921
[13:49] <long> >> State: running
[13:49] <long> >> Persistent: yes
[13:49] <long> >> Autostart: no
[13:49] <long> >> Capacity: 285.57 GiB
[13:49] <long> >> Allocation: 489.89 GiB
[13:49] <long> >> Available: 230.59 GiB
[13:49] <long> used space is about 50G,
[13:50] <match> long: Ahh - I think what that might be saying is that you've overcommitted guest disk size. The pool is created sparse, so space is only used when data is written to the guest. If every guest disk was full, it would fail
[13:50] <long> it's absolutely not 489G
[13:55] <long> would you help to confirm it?
[13:56] <long> i can not subscribe to the ceph mailing list through a public mail address (not company mail)
[13:58] * mib_k967go (ca037809@ircip1.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[13:59] * Fruit (wsl@2001:980:3300:2:216:3eff:fe10:122b) has left #ceph
[14:09] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[14:09] * deepsa_ (~deepsa@115.184.42.220) has joined #ceph
[14:10] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) has joined #ceph
[14:11] * deepsa (~deepsa@122.172.7.249) Quit (Ping timeout: 480 seconds)
[14:11] * deepsa_ is now known as deepsa
[14:24] * MikeMcClurg (~mike@91.224.175.20) has joined #ceph
[14:29] * LarsFronius (~LarsFroni@ip-109-47-0-85.web.vodafone.de) Quit (Quit: LarsFronius)
[14:32] <iltisanni> I just wrote two objects (rados -p foo put myobject blah.txt) and deleted one of them 1 min later. rados df still says that there are 2 objects but rados -p foo ls only shows the other object (which should be right)... is this a bug or am I doing something wrong?
[14:33] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[14:36] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[14:37] * loicd (~loic@magenta.dachary.org) has joined #ceph
[14:38] * MikeMcClurg (~mike@91.224.175.20) Quit (Ping timeout: 480 seconds)
[14:40] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[14:46] <iltisanni> I found some great text about ceph. good to understand. For noobs like me ;-) http://www.anchor.com.au/blog/2012/09/a-crash-course-in-ceph/
[14:49] * tryggvil_ (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[14:56] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Ping timeout: 480 seconds)
[14:56] * tryggvil_ is now known as tryggvil
[14:59] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) has joined #ceph
[15:00] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[15:00] * deepsa (~deepsa@115.184.42.220) Quit (Ping timeout: 480 seconds)
[15:00] * PerlStalker (~PerlStalk@perlstalker-1-pt.tunnel.tserv8.dal1.ipv6.he.net) has joined #ceph
[15:00] * deepsa (~deepsa@122.172.7.249) has joined #ceph
[15:02] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit ()
[15:06] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Remote host closed the connection)
[15:06] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[15:14] <iltisanni> Is there any chance to write data to the ceph cluster from a client without using cephfs?
[15:20] <PerlStalker> iltisanni: You could use rbd
[15:20] * gucki (~smuxi@80-218-125-247.dclient.hispeed.ch) has joined #ceph
[15:21] <gucki> hi there
[15:21] <jmlowe> iltisanni: rados
[15:23] <gucki> i have 4 nodes, each having 3 disks. i run an osd for each disk, so 4*3 osds. how can i ensure that ceph will not use osds on the same machine for replication? so that when i have a pool with target 3, the data is on 3 different machines (and not only on 3 different osds on one machine, for example)
[15:25] <iltisanni> PerlStalker: OK.. rbd sounds good. What should I do to get it work. I have 3 VMs running osd and mon daemons and 2 naked VMs (that will be the clients later on). Just install ceph on the naked VMs and then use the rbd commands given here: http://ceph.com/docs/master/rbd/rados-rbd-cmds/ ???
[15:25] <jmlowe> gucki: check out crushmap
[15:25] <jmlowe> gucki: it's the secret sauce that makes ceph so good
[15:26] <PerlStalker> iltisanni: You could do that. Which hypervisor are you using?
[15:26] <jmlowe> http://ceph.com/docs/master/cluster-ops/crush-map/
[15:26] <gucki> jmlowe: yeah i already read about the crushmap but it didn't seem that easy and i thought it might be able to let ceph do the magic in the background ;)
[15:27] <iltisanni> PerlStalker: hypervisor? sorry I dont know what it is
[15:27] <PerlStalker> iltisanni: vmware, xen, kvm?
[15:27] <iltisanni> ah vmware
[15:28] <jmlowe> gucki: I think when you create the ceph fs it will do the right thing, group the osd's into hosts and put all hosts into a rack, the default rules say keep two copies and put them on different hosts
[15:28] <iltisanni> but cephfs is buggy i heard ?
[15:28] <PerlStalker> iltisanni: In that case, using rbd from within the client is probably your best bet. There are docs on ceph.com that tell you how to configure linux to mount a remote rbd.
[15:28] <jmlowe> gucki: that only works if everything was defined in your ceph.conf of course
[15:29] <gucki> jmlowe: ok, thanks...i'll play around a little with it :)
[15:29] <zynzel> gucki: step choose firstn {num} type {bucket-type} where bucket-type = host, rack, row...
[15:30] <iltisanni> PerlStalker: Thx for that.I will try to get rbd working
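A rough sketch of using rbd from a client VM, following PerlStalker's suggestion; it assumes the client has a working ceph.conf and keyring and a kernel with the rbd module, the image name is an example, and command details vary by version:

    rbd create myimage --size 1024   # 1 GB image in the default 'rbd' pool
    rbd map myimage                  # map on the client, e.g. as /dev/rbd0
    mkfs.ext4 /dev/rbd0
    mount /dev/rbd0 /mnt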
[15:30] <gucki> jmlowe: one more question which i couldn't find an answer for... when i change the crushmap of a running cluster, will ceph handle this gracefully and rebalance or will it crash?
[15:30] <zynzel> the default bucket-type in ubuntu is set to host.
[15:31] <jmlowe> gucki: it should rebalance, looks very similar to a failure, I found a bug when I set the crush map to an impossible placement but I think that has been fixed
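A hedged sketch of the kind of rule zynzel and jmlowe are describing, which places replicas on distinct hosts rather than on individual osds of the same machine (syntax as seen in a decompiled crush map; details can differ between versions):

    rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host   # one leaf (osd) per distinct host
        step emit
    }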
[15:38] * MikeMcClurg (~mike@91.224.175.20) has joined #ceph
[15:55] * tnt (~tnt@212-166-48-236.win.be) has joined #ceph
[15:55] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[15:56] <tnt> Hi. I was wondering if using osd weight to 'correct' an imbalance in disk size is an acceptable use ?
[15:59] <match> long: Sorry - afk there. I think if you look at your libvirt guest config(s), you'll see they've been allocated 489GB (Allocation) within the pool, even though the pool size is only 285 (Capacity). The guest(s) are using 50GB, so that's why you see 235 free (Available)
[16:13] * glowell1 (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[16:13] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Remote host closed the connection)
[16:15] * tziOm (~bjornar@194.19.106.242) Quit (Remote host closed the connection)
[16:27] * scheuk (~scheuk@67.110.32.249.ptr.us.xo.net) has left #ceph
[16:27] * sagelap (~sage@240.sub-70-197-141.myvzw.com) has joined #ceph
[16:31] * MikeMcClurg (~mike@91.224.175.20) Quit (Ping timeout: 480 seconds)
[16:34] * jlogan1 (~Thunderbi@2600:c00:3010:1:79cf:65b4:570d:7cdd) has joined #ceph
[16:36] * MikeMcClurg (~mike@91.224.174.71) has joined #ceph
[16:43] * long (~chatzilla@118.195.65.95) Quit (Quit: ChatZilla 0.9.89 [Firefox 16.0.2/20121024073032])
[16:50] * MikeMcClurg (~mike@91.224.174.71) Quit (Ping timeout: 480 seconds)
[16:54] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Remote host closed the connection)
[16:57] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[16:58] * calebamiles (~caleb@c-24-128-194-192.hsd1.vt.comcast.net) has joined #ceph
[16:59] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[17:01] * verwilst (~verwilst@91.183.54.28) Quit (Quit: Ex-Chat)
[17:06] * aliguori (~anthony@cpe-70-123-145-75.austin.res.rr.com) Quit (Remote host closed the connection)
[17:07] * BManojlovic (~steki@91.195.39.5) Quit (Quit: I'm off, and you do what you want...)
[17:10] * mknux (~mknux@mercator.ceat.univ-poitiers.fr) has joined #ceph
[17:11] * vata (~vata@208.88.110.46) has joined #ceph
[17:11] * sagelap1 (~sage@2607:f298:a:607:9def:cff5:2f8c:2076) has joined #ceph
[17:12] * MikeMcClurg (~mike@91.224.174.71) has joined #ceph
[17:13] * sagelap (~sage@240.sub-70-197-141.myvzw.com) Quit (Ping timeout: 480 seconds)
[17:25] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[17:27] * loicd (~loic@magenta.dachary.org) has joined #ceph
[17:29] * tziOm (~bjornar@ti0099a340-dhcp0778.bb.online.no) has joined #ceph
[17:29] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[17:35] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Quit: LarsFronius)
[17:35] * stxShadow (~jens@p4FD06E95.dip.t-dialin.net) Quit (Remote host closed the connection)
[17:38] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[17:42] * mknux (~mknux@mercator.ceat.univ-poitiers.fr) Quit (Remote host closed the connection)
[17:43] <tnt> Is it possible to set the max_osd in the osdmap 'offline' ? (i.e. without the cluster running yet)
[17:44] * senner (~Wildcard@68-113-232-90.dhcp.stpt.wi.charter.com) has joined #ceph
[17:45] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[17:46] * senner (~Wildcard@68-113-232-90.dhcp.stpt.wi.charter.com) has left #ceph
[17:46] * aliguori (~anthony@32.97.110.59) has joined #ceph
[17:50] * MissDee (~dee@jane.earlsoft.co.uk) has left #ceph
[17:53] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[17:55] * loicd (~loic@magenta.dachary.org) has joined #ceph
[18:00] * drokita (~drokita@199.255.228.10) has joined #ceph
[18:01] * tnt (~tnt@212-166-48-236.win.be) Quit (Ping timeout: 480 seconds)
[18:02] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has left #ceph
[18:03] * Cube1 (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[18:05] * chutzpah (~chutz@199.21.234.7) has joined #ceph
[18:07] * senner (~Wildcard@68-113-232-90.dhcp.stpt.wi.charter.com) has joined #ceph
[18:08] * rweeks (~rweeks@c-98-234-186-68.hsd1.ca.comcast.net) has joined #ceph
[18:10] * glowell1 (~glowell@c-98-234-186-68.hsd1.ca.comcast.net) has joined #ceph
[18:12] * tryggvil_ (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[18:12] * tnt (~tnt@20.35-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[18:18] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Ping timeout: 480 seconds)
[18:20] * tryggvil_ (~tryggvil@rtr1.tolvusky.sip.is) Quit (Ping timeout: 480 seconds)
[18:29] * synapsr (~Adium@c-98-234-186-68.hsd1.ca.comcast.net) has joined #ceph
[18:31] * BManojlovic (~steki@212.200.241.133) has joined #ceph
[18:36] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[18:37] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) has joined #ceph
[18:39] * miroslav (~miroslav@c-98-234-186-68.hsd1.ca.comcast.net) has joined #ceph
[18:40] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[18:42] * loicd (~loic@magenta.dachary.org) has joined #ceph
[18:48] * adjohn (~adjohn@69.170.166.146) has joined #ceph
[18:51] * dmick (~dmick@2607:f298:a:607:2c2d:7a5b:d40b:e703) has joined #ceph
[18:52] * ChanServ sets mode +o dmick
[18:57] * synapsr (~Adium@c-98-234-186-68.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[18:57] * glowell1 (~glowell@c-98-234-186-68.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[18:58] * synapsr (~Adium@c-98-234-186-68.hsd1.ca.comcast.net) has joined #ceph
[18:58] * miroslav1 (~miroslav@c-98-234-186-68.hsd1.ca.comcast.net) has joined #ceph
[18:58] * glowell1 (~glowell@c-98-234-186-68.hsd1.ca.comcast.net) has joined #ceph
[18:58] * miroslav (~miroslav@c-98-234-186-68.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[19:01] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) Quit (Ping timeout: 480 seconds)
[19:02] * Leseb (~Leseb@193.172.124.196) Quit (Quit: Leseb)
[19:09] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[19:13] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[19:17] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[19:32] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[19:34] * jlogan (~Thunderbi@2600:c00:3010:1:3880:bbab:af7:6407) has joined #ceph
[19:35] * jlogan1 (~Thunderbi@2600:c00:3010:1:79cf:65b4:570d:7cdd) Quit (Ping timeout: 480 seconds)
[19:38] * loicd (~loic@90.84.144.121) has joined #ceph
[19:39] <jmlowe> joshd: you around?
[19:40] * slang (~slang@ace.ops.newdream.net) has joined #ceph
[19:42] * johnl (~johnl@2a02:1348:14c:1720:69e7:15b0:da5:1ec0) Quit (Remote host closed the connection)
[19:42] * johnl (~johnl@2a02:1348:14c:1720:3960:bd45:f20f:59ae) has joined #ceph
[19:42] * Cube1 (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[19:43] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[19:52] <benpol> I'm a bit confused as to when to use "ceph auth get-or-create-key" and when to use ceph-authtool. I'm beginning to suspect that the documentation here: http://ceph.com/docs/master/cluster-ops/auth-intro/ isn't entirely applicable to the argonaut release.
[19:53] <benpol> is there further documentation on the use of cephx that I'm missing?
[19:57] <benpol> I don't see documentation for using "ceph auth get-or-create-key" to create a key that has osd caps restricted to a specific pool. But I do see that I can specify pool restrictions via ceph-authtool. But ceph-authtool only seems to want to deal with ceph keyring files. Whereas "ceph auth get-or-create-key" seems to be the only way to get the cluster to be aware of a new key.
[19:57] <rweeks> scuttlemonkey_: can you respond to that?
[19:58] <rweeks> or maybe rturk
[19:58] <rweeks> but I think they're both travelling
[19:59] * benpol wishes he were going to Amsterdam!
[19:59] <jmlowe> you couldn't pay me enough to fly this week
[20:00] <benpol> jmlowe: yes there is that.
[20:00] <jmlowe> wouldn't mind seeing Amsterdam next week though
[20:04] <joao> jmlowe, are you on the east coast?
[20:04] * Steki (~steki@212.200.240.127) has joined #ceph
[20:04] * Steki (~steki@212.200.240.127) Quit ()
[20:04] * Steki (~steki@212.200.240.127) has joined #ceph
[20:05] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[20:05] * Cube1 (~Cube@12.248.40.138) has joined #ceph
[20:06] * Leseb_ (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[20:06] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[20:06] * Leseb_ is now known as Leseb
[20:07] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) Quit (Quit: Leaving.)
[20:11] * BManojlovic (~steki@212.200.241.133) Quit (Ping timeout: 480 seconds)
[20:15] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[20:19] <jmlowe> joao: Indiana, with all the hubs shut down we can only fly west from here
[20:20] <joshd> jmlowe: what's up?
[20:20] <jmlowe> cinder today
[20:21] <jmlowe> one quick thing, in the docs should /etc/init/cinder-volume be /etc/init.d/cinder-volume ?
[20:21] <joshd> benpol: 'ceph auth ...' communicates with the monitors, who actually store the auth info. ceph-authtool just lets you interact with a copy of it locally
[20:23] * deepsa (~deepsa@122.172.7.249) Quit (Ping timeout: 480 seconds)
[20:23] <joshd> jmlowe: depends how you run it, but I think that meant /etc/init/cinder-volume.conf for upstart
[20:23] * jjgalvez (~jjgalvez@12.248.40.138) has joined #ceph
[20:24] * deepsa (~deepsa@122.172.20.38) has joined #ceph
[20:25] <jmlowe> joshd: well /etc/init/cinder-volume doesn't exist in ubuntu as the docs suggest so it's slightly misleading
[20:25] <elder> sagewk, which ceph branch should I be using for my teuthology tests? I'm hitting a problem on master running xfstests over rbd.
[20:25] <gucki> is there any way to see the real usage of a rbd image? i mean if i create an image with 100GB, it will not really consume 100GB - just like sparse files, right?
[20:25] <gucki> rbd info doesn't seem to report that information
[20:25] <joshd> benpol: I added more to the ceph-authtool man page documenting the capabilities recently: http://ceph.com/docs/master/man/8/ceph-authtool/#osd-capabilities
[20:26] <joshd> gucki: there's a feature request for that http://www.tracker.newdream.net/issues/3283
[20:26] <sagewk> elder: master
[20:26] <sagewk> or next
[20:27] <sagewk> kernel rbd?
[20:27] <joshd> gucki: they are thin-provisioned, and objects are sparse too
[20:27] <jmlowe> joshd: anyway my problem today is that starting cinder-volume I can't seem to convince it not to use the client.admin keyring
[20:27] <elder> Yes
[20:27] <elder> INFO:teuthology.task.ceph.mon.a.err:./osd/OSDMap.h: In function 'entity_inst_t OSDMap::get_inst(int) const' thread 7f1d65e3c700 time 2012-10-30 12:18:31.121565
[20:27] <elder> INFO:teuthology.task.ceph.mon.a.err:./osd/OSDMap.h: 345: FAILED assert(is_up(osd))
[20:28] <gucki> joshd: ok, so currently there's no way to see if for example trim/ discard works correctly?
[20:29] <joshd> gucki: if you can discard an entire object it'll be deleted, so you can do rados -p rbd ls | grep $BLOCK_PREFIX_FOR_YOUR_IMAGE
[20:29] <elder> sagewk, I can re-try with the testing branch to make sure it's not my own code if you like.
[20:30] <gucki> joshd: no i mean trim/ discard when using qemu. as far as i understand when the fs used in the vm uses trim/ discard, this bubbles up to qemu/ rbd which then frees up the space in the pool, right?
[20:30] * loicd (~loic@90.84.144.121) Quit (Ping timeout: 480 seconds)
[20:31] <jmlowe> joshd: nm, I swear I tried this before but sticking the id in /etc/init/cinder-volume.conf worked
[20:31] <joshd> gucki: yes, and if that trim/discard spans an entire object in rbd, rbd will delete the underlying object rather than truncating etc.
[20:31] <sagewk> elder: which branch is that? master or next?
[20:31] <elder> master
[20:31] <sagewk> try next.
[20:32] <elder> OK.
[20:32] <sagewk> i'll see if i can reproduce. hitting all kinds of crap on master atm... :(
[20:32] <joshd> jmlowe: good to know. I'll fix that typo
[20:32] <elder> I just need a clean run so I can move on :)
[20:32] <elder> I don't care which ceph branch I use.
[20:33] <gucki> joshd: ah ok, thanks. i just tried, but how can i find the block prefix for my image? they all seem to start with rb.0....
[20:34] <joshd> gucki: rbd info reports it
[20:35] <gucki> joshd: ah, perfect. so when i now get like 2000 and a block is 4mb (default) then i know it currently consumes 8gb? :)
[20:35] <gucki> joshd: 2000 lines of output, counted with | wc -l
[20:36] <jmlowe> joshd: did you ever have trouble with cinder-api port conflicts?
[20:36] <joshd> gucki: that's still a conservative estimate, since not every 4mb object will be used - a 1 byte write will only use 1 byte
[20:36] <joshd> jmlowe: no, but I didn't use very complicated setups
[20:37] <joshd> gucki: well, at least a 4k write. there's probably some extra overhead for 1 byte
[20:38] <gucki> joshd: ok, great. i think for a rough estimation it's enough for now. is ceph performing any kind of defragmentation so this number gets better over time after big updates?
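Putting joshd's suggestion together, a rough upper-bound estimate of an image's real usage might look like this (the image name is an example, and the block_name_prefix field name is assumed from 'rbd info' output of roughly this era):

    PREFIX=$(rbd info myimage | awk '/block_name_prefix/ {print $2}')
    rados -p rbd ls | grep -c "^$PREFIX"
    # multiply the object count by the object size (4 MB by default)
    # for a conservative upper bound, as discussed above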
[20:38] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (Ping timeout: 480 seconds)
[20:39] * synapsr (~Adium@c-98-234-186-68.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[20:39] <joshd> gucki: if you mean dedup, no
[20:40] <joshd> gucki: rbd isn't doing anything fancy with data placement - it's mapping the same virtual block device offsets to the same objects every time
[20:41] <elder> sagewk, got past test 11 using ceph/next (which is where it died before on master)
[20:41] <elder> I think I'
[20:41] <elder> m OK now.
[20:41] <joshd> gucki: the fs or application on top of it will handle placing data and fragmentation within the virtual block device
[20:42] <gucki> joshd: isn't it like a file system - i mean if i create a file with 1 byte, it still consumes one block so 4kb. when i now create an image with 1 byte, doesn't it consume a full object which in turn consumes 4mb from the pool's storage?
[20:42] * synapsr (~Adium@c-98-234-186-68.hsd1.ca.comcast.net) has joined #ceph
[20:43] <gucki> joshd: as images are normally big, the block size is bigger 4mb instead of 4kb for better performance...that's how i understand it atm
[20:43] <joshd> gucki: objects in ceph aren't write-all or read-all - you can perform arbitrary transactions on an object, including reading/writing only portions of them
[20:44] <gucki> joshd: yes sure, but once an object is created it takes object-size bytes (so normally 4mb) from the pool's storage, right? so when I create an image of 1 byte, 1 object gets created and my pool has 4mb less free space, right?
[20:45] <gucki> joshd: btw, i just like to understand it better :)
[20:45] <gucki> joshd: if there's any document describing it in detail, i'd be happy to get that link :)
[20:46] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[20:48] <joshd> gucki: unfortunately this stuff isn't in the docs per se. probably on the mailing list at some point
[20:50] <joshd> gucki: a write to a new object will create a file on the osd, using as much space as it would normally for that filesystem
[20:50] <gucki> joshd: ah ok, so you create objects as sparse files on the osd? :)
[20:51] <joshd> gucki: yes
[20:51] <joshd> that's probably the shortest way of saying it
[20:51] <gucki> joshd: ok, perfect. now i think i got the whole thing :-) thanks a lot :)
[20:51] <joshd> you're welcome :)
[20:52] * slang (~slang@ace.ops.newdream.net) Quit (Ping timeout: 480 seconds)
[20:54] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[20:55] * Leseb_ (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[20:55] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[20:55] * Leseb_ is now known as Leseb
[20:56] * glowell2 (~glowell@c-98-234-186-68.hsd1.ca.comcast.net) has joined #ceph
[20:56] * glowell1 (~glowell@c-98-234-186-68.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[20:57] * adjohn (~adjohn@69.170.166.146) Quit (Read error: Connection reset by peer)
[20:58] * adjohn (~adjohn@69.170.166.146) has joined #ceph
[21:04] * loicd (~loic@magenta.dachary.org) has joined #ceph
[21:05] <gucki> joshd: one more question: how does ceph decide which osd to use for reading? does it choose by load, random, or just query all and use the first response...? :)
[21:08] <joshd> gucki: there's a bunch of links I could point you to for that one :) check out http://ceph.com/docs/master/dev/placement-group/ and http://ceph.com/docs/master/cluster-ops/crush-map/
[21:08] * jjgalvez1 (~jjgalvez@12.248.40.138) has joined #ceph
[21:09] <joshd> reads go to the primary for a pg, writes are synchronously replicated to all replicas in a pg (this will make sense after those links)
[21:09] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[21:10] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Quit: Leseb)
[21:11] * jjgalvez (~jjgalvez@12.248.40.138) Quit (Ping timeout: 480 seconds)
[21:12] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[21:17] * Leseb_ (~Leseb@72.11.154.239) has joined #ceph
[21:22] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Ping timeout: 480 seconds)
[21:22] * Leseb_ is now known as Leseb
[21:22] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Quit: LarsFronius)
[21:23] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Quit: Leaving)
[21:31] <gucki> joshd: ok, thanks :)
[21:40] <benpol> Joshd: Sorry missed your replies earlier about cephx stuff
[21:41] * Leseb (~Leseb@72.11.154.239) Quit (Ping timeout: 480 seconds)
[21:41] <benpol> I think the part I was missing was that you can create keyring files with ceph-authtool and then *add* them *from* those keyring files via a command like this: "ceph auth add client.foo --in-file=foo.keyring"
[21:42] <joshd> yeah, or you can skip using ceph-authtool entirely
[21:43] <benpol> I wasn't able to see how to do things like limiting access to specific pools w/o using ceph-authtool.
[21:43] * sagelap (~sage@2607:f298:a:607:74aa:5ec5:b8d:3870) has joined #ceph
[21:44] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[21:44] * loicd (~loic@magenta.dachary.org) has joined #ceph
[21:44] <joshd> it's similar syntax: ceph auth get-or-create client.blah mon 'allow r' osd 'allow r pool foo, allow rwx pool bar'
[21:45] <joshd> in argonaut that'd be pool=foo and pool=bar
[21:45] <benpol> hmm, ok thanks
[21:45] <benpol> oh, the equals sign is on its way out?
[21:46] <joshd> it won't be required in 0.54
[21:46] * benpol nods
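Pulling benpol's and joshd's pieces together, a sketch of both routes to a pool-restricted key (client names, pool names and the keyring file name are examples; on argonaut use 'pool=foo' in the osd caps, as joshd notes):

    # directly via the monitors
    ceph auth get-or-create client.blah mon 'allow r' osd 'allow r pool foo, allow rwx pool bar'

    # or build a keyring locally with ceph-authtool, then import it
    ceph-authtool --create-keyring foo.keyring --gen-key -n client.foo \
        --cap mon 'allow r' --cap osd 'allow rwx pool=bar'
    ceph auth add client.foo --in-file=foo.keyring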
[21:46] * sagelap1 (~sage@2607:f298:a:607:9def:cff5:2f8c:2076) Quit (Ping timeout: 480 seconds)
[21:46] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[21:48] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[22:03] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[22:08] * iggy (~iggy@theiggy.com) Quit (Remote host closed the connection)
[22:08] * iggy (~iggy@theiggy.com) has joined #ceph
[22:08] * lofejndif (~lsqavnbok@1RDAAENID.tor-irc.dnsbl.oftc.net) has joined #ceph
[22:17] * aliguori (~anthony@32.97.110.59) Quit (Remote host closed the connection)
[22:19] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[22:22] * lofejndif (~lsqavnbok@1RDAAENID.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[22:24] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[22:24] * loicd (~loic@magenta.dachary.org) has joined #ceph
[22:25] * benner_ (~benner@193.200.124.63) has joined #ceph
[22:27] * benner (~benner@193.200.124.63) Quit (Read error: Connection reset by peer)
[22:29] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[22:31] * calebamiles (~caleb@c-24-128-194-192.hsd1.vt.comcast.net) Quit (Quit: Leaving.)
[22:31] * calebamiles (~caleb@c-24-128-194-192.hsd1.vt.comcast.net) has joined #ceph
[22:34] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[22:34] * loicd (~loic@magenta.dachary.org) has joined #ceph
[22:38] * rweeks (~rweeks@c-98-234-186-68.hsd1.ca.comcast.net) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[22:40] * BManojlovic (~steki@212.200.240.142) has joined #ceph
[22:42] * rweeks (~rweeks@c-98-234-186-68.hsd1.ca.comcast.net) has joined #ceph
[22:47] * Steki (~steki@212.200.240.127) Quit (Ping timeout: 480 seconds)
[22:49] * aliguori (~anthony@cpe-70-123-145-75.austin.res.rr.com) has joined #ceph
[22:51] * benner_ (~benner@193.200.124.63) Quit (Read error: Connection reset by peer)
[22:51] * benner (~benner@193.200.124.63) has joined #ceph
[22:51] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) Quit (Ping timeout: 480 seconds)
[22:52] * sjust (~sam@2607:f298:a:607:baac:6fff:fe83:5a02) has joined #ceph
[22:55] * adjohn (~adjohn@69.170.166.146) Quit (Quit: adjohn)
[22:56] * Cube1 (~Cube@12.248.40.138) Quit (Ping timeout: 480 seconds)
[22:58] * gucki (~smuxi@80-218-125-247.dclient.hispeed.ch) Quit (Remote host closed the connection)
[23:01] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) has joined #ceph
[23:05] * Cube1 (~Cube@12.248.40.138) has joined #ceph
[23:13] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) Quit (Quit: Leaving)
[23:15] * drokita (~drokita@199.255.228.10) Quit (Quit: Leaving.)
[23:15] * drokita (~drokita@199.255.228.10) has joined #ceph
[23:18] * tziOm (~bjornar@ti0099a340-dhcp0778.bb.online.no) Quit (Remote host closed the connection)
[23:23] * drokita (~drokita@199.255.228.10) Quit (Ping timeout: 480 seconds)
[23:23] * mistur (~yoann@kewl.mistur.org) Quit (Remote host closed the connection)
[23:23] * mistur (~yoann@kewl.mistur.org) has joined #ceph
[23:40] * rweeks (~rweeks@c-98-234-186-68.hsd1.ca.comcast.net) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[23:48] <joao> sagewk, and anyone up for a quick review, wip-osd-msg-gen contains a couple of crushtool-related commits
[23:48] * vata (~vata@208.88.110.46) Quit (Quit: Leaving.)
[23:49] <joao> also, sagewk, topmost commit on that branch is all about making parse_pos_long() use the strict_strtol() function
[23:51] * drokita (~drokita@24-107-180-86.dhcp.stls.mo.charter.com) has joined #ceph
[23:54] * drokita (~drokita@24-107-180-86.dhcp.stls.mo.charter.com) has left #ceph
[23:55] * PerlStalker (~PerlStalk@perlstalker-1-pt.tunnel.tserv8.dal1.ipv6.he.net) Quit (Quit: um)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.