#ceph IRC Log


IRC Log for 2012-10-29

Timestamps are in GMT/BST.

[0:05] * danieagle (~Daniel@186.214.92.172) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[0:06] * LarsFronius (~LarsFroni@95-91-242-160-dynip.superkabel.de) Quit (Quit: LarsFronius)
[0:06] * MikeMcClurg (~mike@3239056-cl69.boa.fiberby.dk) Quit (Ping timeout: 480 seconds)
[0:23] * synapsr (~synapsr@c-69-181-244-219.hsd1.ca.comcast.net) has joined #ceph
[0:24] * synapsr (~synapsr@c-69-181-244-219.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[1:27] * lofejndif (~lsqavnbok@9KCAACONR.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[1:34] * psandin is now known as Moar
[1:34] * Moar is now known as psandin
[1:43] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) Quit (Ping timeout: 480 seconds)
[1:44] * mgalkiewicz_ (~mgalkiewi@staticline-31-183-94-25.toya.net.pl) has left #ceph
[1:57] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Quit: Leseb)
[2:04] * yoshi (~yoshi@p37219-ipngn1701marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[2:32] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[2:33] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit ()
[2:34] * Cube1 (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[2:36] * Cube1 (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit ()
[2:37] * Cube1 (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[2:39] * Cube1 (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit ()
[2:50] * Cube1 (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[3:47] * ryann (~chatzilla@216.81.130.180) has joined #ceph
[3:48] <ryann> Say i wish to add "journal dio = true" to my ceph.conf. It's not clear exactly which section is appropriate. global? osd?
[3:49] <mikeryan> osd
[3:49] <ryann> thanks!
[3:50] <mikeryan> np
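For reference, the setting ryann asked about lives in the [osd] section of ceph.conf (or in a per-daemon section to limit it to one osd); a minimal sketch, with the daemon id and host made up for illustration:

    [osd]
        journal dio = true

    ; or, for a single daemon only:
    [osd.0]
        host = node1
        journal dio = true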
[3:53] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) has joined #ceph
[4:04] * yehudasa_ (~yehudasa@99-48-179-68.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[4:13] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (Remote host closed the connection)
[4:21] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[4:55] * miroslav1 (~miroslav@c-98-248-210-170.hsd1.ca.comcast.net) has joined #ceph
[5:00] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[5:05] * Cube1 (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[5:10] * deepsa (~deepsa@122.172.7.249) has joined #ceph
[5:53] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[5:55] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[5:55] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (Remote host closed the connection)
[6:01] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[6:46] * miroslav1 (~miroslav@c-98-248-210-170.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[6:47] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[6:50] * f4m8 (f4m8@kudu.in-berlin.de) has joined #ceph
[7:19] <ryann> leave
[7:19] * ryann (~chatzilla@216.81.130.180) has left #ceph
[7:36] * iltisanni (d4d3c928@ircip3.mibbit.com) has joined #ceph
[8:45] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[8:47] * madkiss1 (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[8:53] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[9:05] * loicd (~loic@magenta.dachary.org) has joined #ceph
[9:07] * verwilst (~verwilst@d5152FEFB.static.telenet.be) has joined #ceph
[9:13] * MoroIta (~MoroIta@62.196.20.28) has joined #ceph
[9:23] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:25] * Leseb (~Leseb@193.172.124.196) has joined #ceph
[9:30] <iltisanni> Hey. Is that correct?: The ceph primary monitor keeps the osdmap as long as it's the master. The osdmap tells the client on which OSD it has to connect. The OSD Master keeps the primary Placement Group as long as it's the master. The placement groups keep the objects (data) and they are replicated on the other OSDs. ???
[9:31] <iltisanni> moreover mds is only required for cephfs?
[9:31] <Fruit> the latter is true at least :)
[9:34] * pixel (~pixel@81.195.203.34) has joined #ceph
[9:35] <pixel> Hello, everybody!
[9:35] <iltisanni> :-) well OK unfortunately that's my shortest of all questions. But Thx anyway
[9:35] <iltisanni> hi
[9:35] <Robe> iltisanni: rest also reads correct
[9:35] <Robe> at least according to the rados paper I read
[9:36] <iltisanni> ok that's nice
[9:36] <iltisanni> Thx
[9:36] <pixel> Is this right command "ceph osd crush set 2 osd.2 2.0 pool=data" to add osd to crush map?
[9:37] <pixel> I'm getting the error (22) Invalid argument when try to use it
[9:37] * tziOm (~bjornar@194.19.106.242) has joined #ceph
[9:39] <iltisanni> not correct
[9:39] <iltisanni> because
[9:39] <iltisanni> ceph osd setcrushmap -i {compiled-crushmap-filename}
[9:39] * danieagle (~Daniel@186.214.95.132) has joined #ceph
[9:39] <iltisanni> thats the usage
[9:39] <pixel> but I've got it from official docs)
[9:40] <pixel> ok, thank will use your way
[9:40] <iltisanni> ok sorry.. was wrong
[9:40] <iltisanni> ceph osd crush set {id} {name} {weight} pool={pool-name} [{bucket-type}={bucket-name}, ...]
[9:40] <iltisanni> thats the usage to add an osd
[9:40] <iltisanni> http://ceph.com/docs/master/cluster-ops/crush-map/
[9:43] <iltisanni> I have another question, following on from my last few: how is the pool involved in that? what does it contain...
[9:43] * MikeMcClurg (~mike@91.224.175.20) has joined #ceph
[9:45] <iltisanni> i understood. osds keep placement groups that keep objects.... And pools contain placement groups and are contained in osds?
[9:45] <iltisanni> and thats the place where to set the number of pg replicas?
[9:45] * MoroIta (~MoroIta@62.196.20.28) Quit (Ping timeout: 480 seconds)
[9:47] * LarsFronius (~LarsFroni@95-91-242-155-dynip.superkabel.de) has joined #ceph
[9:48] * calebamiles (~caleb@c-24-128-194-192.hsd1.vt.comcast.net) Quit (Quit: Leaving.)
[9:49] * calebamiles (~caleb@c-24-128-194-192.hsd1.vt.comcast.net) has joined #ceph
[9:50] <iltisanni> ?
[9:51] * LarsFronius (~LarsFroni@95-91-242-155-dynip.superkabel.de) Quit ()
[9:52] <pixel> I've found my mistake, there isn't a pool named 'data'; I've used 'default': "ceph osd crush set 2 osd.2 2.0 pool=default"
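So the working form of the command iltisanni quoted, with pixel's values, is the first line below; with an explicit host bucket it would look like the second (the host bucket name here is made up):

    ceph osd crush set 2 osd.2 2.0 pool=default
    ceph osd crush set 2 osd.2 2.0 pool=default host=node2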
[9:55] * nazarianin|2 (~kvirc@mg01.apsenergia.ru) has joined #ceph
[9:59] <nazarianin|2> Hello All! I created an iscsi image of 200M and the osd size on the xfs partition was 300M. After 2 days of using the iscsi image, the osd size was over 6.5G. Why?
[10:01] * MikeMcClurg (~mike@91.224.175.20) Quit (Ping timeout: 480 seconds)
[10:01] * MoroIta (~MoroIta@62.196.20.28) has joined #ceph
[10:08] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[10:13] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[10:14] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[10:15] * danieagle (~Daniel@186.214.95.132) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[10:18] * MoroIta (~MoroIta@62.196.20.28) Quit (Quit: Yaaic - Yet another Android IRC client - http://www.yaaic.org)
[10:20] <joao> iltisanni, actually, all the monitors in the quorum keep updated versions of all the maps, including the osdmap
[10:22] <joao> the leader among the monitors only happens to be the one responsible for sharing updates with the remaining monitors (in the form of proposals that must be acknowledged by a majority)
[10:22] * mib_04imu4 (bca79153@ircip4.mibbit.com) has joined #ceph
[10:22] <joao> the monitors follow the Paxos algorithm for quorum decisions
[10:26] <iltisanni> ahh ok thanks
[10:29] <iltisanni> i don't get what the crush map is exactly for.... only for configuration of the different communication ways between client and osd?
[10:30] <mib_04imu4> hi guys, i have a test config with 4 osds on 2 servers. Everything looks ok, but i have a problem when one of the osds goes down: there is a freeze on rbd for 20-25 seconds (while ceph detects the osd is down), then the block device continues to operate. Can i eliminate this behavior? Is it possible to do it without freezing iops?
[10:30] <joao> iltisanni, it's used to calculate the location of pgs given the current osdmap
[10:31] <joao> you get your crushmap and an updated version of the osdmap, and your client will know which osd to contact directly
[10:32] <iltisanni> ok, so the client gets the osdmap from the monitor and the crush map from ? (monitor also?) -> and with that information the client knows which osd to contact
[10:33] <joao> mib_04imu4, I don't know if there's any chance to avoid that; by the looks of it I would say that those 20-25 seconds are the time it takes for the other osds and monitors to realize that osd just went down, update the osdmaps and only then will the rbd contact the other replica(s) of the pgs it was trying to access on the failed osd
[10:33] <joao> if that's all there is, I know of no way around it
[10:34] <joao> iltisanni, from the monitor's point of view, the crushmap is embedded in the osdmap
[10:35] <joao> so when it shares the osdmap it is sharing the crushmap along
[10:36] <iltisanni> ok Thanks. got it now i think :-). The crushmap can be edited by the user, but the osdmap is automatically created isn't it?
[10:37] <joao> yes
[10:37] <iltisanni> good
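The usual cycle for editing the crushmap joao mentions is to pull it from the monitors, decompile it, edit the text form, recompile and inject it back; roughly (filenames are arbitrary):

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # edit crush.txt, then:
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new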
[10:39] <mib_04imu4> joao, thanks for the reply. So this is normal behavior? Hm, so i need to try how VMs or a database can handle this freeze..
[10:39] <joao> mib_04imu4, I would say so, yes, but I'm no expert when it comes to rbd
[10:41] <iltisanni> when editing the crushmap on a monitor which is not the master.. does the master pull this information or is it pushed to it?
[10:42] <mib_04imu4> joao, but this freeze is not only in rbd right? this is how ceph operate when osd goes down
[10:42] <joao> yes, I do think so
[10:43] <joao> at least it matches with the behavior you would expect from the cluster once an osd goes down until it is detected by the remaining cluster participants and maps get updated
[10:46] <mib_04imu4> joao, does it help when u have more osds? can a cluster with 20 osds recover quicker? or doesn't it matter?
[10:47] * MikeMcClurg (~mike@91.224.175.20) has joined #ceph
[10:49] <joao> mib_04imu4, if the client is accessing an object on the osd that fails, I would say you don't have a way around it
[10:49] <joao> however, having more osds will mean that the primary pgs are more scattered across the cluster
[10:50] <iltisanni> I have another (maybe stupid) question. Sorry, I'm a ceph noob and I find the explanation on the ceph homepage quite hard to understand -.- ... Ceph is a cluster which stores objects and replicas of them through osds -> pools -> pgs. To write data on the cluster I need a client that connects to the osds in the cluster
[10:50] <joao> instead of relying on only 4 osds, if you rely on 400
[10:50] <iltisanni> but how to specify that client (server)?
[10:50] <joao> your osd may very well fail and you may not even notice it
[10:51] <joao> but then again, that's also true for only 4 osds, as long as you are accessing the other 3 osds that didn't fail ;)
[10:51] <madkiss1> integrating RADOS into Nagios / Icinga is something that I was discussing with some Icinga developers 2 weeks ago btw.
[10:51] * madkiss1 is now known as madkiss
[10:51] <joao> iltisanni, what do you mean? "specify that client (server)"?
[10:52] <joao> madkiss, I'm not familiar with how nagios work; will it notify you immediately when an osd fails?
[10:52] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) Quit (Quit: tryggvil)
[10:53] * ninkotech_ (~duplo@89.177.137.231) Quit (Read error: Connection reset by peer)
[10:53] <iltisanni> well, can i just take any server which physically has a connection to the cluster, or do I need a server that is known to the cluster?
[10:53] <madkiss> joao: not "immediately", but there will be some way to catch such an event
[10:53] <mib_04imu4> joao, hm thats right :) thanks
[10:54] <joao> madkiss, wouldn't that duplicate what the osds and the monitors are already doing internally?
[10:54] <madkiss> no.
[10:54] <joao> I'm obviously missing the point here :p
[10:54] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[10:54] <madkiss> it would just be getting the internal status messages out of the mons and finding a way to integrate this into standard monitoring systems (with alarming etc. pp.)
[10:54] * loicd (~loic@magenta.dachary.org) has joined #ceph
[10:54] <joao> oh, that would be neat
[10:56] <joao> from the meager awareness I have of how datacenters work, which is all through a friend who used to work at one, they used to rely a lot on nagios for system status
[10:56] <joao> so having the mons interfacing with nagios would only make sense
[10:57] <joao> (actually, his office looked kind of cool with all those screens with nagios status)
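A minimal Nagios-style check along the lines madkiss describes could simply wrap 'ceph health' and map its output to plugin exit codes; a hypothetical sketch, not an existing plugin:

    #!/bin/sh
    # check_ceph_health (hypothetical): map 'ceph health' output to Nagios states
    status=$(ceph health 2>&1)
    case "$status" in
        HEALTH_OK*)   echo "OK - $status";       exit 0 ;;
        HEALTH_WARN*) echo "WARNING - $status";  exit 1 ;;
        HEALTH_ERR*)  echo "CRITICAL - $status"; exit 2 ;;
        *)            echo "UNKNOWN - $status";  exit 3 ;;
    esac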
[10:57] <iltisanni> let's say.. I have a working cluster with 3 osds, 3 monitors and one mds. Now I want to write data on the cluster from a server. How to do that? The server has to know the cluster... or must it be in that cluster, too?
[10:57] <iltisanni> sorry for interrupting you guys with my stupid question btw ;-)
[10:58] <joao> iltisanni, the client to the ceph cluster will have to have a way to contact the monitors
[10:58] <joao> I mean, will have to have a way to know where at least one monitor is
[10:58] <joao> from the docs at the site, you would specify a ceph.conf to the libraries, for instanc
[10:58] <joao> *instance
[10:59] <joao> I'm sure the cephfs client will also need a ceph.conf too
[10:59] * Leseb (~Leseb@193.172.124.196) Quit (Read error: Connection reset by peer)
[10:59] * Leseb (~Leseb@193.172.124.196) has joined #ceph
[10:59] <iltisanni> ok so i install the ceph packages i need and edit the ceph.conf on the server, where I specify at least one of the monitors
[11:00] <joao> I'm really not sure how that works as I haven't tried that myself, so I'm entering the 'theoretically it would work like this' zone
[11:00] <joao> iltisanni, the 'server' being the one acting as client?
[11:00] <iltisanni> y
[11:01] <joao> well, if it was me trying to give it a shot for the first time, I would then create a ceph.conf with only a [global] and [mon] sections
[11:02] <joao> I'm sure the docs have something to say about it though
[11:02] <iltisanni> right now I take all my information from here: http://ceph.com/docs/master/rbd/rbd/ But i can't find something about that... maybe I'm blind.
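A minimal client-side ceph.conf for what is being described only needs to say where (at least one of) the monitors and the client's keyring live; a sketch, with made-up addresses and paths:

    [global]
        auth supported = cephx
        keyring = /etc/ceph/keyring.admin

    [mon.a]
        host = mon1
        mon addr = 192.168.0.10:6789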
[11:08] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has joined #ceph
[11:46] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[11:52] * MikeMcClurg (~mike@91.224.175.20) Quit (Ping timeout: 480 seconds)
[11:59] * mtk (~mtk@ool-44c35bb4.dyn.optonline.net) Quit (Read error: Connection reset by peer)
[12:04] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[12:05] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[12:05] * mib_04imu4 (bca79153@ircip4.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[12:18] * MikeMcClurg (~mike@91.224.175.20) has joined #ceph
[12:18] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[12:25] * match (~mrichar1@pcw3047.see.ed.ac.uk) has joined #ceph
[12:30] * yoshi (~yoshi@p37219-ipngn1701marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[12:39] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[12:42] * loicd (~loic@magenta.dachary.org) has joined #ceph
[12:54] * MikeMcClurg (~mike@91.224.175.20) Quit (Ping timeout: 480 seconds)
[13:08] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) has joined #ceph
[13:29] * noob2 (a5a00214@ircip3.mibbit.com) has joined #ceph
[13:29] <noob2> does ceph work on rhel 5.8 or is that getting too crusty?
[13:33] <Fruit> server or client?
[13:33] <noob2> client
[13:33] <Fruit> doubtful. certainly not "out of the box"
[13:34] <noob2> server i'm pretty sure i'll go with ubuntu 12
[13:34] <noob2> right
[13:34] <noob2> you guys recommend staying away from the fuse client right?
[13:35] <pixel> Pls. write the command (ceph mon add <name> <ip>[:<port>]\n"; ) with an example, because I'm not able to perform it
[13:36] * tziOm (~bjornar@194.19.106.242) Quit (Ping timeout: 480 seconds)
[13:45] <iltisanni> pixel : you can add a monitor via ceph.conf
[13:45] <iltisanni> i think...
[13:45] * tziOm (~bjornar@194.19.106.242) has joined #ceph
[13:46] <pixel> you mean, we don't need to use this command, just add the mon to the conf file manually?
[13:46] <iltisanni> i did so
[13:46] <joao> pixel,
[13:46] <joao> jecluis@Magrathea:~/inktank/src/ceph.master/src$ ./ceph mon add d 127.0.0.1:6792
[13:46] <joao> added mon.d at 127.0.0.1:6792/0
[13:47] <pixel> ok, thx
[13:47] <iltisanni> but it is possible to add the monitor via conf file isn't it
[13:47] <iltisanni> ?
[13:47] <iltisanni> just add some lines
[13:48] <iltisanni> hostname and mon addr
[13:48] <pixel> \n"; -- this is so strange in docs
[13:49] <joao> iltisanni, the ceph.conf will be used as the configuration file for the cluster; it may specify where the monitors live, and a monitor will use it when it is started
[13:49] <joao> but if you have a running cluster, changing the ceph.conf won't make other monitors aware of said monitor
[13:50] <joao> it will however let the new monitor know where the remaining monitors are, but the remaining monitors won't allow the new monitor into the quorum unless the new monitor is on their monmap
[13:50] <joao> thus the 'ceph mon add' command
[13:50] <joao> does this make sense?
[13:51] <noob2> anyone know of ceph puppet modules ?
[13:51] <iltisanni> but adding the monitor manually in the conf file and restarting the service does add it?
[13:51] * ssedov (stas@ssh.deglitch.com) has joined #ceph
[13:52] <joao> it depends, iirc
[13:52] <joao> assuming you shutdown all the monitors in the cluster
[13:53] <joao> you'd have to either do one of two things after adding the new monitor to the ceph.conf
[13:53] <joao> either set mon initial peers (I think that's the right option) on [global] with all the monitors that are supposed to be part of your monitor cluster
[13:53] <wonko_be> what is nowadays the preferred way to start building a ceph cluster?
[13:53] <joao> or regenerate the new monmap
[13:54] <wonko_be> debian + chef? debian + manually setting it up?
[13:54] <joao> and specify it to the monitors, but I have no idea if regenerating the monmap is advised
[13:54] * todin (tuxadero@kudu.in-berlin.de) has joined #ceph
[13:54] <todin> good day
[13:54] <joao> iltisanni, the advised route here is to use the ceph tool to add new monitors afaik
[13:54] <iltisanni> well ok
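For completeness, the documented sequence for adding a monitor to a running cluster at this point looks roughly like the following, using joao's example id and address (see the add-or-rm-mons page in the docs for the authoritative steps):

    ceph auth get mon. -o /tmp/mon.keyring
    ceph mon getmap -o /tmp/monmap
    ceph-mon -i d --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
    ceph mon add d 127.0.0.1:6792
    ceph-mon -i d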
[13:55] <noob2> hello todin
[13:55] <joao> hi todin
[13:55] * stass (stas@ssh.deglitch.com) Quit (Read error: Connection reset by peer)
[13:55] <pixel> good afternoon
[14:02] <iltisanni> I have to test a ceph cluster. How would you guys start with that project? I don't know where and how to begin. Right now i have installed 5 VMs. 3 of them are OSD and MON servers. Additionally, one of the three has an mds daemon running. Now I'm stuck... don't know how to go on testing ceph..
[14:03] <pixel> I'm doing the same now
[14:04] <iltisanni> ceph health gives me Health OK.. so the cluster is running.. but I don't know what to do now :-)
[14:05] <pixel> I've tried to use the tool " iozone -i 0 -i 2 -l 8 -u 8 -r 4k -s 100m -F ./1 ./2 ./3 ./4 ./5 ./6 ./7 ./8" but my cluster frozen
[14:05] <iltisanni> what do you mean with frozen?
[14:06] <iltisanni> vms dont react anymore?
[14:06] <pixel> stopped working :(
[14:06] <pixel> I've got 5 dedicated servers
[14:06] <iltisanni> the ceph cluster or the whole vm?
[14:06] <pixel> client nodes
[14:06] <pixel> node*
[14:06] <pixel> where storage is mounted
[14:06] <pixel> then I've restatre it
[14:07] <pixel> restarted
[14:10] <iltisanni> sorry cant help you out
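As a rough starting point for the kind of testing being asked about, a few smoke tests can be run from any node with an admin keyring (pool and image names below are arbitrary):

    ceph -s                            # overall cluster status
    ceph osd tree                      # how the osds are laid out
    rados mkpool testpool
    rados bench -p testpool 30 write   # simple 30-second write benchmark
    rbd create testpool/testimg --size 1024
    rbd ls testpool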
[14:19] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[14:27] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[14:27] * loicd (~loic@magenta.dachary.org) has joined #ceph
[15:00] * PerlStalker (~PerlStalk@perlstalker-1-pt.tunnel.tserv8.dal1.ipv6.he.net) has joined #ceph
[15:07] * MikeMcClurg (~mike@91.224.175.20) has joined #ceph
[15:17] * vata (~vata@208.88.110.46) has joined #ceph
[15:19] * pixel_ (~pixel@81.195.203.34) has joined #ceph
[15:24] * gregorg (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[15:27] * pixel (~pixel@81.195.203.34) Quit (Ping timeout: 480 seconds)
[15:33] <wonko_be> coming back to ceph after 6 months, the installation docs didn't become easier, so it seems.
[15:36] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[15:37] * loicd (~loic@magenta.dachary.org) has joined #ceph
[15:37] <wonko_be> do the chef cookbooks work?
[15:39] <scuttlemonkey> hey wonko_be: sorry to hear things aren't cleared up
[15:39] <scuttlemonkey> I know they have been doing a fair amount of work on the docs
[15:39] <scuttlemonkey> the Chef cookbooks should be functional if you are most comfortable with that route
[15:39] <wonko_be> well, I just wanted to give ceph a new try, and was looking for the best way to do it "the ceph way"
[15:39] <scuttlemonkey> ahh
[15:40] <scuttlemonkey> have you had a chance to look over ceph-deploy?
[15:40] <wonko_be> so, should i use mkcephfs?
[15:40] <wonko_be> ceph-deploy
[15:40] <wonko_be> lets look for that
[15:40] <scuttlemonkey> it's still a little wild-west-y, but that's the direction things are headed I think
[15:41] <scuttlemonkey> I'm currently playing with both ceph-deploy and the newer of the juju charms
[15:41] <scuttlemonkey> https://github.com/ceph/ceph-deploy
[15:41] <wonko_be> yeah, i'm reading through it now
[15:41] <scuttlemonkey> cool
[15:42] <scuttlemonkey> shout if you have any questions...I might be able to help, but barring that the guys working on it should be around in a bit
[15:42] <wonko_be> looks sweet, lets give that a spin
[15:44] <wonko_be> okay, workstation is a bit wrongly worded, it should be a linux node
[15:45] <scuttlemonkey> the "...runs fully on your workstation," bit?
[15:45] <wonko_be> :)
[15:45] <scuttlemonkey> well ceph-deploy really is just a few scripts that you could run from your workstation
[15:45] <scuttlemonkey> ubuntu laptop, cloud node, etc
[15:46] <wonko_be> yeah, but mac os x doesnt fit the bill, apparently
[15:46] <wonko_be> let me fix that
[15:47] <scuttlemonkey> ahh, yeah
[15:47] <scuttlemonkey> I deployed it from an ubuntu box I had
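The ceph-deploy flow being discussed looks roughly like the following at this point; the subcommands are still moving, so the README in the repository above is the authoritative reference (hostnames are made up):

    ceph-deploy new ceph1 ceph2 ceph3
    ceph-deploy install ceph1 ceph2 ceph3
    ceph-deploy mon ceph1 ceph2 ceph3
    ceph-deploy gatherkeys ceph1
    ceph-deploy osd ceph1:sdc ceph2:sdc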
[15:52] * MikeMcClurg1 (~mike@91.224.175.20) has joined #ceph
[15:52] * MikeMcClurg (~mike@91.224.175.20) Quit (Quit: Leaving.)
[15:52] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[15:53] <wonko_be> ah, and the nodes should be ubuntu
[15:54] * pixel_ (~pixel@81.195.203.34) Quit (Quit: Ухожу я от вас (xchat 2.4.5 или старше))
[16:00] * tziOm (~bjornar@194.19.106.242) Quit (Remote host closed the connection)
[16:04] * buck (~buck@bender.soe.ucsc.edu) has joined #ceph
[16:07] * MikeMcClurg1 (~mike@91.224.175.20) Quit (Ping timeout: 480 seconds)
[16:12] * yehudasa (~yehudasa@2607:f298:a:607:dcb7:393:6594:a865) Quit (Ping timeout: 480 seconds)
[16:12] * sagelap (~sage@212.sub-70-197-150.myvzw.com) has joined #ceph
[16:15] * yehudasa_ (~yehudasa@99-48-179-68.lightspeed.irvnca.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[16:20] * yehudasa (~yehudasa@2607:f298:a:607:74bd:a80c:a3ba:88b4) has joined #ceph
[16:20] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[16:21] * sagelap (~sage@212.sub-70-197-150.myvzw.com) Quit (Ping timeout: 480 seconds)
[16:25] * sagelap (~sage@3.sub-70-197-142.myvzw.com) has joined #ceph
[16:29] * gregorg (~Greg@78.155.152.6) has joined #ceph
[16:33] * sagelap (~sage@3.sub-70-197-142.myvzw.com) Quit (Ping timeout: 480 seconds)
[16:40] * jlogan1 (~Thunderbi@2600:c00:3010:1:852f:a2dd:c540:fa16) has joined #ceph
[16:43] * sagelap (~sage@172.sub-70-197-143.myvzw.com) has joined #ceph
[16:47] * rweeks (~rweeks@c-98-234-186-68.hsd1.ca.comcast.net) has joined #ceph
[16:54] * sagelap1 (~sage@2607:f298:a:607:9def:cff5:2f8c:2076) has joined #ceph
[16:56] * sagelap (~sage@172.sub-70-197-143.myvzw.com) Quit (Ping timeout: 480 seconds)
[16:57] <sagewk> slang: wip-3367 looks good, i'll merge it in.
[16:58] <slang> sagewk: k
[17:06] <sagewk> elder: we can close 3291 right?
[17:06] <elder> Let me look.
[17:06] <elder> I need a review, it's not committed.
[17:06] <elder> Wait.
[17:07] <elder> No, that's the btrfs one. I guess we can close it given we have a fix. I don't know how you like to time those things... I haven't checked to see if Josef's patch is upstream yet.
[17:08] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:09] <elder> sagewk, translated: Feel free to close it if you like...
[17:10] * verwilst (~verwilst@d5152FEFB.static.telenet.be) Quit (Quit: Ex-Chat)
[17:17] * tziOm (~bjornar@ti0099a340-dhcp0778.bb.online.no) has joined #ceph
[17:18] * loicd (~loic@magenta.dachary.org) has joined #ceph
[17:28] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) Quit (Ping timeout: 480 seconds)
[17:35] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[17:36] * tzi0m (~bjornar@ti0099a340-dhcp0778.bb.online.no) has joined #ceph
[17:37] * tzi0m (~bjornar@ti0099a340-dhcp0778.bb.online.no) Quit ()
[17:51] <wonko_be> is there a way to tell the kernel to rescan the rbd device, to see the new size after a rbd --resize command?
[17:52] <madkiss> "partprobe" possibly?
[17:53] <Robe> madkiss: ohai!
[17:56] * Cube1 (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[17:57] * Cube1 (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit ()
[18:03] <Leseb> wonko_be: there is no way, unless you umount and mount the fs (I guess you're asking because there is a filesystem on it)
[18:05] <joshd> wonko_be: iirc echo 1 > /sys/bus/pci/.../rescan or some such does it
[18:12] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[18:12] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[18:13] * Leseb (~Leseb@193.172.124.196) Quit (Quit: Leseb)
[18:14] <wonko_be> joshd: that would be for a scsi-like device
[18:14] <wonko_be> not rbd
[18:14] <wonko_be> i remounted
[18:14] <wonko_be> well, i rebooted, as resize2fs didn't see the new size of the rbd device
[18:14] <wonko_be> it isn't really that important
[18:14] <mikeryan> partprobe ought to handle it
[18:15] <mikeryan> we should fix it if it doesn't
[18:16] <wonko_be> mikeryan: i'll try partprobe the next time
[18:17] <wonko_be> i thought there might be a "rescan" command for rbd, like iscsi has
[18:20] <wonko_be> never mind, this actually works, must have been ext4
[18:21] <wonko_be> ah, once mounted/used/open, the resizing doesn't get noticed
[18:21] <wonko_be> weird
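The sequence that eventually worked out above, roughly (image and device names are illustrative; growing a mounted ext4 filesystem online needs a reasonably recent kernel):

    rbd resize mypool/myimage --size 20480   # new size in MB
    # on the client that has the image mapped:
    partprobe /dev/rbd0                      # or: blockdev --rereadpt /dev/rbd0
    resize2fs /dev/rbd0                      # grow the filesystem into the new space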
[18:23] * loicd (~loic@jem75-2-82-233-234-24.fbx.proxad.net) has joined #ceph
[18:32] * match (~mrichar1@pcw3047.see.ed.ac.uk) Quit (Quit: Leaving.)
[18:36] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[18:38] * AaronSchulz (~chatzilla@216.38.130.166) Quit (Ping timeout: 480 seconds)
[18:39] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[18:41] <joshd> sagewk: wip-oc-neg looks good. moving the layering handling is a separate step, right?
[18:43] * chutzpah (~chutz@199.21.234.7) has joined #ceph
[18:46] * rino (~rino@12.250.146.102) has joined #ceph
[18:46] <sagewk> joshd: yeah
[18:47] <sagewk> sjust: we talked before about making filestore_xattr_use_omap = true the default.. was there a reason we didn't?
[18:48] <nhmlap> sagewk: fwiw I've been using it pretty extensively.
[18:52] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) has joined #ceph
[18:52] * LarsFronius_ (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[18:52] * loicd (~loic@jem75-2-82-233-234-24.fbx.proxad.net) Quit (Quit: Leaving.)
[18:53] * LarsFronius_ (~LarsFroni@testing78.jimdo-server.com) Quit ()
[18:54] * dmick (~dmick@2607:f298:a:607:cdea:4965:cd42:6c7) has joined #ceph
[18:55] * rino (~rino@12.250.146.102) Quit (Quit: [BX] Were you born a fat, slimy, scumbag, puke, piece of shit or did you have to work on it?)
[18:57] * adjohn (~adjohn@69.170.166.146) has joined #ceph
[18:59] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Ping timeout: 480 seconds)
[19:09] * Cube1 (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[19:11] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[19:14] <joao> sagewk, is it just me, or does the crush map need to be constructed in a way that roots are always parsed before rules?
[19:14] * miroslav (~miroslav@c-98-234-186-68.hsd1.ca.comcast.net) has joined #ceph
[19:23] <Fruit> sagewk: what does that omap option do anyway? :)
[19:30] * LarsFronius (~LarsFroni@2a02:8108:3c0:79:968:36e:9749:3bfb) has joined #ceph
[19:36] * LarsFronius_ (~LarsFroni@95-91-242-169-dynip.superkabel.de) has joined #ceph
[19:37] * senner (~Wildcard@68-113-232-90.dhcp.stpt.wi.charter.com) has joined #ceph
[19:41] * noob2 (a5a00214@ircip3.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[19:41] * LarsFronius (~LarsFroni@2a02:8108:3c0:79:968:36e:9749:3bfb) Quit (Ping timeout: 480 seconds)
[19:41] * LarsFronius_ is now known as LarsFronius
[19:45] <nhmlap> Fruit: more or less to store the extended file attributes in leveldb.
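The option being discussed is a filestore setting in ceph.conf, e.g.:

    [osd]
        filestore xattr use omap = true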
[19:47] * jmlowe (~Adium@c-71-201-31-207.hsd1.in.comcast.net) has joined #ceph
[19:47] <jmlowe> joshd: you around?
[19:47] <joshd> jmlowe: yeah, what's up?
[19:48] <jmlowe> so the ceph docs say to put a —id volumes in for cinder/nova-volume, is there something similar that should be done for glance?
[19:49] <jmlowe> nm, I see where rbd_store_user gets set
[19:49] <joshd> yeah, glance doesn't need CEPH_ARGS
[19:50] <jmlowe> where do you tell it what keyring to use?
[19:51] <jmlowe> I've got "monclient(hunting): failed to open keyring: (2) No such file or directory" in my syslogs, so I think maybe it doesn't know where to find its keyring
[19:52] <joshd> it's getting that from ceph.conf
[19:52] <joshd> right, ok, I remember this from friday now
[19:52] <jmlowe> yeah, taking another crack at it
[19:53] <joshd> at one point you had glance connecting to the cluster correctly, but it was getting 'operation not permitted' when trying to actually do i/o
[19:53] <jmlowe> so I have nothing in ceph.conf about that keyring, is that something I'm missing
[19:54] <joshd> you want the keyring for client.glance (or whatever rados client you're using) and ceph.conf should point to that keyring file on the machine running glance-api
[19:54] <joshd> it needs to be readable by the glance unix user
[19:56] <jmlowe> we had the unix user working with the rbd cli on Friday, new keys due to cap parsing bug that hasn't hit the packaged ceph yet
[19:57] <joshd> right, so your current client.glance caps are something like mon 'allow r' osd 'allow rwx pool images'?
[20:00] <jmlowe> yes and rbd ls works when specifying the id and keyring
[20:01] <jmlowe> I've now added a keyring = /etc/ceph/ceph.client.images.keyring for client.images
[20:01] <joshd> sounds good
[20:09] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[20:10] <jmlowe> no luck finding the fault with strace
[20:11] <jmlowe> also oddly isn't obeying the debug settings in ceph.conf
[20:11] <scuttlemonkey> heh, I'm having trouble w/ keyring also...just w/ juju instead
[20:12] <joshd> hmm, is ceph.conf not readable by glance-api once it starts up? is there an apparmor profile being applied?
[20:12] <jmlowe> oh, apparmor I hadn't thought to check that
[20:13] <joshd> it's been a problem with qemu/libvirt in the past, but I haven't seen it cause issues with glance yet
[20:22] * BManojlovic (~steki@212.200.243.179) has joined #ceph
[20:23] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[20:23] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[20:29] <jmlowe> ok, I finally read the code, glance-api.conf:rbd_store_user = glance should have been glance-api.conf:rbd_store_user = images
[20:30] <jmlowe> which is in the docs, no idea why I changed that
[20:31] <joshd> ah, that would do it
[20:32] <jmlowe> had rbd_store_user twice in /etc/glance/glance-api.conf
[20:33] <jmlowe> there we go, I've got an image uploaded now via the glance cli
[20:34] <jmlowe> I'm guessing there was a default rbd_store_user commented out and on two different occasions I removed the comments and added my own rbd_store_user
[20:34] <joshd> yeah, I think the commented one is glance
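Pulling together what jmlowe ended up with, the relevant pieces look something like this (the client name 'images', pool and keyring path are the ones from this conversation; other glance settings omitted):

    # /etc/glance/glance-api.conf
    default_store  = rbd
    rbd_store_user = images
    rbd_store_pool = images

    # /etc/ceph/ceph.conf on the glance-api host
    [client.images]
        keyring = /etc/ceph/ceph.client.images.keyring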
[20:39] <PerlStalker> Any idea why qemu-img can see an rbd block but kvm (via libvirt) spits back "could not open disk image rbd:kvm_prod/srvmgmt_c: No such file or directory"
[20:39] <scuttlemonkey> joshd: do we have anyone internal who has messed w/ juju? I have a few questions, but all the canonical guys are most likely busy at UDS
[20:40] <joshd> PerlStalker: kvm is probably pointing to a different qemu binary without rbd support (kvm -device format=? | grep rbd would tell you for sure)
[20:41] <joshd> scuttlemonkey: gregaf was looking at it last week in preparation for uds
[20:42] <scuttlemonkey> ahh, is he there?
[20:42] <joshd> scuttlemonkey: yeah, he's there the whole week
[20:42] <scuttlemonkey> gotcha
[20:43] <scuttlemonkey> I have a howto written up for deploying ceph w/ juju that I wanted to publish during UDS...but it looks like that wont happen
[20:43] <scuttlemonkey> couple of places where it has the opportunity to just asplode
[20:43] <scuttlemonkey> guess I'll just file a few bugs and chat w/ James when he gets back
[20:43] <PerlStalker> joshd: Actually, it looks like a libvirt issue. I can run the kvm command-line by hand and start the vm.
[20:45] <joshd> PerlStalker: ah, you might have a slightly older libvirt that still tries to do security checks on rbd as if it's a file. which version of libvirt is this?
[20:46] <dmick> PerlStalker: I had that problem when I'd installed kvm-spice
[20:46] <dmick> which doesn't support rbd
[20:46] <PerlStalker> joshd: It's libvir 0.9.8 but I have it working on another server running the same version.
[20:46] <dmick> but in my case the xml actually referred to that binary, so may be different for you
[20:47] <PerlStalker> dmick: I ran into that but I switched back to the non-spice kvm to get around that.
[20:48] * BManojlovic (~steki@212.200.243.179) Quit (Quit: Ja odoh a vi sta 'ocete...)
[20:48] <joshd> PerlStalker: is one ubuntu and one not? ubuntu had some extra patches on their 0.9.8 to avoid this kind of issue
[20:48] * Steki (~steki@212.200.243.179) has joined #ceph
[20:48] <PerlStalker> They are both ubuntu precise.
[20:49] <PerlStalker> I'm using the same package version on both systems. That was my first thought.
[20:51] <joshd> is there any clue in libvirt's log if you enable debug logging in /etc/libvirtd.conf?
[20:51] <PerlStalker> Could it be permissions on the keys in /etc/ceph ?
[20:51] <joshd> er, /etc/libvirt/libvirtd.conf
[20:52] <dmick> (btw that option for 'checking binary for support' is kvm -drive format=?; brutal command line parsing in qemu)
[20:52] <joshd> it could be permissions or apparmor if you're using a keyring file
[20:53] * Steki (~steki@212.200.243.179) Quit ()
[20:53] * BManojlovic (~steki@212.200.243.179) has joined #ceph
[20:53] * PerlStalker kicks apparmor
[21:05] <PerlStalker> Yep, apparmor is denying access to my keyring.admin file.
[21:07] <rweeks> sudo apt-get remove apparmor
[21:07] <rweeks> <.<
[21:07] <PerlStalker> :-)
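A gentler fix than removing apparmor is to let the libvirt/qemu profile read the ceph config and keyrings; on Ubuntu that usually means adding lines like these to the libvirt-qemu abstraction and reloading apparmor (exact file layout may differ between releases):

    # e.g. in /etc/apparmor.d/abstractions/libvirt-qemu
    /etc/ceph/ceph.conf r,
    /etc/ceph/keyring* r,
    /etc/ceph/ceph.client.*.keyring r,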
[21:07] * loicd (~loic@magenta.dachary.org) has joined #ceph
[21:12] * anna1 (~Adium@c-98-234-186-68.hsd1.ca.comcast.net) has joined #ceph
[21:13] * anna1 (~Adium@c-98-234-186-68.hsd1.ca.comcast.net) has left #ceph
[21:18] * noob (~Adium@c-98-234-186-68.hsd1.ca.comcast.net) has joined #ceph
[21:18] * noob is now known as Guest3643
[21:19] * sagelap1 (~sage@2607:f298:a:607:9def:cff5:2f8c:2076) Quit (Ping timeout: 480 seconds)
[21:21] * ssedov (stas@ssh.deglitch.com) Quit (Read error: Connection reset by peer)
[21:22] * stass (stas@ssh.deglitch.com) has joined #ceph
[21:25] <Guest3643> does anyone know the new syntax of ceph-deploy for creating OSDs?
[21:25] <Guest3643> it seems to have changed in the last couple of days; it now seems to require defining a journal; when i tried, i got an error
[21:25] <Guest3643> ubuntu@client:~/my-admin-sandbox$ ceph-deploy osd ceph1:sdc:sdd ceph2:sdc:sdd
[21:25] <Guest3643> usage: ceph-disk-prepare [-h] [-v] [--cluster NAME] [--cluster-uuid UUID] DISK
[21:25] <Guest3643> Traceback (most recent call last):
[21:25] <Guest3643> File "/usr/local/bin/ceph-deploy", line 9, in <module>
[21:25] <Guest3643> load_entry_point('ceph-deploy==0.0.1', 'console_scripts', 'ceph-deploy')()
[21:25] <Guest3643> File "/home/ubuntu/ceph-deploy/ceph_deploy/cli.py", line 80, in main
[21:25] <Guest3643> return args.func(args)
[21:25] <Guest3643> File "/home/ubuntu/ceph-deploy/ceph_deploy/osd.py", line 202, in osd
[21:25] <Guest3643> journal=journal,
[21:25] <Guest3643> File "/home/ubuntu/ceph-deploy/virtualenv/local/lib/python2.7/site-packages/pushy-0.5.1-py2.7.egg/pushy/protocol/proxy.py", line 255, in <lambda>
[21:25] <Guest3643> (conn.operator(type_, self, args, kwargs))
[21:25] <Guest3643> File "/home/ubuntu/ceph-deploy/virtualenv/local/lib/python2.7/site-packages/pushy-0.5.1-py2.7.egg/pushy/protocol/connection.py", line 66, in operator
[21:25] <Guest3643> return self.send_request(type_, (object, args, kwargs))
[21:25] <Guest3643> File "/home/ubuntu/ceph-deploy/virtualenv/local/lib/python2.7/site-packages/pushy-0.5.1-py2.7.egg/pushy/protocol/baseconnection.py", line 323, in send_request
[21:25] <Guest3643> return self.__handle(m)
[21:25] <Guest3643> File "/home/ubuntu/ceph-deploy/virtualenv/local/lib/python2.7/site-packages/pushy-0.5.1-py2.7.egg/pushy/protocol/baseconnection.py", line 639, in __handle
[21:25] <Guest3643> raise e
[21:25] <Guest3643> pushy.protocol.proxy.ExceptionProxy: Command '['ceph-disk-prepare', '--', '/dev/sdc', '/dev/sdd']' returned non-zero exit status 2
[21:28] * sagelap (~sage@2607:f298:a:607:60be:d9e:7bcb:df9d) has joined #ceph
[21:29] <dmick> Guest3643: please use some pastebin for long things like that, but:
[21:29] <Guest3643> sorry forgot - noob
[21:31] * Guest3643 is now known as tanato
[21:31] <dmick> while there was a new optional arg added to allow for a journal, it's not supposed to be required
[21:32] <miroslav> It seems to error out when not included.
[21:32] <miroslav> And it also errors out when included but with a different (non-syntax) message
[21:38] * EmilienM (~EmilienM@195-132-228-252.rev.numericable.fr) has joined #ceph
[21:39] * EmilienM (~EmilienM@195-132-228-252.rev.numericable.fr) has left #ceph
[21:40] * Leseb_ (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[21:40] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[21:40] * Leseb_ is now known as Leseb
[21:41] * senner (~Wildcard@68-113-232-90.dhcp.stpt.wi.charter.com) has left #ceph
[21:46] <dmick> I can confirm neither form seems to work
[21:50] <rweeks> should anna file a bug?
[21:51] <dmick> sure
[21:52] <dmick> the new code is using split() wrong; just poking at it
[22:13] * tryggvil_ (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[22:13] * tryggvil_ (~tryggvil@rtr1.tolvusky.sip.is) Quit ()
[22:18] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Ping timeout: 480 seconds)
[22:22] <tanato> filed a bug - BUG #3420
[22:22] * MikeMcClurg (~mike@91.224.174.75) has joined #ceph
[22:22] <dmick> tanato: doh! Just pushed a fix :)
[22:23] <dmick> I'll note the SHA1 in the bug
[22:24] * MikeMcClurg1 (~mike@91.224.174.75) has joined #ceph
[22:24] * MikeMcClurg (~mike@91.224.174.75) Quit (Read error: Connection reset by peer)
[22:25] * MikeMcClurg1 (~mike@91.224.174.75) Quit ()
[22:25] * MikeMcClurg (~mike@91.224.174.75) has joined #ceph
[22:28] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[22:30] * loicd (~loic@magenta.dachary.org) has joined #ceph
[22:32] * sagelap1 (~sage@38.122.20.226) has joined #ceph
[22:34] * MikeMcClurg (~mike@91.224.174.75) Quit (Ping timeout: 480 seconds)
[22:35] * sagelap (~sage@2607:f298:a:607:60be:d9e:7bcb:df9d) Quit (Ping timeout: 480 seconds)
[22:57] * MikeMcClurg (~mike@91.224.174.75) has joined #ceph
[23:05] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) Quit (Quit: Leaving)
[23:05] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) has joined #ceph
[23:07] * calebamiles (~caleb@c-24-128-194-192.hsd1.vt.comcast.net) has left #ceph
[23:08] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Quit: Leaving)
[23:16] <buck> I have a packaging query if anyone is about
[23:16] <buck> for the java packages
[23:19] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[23:22] * nwatkins (~Adium@soenat3.cse.ucsc.edu) has joined #ceph
[23:22] * vata (~vata@208.88.110.46) Quit (Quit: Leaving.)
[23:32] * andreask (~andreas@93-189-29-152.rev.ipax.at) has joined #ceph
[23:36] <dmick> buck: go ahead and ask, can't hurt
[23:42] <buck> So we have tests for the libcephfs-java code
[23:42] <buck> They depend on JUnit (a java package)
[23:42] * gohko (~gohko@natter.interq.or.jp) Quit (Read error: Connection reset by peer)
[23:43] <buck> We're rewriting the tests to use the version of JUnit that ships with Ubuntu 12.04 (which is crazy old)
[23:43] * gohko (~gohko@natter.interq.or.jp) has joined #ceph
[23:43] <buck> But the question is about how we deliver the tests to users
[23:43] <dmick> do we deliver any tests to users?
[23:45] <buck> I'm not sure. My line of reasoning is that the java work is being done in preparation for a focus on Hadoop over Ceph. Hadoop ships tests, so I figured it would be good to have a way to test the java bindings independent of Hadoop. Seemed like a good way to avoid issues being misreported.
[23:45] <dmick> doesn't look like the tests I'm familiar with are packaged (I say this from grepping debian/*install)
[23:45] <dmick> they're in the source tree, and available from a build there, but my impression is that we don't package them
[23:45] <dmick> I have been wrong before...
[23:46] <buck> Hmmm....I just ran off and started coding, assuming we'd ship them. At least you're speaking from a point of experience......
[23:46] <dmick> test_librbd, for instance, is not packaged I don't think
[23:46] <buck> Well, the Makefile stuff is small, so not a huge deal, but you bring out the salient point of sorting out if we *want* to deliver the tests
[23:47] <dmick> I definitely think they should be in the tree, buildable and runnable
[23:47] <buck> So with test_librbd. If we wanted a user to run that and they installed from packages, how does that work?
[23:47] <buck> They compile and run the tests themselves?
[23:47] <dmick> yep
[23:48] <buck> ok......good data point. Thanks.
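For the 'compile and run them yourself' route dmick describes, the rough shape from a source checkout is the usual autotools dance; some of the test programs also need a running cluster (e.g. one started with vstart.sh) and a readable keyring:

    git clone git://github.com/ceph/ceph.git
    cd ceph
    ./autogen.sh && ./configure
    make          # build the tree
    make check    # build and run the bundled test programs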
[23:59] * PerlStalker (~PerlStalk@perlstalker-1-pt.tunnel.tserv8.dal1.ipv6.he.net) Quit (Quit: rcirc on GNU Emacs 24.2.1)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.