#ceph IRC Log


IRC Log for 2012-10-19

Timestamps are in GMT/BST.

[0:00] * stan_theman (~stan_them@173.208.221.221) has left #ceph
[0:00] <slang> sage: I can just do mflags == O_whatever, I was shooting for something more generic there
[0:01] <slang> sage: apparently on gnu/hurd, the O_RDONLY/WRONLY/RDWR bits can be or-ed together
[0:01] * slang . o O ( not that we will ever run Ceph on gnu/hurd )
[0:02] * nwatkins2 (~Adium@soenat3.cse.ucsc.edu) has joined #ceph
[0:02] * nwatkins3 (~Adium@soenat3.cse.ucsc.edu) has joined #ceph
[0:02] * nwatkins1 (~Adium@soenat3.cse.ucsc.edu) Quit (Read error: Connection reset by peer)
[0:03] <slang> but yeah, I guess (fmode & O_ACCMODE) == O_xxx is more intuitive
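As an aside, a minimal C sketch of the point slang and sage are settling (assuming the usual Linux flag values, where O_RDONLY is 0); this is only an illustration, not the actual Ceph client code:

    #include <fcntl.h>
    #include <stdbool.h>

    /* O_RDONLY is 0, so (flags & O_RDONLY) can never be a useful test;
     * the access-mode bits have to be masked out and then compared. */
    static bool opened_read_only(int flags)
    {
        return (flags & O_ACCMODE) == O_RDONLY;
    }

    static bool opened_for_write(int flags)
    {
        int mode = flags & O_ACCMODE;
        return mode == O_WRONLY || mode == O_RDWR;
    }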
[0:04] * verwilst (~verwilst@dD5769628.access.telenet.be) has joined #ceph
[0:05] * Ryan_Lane (~Adium@63.133.198.91) Quit (Quit: Leaving.)
[0:05] * scuttlemonkey (~scuttlemo@63.133.198.36) Quit (Quit: This computer has gone to sleep)
[0:07] * amatter (~amatter@209.63.136.133) has joined #ceph
[0:10] * nwatkins2 (~Adium@soenat3.cse.ucsc.edu) Quit (Ping timeout: 480 seconds)
[0:12] <tziOm> are there any plans for getting rid of the ceph.conf replication/configuration process?
[0:13] <tziOm> I am working a little with a netboot/pxe setup where I check md uuid, compare to osd dump --format=json, make config from that and so on..
[0:14] <Tv_> tziOm: a modern ceph.conf contains fsid, mon_host and tunables you need for performance, i don't see how to get rid of much there..
[0:15] <tziOm> it includes all osds, all mons and so on..
[0:15] <tziOm> fsid..?
[0:16] <Tv_> tziOm: that's why i said "modern"; mkcephfs ain't modern
[0:16] <tziOm> so point me to a "modern" example
[0:16] <Tv_> http://thread.gmane.org/gmane.comp.file-systems.ceph.devel/9697
[0:16] * The_Bishop (~bishop@2001:470:50b6:0:5471:3349:17ea:83c1) Quit (Ping timeout: 480 seconds)
[0:17] * synapsr (~synapsr@63.133.198.91) Quit (Remote host closed the connection)
[0:17] <slang> sage: rebased and pushed those changes
[0:17] <Tv_> tziOm: or the chef deployment stuff at http://ceph.com/docs/master/install/chef/ http://ceph.com/docs/master/config-cluster/chef/
[0:18] <sage> slang: i'm worried that ll_open assert will trigger, but we'll see.. did pjd hit it?
[0:18] <Anticimex> todin: are you (inktank) developing ceph ready front ends?
[0:18] <tziOm> yeah..
[0:18] <sage> if we do, there's a problem, so it's not necessarily bad, but :(
[0:18] <sage> slang: otherwise looks good!
[0:18] <rweeks> Anticimex: could you be more specific?
[0:18] <Anticimex> todin: ie, ceph as storage ("southbound"), and NFS/iSCSI/FCoE "northbound"
[0:18] <tziOm> Tv_, I dont see how this is different except using a script for it..
[0:19] <Anticimex> i was asking about this a few weeks ago
[0:19] <Tv_> tziOm: for starters, there's no [osd.42] sections
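For reference, the kind of stripped-down ceph.conf Tv_ is describing looks roughly like this (the fsid and addresses are placeholders):

    [global]
        fsid = 00000000-0000-0000-0000-000000000000
        mon host = 192.168.0.1, 192.168.0.2, 192.168.0.3
        ; plus any performance tunables; no per-daemon [osd.N] sections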
[0:19] <Anticimex> to compete with general storage systems
[0:19] <rweeks> that is not a focus today, no, Anticimex
[0:19] <Anticimex> and configurations with ssd/servers etc
[0:19] <Anticimex> okay
[0:19] <tziOm> Tv_, this does not show in the link you sent..
[0:19] <slang> sage: I didn't test with pjd :-)
[0:19] <Anticimex> im looking for this :) or a "certified" bill of materials
[0:19] <slang> sage: I'll start it, takes a while to run...
[0:19] <sage> k
[0:20] <Anticimex> rweeks: more into integrating ceph with cloud environments?
[0:20] <Anticimex> object storage, etc
[0:20] <Anticimex> then?
[0:20] <rweeks> right now, yes, that's more of our focus.
[0:20] <Anticimex> i see the consulting services matrix
[0:20] * synapsr (~synapsr@63.133.198.91) has joined #ceph
[0:21] * loicd (~loic@63.133.198.91) Quit (Quit: Leaving.)
[0:21] <rweeks> Yes. Inktank does consulting services around Ceph deployments, and also support for deployments.
[0:21] <rweeks> where are you at in the world?
[0:21] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Quit: Leaving)
[0:22] <Anticimex> sweden, europe
[0:22] <Anticimex> so a bit from the valley :)
[0:22] <Anticimex> have been on #ceph irc channel since fall of 2006
[0:23] <Anticimex> i recall sage explaining i had to wait until early summer of 2007 for production ready code. :-D
[0:23] * BManojlovic (~steki@212.200.240.42) Quit (Quit: I'm off, and you all do whatever you want...)
[0:23] <sage> hehe
[0:23] <sage> those were the days!
[0:23] <rweeks> well, I can put you in touch with one of our business development folks if you want to talk about consulting services.
[0:23] <Anticimex> sage: :)
[0:23] <rweeks> or you can harass the developers here. Either is good.
[0:24] <tziOm> hmm..
[0:24] <Anticimex> rweeks: what i really would like is a BOM and code package to deploy what i described above, and then of course be able to support such installations somehow
[0:24] <Anticimex> but i believe i should hit wikis/ml a bit more
[0:24] <Anticimex> or just sit down and draw/sketch a little on where bottlenecks are
[0:24] <rweeks> I believe that we could probably help you with that, and I also believe we will eventually get to something like a standard BOM, but we're not there yet.
[0:25] * The_Bishop (~bishop@2001:470:50b6:0:d49b:f6bc:f76b:9fc2) has joined #ceph
[0:25] <Anticimex> since ceph lives in userspace, i wonder how much of a performance hit comes from network I/O passing through kernel
[0:25] <rweeks> ceph doesn't HAVE to live in userspace
[0:25] <Anticimex> (jumping from one thing to another)
[0:25] <Anticimex> hmm, OSD's, yes?
[0:25] <rweeks> hm
[0:25] <rweeks> sage? I didn't view OSDs as being userspace
[0:25] <Anticimex> perhaps it's not an issue
[0:26] <Tv_> i will guarantee you talking to the NIC isn't the slowest part of an OSD
[0:26] <Anticimex> right, maybe it doesn't matter
[0:26] <gregaf> rweeks: it's all userspace; the only things in the kernel are the kernel cephfs and rbd clients
[0:26] <rweeks> I see
[0:26] <rweeks> ok, I had some things backwards.
[0:26] <Anticimex> i just happen to know the linux kernel has pretty high latency and #instructions per packet, compared to.. more optimized code :)
[0:26] <gregaf> and yeah, we aren't nearly to the point where network traffic matters
[0:26] <Anticimex> gregaf: gotcha
[0:26] <tziOm> rweeks, and you are working on this?!
[0:27] <rweeks> tziOm: I'm not a dev
[0:27] <rweeks> I work in marketing!
[0:27] * cdblack (86868949@ircip4.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[0:27] <rweeks> I just happen to know how to use IRC and do technical things.
[0:27] <tziOm> oh my god.. I almost had a heart attack
[0:27] * rweeks grins
[0:27] <rweeks> sorry for any alarm
[0:27] <rweeks> I've been at Inktank all of 3 weeks now, so I'm still learning
[0:28] <Anticimex> it's great that companies are starting up what inktank is doing imo
[0:28] <Anticimex> much needed :)
[0:28] <Anticimex> now time to hit bed, TZ difference and all
[0:28] <tziOm> how many developers are full time now?
[0:29] <tziOm> Linus wrote git in a couple of weeks... step up the pace, guys! ;)
[0:33] <rweeks> Dev/QA is over half the company, I think
[0:37] * scuttlemonkey (~scuttlemo@63.133.198.36) has joined #ceph
[0:43] <tziOm> You have QA? ;-)
[0:44] <rweeks> hush, you
[0:44] <rweeks> ;)
[0:45] * tnt (~tnt@246.121-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[0:47] * stass (stas@ssh.deglitch.com) Quit (Ping timeout: 480 seconds)
[0:47] * loicd (~loic@63.133.198.91) has joined #ceph
[0:51] * tziOm (~bjornar@ti0099a340-dhcp0778.bb.online.no) Quit (Remote host closed the connection)
[0:56] * stass (stas@ssh.deglitch.com) has joined #ceph
[1:02] * loicd (~loic@63.133.198.91) Quit (Ping timeout: 480 seconds)
[1:05] * loicd (~loic@63.133.198.91) has joined #ceph
[1:08] * Tv_ (~tv@2607:f298:a:607:bc4d:663f:aa67:a0da) Quit (Quit: Tv_)
[1:10] * Ryan_Lane (~Adium@63.133.198.91) has joined #ceph
[1:12] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Quit: Leseb)
[1:20] * lofejndif (~lsqavnbok@04ZAAAWFY.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[1:23] * rweeks (~rweeks@c-98-234-186-68.hsd1.ca.comcast.net) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[1:23] * verwilst (~verwilst@dD5769628.access.telenet.be) Quit (Quit: Ex-Chat)
[1:23] * gaveen (~gaveen@112.134.113.16) Quit (Remote host closed the connection)
[1:26] * Kioob (~kioob@luuna.daevel.fr) Quit (Quit: Leaving.)
[1:27] * Kioob (~kioob@luuna.daevel.fr) has joined #ceph
[1:32] * synapsr (~synapsr@63.133.198.91) Quit (Remote host closed the connection)
[1:35] * scuttlemonkey (~scuttlemo@63.133.198.36) Quit (Quit: This computer has gone to sleep)
[1:35] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[1:36] * Leseb_ (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[1:36] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[1:36] * Leseb_ is now known as Leseb
[1:36] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit ()
[1:41] * loicd (~loic@63.133.198.91) Quit (Quit: Leaving.)
[1:42] <sage> slang: did pjd pass?
[1:48] * Ryan_Lane (~Adium@63.133.198.91) Quit (Quit: Leaving.)
[1:50] * dty_ (~derek@129-2-129-154.wireless.umd.edu) has joined #ceph
[1:52] * dty_ (~derek@129-2-129-154.wireless.umd.edu) Quit ()
[1:55] * dty_ (~derek@129-2-129-154.wireless.umd.edu) has joined #ceph
[1:57] * dty (~derek@testproxy.umiacs.umd.edu) Quit (Ping timeout: 480 seconds)
[1:57] * dty_ is now known as dty
[2:02] * dty (~derek@129-2-129-154.wireless.umd.edu) Quit (Quit: dty)
[2:05] * Kioob (~kioob@luuna.daevel.fr) Quit (Quit: Leaving.)
[2:09] * AaronSchulz (~chatzilla@216.38.130.166) has joined #ceph
[2:14] <AaronSchulz> are there any bugs or usage patterns that cause metadata values to be duplicated and separated by a comma?
[2:15] <gregaf> AaronSchulz: which metadata, where?
[2:15] <AaronSchulz> I'm using a rados-gateway (using the swift api)
[2:15] * AaronSchulz is testing with mediawiki
[2:16] <AaronSchulz> anyway, after getting some errors I noticed that the object metadata returned was like:
[2:16] <AaronSchulz> ["sha1base36"]=>
[2:16] <AaronSchulz> string(262) "r2baidcig82irc3n9xfozku4mgo8b96, r2baidcig82irc3n9xfozku4mgo8b96, r2baidcig82irc3n9xfozku4mgo8b96, r2baidcig82irc3n9xfozku4mgo8b96, r2baidcig82irc3n9xfozku4mgo8b96, r2baidcig82irc3n9xfozku4mgo8b96, r2baidcig82irc3n9xfozku4mgo8b96, r2baidcig82irc3n9xfozku4mgo8b96"
[2:16] <AaronSchulz> ["Sha1base36"]=>
[2:16] <AaronSchulz> string(31) "r2baidcig82irc3n9xfozku4mgo8b96"
[2:16] <AaronSchulz> only the Sha1base36 one should be there
[2:17] <AaronSchulz> I never set anything else, but somehow a broken sha1base36 (notice the case) is in there
[2:18] <AaronSchulz> I haven't fully ruled out the PHP cloudfiles binding, though I've never run into this with actual swift before and can't quite see how that would happen
[2:18] <gregaf> I haven't seen any reports of anything like that and also can't see how it would happen ;)
[2:18] <gregaf> but I'm not focused on RGW and it's entirely possible
[2:18] <gregaf> unfortunately yehudasa is the guy you want to talk to about that and he's out
[2:19] <gregaf> pretty sure it'll be news to him though; so he'll have to look into it
[2:19] <gregaf> I assume you've reproduced it on multiple objects?
[2:19] <AaronSchulz> 2 objects so far
[2:20] <gregaf> can you create a bug at tracker.newdream.net with what you've got?
[2:20] * AaronSchulz gets some stack traces in mediawiki
[2:21] <gregaf> and I can direct that to Yehuda tomorrow
[2:21] <AaronSchulz> I'll file something if it turns out not to be something stupid on the client end
[2:21] <gregaf> let me know either way ;)
[2:22] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[2:22] <AaronSchulz> ok, looking further it seems that ceph only stores the broken sha1base36 one
[2:23] <AaronSchulz> MW then tries to add the correct one and POST it, but it fails a client-side check (header value <= 255) since it also sends the broken one
[2:24] * nwatkins3 (~Adium@soenat3.cse.ucsc.edu) Quit (Quit: Leaving.)
[2:24] * alexxy[home] (~alexxy@2001:470:1f14:106::2) Quit (Read error: Connection reset by peer)
[2:29] * Meyer___ (meyer@c64.org) has joined #ceph
[2:29] * Meyer__ (meyer@c64.org) Quit (Read error: Connection reset by peer)
[2:30] <jluis> sage, fyi, assigned #3361 to me and will take care of it first thing in the morning
[2:30] <jluis> unless you have any objections or have started working on it
[2:33] <gregaf> jluis: we may want to talk about that a bit more — go ahead and get rid of that dout if you like, but that won't be enough to close the bug :)
[2:34] <jluis> I know, but will do a quick sweep through both the AuthMonitor and other auth related classes, just to make sure those secrets don't end up in the log
[2:37] <jluis> anyway, heading to bed; good night guys
[2:37] <gregaf> night!
[3:07] * rektide (~rektide@deneb.eldergods.com) Quit (Ping timeout: 480 seconds)
[3:16] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[3:16] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[3:26] * Ryan_Lane (~Adium@wsip-184-191-191-52.sd.sd.cox.net) has joined #ceph
[3:29] * Ryan_Lane (~Adium@wsip-184-191-191-52.sd.sd.cox.net) Quit ()
[3:30] * scuttlemonkey (~scuttlemo@wsip-70-164-119-40.sd.sd.cox.net) has joined #ceph
[3:37] * sjustlaptop (~sam@md60536d0.tmodns.net) Quit (Ping timeout: 480 seconds)
[3:39] * yoshi (~yoshi@p37219-ipngn1701marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[3:40] * sjustlaptop (~sam@24-205-43-118.dhcp.gldl.ca.charter.com) has joined #ceph
[3:44] * sjustlaptop1 (~sam@24-205-43-118.dhcp.gldl.ca.charter.com) has joined #ceph
[3:44] * sjustlaptop (~sam@24-205-43-118.dhcp.gldl.ca.charter.com) Quit (Read error: Connection reset by peer)
[3:52] * sjustlaptop1 (~sam@24-205-43-118.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[4:21] * sjust-phone (~sjust@md60536d0.tmodns.net) Quit (Ping timeout: 480 seconds)
[4:29] * Meths_ (rift@2.25.191.134) has joined #ceph
[4:29] * Meths (rift@2.25.189.30) Quit (Read error: Connection reset by peer)
[4:34] * LarsFronius (~LarsFroni@frnk-590d13d2.pool.mediaWays.net) has joined #ceph
[4:35] * loicd (~loic@12.69.234.201) has joined #ceph
[4:40] * scuttlemonkey_ (~scuttlemo@wsip-70-164-119-40.sd.sd.cox.net) has joined #ceph
[4:40] * scuttlemonkey (~scuttlemo@wsip-70-164-119-40.sd.sd.cox.net) Quit (Read error: Connection reset by peer)
[4:41] * LarsFronius (~LarsFroni@frnk-590d13d2.pool.mediaWays.net) Quit (Quit: LarsFronius)
[4:51] * scuttlemonkey (~scuttlemo@wsip-70-164-119-50.sd.sd.cox.net) has joined #ceph
[4:51] * scuttlemonkey_ (~scuttlemo@wsip-70-164-119-40.sd.sd.cox.net) Quit (Read error: Connection reset by peer)
[4:51] * scuttlemonkey_ (~scuttlemo@wsip-70-164-119-40.sd.sd.cox.net) has joined #ceph
[4:52] * scuttlemonkey (~scuttlemo@wsip-70-164-119-50.sd.sd.cox.net) Quit (Read error: Connection reset by peer)
[4:52] * scuttlemonkey_ (~scuttlemo@wsip-70-164-119-40.sd.sd.cox.net) Quit (Read error: Connection reset by peer)
[4:57] * loicd (~loic@12.69.234.201) Quit (Quit: Leaving.)
[4:57] * sjustlaptop (~sam@24-205-61-15.dhcp.gldl.ca.charter.com) has joined #ceph
[5:04] * chutzpah (~chutz@199.21.234.7) Quit (Quit: Leaving)
[5:22] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[5:23] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[5:43] * mdxi (~mdxi@74-95-29-182-Atlanta.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[5:49] * scuttlemonkey (~scuttlemo@wsip-70-164-119-40.sd.sd.cox.net) has joined #ceph
[5:58] * mdxi (~mdxi@74-95-29-182-Atlanta.hfc.comcastbusiness.net) has joined #ceph
[6:13] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[6:13] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[6:32] * The_Bishop (~bishop@2001:470:50b6:0:d49b:f6bc:f76b:9fc2) Quit (Quit: Who the hell is this Peer? If I catch him I'll reset his connection!)
[6:36] * sjustlaptop (~sam@24-205-61-15.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[7:39] <todin> morning
[7:48] <liiwi> good mornin
[7:52] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[7:55] <prometheanfire> moin
[8:00] * scuttlemonkey (~scuttlemo@wsip-70-164-119-40.sd.sd.cox.net) Quit (Quit: This computer has gone to sleep)
[8:01] * synapsr (~synapsr@c-69-181-244-219.hsd1.ca.comcast.net) has joined #ceph
[8:04] * tnt (~tnt@246.121-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[8:09] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) Quit (Quit: Leaving.)
[8:10] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[8:18] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) Quit (Quit: Leaving.)
[8:21] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) Quit (Quit: Leaving.)
[8:22] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[8:22] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) Quit ()
[8:37] * maelfius (~mdrnstm@140.sub-70-197-142.myvzw.com) has joined #ceph
[9:01] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[9:02] * tnt (~tnt@246.121-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[9:19] * synapsr (~synapsr@c-69-181-244-219.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[9:19] * verwilst (~verwilst@dD5769628.access.telenet.be) has joined #ceph
[9:30] * tziOm (~bjornar@ti0099a340-dhcp0778.bb.online.no) has joined #ceph
[9:34] * tnt (~tnt@212-166-48-236.win.be) has joined #ceph
[9:50] * Leseb (~Leseb@193.172.124.196) has joined #ceph
[9:51] * Leseb (~Leseb@193.172.124.196) Quit (Remote host closed the connection)
[9:51] * Leseb (~Leseb@193.172.124.196) has joined #ceph
[9:54] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[9:54] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[9:56] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[9:56] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[10:04] * Ryan_Lane (~Adium@wsip-184-191-191-52.sd.sd.cox.net) has joined #ceph
[10:09] * MikeMcClurg (~mike@client-7-193.eduroam.oxuni.org.uk) has joined #ceph
[10:23] * Ryan_Lane1 (~Adium@wsip-184-191-191-52.sd.sd.cox.net) has joined #ceph
[10:23] * Ryan_Lane (~Adium@wsip-184-191-191-52.sd.sd.cox.net) Quit (Read error: Connection reset by peer)
[10:25] * Ryan_Lane (~Adium@wsip-184-191-191-52.sd.sd.cox.net) has joined #ceph
[10:25] * Ryan_Lane1 (~Adium@wsip-184-191-191-52.sd.sd.cox.net) Quit (Read error: Connection reset by peer)
[10:26] * Ryan_Lane (~Adium@wsip-184-191-191-52.sd.sd.cox.net) Quit ()
[10:27] * Cube1 (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[10:37] * Cube1 (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[10:49] * tziOm (~bjornar@ti0099a340-dhcp0778.bb.online.no) Quit (Remote host closed the connection)
[10:56] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[10:56] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[11:16] * yoshi (~yoshi@p37219-ipngn1701marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[12:16] * tziOm (~bjornar@194.19.106.242) has joined #ceph
[12:18] * dubbelpunt (dubbelpunt@78-21-157-178.access.telenet.be) has joined #ceph
[12:24] * dubbelpunt (dubbelpunt@78-21-157-178.access.telenet.be) Quit ()
[12:26] * ao (~ao@85.183.4.97) has joined #ceph
[12:28] * ao (~ao@85.183.4.97) Quit ()
[12:44] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[12:48] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) has joined #ceph
[13:02] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) Quit (Ping timeout: 480 seconds)
[13:13] * dubbelpunt (dubbelpunt@78-21-157-178.access.telenet.be) has joined #ceph
[13:13] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) has joined #ceph
[13:37] * Leseb (~Leseb@193.172.124.196) Quit (Quit: Leseb)
[13:57] * LarsFronius (~LarsFroni@frnk-590d13d2.pool.mediaWays.net) has joined #ceph
[13:57] <dubbelpunt> hi guys
[13:57] <dubbelpunt> how can i set acls for anonymous user via s3 command?
[13:57] <dubbelpunt> i tried the following:
[13:57] <dubbelpunt> OwnerID myuser My User
[13:57] <dubbelpunt> Type User Identifier Permission
[13:57] <dubbelpunt> ------ ------------------------------------------------------------------------------------------ ------------
[13:57] <dubbelpunt> UserID myuser (My User) FULL_CONTROL
[13:57] <dubbelpunt> UserID anonymous READ
[13:58] <dubbelpunt> but i always get ERROR: Failed to parse ACLs
[13:59] <dubbelpunt> i edited the acl that i get via s3 -u getacl mybucket
[14:10] * MikeMcClurg (~mike@client-7-193.eduroam.oxuni.org.uk) Quit (Ping timeout: 480 seconds)
[14:12] * LarsFronius (~LarsFroni@frnk-590d13d2.pool.mediaWays.net) Quit (Quit: LarsFronius)
[14:18] * The_Bishop (~bishop@e179020227.adsl.alicedsl.de) has joined #ceph
[14:28] * The_Bishop (~bishop@e179020227.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[14:39] * dubbelpunt (dubbelpunt@78-21-157-178.access.telenet.be) Quit ()
[14:41] <elder> slang, nhm, are you having any trouble with your mail?
[14:46] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has left #ceph
[15:01] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) Quit (Quit: Leaving)
[15:03] * The_Bishop (~bishop@e179020227.adsl.alicedsl.de) has joined #ceph
[15:06] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has joined #ceph
[15:12] * tnt (~tnt@212-166-48-236.win.be) Quit (Ping timeout: 480 seconds)
[15:16] * scalability-junk (~stp@188-193-208-44-dynip.superkabel.de) has joined #ceph
[15:27] * tnt (~tnt@246.121-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[15:38] * yehuda_hm (~yehuda@99-48-179-68.lightspeed.irvnca.sbcglobal.net) Quit (Remote host closed the connection)
[15:46] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[15:49] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[15:50] * tziOm (~bjornar@194.19.106.242) Quit (Remote host closed the connection)
[15:54] * aliguori (~anthony@cpe-70-123-146-246.austin.res.rr.com) has joined #ceph
[15:55] * slang (~slang@ace.ops.newdream.net) has joined #ceph
[15:56] * maelfius (~mdrnstm@140.sub-70-197-142.myvzw.com) Quit (Ping timeout: 480 seconds)
[15:58] * tnt (~tnt@246.121-67-87.adsl-dyn.isp.belgacom.be) Quit (Read error: Connection reset by peer)
[16:00] <slang> elder: I haven't noticed any problems
[16:03] * MikeMcClurg (~mike@212.183.128.23) has joined #ceph
[16:07] * dubbelpunt (dubbelpunt@78-21-157-178.access.telenet.be) has joined #ceph
[16:12] * Leseb (~Leseb@193.172.124.196) has joined #ceph
[16:13] <dubbelpunt> if i access an object via radosgw (in the browser), my browser doesn't interpret the MIME types from the webserver
[16:13] <dubbelpunt> i configured apache as described in the documentation
[16:14] <dubbelpunt> is this a known issue or just a badly configured apache :)?
[16:18] * scuttlemonkey (~scuttlemo@wsip-70-164-119-40.sd.sd.cox.net) has joined #ceph
[16:22] * lofejndif (~lsqavnbok@04ZAAAXDO.tor-irc.dnsbl.oftc.net) has joined #ceph
[16:23] * MikeMcClurg (~mike@212.183.128.23) Quit (Ping timeout: 480 seconds)
[16:26] <dubbelpunt> never mind, found the problem
[16:26] * dubbelpunt (dubbelpunt@78-21-157-178.access.telenet.be) Quit ()
[16:32] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[16:44] * MikeMcClurg (~mike@212.183.128.218) has joined #ceph
[16:51] <scheuk> what do I do if I have inconsistent PGs
[16:51] <scheuk> is there a way to resync the objects from the other OSDs?
[16:51] <scheuk> pgmap v3476159: 2560 pgs: 2558 active+clean, 2 active+clean+inconsistent
[16:52] <scheuk> if I perform a scrub on the Pg:
[16:52] <scheuk> 2012-10-19 09:52:00.141052 osd.3 [ERR] 7.60 osd.7: soid ab0ada60/rb.0.115c.4891432b.00000002cec8/head//7 size 4194304 != known size 2195456
[16:52] <scheuk> 2012-10-19 09:52:00.141070 osd.3 [ERR] 7.60 scrub 0 missing, 1 inconsistent objects
[16:52] <scheuk> 2012-10-19 09:52:00.165456 osd.3 [ERR] 7.60 scrub stat mismatch, got 2891/2893 objects, 0/0 clones, 12120465408/12128854016 bytes.
[16:52] <scheuk> 2012-10-19 09:52:00.165473 osd.3 [ERR] 7.60 scrub 1 errors
[16:53] <calebamiles> dubbelpunt boto provides some nice python tools for working with the s3 api
[16:58] * gaveen (~gaveen@112.134.113.109) has joined #ceph
[16:59] * tziOm (~bjornar@ti0099a340-dhcp0778.bb.online.no) has joined #ceph
[17:05] * maelfius (~mdrnstm@91.sub-70-197-139.myvzw.com) has joined #ceph
[17:05] * alphe (~alphe@200.111.172.138) has joined #ceph
[17:06] * lofejndif (~lsqavnbok@04ZAAAXDO.tor-irc.dnsbl.oftc.net) Quit (Ping timeout: 480 seconds)
[17:08] <alphe> hello all, after more tests on ubuntu 12.10 (64-bit) with ceph 0.53 I now see the size of the ceph-mounted storage properly (df -h), but from windows (XP and 7, 32-bit) I still only see 176GB when looking at the properties; when I browse down the directory it shows all the content and is smooth
[17:08] <alphe> the interface between my windows and linux machines is samba
[17:09] * lofejndif (~lsqavnbok@659AABCJO.tor-irc.dnsbl.oftc.net) has joined #ceph
[17:21] * Ryan_Lane (~Adium@wsip-184-191-191-52.sd.sd.cox.net) has joined #ceph
[17:22] * MikeMcClurg (~mike@212.183.128.218) Quit (Ping timeout: 480 seconds)
[17:25] * markl_ (~mark@tpsit.com) has joined #ceph
[17:25] * markl_ (~mark@tpsit.com) Quit ()
[17:26] * lofejndif (~lsqavnbok@659AABCJO.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[17:26] * Cube1 (~Cube@12.248.40.138) has joined #ceph
[17:29] * LarsFronius (~LarsFroni@frnk-590d13d2.pool.mediaWays.net) has joined #ceph
[17:31] * Ryan_Lane (~Adium@wsip-184-191-191-52.sd.sd.cox.net) Quit (Quit: Leaving.)
[17:36] * Cube2 (~Cube@12.248.40.138) has joined #ceph
[17:38] * LarsFronius (~LarsFroni@frnk-590d13d2.pool.mediaWays.net) Quit (Quit: LarsFronius)
[17:42] * Cube1 (~Cube@12.248.40.138) Quit (Ping timeout: 480 seconds)
[17:52] * rweeks (~rweeks@108-218-203-194.uvs.sntcca.sbcglobal.net) has joined #ceph
[17:56] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[17:59] <jefferai> Do you guys truly only support installing Ceph via Chef?
[17:59] <rweeks> no
[18:00] <rweeks> want basic CLI install instructions? http://ceph.com/docs/master/start/
[18:00] <jefferai> ah, okay
[18:00] <jefferai> because http://ceph.com/docs/master/install/
[18:01] <jefferai> has instructions for installing packages, but then only for deploying with Chef
[18:01] <rweeks> There's also https://github.com/ceph/ceph-deploy
[18:01] <rweeks> if you're on Ubuntu and using Juju there is also a charm in progress for Ceph
[18:01] <jefferai> Nope, on Debian
[18:01] <jefferai> and using Salt
[18:02] <rweeks> hmm, someone in here was talking to me about Salt
[18:02] <rweeks> who was that...
[18:02] <jefferai> could have been me, a long while back
[18:02] <rweeks> no, this was earlier this week
[18:02] <jefferai> oh
[18:02] <jefferai> Salt is really nice
[18:02] <jefferai> I've used Chef
[18:02] <jefferai> I'm definitely preferring salt
[18:02] <jefferai> if nothing else because I like the fact that Salt information is stored in files, which I can pop into Git
[18:03] <jefferai> instead of having the whole couch/solr mess of chef
[18:03] <rweeks> want to write us Salt modules? :)
[18:04] * Leseb (~Leseb@193.172.124.196) Quit (Quit: Leseb)
[18:04] <rweeks> or you might just be able to call ceph-deploy from Salt
[18:05] <jefferai> maybe
[18:05] <jefferai> or, simply use salt to distribute ceph config files and ssh keys
[18:05] <jefferai> which is what I'm doing now (for other ssh keys and other config files)
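A minimal sketch of the kind of Salt state (SLS) jefferai describes, simply pushing a ceph.conf out to minions; the source path is a placeholder:

    /etc/ceph/ceph.conf:
      file.managed:
        - source: salt://ceph/ceph.conf
        - user: root
        - group: root
        - mode: 644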
[18:05] <jefferai> I'm still pretty new with salt and haven't wrapped my head around writing modules yet
[18:06] <jefferai> that said, I'm happy to try deploying with it, and making the results available
[18:06] <jefferai> whether it's how to do it with normal salt operations, or whether it's with a module
[18:06] <rweeks> either would be useful
[18:09] <jefferai> Well, I should have something within two weeks
[18:09] <jefferai> I have all my hardware, getting it racked next week, and Ceph is near the top of the list
[18:10] <rweeks> nice
[18:11] <rweeks> what will you you be using Ceph for?
[18:11] * sbohrer (~sbohrer@173.227.92.65) has joined #ceph
[18:12] <jefferai> Mostly RBD
[18:12] <jefferai> potentially some RADOS
[18:13] <jefferai> some of it raw storage, in the sense of bringing some large RBDs into some kvm vms and exporting over nfs
[18:13] <jefferai> unless that's a bad idea
[18:13] <jefferai> but mostly hosting VM disks
[18:14] <jefferai> a cost-effective "SAN"
[18:15] <rweeks> gotcha
[18:15] * tziOm (~bjornar@ti0099a340-dhcp0778.bb.online.no) Quit (Remote host closed the connection)
[18:15] <rweeks> what kind of VM infrastructure? KVM?
[18:16] <rweeks> er, I missed that KVM line
[18:16] <rweeks> ;)
[18:16] <rweeks> I don't think that exporting large RBDs as NFS exports is a bad idea as long as you're comfortable with whatever NFS server you're using
[18:17] <rweeks> I haven't used the ganesha userspace NFS server but I hear good things about it
[18:18] * Tv_ (~tv@38.122.20.226) has joined #ceph
[18:24] * miroslavk (~miroslavk@173-228-38-131.dsl.dynamic.sonic.net) has joined #ceph
[18:25] <gregaf> yehudasa: did you see bug 3365 come in? It's from AaronSchulz and could be a problem (or maybe not, but I can't debug it)
[18:30] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[18:33] <rweeks> back later.
[18:33] * rweeks (~rweeks@108-218-203-194.uvs.sntcca.sbcglobal.net) has left #ceph
[18:36] * scuttlemonkey (~scuttlemo@wsip-70-164-119-40.sd.sd.cox.net) Quit (Quit: This computer has gone to sleep)
[18:37] <sage> rweeks: there are ceph patches in the dev branch of ganesha, allegedly. from the linuxbox.com guys.
[18:38] * sagelap (~sage@76.89.177.113) Quit (Ping timeout: 480 seconds)
[18:41] * jluis is now known as joao
[18:42] * sagelap (~sage@32.sub-70-197-143.myvzw.com) has joined #ceph
[18:43] * lxo (~aoliva@lxo.user.oftc.net) Quit (Quit: later)
[18:43] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[18:44] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[18:45] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[18:45] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[18:46] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[18:47] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[18:52] <jefferai> sage: what would those ceph patches do, exactly?
[18:53] <jefferai> I would imagine for using cephfs
[18:53] <jefferai> rather than rbd + some other filesystem + nfs
[18:53] <jefferai> I wonder why I wouldn't be comfortable with the normal debian nfs-kernel-server, though...
[18:54] <jefferai> ah, yep
[18:54] <jefferai> for cephfs
[18:54] <jefferai> but I hear that's not really stable, yet...
[18:55] <jefferai> what I was thinking of doing was exporting an RBD device, putting btrfs on that...that way I can expand by adding more RBD devices and adding those devices into my btrfs pool, and also take advantage of btrfs' quota support
[18:55] <jefferai> (eventually, that is)
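A rough sketch of that flow (pool, image, device and mount-point names are placeholders; rbd sizes here are in MB, as the tool of that era expects):

    rbd create bigvol --pool storage --size 1048576   # ~1 TB image
    rbd map bigvol --pool storage                     # appears as /dev/rbd<N>
    mkfs.btrfs /dev/rbd0
    mount /dev/rbd0 /srv/export
    # to grow later: create and map a second image, then
    btrfs device add /dev/rbd1 /srv/export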
[19:05] * sagelap (~sage@32.sub-70-197-143.myvzw.com) Quit (Ping timeout: 480 seconds)
[19:06] * chutzpah (~chutz@199.21.234.7) has joined #ceph
[19:06] <sbohrer> Are there any recommendations on using SSDs in combination with rotational drives to increase performance?
[19:06] <sbohrer> I've seen that you can use an SSD for the journal
[19:07] <sbohrer> Is it possible to make a custom CRUSH map to use an SSD for "primary" storage and replicate to a rotational drive for redundancy?
[19:09] <sjust> sbohrer: such an arrangement is probably feasible, but it will only increase read speed
[19:09] <sjust> using and ssd for a journal is a popular option as well
[19:09] <sjust> *using an
[19:09] <sbohrer> We'd pretty much have a write once read many workload
[19:09] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) has joined #ceph
[19:10] <PerlStalker> Is there a way to see which clients are connect to which monitor?
[19:10] <sjust> sbohrer: that might be a good option then
[19:10] <joshd> PerlStalker: if you turn monitor debugging up maybe, I'd just use netstat
[19:11] <PerlStalker> Fair enough
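A quick sketch of the netstat approach joshd suggests, run on a monitor host (assuming the default monitor port 6789):

    netstat -tn | grep :6789    # connections on the mon port (clients, OSDs and other mons alike)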
[19:13] * cdblack (47c565ea@ircip2.mibbit.com) has joined #ceph
[19:13] * nwatkins1 (~Adium@soenat3.cse.ucsc.edu) has joined #ceph
[19:23] * sagelap1 (~sage@32.sub-70-197-143.myvzw.com) has joined #ceph
[19:24] * LarsFronius (~LarsFroni@frnk-590d13d2.pool.mediaWays.net) has joined #ceph
[19:24] * The_Bishop_ (~bishop@e179013240.adsl.alicedsl.de) has joined #ceph
[19:24] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has left #ceph
[19:25] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[19:28] * maelfius (~mdrnstm@91.sub-70-197-139.myvzw.com) Quit (Quit: Leaving.)
[19:32] * The_Bishop (~bishop@e179020227.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[19:38] * Leseb_ (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[19:38] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[19:38] * Leseb_ is now known as Leseb
[19:53] <cdblack> upgraded to .53, worked as advertised, cluster up & testing in progress :-)
[19:54] * Kioob (~kioob@luuna.daevel.fr) has joined #ceph
[19:54] <nwatkins1> Is there any technical reason for not having a variation of ceph_file_get_replication(..) that takes paths instead of file handles?
[19:54] <sagelap1> slang: if you have a minute can you review the wip-client patch? it's a multi-mds bug and hard to trigger, but the fix behavior is correct
[19:55] <sagelap1> the behavior needs to match ceph-client.git/fs/ceph/caps.c handle_cap_import()
[19:55] * sagelap1 is now known as sagelap
[19:55] <slang> sagelap1: looking
[19:58] <jefferai> sbohrer: I have a similar situation; I'm planning to put most of my read-many write-seldom storage onto four boxes that are populated with SSDs, and put the rest onto 8 boxes populated with spinny disks, and use a CRUSH map to sort it out
[19:58] <jefferai> I'll know in a couple weeks how feasible that ends up being
[20:00] * BManojlovic (~steki@bojanka.net) has joined #ceph
[20:00] <dmick> cdblack: excellent.
[20:11] * sagelap (~sage@32.sub-70-197-143.myvzw.com) Quit (Ping timeout: 480 seconds)
[20:15] * nwl (~levine@atticus.yoyo.org) has left #ceph
[20:16] * nwl (~levine@atticus.yoyo.org) has joined #ceph
[20:23] * tziOm (~bjornar@ti0099a340-dhcp0778.bb.online.no) has joined #ceph
[20:24] * Cube2 (~Cube@12.248.40.138) Quit (Quit: Leaving.)
[20:25] * Cube1 (~Cube@12.248.40.138) has joined #ceph
[20:41] * Cube (~cube@12.248.40.138) Quit (Ping timeout: 480 seconds)
[20:49] <todin> nhm: hi, I tried the 4 Intel 520 SSDs today instead of the 710s; the bandwidth increased to 784MB/s
[20:50] <sjust> todin: interesting
[20:51] <todin> sjust: yep, it was still on the e3 1230v2, next week I will drop a e5 1650 in the node
[20:51] <sjust> try adding debug <subsys> = 0/0
[20:51] <slang> sage: looks good
[20:51] <sjust> for <subsys> in: filestore, journal, osd, ms
[20:52] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[20:52] * Leseb_ (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[20:52] <sjust> , throttle
[20:52] <sjust> , tp
[20:52] <sjust> if you are actually cpu bound, that may help
[20:52] <todin> sjust: how do you mean that?
[20:53] <todin> sjust: I get the board for free, so I will try it
[20:53] <sjust> sorry? I'm just wondering if the extra debugging is hurting performance
[20:53] <sjust> we track more debug logging than we actually spew to the log so that we can spew the last few thousand lines in case of a crash
[20:54] <sjust> but it does have overhead
[20:54] <todin> sjust: I was asking about the syntax in the config file; I did not understand what to add
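For reference, a sketch of how sjust's suggestion would look in ceph.conf (putting it under [osd] is an assumption; 0/0 sets both the log level and the in-memory gather level to zero):

    [osd]
        debug filestore = 0/0
        debug journal = 0/0
        debug osd = 0/0
        debug ms = 0/0
        debug throttle = 0/0
        debug tp = 0/0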
[20:57] * gaveen (~gaveen@112.134.113.109) Quit (Remote host closed the connection)
[20:58] <todin> sjust: I figured it, I will try it, right now
[21:03] * miroslavk (~miroslavk@173-228-38-131.dsl.dynamic.sonic.net) has left #ceph
[21:04] * Kioob (~kioob@luuna.daevel.fr) Quit (Ping timeout: 480 seconds)
[21:05] * Meths_ is now known as Meths
[21:09] <todin> sjust: could it be that the cluster is too small to get all 4 osds under an even load? I always have one which does more than the others.
[21:12] * alphe (~alphe@200.111.172.138) Quit (Quit: Leaving)
[21:14] <todin> sjust: with the changes I get 795MB/s, not sure if that is a real change or just noise in the test
[21:18] * rweeks (~rweeks@c-98-234-186-68.hsd1.ca.comcast.net) has joined #ceph
[21:22] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) has joined #ceph
[21:31] <jefferai> rweeks: wb
[21:32] <rweeks> thx
[21:32] <jefferai> you mentioned that exporting rbd-backed filesystems exposed to a VM over NFS isn't a bad idea if I trust my NFS server -- any reason I shouldn't?
[21:32] <jefferai> I figured I'd use Debian's standard nfs-kernel-server
[21:33] <jefferai> ganesha isn't in debian's repos and I'm generally trying to stay vanilla for ease of management, unless there is a particular reason not to
[21:50] * Tobarja (~athompson@cpe-071-075-064-255.carolina.res.rr.com) Quit (Ping timeout: 480 seconds)
[21:51] <rweeks> well, having come from a big storage vendor who partially invented NFS
[21:52] <rweeks> I previously spent many years making fun of other NFS server impolementations.
[21:52] * aliguori (~anthony@cpe-70-123-146-246.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[21:52] <rweeks> (but spelled right)
[21:56] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) Quit (Quit: tryggvil)
[21:56] * tziOm (~bjornar@ti0099a340-dhcp0778.bb.online.no) Quit (Remote host closed the connection)
[21:58] <rweeks> there's nothing wrong with the nfs kernel server if you have pretty standard export needs, but if you have to, say, set up several hundred exports with differing access controls, it can become a bit of a pain
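For illustration, a couple of /etc/exports lines of the sort rweeks means (networks, hostnames and paths are made up):

    /export/groupA   10.0.1.0/24(rw,sync,no_subtree_check)
    /export/groupB   hostb.example.com(ro,sync,no_subtree_check)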
[21:58] * verwilst (~verwilst@dD5769628.access.telenet.be) Quit (Quit: Ex-Chat)
[22:00] * todin (tuxadero@kudu.in-berlin.de) Quit (Remote host closed the connection)
[22:00] * f4m8 (f4m8@kudu.in-berlin.de) Quit (Read error: Connection reset by peer)
[22:00] * aliguori (~anthony@cpe-70-123-146-246.austin.res.rr.com) has joined #ceph
[22:02] * Kioob (~kioob@82.67.37.138) has joined #ceph
[22:03] <jefferai> rweeks: ah, okay
[22:04] <jefferai> there's still a lot TBD right now
[22:04] <jefferai> it was just a thought
[22:04] <jefferai> because I want to offer easy storage access to various groups
[22:04] <rweeks> what would you be using the nfs exports for?
[22:04] <rweeks> yes, I understand that requirement
[22:04] <jefferai> so my thought was to make the same area available via smb, nfs, etc.
[22:04] <jefferai> general storage
[22:04] <jefferai> not high access
[22:04] <rweeks> right
[22:05] <jefferai> just, a place for people to put large files so that they stop putting them in git repos
[22:05] <rweeks> now: here's something to consider
[22:05] <rweeks> if you take an RBD and share it out as an NFS export
[22:05] <rweeks> those are going to be written as NFS files which are then written as objects
[22:05] * Leseb (~Leseb@62.233.37.15) has joined #ceph
[22:05] <rweeks> if you wanted to read those same files via the RADOS gateway, you're not going to be able to
[22:05] <rweeks> because an S3 GET won't read an NFS file handle, for example.
[22:06] <jefferai> yeah, I figured
[22:06] <rweeks> someone else was asking about this last week: could they write files via NFS and read them via S3 APIs
[22:06] <jefferai> have a better idea?
[22:06] <rweeks> and as things are today, that won't work
[22:07] * Leseb_ (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[22:07] <rweeks> I don't see an optimal solution for that today unless someone has an NFS to S3 gateway
[22:07] <rweeks> hmm
[22:07] * rweeks ponders
[22:07] <jefferai> cephfs, at some point?
[22:07] <rweeks> yes, provided all your clients then mount cephfs
[22:08] <jefferai> Well, I figured it I put the client in a VM
[22:08] <jefferai> then export that over NFS or CIFS
[22:08] <jefferai> ...
[22:08] * Leseb_ (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[22:08] <rweeks> then does your VM client become the bottleneck?
[22:08] <jefferai> oh, probably
[22:09] <jefferai> it's quite possible that I won't need S3 for anytime soon/a while/ ever
[22:09] <rweeks> gotcha
[22:09] <rweeks> so blocks and files, basically
[22:09] <jefferai> so my thought was, if I have enough space, I can start things out with RBD/(some filesystem)/NFS, or cephfs (if it's stable) / NFS
[22:10] <jefferai> and be able to copy the files to some new mechanism later
[22:10] <jefferai> if need be
[22:10] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[22:10] <jefferai> I'm obviously open to better ideas
[22:10] <jefferai> :-)
[22:10] <rweeks> heh
[22:10] * todinlap (~martin@e178207190.adsl.alicedsl.de) has joined #ceph
[22:10] <rweeks> I think either of those are valid ideas.
[22:12] <rweeks> so your constraint with the first one is that you'd effectively have one client mounting a block device and sharing it via NFS
[22:12] <rweeks> with CephFS you could have many clients mounting the FS and exporting it via NFS
[22:12] <jefferai> I see
[22:12] <jefferai> but how solid is cephfs right now?
[22:13] <rweeks> I am not entirely certain
[22:13] <rweeks> I know that Sage said earlier this week that debs were pivoting to work on cephfs
[22:13] * Leseb (~Leseb@62.233.37.15) Quit (Ping timeout: 480 seconds)
[22:13] <jefferai> debs or devs?
[22:13] * Leseb_ is now known as Leseb
[22:13] <rweeks> er. devs. :)
[22:13] <jefferai> thought so :-)
[22:14] <jefferai> I think for now I can use the block store approach
[22:14] <jefferai> and at some point later migrate the data
[22:14] <jefferai> if the service even proves useful
[22:18] * sagelap (~sage@c-98-234-186-68.hsd1.ca.comcast.net) has joined #ceph
[22:19] * sagelap (~sage@c-98-234-186-68.hsd1.ca.comcast.net) has left #ceph
[22:19] * sagelap (~sage@c-98-234-186-68.hsd1.ca.comcast.net) has joined #ceph
[22:26] * todin (tuxadero@kudu.in-berlin.de) has joined #ceph
[22:32] * Tobarja (~athompson@cpe-071-075-064-255.carolina.res.rr.com) has joined #ceph
[22:34] * todinlap (~martin@e178207190.adsl.alicedsl.de) Quit (Quit: Lost terminal)
[22:38] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Quit: Leseb)
[22:38] <dspano> jefferai: I use cephfs in production, but not for anything too heavy. The only time I've had issues is when the network bond on one of the servers is out of order.
[22:39] <jefferai> dspano: what happened?
[22:42] <dspano> jefferai: I have an OSD that's also a mon and mds server because my environment is small.
[22:42] <jefferai> ah
[22:43] <dspano> jefferai: When the nics get weird, the client acts like it's connected, but isn't connected.
[22:44] <jefferai> ah
[22:45] <dspano> jefferai: The cephfs locks up when an application tries to use it, and the only errors I get are the kernel panic cephfs timeouts on the client.
[22:45] <jefferai> yuck
[22:45] <jefferai> which version?
[22:46] <jefferai> like, kernel version/ceph version
[22:46] <dspano> jefferai: Restarting the OSD fixes it.
[22:46] <jefferai> ah
[22:46] <jefferai> but then things have to resync right?
[22:46] <dspano> Ubuntu 12.04/ceph 0.48.2 argonaut.
[22:46] <jefferai> ah
[22:46] <dspano> 3.2.0-32-generic
[22:46] <jefferai> I'd be on debian wheezy, so 3.2
[22:46] <jefferai> with ceph's deb packages
[22:47] <jefferai> I'm fine without cephfs for now, I think
[22:47] <jefferai> and if you don't use cephfs, you don't need MDS nodes right?
[22:47] <dspano> Nope.
[22:47] <sagelap> correct
[22:47] <jefferai> Yeah, so at first I'll *probably* just go without mds and get everything else running smoothly
[22:48] <jefferai> then can add mds and cephfs later as needed
[22:48] <jefferai> especially because by that point I might be able to afford more RAM for the MDS nodes as they'd likely be on the same nodes as the VMs
[22:48] <jefferai> (alternative is same nodes as the OSDs)
[22:48] <dspano> jefferai: That's what I did. It mainly boiled down to my liking cephfs, and not wanting to fool with nfs.
[22:49] <jefferai> dspano: yeah -- I get the draw of cephfs for my own internal use, but I'm not likely to get random users getting cephfs set up on their boxes
[22:49] <jefferai> for the same reason that people have trouble getting other people to use their afs installs
[22:49] <jefferai> it works, you can do it, but people are lazy
[22:49] <jefferai> so I'll probably use cephfs when it makes sense internally
[22:49] <jefferai> but export over CIFS/FTP/NFS
[22:49] <dspano> jefferai: Yeah, I wouldn't offer it to anyone who doesn't know what they're doing.
[22:50] <dspano> jefferai: I only use it internally.
[22:50] <jefferai> right
[22:50] <jefferai> thanks for the info though
[22:50] <dspano> No problem.
[22:50] <jefferai> it's useful to have my decisions reinforced :-)
[22:50] <PerlStalker> Would it make sense to use cephfs as a place to put .iso files for VM installation media?
[22:51] <dspano> When you do try it out, I think you'll be pleased.
[22:51] <jefferai> I've been looking at using Ceph since February, and I'm really excited to actually get working with it...should be able to start next week
[22:51] <jefferai> dspano: oh sure
[22:51] <jefferai> I bet
[22:51] <jefferai> Just, at the start I'm going to focus on the things I know I need
[22:51] <jefferai> then can play with the other bits later
[22:51] <jefferai> and get more familiar with them
[22:52] <jefferai> PerlStalker: probably (from what little I know)
[22:52] <dspano> jefferai: Yeah, it's best to take baby steps.
[22:53] <jefferai> gotta go, see you guys around
[22:53] <PerlStalker> I was toying with it a little when I first got ceph set up but my interest was/is in rbd
[22:53] <jefferai> PerlStalker: me too mainly, but at first, just because until I get my VMs up there's not much else I can do
[22:53] <jefferai> :-)
[22:53] <joshd> PerlStalker: that's the kind of limited use case that cephfs handles better right now. but for vm storage, rbd makes more sense
[22:54] <dspano> I second that.
[22:54] <PerlStalker> Fair enough
[22:55] * tziOm (~bjornar@ti0099a340-dhcp0778.bb.online.no) has joined #ceph
[22:55] <tziOm> sage, any plans to make use of multiple storage "qualities" in a single cluster and being able to provision different-quality space?
[22:57] <iggy> there was an faq about tiered storage at one point I think
[22:57] <joshd> tziOm: you can do that today with different pools using different placement rules
[22:57] <joshd> general tiered storage, where data is moved automatically, would require some work
[22:58] <rweeks> automatic tiering isn't worth the effort, IMHO
[22:59] * verwilst (~verwilst@dD5769628.access.telenet.be) has joined #ceph
[23:05] <tziOm> joshd, ok.
[23:06] <tziOm> joshd, so I could have ssd machines that do rbd?
[23:06] <tziOm> and db
[23:06] <joshd> yeah
[23:06] <tziOm> where is that doc
[23:07] <tziOm> I see that my company could benefit a lot from ceph, so I wonder if there is a possibility to put some $ into the development, and eventually what one could get back
[23:08] <rweeks> I think you'd want to talk to our business development people about that.
[23:08] <sagelap> tziom: you mean investing in your own employees contributing to ceph, or codevelopment with inktank to help us do development?
[23:08] <joshd> tziOm: http://ceph.com/docs/master/cluster-ops/crush-map/#crush-map-parameters
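A rough sketch of what such a CRUSH map fragment could look like, with separate roots and a rule per tier; the ids, names and weights are invented for illustration:

    # ssd and spinning OSDs kept under separate roots
    root ssd {
        id -10
        alg straw
        hash 0
        item osd.0 weight 1.000
        item osd.1 weight 1.000
    }
    root spinning {
        id -11
        alg straw
        hash 0
        item osd.2 weight 1.000
        item osd.3 weight 1.000
    }
    # a rule that keeps a pool's data on the ssd root
    rule ssd {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step choose firstn 0 type osd
        step emit
    }

A pool would then be pointed at that rule with something like "ceph osd pool set <pool> crush_ruleset 1" (the pool name is a placeholder).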
[23:09] <tziOm> we don't have spare development power, so investing in your guys would be preferable
[23:09] <elder> joshd, the size of an rbd segment is capped at 2^32 bytes by the Linux bio system. I.e., RBD_OBJ_ORDER_MAX (or whatever) must be <= 32. Does that need to be documented or enforced anywhere special?
[23:10] <elder> I can easily check it at rbd load time. I'm already adding a check for order >= 9 (again, 512-byte minimum) as a sanity check.
[23:11] * Tobarja1 (~athompson@cpe-071-075-064-255.carolina.res.rr.com) has joined #ceph
[23:12] <joshd> elder: yes, that should be documented in the rbd(8) man page and the kernel rbd ceph docs. I don't think it should be enforced though, or does that come from using the bio system for layering?
[23:12] <elder> Wait, I may be wrong about that. If I'm right it's bio, not layering.
[23:13] <elder> I'll get it straightened out and if it's a limit I will make sure it gets recorded.
[23:13] <joshd> ok
[23:16] * Tobarja (~athompson@cpe-071-075-064-255.carolina.res.rr.com) Quit (Ping timeout: 480 seconds)
[23:16] <elder> The existing code assumes it's 32 bits (actually, 31), but I don't think it has to...
[23:16] <elder> It would simply do weird things if it were bigger. (Not nice)
[23:17] <joshd> elder: right now the cli tool prevents you from using order < 12 or > 25
[23:17] <elder> OK then we're in great shape.
[23:18] <joshd> but we should figure out what the actual limits should be from the kernel side
[23:18] * aliguori (~anthony@cpe-70-123-146-246.austin.res.rr.com) Quit (Remote host closed the connection)
[23:18] <elder> I *will* add something that checks at image load time though, as a defensive measure until we know >32 bits will work.
[23:18] <joshd> sounds good
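For reference, the order being discussed is log2 of the object size, so the limits work out roughly as:

    order  9  ->  2^9  = 512 B   (the sanity-check minimum elder mentions)
    order 12  ->  2^12 = 4 KiB   (the CLI minimum joshd cites)
    order 22  ->  2^22 = 4 MiB   (the rbd default object size)
    order 25  ->  2^25 = 32 MiB  (the CLI maximum joshd cites)
    order 32  ->  2^32 = 4 GiB   (the bio-imposed ceiling elder describes)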
[23:20] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) has joined #ceph
[23:20] <tziOm> joshd, thanks
[23:21] <tziOm> does the crushmap basically mean that one could have tapes in the same system?
[23:22] <tziOm> as long as they can be presented as a filesystem on the osd
[23:26] * Tobarja1 (~athompson@cpe-071-075-064-255.carolina.res.rr.com) Quit (Ping timeout: 480 seconds)
[23:30] <sjust> todin: thanks, that's actually reassuring
[23:38] * Ryan_Lane (~Adium@219.sub-70-197-143.myvzw.com) has joined #ceph
[23:38] * BManojlovic (~steki@bojanka.net) Quit (Ping timeout: 480 seconds)
[23:42] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[23:43] * Tobarja (~athompson@cpe-071-075-064-255.carolina.res.rr.com) has joined #ceph
[23:44] <joshd> tziOm: yes, anything that you can put a filesystem on
[23:44] <tziOm> huh
[23:44] * Tobarja (~athompson@cpe-071-075-064-255.carolina.res.rr.com) Quit (Read error: Connection reset by peer)
[23:44] <tziOm> cool
[23:45] <tziOm> ceph is almost too perfect (when it eventually works =)
[23:46] * Tobarja (~athompson@cpe-071-075-064-255.carolina.res.rr.com) has joined #ceph
[23:49] <tziOm> you are pushing new version every day now
[23:49] <tziOm> :)
[23:49] <joshd> :)
[23:51] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Quit: Leseb)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.