#ceph IRC Log

IRC Log for 2012-05-22

Timestamps are in GMT/BST.

[0:00] * rturk (~rturk@aon.hq.newdream.net) has joined #ceph
[0:00] * rturk1 (~rturk@aon.hq.newdream.net) Quit (Read error: Connection reset by peer)
[0:01] * s[X]_ (~sX]@eth589.qld.adsl.internode.on.net) has joined #ceph
[0:01] * ThoughtCoder (~ThoughtCo@60-240-78-43.static.tpgi.com.au) has joined #ceph
[0:03] * rturk (~rturk@aon.hq.newdream.net) Quit ()
[0:03] * rturk (~rturk@aon.hq.newdream.net) has joined #ceph
[0:35] * ThoughtCoder (~ThoughtCo@60-240-78-43.static.tpgi.com.au) Quit (Ping timeout: 480 seconds)
[0:47] * rturk (~rturk@aon.hq.newdream.net) Quit (Quit: Leaving.)
[0:47] * rturk (~rturk@aon.hq.newdream.net) has joined #ceph
[0:48] * ThoughtCoder (~ThoughtCo@202-173-147-27.mach.com.au) has joined #ceph
[0:48] * rturk (~rturk@aon.hq.newdream.net) Quit (Read error: Connection reset by peer)
[0:48] * rturk (~rturk@aon.hq.newdream.net) has joined #ceph
[0:48] * rturk (~rturk@aon.hq.newdream.net) has left #ceph
[1:02] * fzylogic (~fzylogic@69.170.166.146) Quit (Quit: DreamHost Web Hosting http://www.dreamhost.com)
[1:05] * BManojlovic (~steki@212.200.243.232) Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:16] * Ryan_Lane (~Adium@208-117-193-99.static.idsno.net) Quit (Quit: Leaving.)
[1:29] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[1:30] * aliguori (~anthony@cpe-70-123-145-39.austin.res.rr.com) Quit (Remote host closed the connection)
[1:52] * lofejndif (~lsqavnbok@83TAAF5H9.tor-irc.dnsbl.oftc.net) has joined #ceph
[1:56] * bchrisman (~Adium@108.60.121.114) Quit (Quit: Leaving.)
[2:06] * lofejndif (~lsqavnbok@83TAAF5H9.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[2:08] * ThoughtCoder (~ThoughtCo@202-173-147-27.mach.com.au) Quit (Ping timeout: 480 seconds)
[2:21] * ThoughtCoder (~ThoughtCo@202-173-147-27.mach.com.au) has joined #ceph
[2:22] * Tv_ (~tv@aon.hq.newdream.net) Quit (Ping timeout: 480 seconds)
[2:28] <dmick> sagewk: back, I think
[2:28] <sagewk> yay!
[2:33] * yoshi (~yoshi@p3167-ipngn3601marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[2:42] * mdxi_ (~mdxi@74-95-29-182-Atlanta.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[2:51] * mdxi (~mdxi@74-95-29-182-Atlanta.hfc.comcastbusiness.net) has joined #ceph
[2:52] * aa (~aa@r200-40-114-26.ae-static.anteldata.net.uy) Quit (Remote host closed the connection)
[3:07] * joshd (~joshd@79.255.231.193) Quit (Ping timeout: 480 seconds)
[3:18] * joshd (~joshd@aon.hq.newdream.net) has joined #ceph
[3:30] <elder> joshd, do you know why one end or the other of a connection doesn't know its IP address in the ceph messaging code?
[3:30] <elder> (Or anyone else who happens to be on)
[3:36] <dmick> IIRC that takes a little bit of work to dig out of the connection from the OS; maybe the class doesn't do that extra work?...(haven't looked at all)
[3:36] <elder> Hmm.
[3:36] <dmick> but since you're still working elder
[3:36] <elder> Yes I am.
[3:36] <elder> Question?
[3:36] <dmick> got any good tricks for estimating "private RSS" for a process?
[3:36] <elder> What do you mean?
[3:36] <dmick> i.e. the memory actually used by this proc and only this proc
[3:36] <elder> Is that not available from some sort of kernel interface?
[3:36] <elder> Oh, like non-shared stuff?
[3:37] <dmick> yeah
[3:37] <dmick> this is my current hack
[3:37] <dmick> cat /proc/$1/smaps | grep Private | awk 'BEGIN {total = 0;}
[3:37] <dmick> {total += $2;}
[3:37] <dmick> END {print "Total: ", total, "\n"}'
[3:37] <dmick> and it might even be plausible
[3:37] <elder> Let me study what you just pasted... Why do you need this?
[3:37] <dmick> to compare different webservers, today, but in general I'd like to know this often
[3:38] <dmick> and it seems like one of those eternal questions; I know it was in Solaris
[3:38] <dmick> so it might have been something you'd pondered
[3:39] <elder> So you're looking at the net cost of (in this case) one web server versus another, in terms of incremental memory use?
[3:40] <dmick> yeah, basically
[3:40] <elder> So your Private grep pulls out Private_Clean and Private_Dirty
[3:41] <dmick> if they used the same shared libraries I could just diff the RSS, but since they probably don't, it gets more complicated. Of course there's also the "but how shared is the shared" problem.
[3:41] <dmick> yep
[3:41] * The_Bishop (~bishop@p4FCDF8BE.dip.t-dialin.net) Quit (Quit: Wer zum Teufel ist dieser Peer? Wenn ich den erwische dann werde ich ihm mal die Verbindung resetten!)
[3:42] <elder> Interesting. I haven't looked at this stuff before but just glancing at the code that implements it, it looks like a pretty good way of doing it...
[3:42] <elder> Those are exactly the page maps that have only a single reference count.
[3:42] <dmick> cool
[3:42] <dmick> it seems to correlate well with my simple-malloc-and-touch test
[3:42] <dmick> perhaps I'll adopt this for a while
[3:43] <dmick> someone's written a Perl module to parse it
[3:45] <elder> There's also a "pss" field that seems to account clean or dirty private pages, or a scaled fraction of shared pages.
[3:45] <elder> "Proportional Set Size"
[3:46] <elder> See the comment at line 380 or so in fs/proc/task_mmu.c
[3:46] <elder> Nevermind, here:
[3:46] <elder> * PSS of a process is the count of pages it has in memory, where each
[3:46] <elder> * page is divided by the number of processes sharing it. So if a
[3:46] <elder> * process has 1000 pages all to itself, and 1000 shared with one other
[3:46] <elder> * process, its PSS will be 1500.
[3:46] <elder> *
[3:47] <elder> Back in a second.
[3:49] <dmick> ah, well, that's somewhat useful, although it seems like it would be factoring in faults and not just actual page load
[3:49] <dmick> although...even then...shared pages shouldn't fault
[3:49] <dmick> no, the more I think about it, the less I see how that's useful
[3:50] <dmick> single-use shared lib pages would be accounted for by the smap, which seems appropriate
[3:52] <elder> It's based on the same numbers. It's just that if something is a shared page, its contribution to the total for the process is divided by the number of sharers.
[3:53] <elder> Your number just excludes anything with a refcount > 1
[3:53] <elder> In any case, I think your count is probably just what you were looking for, from my brief look at the code involved.
[4:00] <CristianDM> How can I know the total space of the cluster?
[4:00] * detaos (~quassel@c-50-131-106-101.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:01] <joshd> rados df
[4:01] <CristianDM> Thanks
[4:02] <joshd> oh hey, 'rados df --format=json' works too
[4:02] <CristianDM> The value are in MB?
[4:03] <joshd> looks like kb to me
[4:03] <joshd> ceph -s will tell you too, just overall
[4:04] <CristianDM> Perfect.
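
For reference, the commands from the exchange above, collected in one place (output shapes vary by Ceph version; the values reported by rados df are in kB, as noted above):

    # per-pool usage plus a cluster-wide total
    rados df
    rados df --format=json    # same data, machine-readable
    # overall cluster status, including a one-line usage summary
    ceph -s
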
[4:08] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[4:08] <dmick> elder: yeah, I realize it's all from the same pages, but I'm not sure why you'd want a derated shared page count; marginal cost for a page is either 0 or 1, not .33
[4:09] <dmick> I mean there are mapping structures for overhead, but those are fixed per-page
[4:09] * detaos (~quassel@c-50-131-106-101.hsd1.ca.comcast.net) has joined #ceph
[4:09] <dmick> so yeah, I like this better
[4:10] <elder> Yet counting only non-shared pages doesn't give you an accurate view of what's really in use by a process easier. Your way at least you know *something* exactly. The proportional one could be all shared with a few unique or vice-versa, and you have no way of knowing which.
[4:10] <elder> (s/easier/either)
[4:11] <dmick> yes, but if they're shared with one other, the marginal cost of *this* process is 0 for those pages
[4:11] <dmick> (or pretty close, at least in terms of memory usage)
[4:11] <dmick> the same is true if they're shared with 100
[4:12] * joshd (~joshd@aon.hq.newdream.net) Quit (Quit: Leaving.)
[4:12] <ThoughtCoder> Hey guys, is anyone able to help with mounting a specific pool in ceph? I've read that I need to use 'cephfs' to 'set_layout --pool X' with the numeric ID. Do I need to also do a 'ceph mds add_data_pool poolname' before running the set_layout command?
[4:12] <ThoughtCoder> And, I've been using 'ceph osd dump -o -|grep 'rep size'' to get my numeric pool number, is this the best way to do it?
[4:13] <elder> dmick, we are in agreement. Someone thought the proportional one was worth keeping, I was just trying to explore why...
[4:14] <dmick> yeah, I know, I just think it's misguided.
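
A minimal, self-contained version of the smaps accounting discussed above, assuming a kernel that exposes Private_Clean, Private_Dirty and Pss in /proc/<pid>/smaps; the script name and output labels are illustrative only:

    #!/bin/sh
    # usage: privmem.sh <pid>
    # smaps reports sizes in kB; sum the strictly-private pages and,
    # for comparison, the proportional (Pss) figure discussed above.
    pid=$1
    awk '/^Private_(Clean|Dirty):/ {priv += $2}
         /^Pss:/                   {pss  += $2}
         END {printf "Private: %d kB\nPss: %d kB\n", priv, pss}' "/proc/$pid/smaps"
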
[4:14] * adjohn (~adjohn@69.170.166.146) Quit (Quit: adjohn)
[4:38] * Qten (~qgrasso@ip-121-0-1-110.static.dsl.onqcomms.net) Quit (Read error: Connection reset by peer)
[4:38] * Qten (~qgrasso@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[4:40] * joao (~JL@aon.hq.newdream.net) Quit (Quit: Leaving)
[4:58] * adjohn (~adjohn@50-0-164-218.dsl.dynamic.sonic.net) has joined #ceph
[5:00] * The_Bishop (~bishop@cable-86-56-102-91.cust.telecolumbus.net) has joined #ceph
[5:29] * Qten (~qgrasso@ip-121-0-1-110.static.dsl.onqcomms.net) Quit ()
[5:30] * Qten (~qgrasso@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[6:08] * adjohn (~adjohn@50-0-164-218.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[6:11] * chutzpah (~chutz@216.174.109.254) Quit (Quit: Leaving)
[6:28] * Ryan_Lane (~Adium@228.sub-166-249-196.myvzw.com) has joined #ceph
[6:36] * dmick (~dmick@aon.hq.newdream.net) Quit (Quit: Leaving.)
[6:38] * f4m8_ is now known as f4m8
[6:58] * Theuni (~Theuni@46.253.59.219) has joined #ceph
[7:10] * CristianDM (~CristianD@host217.190-230-240.telecom.net.ar) Quit (Ping timeout: 480 seconds)
[7:10] * CristianDM (~CristianD@host217.190-230-240.telecom.net.ar) has joined #ceph
[7:19] * Ryan_Lane1 (~Adium@99.sub-166-249-194.myvzw.com) has joined #ceph
[7:23] * Ryan_Lane (~Adium@228.sub-166-249-196.myvzw.com) Quit (Ping timeout: 480 seconds)
[7:26] * hijacker (~hijacker@213.91.163.5) Quit (Ping timeout: 480 seconds)
[7:28] * Ryan_Lane (~Adium@8.sub-166-249-192.myvzw.com) has joined #ceph
[7:33] * Ryan_Lane2 (~Adium@145.sub-166-249-195.myvzw.com) has joined #ceph
[7:33] * CristianDM (~CristianD@host217.190-230-240.telecom.net.ar) Quit ()
[7:33] * Ryan_Lane1 (~Adium@99.sub-166-249-194.myvzw.com) Quit (Ping timeout: 480 seconds)
[7:37] * Ryan_Lane (~Adium@8.sub-166-249-192.myvzw.com) Quit (Ping timeout: 480 seconds)
[7:50] * Theuni (~Theuni@46.253.59.219) Quit (Ping timeout: 480 seconds)
[7:57] * Ryan_Lane2 (~Adium@145.sub-166-249-195.myvzw.com) Quit (Quit: Leaving.)
[8:09] * hijacker (~hijacker@213.91.163.5) has joined #ceph
[8:18] * Theuni (~Theuni@195.62.106.100) has joined #ceph
[8:57] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:02] * tjikkun (~tjikkun@82-169-255-84.ip.telfort.nl) Quit (Remote host closed the connection)
[9:05] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) has joined #ceph
[9:05] * s[X]_ (~sX]@eth589.qld.adsl.internode.on.net) Quit (Remote host closed the connection)
[9:06] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) Quit ()
[9:07] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) has joined #ceph
[9:09] * ThoughtCoder (~ThoughtCo@202-173-147-27.mach.com.au) Quit ()
[9:09] <Qten> rbd block size is that 4MB?
[9:30] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[10:08] * renzhi (~renzhi@78.159.123.26) has joined #ceph
[10:09] * brambles (brambles@79.133.200.49) Quit (Remote host closed the connection)
[10:11] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[10:14] * brambles (brambles@79.133.200.49) has joined #ceph
[11:02] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[11:03] * yoshi (~yoshi@p3167-ipngn3601marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[11:10] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[11:15] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[11:29] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[12:01] * alexxy (~alexxy@79.173.81.171) Quit (Remote host closed the connection)
[12:07] * alexxy (~alexxy@79.173.81.171) has joined #ceph
[12:13] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[12:40] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[13:03] * verwilst (~verwilst@d5152FEFB.static.telenet.be) has joined #ceph
[13:20] * cattelan (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[13:49] * ThoughtCoder (ThoughtCod@60-240-78-43.static.tpgi.com.au) has joined #ceph
[14:03] * cattelan (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) has joined #ceph
[14:13] * renzhi (~renzhi@78.159.123.26) Quit (Ping timeout: 480 seconds)
[14:32] * ThoughtCoder (ThoughtCod@60-240-78-43.static.tpgi.com.au) Quit (Ping timeout: 480 seconds)
[14:32] * aliguori (~anthony@cpe-70-123-145-39.austin.res.rr.com) has joined #ceph
[15:02] * jantje_ (jan@paranoid.nl) Quit (Ping timeout: 480 seconds)
[15:06] * nhm (~nh@68.168.168.19) Quit (Read error: Operation timed out)
[15:06] * nhm (~nh@68.168.168.19) has joined #ceph
[15:06] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[15:10] * jantje (jan@paranoid.nl) has joined #ceph
[15:19] * lofejndif (~lsqavnbok@83TAAF54R.tor-irc.dnsbl.oftc.net) has joined #ceph
[15:26] <nhm> good morning #ceph
[15:30] <liiwi> good afternoon
[15:36] * lofejndif (~lsqavnbok@83TAAF54R.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[16:05] * Ryan_Lane (~Adium@135.sub-166-249-196.myvzw.com) has joined #ceph
[16:39] <elder> nhm, I will not be at standup today, please inform others. Headed to Northfield to pick up my daughter.
[16:40] <nhm> elder: will do
[16:40] <elder> My status: implementing sub-state machine for ceph connection socket.
[16:40] <elder> Basically actually moving on some of the stuff I've been pondering and looking at the last couple of days.
[16:41] <elder> nhm, separate question--do you know off hand what the "ip=" option is for ceph clients?
[16:42] <elder> Nevermind.
[16:42] <elder> I think it's simply stating "my IP address is this"
[16:42] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has left #ceph
[16:43] <nhm> elder: yeah, that's what it seems to be for. I assume in some situations it is helpful to manually specify it.
[16:51] <darkfader> i still don't have any understanding of what ceph and multihoming would look like in a later real-world use
[16:52] * lofejndif (~lsqavnbok@28IAAEWZ7.tor-irc.dnsbl.oftc.net) has joined #ceph
[16:52] <darkfader> but since it always works anyway i don't worry yet :)
[17:00] * johnl (~johnl@2a02:1348:14c:1720:d44c:7318:a7ab:69cc) Quit (Remote host closed the connection)
[17:00] * johnl (~johnl@2a02:1348:14c:1720:4c85:c827:3c1d:b094) has joined #ceph
[17:07] * Tv_ (~tv@aon.hq.newdream.net) has joined #ceph
[17:09] * Ryan_Lane (~Adium@135.sub-166-249-196.myvzw.com) Quit (Quit: Leaving.)
[17:11] * aa (~aa@r200-40-114-26.ae-static.anteldata.net.uy) has joined #ceph
[17:24] * joao (~JL@aon.hq.newdream.net) has joined #ceph
[17:25] <joao> hello all
[17:25] <nhm> joao: good morning
[17:25] <joao> hello nhm :)
[17:32] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) has joined #ceph
[17:32] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:35] * verwilst (~verwilst@d5152FEFB.static.telenet.be) Quit (Quit: Ex-Chat)
[17:39] * jmlowe (~Adium@c-71-201-31-207.hsd1.in.comcast.net) has joined #ceph
[17:40] <jmlowe> Has anybody noticed the ceph debian repo being kind of slow?
[17:49] <nhm> jmlowe: how slow is slow?
[17:59] * DonaHolmberg (~Adium@aon.hq.newdream.net) has joined #ceph
[18:00] * DonaHolmberg (~Adium@aon.hq.newdream.net) has left #ceph
[18:06] * Theuni (~Theuni@195.62.106.100) Quit (Ping timeout: 480 seconds)
[18:07] <jmlowe> 50kBs
[18:07] <jmlowe> apt-get upgrade takes about 15 minutes
[18:13] * BManojlovic (~steki@212.200.243.232) has joined #ceph
[18:14] <nhm> yeah, that's pretty awful.
[18:20] <elder> You should upgrade your modem.
[18:20] <elder> (Just kidding.)
[18:21] <yehudasa_> elder: 300 baud should be enough for anyone
[18:21] <elder> 110
[18:22] <elder> Those 1200 baud modems are crazy. What would you do with all that?
[18:22] <yehudasa_> heh
[18:22] <yehudasa_> 110 is way past my time
[18:24] <nhm> Yeah, 1200 baud was the first I owned.
[18:24] <elder> Youngster.
[18:25] <Tv_> i still remember my 1200baud that had half a palm area of 2" heat radiating fins in the back, and it still heated like crazy
[18:26] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[18:26] <elder> Anderson Jacobson baby.
[18:26] <elder> http://blogs.sybase.com/wdudley/wp-content/uploads/2010/09/acoustic_modem.jpg
[18:26] <elder> Half *or* full duplex.
[18:27] <yehudasa_> always reminds me of war games
[18:27] * Glace (~IceChat7@74.121.244.3) has joined #ceph
[18:28] <Tv_> http://home.kpn.nl/a.dikker1/museum/nokiamodem.html
[18:28] <elder> You never forget your first.
[18:28] <nhm> Man, I remember wanting a USR Courier so badly as a kid.
[18:28] <jmlowe> I could only ever afford a sportster
[18:29] <gregaf1> Qten: default RBD block size is 4MB, yes... you can configure it differently if you like, though
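
A hedged example of what "configure it differently" can look like at image-creation time: rbd's --order flag sets the object size as a power of two, with 2^22 bytes (4MB) as the default; the pool and image names here are made up:

    # default 4MB objects (order 22)
    rbd create myimage --pool rbd --size 10240
    # 8MB objects instead (2^23 bytes)
    rbd create myimage-8m --pool rbd --size 10240 --order 23
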
[18:29] <jmlowe> now I send emails that are bigger than my first hard drive
[18:31] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[18:31] <gregaf1> you must mean your attachments, not the actual email content
[18:32] <jmlowe> yeah, but I do it from my cell phone
[18:32] <gregaf1> heh, yeah
[18:33] <nhm> My first hard drive was awesome. full height 5.25 that sounded like a coffee maker.
[18:34] <gregaf1> I think the first hard drive I ever owned myself was 250MB, when my grandmother gave me her old Performa
[18:34] <jmlowe> a colleague of mine has a 5MB drive from a very old IBM system, roughly the size of a basketball, keeps it under his desk as a foot rest
[18:34] <gregaf1> although I'm not sure what was in my dad's 1981 IBM PC that I remember using a little bit
[18:34] <jmlowe> wouldn't have had a hard drive until the ps/2 I think
[18:35] <gregaf1> it was a green-screen but I'm pretty sure it had a hard drive, and I'm pretty sure that was the year
[18:35] <jmlowe> 20MB to 40 MB range If I remember
[18:35] <gregaf1> might be misremembering *shrug*
[18:36] <nhm> gregaf1: could have been upgraded.
[18:36] <gregaf1> went straight from that to a computer where the Turbo button slowed it down :D
[18:36] <nhm> turbo buttons were great
[18:36] <gregaf1> Commander Keen and Terminal Velocity were a lot more fun than the ladder games
[18:38] <nhm> terminal velocity came later though.
[18:38] <elder> 300 MB in a 12-platter removable disk pack: http://home.ntelos.net/~donbryan/Don/Pics/my_disk.jpg
[18:38] <elder> Saweet.
[18:38] <gregaf1> '95; I was 8 and a half at the time :)
[18:39] <elder> We had two of them babies, holding the accounts and files for the 3000 students in the college.
[18:39] <jmlowe> yeah wikipedia says that they had hard drives sometime between 1981 and 1983
[18:40] <nhm> gregaf1: I was 15 then wasting all my time playing videogames. :)
[18:41] <nhm> Honestly I probably play more games from that era these days than I play new games.
[18:42] <gregaf1> games from then are pretty hard on the eyes now :(
[18:43] <gregaf1> http://upload.wikimedia.org/wikipedia/en/f/f3/Descent.png is waaaay less detailed than I remember it being
[18:43] <nhm> that looks about how I remember it. :)
[18:44] <nhm> I mostly go back and play stuff like xcom or master of orion II.
[18:46] <nhm> My son is starting to play the windwaker on our old gamecube.
[18:50] <jmlowe> I've got a few years, I wonder what I will be able to teach him about the good old days https://picasaweb.google.com/104314253101021700373/JackWasBorn?authuser=0&feat=directlink
[18:51] <nhm> Awesome! :)
[18:52] <nhm> jmlowe: How are you holding up on sleep? :)
[18:52] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[18:53] <jmlowe> doing ok, mom takes the night shift I take the 0600 to 1000 and 1700 to 2300 shifts
[18:54] <jmlowe> haven't started on bottles yet so there isn't much point in me being up all night
[18:56] * lofejndif (~lsqavnbok@28IAAEWZ7.tor-irc.dnsbl.oftc.net) Quit (Ping timeout: 480 seconds)
[18:58] <nhm> jmlowe: it's good to have a system like that worked out
[18:58] * lofejndif (~lsqavnbok@19NAAIZQO.tor-irc.dnsbl.oftc.net) has joined #ceph
[18:59] * Glace (~IceChat7@74.121.244.3) Quit (Ping timeout: 480 seconds)
[18:59] * Theuni (~Theuni@82.113.119.108) has joined #ceph
[19:00] * chutzpah (~chutz@216.174.109.254) has joined #ceph
[19:02] * adjohn (~adjohn@69.170.166.146) has joined #ceph
[19:05] * Theuni (~Theuni@82.113.119.108) Quit (Quit: Leaving.)
[19:06] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[19:17] * Ryan_Lane (~Adium@222.sub-166-249-200.myvzw.com) has joined #ceph
[19:29] * joshd (~joshd@aon.hq.newdream.net) has joined #ceph
[19:31] * MarkDude (~MT@ip-64-134-236-53.public.wayport.net) has joined #ceph
[19:40] * Theuni (~Theuni@82.113.119.108) has joined #ceph
[19:45] * Theuni (~Theuni@82.113.119.108) Quit (Quit: Leaving.)
[19:49] * dmick (~dmick@aon.hq.newdream.net) has joined #ceph
[20:28] * lofejndif (~lsqavnbok@19NAAIZQO.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[20:30] * Ryan_Lane (~Adium@222.sub-166-249-200.myvzw.com) Quit (Quit: Leaving.)
[20:32] * CristianDM (~CristianD@host217.190-230-240.telecom.net.ar) has joined #ceph
[20:32] <CristianDM> Hi guys. Great job with the docs. Every day the docs get better
[20:48] * verwilst (~verwilst@dD5769628.access.telenet.be) has joined #ceph
[21:30] * adjohn (~adjohn@69.170.166.146) Quit (Quit: adjohn)
[21:40] * izdubar (~MT@ip-64-134-236-53.public.wayport.net) has joined #ceph
[21:41] * MarkDude (~MT@ip-64-134-236-53.public.wayport.net) Quit (Ping timeout: 480 seconds)
[21:41] * izdubar is now known as MarkDude
[21:53] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[21:54] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has left #ceph
[21:54] * jtang (~jtang@089-101-195084.ntlworld.ie) has joined #ceph
[21:57] <jtang> im kinda surprised there are that many people on this irc channel for #ceph
[21:57] * Theuni (~Theuni@i59F76DB6.versanet.de) has joined #ceph
[21:58] <nhm> jtang: it's a popular project. :)
[21:59] <jtang> heh, I've only recently started to look at ceph again
[21:59] <jtang> especially since now there is a commercial body backing it
[21:59] * jtang has been a long time gpfs user/admin in work
[21:59] <jtang> and ceph looks appealing
[22:00] * stxShadow (~jens@ip-78-94-238-69.unitymediagroup.de) has joined #ceph
[22:02] <nhm> jtang: Never ended up using gpfs. IBM put some bids in a couple of times, but couldn't beat lustre on price for us.
[22:03] <jtang> we had lustre as well in work, but lustre never picked up for us due to costs of maintenance
[22:03] <jtang> even though it was cheaper to 'buy' lustre, it was always more costly in man hours for us
[22:04] <jtang> we've been looking at ceph in work, and i'd like to comment on, erm... it needs more userland documentation
[22:05] <jtang> we're finding it a bit hard to figure out the options and what things mean short of lots of experimentation
[22:06] <nhm> jtang: Yeah, there are some rough edges there still.
[22:07] <nhm> jtang: Hopefully a lot of that should improve now that we are getting a QA group and documentation guys setup.
[22:08] * Theuni (~Theuni@i59F76DB6.versanet.de) Quit (Quit: Leaving.)
[22:08] <jtang> yea i saw that on the website
[22:08] <jtang> i had almost got tempted to apply for one of the posts as well :)
[22:08] <jtang> looks like it could be a nice place to work
[22:08] <nhm> jtang: doesn't hurt to apply! :D
[22:09] <nhm> jtang: it's the best place I've ever worked. :)
[22:09] <jtang> i'd apply if there was a european branch
[22:10] <nhm> jtang: talk to joao, he's working out of Portugal.
[22:10] <joao> did I just became the "European branch"?
[22:10] <nhm> joao: congrats! ;)
[22:11] <joao> can't say if good or bad :x
[22:11] * rturk (~rturk@aon.hq.newdream.net) has joined #ceph
[22:14] <jtang> heh, on a more serious note, i was wondering with regards to the data-placement in ceph, are the clients aware of the network locality of the data? are the clients in ceph currently smart enough to get data from the closest source?
[22:14] <jmlowe> if the money was right and I could stay in the midwest I'd consider http://ceph.com/community/career/system-engineer/
[22:14] <jtang> and are there plans to support non-tcp based networking?
[22:15] * stxShadow (~jens@ip-78-94-238-69.unitymediagroup.de) Quit (Quit: bye bye !! )
[22:15] <jtang> the last point might be a bit of a deal breaker for some HPC facilities
[22:15] <jtang> but i guess ceph isn't just targetted to hpc applications
[22:16] <nhm> jtang: on data locality: I don't think so right now. On RDMA: Maybe, but it depends on who gives us funding. ;)
[22:16] <jtang> okay, i guess an answer is better than no answer
[22:18] <jtang> i'm currently working on digital preservation systems right so i guess evaluating ceph right now from that point of view, then rdma doesnt really matter to me right now
[22:18] <jtang> *sigh* english grammar is failing me today *sigh*
[22:19] <nhm> jtang: we do have at least one user doing ipoib...
[22:19] <pmjdebruijn> isn't there a tcp wrapper for ib which supports rdma?
[22:19] <jtang> yea, but ipoib just sucks compared to native ib or sdp
[22:19] <pmjdebruijn> it's a nasty ld_preload construct though iirc
[22:19] <jtang> pmjdebruijn: yeap ipoib
[22:20] <pmjdebruijn> jtang: ipoib is native
[22:20] <jmlowe> with Amdahl's law in mind I don't see how rdma is going to get you very much
[22:20] <nhm> jtang: yeah, I know. I just wanted to mention that someone is using it successfully.
[22:20] <jtang> sorry i meant sdp (as far as i remember)
[22:20] <pmjdebruijn> ah
[22:20] <pmjdebruijn> libsdp right
[22:20] <pmjdebruijn> but that aint pretty
[22:20] <pmjdebruijn> we are actually testing it over ipoib
[22:20] <pmjdebruijn> the throughput is fairly respectable
[22:21] <pmjdebruijn> we only have 20Gbit equipment, since that's available cheaply now
[22:21] <jtang> i never found ipoib to be too satisfactory, it often gave users the wrong idea about "i think i'll just use that crappy old binary and not recompile to use native ib"
[22:21] <pmjdebruijn> jtang: it's far from optimal
[22:21] <pmjdebruijn> jtang: but it's still beat gigabit ethernet or multiple bonded gigabit ethernets by a longshot
[22:21] <nhm> pmjdebruijn: oh, have you been doing much rados bench testing?
[22:21] <pmjdebruijn> rdb disks
[22:21] <pmjdebruijn> well
[22:21] <pmjdebruijn> "much"
[22:22] <jtang> pmjdebruijn: are you running ceph in a hpc environment?
[22:22] <pmjdebruijn> jtang: nope
[22:22] <jtang> im interested in hearing/seeing if people are using ceph in that environment
[22:22] <jtang> and also if people are using ceph as a storage target for the likes of digital libraries
[22:22] <nhm> pmjdebruijn: how much are you saturating the IB on the osd side?
[22:23] <pmjdebruijn> I don't have any numbers handy now
[22:23] <pmjdebruijn> nhm: but we'll easily go beyond gigabit ethernet
[22:23] <pmjdebruijn> which is why we didn't want gigabit ethernet
[22:24] * bchrisman (~Adium@108.60.121.114) has joined #ceph
[22:24] <nhm> pmjdebruijn: How many osds per node?
[22:25] <pmjdebruijn> we have an osd per disk, and 48 disks per node
[22:25] <jtang> nhm: since you work for ceph, do you know if there are plans to support non-linux based systems both at the server and client side in the distant future?
[22:26] <nhm> pmjdebruijn: I'm toping out right now at around 600MB/s per node on 10G. That's to 5 OSDs per node (5 SSD journals and 5 SSD OSD disks).
[22:26] <pmjdebruijn> nhm: all fairly cheap equipment
[22:26] <pmjdebruijn> nhm: we have all the OSD journals on a single SSD
[22:26] <nhm> pmjdebruijn: I'd like to see that get closer to 800-900MB/s.
[22:26] <jtang> since the servers all run in userland, and it appears the leveldb code might help it be more portable, is it even in the long term goals?
[22:26] <pmjdebruijn> nhm: if you need specific numbers you could give NaioN a bump
[22:27] <nhm> jtang: Not sure. I think there has been some talk about it. One of the other guys might know for sure.
[22:27] <pmjdebruijn> nhm: it would take him a bit to respond though
[22:27] <pmjdebruijn> nhm: 20Gbit IB is cheaper than 10Gbit ethernet ATM
[22:27] <pmjdebruijn> it's just a bit more hassle
[22:27] <nhm> pmjdebruijn: yeah, I know. We ran a lot of IB at my last job.
[22:28] <pmjdebruijn> it's funny how markets work
[22:28] <pmjdebruijn> 20Gbit IB is old/low-end and thus cheap
[22:28] <pmjdebruijn> 10Gbit Ethernet is new/high-end and thus expensive
[22:28] <nhm> pmjdebruijn: I imagne prices should come down once 40G ethernet starts gaining more traction.
[22:29] <gregaf1> jtang: somebody got a BSD port of the userspace stuff working; we're not very good at maintaining it so I dunno if it will build easily but it's possible
[22:29] <jtang> gregaf1: ah ok
[22:29] <nhm> I doubt DDR IB will continue to be cheaper in the long run.
[22:29] <jtang> hmm i just got a call, its 9:30pm in ireland, and its time for a pint of beer
[22:29] <gregaf1> and it hasn't worked in a while but you could once get the FUSE client working on OS X, and I may try and resurrect that once we get back to work on the filesystem again
[22:30] <pmjdebruijn> nhm: 10Gbit IB is silly cheap now
[22:30] <jtang> nhm: 40gb ib has lots of traction ;)
[22:30] <pmjdebruijn> you can get Gigabit ethernet cards that are more expensive that 10Gbit IB
[22:30] <jtang> the difference in price between 20 and 40gb isnt that much if you get memfree hca's
[22:30] <pmjdebruijn> than*
[22:30] <jtang> the switches and cables are pricey thats the only problem
[22:30] <gregaf1> it's less likely you'll see anything working properly on a non-POSIX system in the near-to-mid future, though
[22:30] <pmjdebruijn> yep
[22:31] <Meths> gregaf1: Got a link to the BSD port or info about it?
[22:31] <nhm> pmjdebruijn: sure, but most systems have gigabit built in these days.
[22:31] <pmjdebruijn> nhm: true
[22:31] <jtang> gregaf1: i was jsut curious about its portability, i would naturally gravitate to it more and try to port it to osx for the fun of it
[22:31] <pmjdebruijn> nhm: just skewed pricing just makes me chuckle
[22:31] <gregaf1> Meths: it got merged into our upstream repo a few/several months ago; I'm just not sure that we've maintained compatibility properly everywhere
[22:31] <pmjdebruijn> hmmm
[22:32] <gregaf1> I'd have to search to find out who actually did it, but I bet "ceph BSD" would probably turn up useful info
[22:32] <pmjdebruijn> Apparently I'm having issues putting coherent sentences together
[22:32] <Meths> gregaf1: Cool, thanks.
[22:32] <jtang> gregaf1: if someone wanted to port ceph to another os, which library/package would you target first in the ceph stack?
[22:32] <nhm> pmjdebruijn: Now that 10GE cat6 is starting to become more common, the cable price can't be ignored. Too bad Dell upcharges for the cards. ;P
[22:33] <jtang> well to something that is not linux to run the mds, osd and mon servers that is
[22:33] <gregaf1> jtang: our build system isn't very modular, and I suck at autotools
[22:33] <gregaf1> I made a stab at getting it to at least build on OS X a year or two ago and just sort of ran make and tried to work out the problems as I hit them
[22:33] <gregaf1> I made reasonable progress but never enough to be actually useful
[22:34] <jtang> you mean autohell
[22:34] <jtang> :)
[22:34] <jtang> heh
[22:35] <pmjdebruijn> nhm: indeed
[22:35] <jtang> on that note, im gonna go meet some friends for a beer, nice talking to you guys about this, this has given me some more info to further evaluate the current state of ceph for our uses in work
[22:35] <jtang> bye for now!
[22:35] <gregaf1> but since it's built for Linux there wasn't much that was actually difficult; mostly just redefining specialized variable types into more generic ones and such
[22:36] <gregaf1> Meths: heh, got a link http://ceph.com/releases/v0-39-released/
[22:36] <gregaf1> looks like you want to talk to ssedov (and you, jtang, since he actually made a port work)
[22:36] * adjohn (~adjohn@md80536d0.tmodns.net) has joined #ceph
[22:37] <jtang> gregaf1: cool thanks, must dash now anyway
[22:42] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) Quit (Quit: Leaving)
[22:47] <Meths> gregaf1: Great, thanks.
[22:59] * ThoughtCoder (~ThoughtCo@60-240-78-43.static.tpgi.com.au) has joined #ceph
[23:03] * rturk (~rturk@aon.hq.newdream.net) Quit (Ping timeout: 480 seconds)
[23:04] * rturk (~rturk@aon.hq.newdream.net) has joined #ceph
[23:04] * rturk (~rturk@aon.hq.newdream.net) has left #ceph
[23:05] * adjohn (~adjohn@md80536d0.tmodns.net) Quit (Quit: adjohn)
[23:17] <CristianDM> Hi. Where can I get help about how to add new users to auth rbd for glance and nova?
[23:17] <gregaf1> joshd would know what's available
[23:18] <CristianDM> joshd: Any idea?
[23:25] <CristianDM> I add the user keyring.
[23:25] <CristianDM> But I don't know how to set up, for example, read and write access to a specific pool
[23:26] <joshd> CristianDM: the end of http://glance.openstack.org/configuring.html#configuring-the-rbd-storage-backend has some info
[23:27] <joshd> the relevant part for allowing access to specific pools is --cap osd 'allow rwx pool=images'
[23:28] <joshd> you can add more --cap options to specify other pools, e.g. --cap osd 'allow rwx pool=nova' --cap osd 'allow r pool=glance'
[23:28] <CristianDM> Thanks joshd.
[23:28] <CristianDM> I will check this now
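
A sketch of how those --cap options are usually combined when generating a key for a dedicated client, roughly following the Glance doc linked above; the paths, the client name, and the mon 'allow r' cap are assumptions, not something confirmed in this conversation:

    # create a keyring holding a new key for client.glance with restricted caps
    ceph-authtool --create-keyring /etc/glance/ceph.client.glance.keyring \
        --gen-key --name client.glance \
        --cap mon 'allow r' \
        --cap osd 'allow rwx pool=images'
    # register that identity and its caps with the cluster
    ceph auth add client.glance -i /etc/glance/ceph.client.glance.keyring
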
[23:31] <CristianDM> joshd: Is it really necessary to add [client.glance] to ceph.conf, or is keyring = /etc/ceph/$name.keyring fine?
[23:33] * mkampe1 (~markk@aon.hq.newdream.net) Quit (Remote host closed the connection)
[23:37] <gregaf1> you need to specify the client to use, which I believe for Glance requires putting it into the ceph.conf
[23:37] <gregaf1> CristianDM: ^
[23:37] <CristianDM> But if work with client.admin using global config
[23:37] <CristianDM> I will try :D
[23:38] <gregaf1> CristianDM: well, yes, but then you're using the admin client, and have to have the admin keyring available, and that key can do anything to anybody
[23:39] <gregaf1> granted, Ceph is not really a secure system, but it would be good if your setup is maximally secure ;)
[23:40] <CristianDM> Yes, sorry. I will use the glance user. The idea is to not put additional setup into ceph.conf; with the general rule it may work with the glance user.
[23:40] <CristianDM> If I put into /etc/ceph/client.glance.keyring
[23:40] <CristianDM> and put in general
[23:40] <CristianDM> keyring = /etc/ceph/$name.keyring
[23:41] <CristianDM> So, I don't need a special section [client.glance]
[23:44] <gregaf1> ah, I see... so you need to specify that you're using client.glance, and I don't remember exactly how the glance stuff is done, but if you can pass it in on a command line somewhere or something then no, it doesn't need to be in the conf file
[23:48] <CristianDM> Good, thanks.
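
A minimal sketch of the arrangement being discussed: a single keyring rule in ceph.conf, with the client name supplied by the consuming application rather than by a per-client section. The Glance option names shown are assumptions based on its RBD backend documentation:

    # /etc/ceph/ceph.conf -- $name expands to the client in use, e.g. client.glance
    [global]
        keyring = /etc/ceph/$name.keyring

    # glance-api.conf (assumed option names)
    # rbd_store_user = glance
    # rbd_store_pool = images
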
[23:58] * dfdsav is now known as goedi
[23:59] * s[X] (~sX]@eth589.qld.adsl.internode.on.net) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.