#ceph IRC Log


IRC Log for 2011-06-07

Timestamps are in GMT/BST.

[0:04] <Tv> for the channel's benefit: i screwed up a bit, don't expect to run teuthology until i say it's fixed :(
[0:04] <Tv> (rm -rf fail)
[0:06] * aliguori_ (~anthony@32.97.110.59) has joined #ceph
[0:06] <yehudasa> Tv: autobuild screams at one of my commits on the rgw-multipart branch, but I'm having trouble deciphering what went wrong
[0:06] <yehudasa> it just says 'error: Added files'
[0:07] <Tv> yehudasa: the build creates files that are not in .gitignore
[0:07] <Tv> it should list the paths
[0:07] <Tv> yeah, src/rgw_multiparser
[0:08] <yehudasa> Tv: does it compile with --with-debug?
[0:09] <Tv> + ./configure --with-debug --with-radosgw --with-fuse --with-tcmalloc --with-libatomic-ops --with-gtk2
[0:09] <Tv> on the same page...
[0:09] <yehudasa> yeah
[0:09] * aliguori (~anthony@32.97.110.64) Quit (Ping timeout: 480 seconds)
[0:09] <yehudasa> so, right, it created that
[0:09] <yehudasa> so what?
[0:09] <yehudasa> what's missing?
[0:09] <Tv> is it an executable?
[0:10] <yehudasa> yes
[0:10] <Tv> see src/.gitignore, do the same
[0:10] <Tv> basically, "git st" should be silent even after a make
[0:10] <Tv> i mean git status
[0:11] <Tv> sorry that's a local alias i have
[0:11] <yehudasa> ok, thanks
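
For anyone hitting the same autobuild failure: the fix Tv describes is to list the generated binary in src/.gitignore so a full build leaves the working tree clean. A minimal sketch, assuming the usual one-entry-per-artifact layout (only the rgw_multiparser path comes from the log):

    # src/.gitignore -- add the generated binary alongside the existing entries
    rgw_multiparser

    # sanity check: after a build, "git status" should report nothing
    make && git status --porcelain    # expect no output
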
[0:12] <Tv> ok teuthology should be all good again
[0:12] <Tv> still had the relevant files open in emacs, phew
[0:15] <Tv> oh lovely, gitbuilder borked for some reason
[0:15] <Tv> master fails to build, but that shouldn't have removed the binary tarball for it :(
[0:16] <Tv> yehudasa: the last commit on master is yours
[0:17] <yehudasa> Tv: does it cry that the man page is out of date?
[0:17] <Tv> clitests
[0:18] <gregaf> it's whining about the radosgw_admin help text
[0:18] <Tv> yeah, the usage of the command-line tool
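
For context, the clitests compare a tool's recorded usage text against what it actually prints, so changing radosgw_admin's help output breaks the recorded expectation. A sketch of the cram-style format such tests use, with a hypothetical path and abbreviated output:

    # src/test/cli/radosgw_admin/help.t (hypothetical path)
    # two-space-indented lines are the command ($) and its expected output:
      $ radosgw_admin --help
      usage: radosgw_admin <cmd> [options...]
    # when the usage text changes, the expected output here must be updated to match
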
[0:25] <Tv> found the gitbuilder scripting bug
[0:34] * aliguori_ (~anthony@32.97.110.59) Quit (Quit: Ex-Chat)
[1:57] * bchrisman (~Adium@70-35-37-146.static.wiline.com) Quit (Quit: Leaving.)
[2:11] * Tv (~Tv|work@ip-66-33-206-8.dreamhost.com) Quit (Ping timeout: 480 seconds)
[2:22] * greglap (~Adium@ip-66-33-206-8.dreamhost.com) has joined #ceph
[2:22] * greglap (~Adium@ip-66-33-206-8.dreamhost.com) Quit ()
[2:34] * joshd (~joshd@ip-66-33-206-8.dreamhost.com) Quit (Quit: Leaving.)
[2:36] * yoshi (~yoshi@p24092-ipngn1301marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[2:57] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) has joined #ceph
[3:50] * greglap (~Adium@cpe-76-170-84-245.socal.res.rr.com) has joined #ceph
[4:09] * maswan (maswan@kennedy.acc.umu.se) Quit (Ping timeout: 480 seconds)
[4:12] * maswan (maswan@kennedy.acc.umu.se) has joined #ceph
[4:27] * votz (~votz@dhcp0020.grt.resnet.group.UPENN.EDU) has joined #ceph
[5:26] * mtk (~mtk@ool-182c8e6c.dyn.optonline.net) Quit (Ping timeout: 480 seconds)
[5:32] * mtk (~mtk@ool-182c8e6c.dyn.optonline.net) has joined #ceph
[7:35] * jbd (~jbd@ks305592.kimsufi.com) has left #ceph
[8:21] * macana (~ml.macana@159.226.41.129) Quit (Ping timeout: 480 seconds)
[8:22] * macana (~ml.macana@159.226.41.129) has joined #ceph
[9:10] * eternaleye_ (~eternaley@195.215.30.181) has joined #ceph
[9:14] * eternaleye (~eternaley@195.215.30.181) Quit (Read error: Connection reset by peer)
[9:14] * votz (~votz@dhcp0020.grt.resnet.group.UPENN.EDU) Quit (resistance.oftc.net weber.oftc.net)
[9:14] * nolan (~nolan@phong.sigbus.net) Quit (resistance.oftc.net weber.oftc.net)
[9:14] * pruby (~tim@leibniz.catalyst.net.nz) Quit (resistance.oftc.net weber.oftc.net)
[9:14] * dwm (~dwm@vm-shell4.doc.ic.ac.uk) Quit (resistance.oftc.net weber.oftc.net)
[9:14] * u3q (~ben@jupiter.tspigot.net) Quit (resistance.oftc.net weber.oftc.net)
[9:18] * MarkN (~nathan@59.167.240.178) has joined #ceph
[9:18] * pruby (~tim@leibniz.catalyst.net.nz) has joined #ceph
[9:21] * votz (~votz@dhcp0020.grt.resnet.group.UPENN.EDU) has joined #ceph
[9:21] * nolan (~nolan@phong.sigbus.net) has joined #ceph
[9:21] * dwm (~dwm@vm-shell4.doc.ic.ac.uk) has joined #ceph
[9:23] * votz (~votz@dhcp0020.grt.resnet.group.UPENN.EDU) Quit (Ping timeout: 480 seconds)
[9:24] * votz (~votz@dhcp0020.grt.resnet.group.UPENN.EDU) has joined #ceph
[9:26] <failbaitr> 7
[9:33] * dwm (~dwm@vm-shell4.doc.ic.ac.uk) Quit (synthon.oftc.net weber.oftc.net)
[9:33] * nolan (~nolan@phong.sigbus.net) Quit (synthon.oftc.net weber.oftc.net)
[9:36] * nolan (~nolan@phong.sigbus.net) has joined #ceph
[9:36] * dwm (~dwm@vm-shell4.doc.ic.ac.uk) has joined #ceph
[9:58] * jbd (~jbd@ks305592.kimsufi.com) has joined #ceph
[10:05] * allsystemsarego (~allsystem@188.27.167.240) has joined #ceph
[10:05] * pombreda (~Administr@109.128.232.224) Quit (Read error: Connection reset by peer)
[10:06] * pombreda (~Administr@109.128.9.215) has joined #ceph
[11:12] * yoshi (~yoshi@p24092-ipngn1301marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[11:12] * yoshi (~yoshi@p24092-ipngn1301marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[11:14] * yoshi (~yoshi@p24092-ipngn1301marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[11:31] * macana (~ml.macana@159.226.41.129) Quit ()
[11:36] * lxo (~aoliva@186.214.48.26) Quit (Read error: No route to host)
[11:37] * lxo (~aoliva@186.214.48.26) has joined #ceph
[13:15] * mtk (~mtk@ool-182c8e6c.dyn.optonline.net) Quit (Ping timeout: 480 seconds)
[13:18] * yoshi (~yoshi@KD027091032046.ppp-bb.dion.ne.jp) has joined #ceph
[13:20] * mtk (~mtk@ool-182c8e6c.dyn.optonline.net) has joined #ceph
[13:33] * alien_ (~cristi@46.102.246.155) Quit (Remote host closed the connection)
[13:33] * alien_ (~cristi@46.102.246.155) has joined #ceph
[15:46] * bhem (~bhem@1RDAAANAL.tor-irc.dnsbl.oftc.net) has joined #ceph
[15:56] * yoshi (~yoshi@KD027091032046.ppp-bb.dion.ne.jp) Quit (Remote host closed the connection)
[15:57] * bhem (~bhem@1RDAAANAL.tor-irc.dnsbl.oftc.net) Quit (Remote host closed the connection)
[16:01] * bhem (~bhem@659AAB6J9.tor-irc.dnsbl.oftc.net) has joined #ceph
[16:01] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[16:16] * sayotte (~ircuser@208.89.100.110) has joined #ceph
[16:22] <sayotte> I'm unclear on the niche that ceph fills... is it storage abstraction, on top of which I'd build a filesystem? or is it filesystem abstraction, on top of which I'd build an application?
[16:31] <greglap> sayotte: it's both!
[16:32] <greglap> Ceph itself is a POSIX-compliant distributed filesystem with linux kernel and FUSE clients (plus libraries for userspace)
[16:33] <greglap> it is built on top of a distributed object store called RADOS
[16:33] <greglap> be back in 15 minutes
[16:33] * greglap (~Adium@cpe-76-170-84-245.socal.res.rr.com) Quit (Quit: Leaving.)
[16:48] * greglap (~Adium@mobile-198-228-209-057.mycingular.net) has joined #ceph
[16:48] * mtk (~mtk@ool-182c8e6c.dyn.optonline.net) Quit (Remote host closed the connection)
[16:53] * mtk (~mtk@ool-182c8e6c.dyn.optonline.net) has joined #ceph
[17:22] * jbd (~jbd@ks305592.kimsufi.com) has left #ceph
[17:23] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Quit: Ex-Chat)
[17:25] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[17:25] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit ()
[17:25] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[17:26] * bhem (~bhem@659AAB6J9.tor-irc.dnsbl.oftc.net) Quit (Ping timeout: 480 seconds)
[17:40] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:44] * greglap (~Adium@mobile-198-228-209-057.mycingular.net) Quit (Ping timeout: 480 seconds)
[17:49] * jbd (~jbd@ks305592.kimsufi.com) has joined #ceph
[17:58] * joshd (~joshd@ip-66-33-206-8.dreamhost.com) has joined #ceph
[18:07] * pombreda (~Administr@109.128.9.215) Quit (Quit: Leaving.)
[18:10] * pombreda (~Administr@109.128.9.215) has joined #ceph
[18:36] * bchrisman (~Adium@70-35-37-146.static.wiline.com) has joined #ceph
[18:42] * Tv (~Tv|work@ip-66-33-206-8.dreamhost.com) has joined #ceph
[18:49] * pombreda (~Administr@109.128.9.215) has left #ceph
[18:56] * cmccabe (~cmccabe@208.80.64.174) has joined #ceph
[19:12] <sagewk1> standup at 10:15!
[19:13] <cmccabe> ok
[19:16] <sagewk1> er, few minutes
[20:07] * sugoruyo (~george@athedsl-408992.home.otenet.gr) has joined #ceph
[20:40] * sugoruyo (~george@athedsl-408992.home.otenet.gr) Quit (Quit: sugoruyo)
[21:12] <wido> hi
[21:13] <Tv> hello
[21:13] <wido> I'm still seeing my monitor going OOM (4G RES), I want to use the memory profiler to figure out what is going on, but "ceph mon tell 0 heap start_profiler" doesn't work
[21:13] <Tv> wow today has been a quiet day on irc
[21:13] <wido> 'unknown command tell'
[21:13] <wido> yes, pretty quiet I see :)
[21:15] <Tv> wido: the tell thing makes me think your problem is not about heap profiling at all
[21:16] <wido> Tv: Correct, these commands haven't been implemented for the mon
[21:16] <wido> A few weeks ago cmon was also linked against tcmalloc to support profiling, but the subsystem commands haven't been implemented
[21:17] <wido> http://ceph.newdream.net/wiki/Memory_Profiling
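
For reference, the workflow wido is attempting (per the wiki page above): the daemons are linked against tcmalloc, whose heap profiler can be toggled at runtime through the "tell" interface, but at this point only the osd side implemented the heap subcommands. A sketch, with the osd syntax assumed from that wiki page:

    ceph osd tell 0 heap start_profiler   # osds understand the heap subcommands
    ceph osd tell 0 heap dump             # write out a heap profile for analysis
    ceph mon tell 0 heap start_profiler   # fails as wido reports: 'unknown command tell'
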
[21:37] <bchrisman> sagewk1: ahhh should've mentioned quarterly review today.. will catch up tomorrow.
[21:58] * jmlowe (~Adium@129-79-195-139.dhcp-bl.indiana.edu) has joined #ceph
[22:00] <jmlowe> I'd like to try out rbd, but I'm having some trouble with the wiki docs, specifically "ceph class list" yields 'class distribution is no longer handled by the monitor'
[22:00] <jmlowe> any hints?
[22:01] <joshd> jmlowe: looks like the wiki needs to be updated
[22:01] <jmlowe> anywhere to look until that happens (maybe I could update as I work through)?
[22:01] <cmccabe> what brought you to the class distribution stuff?
[22:01] <cmccabe> just trying some commands?
[22:02] <gregaf> there's an RBD class
[22:02] <jmlowe> http://ceph.newdream.net/wiki/Rbd
[22:02] <joshd> it should be pretty much the same, just without the class distribution parts
[22:02] <gregaf> I don't recall if it's installed by default or not
[22:02] <wonko_be> it's enabled by default in the latest ceph
[22:02] <Tv> gregaf: afaik no
[22:02] <Tv> oh
[22:02] <wonko_be> well, let me rephrase that to "it just worked for me"
[22:02] <Tv> i recall our automated tests doing cclass add or such
[22:03] <joshd> Tv: that was before the removal of the class distribution stuff
[22:03] <Tv> ah
[22:03] <jmlowe> natty 0.28.2 btw
[22:03] <yehudasa> yeah, it should work out of the box now
[22:03] <jmlowe> so skip the loading section?
[22:04] <jmlowe> looks like it
[22:04] <gregaf> I think so, yeah
[22:04] <jmlowe> any idea of the first version with it baked in?
[22:05] <gregaf> there's a class mechanism in RADOS that lets you add custom code, and RBD is largely built using that
[22:05] <yehudasa> 0.27 or 0.28
[22:05] * verwilst_ (~verwilst@d51A5B6F6.access.telenet.be) has joined #ceph
[22:05] <gregaf> previously Ceph itself would handle class object distribution but now it doesn't, because it's hard to do right and is solved by other tools ;)
[22:06] <yehudasa> yeah, 0.28
[22:06] <gregaf> so the RBD .so's are just installed by the packaging system after we ripped out the distribution infrastructure :)
[22:07] <Tv> but do they still need to be explicitly activated?
[22:07] <jmlowe> wiki now reads "In order to use http://ceph.newdream.net/wiki/RBD, you have to load the http://ceph.newdream.net/wiki/RBD module in your http://ceph.newdream.net/wiki/Ceph cluster, NOTE: not necessary with version >= 0.28."
[22:08] <yehudasa> Tv: I'd assume no
[22:09] <wonko_be> Tv: without doing a thing on the server-side, rbd just works in recent versions
[22:09] <yehudasa> wonko_be: have you installed fresh or upgraded?
[22:10] <wonko_be> installed fresh
[22:10] <wonko_be> was a week or two ago, but I noticed that the class loading wasn't needed any more, and making and mounting the rbd volumes just worked
[22:10] <yehudasa> Tv: it's loaded lazily anyway.. without client's request it doesn't load
[22:12] <Tv> ahh
[22:12] <Tv> nice
[22:12] * Tv edits todo notes
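
Summing up the thread: from 0.28 on, the rbd class is loaded lazily on first use, so the wiki's class-loading section can be skipped entirely. A sketch of the basic flow on a cluster of that vintage (the image name, monitor address, and exact sysfs fields are assumptions about the kernel client of the era):

    rbd create mydisk --size 1024          # 1024 MB image in the default "rbd" pool
    rbd list                               # confirm the image exists
    # expose it as a block device via the kernel client's sysfs interface
    echo "192.168.0.1 name=admin rbd mydisk" > /sys/bus/rbd/add
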
[22:13] <wonko_be> actually, on this matter, is rbd the way to go to get block devices on clients?
[22:13] <Tv> wonko_be: i don't see what would be better
[22:13] <Tv> wonko_be: it still won't make many writers magically work ;)
[22:13] <wonko_be> Tv: just wanted to be sure i wasn't missing something
[22:14] <Tv> wonko_be: naive rbd use performance may currently be disappointing, as the writes are not buffered (well enough)
[22:14] <wonko_be> actually, the performance isn't that bad
[22:15] <Tv> yeah real world performance is better than microbenchmark performance, afaik
[22:15] <wonko_be> i got 72M per sec write on a three-node cluster on a 1Gb wired network
[22:17] <jmlowe> I have a pair of data centers 50 miles apart connected by one of the world's longest uninterrupted fibers with the smallest elevation change; we had some physicists do some experiments before it was lit, because it was the only place in the world where they could find fiber with these properties
[22:18] <jmlowe> I plan on using rbd when it's ready to provide block storage for vm's that migrate between these data centers around power and cooling events
[22:19] <Tv> jmlowe: between data centers is pretty far out of current usage, beware
[22:20] <Tv> jmlowe: as best i can tell, ceph is really not designed for (relatively) high-latency links
[22:21] <Tv> jmlowe: if you can make it so it only needs to replicate across that link when a scheduled outage is about to happen, then it'll be good
[22:22] <Tv> but it might actually work surprisingly well too -- only trying it out will tell ;)
[22:22] <jmlowe> we have a pair of cisco nexus 7k with 4x10GigE trunked; you can literally shine a flashlight in one fiber and see it 50 miles away on the other end, node<->switch<-fiber->switch<->node, latency is 1.25ms vs 0.4ms for the same data center
[22:22] <gregaf> 50 miles of uninterrupted fiber should be fine latency-wise, I think?
[22:22] <gregaf> bandwidth is what would worry me
[22:23] <jmlowe> we have 40Gig today; if they ever get those 40G optics usable we will go up to 1.6Tb/s
[22:23] <Tv> jmlowe: sounds pretty good
[22:23] <gregaf> but if it was worth laying 50 miles of fiber I really hope you laid enough to have jealousy-inspiring bandwidth ;)
[22:23] <jmlowe> your taxpayer dollars at work
[22:24] <Tv> jmlowe: my worry is more that something in the ceph architecture might make you take that lower latency hit even when it's not strictly needed
[22:24] <Tv> jmlowe: but your lower latency is pretty darn sweet ;)
[22:25] <Tv> btw our office to nearby data center is about that latency
[22:25] <Tv> if anyone here wants to run a vm storing stuff at the dc, that'd be a good demo
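
Background on Tv's placement concern: which replicas land in which data center is governed by the CRUSH map, so cross-site write traffic follows from the placement rule rather than from luck. A hedged sketch of a rule keeping one replica per site (the bucket names and the two-datacenter hierarchy are hypothetical, not from this discussion):

    # hypothetical crush map fragment: one replica in each of two datacenters
    rule cross_dc {
        ruleset 1
        type replicated
        min_size 2
        max_size 2
        step take root
        step chooseleaf firstn 0 type datacenter
        step emit
    }
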
[22:27] <jmlowe> http://noc.ipgrid.org/uploads/0f/a7/0fa7a68be097db1410ffc54060e749e6/RT-Network-04-Apr-2011.png
[22:30] <jmlowe> vm's are part of this project https://xsede.org/about;jsessionid=E413571615A1E91AD69EC4D936F37DF3
[22:31] <Tv> pasting a sessionid is probably not a smart idea
[22:31] <Tv> i'm now logged in as you
[22:32] <Tv> log out quick to invalidate that
[22:32] <jmlowe> doh
[22:32] <darkfader> cookieless session? :)
[22:32] <jmlowe> it's the nsf, what did you expect
[22:37] <jmlowe> anyway, thanks for the help
[22:37] * jmlowe (~Adium@129-79-195-139.dhcp-bl.indiana.edu) has left #ceph
[22:38] <darkfader> now he logged out of the internets :)
[22:43] * lxo (~aoliva@186.214.48.26) Quit (Read error: Connection reset by peer)
[22:47] * lxo (~aoliva@186.214.48.26) has joined #ceph
[23:14] * joshd (~joshd@ip-66-33-206-8.dreamhost.com) Quit (Quit: Leaving.)
[23:19] * lxo (~aoliva@186.214.48.26) Quit (Read error: Connection reset by peer)
[23:29] <Tv> http://gitbuilder-gcov-amd64.ceph.newdream.net/ (internal only for now, i still don't know how to work the DH apache configurator Rube Goldberg machine)
[23:39] * lxo (~aoliva@186.214.52.63) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.