#ceph IRC Log


IRC Log for 2012-09-11

Timestamps are in GMT/BST.

[0:00] <pentabular> noob Q: what exactly does ceph use root ssh for [sans chef]?
[0:00] <dmick> accessing other machines in the cluster and starting daemons there
[0:01] <joshd> just for the init scripts and mkcephfs, really
[0:01] <dmick> it's a cluster-setup-time mech
[0:01] <joao> sagewk, any chance someone with access to ceph-object-corpus can apply a patch in a new branch?
[0:01] <pentabular> ..not for ongoing / background stuff?
[0:01] <dmick> pentabular: no
[0:01] <sagewk> joao: you figured out how to indicate the incompat barrier?
[0:01] <sagewk> what tree do you currently have it in?
[0:02] <sagewk> pentabular: well.. you *can* use it with the init script to start daemons on other hosts, but i'm not sure i'd recommend that
[0:03] <pentabular> I was wondering if I could substitute my own remote exec system
[0:04] <joao> sagewk, looks like creating a 'forward_incompat' directory with the class name on each of the directories will make sure that the encoding test ignores the class for those versions
[0:05] <sagewk> joao: cool
[0:05] <amatter> sjust: ceph osd dump http://pastebin.com/FRSRKgP6
[0:05] <pentabular> sagewk: so if I just have /some command/ that will spawn the same action on a remote host, can I ditch SSH or is it woven into the setup?
[0:05] <sagewk> joao: well, the versions sort, so you just create the version where it changed, so that it sorts properly, and put it there (once)
[0:05] <sagewk> look at ceph_common.sh... it may be possible to swap ssh for something else easily, i forget
[0:06] <sagewk> pentabular: ^
[0:06] <pentabular> thanks! just getting started.
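A rough sketch of the substitution pentabular is after, assuming ceph_common.sh reaches the other hosts by invoking a plain `ssh` binary (worth confirming in the script itself, as sagewk suggests); `my-remote-exec` is a made-up placeholder, not a real tool:
    #!/bin/sh
    # Hypothetical shim saved as "ssh" earlier on PATH than the real ssh,
    # so that "ssh <host> <command...>" is forwarded to your own remote-exec system.
    host="$1"; shift
    exec my-remote-exec "$host" -- "$@"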
[0:06] <sjust> amatter: basically, if all of the stale pgs are actually a couple of stale pools, then removing the pools is by far the easiest way of fixing it
[0:06] <sjust> ceph pg dump is the output I would need to verify that
[0:07] <sjust> rather than ceph osd dump
[0:09] <amatter> sjust: oops sorry http://pastebin.com/3aGZgDpv
[0:09] <amatter> sorry, there are lots of pgs
[0:09] <sjust> yeah
[0:11] <sjust> amatter: is that the whole dump? I only see 219 lines
[0:11] <sjust> pastebin might have truncated it?
[0:11] <amatter> hmm. checking
[0:12] * mgalkiewicz (~mgalkiewi@staticline58611.toya.net.pl) Quit (Remote host closed the connection)
[0:12] * wijet (~wijet@staticline58611.toya.net.pl) Quit (Quit: wijet)
[0:14] <amatter> yes, pastebin says I exceeded the 500kb limit.
[0:14] <sjust> ah
[0:15] <sjust> amatter: you can sftp it to cephdrop@ceph.com
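A minimal sketch of that upload (the local filename is arbitrary):
    ceph pg dump > /tmp/pgdump.txt    # the full pg listing, too big for pastebin
    sftp cephdrop@ceph.com            # then, at the sftp prompt: put /tmp/pgdump.txt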
[0:16] <amatter> sjust: http://www.mattgarner.com/ceph/pgdump.txt
[0:18] <sjust> ok, all of your bad pgs are in pool 7, which appears to be completely empty
[0:18] <sjust> now we just need the name for pool 7
[0:19] <sjust> I think rados lspools gives you the name/number mapping for all of the pools
[0:19] <joshd> ceph osd dump does
[0:19] <sjust> oh, we have that
[0:19] <sjust> one sec
[0:20] <joshd> lspools does not
[0:20] <sjust> ok, removing hs-san-1-la should remove your bad pg problem
[0:21] <sjust> ceph osd pool delete hs-san-1-la
[0:21] <sjust> hang on
[0:23] <sjust> yeah, that looks right
[0:36] * pentabular (~sean@70.231.131.129) has left #ceph
[0:39] <joao> sagewk, yeah, that works
[0:42] <amatter> sjust: there is a directory in the cephfs mapped to that pool, should I remove that first?
[0:42] <sjust> ah, yes
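Pulling the steps above together as a sketch (the grep anchor assumes pool lines in `ceph osd dump` start with "pool <id>", which may vary by version; newer releases also want the pool name twice plus a --yes-i-really-really-mean-it flag):
    ceph osd dump | grep "^pool 7 "      # pool id -> name mapping; pool 7 is hs-san-1-la here
    # ...remove the cephfs directory mapped to the pool first, as discussed above, then:
    ceph osd pool delete hs-san-1-la     # drops the empty pool and its stale pgs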
[0:43] * senner (~Wildcard@68-113-228-222.dhcp.stpt.wi.charter.com) has joined #ceph
[1:06] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[1:07] * gregphone (~gregphone@38.122.20.226) has joined #ceph
[1:08] <gregphone> joao: if you're still around and want to talk about btrfs we're in the Vidyo ceph room
[1:08] <gregphone> dmick: I thought you said you were coming over? :)
[1:08] <joao> oh nice :D
[1:08] <joao> firing up the laptop
[1:08] * BManojlovic (~steki@212.200.241.6) Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:09] <elder> What's the procedure for review for a change to the teuthology tree?
[1:11] * yoshi (~yoshi@p28146-ipngn1701marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[1:12] <joao> gregphone, updating vidyo -_-
[1:14] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[1:19] * wijet (~wijet@staticline57333.toya.net.pl) has joined #ceph
[1:22] * markl (~mark@tpsit.com) Quit (Read error: Connection reset by peer)
[1:33] * Cube (~Adium@12.248.40.138) has joined #ceph
[1:35] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[1:35] * jlogan (~Thunderbi@2600:c00:3010:1:8131:e4ec:e12c:5709) Quit (Ping timeout: 480 seconds)
[1:36] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Quit: Leseb)
[1:37] * lofejndif (~lsqavnbok@1RDAADKJA.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[1:37] <joao> looks like I should run a full 'make all' from time to time to flush out lingering errors
[1:37] <joao> such as this
[1:37] <joao> + [ -s .git/added-files ]
[1:37] <joao> + rm -rf ../out/output/sha1/ba80b5c64ed21494e470830a03767c572390d6d9.tmp
[1:37] <joao> + echo error: Added files:
[1:39] * gregphone (~gregphone@38.122.20.226) Quit (Read error: Connection reset by peer)
[1:42] <joshd> elder: no formal procedure, asking in irc is fine
[1:42] * senner (~Wildcard@68-113-228-222.dhcp.stpt.wi.charter.com) Quit (Quit: Leaving.)
[1:47] <amatter> sjust: thanks, removing that pool solved the issue of the stuck pgs
[1:49] <joao> so, has anyone ever had this kind of issue during a gitbuilder build?
[1:49] <joao> error: Added files:
[1:49] <joao> + cat .git/added-files
[1:49] <joao> src/store-tool
[1:49] <joao> + exit 7
[1:49] <joao> it does appear to be 'installed' though
[1:49] <joao> but the build fails
[1:57] * wijet (~wijet@staticline57333.toya.net.pl) Quit (Quit: wijet)
[1:57] <Tv_> joao: you created a file that is not cleaned up
[1:58] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[2:00] <joao> Tv_, I thought the make targets would be cleaned up automatically though
[2:00] <joao> maybe I missed something; will look into it
[2:00] <joao> thanks
[2:00] <joao> :)
[2:03] <Tv_> joao: nothing is automatic about make ;)
[2:03] <Tv_> joao: perhaps you forgot to add a line to .gitignore?
[2:04] <Tv_> given the name of the file you showed, that sounds likely
[2:04] <Tv_> joao: if src/store-tool really is an executable, see src/.gitignore e.g. /ceph-mon etc lines -- you need one like that
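A sketch of the fix Tv_ describes, assuming src/store-tool really is a built executable:
    echo '/store-tool' >> src/.gitignore   # same pattern as the existing /ceph-mon entry
    git status                             # store-tool should no longer show up as untracked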
[2:07] * Tv_ (~tv@2607:f298:a:607:51e0:e578:bd15:6681) Quit (Quit: Tv_)
[2:11] <joao> well, will give this another shot tomorrow; going to bed
[2:11] <joao> o/
[2:11] <dmick> \m/
[2:14] * aliguori (~anthony@cpe-70-123-140-180.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[2:14] * trhoden (~trhoden@pool-108-28-184-124.washdc.fios.verizon.net) Quit (Quit: trhoden)
[2:18] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Quit: adjohn)
[2:20] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[2:22] <elder> joshd, thanks. Anyone care to review teuthology/wip-specify_rbd_format ?
[2:23] * amatter (~amatter@209.63.136.130) Quit ()
[2:24] * aliguori (~anthony@cpe-70-123-140-180.austin.res.rr.com) has joined #ceph
[2:24] <dmick> erg - vs _
[2:24] <dmick> I'll look
[2:24] <elder> I used a little of both :)
[2:24] <dmick> consistency: soft :)
[2:27] <elder> Two commits to review, by the way.
[2:28] <dmick> yep. both look reasonable to me, modulo the gratuitous "-s to --size" change ;)
[2:28] <dmick> but yeah, those look right
[2:29] <elder> Yeah I know, that belongs in a separate commit. But you approve of that going with it?
[2:30] <dmick> lol. yes that's fine
[2:30] <elder> OK. Thanks. I'll add your Reviewed-by also.
[2:35] <gregaf1> yehudasa: some comments for wip-2923 on github, otherwise looks good
[2:53] * aa (~aa@r200-40-114-26.ae-static.anteldata.net.uy) Quit (Quit: Konversation terminated!)
[3:10] * Ryan_Lane (~Adium@216.38.130.163) Quit (Quit: Leaving.)
[3:11] * chutzpah (~chutz@100.42.98.5) Quit (Quit: Leaving)
[3:24] * yoshi_ (~yoshi@p28146-ipngn1701marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[3:24] * yoshi (~yoshi@p28146-ipngn1701marunouchi.tokyo.ocn.ne.jp) Quit (Read error: Connection reset by peer)
[3:26] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[3:34] * ajm (~ajm@adam.gs) Quit (Quit: ajm)
[3:36] * aliguori (~anthony@cpe-70-123-140-180.austin.res.rr.com) Quit (Remote host closed the connection)
[3:36] * ajm (~ajm@adam.gs) has joined #ceph
[3:39] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[3:39] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) Quit (Quit: Leaving.)
[3:39] * sagelap (~sage@38.122.20.226) Quit (Ping timeout: 480 seconds)
[3:42] * sagelap (~sage@167.sub-70-197-146.myvzw.com) has joined #ceph
[3:48] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) Quit (Quit: Leaving.)
[3:50] * sagelap (~sage@167.sub-70-197-146.myvzw.com) Quit (Ping timeout: 480 seconds)
[3:54] * sagelap (~sage@22.sub-70-197-144.myvzw.com) has joined #ceph
[3:57] * amatter (~amatter@209.63.136.130) has joined #ceph
[4:02] * maelfius (~mdrnstm@66.209.104.107) Quit (Quit: Leaving.)
[4:26] * sagelap (~sage@22.sub-70-197-144.myvzw.com) Quit (Ping timeout: 480 seconds)
[5:28] * dmick (~dmick@2607:f298:a:607:1d88:5b53:8eec:5ac2) Quit (Quit: Leaving.)
[5:57] * nhmlap_ (~nhm@67-220-20-222.usiwireless.com) Quit (Ping timeout: 480 seconds)
[6:21] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[8:00] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[8:04] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[8:05] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[8:05] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[8:06] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[8:21] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[8:23] * yoshi_ (~yoshi@p28146-ipngn1701marunouchi.tokyo.ocn.ne.jp) Quit (Ping timeout: 480 seconds)
[8:27] * deepsa_ (~deepsa@122.172.26.86) has joined #ceph
[8:28] * yoshi (~yoshi@pw126244206006.4.tik.panda-world.ne.jp) has joined #ceph
[8:29] * deepsa (~deepsa@122.172.39.144) Quit (Ping timeout: 480 seconds)
[8:29] * deepsa_ is now known as deepsa
[8:50] * yoshi (~yoshi@pw126244206006.4.tik.panda-world.ne.jp) Quit (Ping timeout: 480 seconds)
[8:52] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[8:59] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Quit: adjohn)
[8:59] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:20] * yoshi (~yoshi@pw126244206006.4.tik.panda-world.ne.jp) has joined #ceph
[9:21] * yoshi (~yoshi@pw126244206006.4.tik.panda-world.ne.jp) Quit (Remote host closed the connection)
[9:22] * yoshi (~yoshi@pw126244206006.4.tik.panda-world.ne.jp) has joined #ceph
[9:32] * yoshi (~yoshi@pw126244206006.4.tik.panda-world.ne.jp) Quit (Remote host closed the connection)
[9:32] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[9:33] * Leseb (~Leseb@193.172.124.196) has joined #ceph
[9:34] * yoshi (~yoshi@pw126244206006.4.tik.panda-world.ne.jp) has joined #ceph
[9:36] * loicd (~loic@2a01:e35:2eba:db10:120b:a9ff:feb7:cce0) has joined #ceph
[9:42] * yoshi_ (~yoshi@p37219-ipngn1701marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[9:47] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[9:48] * ninkotech (~duplo@89.177.137.231) has joined #ceph
[9:48] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[9:50] * yoshi (~yoshi@pw126244206006.4.tik.panda-world.ne.jp) Quit (Ping timeout: 480 seconds)
[10:05] * Cube (~Adium@12.248.40.138) Quit (Quit: Leaving.)
[10:21] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[10:24] * loicd (~loic@2a01:e35:2eba:db10:120b:a9ff:feb7:cce0) Quit (Quit: Leaving.)
[10:46] * loicd (~loic@178.20.50.225) has joined #ceph
[10:47] * loicd (~loic@178.20.50.225) Quit ()
[10:48] * loicd (~loic@178.20.50.225) has joined #ceph
[10:48] * yoshi_ (~yoshi@p37219-ipngn1701marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[11:04] * loicd (~loic@178.20.50.225) Quit (Quit: Leaving.)
[11:06] * loicd (~loic@178.20.50.225) has joined #ceph
[11:06] * loicd (~loic@178.20.50.225) Quit ()
[11:06] * loicd (~loic@178.20.50.225) has joined #ceph
[11:22] * EmilienM (~EmilienM@ADijon-654-1-133-33.w90-56.abo.wanadoo.fr) has joined #ceph
[11:23] * luckky (~73f13958@2600:3c00::2:2424) has joined #ceph
[11:24] <luckky> hi all,
[11:24] <luckky> getting error while running radosgw server
[11:25] <luckky> error is: "start: unknown parameter:id"
[11:25] * pradeep (~1b3d8692@2600:3c00::2:2424) has joined #ceph
[11:26] <pradeep> hi
[11:26] <luckky> hi pradeep, how to solve this bug? can you solve it?
[11:27] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[11:29] * luckky (~73f13958@2600:3c00::2:2424) Quit (Quit: TheGrebs.com CGI:IRC)
[11:37] <deepsa> hi Ludo
[11:42] * MikeMcClurg (~mike@62.200.22.2) has joined #ceph
[11:59] * pradeep (~1b3d8692@2600:3c00::2:2424) Quit (Quit: TheGrebs.com CGI:IRC (EOF))
[12:14] * wijet (~wijet@staticline58611.toya.net.pl) has joined #ceph
[12:19] * wijet (~wijet@staticline58611.toya.net.pl) has left #ceph
[12:24] * stass (stas@ssh.deglitch.com) Quit (Read error: Connection reset by peer)
[12:24] * stass (stas@ssh.deglitch.com) has joined #ceph
[13:42] * aliguori (~anthony@cpe-70-123-140-180.austin.res.rr.com) has joined #ceph
[14:18] * nhmlap (~nhm@67-220-20-222.usiwireless.com) has joined #ceph
[14:38] * damien (~damien@94-23-154-182.kimsufi.com) has joined #ceph
[14:42] * mtk (~mtk@ool-44c35bb4.dyn.optonline.net) has joined #ceph
[14:55] <damien> Hi, any devs around?
[14:55] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Read error: Connection reset by peer)
[14:55] <nhmlap> damien: not exactly, but what's up?
[14:57] <damien> getting a crash in qemu-kvm when rbd_cache is enabled, http://dpaste.com/799402/
[14:57] <joao> okay, so I spent this morning chasing an improper behavior on my branch that was waaaay too similar to the one I had last week and had fixed... while on the laptop.
[14:58] <joao> this whole distributed development is great and all, but when one starts having multiple distributed versions of one's own work, things start to get messy
[15:04] * ninkotech (~duplo@89.177.137.231) Quit (Remote host closed the connection)
[15:08] * nhmlap_ (~nhm@174-20-43-18.mpls.qwest.net) has joined #ceph
[15:08] <nhmlap_> joao: I've been meaning to try mercurial. I've heard it's like git for dummies which is probably what I need.
[15:09] <nhmlap_> damien: ok, joshd is the guy you want to talk to. He should be around in a few hours.
[15:09] <joao> nhmlap_, I've heard great things about mercurial
[15:10] <nhmlap_> joao: I don't think my brain works consistently well enough to use git really effectively.
[15:10] <joao> lol
[15:11] * nhmlap (~nhm@67-220-20-222.usiwireless.com) Quit (Ping timeout: 480 seconds)
[15:11] <joao> well, I did have a rejection while trying to push my desktop's repo onto gh
[15:11] <damien> nhmlap_: ta
[15:11] <joao> but I managed to simply assume that I had rebased something and *that* was the reason why it got rejected; so why not just force it, huh? :p
[15:12] <joao> all fixed though
[15:12] <joao> just cherry picked what I was missing from my laptop's branch
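For anyone retracing the same recovery, a hedged sketch (the remote and branch names are made up):
    git fetch laptop                                          # 'laptop' = a remote pointing at the other machine's repo
    git log --oneline laptop/wip-branch ^origin/wip-branch    # commits that exist only on the laptop
    git cherry-pick <sha>                                     # bring the missing commit(s) across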
[15:15] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[15:15] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Remote host closed the connection)
[15:15] * markl (~mark@tpsit.com) has joined #ceph
[15:15] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[15:25] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[15:25] <nhmlap_> joao: I've done stuff like that all the time.
[15:27] <joao> well, everything is working again
[15:56] * loicd (~loic@178.20.50.225) Quit (Ping timeout: 480 seconds)
[16:52] * loicd (~loic@magenta.dachary.org) has joined #ceph
[17:07] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:15] * glowell1 (~Adium@c-98-210-224-250.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:18] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[17:20] * Cube (~Adium@12.248.40.138) has joined #ceph
[17:22] * sagelap (~sage@146.sub-70-197-142.myvzw.com) has joined #ceph
[17:24] * nhmlap (~nhm@67-220-20-222.usiwireless.com) has joined #ceph
[17:26] * nhmlap_ (~nhm@174-20-43-18.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[17:27] * aliguori (~anthony@cpe-70-123-140-180.austin.res.rr.com) Quit (Remote host closed the connection)
[17:51] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[17:51] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit ()
[17:55] * sagelap1 (~sage@38.122.20.226) has joined #ceph
[17:56] * sagelap (~sage@146.sub-70-197-142.myvzw.com) Quit (Ping timeout: 480 seconds)
[17:59] <sagewk> for those that haven't heard: http://www.inktank.com/news-events/new/shuttleworth-invests-1-million-in-ceph-storage-startup-inktank/
[17:59] * benpol (~benp@garage.reed.edu) has joined #ceph
[17:59] <nhmlap> sagewk: I submitted it to slashdot and insidehpc. We'll see if it takes.
[18:00] * themgt (~themgt@96-37-22-79.dhcp.gnvl.sc.charter.com) Quit (Quit: themgt)
[18:00] <dspano> Congratulations!
[18:01] * joao checks firehose
[18:01] <nhmlap> joao: yes, promote it up! :)
[18:01] <joao> still looking for it :x
[18:01] * amatter (~amatter@209.63.136.130) Quit (Ping timeout: 480 seconds)
[18:02] <joao> ah
[18:02] <joao> found it!
[18:02] * themgt (~themgt@96-37-22-79.dhcp.gnvl.sc.charter.com) has joined #ceph
[18:03] <sagewk> joao: can you look at wip-mon-gv and wip-mon (if you haven't already)?
[18:04] <joao> sagewk, I left a couple of comments on wip-mon-gv last friday (?); did it change in the meantime?
[18:04] <sagewk> joao: that's right. i fixed up the recovered checks
[18:05] <joao> oh wow
[18:05] <joao> gh's site just displayed a pink unicorn
[18:05] <nhmlap> http://fosslien.com/startup/
[18:06] * MoZaHeM (~MoZaHeM@19NAACHK1.tor-irc.dnsbl.oftc.net) has joined #ceph
[18:06] <joao> nhm, "Nite_Hawk"?
[18:06] * Tv_ (~tv@2607:f298:a:607:5905:afb4:18b:79c5) has joined #ceph
[18:06] * dabeowul1 (~dabeowulf@free.blinkenshell.org) Quit (Read error: Connection reset by peer)
[18:06] <nhmlap> joao: yeah, Nite_Hawk was my old handle several iterations ago.
[18:07] <joao> well, just did my good deed of the day and up'ed a post on /. :p
[18:08] <nhmlap> heh
[18:09] <joao> am I the only one being unable to open ceph's gh page?
[18:09] <MoZaHeM> am I the only one being unable to open ceph's gh page?
[18:09] <nhmlap> joao: nope, I'm getting the unicorn too.
[18:10] <MoZaHeM> joao: nope, I'm getting the unicorn too.
[18:10] * aliguori (~anthony@32.97.110.59) has joined #ceph
[18:10] <joao> well, now I got an animated octocat :)
[18:10] <MoZaHeM> well, now I got an animated octocat :)
[18:10] <joao> oh joy
[18:10] <nhmlap> MoZaHeM is a poopyhead
[18:10] <MoZaHeM> oh joy
[18:10] <MoZaHeM> MoZaHeM is a poopyhead
[18:10] * nhmlap snickers
[18:11] <elder> Snickers? I love Snickers!
[18:11] <MoZaHeM> Snickers? I love Snickers!
[18:11] * nhmlap thinks we all need to talk like this
[18:11] * elder thinks you may be right, but what if we mention MoZaHeM by name?
[18:12] <Tv_> MoZaHeM: will you do infinite recursion?
[18:12] <MoZaHeM> MoZaHeM: will you do infinite recursion?
[18:12] <MoZaHeM> No
[18:12] <MoZaHeM> I am smart
[18:12] <elder> Ooooooh!!!!!
[18:12] <MoZaHeM> Ooooooh!!!!!
[18:12] <elder> Recursion?
[18:12] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[18:12] <MoZaHeM> Recursion?
[18:13] * elder thinks he has the power but not the knowledge of what to do with this "smart" non-infinite-recursion-doing thing.
[18:14] <Tv_> elder: i'll take some of that!
[18:14] * dabeowulf (~dabeowulf@free.blinkenshell.org) has joined #ceph
[18:14] <MoZaHeM> elder: i'll take some of that!
[18:14] * Tv_ has the knowledge but not the power
[18:14] <nhmlap> Tv_: I think I got your stomach bug. ;P
[18:14] <MoZaHeM> Tv_: I think I got your stomach bug. ;P
[18:14] <joao> lol
[18:15] <MoZaHeM> lol
[18:15] <joao> this is going to be a fun night
[18:15] <MoZaHeM> this is going to be a fun night
[18:16] * amatter (~amatter@209.63.136.130) has joined #ceph
[18:17] * tomaw (tom@tomaw.netop.oftc.net) has joined #ceph
[18:18] * youam (~youam@youam.netop.oftc.net) has joined #ceph
[18:18] <youam> hi
[18:18] <MoZaHeM> hi
[18:18] * MoZaHeM (~MoZaHeM@19NAACHK1.tor-irc.dnsbl.oftc.net) Quit (Killed (tjfontaine (No reason)))
[18:18] <joao> yep, that solves it
[18:19] <Tv_> there sure was a reason for it ;)
[18:19] * tomaw (tom@tomaw.netop.oftc.net) has left #ceph
[18:19] * MoZaHeM (~MoZaHeM@28IAAHLSM.tor-irc.dnsbl.oftc.net) has joined #ceph
[18:19] <elder> GO AWAY.
[18:19] <MoZaHeM> GO AWAY.
[18:19] * MoZaHeM (~MoZaHeM@28IAAHLSM.tor-irc.dnsbl.oftc.net) Quit (Killed (tomaw (No reason)))
[18:19] * benpol (~benp@garage.reed.edu) has left #ceph
[18:20] <Tv_> <3 the ops
[18:21] <nhmlap> we should probably have a couple people with chanops.
[18:21] * sage (~sage@cpe-76-94-40-34.socal.res.rr.com) Quit (Quit: Leaving.)
[18:21] * sagewk is now known as sage
[18:21] * ChanServ sets mode +o sage
[18:21] <joao> cephalobot would probably be enough
[18:21] <cephalobot> joao: Error: "would" is not a valid command.
[18:21] * rturk_ (~rturk@166.137.99.125) has joined #ceph
[18:21] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[18:22] * rturk_ (~rturk@166.137.99.125) Quit ()
[18:22] * tomaw (tom@tomaw.netop.oftc.net) has joined #ceph
[18:22] * ChanServ sets mode +o Tv_
[18:23] <sage> everyone should register with nickserv so you can get ops
[18:23] <youam> joao: instead of giving +o to a bot, it would be better if you'd add a bunch of yourselves to chanserv / nickserv
[18:23] <joao> youam, true
[18:23] <joao> looks like sage is already on it :)
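The registration sage mentions, roughly — exact syntax is per OFTC's services, so check /msg NickServ HELP REGISTER before relying on it:
    /msg NickServ REGISTER <password> <email>
    /msg NickServ IDENTIFY <password>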
[18:23] * youam (~youam@youam.netop.oftc.net) has left #ceph
[18:24] <nhm> heh, too many irc sessions
[18:24] <Tv_> cloudy weather over the github servers today...
[18:25] <joao> maybe they're hosted at godaddy?
[18:25] <Tv_> that would break dns not http
[18:27] * lofejndif (~lsqavnbok@82VAAGDVK.tor-irc.dnsbl.oftc.net) has joined #ceph
[18:29] <joao> sage, wip-mon-gv looks good
[18:29] * dabeowulf (~dabeowulf@free.blinkenshell.org) Quit (Ping timeout: 480 seconds)
[18:31] <joao> I love when git creates these incrementals
[18:31] <joao> - if (header.version >=2)
[18:31] <joao> + if (header.version >= 2)
[18:31] <joao> oh
[18:31] <joao> nevermind
[18:31] <Tv_> joao: are you calling yourself a git?-)
[18:31] <joao> there's a space there that wasn't clear on gitk
[18:31] <joao> apparently I am
[18:31] <joao> :p
[18:32] <joao> well, I did manage to get something like
[18:32] <joao> + }
[18:32] <joao> - }
[18:32] <joao> one of these days
[18:32] <Tv_> that's a whitespace change
[18:32] <Tv_> or, it looks like one with my proportional fonts
[18:33] * dabeowulf (~dabeowulf@free.blinkenshell.org) has joined #ceph
[18:33] <joao> well, brb (snack before stand-up)
[18:40] * sage is now known as sagewk
[18:45] * Tv_ changes topic to 'ceph development, discussion || Github is having error, we know'
[18:45] * Tv_ changes topic to 'ceph development, discussion || Github is having errors, we know'
[18:53] * Leseb (~Leseb@193.172.124.196) Quit (Quit: Leseb)
[19:02] * The_Bishop (~bishop@2a01:198:2ee:0:e164:1ab3:d7d7:1483) has joined #ceph
[19:02] * dmick (~dmick@2607:f298:a:607:1d88:5b53:8eec:5ac2) has joined #ceph
[19:06] * maelfius (~mdrnstm@66.209.104.107) has joined #ceph
[19:08] * MikeMcClurg (~mike@62.200.22.2) Quit (Quit: Leaving.)
[19:10] * Ryan_Lane (~Adium@160.sub-166-250-37.myvzw.com) has joined #ceph
[19:10] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) has joined #ceph
[19:15] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[19:16] <joao> gotta reboot my router (yet again)
[19:16] <joao> can't manage to get the laptop to connect to the stand-up thingy
[19:19] * jluis (~JL@89-181-145-30.net.novis.pt) has joined #ceph
[19:22] * chutzpah (~chutz@100.42.98.5) has joined #ceph
[19:24] * gregaf1 is now known as help
[19:24] * help (~Adium@2607:f298:a:607:50bd:a787:da4:de25) Quit (Quit: Leaving.)
[19:25] * gregaf (~Adium@2607:f298:a:607:cc76:3f7c:278c:b1f0) has joined #ceph
[19:25] * joao (~JL@89.181.155.11) Quit (Ping timeout: 480 seconds)
[19:30] * BManojlovic (~steki@212.200.241.6) has joined #ceph
[19:31] * jluis is now known as joao
[19:32] <joao> well, a friend just sent me an email with and url about shuttleworth's investment with the subject ":-D"
[19:32] <joao> s/and/an
[19:32] <dmick> :)
[19:32] <dmick> big news
[19:32] <joao> yeah
[19:33] <joao> I just wasn't expecting it to hit people I know around here so fast
[19:33] <joao> maybe if it hit /.'s or reddit's front page; maybe...
[19:33] <joao> I was kind of caught by surprise :-P
[19:34] <BManojlovic> what's the news?
[19:34] <joao> http://www.inktank.com/news-events/new/shuttleworth-invests-1-million-in-ceph-storage-startup-inktank/
[19:35] <BManojlovic> hm nice
[19:42] <Tv_> oh hey, there's an "inktank" tag on slashdot: http://slashdot.org/tag/inktank
[19:42] <joao> nhm is to blame
[19:42] <Tv_> hah not so well used
[19:43] <Tv_> oh right
[19:43] <Tv_> nhm: watch any shows with motorcycles as a kid, perhaps?
[19:43] <Tv_> or parodies about the german invasion of france?
[19:44] <dmick> Knight Rider
[19:44] <dmick> There is a Nighthawk production motorcycle, or was
[19:44] <dmick> http://en.wikipedia.org/wiki/Honda_Nighthawk
[19:45] <joao> http://en.wikipedia.org/wiki/Nighthawk_(Marvel_Comics)
[19:45] <dmick> Personally I'm going with the Sacramento pub :)
[19:45] <Tv_> oh i mis-remember this: http://en.wikipedia.org/wiki/Street_Hawk
[19:45] <dmick> Oo. 13 whole episodes
[19:46] <Tv_> dmick: kwalitee
[19:46] <dmick> God, Rex Smith. I'm so glad I never saw that
[19:46] <dmick> this is, I believe, what we call a "brutal tangentfest"
[19:47] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[19:50] * lofejndif (~lsqavnbok@82VAAGDVK.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[19:57] <elder> I've seen much worse, dmick.
[19:57] <dmick> I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhauser gate. All those moments will be lost in time, like tears in rain.
[19:59] <elder> Poor replicant.
[20:00] <Tv_> -1 for not using umlauts there
[20:01] <dmick> it's a fair cop
[20:03] <elder> You should have said ümlauts, just to start another tangent.
[20:03] <dmick> this is very odd code:
[20:04] <dmick> oh, no, missed the static. never mind.
[20:06] <Tv_> i really wish they'd have spelled umlaut as ümlaut
[20:06] <Tv_> just for the existential brainfuck of trying to explain it
[20:10] <liiwi> doubledot the target..
[20:15] <Tv_> why is plana25 running a dhcp server?
[20:16] <sjust> just in case?
[20:17] <Tv_> the more this happens the more i feel like putting *everyone* in a vm with very limited access to anything useful :(
[20:17] * adam_ (~chatzilla@c-69-246-99-102.hsd1.mi.comcast.net) has joined #ceph
[20:17] <Tv_> you don't accidentally install a dhcp server
[20:17] * adam_ is now known as Glowplug
[20:17] <elder> You never know when you might need one. :)
[20:17] <Glowplug> Hello everyone. =)
[20:18] <dmick> I didn't do it
[20:18] <dmick> I swear
[20:18] <dmick> let's see who locked it...hm.... :)
[20:19] <dmick> Hi Glowplug
[20:19] <nhm> Tv_: a bit far fetched, but perhaps it got installed as a dependency?
[20:19] <Tv_> nhm: nothing sane would depend on a locally-installed dhcp server
[20:19] <nhm> Tv_: rogue dhcp servers are certainly not ideal though.
[20:19] <Glowplug> Hey dmick. =)
[20:20] <Glowplug> I've been running smooth all week but there is something I can't quite figure out. If I need to backup a qemu-rbd volume from RADOS to an image file with lets say DD. Is that possible?
[20:21] <nhm> Tv_: We need openstack+quantum so you could have VMs in dynamically created vlans, then people could do whatever crazy things they want. ;)
[20:21] <dmick> Glowplug: you can export an rbd image with the rbd CLI
[20:21] <Glowplug> This supports images created through qemu-img as well???
[20:21] <dmick> that's probably easiest. You'd want to make sure the VM is quiescent at the time, of course
[20:22] <dmick> an RBD image is an RBD image
[20:22] <dmick> the qemu lib stuff is just glue around it
[20:22] <Glowplug> I see. That explains why my rbd volumes didn't work with qemu-kvm.
[20:22] <Glowplug> It needs the glue. But otherwise no other changes?
[20:22] <nhm> Tv_: The force10 guys were saying that they eventually have some plans for fusion, but were recommending openvswitch for now.
[20:22] <nhm> s/fusion/quantum
[20:22] <dmick> Glowplug: no, the glue is just access glue
[20:23] <Glowplug> Hahaha
[20:23] <Glowplug> Access glue... yum
[20:23] <dmick> but qemu-rbd wants to name the images itself; that's probably the problem
[20:23] <Glowplug> One thing I did notice is that an "rbd ls" won't show my qemu-img images.
[20:23] <dmick> in fact, understanding which underlying image relates to qemu's name for the image is not something I'm sure about
[20:23] <Glowplug> But I can still find them with "rbd ls *poolname*" strange
[20:23] <joshd> Glowplug: how did you create the qemu-rbd image
[20:23] <dmick> Glowplug: perhaps they're in a different pool?
[20:24] <dmick> nothing magic about the 'rbd' poolname; it's just the default if you don't specify
[20:24] <Glowplug> Ahhh I see. That must be the reason.
[20:24] <joshd> dmick: for qemu you specify rbd:poolname/imagename, there's nothing special about it
[20:24] <dmick> ah
[20:24] <Glowplug> Alright now I just need to find the image backup option, pause my VM and hope for the best. =)
[20:25] <joshd> Glowplug: you can also pause, take a snapshot, then export that snapshot while you resume the vm
[20:25] <Glowplug> I have a severe bug with my snapshotting right now. I get an error when I try to list snapshots with virsh.
[20:26] <joshd> what about the rbd command line tool?
[20:26] <joshd> virsh tends to do strange things with snapshots
[20:26] <dmick> Glowplug: joshd clarifies for me that the image name is the same, so you should be able to find it easily. Just need to look in the right pool
[20:26] <dmick> rbd -p <pool> ls
[20:27] <Glowplug> Interesting! I totally forgot about using rados native snapshotting.
[20:27] <Glowplug> Thats a great idea. =)
[20:27] <dmick> it's actually rbd snapshotting; there's a little more going on for rbd images. but yes.
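joshd's suggestion as one sketch (pool, image, and snapshot names are placeholders; on older rbd versions the image@snap shorthand may not exist and --snap is used instead):
    virsh suspend myvm                                   # quiesce the guest
    rbd snap create mypool/myimage@backup-20120911       # cheap point-in-time snapshot
    virsh resume myvm                                    # guest runs again while the export proceeds
    rbd export mypool/myimage@backup-20120911 /backups/myimage-20120911.img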
[20:27] <Glowplug> The ceph docs page is down atm?
[20:28] <dmick> hm. I will notify the appropriate webthings
[20:28] <Glowplug> Sounds like the Addams Family over there.... get the webthings..
[20:29] <joshd> well, the wiki's still up (albeit out of date)
[20:30] <Glowplug> That's ok I think everybody knows how to use *man* =)
[20:30] <dmick> and there's always github.com/ceph/ceph/doc
[20:30] <dmick> (and ../man)
[20:39] <Glowplug> Got it! VM is paused and rbd is backing it up right now. =)
[20:39] <Glowplug> Thanks a ton guys you rock as usual. =D
[20:39] <nhm> We should put that on our webpage
[20:39] <Glowplug> Probably. Haha
[20:40] * pentabular (~sean@70.231.142.192) has joined #ceph
[20:40] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[20:40] * pentabular is now known as Guest6701
[20:41] * Guest6701 is now known as pentabular
[20:41] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has left #ceph
[20:41] <pentabular> docs are 404 on ceph.com
[20:42] <pentabular> :(
[20:42] <dmick> fixed
[20:42] <pentabular> ^ like magic :)
[20:43] <dmick> well: (11:27:39 AM) Glowplug: The ceph docs page is down atm?
[20:43] <dmick> so we had some time :)
[20:43] <Glowplug> Crap you're supposed to play it off
[20:43] <Glowplug> "oh yeah we are that fast" ;)
[20:43] <pentabular> glad to make the party. thanks.
[20:44] <pentabular> lol
[20:46] <nhm> looks like the news hit CNNMoney
[20:46] * lofejndif (~lsqavnbok@9KCAABMU3.tor-irc.dnsbl.oftc.net) has joined #ceph
[20:48] <pentabular> nhm: whaa? noob here..
[20:49] <joao> nhm, wow
[20:49] <dmick> http://www.inktank.com/news-events/new/shuttleworth-invests-1-million-in-ceph-storage-startup-inktank/
[20:50] <nhm> pentabular: http://money.cnn.com/news/newsfeeds/gigaom/articles/cloud_open_source_champ_mark_shuttleworth_invests_1m_ceph_storage_startup.html
[20:51] <pentabular> ah, thanks. neat!
[20:51] <pentabular> er, rather: OMG, wow!
[20:52] <nhm> pentabular: yeah, we are all quite excited! :)
[20:52] <pentabular> I'm very excited for y'all as I've been a sideline ceph fan for some time
[20:52] <pentabular> so, it seems ceph is recommended to be run via some config mgmt. system, eh?
[20:53] <dmick> well it's complex to set up
[20:53] <dmick> so the more help you get the better/faster/more reproducible it is
[20:53] <dmick> you can still do it by hand, of course; there are varying levels of helpers
[20:53] <nhm> pentabular: I've got my own scripts that set it up for some of our performance tests, but you can really do it any way you want.
[20:53] <Glowplug> I set mine up by hand after about a week of reading. If you have some spare hardware and lots of time it can easily be done.
[20:54] <Glowplug> Start with no security and it's really quite easy.
[20:54] <pentabular> I'm hoping to go directly via cfg mgm instead of manual,
[20:54] <pentabular> just doing manual stuff for learning.
[20:54] <dmick> yes. we're putting the most work into Chef recipes
[20:54] <Glowplug> With something like this I would say that going as manual as humanly possible is best.
[20:54] <Glowplug> You will have to go manual anyways when you need to fix something.
[20:55] <pentabular> The thing I'm using is Salt (http://saltstack.org/)
[20:55] <dmick> Glowplug: depends on what you're doing. Setting up the 10th cluster of the day after you've done it ten times manually.... :)
[20:55] <pentabular> parallel remote execution & what we call "state management" (configuration management)
[20:56] <pentabular> not SSH: zeromq/msgpack, pub/sub, truly parallel
[20:56] <Glowplug> Absolutely. I was targeting more towards pentabular since he is just getting started. =)
[20:56] <dmick> pentabular: cool. I know there are other groups working on Puppet as well
[20:56] <Glowplug> Salt seems interesting penta. Are there advantages to this over Puppet or Chef?
[20:56] <dmick> but yeah, it's not *that* hard, particularly for a small cluster (say, one machine :))
[20:56] <nhm> I use pdsh pretty extensively
[20:57] <pentabular> The authors seem to think there are advantages. :)
[20:57] <pentabular> it's a bit simpler than the others, and strives for equivalent capabilities
[20:58] <pentabular> I'm very excited about both Salt and Ceph
[20:58] <amatter> congrats on the funding! :)
[20:58] <dmick> calamari is very tasty with some sea salt. I think it's a natural, pentabular
[20:59] <dmick> amatter: tnx!
[20:59] <pentabular> this week I'm giving a presentation at LSPE on Salt, and I hope to whip up some examples using Ceph just for cool points.
[20:59] <nhm> pentabular: awesome. :)
[20:59] <pentabular> so, I'm basically looking at the quickstart and this:
[21:00] <pentabular> https://labs.enovance.com/projects/puppet/wiki/Puppet-ceph
[21:00] <pentabular> getting a simple state example together should be pretty trivial in Salt
[21:01] <dmick> Fantastic. Ask if you need help.
[21:01] <pentabular> thanks much. any nuggets that come to mind appreciated.
[21:01] <dmick> https://github.com/ceph/ceph-cookbooks
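As a rough idea of how the Salt side could look from the shell — the module functions are real Salt ones, but the targets and the ceph.conf handling here are assumptions, not anything from this channel:
    salt '*' pkg.install ceph                                         # install ceph on every minion
    salt '*' cp.get_file salt://ceph/ceph.conf /etc/ceph/ceph.conf    # push a shared ceph.conf
    salt 'mon*' cmd.run 'service ceph start mon'                      # start daemons per role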
[21:02] <joshd> damien: that's really strange
[21:02] <joshd> damien: could you print out tv.tv_nsec from that utime_t?
[21:03] <pentabular> nhm: re: pdsh; no need to 'conserve sockets' w/ Salt: all 2000 hosts (or what have you) all at once (if you like)
[21:04] <nhm> pentabular: nice
[21:04] <nhm> pentabular: we had to do the rotating clients thing on our big cluster.
[21:04] <nhm> 500 at a time.
[21:05] <pentabular> Salt can do that if you want; only N% at a time, etc
[21:06] <nhm> pentabular: yeah, I'll have to look into it.
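For comparison, the throttling both tools offer, hedged since flags differ by version (pdsh's -f sets the fanout; Salt's batch flag arrived in later releases):
    pdsh -f 500 -w ^clients.txt 'uptime'    # at most 500 concurrent connections
    salt -b 10% '*' test.ping               # 10% of minions at a time, if your salt has --batch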
[21:07] <pentabular> you people are too interesting. it's hard to concentrate here. :)
[21:08] * pentabular (~sean@70.231.142.192) has left #ceph
[21:13] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[21:23] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) has joined #ceph
[21:30] <joao> sagewk, looked into wip-mon and it looks good
[21:30] <joao> I nitpicked a bit on two lines, but feel free to disregard them
[21:31] <joao> also, I think the two top commits could be mostly squashed together
[21:32] <joao> well, going to brew some coffee and watch the match; bbiab
[21:37] <jmlowe> so is the posted system engineer job strictly LA/ Silicon Valley or would you entertain remote candidates?
[21:38] <nhm> nice, techcrunch, ars technica, and phoronix are all running the story.
[21:39] <nhm> jmlowe: I asked Sage that a while ago and I think we are interested in remote candidates for that role.
[21:39] <nhm> jmlowe: you should check with him though.
[21:39] <jmlowe> sagewk: ?
[21:40] <nhm> jmlowe: actually, just apply. ;)
[21:40] <jmlowe> nhm: if it's not even on the table I don't want to waste anybody's time
[21:40] <Tv_> jmlowe: guarantee not a waste
[21:41] <Tv_> the more we see good remote people, the more we'll accept good remote people
[21:41] <Tv_> feed the cycle ;)
[21:41] <Leseb> hi, guys
[21:45] <Leseb> after a new crush map injection I got a stuck pg, and when I tried to query the pg I got the following message: "pgid currently maps to no osd". Does anyone have an idea? Can't find anything about this issue
[21:46] <sjust> Leseb: can you post the osdmap?
[21:46] * Ryan_Lane (~Adium@160.sub-166-250-37.myvzw.com) Quit (Quit: Leaving.)
[21:46] <sjust> or first, ceph osd tree
[21:46] <Leseb> yep
[21:47] <Leseb> http://pastebin.com/MT6wvdU1
[21:47] <sjust> you appear to have 3 hosts without osds
[21:47] <sjust> that is likely the problem
[21:48] <Tv_> sjust: hey wait that sounds relevant to my interests.. is that an actual problem?
[21:48] <sjust> more well informed guess
[21:49] <Leseb> sjust: hum? I do have 3 odds running on each servers
[21:49] <Tv_> sjust: because we can trigger that with the osd hotplug logic, if you move disks out of a chassis for repairs etc
[21:50] <Leseb> s/odds/osds
[21:51] <dmick> Leseb: sjust means that ceph01, control01, compute01 have no OSDs on them according to the osdmap
[21:53] <sagewk> jmlowe: we are a post-geographic team :)
[21:54] <Leseb> dmick: thanks for the clarification
[21:54] <Leseb> sjust: any idea to solve this?
[21:55] <Leseb> osdmap http://pastebin.com/NGbEunu0
[21:55] <sjust> I have reconsidered, that probably isn't the problem
[21:57] <Leseb> the thing is when I want to retrieve the rbd image list the prompt is hanging, no output
[21:57] <sjust> can you post the osdmap? (ceph osd getmap -o <outfile>)
[21:58] <Leseb> already pasted http://pastebin.com/NGbEunu0
[21:58] <Leseb> :)
[21:59] <gregaf> Leseb: and the pgmap, and the crushmap… ;)
[22:00] <sjust> that's ceph osd dump, I need the output of (ceph osd getmap -o <outfile>)
[22:00] <Leseb> sjust: it's the same output
[22:01] <sjust> it is?
[22:01] <Leseb> yep (wait I'm c/p)
[22:03] <sjust> c/p?
[22:03] <Leseb> osdmap http://pastebin.com/ZtJ0AJUq
[22:04] <sjust> oh, I see
[22:04] <sjust> I need the actual raw file, it has stuff in it other than what it prints
[22:04] <sjust> in this case osd
[22:05] <Leseb> crushmap http://pastebin.com/E093dZGZ
[22:05] <Leseb> sjust: do you want me to send you the binary?
[22:05] <sjust> yeah, that would be good
[22:05] <Leseb> how?
[22:05] <sjust> cephdrop@ceph.com
[22:05] <sjust> sftp
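A sketch of getting the raw map across, plus a quick local check of where a pg maps (the pgid is a placeholder):
    ceph osd getmap -o /tmp/osdmap.bin    # the binary osdmap, not the text dump
    ceph pg map 2.1f                      # prints the up/acting osd sets for that pgid
    sftp cephdrop@ceph.com                # then: put /tmp/osdmap.bin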
[22:13] <sjust> Leseb: which pg is not mapped?
[22:14] <Leseb> sjust: http://pastebin.com/U8hNPRKt
[22:18] * sagelap (~sage@38.122.20.226) has joined #ceph
[22:18] * sagelap1 (~sage@38.122.20.226) Quit (Read error: Connection reset by peer)
[22:20] <Tv_> ok poll time what drugs did The Register's journalist take before writing "Shuttleworth drops one million cluster bucks on Ceph upstart / Linux moneybags funds Um Bongo's cloudy file system"
[22:21] <Tv_> http://www.theregister.co.uk/2012/09/11/shuttleworth_ceph_investment/
[22:22] <elder> Why Um Bongo?
[22:23] <Tv_> http://en.wikipedia.org/wiki/Um_Bongo?
[22:23] <Tv_> copy-paste much?
[22:23] <joao> well, we had a juice brand called "Um Bongo" around here
[22:23] <joao> kids used to love it
[22:23] <elder> Right, but I don't get the reference in the article is all.
[22:23] <Tv_> http://www.urbandictionary.com/define.php?term=umbongo
[22:23] <joao> Tv_, it's exactly that one
[22:23] <elder> Oh.
[22:24] <elder> So ceph is Ubuntu's Cloud file system?
[22:24] <elder> That's good, right?
[22:24] <Tv_> "Linux moneybags"?
[22:24] <Tv_> srsly
[22:24] <nhm> Tv_: I think someone doesn't like Shuttleworth/Ubuntu. :)
[22:25] <gregaf> or us, now
[22:25] <elder> Gavin Clarke is a cluster bucker
[22:25] <Tv_> this guy seems quite a character: http://search.theregister.co.uk/?author=Gavin%20Clarke has things like "Apache man disables Internet Explorer 10 privacy setting", "Pret-a-porter: LG boffins' bendy battery can be worn as PANTS"
[22:25] <gregaf> does anybody else have the Whiptail ad on that page?
[22:25] <Tv_> we're talking quality, in-depth, journalism here
[22:26] <gregaf> I feel like they're showing off too many racks to be proud about 7GB/s in silicon
[22:26] <darkfader> british and american humour don't play well, i take it?
[22:26] <Tv_> gregaf: there are no ads on the internet.
[22:26] <elder> Yes gregaf
[22:26] <darkfader> i didn't read it, but the register is _never_ serious
[22:26] <nhm> Tv_: hrm, I take it back, I think he's just going for hits.
[22:26] <sjust> From wikipedia: Um Bongo is a mixed tropical fruit juice drink sold in the United Kingdom, manufactured by Gerber Juice Company Limited under the name Libby's.
[22:26] <sjust> I can see the confusion
[22:26] <Tv_> darkfader: i appreciate british humor more than your average mammal, but this guy is just bad
[22:26] <elder> Whiptail is apparently unreal.
[22:27] <Tv_> nhm: are you talking about the what drugs poll ?-)
[22:27] <nhm> Tv_: naw, the Um Bongo/Money Bags reference.
[22:27] <Tv_> nhm: i'm implying different kind of hits from a different kind of bong-o
[22:28] <nhm> Tv_: It's right below the headline. I assume he's trying to convince people to read the article by using the wacky language.
[22:28] <nhm> Tv_: I think the only drug involved is money. ;)
[22:29] * lofejndif (~lsqavnbok@9KCAABMU3.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[22:46] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[22:51] * dmick-mibbit (267a14e2@ircip3.mibbit.com) has joined #ceph
[22:51] * dmick-mibbit (267a14e2@ircip3.mibbit.com) Quit ()
[23:30] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[23:41] * Ryan_Lane (~Adium@2.sub-166-250-37.myvzw.com) has joined #ceph
[23:45] * Ryan_Lane (~Adium@2.sub-166-250-37.myvzw.com) Quit ()
[23:46] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[23:47] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[23:52] * The_Bishop (~bishop@2a01:198:2ee:0:e164:1ab3:d7d7:1483) Quit (Ping timeout: 480 seconds)
[23:54] * aliguori (~anthony@32.97.110.59) Quit (Remote host closed the connection)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.