#ceph IRC Log

IRC Log for 2012-03-31

Timestamps are in GMT/BST.

[0:00] <joao> I hear a new version of globus came out in the meantime and that things "got better"
[0:00] <nhm> joao: We got some free money from Microsoft one year and actually took an old .NET port of the globus daemons and got them submitting stuff to the Windows HPC scheduler. It was monstrous.
[0:01] <joao> well, my take on globus (although obviously biased) is that it is a solution for a yet-to-be-found problem
[0:03] <joao> and I honestly don't mind being terribly unfair, as long as I don't have to cross paths with it again for the foreseeable future :)
[0:03] <nhm> joao: Probably the biggest push behind it was for the TeraGrid project. The idea was that researchers from any participating institution could submit jobs to any cluster on the grid.
[0:03] <joao> yeah, so I heard
[0:03] * sam-410 (~sam-410@09GAAEG7T.tor-irc.dnsbl.oftc.net) has joined #ceph
[0:03] <joao> but I got the idea that it wasn't the biggest success ever though
[0:04] <nhm> joao: Tough to say. Good research was done with TeraGrid, but probably not because of the software, or the grid.
[0:05] * sam-410 (~sam-410@09GAAEG7T.tor-irc.dnsbl.oftc.net) has left #ceph
[0:07] <nhm> As far as I know the clusters were mostly just treated as individual clusters, and you could probably have achieved at least as much without the federation.
[0:08] <nhm> gridftp and globus online are still arguably useful for high speed data transfer.
[0:09] <joao> well, I wish them all the best
[0:10] <nhm> joao: heh, me too.
[0:10] <joao> but hey, I have some deep hatred towards it... can't help it :x
[0:11] <nhm> joao: it's understandable. Drink some scotch, it'll help. ;)
[0:13] <nhm> huh, Lustre 2.2 got released
[0:14] <joao> oh boy
[0:15] <joao> looks like I'll have to leave ceph compiling again tonight
[0:15] <joao> just did a "make clean" on the wrong terminal
[0:15] <nhm> joao: how long is it taking to compile?
[0:15] <joao> nhm, roughly an hour
[0:16] <joao> but I blame the fact that I have a lot of stuff draining my memory
[0:16] <joao> and my cpu
[0:16] <joao> such as eclipse and banshee
[0:16] <joao> thus leaving it compiling during the night, when I'm not actively using the computer :)
[0:17] <nhm> joao: are you using cdt then?
[0:17] <joao> yes
[0:17] <nhm> joao: That's what I used for C++. I tried to get the python plugins to work with eclipse but it kept crashing so I'm just using vim now.
[0:18] <joao> yeah, I have to uninstall pydev
[0:18] <joao> it keeps reindexing a non-existent project
[0:18] <joao> and eclipse stops responding
[0:28] <dmick> are you recompiling the ceph userland, or the kernel-and-client?
[0:30] <Tv|work> joao: you might like ccache..
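A note on Tv|work's ccache tip above: this is a minimal sketch of wiring ccache into ceph's autotools build of that era, assuming a Debian/Ubuntu-style wrapper directory at /usr/lib/ccache; the paths and the -j value are illustrative, not from the log.

    # put ccache's compiler wrappers ahead of the real compilers
    export PATH=/usr/lib/ccache:$PATH
    # or name ccache explicitly when configuring
    ./autogen.sh
    ./configure CC="ccache gcc" CXX="ccache g++"
    make -j4     # parallel jobs; match to your core count
    ccache -s    # show hit/miss statistics for the cache

With a warm cache, a rebuild after an accidental "make clean" mostly replays cached object files instead of recompiling from scratch, which is exactly the situation joao hit above.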
[0:32] * BManojlovic (~steki@212.200.240.52) Quit (Quit: Ja odoh a vi sta 'ocete...)
[0:34] * lofejndif (~lsqavnbok@28IAADNTA.tor-irc.dnsbl.oftc.net) has joined #ceph
[1:01] * Tv|work (~Tv_@aon.hq.newdream.net) Quit (Read error: Operation timed out)
[1:32] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[1:36] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit ()
[2:00] * bchrisman (~Adium@108.60.121.114) Quit (Read error: Connection reset by peer)
[2:00] * bchrisman (~Adium@108.60.121.114) has joined #ceph
[2:44] * bchrisman (~Adium@108.60.121.114) Quit (Quit: Leaving.)
[3:43] * andret (~andre@pcandre.nine.ch) Quit (Ping timeout: 480 seconds)
[3:43] * andret (~andre@pcandre.nine.ch) has joined #ceph
[3:46] * adjohn (~adjohn@p1062-ipngn1901marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[3:56] * perplexed_ (~ncampbell@216.113.168.141) has joined #ceph
[3:59] * lofejndif (~lsqavnbok@28IAADNTA.tor-irc.dnsbl.oftc.net) Quit (Quit: Leaving)
[4:02] * perplexed (~ncampbell@216.113.168.141) Quit (Ping timeout: 480 seconds)
[4:04] * perplexed_ (~ncampbell@216.113.168.141) Quit (Ping timeout: 480 seconds)
[4:21] * joshd (~joshd@aon.hq.newdream.net) Quit (Quit: Leaving.)
[4:48] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) has joined #ceph
[5:02] * adjohn (~adjohn@p1062-ipngn1901marunouchi.tokyo.ocn.ne.jp) Quit (Quit: adjohn)
[5:10] * joao (~JL@89.181.151.120) Quit (Quit: Leaving)
[5:14] * adjohn (~adjohn@p1062-ipngn1901marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[5:34] * chutzpah (~chutz@216.174.109.254) Quit (Quit: Leaving)
[5:47] * perplexed (~ncampbell@c-76-21-85-168.hsd1.ca.comcast.net) has joined #ceph
[5:48] * perplexed (~ncampbell@c-76-21-85-168.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[5:48] * perplexed (~ncampbell@216.113.168.130) has joined #ceph
[5:50] * adjohn (~adjohn@p1062-ipngn1901marunouchi.tokyo.ocn.ne.jp) Quit (Quit: adjohn)
[5:53] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[6:26] * Tv__ (~tv@cpe-24-24-131-250.socal.res.rr.com) has joined #ceph
[6:27] * Tv__ (~tv@cpe-24-24-131-250.socal.res.rr.com) has left #ceph
[7:06] * dmick (~dmick@aon.hq.newdream.net) Quit (Quit: Leaving.)
[8:15] * perplexed_ (~ncampbell@c-76-21-85-168.hsd1.ca.comcast.net) has joined #ceph
[8:22] * perplexed (~ncampbell@216.113.168.130) Quit (Ping timeout: 480 seconds)
[8:22] * perplexed_ is now known as perplexed
[8:41] * perplexed (~ncampbell@c-76-21-85-168.hsd1.ca.comcast.net) Quit (Quit: perplexed)
[9:05] * LarsFronius (~LarsFroni@g231139206.adsl.alicedsl.de) has joined #ceph
[10:16] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) has joined #ceph
[11:01] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[11:04] * LarsFronius_ (~LarsFroni@e176058038.adsl.alicedsl.de) has joined #ceph
[11:09] * LarsFronius (~LarsFroni@g231139206.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[11:09] * LarsFronius_ is now known as LarsFronius
[11:23] * adjohn (~adjohn@p1062-ipngn1901marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[11:23] * adjohn (~adjohn@p1062-ipngn1901marunouchi.tokyo.ocn.ne.jp) Quit ()
[12:17] <loicd> Hi, is anyone around here using ceph with openstack?
[12:24] * cotolez (~cotolez@81.88.224.110) has joined #ceph
[12:37] <cotolez> Hi all,
[12:38] <cotolez> I'm looking at the procedure for replacing a failed hd (http://ceph.newdream.net/wiki/Replacing_a_failed_disk/OSD)
[12:38] <cotolez> Is there a way to automate the procedure?
[12:43] <cotolez> It would be great to set ceph to some kind of "automatically bring the OSD back into the cluster" mode
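For context on the wiki page cotolez links: a rough sketch of the kind of manual steps such a procedure involves for swapping the disk behind one OSD, assuming the failed daemon is osd.2, sysvinit scripts, and a keyring under /var/lib/ceph/osd/ceph-2; the exact commands and paths on the wiki may differ.

    # mark the OSD out so its placement groups re-replicate, then stop it
    ceph osd out 2
    service ceph stop osd.2
    # physically replace the disk, mkfs and mount it at the OSD data dir,
    # then rebuild the object store and generate a fresh key
    ceph-osd -i 2 --mkfs --mkkey
    ceph auth add osd.2 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-2/keyring
    # start the daemon and let it rejoin the cluster
    service ceph start osd.2
    ceph osd in 2

Wrapping these steps in a script around your disk-replacement workflow is the straightforward way to automate the procedure.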
[12:43] * cotolez (~cotolez@81.88.224.110) Quit (Quit: Sto andando via)
[14:17] <nhm> loicd: I think there are a couple of people in channel that have been using it with openstack. They may not be around on the weekend though...
[14:19] <loicd> nhm: thanks. I'll wait for them to return ;-) I'm trying to evaluate how much time I should expect to spend learning enough about it to use it for real.
[14:21] <nhm> loicd: How were you thinking of using it?
[14:21] <nhm> loicd: ie S3, nova-volumes, etc?
[14:27] <loicd> nova-volumes
[14:27] <loicd> swift is already working fine
[14:28] <loicd> we tried it last November but ran into problems we could not figure out
[14:29] <loicd> last week I learnt that it will soon be possible to buy support from ceph.com and that may be just what we need to use it for production
[14:30] <nhm> loicd: Yeah. I don't really know the details, though someone in here can probably point you to our business folks.
[14:31] <loicd> :-) what are you using ceph for?
[14:31] <nhm> loicd: If you do end up testing it out again, we'd certainly like to know if you run into any problems.
[14:32] <nhm> loicd: Up until about a month ago I worked for a supercomputing institute and was planning on using ceph for nova-volumes in an openstack deployment. Then I ended up working for them. :)
[14:36] <loicd> :-)
[14:36] <loicd> You mean it was a freelance contract and they decided to hire you for the job?
[14:37] <loicd> Or do you mean you were recruited by ceph?
[14:37] <nhm> loicd: no, I mean I used to work for a supercomputing institute, but then I was recruited by ceph...
[14:37] <loicd> :-) nice. The company is located in SF?
[14:37] <nhm> Los Angeles, though I work from Minnesota.
[14:39] <loicd> From what you say the supercomputing institute decided not to use ceph ... yet ;-) Right?
[14:40] <nhm> loicd: We weren't at a stage to decide yet. Production deployment isn't going to be until the fall. It was on our list, along with falling back to iSCSI to one of our netapps.
[14:41] <loicd> A friend told me that in his opinion (subjective gut feeling ;-) ceph won't be ready before june or so.
[14:42] <loicd> I've been an early adopter of DRBD and quite happy with it despite a few bumps. It required a lot of expertise to get the best out of it though.
[14:42] <nhm> loicd: It really depends on how you want to use it. RADOS will stabilize first, then the other layers on top of it.
[14:43] <loicd> I'm looking forward to using it with nova-volume, so I'm mostly interested in rbd
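Since loicd's target is rbd behind nova-volume, here is a minimal sketch of the nova-side configuration as it looked in that era, assuming a pool named "volumes"; the RBDDriver flag names shifted between OpenStack releases, so verify against your version.

    # create a dedicated pool for volume images (the pg count of 128 is illustrative)
    ceph osd pool create volumes 128
    # point nova-volume at the rbd driver
    cat >> /etc/nova/nova.conf <<'EOF'
    volume_driver=nova.volume.driver.RBDDriver
    rbd_pool=volumes
    EOF
    # restart the volume service to pick up the change
    service nova-volume restart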
[14:59] <nhm> loicd: definitely let us know how testing goes.
[14:59] <nhm> loicd: it's still pretty young software, so the more reports the better. :)
[15:01] <loicd> :-)
[15:25] * LarsFronius (~LarsFroni@e176058038.adsl.alicedsl.de) Quit (Quit: LarsFronius)
[17:08] * jmlowe (~Adium@c-71-201-31-207.hsd1.in.comcast.net) Quit (Quit: Leaving.)
[17:10] * adjohn (~adjohn@s24.GtokyoFL16.vectant.ne.jp) has joined #ceph
[17:17] * adjohn is now known as Guest95
[17:17] * Guest95 (~adjohn@s24.GtokyoFL16.vectant.ne.jp) Quit (Read error: Connection reset by peer)
[17:17] * adjohn (~adjohn@s24.GtokyoFL16.vectant.ne.jp) has joined #ceph
[17:17] * adjohn (~adjohn@s24.GtokyoFL16.vectant.ne.jp) Quit ()
[17:18] * cotolez (~cotolez@79.98.6.197) has joined #ceph
[17:19] <cotolez> Hi all, I'm looking at the procedure for replacing a failed hd (http://ceph.newdream.net/wiki/Replacing_a_failed_disk/OSD)
[17:19] <cotolez> Is there a way to automate the procedure?
[17:49] * LarsFronius (~LarsFroni@e176058038.adsl.alicedsl.de) has joined #ceph
[18:29] * cotolez (~cotolez@79.98.6.197) Quit (Ping timeout: 480 seconds)
[19:07] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[20:17] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Quit: Leaving)
[21:04] * f4m8_ (f4m8@kudu.in-berlin.de) Quit (Read error: Connection reset by peer)
[21:06] * f4m8_ (f4m8@kudu.in-berlin.de) has joined #ceph
[21:51] * oyijkl (~root@122.163.39.73) has joined #ceph
[21:54] * oyijkl (~root@122.163.39.73) Quit (Quit: IRC)
[22:02] * LarsFronius (~LarsFroni@e176058038.adsl.alicedsl.de) Quit (Quit: LarsFronius)
[22:04] * blufor (~blufor@mongo-rs2-1.candycloud.eu) Quit (Remote host closed the connection)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.