#ceph IRC Log

IRC Log for 2011-07-21

Timestamps are in GMT/BST.

[0:01] * yoshi (~yoshi@KD027091032046.ppp-bb.dion.ne.jp) has joined #ceph
[0:14] <darkfaded> http://www.amazon.com/gp/product/0764570048/ $1.95 for a fun read. (object oriented ksh programming)
[0:16] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[0:19] <Tv> why.. would.. you..
[0:20] * Anticimex (anticimex@netforce.csbnet.se) Quit (Ping timeout: 480 seconds)
[0:24] * Anticimex (anticimex@netforce.csbnet.se) has joined #ceph
[1:00] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[1:22] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[1:26] * yoshi (~yoshi@KD027091032046.ppp-bb.dion.ne.jp) Quit (Remote host closed the connection)
[1:27] * Tv (~Tv|work@ip-64-111-111-107.dreamhost.com) has left #ceph
[1:42] * yoshi (~yoshi@KD027091032046.ppp-bb.dion.ne.jp) has joined #ceph
[2:08] * yoshi (~yoshi@KD027091032046.ppp-bb.dion.ne.jp) Quit (Ping timeout: 480 seconds)
[2:36] * joshd (~joshd@ip-64-111-111-107.dreamhost.com) Quit (Quit: Leaving.)
[2:54] * yoshi (~yoshi@p4094-ipngn1601marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[3:38] * bchrisman (~Adium@70-35-37-146.static.wiline.com) Quit (Quit: Leaving.)
[4:03] * cmccabe (~cmccabe@69.170.166.146) has left #ceph
[4:09] * MK_FG (~MK_FG@188.226.51.71) Quit (Quit: o//)
[4:10] * MK_FG (~MK_FG@188.226.51.71) has joined #ceph
[4:16] * bchrisman (~Adium@64.164.138.146) has joined #ceph
[4:17] * bchrisman (~Adium@64.164.138.146) Quit ()
[5:11] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) has joined #ceph
[5:17] * monrad-51468 (~mmk@domitian.tdx.dk) Quit (Ping timeout: 480 seconds)
[5:20] * monrad-51468 (~mmk@domitian.tdx.dk) has joined #ceph
[7:38] * greglap (~Adium@cpe-76-90-232-177.socal.res.rr.com) has joined #ceph
[7:38] * greglap (~Adium@cpe-76-90-232-177.socal.res.rr.com) Quit (Quit: Leaving.)
[7:44] * ajm (adam@adam.gs) Quit (Read error: Connection reset by peer)
[7:45] * ajm (adam@adam.gs) has joined #ceph
[8:01] * Dantman (~dantman@S0106001731dfdb56.vs.shawcable.net) Quit (Remote host closed the connection)
[8:18] * greglap (~Adium@cpe-76-90-232-177.socal.res.rr.com) has joined #ceph
[8:18] * greglap (~Adium@cpe-76-90-232-177.socal.res.rr.com) Quit ()
[9:05] * peritus (~andreas@h-150-131.a163.priv.bahnhof.se) has joined #ceph
[9:07] <peritus> sorry for bringing up something that might be obvious, but how far is ceph from being production ready?
[9:08] <peritus> i think that wikipedia, the ceph home page and the wiki give kind of different views
[9:08] <u3q> i think it depends on what your definition of production is
[9:08] <u3q> weve been using it in staging with almost no problems at all
[9:08] <peritus> wikipedia says nothing about experimental/heavy development and lists a stable release
[9:08] <u3q> but i still dont get the feeling that i want to put 20,000 users on it
[9:08] <u3q> in my prod environment
[9:09] <peritus> the ceph home page says nothing about experimental/heavy development
[9:09] <peritus> the wiki says "Ceph is under heavy development, and is not yet suitable for any uses other than benchmarking and review."
[9:09] <peritus> and the FAQ says "absolutely not for production use, heavy development"
[9:09] <peritus> so i am confused :)
[9:10] <peritus> u3q: yeah, of course it depends, but i am just trying to get an idea :)
[9:10] <u3q> i mean still having to reverse engineer monitoring
[9:11] <u3q> and sort of deciding your own best practices
[9:11] <u3q> are good signs its not really production ready for people who dont want to debug traces of profiling builds in production to help solve problems
[9:11] <u3q> imo at least
[9:11] <u3q> i am just a user tho
[9:12] <peritus> yeah, i guess that would not be ideal for a critical system
[9:12] <u3q> ya
[9:12] <u3q> although weve been using it for testing with great success
[9:12] <peritus> what kind of hardware do you run?
[9:12] <u3q> they are supermicro boxes with lsi controllers and 36 spindles per node
[9:14] <peritus> do you have a test cluster for ceph thats not part of your production stuff?
[9:14] <u3q> well we have a distributed production infrastructure so we bought a copy of one of the nodes to run as a staging env
[9:14] <u3q> so we sync data from prod over to it
[9:14] <u3q> and run prod load and user tests against it
[9:14] <u3q> with ceph
[9:14] <u3q> but our real users never touch it
[9:14] <peritus> i see
[9:14] <peritus> what do you use for production?
[9:15] <u3q> NFS
[9:15] <u3q> with rsync
[9:15] <u3q> is it the 1.4ghz?
[9:15] <u3q> er ww
[9:43] * wido (~wido@fubar.widodh.nl) Quit (Remote host closed the connection)
[9:50] * Dantman (~dantman@S0106001731dfdb56.vs.shawcable.net) has joined #ceph
[9:57] * wido (~wido@fubar.widodh.nl) has joined #ceph
[10:08] * wido (~wido@fubar.widodh.nl) Quit (Remote host closed the connection)
[10:09] * wido (~wido@rockbox.widodh.nl) has joined #ceph
[12:20] * phil_ (~quassel@chello080109010223.16.14.vie.surfer.at) has joined #ceph
[14:45] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[15:02] * yoshi (~yoshi@p4094-ipngn1601marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[16:36] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[16:38] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[16:46] * greglap (~Adium@166.205.143.236) has joined #ceph
[17:22] * Juul (~Juul@82.211.213.151) has joined #ceph
[17:39] * greglap (~Adium@166.205.143.236) Quit (Quit: Leaving.)
[17:50] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[18:07] * eternaleye_ (~eternaley@195.215.30.181) has joined #ceph
[18:09] * cmccabe (~cmccabe@c-24-23-254-199.hsd1.ca.comcast.net) has joined #ceph
[18:09] * eternaleye (~eternaley@195.215.30.181) Quit (Remote host closed the connection)
[18:15] * Juul (~Juul@82.211.213.151) Quit (Ping timeout: 480 seconds)
[18:23] * lxo (~aoliva@9KCAAAXVO.tor-irc.dnsbl.oftc.net) Quit (Ping timeout: 480 seconds)
[18:28] * phil_ (~quassel@chello080109010223.16.14.vie.surfer.at) Quit (Remote host closed the connection)
[18:33] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Remote host closed the connection)
[18:33] * joshd (~joshd@ip-64-111-111-107.dreamhost.com) has joined #ceph
[18:36] * lxo (~aoliva@09GAAFLGC.tor-irc.dnsbl.oftc.net) has joined #ceph
[18:52] * joshd (~joshd@ip-64-111-111-107.dreamhost.com) Quit (Quit: Leaving.)
[18:56] * jbd (~jbd@ks305592.kimsufi.com) has left #ceph
[18:56] * bchrisman (~Adium@70-35-37-146.static.wiline.com) has joined #ceph
[19:02] * gregaf (~Adium@ip-64-111-111-107.dreamhost.com) Quit (Quit: Leaving.)
[19:24] * jim (~chatzilla@astound-69-42-16-6.ca.astound.net) Quit (Quit: ChatZilla 0.9.87 [Firefox 4.0.1/20110609040224])
[19:25] * jim (~chatzilla@astound-69-42-16-6.ca.astound.net) has joined #ceph
[19:44] * sjust (~sam@ip-64-111-111-107.dreamhost.com) Quit (Remote host closed the connection)
[19:52] <sagewk> cmccabe: so the priority right now is to get these collectd plugins working
[19:53] <sagewk> let's not get too distracted by potential g_conf changes
[19:53] <cmccabe> sagewk: I don't think there's anything else that has to be done on the perfcounters side to make that happen
[19:53] <sagewk> great. we need to write the plugins themselves tho
[19:53] <sagewk> #1218
[19:54] <sagewk> basically two of them. one that gathers perfcounter stats, and one that pulls stuff out of the monitor. i'm finishing up the json formatted dumps now.
[19:54] * jbd (~jbd@ks305592.kimsufi.com) has joined #ceph
[19:56] <cmccabe> sagewk: I'm not too familiar with all of this
[19:57] <cmccabe> sagewk: what does collectd want to see exactly?
[19:57] <sagewk> for the perfcounters, basically everything we're dumping
[19:57] <sagewk> for the monitor, we want to distill it down to items of interest
[19:57] * yehudasa (~yehudasa@ip-64-111-111-107.dreamhost.com) Quit (Ping timeout: 480 seconds)
[19:58] <sagewk> cmccabe: btw, http://jsonlint.com/ doesn't like the trailing , in a list.. e.g. [1,2,] doesn't parse. :/
[19:58] <cmccabe> it's basically a key-value type situation?
[19:58] <cmccabe> or does collectd understand structured data
[19:58] <sagewk> not sure
[19:58] <sagewk> i suspect just kv
[19:59] <cmccabe> sagewk: http://whereswalden.com/2010/09/08/spidermonkey-json-change-trailing-commas-no-longer-accepted/
[20:00] <sagewk> so we should omit them
[20:00] <cmccabe> yeah, I guess so.
[20:03] <sagewk> k
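
(For context: a minimal C++ sketch of the separator handling a JSON dump needs so that strict parsers accept the output, as discussed above; the function name is illustrative and not the actual Ceph Formatter API.)

    // Emit a JSON array without a trailing comma, since strict parsers
    // reject constructs like [1,2,].
    #include <sstream>
    #include <string>
    #include <vector>

    std::string dump_json_array(const std::vector<int> &vals)
    {
      std::ostringstream oss;
      oss << "[";
      for (size_t i = 0; i < vals.size(); ++i) {
        if (i > 0)
          oss << ",";      // separator only *between* elements, never after the last
        oss << vals[i];
      }
      oss << "]";
      return oss.str();    // e.g. {1, 2, 3} -> "[1,2,3]"
    }
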
[20:13] * aliguori (~anthony@32.97.110.65) has joined #ceph
[20:18] * gregaf (~Adium@ip-64-111-111-107.dreamhost.com) has joined #ceph
[20:21] * joshd (~joshd@ip-64-111-111-107.dreamhost.com) has joined #ceph
[20:40] <wido> hi guys
[20:41] <wido> I've been doing some tests lately with my cluster and it really seems that the high number of PGs is killing my cluster
[20:42] <wido> in a recovery situation it seems to become too much for the cluster
[20:42] <wido> and I start seeing these weird crashes. When I keep my number of PGs low and don't put that much data on the cluster, everything is fine
[20:42] * jbd (~jbd@ks305592.kimsufi.com) has left #ceph
[20:43] <cmccabe> wido: I'm headed to lunch for a bit. I will say one thing, which is that sam has been working on the recovery code lately
[20:43] <cmccabe> wido: he might have some insight
[20:44] <wido> cmccabe: tnx, but I just wanted to 'report' this
[20:45] <wido> It's something we all suspected, but I started doing some actual tests
[21:07] <cmccabe> wido: yeah, I really think we need to start taking recovery more seriously
[21:07] <cmccabe> wido: and in particular, how to avoid cascading failures
[21:08] <cmccabe> wido: I almost feel like inserting some kind of randomized delay between seeing problems and starting recovery might be required
[21:08] <cmccabe> wido: either that or enlist the help of the monitors for rate-limiting
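
(For context: a minimal C++ sketch of the randomized delay cmccabe is suggesting before an OSD starts recovery; the function and parameter names are hypothetical and not existing Ceph code.)

    // Hypothetical: sleep a random interval after noticing a failure before
    // starting recovery, so many OSDs don't all begin recovering at once.
    #include <chrono>
    #include <random>
    #include <thread>

    void delay_before_recovery(double max_delay_seconds)
    {
      static std::mt19937 rng(std::random_device{}());
      std::uniform_real_distribution<double> dist(0.0, max_delay_seconds);
      std::this_thread::sleep_for(std::chrono::duration<double>(dist(rng)));
      // ...then start recovery, or ask a monitor for a recovery slot if
      // rate-limiting is coordinated centrally instead.
    }
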
[21:09] * aliguori (~anthony@32.97.110.65) Quit (Ping timeout: 480 seconds)
[21:11] * yehudasa (~yehudasa@ip-64-111-111-107.dreamhost.com) has joined #ceph
[21:12] * jbd (~jbd@ks305592.kimsufi.com) has joined #ceph
[21:18] <wido> cmccabe: something like that should be good
[21:19] <wido> I already set my recovery ops back to 1
[21:19] * aliguori (~anthony@32.97.110.64) has joined #ceph
[21:19] <wido> But the first process when an OSD boots, when it's scanning its local data, takes ages
[21:21] <cmccabe> wido: I don't think it has to do that unless there are missing objects? Maybe someone who knows more can clarify
[21:26] * jbd (~jbd@ks305592.kimsufi.com) has left #ceph
[21:29] <yehudasa> cmccabe: expressions like sizeof(sockaddr_un::sun_path) don't compile on certain compilers
[21:29] <cmccabe> yehudasa: yeah, I figured that out from your change
[21:30] <yehudasa> cmccabe: so the master branch fails to compile now
[21:30] <cmccabe> yehudasa: so I replaced my uses of that with the NULL pointer hack... at least the ones I happened to find
[21:30] <cmccabe> yehudasa: what wacky compiler are you using that has this problem?
[21:30] <yehudasa> cmccabe: gcc
[21:30] <cmccabe> what version?
[21:30] <yehudasa> 4.4.5
[21:30] <cmccabe> sigh
[21:30] <cmccabe> the march of standardization goes on
[21:30] <yehudasa> actually g++ 4.3.2
[21:31] <cmccabe> I'm on 4.4.5
[21:31] <yehudasa> yeah, well, I think we should compile on 4.3 too
[21:31] <cmccabe> it might be a new C++0x thing
[21:32] <cmccabe> anyway, it's pretty difficult for me to audit all the sizeofs
[21:32] <cmccabe> do you have an error list generated with g++ -k?
[21:34] <cmccabe> I don't have access to a machine with older gcc
[21:34] <cmccabe> at least not one that can compile anything in a reasonable amount of time
[21:34] <cmccabe> we should set up a gitbuilder for this eventually. And pick a really old gcc, like 4.1 or something
[21:35] <yehudasa> cmccabe: there are 3 places in admin_socket.cc that do sizeof(sockaddr_un::sun_path)
[21:36] <cmccabe> ok, will fix
[21:36] <yehudasa> other than that my grep didn't find anything
[21:36] <cmccabe> my grep actually found a bunch
[21:36] <cmccabe> cmccabe@metropolis:~/ceph$ grep -r 'sizeof([^)]*[:][^)]*' *
[21:36] <cmccabe> src/mds/MDS.cc: dout(10) << sizeof(elist<void*>::item) << "\t elist<>::item *7=" << 7*sizeof(elist<void*>::item) << dendl;
[21:36] <cmccabe> src/mds/MDS.cc: dout(10) << sizeof(elist<void*>::item) << "\t elist<>::item" << dendl;
[21:36] <cmccabe> src/mds/MDS.cc: dout(10) << sizeof(elist<void*>::item) << "\t elist<>::item *2=" << 2*sizeof(elist<void*>::item) << dendl;
[21:36] <cmccabe> src/mds/MDS.cc: dout(10) << sizeof(xlist<void*>::item) << "\t xlist<>::item *2=" << 2*sizeof(xlist<void*>::item) << dendl;
[21:36] <cmccabe> src/gtest/test/gtest_unittest.cc: EXPECT_LE(sizeof(testing::internal::AssertHelper), sizeof(void*));
[21:36] <cmccabe> src/gtest/include/gtest/internal/gtest-internal.h: (sizeof(::testing::internal::IsNullLiteralHelper(x)) == 1)
[21:36] <cmccabe> src/common/admin_socket.cc: if (sock_path.size() > sizeof(sockaddr_un::sun_path) - 1) {
[21:36] <cmccabe> src/common/admin_socket.cc: << (sizeof(sockaddr_un::sun_path) - 1);
[21:36] <cmccabe> src/common/admin_socket.cc: snprintf(address.sun_path, sizeof(sockaddr_un::sun_path),
[21:37] <yehudasa> oh, there's something yeah.. the MDS.cc isn't relevant
[21:37] <cmccabe> I think perhaps in those other cases, the thing after the :: is a type name, rather than a field name
[21:38] <gregaf> yep, elist::item is a type
[21:40] * yoshi (~yoshi@KD027091032046.ppp-bb.dion.ne.jp) has joined #ceph
[21:42] <yehudasa> cmccabe: #define sizeoffield(type, field) (sizeof(((type *)0)->field))
[21:42] <cmccabe> yehudasa: yeah, I did that in some other cases
[21:43] <cmccabe> yehudasa: in this case, I had an instance of the type so I didn't have to
[21:43] * jbd (~jbd@ks305592.kimsufi.com) has joined #ceph
[21:43] <yehudasa> cmccabe: I meant that we should define that somewhere and use it if we really needed instead of doing this cast every time
[21:44] <gregaf> why would you ever depend on the type? use variables whenever possible, remember
[21:44] <cmccabe> gregaf: well, there are some cases where you might not have an instance available. It's pretty rare though
[21:45] <gregaf> only example I can think of is when you're trying to memset a C struct or something, which wouldn't depend on any individual fields anyway
[21:45] <cmccabe> gregaf: you're right that in general we should encourage sizeof(instance) rather than sizeof(type)
[21:46] <cmccabe> gregaf: well, in this case, I might have wanted to check whether the path to the socket supplied was longer than the maximum path a socket could have
[21:46] <cmccabe> gregaf: as it turns out, I do this in the same function as actually creating the socket. But that's not a requirement.
[21:47] <cmccabe> gregaf: strangely, sys/un.h doesn't define an actual constant representing the max socket path
[21:48] <cmccabe> gregaf: so basically it's kind of a weird situation that probably won't come up that often
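
(For reference: a small self-contained example of the workaround discussed above. The older g++ versions mentioned in the log reject sizeof(sockaddr_un::sun_path) because sun_path is a non-static member, while the null-pointer form wrapped in the macro yehudasa pasted compiles portably. The program below is illustrative only.)

    #include <cstdio>
    #include <sys/un.h>

    // Macro from the discussion: size of a struct member without needing an
    // instance, via a cast of a null pointer that is never dereferenced
    // (sizeof does not evaluate its operand).
    #define sizeoffield(type, field) (sizeof(((type *)0)->field))

    int main()
    {
      // sizeof(sockaddr_un::sun_path) fails to compile on the older g++
      // versions mentioned above; this form works on old and new compilers.
      printf("max socket path length (excluding NUL): %zu\n",
             sizeoffield(struct sockaddr_un, sun_path) - 1);
      return 0;
    }
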
[21:51] * jbd (~jbd@ks305592.kimsufi.com) has left #ceph
[22:07] * yoshi (~yoshi@KD027091032046.ppp-bb.dion.ne.jp) Quit (Remote host closed the connection)
[22:20] * jim (~chatzilla@astound-69-42-16-6.ca.astound.net) Quit (Remote host closed the connection)
[22:23] * jim (~chatzilla@astound-69-42-16-6.ca.astound.net) has joined #ceph
[22:56] * aliguori (~anthony@32.97.110.64) Quit (Ping timeout: 480 seconds)
[23:08] * sjust (~sam@ip-64-111-111-107.dreamhost.com) has joined #ceph
[23:11] * aliguori (~anthony@32.97.110.64) has joined #ceph
[23:25] * Dantman (~dantman@S0106001731dfdb56.vs.shawcable.net) Quit (Remote host closed the connection)
[23:32] * sagewk (~sage@ip-64-111-111-107.dreamhost.com) Quit (Quit: Leaving.)
[23:35] * jbd (~jbd@ks305592.kimsufi.com) has joined #ceph
[23:54] * jbd (~jbd@ks305592.kimsufi.com) has left #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.