#ceph IRC Log


IRC Log for 2011-11-17

Timestamps are in GMT/BST.

[0:01] <joshd> grape: yeah, a lot of the config options aren't documented (or not very well)
[0:02] <grape> joshd: it just ties you guys up with support questions, not that you don't like lots of support questions or anything...
[0:06] <grape> is specifying the port (e.g. in "mon addr =") required, or are the default ports assumed?
[0:07] <grape> in a simple configuration, I should add
[0:07] <Tv> grape: there are at least some places that require the port, even though 6789 is the default
[0:07] <Tv> grape: some day we'll get to cleaning that up, for now it's safest to specify port
[0:07] <grape> Tv: great!
[0:07] <Tv> right now the only example i can think of is the kernel client mount address
[0:09] <grape> cool, I added them back in
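For reference, the kind of stanza being discussed might look like this in ceph.conf (the hostname and address below are placeholders; 6789 is the default monitor port, spelled out explicitly per Tv's advice):

```ini
; Hypothetical monitor section -- host and address are examples only.
[mon.a]
        host = mon-host-1
        mon addr =
```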
[0:22] * Tv (~Tv|work@aon.hq.newdream.net) has left #ceph
[0:25] * Tv (~Tv|work@aon.hq.newdream.net) has joined #ceph
[0:32] * tnt_ (~tnt@45.184-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[1:14] * aa (~aa@r200-40-114-26.ae-static.anteldata.net.uy) Quit (Remote host closed the connection)
[1:15] * Tv (~Tv|work@aon.hq.newdream.net) Quit (Read error: Operation timed out)
[2:08] * bchrisman (~Adium@ Quit (Quit: Leaving.)
[2:30] * aNoNymOus (~aNoNymOus@abo-35-35-69.lil.modulonet.fr) has joined #ceph
[2:31] * aNoNymOus (~aNoNymOus@abo-35-35-69.lil.modulonet.fr) Quit ()
[2:42] * jojy (~jvarghese@75-54-231-2.lightspeed.sntcca.sbcglobal.net) Quit (Quit: jojy)
[2:55] * jojy (~jvarghese@75-54-231-2.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[2:55] * jojy (~jvarghese@75-54-231-2.lightspeed.sntcca.sbcglobal.net) Quit ()
[2:57] * votz_ (~votz@pool-108-52-121-103.phlapa.fios.verizon.net) Quit (Quit: Leaving)
[2:59] * adjohn (~adjohn@50-0-164-220.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[3:16] * joshd (~joshd@aon.hq.newdream.net) Quit (Quit: Leaving.)
[3:24] * cp (~cp@ Quit (Quit: cp)
[5:01] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[5:06] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[5:23] * cp (~cp@c-98-234-218-251.hsd1.ca.comcast.net) has joined #ceph
[5:24] * cp (~cp@c-98-234-218-251.hsd1.ca.comcast.net) Quit ()
[6:50] * yoshi (~yoshi@p9224-ipngn1601marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[8:01] * tom (~tom@n16h50.rev.sprintdatacenter.pl) has joined #ceph
[8:03] * tnt__ (~tnt@45.184-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[8:14] <tom> does someone know how to create users with radosgw-admin?
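The question went unanswered in-channel; for the record, user creation is done with radosgw-admin's `user create` subcommand. A hedged sketch (the uid and display name are placeholders, and exact flags may vary between radosgw versions):

```shell
# Create a radosgw (S3/Swift gateway) user; on success the generated
# access key and secret key are printed as JSON.
radosgw-admin user create --uid=johndoe --display-name="John Doe"
```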
[8:35] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[9:00] * tom (~tom@n16h50.rev.sprintdatacenter.pl) has left #ceph
[9:14] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[9:20] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[9:25] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[9:29] * lxo (~aoliva@lxo.user.oftc.net) Quit ()
[9:30] * tnt__ (~tnt@45.184-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[9:30] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[9:30] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[9:49] * tnt__ (~tnt@212-166-48-236.win.be) has joined #ceph
[9:52] * Nightdog (~karl@190.84-48-62.nextgentel.com) has joined #ceph
[10:15] * rosco (~r.nap@ has joined #ceph
[10:16] * rosco is now known as Colomonkey
[10:29] * slang (~slang@chml01.drwholdings.com) Quit (Ping timeout: 480 seconds)
[11:09] * yoshi (~yoshi@p9224-ipngn1601marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[11:09] <chaos_> grape, one note about our talk yesterday: someone should change the comment at line 746; it says that op_r_lat is "client read latency", which is misleading
[12:04] * tnt__ (~tnt@212-166-48-236.win.be) Quit (Ping timeout: 480 seconds)
[12:06] * gregorg (~Greg@ has joined #ceph
[12:13] * tnt__ (~tnt@45.184-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[12:15] * mtk (~mtk@ool-44c35967.dyn.optonline.net) Quit (Remote host closed the connection)
[13:00] * tnt__ (~tnt@45.184-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[13:08] * tnt_ (~tnt@212-166-48-236.win.be) has joined #ceph
[14:39] * tools (~tom@ipx20310.ipxserver.de) Quit (Quit: Changing server)
[14:53] * mtk (~mtk@ool-44c35967.dyn.optonline.net) has joined #ceph
[15:43] * adsllc (~davel@cblmdm72-240-119-60.buckeyecom.net) has left #ceph
[15:49] * slang (~slang@chml01.drwholdings.com) has joined #ceph
[16:18] <grape> chaos_: I think you intended to direct the comment at line 746 to gregaf
[16:18] <grape> chaos_: about line 746, rather
[16:48] <chaos_> grape, sorry ;) i've hit g<tab>
[16:56] <grape> chaos_: no worries :-)
[17:10] * ghaskins_ (~ghaskins@68-116-192-32.dhcp.oxfr.ma.charter.com) has joined #ceph
[17:17] * ghaskins (~ghaskins@68-116-192-32.dhcp.oxfr.ma.charter.com) Quit (Ping timeout: 480 seconds)
[17:49] * tnt_ (~tnt@212-166-48-236.win.be) Quit (Ping timeout: 480 seconds)
[18:00] * tnt__ (~tnt@45.184-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[18:07] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Quit: Ex-Chat)
[18:35] * aliguori (~anthony@ has joined #ceph
[18:55] * Tv (~Tv|work@aon.hq.newdream.net) has joined #ceph
[19:04] * joshd (~joshd@aon.hq.newdream.net) has joined #ceph
[19:06] <yehudasa_> Tv: can you look at sepia22: SWIFT_TEST_CONFIG_FILE=/tmp/cephtest/archive/testswift.client.0.conf /tmp/cephtest/swift/virtualenv/bin/nosetests -w /tmp/cephtest/swift/test/functional
[19:06] <yehudasa_> ?
[19:07] <yehudasa_> Tv: for some reason it skips all tests and I'm having trouble figuring out why
[19:07] <Tv> yehudasa_: poking at it with a stick
[19:08] <yehudasa_> Tv: thanks
[19:08] <Tv> probably this:
[19:08] <Tv> for key in 'auth_host auth_port auth_ssl username password'.split():
[19:08] <Tv>     if not config.has_key(key):
[19:08] <Tv>         raise SkipTest
[19:09] <Tv> no username & password
[19:10] <yehudasa_> Tv: /tmp/cephtest/archive/testswift.client.0.conf does set username and password
[19:11] <gregaf> so it's probably that mysterious yaml issue causing it to fail to read the config?
[19:11] <Tv> grep -q username /tmp/cephtest/archive/testswift.client.0.conf || echo nope
[19:11] <Tv> nope
[19:12] <yehudasa_> oh, was looking at the wrong machine
[19:12] <yehudasa_> Tv: thanks
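The snippet Tv quoted is the check that decides whether the whole swift functional suite gets skipped. A minimal runnable sketch of that logic (assuming `config` is a plain dict; `has_key()` is the Python 2 spelling of `key in config`, and `SkipTest` here is a stand-in for the nose/unittest exception):

```python
class SkipTest(Exception):
    """Stand-in for the nose/unittest SkipTest exception."""

def check_config(config):
    # Every one of these keys must be present, or the whole functional
    # suite is skipped -- which is why a config file missing
    # username/password silently skips all tests.
    for key in 'auth_host auth_port auth_ssl username password'.split():
        if key not in config:  # Python 2's config.has_key(key)
            raise SkipTest('missing config key: %s' % key)

# A config missing 'username' and 'password' skips everything:
partial = {'auth_host': 'localhost', 'auth_port': 443, 'auth_ssl': 'no'}
try:
    check_config(partial)
    skipped = None
except SkipTest as e:
    skipped = str(e)
print(skipped)  # -> missing config key: username
```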
[19:13] * jojy (~jvarghese@ has joined #ceph
[19:21] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:21] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[19:36] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[19:42] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[19:50] * cp (~cp@ has joined #ceph
[19:56] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[20:34] * bchrisman (~Adium@ has joined #ceph
[21:24] * grape (~grape@c-76-17-80-143.hsd1.ga.comcast.net) Quit (Quit: leaving)
[22:08] * aa (~aa@r200-40-114-26.ae-static.anteldata.net.uy) has joined #ceph
[22:08] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[22:14] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[22:30] <sagewk> tv: maybe a chef.py task that just runs chef-client?
[22:30] <Tv> sagewk: that's what i was thinking, though not chef-client
[22:30] <sagewk> wget -q -O- https://raw.github.com/NewDreamNetwork/ceph-qa-chef/master/solo/solo-from-scratch | sh ?
[22:32] <sagewk> tv: let me know what you come up with, i'll put off running them by hand for now
[23:23] * Bircoph (~Bircoph@nat0.campus.mephi.ru) has joined #ceph
[23:25] <Bircoph> Hello,
[23:25] <Bircoph> first of all, ceph wiki looks like down right now:
[23:25] <Bircoph> Can't contact the database server: Lost connection to MySQL server at 'reading authorization packet', system error: 0 (mysql.ceph.newdream.net)
[23:26] <Bircoph> I plan to try Ceph on a small scientific computing cluster: it has 5 nodes, each with a small amount of storage (120 GB, upgrade is not possible), and I want to join most of this space into a distributed file system
[23:27] <Bircoph> the very same nodes will be used for computing; in spite of this I want to:
[23:27] <Bircoph> 1) decentralize traffic going to the current node from other nodes;
[23:29] <Bircoph> 2) if a node uses a Ceph block that resides on its local hdd, it would be great if it could access this block without transferring it to and from the monitor server
[23:29] <Bircoph> So I'm interested in how traffic will go if I mount via a ceph monitor (which is the only way to mount ceph):
[23:30] <Bircoph> will it go from the storage node to the monitor server and back to the mounting node, or directly from the storage node to the node that mounted it?
[23:30] <Bircoph> thanks for your attention
[23:31] <Bircoph> oh, I have one more question: is there any way to use disk quotas with ceph, maybe using some external means?
[23:31] <Bircoph> this is the only thing I really miss in Ceph
[23:32] <Bircoph> btw wiki is ok now, maybe it was some temporary failure
[23:32] <joshd> traffic goes from the clients (nodes mounting the ceph dfs) to the osds (storage nodes) - for metadata there's also traffic from clients to metadata servers (mds) and from mds to osd, unless the operations can be cached on the client
[23:34] <joshd> the monitors are used to coordinate cluster membership and distribute cluster maps to clients, but that's not much traffic compared to the osds
[23:36] <Bircoph> ok, thanks
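joshd's point above, that data flows directly between clients and OSDs while the monitors only distribute cluster maps, can be sketched in code. This is an illustrative toy, not Ceph's actual CRUSH algorithm: the client hashes each object name over the OSD list it got from the monitors and then talks to that OSD directly (all names here are hypothetical):

```python
import hashlib

def place_object(object_name, osds):
    """Toy placement: hash the object name over the list of OSDs.

    Ceph's real placement (CRUSH) is far more sophisticated, but the key
    property is the same: the client computes the location itself from
    the cluster map, so no file data ever flows through the monitors.
    """
    h = int(hashlib.md5(object_name.encode()).hexdigest(), 16)
    return osds[h % len(osds)]

# The cluster map (here reduced to a list of OSD names) comes from the
# monitors once; after that the client reads and writes each object
# straight to the OSD responsible for it.
cluster_map = ['osd0', 'osd1', 'osd2', 'osd3', 'osd4']
target = place_object('myfile.00000001', cluster_map)
```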
[23:36] <joshd> one caveat: if you want to run osds on the same machine as clients, you need to use fuse, since the kernel client can deadlock under memory pressure if it and an osd on the same machine are both trying to flush pages to process a write from that client
[23:36] <Bircoph> how large is metadata traffic compared to an ordinary data flow?
[23:37] <joshd> I'm not sure we've done much measurement there, but it shouldn't be much in terms of bandwidth
[23:37] <Bircoph> thanks again
[23:38] <joshd> oh, and in case you aren't aware, running multiple active mds isn't stable yet, but you can run one active and some standbys
[23:41] <Bircoph> I think I'll run a single mds: for computations one node will be a control/login server node and 4 nodes will be workers, so I'll put the mds and mon on that control node
[23:42] <Bircoph> and sorry, but I can't understand how fuse helps avoid the deadlock on page flushing
[23:54] * aa (~aa@r200-40-114-26.ae-static.anteldata.net.uy) Quit (Remote host closed the connection)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.