#ceph IRC Log


IRC Log for 2011-10-15

Timestamps are in GMT/BST.

[0:16] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[0:19] * jmlowe (~Adium@129-79-195-139.dhcp-bl.indiana.edu) Quit (Quit: Leaving.)
[0:23] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[0:25] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[0:28] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[0:31] * lxo (~aoliva@lxo.user.oftc.net) Quit ()
[0:33] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[0:44] <gregaf> df__: it depends on the circumstances — stat'ing by itself should be pretty quick but depending on what all happens it might force the other node to dump its buffers out to disk
[0:44] <gregaf> (if that happens under *all* stats then we have a bug; it's designed to not force that dump unless you try and do a great deal more than just look at file size)
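A minimal sketch of the scenario gregaf describes, assuming a CephFS mount at /mnt/ceph shared by two nodes (the path and file name are illustrative, not taken from the discussion):

    import os

    # On node A (the writer): data may still sit in the client's buffers.
    fd = os.open('/mnt/ceph/shared.log', os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    os.write(fd, b'buffered data not yet on disk\n')

    # On node B (the observer): a plain stat. Per the note above, returning an
    # accurate st_size is normally quick, but in some circumstances it can force
    # node A to flush its buffered writes to disk first.
    st = os.stat('/mnt/ceph/shared.log')
    print(st.st_size)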
[0:54] <conner> anyone know what the current largest ceph deployment is?
[0:55] <conner> I'm doing a GPFS deployment because there weren't any other production ready options
[0:59] * greglap (~Adium@aon.hq.newdream.net) has joined #ceph
[0:59] <greglap> conner: depends on what level of the stack you're discussing
[1:00] <conner> greglap, OIC, well I guess all the way to behaving with a POSIX interface would be my interest
[1:00] <greglap> the filesystem isn't production-ready for most uses yet, but the largest existing cluster I'm aware of is ~90 daemons in a testing environment
[1:00] <conner> how large are the OSTs?
[1:01] <greglap> not sure how big the OSDs are
[1:01] <conner> we care more about space than performance
[1:02] <conner> we've been buying these 48 3.5" disk bay nodes
[1:02] <greglap> we have a cluster here of like 1.5 PB in 1.5 racks or something that's running the object store but not the filesystem
[1:03] <conner> that's interesting... is it lots of small nodes or a few big guys?
[1:03] <greglap> it's running 1 daemon/disk
[1:04] <conner> oh interesting, so no hardware raid at all?
[1:04] <greglap> (I'm in a meeting now so responses may take a while, sorry)
[1:04] <greglap> I think they've got RAID cards but it's in JBOD mode?
[1:05] <greglap> sorry, not super-familiar with the hardware itself
[1:06] <conner> greglap, has anyone looked at plugging irods into the ceph OSDs?
[1:07] <greglap> I'm not familiar with irods?
[1:08] <conner> https://www.irods.org/index.php/IRODS:Data_Grids,_Digital_Libraries,_Persistent_Archives,_and_Real-time_Data_Systems
[1:10] <greglap> heh, that'll take a while to go through :)
[1:11] <conner> greglap, it's sort of a meta layer... it handles moving files around, replication, and object access but it doesn't do storage
[1:11] <conner> but it's rule/policy based... you can write rules for everything
[1:12] <greglap> probably not then — RADOS handles placement on its own and if you try and make somebody else handle placement you'd break the entire system
[1:12] <conner> well rados could be treated as a single storage backend
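A rough illustration of conner's point that RADOS can be treated as a single storage backend, using the librados Python bindings; the pool and object names here are assumptions, not anything agreed on above:

    import rados

    # Connect to the cluster using the local ceph.conf (path assumed).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # Open an I/O context on a pool (pool name is illustrative).
    ioctx = cluster.open_ioctx('data')

    # RADOS decides placement itself via CRUSH; a layer such as irods would
    # only name objects and hand over the bytes.
    ioctx.write_full('archive-object-001', b'payload from a higher layer')
    print(ioctx.read('archive-object-001'))

    ioctx.close()
    cluster.shutdown()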
[1:44] * jojy (~jojyvargh@108.60.121.114) Quit (Quit: jojy)
[1:56] * verwilst (~verwilst@dD576F744.access.telenet.be) Quit (Quit: Ex-Chat)
[1:58] * greglap (~Adium@aon.hq.newdream.net) Quit (Quit: Leaving.)
[2:00] * Dantman (~dantman@S010600259c4d54ff.vs.shawcable.net) Quit (Remote host closed the connection)
[2:17] * adjohn (~adjohn@50-0-92-177.dsl.dynamic.sonic.net) has joined #ceph
[2:22] * joshd (~joshd@aon.hq.newdream.net) Quit (Quit: Leaving.)
[2:39] * joshd (~joshd@aon.hq.newdream.net) has joined #ceph
[2:45] * bchrisman (~Adium@108.60.121.114) Quit (Quit: Leaving.)
[3:39] * joshd (~joshd@aon.hq.newdream.net) Quit (Quit: Leaving.)
[4:11] * jojy (~jojyvargh@75-54-231-2.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[4:11] * jojy (~jojyvargh@75-54-231-2.lightspeed.sntcca.sbcglobal.net) Quit ()
[4:49] * adjohn (~adjohn@50-0-92-177.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[5:32] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) has joined #ceph
[6:34] * yehuda_hm (~yehuda@99-48-179-68.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[6:47] * adjohn (~adjohn@50-0-92-177.dsl.dynamic.sonic.net) has joined #ceph
[7:25] * jclendenan (~jclendena@204.244.194.20) Quit (Quit: Leaving)
[8:42] * adjohn (~adjohn@50-0-92-177.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[9:10] * greglap (~Adium@cpe-24-24-170-80.socal.res.rr.com) has joined #ceph
[9:10] * greglap (~Adium@cpe-24-24-170-80.socal.res.rr.com) Quit ()
[11:00] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) Quit (Ping timeout: 480 seconds)
[12:51] <df__> dd: writing `/mnt/ceph/lf.7453.12618.27625': File too large
[12:51] <df__> 1099511627776 bytes (1.1 TB) copied, 12870.9 s, 85.4 MB/s
[12:51] <df__> is that meant to happen?
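The 1099511627776 bytes in the paste is exactly 1 TiB, which matches Ceph's default maximum file size (the "mds max file size" setting), so an EFBIG at that boundary looks like the configured limit rather than a bug. A minimal sketch of the same failure, with an assumed path on a CephFS mount:

    import errno, os

    # LIMIT is the byte count from the dd output above: the default 1 TiB cap.
    LIMIT = 1099511627776
    path = '/mnt/ceph/lf.test'          # illustrative path

    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        os.lseek(fd, LIMIT, os.SEEK_SET)
        os.write(fd, b'x')              # expected to fail once past the limit
    except OSError as e:
        assert e.errno == errno.EFBIG   # "File too large"
    finally:
        os.close(fd)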
[15:10] * verwilst (~verwilst@dD576F1FD.access.telenet.be) has joined #ceph
[15:34] * verwilst (~verwilst@dD576F1FD.access.telenet.be) Quit (Quit: Ex-Chat)
[15:34] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[16:42] * iribaar_ (~iribaar@200.111.172.142) Quit (Read error: Operation timed out)
[17:36] * alexxy[home] (~alexxy@79.173.81.171) Quit (Remote host closed the connection)
[17:42] * alexxy (~alexxy@79.173.81.171) has joined #ceph
[17:50] * n0de (~ilyanabut@c-24-127-204-190.hsd1.fl.comcast.net) has joined #ceph
[17:56] * n0de (~ilyanabut@c-24-127-204-190.hsd1.fl.comcast.net) Quit (Quit: This computer has gone to sleep)
[19:20] * Dantman (~dantman@S010600259c4d54ff.vs.shawcable.net) has joined #ceph
[19:30] * n0de (~ilyanabut@c-24-127-204-190.hsd1.fl.comcast.net) has joined #ceph
[19:58] * n0de (~ilyanabut@c-24-127-204-190.hsd1.fl.comcast.net) Quit (Quit: This computer has gone to sleep)
[22:50] * Dantman (~dantman@S010600259c4d54ff.vs.shawcable.net) Quit (Read error: Operation timed out)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.