#ceph IRC Log

Index

IRC Log for 2011-09-06

Timestamps are in GMT/BST.

[2:11] * ghaskins (~ghaskins@66-189-113-47.dhcp.oxfr.ma.charter.com) Quit (Quit: Leaving)
[2:26] * ghaskins (~ghaskins@66-189-113-47.dhcp.oxfr.ma.charter.com) has joined #ceph
[4:08] * huangjun (~root@221.234.37.229) has joined #ceph
[4:38] * huangjun (~root@221.234.37.229) Quit (Quit: leaving)
[6:57] * mark-s (~mark-s@cpe-76-176-196-167.san.res.rr.com) has joined #ceph
[6:59] <mark-s> just curious, but what is the largest Ceph storage "unit" in testing right now? and also, what's a good ratio of MDS to storage? 1:15? 1:30? supposing each storage system had 144TB in it.
[6:59] <mark-s> thanks
[7:03] <mark-s> I'll look for responses in the archive later today ... bye
[7:03] * mark-s (~mark-s@cpe-76-176-196-167.san.res.rr.com) has left #ceph
[10:28] * Meths_ (rift@2.25.193.40) has joined #ceph
[10:28] * djlee (~dlee064@des152.esc.auckland.ac.nz) has joined #ceph
[10:29] <djlee> is the default number of PGs 198 for 6x 2TB disks?
[10:35] * Meths (rift@2.25.189.91) Quit (Ping timeout: 480 seconds)
[11:38] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[11:46] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[11:50] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[11:56] * yoshi (~yoshi@p5039-ipngn601marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[12:14] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[12:17] * morse (~morse@supercomputing.univpm.it) Quit ()
[12:18] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[12:19] <failbaitr> hey guys, I'm not getting my cosd to boot
[12:19] <failbaitr> it crashes with an error 17 after seeing snap 1
[12:19] <failbaitr> this is on Debian 6 stable
[12:20] <failbaitr> 2011-09-06 11:02:02.202239 7fd9e8ce5700 filestore(/data/osd0) snap create 'snap_1' got error 17
[12:20] <failbaitr> *** Caught signal (Aborted) **
[12:20] <failbaitr> the folder snap_1 is already present, and cannot be removed, as it's not empty, nor can it be emptied
[12:29] <failbaitr> ok, this might be bug #780
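Error 17 is the POSIX EEXIST code, which is why the leftover snap_1 from the previous run trips the OSD on restart. A minimal Python check (assuming any stock Python 3) confirms the mapping:

    import errno
    import os

    # "got error 17" in the cosd log is the standard POSIX EEXIST errno:
    print(errno.EEXIST)               # 17
    print(os.strerror(errno.EEXIST))  # File exists

The snap_N directories the OSD keeps on btrfs are subvolume snapshots rather than plain directories, which is why rmdir refuses to delete them; clearing a stale one takes the btrfs subvolume tooling of that era rather than ordinary file operations.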
[12:34] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[12:34] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[13:08] * julienhuang (~julienhua@mtl93-4-82-226-130-144.fbx.proxad.net) has joined #ceph
[13:21] * julienhuang (~julienhua@mtl93-4-82-226-130-144.fbx.proxad.net) Quit (Quit: julienhuang)
[13:24] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[13:27] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[14:08] * mtk (~mtk@ool-182c8e6c.dyn.optonline.net) has joined #ceph
[14:59] <failbaitr> anyone?
[15:07] <failbaitr> hmz
[15:32] <failbaitr> ok, so the stock Debian 6 kernel does not work with the current ceph server (.34)
[15:33] <failbaitr> but the 2.6.37 kernel from backports works just fine
[15:47] * cheg (~Adium@91.199.119.77) has joined #ceph
[16:31] * lxo (~aoliva@19NAADKN5.tor-irc.dnsbl.oftc.net) Quit (Remote host closed the connection)
[16:33] * lxo (~aoliva@9YYAABCJX.tor-irc.dnsbl.oftc.net) has joined #ceph
[17:52] * slang (~slang@chml01.drwholdings.com) Quit (Quit: Leaving.)
[18:10] * gregaf1 (~Adium@aon.hq.newdream.net) Quit (Remote host closed the connection)
[18:11] * gregaf (~Adium@aon.hq.newdream.net) has joined #ceph
[18:12] <gregaf> mark-s: largest I know of is 96 OSD nodes -- the number of MDSes you need is dependent on the amount of metadata ops, not the number of OSDs -- start with one and add more as you need them
[18:13] <gregaf> djlee: the default number of PGs is set based on the initial number of OSDs in the cluster; eventually it will be autobalancing but we haven't gotten to that yet
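For scale, the sizing heuristic that later made it into the Ceph docs (roughly 100 PGs per OSD, divided by the replica count, rounded up to a power of two) gives a feel for the numbers. The sketch below is that rule of thumb only, not the monitor's actual default computation:

    # Back-of-the-envelope PG sizing (a heuristic, not Ceph's real
    # default logic): ~100 PGs per OSD, divided by the replication
    # factor, rounded up to the next power of two.
    def suggested_pg_num(num_osds, replicas=2, pgs_per_osd=100):
        target = num_osds * pgs_per_osd // replicas
        pg_num = 1
        while pg_num < target:
            pg_num *= 2
        return pg_num

    print(suggested_pg_num(6))  # 512 for a 6-OSD cluster at 2x replication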
[18:15] <gregaf> failbaitr: hmm, what filesystem are you running the OSD on?
[18:15] <gregaf> and what do you mean, stock Debian 6 kernel doesn't work -- you mean you're seeing the crash with that but you upgraded and now it's fine?
[18:22] * slang (~slang@chml01.drwholdings.com) has joined #ceph
[18:22] * slang (~slang@chml01.drwholdings.com) has left #ceph
[18:22] * slang (~slang@chml01.drwholdings.com) has joined #ceph
[18:28] * Tv (~Tv|work@aon.hq.newdream.net) has joined #ceph
[18:29] * joshd (~joshd@aon.hq.newdream.net) has joined #ceph
[18:35] * greglap (~Adium@aon.hq.newdream.net) has joined #ceph
[18:38] * cmccabe (~cmccabe@c-24-23-254-199.hsd1.ca.comcast.net) has joined #ceph
[18:38] * greglap (~Adium@aon.hq.newdream.net) Quit ()
[18:39] * jojy (~jojyvargh@70-35-37-146.static.wiline.com) has joined #ceph
[19:28] * cheg (~Adium@91.199.119.77) Quit (Quit: Leaving.)
[19:40] * The_Bishop (~bishop@port-92-206-251-64.dynamic.qsc.de) has joined #ceph
[19:51] * The_Bishop (~bishop@port-92-206-251-64.dynamic.qsc.de) Quit (Remote host closed the connection)
[19:55] * Meths_ is now known as Meths
[20:20] * julienhuang (~julienhua@mtl93-4-82-226-130-144.fbx.proxad.net) has joined #ceph
[20:20] * julienhuang (~julienhua@mtl93-4-82-226-130-144.fbx.proxad.net) Quit ()
[20:21] * julienhuang (~julienhua@mtl93-4-82-226-130-144.fbx.proxad.net) has joined #ceph
[20:22] * julienhuang (~julienhua@mtl93-4-82-226-130-144.fbx.proxad.net) Quit ()
[20:51] <failbaitr> gregaf: btrfs
[20:53] <failbaitr> gregaf: I had a vanilla Debian 6 install (amd64), set up the osd on 1 machine with btrfs on a separate partition, running kernel 2.6.27 (or the like), and I got a crash+backtrace on the start of the cosd
[20:53] <failbaitr> I cleared the partition, recreated the btrfs filesystem, upgraded to 2.6.37 from Debian backports, recreated the ceph filesystem, and voila, all is happy and fine :)
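The fix here was simply a newer kernel: Debian 6's stock kernel (2.6.32) predates btrfs fixes the OSD's snapshot handling relied on. A hypothetical pre-flight check along these lines (the helper and the 2.6.37 floor are assumptions taken from failbaitr's result, not anything Ceph ships) would catch the mismatch before cosd crashes:

    import platform

    # Hypothetical pre-flight check, not part of Ceph: refuse to start
    # a btrfs-backed OSD on a kernel older than the one that worked above.
    MIN_KERNEL = (2, 6, 37)

    def running_kernel():
        release = platform.release()  # e.g. "2.6.32-5-amd64"
        version = release.split("-")[0]
        return tuple(int(p) for p in version.split(".")[:3])

    if running_kernel() < MIN_KERNEL:
        raise SystemExit("btrfs-backed cosd wants >= %s, running %s"
                         % (".".join(map(str, MIN_KERNEL)), platform.release()))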
[20:54] <failbaitr> I can create a bug including the backtrace log tomorrow if you want
[20:54] <gregaf> hmm
[20:54] <gregaf> yeah, that'd be good
[20:54] <failbaitr> No problem :)
[20:54] <gregaf> what ceph version was this?
[20:54] <failbaitr> .34
[20:54] <gregaf> okay
[20:54] <failbaitr> from the debian repos you guys host
[20:55] <gregaf> yeah
[20:55] <gregaf> sjust: were there filestore changes in 0.34?
[20:55] <failbaitr> When I started the cosd again it would give me an error 17; I'll include that in the bug
[20:55] <failbaitr> No, it was a new ceph install, on a new server
[20:55] <failbaitr> I'm new to ceph ;)
[20:55] <sjust> I don't think so
[20:56] <gregaf> okay
[20:56] <failbaitr> although not new to clustered filesystems
[20:56] <failbaitr> Love the single-config-file system btw, saves on the typos a lot :P
[20:56] <gregaf> heh
[20:56] <gregaf> lunchtime for me, but give us a bug in the tracker and we'll make sure somebody gets to it pretty quickly :)
[20:58] <failbaitr> I'm just doing a demo setup atm, considering ceph for a smallish cluster
[20:59] <failbaitr> so no need to hurry
[20:59] <failbaitr> (besides, its working on my end now)
[20:59] <failbaitr> and Bon appetit :)
[21:44] * adjohn (~adjohn@50.0.103.34) has joined #ceph
[22:33] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[23:00] <Tv> new docs: http://localhost:8080/ops/install/#installing-ceph-using-mkcephfs
[23:00] <Tv> err
[23:00] <Tv> http://ceph.newdream.net/docs/latest/ops/install/#installing-ceph-using-mkcephfs
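For anyone reading the log without following the link, the mkcephfs flow those docs describe boils down to one cluster-wide conf file plus a single command. A sketch of the era's single-node setup follows; every hostname, address, and path is a placeholder, and the flags are the ones documented for mkcephfs at the time:

    import subprocess
    import textwrap

    # Minimal single-node ceph.conf of the mkcephfs era; the host,
    # address, and data paths below are placeholders for illustration.
    conf = textwrap.dedent("""\
        [mon.0]
            host = node1
            mon addr = 192.168.0.10:6789
        [mds.a]
            host = node1
        [osd.0]
            host = node1
            osd data = /data/osd0
    """)
    with open("/etc/ceph/ceph.conf", "w") as f:
        f.write(conf)

    # -a initializes every daemon named in the conf (over ssh for
    # remote hosts); --mkfs creates the monitor and OSD data stores.
    subprocess.check_call(
        ["mkcephfs", "-a", "-c", "/etc/ceph/ceph.conf", "--mkfs"])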
[23:15] * bchrisman (~Adium@64.164.138.146) has joined #ceph
[23:23] <Tv> doing it wrong, very wrong.. "sudo vi /sbin/mkcephfs"
[23:24] <Tv> yay 10-character bugfix "backport"
[23:28] * hutchins (~hutchins@ltc-vpn.dothill.com) has joined #ceph
[23:39] * cheg (~Adium@85-250-130-179.bb.netvision.net.il) has joined #ceph
[23:49] * hutchins (~hutchins@ltc-vpn.dothill.com) Quit (Read error: Connection reset by peer)
[23:56] * lxo (~aoliva@9YYAABCJX.tor-irc.dnsbl.oftc.net) Quit (Remote host closed the connection)
[23:57] * bchrisman (~Adium@64.164.138.146) Quit (Quit: Leaving.)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.