#ceph IRC Log


IRC Log for 2010-08-07

Timestamps are in GMT/BST.

[0:37] * Osso (osso@AMontsouris-755-1-10-232.w90-46.abo.wanadoo.fr) Quit (Remote host closed the connection)
[0:37] * Osso (osso@AMontsouris-755-1-10-232.w90-46.abo.wanadoo.fr) has joined #ceph
[0:45] <todinini> ok, now the mds keeps running
[0:46] <sagewk> great, thanks for the report!
[0:46] <todinini> no prob
[0:47] <todinini> we still have quite a few issues with ceph, we will post them next week to the ml
[0:48] <sagewk> thanks
[0:48] <todinini> one of our biggest problems is the huge memory usage of cosd; we have atm 15 OSD servers with 1G RAM and 5G swap, and they still go OOM
[1:01] <todinini> in the output of 'ceph osd dump -o -', what does this mean?
[1:01] <todinini> pg_temp 0.1a [17]
[1:01] <todinini> pg_temp 0.52 [11]
[1:07] <todinini> now one of the cosd daemons cored: http://pastebin.com/ZmMs5wmC (0.22~rc, f5487fd11ba5f1ebae6014a53557c781292e0cca)
[1:12] <sagewk> todinini: can you reproduce the crash with osd logging on (debug osd = 20, debug filestore = 20)?
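
(As an aside, a minimal sketch of where those debug levels would go in ceph.conf, assuming the standard [osd] section; the values are the ones sagewk asked for:)

    [osd]
        ; verbose logging for the OSD and the underlying object store
        debug osd = 20
        debug filestore = 20
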
[1:17] <todinini> sagewk: here is the log http://tuxadero.com/multistorage/osd.14.log
[1:20] <sagewk> can you pastebin the output of 'ceph osd dump 1106 -o -'
[1:21] <todinini> http://pastebin.com/DiKdwqBk
[1:30] <todinini> osd18 dies as well http://tuxadero.com/multistorage/osd.18.log
[1:37] * allsystemsarego (~allsystem@188.26.32.97) Quit (Quit: Leaving)
[1:48] <todinini> and osd21 http://tuxadero.com/multistorage/osd.21.log
[2:25] * cowbar (edf0c0fd96@dagron.dreamhost.com) has joined #ceph
[2:26] <cowbar> howdy guys
[2:29] <cowbar> anyone know where in the docs it tells about the directory-size reporting feature
[2:32] <cowbar> ah nevermind I found it.
[2:46] <cowbar> looks like the wiki page about it got spammed, but found the blog posts about it.
[3:15] * Osso (osso@AMontsouris-755-1-10-232.w90-46.abo.wanadoo.fr) Quit (Quit: Osso)
[4:44] * akhurana (~ak2@c-98-232-30-233.hsd1.wa.comcast.net) has joined #ceph
[4:46] * akhurana is now known as Guest1163
[4:46] * Guest1163 (~ak2@c-98-232-30-233.hsd1.wa.comcast.net) Quit (Read error: Connection reset by peer)
[4:46] * akhurana (~ak2@c-98-232-30-233.hsd1.wa.comcast.net) has joined #ceph
[4:58] * akhurana (~ak2@c-98-232-30-233.hsd1.wa.comcast.net) Quit (Quit: akhurana)
[6:07] * akhurana (~ak2@c-98-232-30-233.hsd1.wa.comcast.net) has joined #ceph
[6:27] * akhurana (~ak2@c-98-232-30-233.hsd1.wa.comcast.net) Quit (Quit: akhurana)
[9:21] * allsystemsarego (~allsystem@188.26.32.97) has joined #ceph
[9:54] * mtg (~mtg@port-87-193-189-26.static.qsc.de) has joined #ceph
[10:25] * tjikkun_ (~tjikkun@195-240-122-237.ip.telfort.nl) has joined #ceph
[10:27] * tjikkun (~tjikkun@2001:7b8:356:0:204:bff:fe80:8080) Quit (Read error: No route to host)
[11:10] <jantje> I'm thinking how Ceph would behave in a multi-site environment with slow links
[11:11] <jantje> It would be cool to make it site aware
[11:11] <jantje> for example, users on site A read/write to Ceph OSDs that are physically on site A, and that data gets (slowly) replicated to site B and vice versa
[11:13] <jantje> (and without any replication: if a user on site A needs data from site B, it has to go over the slow link)
[11:14] <jantje> It would be cool, but I'm not sure how you could configure this, e.g. know for sure which user is from site A or site B, and which OSDs are located where
[11:14] <jantje> rsync is an alternative, but ...
[11:15] <jantje> and there probably would be issues when the link goes down, etc etc
[11:15] <jantje> so maybe not such a good idea
[11:33] <jantje> I've never tried it, but PXE-booted clients can mount their root filesystem from NFS; could that be Ceph as well? Anyone tried?
[11:33] <jantje> I guess as long as you have the kernel module loaded: no problem?
[12:16] * tjikkun_ (~tjikkun@195-240-122-237.ip.telfort.nl) Quit (Ping timeout: 480 seconds)
[13:24] * Osso (osso@AMontsouris-755-1-10-232.w90-46.abo.wanadoo.fr) has joined #ceph
[13:35] * Osso (osso@AMontsouris-755-1-10-232.w90-46.abo.wanadoo.fr) Quit (Quit: Osso)
[13:56] * alexxy (~alexxy@79.173.82.178) Quit (Ping timeout: 480 seconds)
[14:55] <darkfade1> jantje: been thinking about ceph root too
[14:55] <darkfade1> but dont know if it works - most probably it does
[14:56] <jantje> i'm not sure
[14:57] <jantje> for nfsroot you have to do root=/dev/nfs and nfsroot=ip:path
[14:57] <jantje> so ceph will need a mount option
[14:57] <jantje> I dont know if there is a mount option where you can specify the fstype
[15:00] <jantje> http://www.linuxhq.com/kernel/v2.6/33-rc4/Documentation/filesystems/nfsroot.txt
[15:00] <jantje> for example
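
(For comparison, mounting the in-kernel Ceph client by hand looks roughly like the lines below; the monitor address is an example and authentication is assumed to be off. A Ceph root filesystem would presumably need an initramfs that brings up the network and performs an equivalent mount before switching root:)

    # load the client module and mount from a monitor (example address)
    modprobe ceph
    mount -t ceph 192.168.0.1:6789:/ /mnt/ceph
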
[15:02] * jantje &
[15:11] <darkfade1> oh no i dont think you always need the root= or rootdev options
[15:11] <darkfade1> i think i'll try later
[15:12] <darkfade1> i can do in xen (kernel from outside vm) and so skip all the thinking about initial root mount :)
[15:15] <wido> jantje: a multi-site env is not what Ceph was designed for
[15:15] <wido> but it will work over multi-site if the latency is low enough
[15:16] <wido> but what would be cool is if you could specify "read" OSDs, a group of OSDs that your client will try to do most of its reads from
[15:16] <wido> but when you create a multi-site env you will always need some monitors in a third location to decide which location is up or down
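
(One way site-aware placement could be sketched is with a CRUSH rule that keeps a pool's replicas inside one site; this assumes a crush map that already defines a bucket called site-a, and it only constrains placement, not the read locality wido describes:)

    rule site-a-local {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        step take site-a
        step chooseleaf firstn 0 type host
        step emit
    }
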
[15:23] <wido> personally i don't like booting over NFS, always a hassle
[15:23] <wido> i use small IDE/SATA SSD's of 4GB (Transcend)
[15:24] <wido> then use NFS for /home or something
[15:24] <wido> or use iSCSI if i need more IOps
[15:34] <darkfade1> say, the transcend 4GB ones, they went up in price a lot, didnt they?
[15:34] <darkfade1> i wanted to order a few 2 weeks ago and they were like $55 each
[15:34] <darkfade1> or is it because they only list the fast model now
[15:34] <wido> i don't know, we bought about 100 a few months ago
[15:34] <wido> for $25 each or so
[15:34] <darkfade1> ok
[15:35] <wido> we use a lot of virtualization, so OS on a Transcend and run the rest off a SAN
[15:35] <darkfade1> yeah
[15:35] <darkfade1> and i even want them if i got local disk
[15:35] <wido> yes, true :) just for the OS
[15:35] <darkfade1> separating OS and data makes me just feel a lot better
[15:35] <wido> whenever your RAID controller starts acting up, your OS is still working
[15:36] <darkfade1> bbl i'll go try the root-thing or i'll never do it
[15:36] <wido> k
[15:37] <darkfade1> magic! i just managed to pick the one box where the osd is broken
[15:38] <wido> happens to me all the time ;)
[15:42] <darkfade1> ah, one has to put them in /etc/modules
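
(In other words, something along these lines in /etc/modules so the client module is loaded at boot; the single module name assumes the 2.6.3x in-tree client:)

    # /etc/modules: kernel modules to load at boot time
    ceph
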
[15:44] <darkfade1> urgls
[15:45] <darkfade1> to my left, there was a cup of healthy tea for later, to the right a coffee that i was just drinking
[15:45] <darkfade1> mixing them up is very strange.
[15:53] <darkfade1> i'm copying over most of the test vm now
[15:54] <darkfade1> around 2GB fs size
[15:54] <darkfade1> and it's sloooow :)
[15:54] <darkfade1> but i still got one system that shares mds and osd roles
[15:54] <darkfade1> i think thats the culprit
[16:03] <darkfade1> [1770395.000057] ceph: mds0 caps stale
[16:03] <darkfade1> do you know what "caps stale" means?
[16:12] <wido> darkfade1: yes, then your MDS is probably very slow
[16:13] <darkfade1> it switched to "hung" a minute later
[16:16] <darkfade1> i don't know how to go on. i figure i should fix the broken osd, move the data from the osd on the first mds host over to it, and then remove the osd on that host
[16:25] <darkfade1> wow: http://wartungsfenster.pastebin.org/452935
[16:25] <darkfade1> now that triggered some really cool issues
[16:25] <darkfade1> i think i should do something else
[16:26] <darkfade1> the messages at the start are just from restarting ceph
[17:21] * ghaskins_mobile (~ghaskins_@66-189-114-103.dhcp.oxfr.ma.charter.com) Quit (Quit: This computer has gone to sleep)
[17:21] * ghaskins_mobile (~ghaskins_@66-189-114-103.dhcp.oxfr.ma.charter.com) has joined #ceph
[17:21] * ghaskins_mobile (~ghaskins_@66-189-114-103.dhcp.oxfr.ma.charter.com) Quit (Remote host closed the connection)
[18:36] <jantje> is it that bad to run mds/osd on the same machine?
[18:37] <wido> jantje: no it's not, but the MDS can eat a lot of memory and CPU
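
(Co-locating them is just a matter of pointing both daemons at the same host in ceph.conf; a hypothetical fragment, with the daemon names and paths made up for illustration:)

    [mds.alpha]
        host = node1

    [osd.0]
        host = node1
        osd data = /srv/ceph/osd0
        osd journal = /srv/ceph/osd0/journal
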
[18:58] * mtg (~mtg@port-87-193-189-26.static.qsc.de) Quit (Quit: Verlassend)
[19:33] * sagelap (~sage@166.135.28.73) has joined #ceph
[20:14] * sagelap (~sage@166.135.28.73) Quit (Ping timeout: 480 seconds)
[22:39] * akhurana (~ak2@c-98-232-30-233.hsd1.wa.comcast.net) has joined #ceph
[22:59] * akhurana (~ak2@c-98-232-30-233.hsd1.wa.comcast.net) Quit (Quit: akhurana)
[23:09] * tjikkun (~tjikkun@2001:7b8:356:0:204:bff:fe80:8080) has joined #ceph
[23:22] * tjikkun (~tjikkun@2001:7b8:356:0:204:bff:fe80:8080) Quit (Ping timeout: 480 seconds)
[23:22] * tjikkun (~tjikkun@195-240-122-237.ip.telfort.nl) has joined #ceph
[23:58] * akhurana (~ak2@c-98-232-30-233.hsd1.wa.comcast.net) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.