#ceph IRC Log


IRC Log for 2010-09-02

Timestamps are in GMT/BST.

[1:02] * gregaf (~Adium@ip-66-33-206-8.dreamhost.com) Quit (Quit: Leaving.)
[2:24] * sagelap (~sage@c-24-218-65-120.hsd1.ma.comcast.net) has joined #ceph
[2:41] * sagelap (~sage@c-24-218-65-120.hsd1.ma.comcast.net) Quit (Ping timeout: 480 seconds)
[3:33] * Osso (osso@AMontsouris-755-1-6-62.w86-212.abo.wanadoo.fr) Quit (Quit: Osso)
[3:34] * mbrknewy (~mbrokeos@209.236.250.213) has joined #ceph
[3:39] <mbrknewy> welcome
[3:39] <mbrknewy> ceph is up 5% today
[3:49] * mbrknewy (~mbrokeos@209.236.250.213) Quit (Read error: Connection reset by peer)
[3:51] * MK_FG (~fraggod@188.226.51.71) Quit (Quit: o//)
[3:53] * mbrknewy (~mbrokeos@209.236.250.213) has joined #ceph
[5:17] * MK_FG (~fraggod@wall.mplik.ru) has joined #ceph
[5:39] * Kazuhiro (~paul@ppp244-218.static.internode.on.net) has joined #ceph
[6:07] <Kazuhiro> Is it recommended to have an OSD per disk and not bother with hardware RAID/mdadm on each storage server?
[6:08] <mbrknewy> sounds like a good question
[6:09] <monrad-51468> i guess that depends a bit on how many disks each storage server has
[6:16] <mbrknewy> each storage server has only 4
[6:17] <mbrknewy> lets say
[6:24] <Kazuhiro> I'd be looking at 4-16 drives per server (x 2 servers). Initially I would be using fewer drives per server while using old hardware to test everything and learn how it all works. Also, are any people using ceph in production, specifically the RBD client + ext3/4?
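
(For context, a rough sketch of the one-OSD-per-disk layout being asked about here; hostnames, device paths and the exact ceph.conf section syntax are illustrative assumptions that vary with the Ceph version, not details taken from this discussion.)

    ; one cosd daemon per raw disk, instead of one per mdadm/hardware-RAID device
    [osd]
            osd data = /data/osd$id
            osd journal = /data/osd$id/journal
    [osd0]
            host = store1
            btrfs devs = /dev/sdb
    [osd1]
            host = store1
            btrfs devs = /dev/sdc
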
[6:49] <monrad-51468> i am not a ceph guru, i just noticed some talk about it some time ago
[6:49] <monrad-51468> i have not even gotten around to making a test ceph installation yet
[6:50] * f4m8_ is now known as f4m8
[6:51] <MK_FG> I noticed there are huge warning signs like "don't use in production" all over the docs
[6:57] <mbrknewy> it could be in production if it's a mirror anyway, serving as a backup
[6:59] <MarkN1> Even as an archive I still get the occasional crash - though no data lost so far.
[7:58] * mbrknewy (~mbrokeos@209.236.250.213) Quit (Quit: ++// S>V V<S =DOTENTER_))))))|)
[8:17] * tjikkun (~tjikkun@195-240-122-237.ip.telfort.nl) Quit (Quit: Ex-Chat)
[8:17] * lidongyang (~lidongyan@222.126.194.154) Quit (Remote host closed the connection)
[8:18] * lidongyang (~lidongyan@222.126.194.154) has joined #ceph
[8:38] * tjikkun (~tjikkun@2001:7b8:356:0:204:bff:fe80:8080) has joined #ceph
[8:44] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) has joined #ceph
[8:50] * allsystemsarego (~allsystem@188.25.128.208) has joined #ceph
[9:23] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) Quit (Quit: Yoric)
[9:34] * ezgreg (~Greg@78.155.152.6) has joined #ceph
[9:36] * littlejo (~joseph@78.155.152.6) has joined #ceph
[10:10] * Yoric (~David@213.144.210.93) has joined #ceph
[10:45] * Jiaju (~jjzhang@222.126.194.154) has joined #ceph
[12:15] * Kazuhiro (~paul@ppp244-218.static.internode.on.net) Quit (Quit: Leaving)
[13:04] * Yoric_ (~David@213.144.210.93) has joined #ceph
[13:04] * Yoric (~David@213.144.210.93) Quit (Read error: Connection reset by peer)
[13:04] * Yoric_ is now known as Yoric
[13:32] * MK_FG (~fraggod@wall.mplik.ru) Quit (Remote host closed the connection)
[13:48] * Osso (osso@AMontsouris-755-1-6-62.w86-212.abo.wanadoo.fr) has joined #ceph
[13:48] * Osso_ (osso@AMontsouris-755-1-6-62.w86-212.abo.wanadoo.fr) has joined #ceph
[13:48] * Osso (osso@AMontsouris-755-1-6-62.w86-212.abo.wanadoo.fr) Quit (Read error: Connection reset by peer)
[13:48] * Osso_ is now known as Osso
[13:53] * Yoric (~David@213.144.210.93) Quit (Quit: Yoric)
[14:12] <todinini> wido: you asked for me?
[14:56] * MK_FG (~fraggod@188.226.51.71) has joined #ceph
[16:29] * jantje (~jan@shell.sin.khk.be) Quit (Quit: leaving)
[16:44] * sagelap (~sage@c-24-218-65-120.hsd1.ma.comcast.net) has joined #ceph
[16:45] * Osso (osso@AMontsouris-755-1-6-62.w86-212.abo.wanadoo.fr) Quit (Quit: Osso)
[16:47] * f4m8 is now known as f4m8_
[17:11] * sagelap (~sage@c-24-218-65-120.hsd1.ma.comcast.net) Quit (Ping timeout: 480 seconds)
[17:15] * sagelap (~sage@c-24-218-65-120.hsd1.ma.comcast.net) has joined #ceph
[17:28] * sagelap (~sage@c-24-218-65-120.hsd1.ma.comcast.net) Quit (Read error: Connection reset by peer)
[17:32] * Yoric (~David@213.144.210.93) has joined #ceph
[18:07] * monrad (~mmk@domitian.tdx.dk) has joined #ceph
[18:09] * jbl_ (~jbl@charybdis-ext.suse.de) has joined #ceph
[18:11] * littlejo (~joseph@78.155.152.6) Quit (reticulum.oftc.net magnet.oftc.net)
[18:11] * ezgreg (~Greg@78.155.152.6) Quit (reticulum.oftc.net magnet.oftc.net)
[18:11] * pruby (~tim@leibniz.catalyst.net.nz) Quit (reticulum.oftc.net magnet.oftc.net)
[18:11] * jbl (~jbl@charybdis-ext.suse.de) Quit (reticulum.oftc.net magnet.oftc.net)
[18:11] * darktim (~andre@pcandre.nine.ch) Quit (reticulum.oftc.net magnet.oftc.net)
[18:11] * monrad-51468 (~mmk@domitian.tdx.dk) Quit (reticulum.oftc.net magnet.oftc.net)
[18:12] * littlejo (~joseph@78.155.152.6) has joined #ceph
[18:12] * pruby (~tim@leibniz.catalyst.net.nz) has joined #ceph
[18:12] * darktim (~andre@pcandre.nine.ch) has joined #ceph
[18:12] * ezgreg (~Greg@78.155.152.6) has joined #ceph
[18:12] * darktim (~andre@pcandre.nine.ch) Quit (Ping timeout: 481 seconds)
[18:12] * darktim (~andre@pcandre.nine.ch) has joined #ceph
[18:29] <wido> todinini: yes, was something about the Qemu snapshots, but that's already sorted out
[18:29] <wido> yehudasa: Gave the snapshots a try again, but snapshotting a running VM (qemu-img or rbd) doesn't work; I have to shut down the VM before the snapshot works
[18:30] <wido> snapshotting while the VM runs results in a snapshot becoming "active" after the VM shuts down
[18:30] <wido> expected behaviour at this moment?
[19:18] <yehudasa> wido: what do you mean by "becoming active"?
[19:28] * Yoric (~David@213.144.210.93) Quit (Quit: Yoric)
[19:44] <wido> yehudasa: hard to explain, uhm
[19:45] <yehudasa> how do you do the snapshotting while the vm runs?
[19:46] <wido> when I snapshot the VM while it's running, it doesn't work. But it seems the snapshot point goes to the point where the VM shuts down
[19:46] <wido> I tried both qemu-img snapshot -c and "rbd"
[19:46] <yehudasa> oh
[19:46] <yehudasa> that's the reason
[19:46] <yehudasa> both qemu-img and rbd are external utilities to the running vm
[19:46] <wido> I tried both, that's what I mean
[19:46] <wido> ah, i thought so, so the VM doesn't know the snapshot is created
[19:47] <yehudasa> the running vm is not aware of the new snapshot. What you need is to do a 'savevm' operation
[19:47] <yehudasa> via virsh
[19:47] <wido> that will cause the VM to go "down"
[19:47] <yehudasa> actually, not sure how you do it via virsh
[19:48] <wido> snapshotting without any interruption is not possible at the moment
[19:48] <yehudasa> I actually did it before
[19:48] <yehudasa> I did it via the running qemu console
[19:49] <yehudasa> I didn't use virsh
[19:49] <yehudasa> I can ask our guy here that used virsh how he did it exactly
[19:49] <wido> ok, great :) I'll give that a try then
[19:50] <yehudasa> he says 'virsh snapshot-create guestname'
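
(For context, a hedged sketch contrasting the two approaches discussed here; "myguest", "snap1" and the rbd image spec are placeholder names, not taken from this discussion.)

    # external tools: the running qemu process never learns about the new snapshot
    qemu-img snapshot -c snap1 rbd:rbd/myimage
    # going through the management layer instead, so the running guest is involved
    virsh snapshot-create myguest
    # or the equivalent "savevm snap1" in the running guest's qemu monitor console
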
[19:58] <wido> but then you need RBD support in libvirt
[19:59] <wido> But I get the point, the qemu process doesn't know about the snapshot
[19:59] <yehudasa> right
[19:59] * rklahn (~rklahn@38.104.128.78) has joined #ceph
[20:00] <wido> you have to "inform" it somehow that a snapshot is created
[20:00] <wido> and when you shut it down and start it again, it notices the snapshot
[20:00] <yehudasa> exactly.. not an easy task
[20:00] <yehudasa> yep
[20:00] <wido> I get the problem, but it's not really easy in a big setup where you have lots of VMs which you all want to snapshot
[20:01] <yehudasa> when a snapshot is created we need to notify all running clients about the new snapshot, and wait for them to ack on that
[20:01] <wido> is there such a method in qemu?
[20:02] <yehudasa> no
[20:02] <yehudasa> that's why at the moment you should create it via virsh
[20:02] <yehudasa> I don't think you need rbd support in libvirt
[20:02] <yehudasa> other than being able to run it via libvirt..
[20:03] <wido> btw, there is no "snapshot-create" in libvirt?
[20:04] <wido> oh, seems something pretty new
[20:05] <wido> it's there since virsh 0.8.1, I'm running 0.7.5
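
(A quick way to check which side of that cutoff a host is on:)

    virsh --version    # snapshot-create only appears from libvirt 0.8.1 onwards
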
[20:05] <wido> i'm afk, tnx!
[20:06] <yehudasa> ok
[20:24] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) has joined #ceph
[20:53] * rklahn (~rklahn@38.104.128.78) Quit (Quit: rklahn)
[21:13] * Osso (osso@AMontsouris-755-1-6-62.w86-212.abo.wanadoo.fr) has joined #ceph
[21:33] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) Quit (Quit: Yoric)
[21:35] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) has joined #ceph
[21:46] * allsystemsarego (~allsystem@188.25.128.208) Quit (Quit: Leaving)
[21:58] <wido> yehudasa: I've created some patches for Ubuntu's libvirt 0.7.5 and 0.8.3 and qemu-kvm 0.12.3, can be found at: http://zooi.widodh.nl/ceph/qemu-kvm/
[21:58] <wido> this way you can backport the RBD support into the mainstream Ubuntu code so you can create .debs of it
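
(A hedged sketch of the backport workflow being described; the patch filename is a placeholder, the actual patches are at the URL above.)

    # rebuild Ubuntu's qemu-kvm with the RBD patch applied
    apt-get source qemu-kvm                   # Ubuntu's 0.12.3, as mentioned above
    apt-get build-dep qemu-kvm
    cd qemu-kvm-0.12.3*/
    patch -p1 < ../qemu-kvm-rbd.patch         # placeholder filename
    dpkg-buildpackage -rfakeroot -b           # produces the .debs
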
[21:58] <yehudasa> great
[23:24] * ezgreg (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[23:24] * ezgreg (~Greg@78.155.152.6) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.