#ceph IRC Log

IRC Log for 2012-12-16

Timestamps are in GMT/BST.

[0:15] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[0:19] <Psi-jack> I'm guessing nobody is presently around to give me a little hand with crush maps?
[0:30] <janos> Psi-jack - you using kvm directly, or through something like libvirt for your VM's?
[0:34] <Psi-jack> I'm personally using kvm via Proxmox VE 2.2 currently.
[0:34] <janos> ah. i messed with it a little - i eventually hope to get mine running on rbd images
[0:34] <Psi-jack> It supports ceph, minimally, but functionally as-is. Can create pool images, and display them. :)
[0:34] <janos> about to blow away my first ceph cluster though. i think i've leanr-mangled it beyond my ability to repair
[0:34] <Psi-jack> janos: I had to use a gateway VM to convert my qcow2 disk images to RBD. :)
[0:35] <janos> *learn
[0:35] <janos> i'm still very much in the learn-and-burn phase
[0:35] <Psi-jack> One VM with its own base OS: attach the old disk brought in from NFS plus a new, unused RBD disk, then partition, format, rsync old to new, chroot in, install grub, and adjust fstab.
[0:36] <janos> ah. i've actually have pretty good fortune with standard dd
[0:36] <Psi-jack> This works because you're rsyncing the raw data from disk to disk, and not the extra mounts from dev,proc,run,sys, etc.
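
The migration Psi-jack describes, as a rough shell sketch: a throwaway gateway VM with the old qcow2-backed disk attached from NFS and a new, empty RBD-backed disk. The device names and mount points below are examples, not taken from his setup.

    # old root filesystem on /dev/vdb1, new RBD-backed disk on /dev/vdc (example names)
    parted -s /dev/vdc mklabel msdos mkpart primary 1MiB 100%
    mkfs.ext4 /dev/vdc1
    mkdir -p /mnt/old /mnt/new
    mount /dev/vdb1 /mnt/old
    mount /dev/vdc1 /mnt/new
    rsync -aAXH /mnt/old/ /mnt/new/      # file data only; no /dev, /proc, /sys, /run to drag along
    mount --bind /dev  /mnt/new/dev
    mount --bind /proc /mnt/new/proc
    mount --bind /sys  /mnt/new/sys
    chroot /mnt/new grub-install /dev/vdc
    chroot /mnt/new update-grub
    vi /mnt/new/etc/fstab                # point the root entry at the new device/UUID
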
[0:36] <janos> either from lv's or images, or raw block devices
[0:36] <Psi-jack> janos: dd works, too, but doesn't work so well from qcow2 image files. ;)
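
For a straight conversion with no restructuring, qemu-img can also write a qcow2 image directly into RBD, provided it was built with rbd support. A minimal sketch; the pool and image names are examples.

    # convert a qcow2 file into a new RBD image ("rbd" pool and "vm-disk" image are examples)
    qemu-img convert -f qcow2 -O raw vm-disk.qcow2 rbd:rbd/vm-disk
    rbd info rbd/vm-disk     # confirm the image exists and check its size
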
[0:37] <janos> ah, i have not tried that path. all mine are block devices atm
[0:37] <Psi-jack> My method allows you to re-structure how you want. I had some qcow2's that were sized to 32GB, but only used 2.9 GB or less. :)
[0:37] <janos> well i take that back, raw block devices or lvm-backed
[0:37] <Psi-jack> Heh
[0:37] <janos> ah that's a good point
[0:37] <Psi-jack> I stopped using raw a long time ago because it was a waste of provisioned space.
[0:37] <janos> yeah i'm finding the same
[0:38] <Psi-jack> Sparse data, aka, Thin provisioning, is awesome. :)
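
RBD images are thin-provisioned: objects are only allocated in RADOS as blocks are written, so a large image costs nothing up front. A minimal sketch with example names; --size is in megabytes for the rbd tool of this era.

    rbd create rbd/webserver-disk --size 32768    # a 32 GB image; consumes no space until written
    rbd info rbd/webserver-disk
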
[0:38] <janos> some day i shall get there
[0:38] <Psi-jack> Heh
[0:38] <janos> while i'm fully responsible for infrastructure at work, the bulk of my time is app-layer and architecture
[0:38] <janos> the infrastructure is not so "seen"
[0:39] <Psi-jack> Right now, I'm just trying to figure out how to work with crush maps and replicas, because I want to ensure there are 2 to 3 replicas, spread across at least two different hosts.
[0:39] <janos> ah. i have not figured out the replica stuff. i've tested quite a few crushmap laterations though
[0:39] <janos> alterations
[0:39] <Psi-jack> Hehe
[0:40] <Psi-jack> Well, I don't need to go in serious depth. Like, on my rack, all my storage servers are on a single shelf (row) :)
[0:40] <janos> i need to figure out mons as well - failover
[0:40] <Psi-jack> And all in one room. ;)
[0:40] <Psi-jack> Each host has 3 OSD's.
[0:40] <janos> cool
[0:40] <janos> i'm just dogfooding at home on my rack here
[0:40] <janos> 2 hosts right now, soon enough to be 3
[0:41] <Psi-jack> Same here.
[0:41] <Psi-jack> 3 storage servers, 3 OSD's each, 1 SSD each for journal+mon+mds data, and 4 hypervisors.
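
For the "2 to 3 replicas, never all on one host" requirement, the usual approach is a CRUSH rule whose leaves are chosen by host, plus a pool size of 2 or 3. A sketch in the CRUSH map syntax of this era; the rule name, ruleset number, and pool name are examples.

    # decompile:  ceph osd getcrushmap -o map.bin && crushtool -d map.bin -o map.txt
    rule replicated_across_hosts {
            ruleset 1
            type replicated
            min_size 1
            max_size 3
            step take default
            step chooseleaf firstn 0 type host    # put each replica on a different host
            step emit
    }
    # recompile and inject:  crushtool -c map.txt -o map.new && ceph osd setcrushmap -i map.new
    # then set the pool to use it:
    #   ceph osd pool set rbd size 3
    #   ceph osd pool set rbd crush_ruleset 1
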
[0:42] <janos> cool
[0:42] <Psi-jack> Storage network is multipath'd 2x1Gb ethernet. :)
[0:42] <Psi-jack> LAN is 1x1Gb network. :)
[0:43] <Psi-jack> I'm currently... scarily... upgrading my webserver VMs from Ubuntu 10.04 to 12.04, so I can install the cephfs tools and get them back online. :/
[0:43] <janos> haha
[0:44] <Psi-jack> Need to get my webservers back online.. Heh. My in-house AUR and DEB repos are in there. :D
[0:45] <janos> doh
[0:46] <Psi-jack> heh
[0:46] <Psi-jack> What I have, though, is a pretty effective setup.
[0:46] <Psi-jack> The only thing I don't like is what I found out about one of my servers: it only has SATA 1 controllers. hehe
[0:47] <Psi-jack> So, I'm looking for PCI-X SATA2 4~6-channel controllers to replace it. ;)
[0:47] <janos> aww dang!
[0:47] <janos> i was just looking for some either pci-e x4 ones or pci
[0:47] <Psi-jack> Heh
[0:47] <janos> supermicro board is nice, but sata II
[0:48] <Psi-jack> PCI-X is relatively cheap as heck these days, since it's out of style. :)
[0:48] <janos> cool. i just don't have any of those slots
[0:48] <janos> that seemed like the last hurrah before pci-e
[0:48] <Psi-jack> Nice. Intel SATA2 PCI-X 8ch, $67
[0:48] <janos> wow
[0:48] <Psi-jack> hehe
[0:48] <janos> cheeeeeeeap
[0:48] <Psi-jack> Yeaaah budddy. :)
[0:48] <janos> dang
[0:49] * janos eyeballs an old 2u case behind him, wondering what's inside
[0:49] <Psi-jack> I got a 2-port Intel E1000 Pro, which has full TCP offloading and all, $40.
[0:49] <Psi-jack> Also PCI-X :)
[0:49] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[0:49] <janos> party
[0:49] <Psi-jack> That's what my server uses for the SAN connection. :D
[0:50] <Psi-jack> Syba 4ch PCI-X SATA2, $37
[0:51] <janos> hrm, do you recall offhand if ceph.conf uses # or ; (or both) for commenting things out?
[0:51] <Psi-jack> Uhhh..
[0:51] <Psi-jack> I see ;
[0:52] <janos> i won't be actually reloading ceph any time soon to test
[0:52] <janos> but don't want it to yell when i do
[0:52] <Psi-jack> Mine's using all ; for comments.
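
For reference, ceph.conf is an ini-style file; the sample configs of this era comment with ';', and '#' is generally accepted as well. A small fragment with example hostnames and addresses.

    [global]
            ; authentication (example setting)
            auth supported = cephx

    [mon.a]
            # example hostname and address; '#' generally works too, ';' is what the stock configs use
            host = store1
            mon addr = 172.18.0.5:6789
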
[0:59] * miroslav (~miroslav@c-98-248-210-170.hsd1.ca.comcast.net) has left #ceph
[1:02] <Psi-jack> Hmm
[1:03] <Psi-jack> Is what I need for Ubuntu for CephFS, just the ceph-fuse stuff?
[1:06] * aliguori (~anthony@cpe-70-113-5-4.austin.res.rr.com) has joined #ceph
[1:08] <Psi-jack> ceph mount failed with (95) Operation not supported :(
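
Error 95 ("Operation not supported") from mount.ceph usually means the running kernel has no CephFS support; ceph-fuse is the userspace alternative and needs no kernel module. A sketch with example monitor address, credentials, and mount point.

    # does the kernel know about ceph at all?
    modprobe ceph && grep ceph /proc/filesystems

    # kernel client (needs a kernel built with CephFS support):
    mount -t ceph 172.18.0.5:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

    # userspace client from the ceph-fuse package:
    ceph-fuse -m 172.18.0.5:6789 /mnt/cephfs
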
[1:13] * danieagle (~Daniel@177.133.175.253) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[1:34] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[1:41] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[1:54] * LeaChim (~LeaChim@5ad684ae.bb.sky.com) Quit (Remote host closed the connection)
[1:55] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[2:22] * Leseb_ (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[2:22] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[2:22] * Leseb_ is now known as Leseb
[2:28] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Quit: Leseb)
[2:45] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[2:45] * The_Bishop_ (~bishop@e179016104.adsl.alicedsl.de) has joined #ceph
[2:46] * llorieri (~llorieri@177.141.245.115) has joined #ceph
[2:46] <llorieri> hi guys
[2:46] <llorieri> we did the same test described here: http://permalink.gmane.org/gmane.comp.file-systems.ceph.devel/9427
[2:47] <llorieri> does that mean ceph does have a single point of failure?
[2:47] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[2:47] <llorieri> pardon my english
[2:51] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[2:51] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit ()
[2:53] * The_Bishop (~bishop@e179016104.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[2:56] <llorieri> yehuda_hm: you there ?
[3:04] <llorieri> we have another issue testing radosgw: when I stop big transfers during uploads, the disk space is never reclaimed. I waited 24 hours, then ran the temp remove again, then updated to 5.5; the incomplete files are still there
[3:07] * scalability-junk (~stp@188-193-202-99-dynip.superkabel.de) Quit (Ping timeout: 480 seconds)
[3:27] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) has joined #ceph
[4:51] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[5:00] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[5:12] * llorieri (~llorieri@177.141.245.115) Quit (Remote host closed the connection)
[5:31] <Psi-jack> Hmmm
[5:32] <Psi-jack> Is it possible with mount.ceph to not specifically specify /a/ mon host, or something that can try multiple ones?
[5:32] <Psi-jack> mount -t ceph 172.18.0.5:6789:/cweb /var/www/ -o name=... -- Seems like it would be a serious SPoF for obtaining a mount.
[5:33] <Psi-jack> Oh nevermind, you can specify multiple. ;)
[5:35] <Psi-jack> In that case, is it possible to use a DNS name, instead of an IP? I recall reading somewhere that using DNS was a bad idea, and it just doesn't make sense.
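
mount.ceph accepts a comma-separated list of monitor addresses, so any one reachable mon is enough to complete the mount. DNS names may resolve through the mount helper, but listing the mon IPs explicitly is the conservative choice. The addresses, client name, and secret file below are examples.

    mount -t ceph 172.18.0.5:6789,172.18.0.6:6789,172.18.0.7:6789:/cweb /var/www \
          -o name=webclient,secretfile=/etc/ceph/webclient.secret
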
[7:07] * dpippenger (~riven@cpe-76-166-221-185.socal.res.rr.com) has joined #ceph
[7:38] <Psi-jack> hmmmm
[7:38] <Psi-jack> Okay.. So, on a single server, I was having pretty much no issue with mount.ceph mounts, but when multiple get involved... Holy hell
[7:39] <Psi-jack> Crazy kernel panics, locking up of the system, all sorts of crazy.
[8:04] <Psi-jack> Okay, so I upgraded the clients to ceph 0.55 to see if it was an issue with 0.48, and I'm getting the same issues.
[8:05] <Psi-jack> Two nodes mounting the same resource causes serious issues.
[8:11] <phantomcircuit> Psi-jack, dont do dat
[8:11] <Psi-jack> What?
[8:11] <phantomcircuit> mounting cephfs on the same system the osd is on is almost guaranteed to cause problems
[8:11] <phantomcircuit> i forget why
[8:11] <Psi-jack> Umm
[8:12] <Psi-jack> Wrong thing.
[8:12] <phantomcircuit> http://tracker.newdream.net/issues/3076
[8:12] <Psi-jack> I have two webservers, not on the storage servers, trying to mount the same cephfs resource, and run webservers.
[8:12] <phantomcircuit> oooh
[8:12] <phantomcircuit> nvm then
[8:12] <Psi-jack> And it's kernel panicking.
[8:13] <phantomcircuit> you have the panic message?
[8:13] <Psi-jack> Not handy at the moment.
[8:13] <Psi-jack> Easily reproduced, though. hehe
[8:13] <phantomcircuit> vms?
[8:14] <Psi-jack> VM guests, yes.
[8:16] <phantomcircuit> easy to get a screenshot then :P
[8:16] <Psi-jack> Yep.
[8:20] <Psi-jack> It even causes the VMs to be unable to restart at all.
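
To capture the actual oops text from a KVM guest rather than a screenshot, one common approach is to give the guest a serial console and log it on the host. The paths below are examples, and how the serial port is attached depends on the management layer (libvirt, Proxmox, plain qemu).

    # guest: route console output (including panics) to the first serial port;
    # in /etc/default/grub, then run update-grub:
    #   GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200"

    # host, plain qemu/kvm example: log that serial port to a file
    #   kvm ... -serial file:/var/log/vm-webserver-serial.log

    # with libvirt, 'virsh console <domain>' attaches to the same serial device
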
[8:20] <phantomcircuit> is the storage node in a vm on the same host?
[8:22] <Psi-jack> Grrr. NO! LOL
[8:22] <Psi-jack> It's 3 dedicated storage servers just for Ceph.
[8:22] <Psi-jack> 3 storage servers + 4 hypervisors.
[8:24] <phantomcircuit> k
[8:24] <phantomcircuit> Psi-jack, why are you sharing the cephfs anyways?
[8:24] <phantomcircuit> actually
[8:24] <Psi-jack> Because I have 2 webservers load balanced.
[8:24] <phantomcircuit> why are you using cephfs at all instead of rbd
[8:25] <Psi-jack> RBD is used for the base OS itself, CephFS is what I'm using for the NFS alternative mount to get these two servers to share the same /var/www
[8:25] <Psi-jack> I used to do this with NFSv4, which handled it reasonably well, except it's not HA at all.
[8:26] <Psi-jack> With an RBD disk, you'd have to layer on top of that with GFS2, OCFS2, etc..
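
The layering described here would look roughly like this: map one shared RBD image on both webservers and put a cluster filesystem such as OCFS2 on it. Only a sketch; it leaves out the o2cb cluster configuration OCFS2 requires, and the pool, image, and mount point names are examples.

    # on each webserver: map the shared image (kernel rbd module required)
    rbd map rbd/shared-www                 # shows up as /dev/rbd0 or similar

    # once, on one node: create the filesystem with enough node slots
    mkfs.ocfs2 -N 2 -L shared-www /dev/rbd0

    # on each webserver, after /etc/ocfs2/cluster.conf and the o2cb service are set up
    mount -t ocfs2 /dev/rbd0 /var/www
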
[8:26] <phantomcircuit> i suspect you'd be better off with shared rbd devices and a filesystem that can handle that
[8:26] <Psi-jack> Which is HORRIBLE.
[8:26] <phantomcircuit> cephfs isn't stable
[8:27] <phantomcircuit> "The CephFS POSIX-compliant filesystem is functionally-complete and has been evaluated by a large community of users, but is still undergoing methodical QA testing. Once Ceph�s filesystem passes QA muster, Inktank will provide commercial support for CephFS in production systems."
[8:28] <phantomcircuit> crap the fan in this laptop just went from bad to i can hear the broken bearings
[8:28] <phantomcircuit> Psi-jack, i've never actually used any shared filesystem, what's the problem with them
[8:28] <Psi-jack> Well, so far, RBD works great. it's the filesystem mount layer that's not working very well. :)
[8:28] <Psi-jack> phantomcircuit: Painfully slow.
[8:29] <phantomcircuit> maybe there's a reason for that :P
[8:29] <Psi-jack> clustered filesystems add like 2~5s of access time, especially to webserver content.
[8:29] <Psi-jack> However, NFSv4 is not a clustered filesystem, just a network filesystem.
[8:29] <phantomcircuit> webservers readonly?
[8:30] <Psi-jack> Mostly.
[8:30] <Psi-jack> They do store sessions to either database or disk, depending on the application.
[8:30] <phantomcircuit> if they were entirely you could just use a normal fs and mount ro
[8:30] <Psi-jack> Oi..
[8:30] <phantomcircuit> h4x...
[8:31] <Psi-jack> Bad ones. :p
[8:31] <phantomcircuit> that would totally work
[8:31] <phantomcircuit> one mistake and you'd be screwed
[8:31] <phantomcircuit> but it would work
[8:31] <Psi-jack> Even when you mount a filesystem readonly, it's not 100% read /ONLY/
[8:32] <phantomcircuit> map the rbd device and set it read only
[8:36] <Psi-jack> Seriously. Just stop.
[8:37] <Psi-jack> I will go BACK to NFSv4 for the webservers' shared content before I do silly things like that. :p
[8:38] <Psi-jack> Heck, in my case, for the time being, I could spin up another VM just to provide an NFS share for the webservers to use, which would still be better than using GFS2 or OCFS2.
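
The interim setup sketched here is ordinary NFS: a small VM whose virtual disk lives on RBD exports the web root over NFSv4 to both webservers. The network range, hostname, and paths are examples.

    # on the NFS gateway VM, in /etc/exports:
    #   /var/www  172.18.0.0/24(rw,sync,no_subtree_check,fsid=0)
    exportfs -ra

    # on each webserver (fsid=0 makes /var/www the NFSv4 pseudo-root, so mount "/"):
    mount -t nfs4 nfs-gw:/ /var/www
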
[8:39] <phantomcircuit> lol
[8:39] <phantomcircuit> btw i was just being ridiculous
[8:39] <phantomcircuit> i'd never do that
[8:40] <Psi-jack> heh
[8:40] <Psi-jack> Yeah, I'm thinking what I may have hit is a bug in the git-master 0.55 checkout I'm using.
[8:40] <Psi-jack> Whether it worked in 0.48, I don't know. :)
[8:42] <Psi-jack> I suspect the devs may or may not be here till Monday, too. heh
[8:43] <phantomcircuit> try the mailing list you're more likely to eventually get an answer
[8:57] <Psi-jack> Looking through the mailing list archives, I see what I'm trying to do is supposed to work. ;)
[8:58] <Psi-jack> And in fact there's a related bug where files removed while multiple clients have it mounted don't actually free up the space. hehe
[9:29] <Psi-jack> Well, either way, for now, I at least have 1 webserver up, which is better than 0. So, I'ma hit the sack for a bit, been at this all since Friday, converting everything to ceph. :)
[9:35] * loicd (~loic@magenta.dachary.org) has joined #ceph
[9:57] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[10:32] * loicd (~loic@gw-wifi.paris.fph.ch) has joined #ceph
[10:40] * loicd (~loic@gw-wifi.paris.fph.ch) Quit (Ping timeout: 480 seconds)
[10:41] * loicd (~loic@2001:7a8:1170:201:221:ccff:febc:f879) has joined #ceph
[10:52] * The_Bishop__ (~bishop@e179009120.adsl.alicedsl.de) has joined #ceph
[10:53] * snaff (~z@81-86-160-226.dsl.pipex.com) Quit (Quit: Leaving)
[10:59] * The_Bishop_ (~bishop@e179016104.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[11:10] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[11:14] * LeaChim (~LeaChim@5ad684ae.bb.sky.com) has joined #ceph
[11:51] * stxShadow (~Jens@ip-178-201-147-146.unitymediagroup.de) has joined #ceph
[12:01] * maxiz (~pfliu@111.194.203.249) has joined #ceph
[12:06] * LeaChim (~LeaChim@5ad684ae.bb.sky.com) Quit (Remote host closed the connection)
[12:07] * LeaChim (~LeaChim@5ad684ae.bb.sky.com) has joined #ceph
[12:49] * jluis (~JL@89.181.151.160) Quit (Read error: Connection reset by peer)
[14:21] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[14:53] * stxShadow (~Jens@ip-178-201-147-146.unitymediagroup.de) Quit (Read error: Connection reset by peer)
[15:09] * gaveen (~gaveen@112.135.23.129) has joined #ceph
[15:22] * gaveen (~gaveen@112.135.23.129) Quit (Remote host closed the connection)
[15:40] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[15:41] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[15:59] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[16:20] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) Quit (Quit: Ex-Chat)
[16:30] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) has joined #ceph
[16:35] <CloudGuy> hi .. is it possible to use an LVM partition for storage?
[16:35] <CloudGuy> i am new to ceph and trying to set it up
[16:46] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[17:11] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[17:16] * f4m8 (f4m8@kudu.in-berlin.de) Quit (charon.oftc.net reticulum.oftc.net)
[17:16] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) Quit (charon.oftc.net reticulum.oftc.net)
[17:16] * sagewk (~sage@2607:f298:a:607:64a1:288d:93ad:96c1) Quit (charon.oftc.net reticulum.oftc.net)
[17:16] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (charon.oftc.net reticulum.oftc.net)
[17:16] * sileht (~sileht@sileht.net) Quit (charon.oftc.net reticulum.oftc.net)
[17:16] * l3akage (~l3akage@martinpoppen.de) Quit (charon.oftc.net reticulum.oftc.net)
[17:16] * zynzel (zynzel@spof.pl) Quit (charon.oftc.net reticulum.oftc.net)
[17:16] * jochen_ (~jochen@laevar.de) Quit (charon.oftc.net reticulum.oftc.net)
[17:16] * paravoid (~paravoid@scrooge.tty.gr) Quit (charon.oftc.net reticulum.oftc.net)
[17:16] * Anticimex (anticimex@netforce.csbnet.se) Quit (charon.oftc.net reticulum.oftc.net)
[17:16] * Lennie`away (~leen@lennie-1-pt.tunnel.tserv11.ams1.ipv6.he.net) Quit (charon.oftc.net reticulum.oftc.net)
[17:16] * wonko_be_ (bernard@november.openminds.be) Quit (charon.oftc.net reticulum.oftc.net)
[17:20] * f4m8 (f4m8@kudu.in-berlin.de) has joined #ceph
[17:20] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) has joined #ceph
[17:20] * sagewk (~sage@2607:f298:a:607:64a1:288d:93ad:96c1) has joined #ceph
[17:20] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[17:20] * sileht (~sileht@sileht.net) has joined #ceph
[17:20] * l3akage (~l3akage@martinpoppen.de) has joined #ceph
[17:20] * Lennie`away (~leen@lennie-1-pt.tunnel.tserv11.ams1.ipv6.he.net) has joined #ceph
[17:20] * zynzel (zynzel@spof.pl) has joined #ceph
[17:20] * jochen_ (~jochen@laevar.de) has joined #ceph
[17:20] * paravoid (~paravoid@scrooge.tty.gr) has joined #ceph
[17:20] * Anticimex (anticimex@netforce.csbnet.se) has joined #ceph
[17:20] * wonko_be_ (bernard@november.openminds.be) has joined #ceph
[17:55] * ron-slc_ (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[17:59] * yehuda_hm (~yehuda@2602:306:330b:a40:b584:19da:e1ac:10fa) Quit (Ping timeout: 480 seconds)
[18:05] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) Quit (Remote host closed the connection)
[18:07] * yehuda_hm (~yehuda@2602:306:330b:a40:6947:67ce:a823:6597) has joined #ceph
[18:07] * ron-slc_ (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[18:12] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) has joined #ceph
[18:18] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[18:46] * yehuda_hm (~yehuda@2602:306:330b:a40:6947:67ce:a823:6597) Quit (Ping timeout: 480 seconds)
[18:47] * yehuda_hm (~yehuda@2602:306:330b:a40:6947:67ce:a823:6597) has joined #ceph
[18:56] * yehuda_hm (~yehuda@2602:306:330b:a40:6947:67ce:a823:6597) Quit (Ping timeout: 480 seconds)
[18:59] <Psi-jack> Well, I sent off my CephFS kernel OOPS issues to the mailing list. :)
[18:59] * occ (~onur@38.103.149.209) Quit (Quit: Leaving.)
[19:01] * yehuda_hm (~yehuda@99-48-176-164.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[19:04] * loicd (~loic@2001:7a8:1170:201:221:ccff:febc:f879) Quit (Ping timeout: 480 seconds)
[19:08] * zK4k7g (~zK4k7g@digilicious.com) has joined #ceph
[19:08] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Remote host closed the connection)
[19:09] * sileht (~sileht@sileht.net) Quit (Quit: WeeChat 0.3.9.2)
[19:09] * sileht (~sileht@sileht.net) has joined #ceph
[19:16] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) Quit (Ping timeout: 480 seconds)
[19:18] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (Quit: Leaving)
[19:26] * jlogan1 (~Thunderbi@2600:c00:3010:1:5dfe:284a:edf3:5b27) has joined #ceph
[19:33] * stxShadow (~Jens@ip-178-201-147-146.unitymediagroup.de) has joined #ceph
[19:37] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[19:37] * ChanServ sets mode +o scuttlemonkey
[19:43] * stxShadow (~Jens@ip-178-201-147-146.unitymediagroup.de) has left #ceph
[19:43] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) Quit (Remote host closed the connection)
[19:46] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) has joined #ceph
[19:48] * yehuda_hm (~yehuda@99-48-176-164.lightspeed.irvnca.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[19:51] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) has joined #ceph
[19:53] * yehuda_hm (~yehuda@99-48-176-164.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[19:53] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[20:13] * Machske (~bram@d5152D87C.static.telenet.be) has joined #ceph
[20:14] * The_Bishop__ (~bishop@e179009120.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[20:30] * The_Bishop__ (~bishop@e179009120.adsl.alicedsl.de) has joined #ceph
[20:45] <phantomcircuit> hmm apparently the libvirt rbd backend cant do clone operations
[20:50] <Psi-jack> libvirt can't do a lot of things, still.
[20:54] * loicd (~loic@magenta.dachary.org) has joined #ceph
[20:55] <phantomcircuit> Psi-jack, well i sort of need cow to work for this vm setup to really work
[20:55] <phantomcircuit> i could make people install from iso but due to insanity in the ovh network that doesn't work very well
[20:55] <phantomcircuit> of course i can always hack together something to do a full copy
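
Even where the libvirt storage backend can't do it, the rbd CLI can: a full copy works for any image, and bobtail-era releases add copy-on-write clones for format 2 images from a protected snapshot. Pool and image names are examples; whether clone is available depends on the installed version.

    # full copy, works for any image format
    rbd cp rbd/template-disk rbd/customer1-disk

    # copy-on-write clone (format 2 image plus a protected snapshot required)
    rbd snap create rbd/template-disk@gold
    rbd snap protect rbd/template-disk@gold
    rbd clone rbd/template-disk@gold rbd/customer1-disk
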
[21:08] * yehuda_hm (~yehuda@99-48-176-164.lightspeed.irvnca.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[21:10] * yehuda_hm (~yehuda@99-48-176-164.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[21:10] <lurbs> phantomcircuit: You can use an HTTP URL for the location of an ISO, if that helps.
[21:11] * miroslav (~miroslav@c-98-248-210-170.hsd1.ca.comcast.net) has joined #ceph
[21:11] <phantomcircuit> im actually avoiding that since i'd end up with people installing cracked windows server instances
[21:11] <phantomcircuit> i dont have time for dealing with that
[21:36] * yehuda_hm (~yehuda@99-48-176-164.lightspeed.irvnca.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[21:37] * yehuda_hm (~yehuda@99-48-176-164.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[21:40] * jlogan1 (~Thunderbi@2600:c00:3010:1:5dfe:284a:edf3:5b27) Quit (Ping timeout: 480 seconds)
[22:01] * danieagle (~Daniel@177.133.175.253) has joined #ceph
[22:14] * michaeltchapman (~mxc900@150.203.248.116) Quit (Read error: Operation timed out)
[22:16] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[22:21] * loicd1 (~loic@magenta.dachary.org) has joined #ceph
[22:24] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[22:26] * loicd (~loic@magenta.dachary.org) Quit (Ping timeout: 480 seconds)
[22:26] * yehuda_hm (~yehuda@99-48-176-164.lightspeed.irvnca.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[22:27] * yehuda_hm (~yehuda@2602:306:330b:a40:513b:92fb:8a99:e8e8) has joined #ceph
[22:31] * maxiz (~pfliu@111.194.203.249) Quit (Ping timeout: 480 seconds)
[22:40] * maxiz (~pfliu@221.223.242.247) has joined #ceph
[22:41] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[22:44] * michaeltchapman (~mxc900@150.203.248.116) has joined #ceph
[22:47] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) Quit (Remote host closed the connection)
[22:53] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[23:03] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) has joined #ceph
[23:09] * yehuda_hm (~yehuda@2602:306:330b:a40:513b:92fb:8a99:e8e8) Quit (Ping timeout: 480 seconds)
[23:11] * yehuda_hm (~yehuda@99-48-176-164.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[23:12] * ron-slc_ (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[23:18] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[23:19] * danieagle (~Daniel@177.133.175.253) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[23:23] * ron-slc_ (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (Quit: Leaving)
[23:24] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[23:28] * yehuda_hm (~yehuda@99-48-176-164.lightspeed.irvnca.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[23:29] * yehuda_hm (~yehuda@2602:306:330b:a40:513b:92fb:8a99:e8e8) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.