#ceph IRC Log

IRC Log for 2014-04-01

Timestamps are in GMT/BST.

[0:00] <mjevans> vilobhmm: Ooooh is your qemu stack compiled with rbd support?
[0:00] <vilobhmm> foo wasn't created
[0:00] <vilobhmm> no i used rpm to deploy qemu
[0:01] <vilobhmm> do i need to check out the source and configure it with rbd ?
[0:02] <mjevans> If this results in any text output you have rbd support: qemu-system-x86_64 -drive format=? 2>&1 | grep rbd
[0:03] <mjevans> vilobhmm: similarly: qemu-img -h 2>&1 | grep rbd
[0:03] <mjevans> vilobhmm: yeah, your qemu needs to support rbd in order to talk with ceph.
[0:03] <vilobhmm> qemu-img -h 2>&1 | grep rbd
[0:03] <vilobhmm> Supported formats: raw cow qcow vdi vmdk cloop dmg bochs vpc vvfat qcow2 qed vhdx parallels nbd blkdebug host_cdrom host_floppy host_device file gluster gluster gluster gluster rbd
[0:03] * sarob (~sarob@2001:4998:effd:600:d013:9612:c196:766e) Quit (Ping timeout: 480 seconds)
[0:03] <vilobhmm> but qemu-system-x86_64 -drive format=? 2>&1 | grep rbd doesn't return anything
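
To summarize the check mjevans describes: both the image tool and the emulator binary have to list rbd, and a mismatch like the one above usually means two different QEMU builds are installed (a sketch; binary names vary by distro and version):

    qemu-img -h 2>&1 | grep rbd                            # userspace image tool
    qemu-system-x86_64 -drive format=? 2>&1 | grep rbd     # the emulator itself
    rpm -qf $(which qemu-img) $(which qemu-system-x86_64)  # see which packages own each binary
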
[0:04] <mjevans> vilobhmm: very odd, you should probably re-install with a version that has rbd support
[0:04] <vilobhmm> that means my qemu-img has rbd support right ?
[0:04] <mjevans> Either that or your version might be a bit old...
[0:04] <vilobhmm> what qemu version is needed ?
[0:04] <mjevans> vilobhmm: well, it's claiming it can...
[0:05] <mjevans> vilobhmm: I'm currently using qemu-1.7.1 my self.
[0:05] <vilobhmm> ok
[0:05] * sprachgenerator (~sprachgen@130.202.135.204) Quit (Quit: sprachgenerator)
[0:05] <mjevans> I don't know what version is /needed/ though.
[0:06] <vilobhmm> i am using qemu-img version 0.12.1
[0:06] <mjevans> vilobhmm: that might be a little old for talking to newer versions of ceph. I seem to recall running across that kind of thing on one page or another... but I can't recall where.
[0:07] <vilobhmm> ok
[0:07] * `jpg (~josephgla@ppp121-44-202-175.lns20.syd7.internode.on.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[0:07] * `jpg (~josephgla@ppp121-44-202-175.lns20.syd7.internode.on.net) has joined #ceph
[0:07] <vilobhmm> do you have repo details from where I can fetch the latest qemu rpm, or details of the source that you used? I can try it out quickly
[0:09] <mjevans> Sorry, I really don't do redhat stuff
[0:10] <vilobhmm> ok
[0:13] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[0:19] <mjevans> So, I need to completely rebuild the test cluster from the ground up... however: ceph-deploy purgedata HOST yields: [ceph_deploy][ERROR ] RuntimeError: refusing to purge data while ceph is still installed
[0:19] <mjevans> There doesn't seem to be a --yes-just-do-it option
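
For reference, ceph-deploy insists the packages be removed before it will purge data; the usual sequence is (subcommands as in ceph-deploy of this era; check ceph-deploy --help):

    ceph-deploy purge HOST        # uninstall the ceph packages first
    ceph-deploy purgedata HOST    # then wipe /var/lib/ceph and /etc/ceph
    ceph-deploy forgetkeys        # optionally drop locally cached keyrings
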
[0:20] * AfC (~andrew@101.119.15.212) has joined #ceph
[0:21] <elder> vilobhmm, I am here for a few minutes.
[0:21] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[0:21] <vilobhmm> hi elder: as mjevans suggested i am trying to get newer qemu version and configure it with rbd
[0:21] <vilobhmm> lets see
[0:22] <elder> I'm not going to be much help with rbd in Qemu
[0:22] <elder> I'm sorry, I know a lot about the kernel rbd.
[0:23] * dmsimard (~Adium@108.163.152.2) Quit (Ping timeout: 480 seconds)
[0:24] <ponyofdeath> hi, where is the keyring for ceph auth list? is it in /etc/ceph
[0:26] * Karcaw (~evan@96-41-200-66.dhcp.elbg.wa.charter.com) Quit (Ping timeout: 480 seconds)
[0:26] * lxo (~aoliva@lxo.user.oftc.net) Quit (Quit: later)
[0:28] * AfC (~andrew@101.119.15.212) Quit (Quit: Leaving.)
[0:28] <mjevans> ponyofdeath: did you use ceph-deploy or a manual method? In the latter you must create it, in the former you must gather it.
[0:28] <ponyofdeath> mjevans: i used ceph-deploy
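
With a ceph-deploy install the keyrings are first gathered to the admin workstation and then pushed out; MON1 and HOST below are placeholder hostnames:

    ceph-deploy gatherkeys MON1   # fetches ceph.client.admin.keyring etc. into the current directory
    ceph-deploy admin HOST        # pushes the admin keyring out to HOST:/etc/ceph
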
[0:28] <vilobhmm> mjevans : when i checked out the latest qemu
[0:28] <vilobhmm> and tried to run configure with it , it says
[0:28] * vilobhmm (~vilobhmm@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) Quit (Quit: vilobhmm)
[0:29] <ponyofdeath> mjevans: my problem is that all of a sudden i am unable to do operations on the cluster with a client i updated
[0:29] <ponyofdeath> mjevans: i have caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=libvirt, allow rwx pool=dc2-dev-app-pool01
[0:29] <ponyofdeath> and i am unable to do rbd --id libvirt -p libvirt ls
[0:29] <ponyofdeath> it just hangs
[0:29] <ponyofdeath> same with qemu-img create -f rbd
[0:30] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[0:30] <ponyofdeath> this is my config on the kvm client i am trying to access ceph cluster from http://paste.ubuntu.com/7187162
[0:33] <mjevans> ponyofdeath: you're trying to use libvirt, which is probably running as another user. You're going to have to set up the pool you're looking to use with a capability to do stuff, and provide libvirt that secret key data. There are guides for this, but it's been a while since I've used them and I don't remember where they are or any more than I just stated.
[0:33] <ponyofdeath> mjevans: i have this working
[0:34] <ponyofdeath> mjevans: i had it set up
[0:34] <ponyofdeath> and its currently working
[0:34] * ismell (~ismell@host-64-17-89-79.beyondbb.com) Quit (Quit: leaving)
[0:34] <ponyofdeath> but i created a new pool
[0:34] * ismell (~ismell@host-64-17-89-79.beyondbb.com) has joined #ceph
[0:34] <ponyofdeath> and added that to the caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=libvirt, allow rwx pool=dc2-dev-app-pool01
[0:34] <ponyofdeath> libvirt pool + dc2-dev-app-pool01
[0:34] <ponyofdeath> but now i am unable to use qemu-img to create an image on the new pool
[0:34] <ponyofdeath> it just hangs
[0:35] <ponyofdeath> so i tried to test out rbd -p libvirt ls and that hangs as well
[0:35] <ponyofdeath> so i was trying to do this from an local ceph box
[0:35] <ponyofdeath> with the libvirt username
[0:35] * Karcaw (~evan@96-41-200-66.dhcp.elbg.wa.charter.com) has joined #ceph
[0:35] <ponyofdeath> to isolate
[0:35] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[0:35] * vilobhmm (~vilobhmm@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) has joined #ceph
[0:36] <ponyofdeath> and not sure where to get the rbd --keyfile that it needs
[0:36] <vilobhmm> User requested feature rados block device
[0:36] <vilobhmm> configure was not able to find it.
[0:36] <vilobhmm> Install librbd/ceph devel
[0:37] * zidarsk8 (~zidar@tm.78.153.58.217.dc.cable.static.telemach.net) has joined #ceph
[0:38] <ponyofdeath> 2014-03-31 22:37:41.207728 7fb1660c1780 -1 auth: failed to decode key 'key = XXXX'
[0:38] <ponyofdeath> is the error i get when i do sudo rbd --id libvirt --keyfile ./keyring -p libvirt ls
[0:38] <mjevans> vilobhmm: you're trying to build support for rbd without the development versions of the libraries... you should /really/ ask in libvirt/etc where to get such things.
[0:38] * zidarsk8 (~zidar@tm.78.153.58.217.dc.cable.static.telemach.net) has left #ceph
[0:38] <ponyofdeath> apparently the format of the keyfile is different from the one used in /etc/ceph/ceph.conf
[0:38] <mjevans> ponyofdeath: right, rbd is trying to look up an id named 'libvirt' within the specified keyring...
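
The "failed to decode key" error above is the usual --keyfile/--keyring mix-up: --keyring takes the ini-style file, while --keyfile expects a file containing only the base64 secret. A sketch (the key value and paths are placeholders):

    # ini-style keyring, the format 'ceph auth' output resembles:
    #   [client.libvirt]
    #           key = AQC...==
    rbd --id libvirt --keyring /etc/ceph/ceph.client.libvirt.keyring -p libvirt ls
    # --keyfile wants the bare secret only:
    echo 'AQC...==' > /tmp/libvirt.secret
    rbd --id libvirt --keyfile /tmp/libvirt.secret -p libvirt ls
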
[0:41] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[0:46] * thomnico (~thomnico@c-217-115-42-157.cust.bredband2.com) Quit (Ping timeout: 480 seconds)
[0:50] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[0:52] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[0:54] * andelhie_ (~Gamekille@128-107-239-235.cisco.com) Quit (Quit: This computer has gone to sleep)
[0:56] * mattt (~textual@S010690724001c795.vc.shawcable.net) has joined #ceph
[0:56] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:56] * ircolle (~Adium@2601:1:8380:2d9:b8bd:657c:f1fa:eae9) has joined #ceph
[0:57] * BillK (~BillK-OFT@58-7-59-126.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[0:58] * Karcaw (~evan@96-41-200-66.dhcp.elbg.wa.charter.com) Quit (Ping timeout: 480 seconds)
[0:59] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[1:00] * yeled (~yeled@spodder.com) Quit (Ping timeout: 480 seconds)
[1:01] * BillK (~BillK-OFT@124-148-252-1.dyn.iinet.net.au) has joined #ceph
[1:03] <vilobhmm> elder : how do i go about backporting changes to the 2.6.32 kernel for kernel rbd support ?
[1:03] * zackc (~zackc@0001ba60.user.oftc.net) Quit (Quit: rebooting)
[1:07] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[1:07] * zack_dolby (~textual@p852cae.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:08] <mjevans> vilobhmm: if it hasn't already been done, it's probably not worth the effort of someone asking how to do it... to do it.
[1:08] <vilobhmm> mjevans : i have working qemu and ceph setup
[1:09] <vilobhmm> librbd is also installed
[1:09] <vilobhmm> need a way to interact between qemu and ceph
[1:09] <vilobhmm> not sure what i am missing here
[1:11] * Fruit (wsl@2001:981:a867:2:216:3eff:fe10:122b) Quit (Ping timeout: 480 seconds)
[1:12] <KaZeR> wow. Bandwidth (MB/sec): 14.555 on a 4-node cluster with fusionIO storage. i must have done something wrong
[1:13] <KaZeR> the BW of the device itself is 1.7 GB/s ...
[1:13] <joshd> vilobhmm: do you have a symlink to librbd in /usr/lib64/qemu?
[1:14] * fghaas (~florian@sccc-66-78-236-243.smartcity.com) has joined #ceph
[1:15] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[1:17] <vilobhmm> @joshd : I have something like this
[1:17] <cephalobot> vilobhmm: Error: "joshd" is not a valid command.
[1:17] <vilobhmm> lrwxrwxrwx 1 root root 15 Mar 28 23:01 /usr/lib64/librbd.so.1 -> librbd.so.1.0.0
[1:17] <vilobhmm> -rwxr-xr-x 1 root root 8037870 Feb 19 21:09 /usr/lib64/librbd.so.1.0.0
[1:18] <vilobhmm> no librbd under /usr/lib64/qemu though
[1:18] * aarcane_ (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[1:19] <joshd> vilobhmm: depending on which qemu-kvm you're using, you may need a symlink from /usr/lib64/qemu/librbd.so.1 pointing to /usr/lib64/librbd.so.1
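
A sketch of the symlink joshd describes, assuming the 64-bit RPM layout shown above:

    mkdir -p /usr/lib64/qemu
    ln -s /usr/lib64/librbd.so.1 /usr/lib64/qemu/librbd.so.1
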
[1:22] * zackc (~zackc@0001ba60.user.oftc.net) has joined #ceph
[1:24] * yeled (~yeled@spodder.com) has joined #ceph
[1:25] * aarcane (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[1:35] * Karcaw (~evan@96-41-200-66.dhcp.elbg.wa.charter.com) has joined #ceph
[1:48] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[1:49] * sjm1 (~sjm@cpe-67-248-135-198.nycap.res.rr.com) Quit (Ping timeout: 480 seconds)
[1:54] * Gamekiller77 (~Gamekille@c-24-6-85-12.hsd1.ca.comcast.net) has joined #ceph
[1:54] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[1:56] * fghaas (~florian@sccc-66-78-236-243.smartcity.com) Quit (Quit: Leaving.)
[1:57] * andelhie_ (~Gamekille@128-107-239-234.cisco.com) has joined #ceph
[1:59] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) Quit (Quit: Leaving...)
[1:59] * Pedras (~Adium@64.191.206.83) has joined #ceph
[2:00] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:00] * alram (~alram@38.122.20.226) Quit (Ping timeout: 480 seconds)
[2:00] * Pedras (~Adium@64.191.206.83) Quit ()
[2:01] * ircolle (~Adium@2601:1:8380:2d9:b8bd:657c:f1fa:eae9) Quit (Quit: Leaving.)
[2:01] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[2:01] * Pedras (~Adium@64.191.206.83) has joined #ceph
[2:03] * Gamekiller77 (~Gamekille@c-24-6-85-12.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[2:05] * zack_dolby (~textual@e0109-114-22-12-245.uqwimax.jp) has joined #ceph
[2:05] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) has joined #ceph
[2:08] * Pedras (~Adium@64.191.206.83) Quit (Quit: Leaving.)
[2:09] * zack_dolby (~textual@e0109-114-22-12-245.uqwimax.jp) Quit (Read error: Connection reset by peer)
[2:09] * Pedras (~Adium@64.191.206.83) has joined #ceph
[2:09] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[2:09] * zack_dolby (~textual@e0109-114-22-12-245.uqwimax.jp) has joined #ceph
[2:13] <vilobhmm> @joshd : still when i run qemu-img create -f raw rbd:data/foo 1G the command just hangs; i have checked out the qemu-1.7.1 source but qemu-img shows version 0.12.1, is this fine ?
[2:13] <cephalobot> vilobhmm: Error: "joshd" is not a valid command.
[2:14] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) Quit (Quit: Leaving...)
[2:14] <vilobhmm> joshd: ^^
[2:14] <vilobhmm> joshd : lrwxrwxrwx 1 root root 26 Apr 1 00:01 librbd.so.1.0.0 -> /usr/lib64/librbd.so.1.0.0
[2:16] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[2:17] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[2:18] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[2:18] * ChanServ sets mode +v andreask
[2:18] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has left #ceph
[2:21] * bitblt (~don@128-107-239-233.cisco.com) Quit (Ping timeout: 480 seconds)
[2:21] * diegows_ (~diegows@190.190.5.238) Quit (Read error: Operation timed out)
[2:22] * danieagle (~Daniel@186.214.53.19) Quit (Quit: Thanks for Everything! :-) see ya :-))
[2:28] * mattt (~textual@S010690724001c795.vc.shawcable.net) Quit (Quit: Computer has gone to sleep.)
[2:32] * mattt (~textual@S010690724001c795.vc.shawcable.net) has joined #ceph
[2:34] * yuriw1 is now known as yuriw
[2:36] * Pedras (~Adium@64.191.206.83) Quit (Quit: Leaving.)
[2:39] * mattt (~textual@S010690724001c795.vc.shawcable.net) Quit (Quit: Computer has gone to sleep.)
[2:40] * yanzheng (~zhyan@jfdmzpr04-ext.jf.intel.com) Quit (Remote host closed the connection)
[2:42] <mjevans> vilobhmm: you might not be running the qemu you compiled, but the one that already existed.
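
A quick way to confirm which binary actually runs versus the freshly built one (the build path below is hypothetical):

    which qemu-img && qemu-img --version      # 0.12.x here means the old distro build is first in PATH
    /path/to/qemu-1.7.1/qemu-img --version    # invoke the new build explicitly to compare
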
[2:45] * sputnik1_ (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:51] * ghartz (~ghartz@ip-68.net-80-236-84.joinville.rev.numericable.fr) Quit (Remote host closed the connection)
[2:52] * xarses (~andreww@12.164.168.117) Quit (Read error: Operation timed out)
[2:54] <KaZeR> ok now that's an issue. health HEALTH_OK, osdmap e12: 4 osds: 4 up, 4 in, 320 active+clean. BUT the osd process died on all nodes.
[2:54] <KaZeR> trace in the logs : http://bpaste.net/show/rCcowu2AOxZMAqqaeVG6/
[2:55] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[3:03] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[3:10] * JeffK (~Jeff@38.99.52.10) Quit (Quit: JeffK)
[3:24] * KaZeR (~KaZeR@64.201.252.132) Quit (Remote host closed the connection)
[3:25] * LeaChim (~LeaChim@host86-162-2-97.range86-162.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[3:39] <vilobhmm> i can use the new qemu-1.7.1 but still the qemu-img create command just hangs
[3:40] <vilobhmm> qemu-img and qemu-system both show rbd as a supported format
[3:40] <vilobhmm> mjevans : ^^
[3:42] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[3:43] * fghaas (~florian@205.158.164.101.ptr.us.xo.net) has joined #ceph
[3:45] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) Quit (Quit: Leaving.)
[3:46] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[3:48] * bandrus (~Adium@173.245.93.182) has joined #ceph
[3:49] * bandrus (~Adium@173.245.93.182) Quit ()
[3:49] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[3:50] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[3:52] * andelhie_ (~Gamekille@128-107-239-234.cisco.com) Quit (Quit: Leaving)
[3:54] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[3:57] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[3:59] * fghaas (~florian@205.158.164.101.ptr.us.xo.net) Quit (Quit: Leaving.)
[4:02] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[4:12] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[4:15] * geekmush (~Adium@cpe-66-68-198-33.rgv.res.rr.com) has left #ceph
[4:26] * fghaas (~florian@205.158.164.101.ptr.us.xo.net) has joined #ceph
[4:31] * haomaiwa_ (~haomaiwan@117.79.232.177) Quit (Remote host closed the connection)
[4:32] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) has joined #ceph
[4:33] * lianghaoshen (~slhhust@119.39.124.239) has joined #ceph
[4:34] * lianghaoshen (~slhhust@119.39.124.239) Quit ()
[4:34] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) has joined #ceph
[4:35] * MACscr (~Adium@c-50-158-183-38.hsd1.il.comcast.net) Quit (Quit: Leaving.)
[4:38] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[4:42] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:43] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[4:43] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) has joined #ceph
[4:44] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[4:44] * c74d is now known as Guest5102
[4:44] * Guest5102 is now known as c74d
[4:45] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit ()
[4:47] * mtanski (~mtanski@cpe-72-229-51-156.nyc.res.rr.com) has joined #ceph
[4:52] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[4:58] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[4:59] * haomaiwa_ (~haomaiwan@117.79.232.153) has joined #ceph
[5:00] * Qu310 (~Qu310@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[5:04] <Qu310> Heya, when looking at ceph -w, updates are usually shown every second. However, quite often I notice a slight pause for a second or two; is this "normal"?
[5:04] * Vacum_ (~vovo@88.130.200.80) has joined #ceph
[5:07] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) Quit (Ping timeout: 480 seconds)
[5:09] * mtanski (~mtanski@cpe-72-229-51-156.nyc.res.rr.com) Quit (Quit: mtanski)
[5:10] * kiwigeraint (~kiwigerai@208.72.139.54) Quit (Remote host closed the connection)
[5:11] <mikedawson> Qu310: yes, that is normal
[5:11] * Vacum (~vovo@i59F79E46.versanet.de) Quit (Ping timeout: 480 seconds)
[5:14] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[5:16] * alram (~alram@cpe-76-167-50-51.socal.res.rr.com) has joined #ceph
[5:18] <Qu310> mikedawson: cheers
[5:25] * kiwigeraint (~kiwigerai@208.72.139.54) has joined #ceph
[5:26] * alram (~alram@cpe-76-167-50-51.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[5:35] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) has joined #ceph
[5:37] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[5:38] * thanhtran (~thanhtran@123.30.135.76) has joined #ceph
[5:38] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[5:39] * mattt (~textual@S010690724001c795.vc.shawcable.net) has joined #ceph
[5:39] * Cube (~Cube@66-87-130-209.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[5:42] <thanhtran> hi, when i upgraded ceph from emperor (0.72.2) to firefly (0.78-367-gd9a2dea), the .asok files of the osds disappeared from /var/run/ceph/. where are they? does anybody know about this?
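
One way to see where a daemon's admin socket should be (the default is normally /var/run/ceph/$cluster-$name.asok, but it is configurable; a sketch):

    ceph-conf --name osd.0 --show-config-value admin_socket
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok version   # probe a socket directly
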
[5:44] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:46] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[5:47] * zack_dolby (~textual@e0109-114-22-12-245.uqwimax.jp) Quit (Quit: Textual IRC Client: www.textualapp.com)
[5:47] * zack_dolby (~textual@e0109-114-22-12-245.uqwimax.jp) has joined #ceph
[5:55] * BillK (~BillK-OFT@124-148-252-1.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[5:57] * BillK (~BillK-OFT@58-7-102-93.dyn.iinet.net.au) has joined #ceph
[6:05] <vilobhmm> There is no rbd driver in 2.6.32; is there a way to include rbd.ko in 2.6.32 ?
[6:08] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) Quit (Quit: sprachgenerator)
[6:13] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[6:20] * thuc_ (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[6:21] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[6:28] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:31] * mattt (~textual@S010690724001c795.vc.shawcable.net) Quit (Quit: Textual IRC Client: www.textualapp.com)
[6:32] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[6:34] * JC (~JC@ip-64-134-186-149.public.wayport.net) has joined #ceph
[6:35] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) has joined #ceph
[6:35] * JC (~JC@ip-64-134-186-149.public.wayport.net) Quit ()
[6:37] * fghaas (~florian@205.158.164.101.ptr.us.xo.net) has left #ceph
[6:40] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[6:42] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) has joined #ceph
[6:44] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[6:47] * erice_ (~erice@c-98-245-48-79.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[6:52] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[7:10] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) Quit (Ping timeout: 480 seconds)
[7:13] * thuc_ (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[7:14] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[7:18] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[7:19] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[7:22] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[7:26] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[7:34] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[7:36] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) has joined #ceph
[7:36] * Sysadmin88 (~IceChat77@176.254.32.31) has joined #ceph
[7:37] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[7:45] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[7:53] * Cube (~Cube@66-87-130-209.pools.spcsdns.net) has joined #ceph
[8:03] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[8:08] <jerker> do you want to run on RHEL6/CentOS6/SL6? An alternative option is to run the kernel-ml package from elrepo.org.
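
A sketch of jerker's suggestion on an EL6 box (the release RPM version below is illustrative; check elrepo.org for the current one):

    rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
    rpm -Uvh http://www.elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm
    yum --enablerepo=elrepo-kernel install kernel-ml   # mainline kernel, includes a current rbd.ko
    # then pick the new kernel in /boot/grub/grub.conf and reboot
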
[8:09] * Cube (~Cube@66-87-130-209.pools.spcsdns.net) Quit (Quit: Leaving.)
[8:20] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[8:22] * Fruit (wsl@2001:981:a867:2:216:3eff:fe10:122b) has joined #ceph
[8:25] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[8:28] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[8:31] * dis is now known as Guest5116
[8:31] * dis (~dis@109.110.67.80) has joined #ceph
[8:32] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:33] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[8:36] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) has joined #ceph
[8:38] * Guest5116 (~dis@109.110.66.89) Quit (Ping timeout: 480 seconds)
[8:40] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[8:46] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:46] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Ping timeout: 480 seconds)
[8:48] * shang (~ShangWu@175.41.48.77) has joined #ceph
[8:49] * vilobhmm (~vilobhmm@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) Quit (Quit: vilobhmm)
[8:54] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[8:57] * carif (~mcarifio@ip-37-25.sn1.eutelia.it) has joined #ceph
[9:00] <aarontc> argh, my cluster is completely unusable with the OSDs crashing all the time :( I hope this bug is fixed soon
[9:01] * thb (~me@2a02:2028:1c0:ac30:6267:20ff:fec9:4e40) has joined #ceph
[9:03] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) Quit (Remote host closed the connection)
[9:04] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) has joined #ceph
[9:08] * carif (~mcarifio@ip-37-25.sn1.eutelia.it) Quit (Ping timeout: 480 seconds)
[9:14] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[9:15] * zack_dolby (~textual@e0109-114-22-12-245.uqwimax.jp) Quit (Ping timeout: 480 seconds)
[9:16] * zack_dolby (~textual@pw126253081052.6.panda-world.ne.jp) has joined #ceph
[9:17] * Sysadmin88 (~IceChat77@176.254.32.31) Quit (Quit: We be chillin - IceChat style)
[9:18] * vince (~vince@160.85.231.67) has joined #ceph
[9:18] <vince> in ceph object storage, which doesn't use MDSs, where is metadata stored?
[9:21] * vilobhmm (~vilobhmm@nat-dip6.cfw-a-gci.corp.yahoo.com) has joined #ceph
[9:22] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[9:28] * zack_dol_ (~textual@e0109-114-22-12-245.uqwimax.jp) has joined #ceph
[9:28] * zack_dolby (~textual@pw126253081052.6.panda-world.ne.jp) Quit (Read error: Connection reset by peer)
[9:29] * analbeard (~shw@support.memset.com) has joined #ceph
[9:30] * MACscr (~Adium@c-50-158-183-38.hsd1.il.comcast.net) has joined #ceph
[9:31] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[9:31] * dpippenger (~riven@cpe-198-72-157-189.socal.res.rr.com) has joined #ceph
[9:35] * hybrid512 (~walid@195.200.167.70) Quit (Quit: Leaving.)
[9:36] <jerker> vince: what kind of metadata? AFAIU a hash algorithm (crush) is used to eliminate the need for a specific metadata server for looking up where data is stored. certain applications can of course also use the object store to store metadata about specific objects. http://ceph.com/docs/master/architecture/
[9:37] <vince> but if you want to know for example when an object was created, etc
[9:37] * hybrid512 (~walid@195.200.167.70) has joined #ceph
[9:37] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) has joined #ceph
[9:40] * kiwigeraint (~kiwigerai@208.72.139.54) Quit (Remote host closed the connection)
[9:43] <Fruit> vince: then you ask the osd
[9:44] * kiwigeraint (~kiwigerai@208.72.139.54) has joined #ceph
[9:45] <vince> Fruit: that is the "regular" xattr metadata?
[9:46] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[9:49] <vince> I think this explains it: http://ceph.com/docs/master/rados/configuration/filestore-config-ref/?highlight=ext4#extended-attributes
[9:49] * TMM (~hp@c97185.upc-c.chello.nl) Quit (Quit: Ex-Chat)
[9:52] * thomnico (~thomnico@c-217-115-42-157.cust.bredband2.com) has joined #ceph
[9:53] <Fruit> vince: also, rados_stat() in http://ceph.com/docs/master/rados/api/librados/
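
The same per-object metadata is visible from the shell via the rados tool; pool and object names below are placeholders:

    rados -p data stat foo
    # prints roughly: data/foo mtime 1396348800, size 4194304
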
[9:55] * vince (~vince@160.85.231.67) Quit (Read error: Operation timed out)
[10:02] <stewiem20001> I've updated all my monitors to ceph version 0.72.2-61-gd258fcc, and just done the first OSD-server, but now the OSDs won't start with: osd/PG.cc: 1018: FAILED assert(peer_info.count(*i)) ... a bit of Googling's no help, any ideas where to start?
[10:09] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[10:13] * aarcane_ (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[10:17] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[10:17] * thomnico (~thomnico@c-217-115-42-157.cust.bredband2.com) Quit (Ping timeout: 480 seconds)
[10:18] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[10:18] * ChanServ sets mode +v andreask
[10:18] * thomnico (~thomnico@c-217-115-42-157.cust.bredband2.com) has joined #ceph
[10:19] * thomnico (~thomnico@c-217-115-42-157.cust.bredband2.com) Quit ()
[10:19] * aarcane (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[10:20] * vince (~vince@160.85.122.123) has joined #ceph
[10:20] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[10:29] * dpippenger (~riven@cpe-198-72-157-189.socal.res.rr.com) Quit (Read error: Operation timed out)
[10:31] * Fruit (wsl@2001:981:a867:2:216:3eff:fe10:122b) Quit (Remote host closed the connection)
[10:31] * LeaChim (~LeaChim@host86-162-2-97.range86-162.btcentralplus.com) has joined #ceph
[10:32] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:33] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[10:35] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[10:36] * vince (~vince@160.85.122.123) Quit (Ping timeout: 480 seconds)
[10:36] * ksingh (~Adium@2001:708:10:10:a9b9:6b85:2cfa:3142) has joined #ceph
[10:38] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) has joined #ceph
[10:43] * Fruit (wsl@2001:981:a867:2:216:3eff:fe10:122b) has joined #ceph
[10:43] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[10:43] * allsystemsarego (~allsystem@188.25.131.129) has joined #ceph
[10:47] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[10:47] * vilobhmm (~vilobhmm@nat-dip6.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[11:02] * b0e1 (~aledermue@213.95.15.4) has joined #ceph
[11:03] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[11:06] * b0e (~aledermue@juniper1.netways.de) Quit (Ping timeout: 480 seconds)
[11:08] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[11:11] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[11:15] * b0e1 (~aledermue@213.95.15.4) Quit (Ping timeout: 480 seconds)
[11:18] * b0e (~aledermue@juniper1.netways.de) Quit (Ping timeout: 480 seconds)
[11:26] * zack_dol_ (~textual@e0109-114-22-12-245.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[11:26] * `jpg (~josephgla@ppp121-44-202-175.lns20.syd7.internode.on.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[11:38] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) has joined #ceph
[11:39] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[11:39] * ksingh1 (~Adium@2001:708:10:91:2477:9fcf:5732:46e1) has joined #ceph
[11:44] * ksingh (~Adium@2001:708:10:10:a9b9:6b85:2cfa:3142) Quit (Ping timeout: 480 seconds)
[11:46] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[11:53] * oro (~oro@2001:620:20:222:f4eb:a026:d10:e254) has joined #ceph
[11:57] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[11:58] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[12:03] <oro> Hey everyone. After some previous experiences with an ubuntu 12.04-based ceph cluster, we are migrating to a bigger one. We are seriously considering putting our infrastructure on RHEL, but because of some iscsi features we want at least a 3.12-lts kernel (vaai capabilities for vmware). My question is, would you go with RHEL 7 + epel kernel-lt (they will probably have 3.12)? Will you ceph-devs even test your builds with that recent a kernel and OS?
[12:05] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[12:09] * b0e (~aledermue@juniper1.netways.de) Quit (Quit: Leaving.)
[12:14] * vince (~vince@160.85.122.139) has joined #ceph
[12:16] * ShaunR (~ShaunR@staff.ndchost.com) Quit ()
[12:18] * oro (~oro@2001:620:20:222:f4eb:a026:d10:e254) Quit (Ping timeout: 480 seconds)
[12:22] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[12:32] * kiwigeraint (~kiwigerai@208.72.139.54) Quit (Remote host closed the connection)
[12:36] * thanhtran (~thanhtran@123.30.135.76) Quit (Quit: Going offline, see ya! (www.adiirc.com))
[12:39] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) has joined #ceph
[12:40] * ksingh1 (~Adium@2001:708:10:91:2477:9fcf:5732:46e1) Quit (Quit: Leaving.)
[12:45] * jks (~jks@3e6b5724.rev.stofanet.dk) Quit (Ping timeout: 480 seconds)
[12:46] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[12:47] * oro (~oro@2001:620:20:222:f4eb:a026:d10:e254) has joined #ceph
[12:51] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[12:52] * i_m (~ivan.miro@gbibp9ph1--blueice4n1.emea.ibm.com) has joined #ceph
[12:52] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[12:53] * i_m (~ivan.miro@gbibp9ph1--blueice4n1.emea.ibm.com) Quit ()
[12:54] * i_m (~ivan.miro@gbibp9ph1--blueice1n2.emea.ibm.com) has joined #ceph
[12:55] * joao (~joao@a95-92-33-54.cpe.netcabo.pt) has joined #ceph
[12:55] * ChanServ sets mode +o joao
[12:56] * TheBittern (~thebitter@195.10.250.233) Quit (Read error: Operation timed out)
[12:57] * diegows_ (~diegows@190.190.5.238) has joined #ceph
[12:58] * shimo (~A13032@122x212x216x66.ap122.ftth.ucom.ne.jp) Quit (Ping timeout: 480 seconds)
[13:01] * zidarsk8 (~zidar@tm.78.153.58.217.dc.cable.static.telemach.net) has joined #ceph
[13:01] * zack_dolby (~textual@p852cae.tokynt01.ap.so-net.ne.jp) has joined #ceph
[13:04] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[13:12] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) Quit (Ping timeout: 480 seconds)
[13:20] <janos> oro, don't you lose RH support if you go with someone else's kernel?
[13:21] <janos> if so, i'd go with a minimal install of fedora if you wish to go the RPM-based route. it's a much cleaner option. RH is fine and good, but they patch things so much that you really need to stay within their sandbox
[13:28] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[13:32] * b0e (~aledermue@juniper1.netways.de) Quit (Ping timeout: 480 seconds)
[13:40] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) has joined #ceph
[13:40] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) Quit (Quit: Leaving.)
[13:42] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[13:45] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[13:49] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[13:49] * vince (~vince@160.85.122.139) Quit (Ping timeout: 480 seconds)
[13:53] * marrusl_ (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[13:53] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[13:53] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[13:54] * vince (~vince@160.85.122.139) has joined #ceph
[13:54] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) has joined #ceph
[13:55] * dmsimard (~Adium@108.163.152.66) has joined #ceph
[13:56] * allsystemsarego (~allsystem@188.25.131.129) Quit (Read error: Operation timed out)
[13:59] * ksingh (~Adium@2001:708:10:10:e8d0:f6d4:b65b:6950) has joined #ceph
[14:03] * vince (~vince@160.85.122.139) Quit (Ping timeout: 480 seconds)
[14:06] * b0e (~aledermue@juniper1.netways.de) Quit (Ping timeout: 480 seconds)
[14:08] * ksingh (~Adium@2001:708:10:10:e8d0:f6d4:b65b:6950) Quit (Ping timeout: 480 seconds)
[14:08] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) has joined #ceph
[14:08] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) Quit ()
[14:11] * leseb (~leseb@185.21.172.77) Quit (Killed (NickServ (Too many failed password attempts.)))
[14:11] * vince (~vince@160.85.231.155) has joined #ceph
[14:13] * leseb (~leseb@185.21.172.77) has joined #ceph
[14:14] * allsystemsarego (~allsystem@188.27.166.29) has joined #ceph
[14:15] * allsystemsarego (~allsystem@188.27.166.29) Quit ()
[14:15] * allsystemsarego (~allsystem@188.27.166.29) has joined #ceph
[14:18] * dmsimard (~Adium@108.163.152.66) Quit (Ping timeout: 480 seconds)
[14:20] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[14:21] * `jpg (~josephgla@ppp121-44-202-175.lns20.syd7.internode.on.net) has joined #ceph
[14:24] * mtanski (~mtanski@cpe-72-229-51-156.nyc.res.rr.com) has joined #ceph
[14:25] * ksingh (~Adium@teeri.csc.fi) has joined #ceph
[14:31] * mtanski (~mtanski@cpe-72-229-51-156.nyc.res.rr.com) Quit (Quit: mtanski)
[14:31] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[14:32] * dmsimard (~Adium@70.38.0.246) has joined #ceph
[14:34] * marrusl_ (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[14:34] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[14:37] <classicsnail> elrepo kernels work fine
[14:37] * vince (~vince@160.85.231.155) Quit (Quit: Leaving)
[14:38] <classicsnail> they work with ceph too, I've not had issues
[14:38] * `jpg (~josephgla@ppp121-44-202-175.lns20.syd7.internode.on.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[14:39] <classicsnail> the elrepo kernels focus on keeping ABI the same as that RHEL release, as for how they sit with ceph packages themselves, YMMV and all the rest
[14:39] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[14:40] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) has joined #ceph
[14:41] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[14:41] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) has joined #ceph
[14:42] <janos> i'm sure they are fine. but when someone mentions RHEL there is usually some aspect of desiring support
[14:43] * timidshark (~timidshar@70-88-62-73-fl.naples.hfc.comcastbusiness.net) has joined #ceph
[14:44] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[14:45] <classicsnail> rh won't turn you away in my experience, but you will have to potentially switch back to a rhel kernel
[14:45] <classicsnail> and it's more effort than it's worth to remove the rhel kernel from the system anyway
[14:45] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[14:46] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[14:47] * mtanski (~mtanski@cpe-72-229-51-156.nyc.res.rr.com) has joined #ceph
[14:47] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[14:48] * BillK (~BillK-OFT@58-7-102-93.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[14:49] * BillK (~BillK-OFT@106-69-6-59.dyn.iinet.net.au) has joined #ceph
[14:50] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[14:51] * mtanski (~mtanski@cpe-72-229-51-156.nyc.res.rr.com) Quit ()
[14:57] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) has joined #ceph
[14:58] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[15:01] <oro> janos, yes we know that we will lose the support. (Actually redhat is still better because of the remote updates, and so on). But we had so many problems with ubuntu that we don't trust it anymore; and of course if we go with ubuntu, we don't have support at all. Redhat licences are already available, fortunately.
[15:02] <darkfader> tbh outside of #ceph i've never seen any person running a server on ubuntu
[15:03] <oro> darkfader, you mean running servers with ceph?
[15:03] * mnash (~chatzilla@66-194-114-178.static.twtelecom.net) Quit (Ping timeout: 480 seconds)
[15:03] * simulx (~simulx@66-194-114-178.static.twtelecom.net) Quit (Ping timeout: 480 seconds)
[15:03] <darkfader> no i mean using ubuntu on anything that is a production server
[15:04] <darkfader> people do it in here since it's closer to the dev os, but it's perfectly understandable you feel something else would be bit saner
[15:04] <oro> oh, come on :) it's not that bad, it just has its issues
[15:05] <janos> oro - cool. i'm just used to hearing of people going the RHEL route without support by using centOS instead
[15:05] <janos> nothing wrong with what you're suggesting, i'm just not used to hearing it
[15:05] * thomnico (~thomnico@192.165.183.201) has joined #ceph
[15:05] * simulx (~simulx@vpn.expressionanalysis.com) has joined #ceph
[15:06] <janos> i personally prefer bare installs of fedora because it's pretty up to date and unmolested, for better or worse
[15:06] <janos> though i admit to looking forward to RHEL 7 somewhat
[15:06] <janos> i think that's based off of fedora 18
[15:07] <darkfader> 19
[15:07] <janos> !
[15:07] <janos> nice
[15:07] <darkfader> luckily, means you get working fc target mode in targetcli (probably. I hope)
[15:08] <oro> yeah, i also suggested centOS, but we really would like to have the remote upgrades. Also, we are looking into using RH's OpenStack distribution
[15:08] <janos> cool
[15:08] <darkfader> oro: one benefit is you *can* take rhel into support if you find out you need it down the road
[15:08] * julian (~julianwa@125.69.104.83) has joined #ceph
[15:08] <darkfader> i've only once wished for that, but then i did so very very much
[15:10] * thomnico (~thomnico@192.165.183.201) Quit ()
[15:10] <oro> but then the question is, what relationship do you devs have with rhel/centos/fedora? in terms of testing ceph, etc. on those systems?
[15:13] * ghartz (~ghartz@ip-68.net-80-236-84.joinville.rev.numericable.fr) has joined #ceph
[15:14] <oro> or at all, if we go with rhel, can we expect that you will have up-to-date, production ready repos where we can get ceph from?
[15:14] * mnash (~chatzilla@vpn.expressionanalysis.com) has joined #ceph
[15:17] * thomnico (~thomnico@192.165.183.201) has joined #ceph
[15:34] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[15:37] * dmsimard (~Adium@70.38.0.246) Quit (Quit: Leaving.)
[15:40] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[15:41] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[15:41] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) has joined #ceph
[15:42] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[15:43] <darkfader> does anyone have performance numbers for running with a cache tier?
[15:44] * zidarsk8 (~zidar@tm.78.153.58.217.dc.cable.static.telemach.net) has left #ceph
[15:45] * yanzheng (~zhyan@jfdmzpr02-ext.jf.intel.com) has joined #ceph
[15:48] <oro> actually i will be interested in darkfader's question, too. We have several tens of terabytes of HDDs running under ceph, while our previous, SSD-only storage could go in as a cache tier. If that makes sense at all
[15:49] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[15:50] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[15:50] * ircolle (~Adium@2601:1:8380:2d9:b152:4e31:e203:1d53) has joined #ceph
[15:51] <ksingh> oro and darkfader :: i am interested in this discussion too
[15:51] <ksingh> oro : you said ssd-only storage as a cache tier , so did you make a single ssd pool or multiple ssd pools
[15:52] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:52] <ksingh> how have you done the configuration , if you can tell us briefly
[15:53] <darkfader> "could go to as a cache tier"
[15:53] <darkfader> future ;)
[15:56] * thomnico (~thomnico@192.165.183.201) Quit (Ping timeout: 480 seconds)
[15:57] <ksingh> Anyone : if you can help , i am trying to create an ssd pool with a custom crush map . Setup is Node 1 has OSD.0 and OSD.1 as ssds , and i have created a POOL with these 2 SSDs . But still when i am putting data in this pool
[15:57] * i_m (~ivan.miro@gbibp9ph1--blueice1n2.emea.ibm.com) Quit (Quit: Leaving.)
[15:57] <ksingh> its not actually getting stored on osd.0 and osd.1
[15:57] <ksingh> rather its getting stored on other OSDs
[15:57] <oro> ksingh, right, we are currently building our next cluster, and the soon-to-be ssd-cache machine now operates under our current infrastructure. We (or at least, I) don't know that much about tiering, but if we find out how things could work, we will definitely have a blog post about it :)
[15:57] <ksingh> please suggest
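
The usual shape of the fix for ksingh's problem is to give the SSD OSDs their own root and rule in the crush map, then point the pool at that rule; rule id 3 and the pool name below are illustrative:

    ceph osd getcrushmap -o crush.bin && crushtool -d crush.bin -o crush.txt
    # edit crush.txt: add a root containing only osd.0 and osd.1, plus a rule that takes it
    crushtool -c crush.txt -o crush.new && ceph osd setcrushmap -i crush.new
    ceph osd pool set ssdpool crush_ruleset 3
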
[15:58] <ksingh> oro : i would love to see a blog post ; underneath the SSD pool are you planning an erasure coded pool
[15:59] * BillK (~BillK-OFT@106-69-6-59.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[15:59] * sjm (~sjm@cpe-67-248-135-198.nycap.res.rr.com) has joined #ceph
[16:06] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[16:07] * tjikkun_ (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) has joined #ceph
[16:11] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) Quit (Ping timeout: 480 seconds)
[16:11] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[16:12] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[16:13] * dmsimard1 (~Adium@ap03.wireless.co.mtl.iweb.com) has joined #ceph
[16:13] * dmsimard (~Adium@108.163.152.2) Quit (Read error: Connection reset by peer)
[16:16] * dmsimard (~Adium@70.38.0.245) has joined #ceph
[16:18] * thomnico (~thomnico@c-217-115-42-157.cust.bredband2.com) has joined #ceph
[16:21] * gregmark (~Adium@68.87.42.115) has joined #ceph
[16:22] * dmsimard1 (~Adium@ap03.wireless.co.mtl.iweb.com) Quit (Ping timeout: 480 seconds)
[16:23] * thomnico (~thomnico@c-217-115-42-157.cust.bredband2.com) Quit ()
[16:25] * julian (~julianwa@125.69.104.83) Quit (Quit: afk)
[16:25] * dmsimard1 (~Adium@108.163.152.2) has joined #ceph
[16:26] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[16:27] * julian (~julianwa@125.69.104.83) has joined #ceph
[16:28] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit ()
[16:28] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[16:28] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[16:29] * dmsimard (~Adium@70.38.0.245) Quit (Ping timeout: 480 seconds)
[16:33] * thomnico (~thomnico@c-217-115-42-157.cust.bredband2.com) has joined #ceph
[16:36] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[16:39] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[16:39] * ismell (~ismell@host-64-17-89-79.beyondbb.com) Quit (Quit: leaving)
[16:40] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[16:41] * KaZeR (~KaZeR@64.201.252.132) has joined #ceph
[16:42] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[16:46] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[16:46] * ChanServ sets mode +v andreask
[16:47] * b0e (~aledermue@juniper1.netways.de) Quit (Ping timeout: 480 seconds)
[16:50] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:57] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[17:00] * JoeGruher (~JoeGruher@jfdmzpr03-ext.jf.intel.com) has joined #ceph
[17:00] * vata (~vata@2607:fad8:4:6:b8c1:427:27d2:8f15) has joined #ceph
[17:02] * b0e (~aledermue@juniper1.netways.de) Quit (Read error: Operation timed out)
[17:05] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[17:08] * thomnico (~thomnico@c-217-115-42-157.cust.bredband2.com) Quit (Quit: Ex-Chat)
[17:08] * ifur (~osm@hornbill.csc.warwick.ac.uk) has joined #ceph
[17:14] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[17:16] * cmdrk (~lincoln@c-24-12-206-91.hsd1.il.comcast.net) Quit (Quit: leaving)
[17:16] * BManojlovic (~steki@91.195.39.5) Quit (Quit: I'm off, you do whatever you want...)
[17:17] * bitblt (~don@128-107-239-233.cisco.com) has joined #ceph
[17:18] <ifur> any experiences with bonding 2x 10GbE to nodes?
[17:19] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:22] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[17:24] * alram (~alram@cpe-76-167-50-51.socal.res.rr.com) has joined #ceph
[17:26] * sprachgenerator (~sprachgen@130.202.135.213) has joined #ceph
[17:27] * valeech (~valeech@38.122.132.154) has joined #ceph
[17:28] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) Quit (Read error: Operation timed out)
[17:30] * JeffK (~Jeff@38.99.52.10) has joined #ceph
[17:30] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[17:41] * oms101 (~oms101@2620:113:80c0:5::2222) Quit (Ping timeout: 480 seconds)
[17:47] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[17:49] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) Quit (Ping timeout: 480 seconds)
[17:49] * rmoe (~quassel@12.164.168.117) has joined #ceph
[17:52] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[17:54] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[17:55] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:58] * yanzheng (~zhyan@jfdmzpr02-ext.jf.intel.com) Quit (Remote host closed the connection)
[17:59] * leseb (~leseb@185.21.172.77) Quit (Ping timeout: 480 seconds)
[18:00] * zackc (~zackc@0001ba60.user.oftc.net) Quit (Quit: brb)
[18:02] <wrencsok> we run ours bonded. no issues.
[18:02] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has left #ceph
[18:03] * joshd1 (~joshd@2602:306:c5db:310:90bd:c04b:212d:344b) has joined #ceph
[18:03] * rmoe_ (~quassel@12.164.168.117) has joined #ceph
[18:05] * zackc (~zackc@0001ba60.user.oftc.net) has joined #ceph
[18:05] * oro (~oro@2001:620:20:222:f4eb:a026:d10:e254) Quit (Ping timeout: 480 seconds)
[18:05] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[18:06] * timidshark (~timidshar@70-88-62-73-fl.naples.hfc.comcastbusiness.net) Quit (Remote host closed the connection)
[18:07] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[18:15] * dpippenger (~riven@cpe-198-72-157-189.socal.res.rr.com) has joined #ceph
[18:15] * valeech (~valeech@38.122.132.154) Quit (Quit: valeech)
[18:16] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:16] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[18:20] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:22] * sputnik1_ (~sputnik13@207.8.121.241) has joined #ceph
[18:23] * ksingh (~Adium@teeri.csc.fi) Quit (Quit: Leaving.)
[18:24] * sjm (~sjm@cpe-67-248-135-198.nycap.res.rr.com) Quit (Ping timeout: 480 seconds)
[18:24] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[18:26] * ksingh (~Adium@teeri.csc.fi) has joined #ceph
[18:28] * ksingh (~Adium@teeri.csc.fi) Quit ()
[18:28] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[18:29] * gregmark (~Adium@68.87.42.115) has joined #ceph
[18:29] * valeech (~valeech@38.122.132.154) has joined #ceph
[18:32] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[18:32] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[18:32] * valeech (~valeech@38.122.132.154) Quit ()
[18:33] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) has joined #ceph
[18:39] * scuttlemonkey changes topic to 'Latest stable (v0.72.x "Emperor") -- http://ceph.com/get || dev channel #ceph-devel'
[18:39] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Quit: Leaving.)
[18:40] * fghaas (~florian@172.56.16.213) has joined #ceph
[18:41] * scalability-junk (uid6422@id-6422.ealing.irccloud.com) has joined #ceph
[18:45] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has left #ceph
[18:46] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[18:48] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[18:50] * b0e (~aledermue@juniper1.netways.de) Quit (Ping timeout: 480 seconds)
[18:51] <KaZeR> how can i force remove an osd ? Error EBUSY: osd.2 is still up; must be down before removal.
[18:51] <Fruit> ceph osd down 2
[18:52] * fghaas (~florian@172.56.16.213) has left #ceph
[18:52] <KaZeR> thanks Fruit
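
For the record, the full removal sequence for an osd (id 2, as in the example) generally looks like:

    ceph osd out 2
    service ceph stop osd.2       # or: stop ceph-osd id=2 on upstart systems
    ceph osd crush remove osd.2
    ceph auth del osd.2
    ceph osd rm 2
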
[18:53] * dpippenger (~riven@cpe-198-72-157-189.socal.res.rr.com) Quit (Quit: Leaving.)
[18:55] <ifur> been approached by bioinformatics people wanting to do a cloud with ceph to analyze genomes
[18:56] <ifur> wanting erasure coding... i assume this would mean designing for iops, except using spindle drives
[19:02] <nhm> ifur: what tools are they using?
[19:03] * The_Bishop_ (~bishop@f055167181.adsl.alicedsl.de) has joined #ceph
[19:03] * xarses (~andreww@12.164.168.117) has joined #ceph
[19:04] <nhm> ifur: I used to work on grid tools for proteomics/bioinformatics data analysis.
[19:04] <ifur> nhm: they're not very forthcoming, and their meddling with implementations is another story... initially they mainly wanted to use erasure coding to reduce the amount of data they had to transfer between sites....
[19:04] <nhm> mostly just wrappers around things like bowtie, xtandem, etc
[19:05] <ifur> nhm: i believe they want to make a cloud that handles and stores bacterial genomes, if that makes any sense...
[19:05] <ifur> at least for compute they want a mix of nodes with 1.5TB to 6TB of memory
[19:06] <ifur> and it is read once and write once worloads from what ive been told
[19:06] <nhm> ifur: Back when I was designing systems for that kind of thing, it looked like it was going to work out best for our guys to have local SSDs for speed and some big bulk data store (like ceph with erasure coding) for long term storage for the raw data.
[19:06] <ifur> so IOPS is needed?
[19:07] <nhm> it was for at least some of the applications.
[19:07] <nhm> ifur: Some of the old protein search engines on the proteomics side were poorly written and didn't cache things in memory well.
[19:08] <nhm> ifur: We saw like a 10x speed up by putting the databases on a ramdisk.
[19:08] <ifur> fantastic....
[19:08] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[19:08] <nhm> ifur: I'm less familiar with the tools on the genomics side though
[19:08] <ifur> wouldnt surprise me that a mix of systems would be needed
[19:08] <nhm> ifur: And it's been like 3-4 years...
[19:09] <nhm> ifur: in any event, expect a bunch of 10 year old code written by graduate students and post docs. :D
[19:09] <ifur> but they also wanted a small ~300TB parallel filesystem, which tells me that they want a quick and easy fix for IOPS loads, so I think you are right here
[19:09] * zerick (~eocrospom@190.187.21.53) Quit (Read error: Connection reset by peer)
[19:09] <nhm> And don't expect erasure coding to be very fast as far as latency/iops goes.
[19:09] <stewiem20001> Hello all. I've updated all my monitors to ceph version 0.72.2-61-gd258fcc, and just done the first OSD-server, but now the OSD processes start and then crash with: osd/PG.cc: 1018: FAILED assert(peer_info.count(*i)) ... a bit of Googling's not shed any light, any ideas why the new code's killing things?
[19:10] <ifur> nhm: ive seen a student do bitflipping on an integer by converting it to double float to do the bit flipping and then convert it back to integer
[19:10] * The_Bishop__ (~bishop@f055083171.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[19:10] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[19:11] <ifur> nhm: curious about how cpu intensive erasure coding actually will be.... has anyone done any testing on this yet?
[19:11] <nhm> ifur: I've seen programmers working for cabig recommend with a straight face that terabytes of binary medical data should be base64 encoded and sent embedded as XML so it could be 'annotated'. :D
[19:12] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[19:12] <janos> omg
[19:12] <ifur> nhm: that just screams promotion to management
[19:13] <nhm> ifur: it's fairly heavy. I've got some test results locally. Basic gist is that large IO writes are fairly competitive in terms of performance with 2x-3x replication, but small IO is a lot slower.
[19:14] <ifur> so for throughput you dont need much more CPU power?
[19:16] <nhm> ifur: I may have been pegging the CPUs on this machine during the write tests, but at least for large IOs erasure coding was in the right ballpark.
[19:16] <nhm> for small IOs it was (as expected) slower.
[19:16] <nhm> don't remember the CPU load in those tests. I can go back and look in a bit, got a meeting now.
[19:18] <ifur> alright, thanks!
[19:18] * joshd1 (~joshd@2602:306:c5db:310:90bd:c04b:212d:344b) Quit (Quit: Leaving.)
[19:18] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[19:22] * Osmann (~kureysli@88.234.104.81) has joined #ceph
[19:22] * Osmann (~kureysli@88.234.104.81) Quit (autokilled: Do not spam other people. Mail support@oftc.net if you feel this is in error. (2014-04-01 17:22:27))
[19:23] * fghaas (~florian@205.158.164.101.ptr.us.xo.net) has joined #ceph
[19:23] * The_Bishop_ (~bishop@f055167181.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[19:23] * Cube (~Cube@12.248.40.138) has joined #ceph
[19:25] <ponyofdeath> hi, so i seem to have an issue where if i have kvm vm's and i create a new ceph pool and then modify the client.libvirtuser osd grant permissions, i am unable to use that username from the kvm hosts for any new connections. i have to create a new client/key in order to talk with the ceph cluster again. any ideas why that is happening?
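
Worth checking here: 'ceph auth caps' replaces the entire cap set, so restating only the osd caps silently drops the mon caps, and clients then hang on new connections. A sketch using the caps quoted earlier (client name as given above):

    ceph auth caps client.libvirtuser mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=libvirt, allow rwx pool=dc2-dev-app-pool01'
    ceph auth get client.libvirtuser   # confirm both mon and osd caps survived
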
[19:33] * leseb (~leseb@185.21.174.206) has joined #ceph
[19:35] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[19:35] * sarob (~sarob@2601:9:7080:13a:2c06:784f:4ef4:47dd) has joined #ceph
[19:39] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[19:39] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) has joined #ceph
[19:41] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) Quit ()
[19:42] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[19:43] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[19:43] * sarob (~sarob@2601:9:7080:13a:2c06:784f:4ef4:47dd) Quit (Ping timeout: 480 seconds)
[19:47] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) has joined #ceph
[19:58] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[19:59] * neurodrone_ (~neurodron@static-108-30-171-7.nycmny.fios.verizon.net) has joined #ceph
[20:04] * neurodrone (~neurodron@static-108-30-171-7.nycmny.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[20:04] * neurodrone_ is now known as neurodrone
[20:04] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[20:08] * The_Bishop_ (~bishop@2001:470:50b6:0:38c1:f719:c265:d189) has joined #ceph
[20:11] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[20:11] * sarob (~sarob@2601:9:7080:13a:a0bb:3455:6a77:895c) has joined #ceph
[20:12] * thb (~me@2a02:2028:1c0:ac30:6267:20ff:fec9:4e40) has joined #ceph
[20:12] * sarob (~sarob@2601:9:7080:13a:a0bb:3455:6a77:895c) Quit ()
[20:12] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[20:13] * flaxy (~afx@78.130.174.164) Quit (Ping timeout: 480 seconds)
[20:15] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[20:16] * kiwigera_ (~kiwigerai@208.72.139.54) has joined #ceph
[20:16] * ismell (~ismell@host-64-17-89-79.beyondbb.com) has joined #ceph
[20:25] * fghaas (~florian@205.158.164.101.ptr.us.xo.net) Quit (Quit: Leaving.)
[20:32] * sjm (~sjm@cpe-67-248-135-198.nycap.res.rr.com) has joined #ceph
[20:37] * Underbyte (~jerrad@pat-global.macpractice.net) has joined #ceph
[20:43] * Fernando (~Fernando@static-71-165-13-111.lsanca.fios.verizon.net) has joined #ceph
[20:43] <Fernando> Hello
[20:44] * Fernando is now known as Guest5167
[20:45] <Guest5167> was wondering if i could get some light shed on a problem i have run across with my ceph cluster
[20:45] <Guest5167> one of my mons is unable to join quorum
[20:45] <Guest5167> and i've tried removing it and then re-adding it, and it is still unable to join quorum
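A common first step for a mon that won't join quorum (a general sketch, not specific to Guest5167's cluster; the socket path and mon id vary by distro and config) is to ask the stuck daemon for its own view via the admin socket and compare it with the quorum's:

    # on the stuck monitor host; mon id is often the short hostname
    ceph --admin-daemon /var/run/ceph/ceph-mon.$(hostname -s).asok mon_status
    # from any node that can reach the existing quorum
    ceph quorum_status

Mismatched monmaps, clock skew, or unreachable mon addresses usually show up in that output.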
[20:45] * TMM (~hp@c97185.upc-c.chello.nl) has joined #ceph
[20:46] * flaxy (~afx@78.130.174.164) has joined #ceph
[20:47] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[20:52] * rotbeard (~redbeard@2a02:908:df10:6f00:76f0:6dff:fe3b:994d) has joined #ceph
[20:52] * rotbeard (~redbeard@2a02:908:df10:6f00:76f0:6dff:fe3b:994d) Quit ()
[20:59] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[20:59] * Guest5167 (~Fernando@static-71-165-13-111.lsanca.fios.verizon.net) Quit (Quit: Guest5167)
[21:00] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[21:02] * sjm (~sjm@cpe-67-248-135-198.nycap.res.rr.com) Quit (Remote host closed the connection)
[21:07] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[21:12] * jks (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[21:31] * kiwigera_ is now known as kiwigeraint
[21:34] * vilobhmm (~vilobhmm@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) has joined #ceph
[21:34] * giorgis (~oftc-webi@46-93-243.adsl.cyta.gr) has joined #ceph
[21:34] <giorgis> hi all
[21:35] <giorgis> I am trying to install a new ceph cluster
[21:35] <giorgis> following the guide, and when I try to start the initial mon I have a problem
[21:35] <giorgis> Starting Ceph mon.ceph1 on ceph1... [ceph1][DEBUG ] failed: 'ulimit -n 32768; /usr/bin/ceph-mon -i ceph1 --pid-file /var/run/ceph/mon.ceph1.pid -c /etc/ceph/ceph.conf '
[21:36] <giorgis> any ideas??
[21:36] <alfredodeza> giorgis: yes, you want to try that manually on that server, not through ceph-deploy
[21:36] <alfredodeza> go ssh to ceph1 and run that
[21:36] <alfredodeza> *and* tail the logs
[21:36] <giorgis> I am on the server
[21:36] <alfredodeza> if the logs don't say much increase the verbosity for the mon
[21:37] <alfredodeza> giorgis: you are running that from ceph-deploy though
[21:37] <alfredodeza> that output is from ceph-deploy
[21:37] <giorgis> alfredodeza: should I run the ulimit -n 32768
[21:37] <giorgis> as well?
[21:37] <alfredodeza> the whole thing
[21:37] <alfredodeza> sure
[21:37] <alfredodeza> or ulimit first
[21:37] <alfredodeza> and then the ceph-mon
[21:38] <giorgis> as root?
[21:39] <giorgis> alfredodeza: [3056]: (33) Numerical argument out of domain
[21:39] <alfredodeza> great
[21:39] <alfredodeza> with sudo?
[21:40] <giorgis> alfredodeza: directly as rot
[21:40] <giorgis> root
[21:40] <alfredodeza> ok, so you have a ulimit problem then
[21:40] <alfredodeza> you need to raise the limits
[21:40] <alfredodeza> that is for open file handles iirc
[21:41] <giorgis> ulimit didn't produce any errors...
[21:41] <alfredodeza> oh it wasn't that?
[21:41] <alfredodeza> is there any more output?
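What alfredodeza is suggesting, roughly (the path and mon id are taken from the failed command quoted above; the debug flag value is a common choice, not something he specified): raise the file-handle limit, then run the monitor in the foreground with verbose logging so the real error is visible:

    ulimit -n 32768
    # -d keeps ceph-mon in the foreground and sends log output to stderr;
    # --debug-mon 20 turns monitor logging up to maximum verbosity
    /usr/bin/ceph-mon -i ceph1 -c /etc/ceph/ceph.conf -d --debug-mon 20
    # or watch the log file from another shell
    tail -f /var/log/ceph/ceph-mon.ceph1.log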
[21:41] * mtanski (~mtanski@69.193.178.202) Quit (Ping timeout: 480 seconds)
[21:46] * Cube1 (~Cube@12.248.40.138) has joined #ceph
[21:46] * Cube (~Cube@12.248.40.138) Quit (Read error: Connection reset by peer)
[21:47] <giorgis> alfredodeza: thanks for the suggestions
[21:48] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[21:48] * ChanServ sets mode +v andreask
[21:48] <mjevans> Arg, why does ceph /insist/ on altering my custom osd placement each and every startup /by default/
[21:49] <giorgis> alfredodeza: seems that I had put the wrong public network in the ceph.conf file... I had specified a network that didn't exist, because I am running it in a VM with just one interface for both internal and public...
[21:49] <alfredodeza> ah nice
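For reference, the setting giorgis is describing lives in ceph.conf; the subnet below is only an example and must match a network actually configured on the node's interfaces:

    [global]
    ; must be a subnet that exists on the monitor's interfaces
    public network = 192.168.1.0/24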
[21:53] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[21:53] <mjevans> Ah that's why... I forgot to set the thing in the config file
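The config knob mjevans appears to mean (an assumption based on the symptom; he never names it in the channel) is the one that makes OSDs rewrite their own crush location at startup. Disabling it preserves hand-edited placement:

    [osd]
    # keep manually edited crush locations across OSD restarts
    osd crush update on start = false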
[22:01] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[22:08] <giorgis> alfredodeza: thx a million for the feedback!!! I owe you one!!!
[22:08] <alfredodeza> giorgis: oh not at all! this was all you :)
[22:10] <giorgis> alfredodeza: I am not sure how long it would have taken me to solve it myself... therefore thank you once again for your time and your useful feedback!
[22:12] * Sysadmin88 (~IceChat77@176.254.32.31) has joined #ceph
[22:14] * yeled (~yeled@spodder.com) Quit (Ping timeout: 480 seconds)
[22:15] <wrencsok> I've an issue that maybe someone can help me fix or verify if it's a bug. We posted some questions to the mailing list, no response so far. I seem to be able to pull a list of hourly logs on a given bucket using the command: radosgw-admin log list --bucket=abc | grep abc | sort. I can't seem to figure out, or maybe there is a bug, how to use "log show" to dump the contents of a specific file. http://pastebin.com/UKqeW8T6 It appear
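As a hedged sketch of what wrencsok is after (flag spellings per the radosgw-admin man page of that era; the bucket name comes from the question, the placeholders are hypothetical): `log show` takes either an object name exactly as printed by `log list`, or the bucket/date/bucket-id triple:

    radosgw-admin log list --bucket=abc
    # either pass one object name copied from the listing...
    radosgw-admin log show --object=<object-name-from-list>
    # ...or identify the log by bucket, date, and bucket id
    radosgw-admin log show --bucket=abc --date=2014-04-01 --bucket-id=<id>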
[22:17] * mjblw1 (~mbaysek@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[22:20] * leseb (~leseb@185.21.174.206) Quit (Ping timeout: 480 seconds)
[22:21] * giorgis (~oftc-webi@46-93-243.adsl.cyta.gr) Quit (Quit: Page closed)
[22:28] * allsystemsarego (~allsystem@188.27.166.29) Quit (Quit: Leaving)
[22:30] * leseb (~leseb@185.21.174.206) has joined #ceph
[22:31] * Sysadmin88 (~IceChat77@176.254.32.31) has left #ceph
[22:38] * yeled (~yeled@spodder.com) has joined #ceph
[22:45] * BillK (~BillK-OFT@58-7-77-11.dyn.iinet.net.au) has joined #ceph
[22:47] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[22:53] * BillK (~BillK-OFT@58-7-77-11.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[22:54] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[22:55] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[23:01] * suaporraquente (~suaporraq@200.79.255.125) has joined #ceph
[23:01] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) Quit (Quit: Quitte)
[23:02] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) has joined #ceph
[23:04] * suaporraquente (~suaporraq@200.79.255.125) Quit (autokilled: Do not spam other people. Mail support@oftc.net if you feel this is in error. (2014-04-01 21:04:42))
[23:05] * `jpg (~josephgla@ppp121-44-202-175.lns20.syd7.internode.on.net) has joined #ceph
[23:05] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) Quit ()
[23:12] * Gamekiller77 (~Gamekille@128-107-239-234.cisco.com) has joined #ceph
[23:12] <Gamekiller77> anyone here to help with what i think is a simple problem
[23:12] <Gamekiller77> i see this when i do a ceph osd tree
[23:12] <Gamekiller77> 80 0.36 osd.80 DNE
[23:13] <mjevans> Down among other things
[23:13] <Gamekiller77> the DNE means what, and how do i get rid of it? i rebuilt this node and did the manual osd removal
[23:13] <Gamekiller77> so the OSD is gone
[23:13] <mjevans> 'down' means the OSD isn't on the network
[23:13] <Gamekiller77> it has a new number now, so i need to clean up the crush map
[23:13] <Gamekiller77> ok so OSD.80 gone
[23:13] <Gamekiller77> bye bye
[23:13] <mjevans> I'm not familiar with the other things
[23:13] <mjevans> You should read the documentation
[23:14] <Gamekiller77> will do
[23:14] <Gamekiller77> just seeing if there is a quick fix here
[23:14] <Gamekiller77> thanks
[23:14] <mjevans> pg / osd / crush map related stuff
[23:14] <lurbs> DNE will be 'Does Not Exist', but not sure what to do past that.
[23:14] <mjevans> Your data is at risk; a 'quick fix' isn't always the best option.
[23:15] <Gamekiller77> funny thing is i saw it on another node but then it went away
[23:15] <Gamekiller77> oh well, i'll find a fix hehe
[23:20] * BillK (~BillK-OFT@58-7-77-11.dyn.iinet.net.au) has joined #ceph
[23:24] * kiwigeraint (~kiwigerai@208.72.139.54) Quit (Remote host closed the connection)
[23:25] <Gamekiller77> there we go all nice and clean
[23:25] <Gamekiller77> it was simple in a way
[23:25] <Gamekiller77> fyi just had to remove the osd from the master crush map
[23:26] <Gamekiller77> when i did it one way it never removed the OSD from the cluster
[23:26] <Gamekiller77> very odd
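For anyone hitting the same DNE ("Does Not Exist") leftover, the usual full removal sequence (standard ceph CLI of this era; osd.80 taken from the discussion above) touches the crush map, the auth database, and the osd map, and skipping any one of the three leaves a stale entry like the one Gamekiller77 saw:

    ceph osd crush remove osd.80   # drop it from the crush map
    ceph auth del osd.80           # remove its cephx key
    ceph osd rm 80                 # remove it from the osd map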
* `jpg (~josephgla@ppp121-44-202-175.lns20.syd7.internode.on.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz...)
[23:27] <vilobhmm> Hello Everyone, how do i backport rbd.ko to linux kernel 2.6.32 ?
[23:28] * Karcaw (~evan@96-41-200-66.dhcp.elbg.wa.charter.com) Quit (Read error: Connection reset by peer)
[23:29] <mjevans> vilobhmm: you hire some extremely talented linux kernel developers and pay them a fair rate (probably a lot of money for such a task). It's likely more cost-effective to upgrade to a kernel version that already supports newer rbd protocols.
[23:30] * Karcaw (~evan@96-41-200-66.dhcp.elbg.wa.charter.com) has joined #ceph
[23:30] * `jpg (~josephgla@ppp121-44-202-175.lns20.syd7.internode.on.net) has joined #ceph
[23:30] * kiwigeraint (~kiwigerai@208.72.139.54) has joined #ceph
[23:31] <vilobhmm> mjevans : ok :) will upgrade to 2.6.37. What process should i follow? I mean, check out the source, or go with some specific module for rbd.ko?
[23:31] <Fruit> 2.6.32? that's debian squeeze or similar
[23:31] <mjevans> Fruit: probably redhat actually
[23:31] <Serbitar> 2.6.32 is what redhat6 and friends claim to run
[23:31] <mjevans> The new debian stable uses 3.2
[23:31] <Fruit> yes wheezy is 3.2
[23:31] <Fruit> 2.6.32 is oldstable
[23:32] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[23:32] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[23:32] <vilobhmm> ok
[23:32] <vilobhmm> i will upgrade to 2.6.37
[23:33] <vilobhmm> you have any suggested process to follow ?
[23:33] <Fruit> why 2.6.37? why not something recent?
[23:33] <lurbs> Which distribution are you running?
[23:33] <mjevans> https://www.kernel.org/ WTF is 2.6.37 anyway?
[23:34] <lurbs> And do you actually need the kernel driver, or would something in userspace be more appropriate?
[23:34] <vilobhmm> 2.6.37 is the linux kernel release in which the rbd.ko driver was first included
[23:34] <mjevans> I would guess someone backported to it
[23:34] <mjevans> I'm not sure if that can even talk with a 'recent' ceph though.
[23:35] <Fruit> and even if it can, I'm sure it'll have a bug or two
[23:35] <vilobhmm> if i want to use ceph storage i will need the rbd.ko driver running on the client to expose and attach the block devices created on the ceph storage
[23:35] <lurbs> It's also very old, and certainly won't support things like the layering (snapshots and clones) in RBD format 2 (kernel 3.10+ required, I believe).
[23:35] <mjevans> vilobhmm: kernel rbd has some really nasty gotchas
[23:35] <Fruit> vilobhmm: not necessarily. qemu/kvm can access rbd's directly
[23:36] <Fruit> vilobhmm: no rbd.ko required
[23:36] <Fruit> depends on what you want to use it for
[23:36] <mjevans> Fruit: If he's running ganeti then the kernel rbd is used for initial setup, but ganeti 2.10 adds a flag to enable userspace kvm if using qemu
[23:36] <vilobhmm> Fruit : I am aware qemu/kvm can access rbd directly
[23:37] <Fruit> mjevans: itym userspace rbd ;)
[23:37] <vilobhmm> but if i want the client to directly create a raw drive on ceph storage and then mount it on the client, i will need the rbd.ko kernel driver
[23:37] <Fruit> vilobhmm: true
[23:37] <mjevans> Fruit: yeah... I do mean userspace rbd in qemu kvm
[23:37] <vilobhmm> do we have something like userspace rbd ?
[23:38] <mjevans> qemu-img can push files up to rbd... but I honestly don't know.
[23:38] <vilobhmm> lurbs : what kernel version and what specific ceph version would you suggest ?
[23:38] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Quit: later)
[23:39] <vilobhmm> mjevans : sure, hence the thought of using the rbd driver to create raw devices
[23:39] <mjevans> vilobhmm: the latest kernel you can tolerate running, and at least what's listed in the topic here as 'stable'
[23:39] <lurbs> vilobhmm: That depends on which distribution, and which release of it, you're running, and if there's a suitable packaged kernel for it already.
[23:40] <vilobhmm> mjevans : I didn't get you, sorry
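To make the two paths in this discussion concrete (pool and image names are illustrative; both commands are documented ceph/qemu usage of the era): the kernel route needs rbd.ko on the client, while the qemu-img route Fruit and mjevans describe talks to the cluster entirely from userspace:

    # kernel route: create, map, and use a raw block device (needs rbd.ko)
    rbd create rbd/myimage --size 10240   # size in MB
    rbd map rbd/myimage                   # typically appears as /dev/rbd0
    mkfs.ext4 /dev/rbd0 && mount /dev/rbd0 /mnt
    # userspace route: no kernel driver involved (requires qemu built with rbd)
    qemu-img convert -f qcow2 -O raw disk.qcow2 rbd:rbd/myimage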
[23:41] * Gamekiller77 (~Gamekille@128-107-239-234.cisco.com) Quit (Ping timeout: 480 seconds)
[23:41] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[23:43] * sprachgenerator (~sprachgen@130.202.135.213) Quit (Quit: sprachgenerator)
[23:49] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[23:50] * diegows_ (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[23:51] <mjevans> Grumble... ganeti not yet doing trim nicely... https://code.google.com/p/ganeti/issues/detail?id=706
[23:56] * vata (~vata@2607:fad8:4:6:b8c1:427:27d2:8f15) Quit (Quit: Leaving.)
[23:57] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.