#ceph IRC Log

Index

IRC Log for 2013-05-17

Timestamps are in GMT/BST.

[0:01] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[0:12] * portante (~user@66.187.233.206) has joined #ceph
[0:23] <mrjack> joao: sorry deleted them because too big
[0:38] <paravoid> sagewk: so, any other ideas? :)
[0:39] <paravoid> also, why was it stuck on recovery_wait for those two pgs for two minutes?
[0:40] <sagewk> osd.3 is doing something with its time but i'm not sure what.
[0:41] <sagewk> doing it again with 'debug ms = 1' will peel the onion one layer further..
[0:41] * aliguori (~anthony@32.97.110.51) Quit (Remote host closed the connection)
[0:42] <paravoid> debug ms on what? mon? osd?
[0:42] <paravoid> all osds?
[0:44] <sagewk> just the osd you are restarting
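
For reference, a minimal sketch of the kind of invocation sagewk is asking for — message-level debugging on just the one OSD being restarted. The osd id and the command-line form are assumptions:

    # one-off, when starting the daemon by hand:
    ceph-osd -i 3 --debug-ms 1

    # or persistently, in ceph.conf before restarting:
    # [osd.3]
    #     debug ms = 1
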
[0:51] * spicewiesel (~spicewies@2a01:4f8:191:316b:dcad:caff:feff:ee19) has left #ceph
[0:52] <paravoid> no debug osd?
[0:52] * gregaf1 (~Adium@cpe-76-174-249-52.socal.res.rr.com) Quit (Quit: Leaving.)
[0:52] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (Quit: Leaving.)
[0:52] * gregaf1 (~Adium@cpe-76-174-249-52.socal.res.rr.com) has joined #ceph
[1:01] <sagewk> not yet.. i'm afraid that will have a big impact on timing
[1:01] <paravoid> oh hah, now it crashed
[1:01] <paravoid> I did debug-osd 10 :)
[1:01] <sagewk> that'll be interesting in and of itself :)
[1:02] <paravoid> -1> 2013-05-16 23:00:36.250569 7fa00454e700 -1 filestore(/var/lib/ceph/osd/ceph-5) _set_replay_guard 3.1bc3_TEMP error -1
[1:02] <paravoid> 0> 2013-05-16 23:00:36.253730 7fa00454e700 -1 os/FileStore.cc: In function 'void FileStore::_set_replay_guard(coll_t, const SequencerPosition&, bool)' thread 7fa00454e700 time 2013-05-16 23:00:36.250599
[1:02] <paravoid> os/FileStore.cc: 2157: FAILED assert(0 == "_set_replay_guard failed")
[1:02] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Read error: Connection reset by peer)
[1:02] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[1:02] <paravoid> holy crap
[1:02] <sagewk> about how many pgs are on this osd? (ls /var/lib/ceph/osd/.../current | wc)
[1:03] <paravoid> 370
[1:03] <paravoid> but there's something more interesting
[1:03] <paravoid> possibly
[1:03] <paravoid> so after the crash
[1:03] <paravoid> there's a
[1:03] <paravoid> 2013-05-16 23:00:36.256227 7fa003d4d700 0 filestore(/var/lib/ceph/osd/ceph-5) transaction dump:
[1:04] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Read error: Connection reset by peer)
[1:04] <paravoid> which has some ops
[1:04] <paravoid> "op_name": "omap_setkeys",
[1:04] <paravoid> "collection": "3.2450_TEMP",
[1:04] <paravoid> "oid": "29f36450\/.dir.10267.444\/head\/\/3",
[1:04] <paravoid> that's op_num 5
[1:04] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[1:05] <paravoid> then 900 lines or so for attr_lens
[1:05] <paravoid> 0> 2013-05-16 23:00:36.306600 7fa003d4d700 -1 os/FileStore.cc: In function 'unsigned int FileStore::_do_transaction(ObjectStore::Transaction&, uint64_t, int)' thread 7fa003d4d700 time 2013-05-16 23:00:36.305284
[1:05] <paravoid> os/FileStore.cc: 2679: FAILED assert(0 == "unexpected error")
[1:05] <paravoid> another crash, it's my lucky day
[1:07] <paravoid> also this in the logs:
[1:07] <paravoid> -9> 2013-05-16 23:00:36.256188 7fa003d4d700 0 filestore(/var/lib/ceph/osd/ceph-5) error (1) Operation not permitted not handled on operation 13 (18133813.0.0, or op 0, counting from 0)
[1:07] <paravoid> -8> 2013-05-16 23:00:36.256225 7fa003d4d700 0 filestore(/var/lib/ceph/osd/ceph-5) unexpected error code
[1:08] <sagewk> can you put that whole log on the bug?
[1:09] <paravoid> it has a large number of filenames
[1:09] <paravoid> so I'd rather not
[1:09] <sagewk> cephdrop?
[1:09] <paravoid> yeah, I can do that
[1:09] <sagewk> thanks
[1:16] <paravoid> 5084-ceph-osd.5.log.bz2
[1:24] * The_Bishop (~bishop@i59F6AB6A.versanet.de) has joined #ceph
[1:29] <terje-> anyone know what cuttlefish package (el6) contains rbd-fuse?
[1:31] <terje-> heh, I see that is a dumb question.. nevermind..
[1:33] * portante (~user@66.187.233.206) Quit (Quit: leaving)
[1:33] <dmick> terje-: we named it very confusingly :)
[1:34] <terje-> I was looking for it in the ceph-fuse rpm
[1:34] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[1:34] <dmick> I guess in retrospect I should have said con-fuse-ingly
[1:37] <terje-> :)
[1:40] <terje-> I'm a little con-fuse'd on rbd-fuse
[1:40] <terje-> how does it know which rbd volume to mount?
[1:41] <terje-> I can specify a pool but not a volume
[1:46] <sagewk> paravoid this happens each time you start?
[1:46] <sagewk> can you do an ls -al and getattr -d on that pg temp dir?
[1:46] <paravoid> the asserts?
[1:46] <paravoid> no, just the ones you see
[1:47] <sagewk> i mean, does the osd crash again if you start it again?
[1:47] <paravoid> oh
[1:47] <paravoid> no
[1:47] <paravoid> I restarted it and it recovered
[1:47] <sagewk> oh ok.
[1:47] <sagewk> did you capture a log of the peering?
[1:47] <paravoid> the peering pgs you mean?
[1:47] <paravoid> not this time
[1:48] * lofejndif (~lsqavnbok@9KCAACZCV.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[1:51] <sagewk> paravoid: weird, it got an error from 2 different threads at roughly the same time.
[1:51] <sagewk> well, in any case, a debug ms = 1 log showing the peering during an osd restart would still be helpful. i'm about to head out, but can look tomorrow
[1:51] <paravoid> yeah it's getting late here too
[2:01] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) Quit (Quit: Leaving.)
[2:20] * glowell (~glowell@12.248.40.138) Quit (Quit: Leaving.)
[2:25] * coyo|2 (~unf@71.21.193.106) has joined #ceph
[2:26] * Orban (~ruckc@173-167-202-19-ip-static.hfc.comcastbusiness.net) has joined #ceph
[2:27] * LeaChim (~LeaChim@176.250.188.136) Quit (Ping timeout: 480 seconds)
[2:30] * coyo (~unf@00017955.user.oftc.net) Quit (Ping timeout: 480 seconds)
[2:30] <Orban> I don't have easy access to ceph-deploy on RHEL6, but i'm trying to create an mds, i found the ceph auth CAPS requirements for a keyring for mds, but i can't find an example of the ceph.conf syntax to define the MDS. Also, can you run more than one mds for fault tolerance? Also, what is the $mds_home path under /var/lib/ceph/mds for each mds?
[2:30] <Orban> the documentation doesn't lend itself to setting up an mds without ceph-deploy
[2:30] * alram (~alram@38.122.20.226) Quit (Quit: leaving)
[2:31] <dmick> Orban: it's just like the other daemons: http://ceph.com/docs/master/rados/configuration/ceph-conf/#config-sections
[2:32] <dmick> [mds.0] for daemon-specific things, like mon and osd
[2:33] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) has joined #ceph
[2:33] <dmick> default path /var/lib/ceph/mds/$cluster-$id, similar to other daemons
[2:34] <dmick> it does seem strange that there is no Configuration section specifically for msd
[2:34] <dmick> *mds
[2:35] <dmick> you can run multiple MDSes, but only one active at a time (the others are fallbacks to handle failure)
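
For reference, a sketch of the kind of ceph.conf stanza dmick is describing; the daemon id and hostname are made up:

    [mds.0]
        host = mdshost1
        ; data lives in /var/lib/ceph/mds/$cluster-$id,
        ; i.e. /var/lib/ceph/mds/ceph-0 by default
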
[2:35] * jamespage (~jamespage@culvain.gromper.net) Quit (Quit: Coyote finally caught me)
[2:35] * jamespage (~jamespage@culvain.gromper.net) has joined #ceph
[2:37] <Orban> ok cool
[2:37] <Orban> thanks
[2:37] <dmick> http://ceph.com/docs/master/cephfs/mds-config-ref/ has MDS-specific settings (it really should also be linked under Configuration IMO)
[2:37] <Orban> i think i've finally gotten ceph licked from a setup of a test cluster, at least once i get cephfs mounted...
[2:38] <dmick> but as you don't need it for block or object use at all, it occupies different ground
[2:38] <dmick> i.e. many people run ceph clusters and don't use mds/cephfs at all
[2:40] <Orban> i'm looking to export cifs/nfs, but the RHEL6 rpms don't contain an rbd kernel module, so i'm trying cephfs, which actually looks better due to the way it load balances directories
[2:41] * DarkAceZ (~BillyMays@50.107.54.92) Quit (Ping timeout: 480 seconds)
[2:41] * andreask (~andreask@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[2:43] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[2:43] * Tamil (~tamil@38.122.20.226) Quit (Quit: Leaving.)
[2:50] <ron-slc> Quick question. I have upgraded a mon from Bobtail 56.4 to 56.6. Now the ceph-mon is not auto-starting upon system reboot. Is there a known issue?
[2:51] <ron-slc> manually issuing "service ceph start" gets things going. But obviously this needs to be automated.
[2:55] * DarkAceZ (~BillyMays@50.107.54.92) has joined #ceph
[2:56] * The_Bishop_ (~bishop@89.246.184.15) has joined #ceph
[2:57] <nhm_> Orban: you may want to look into the ganesha stuff
[2:57] <nhm_> Orban: I don't know what state it's in though.
[2:57] <ron-slc> well. nevermind.. I guess you just need to be patient... took 2 mins
[2:58] <nhm_> ron-slc: strange, no idea
[2:59] <dmick> did we do the monstore update across that boundary? I can't remember
[2:59] <dmick> but that could take some time if so
[2:59] <ron-slc> yea, know what you mean. But manually executing "service ceph start" was done on two previous reboots.
[3:00] <ron-slc> with instant results.
[3:00] <dmick> ok, something's wrong
[3:01] <ron-slc> Hmm and another Mon also had immediate + automatic startup after an upgrade+reboot. I'll have to look closer at services which may come beforehand in the init
[3:02] <ron-slc> I'm thinking it's an issue with another pre-loading service lagging
[3:03] * The_Bishop (~bishop@i59F6AB6A.versanet.de) Quit (Ping timeout: 480 seconds)
[3:12] <Orban> dmick, with the mds's how do you make a client (mount point) do failover incase the mds disappears?
[3:13] <dmick> I think that's handled for you by the cluster; the fs module really talks to the monitors first, as do all clients
[3:15] <Orban> but you point the fs module to a specific monitor ip:port, hmm, its something i'll have to build a testcase in for
[3:15] <dmick> once it connects to that monitor, it learns about all of them
[3:16] <dmick> and I believe if you want to handle "down on mount startup", you can specify more than one to the mount
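
A sketch of a kernel cephfs mount that lists several monitors up front, so the mount survives one of them being down at mount time; addresses and the secretfile path are hypothetical:

    mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/ceph \
        -o name=admin,secretfile=/etc/ceph/admin.secret
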
[3:16] <Orban> ok, cool
[3:16] <dmick> http://ceph.com/dev-notes/cephfs-mds-status-discussion/ is worth reviewing
[3:17] * Meths (rift@2.25.193.124) Quit (Ping timeout: 480 seconds)
[3:17] * alrs (~lars@cpe-142-129-65-37.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[3:24] * The_Bishop_ (~bishop@89.246.184.15) Quit (Ping timeout: 480 seconds)
[3:32] <Orban> dmick, ok so that is fairly telling... you can run a ceph cluster, but you can't do rbd or cephfs on rhel6 due to kernel module incompatibility... fun.
[3:34] * dpippenger (~riven@206-169-78-213.static.twtelecom.net) Quit (Quit: Leaving.)
[3:34] <dmick> you can't do them with kernel modules
[3:34] <dmick> but you can do either with userland code
[3:36] <dmick> ceph-fuse, ganesha, hadoop are all userland; qemu supports rbd through librbd, and there's an rbd stgt driver for iSCSI and rbd-fuse to show images as files in a FUSE fs
[3:36] <dmick> or, you know, you could use a modern kernel
[3:38] * gregaf1 (~Adium@cpe-76-174-249-52.socal.res.rr.com) Quit (Quit: Leaving.)
[3:39] * themgt (~themgt@24-177-232-33.dhcp.gnvl.sc.charter.com) Quit (Read error: Connection reset by peer)
[3:40] <Orban> policy ties me to the RHEL kernel, and i'm just trying to figure out how to get to NFS/CIFS exports from the cluster, argh
[3:41] <Orban> so, ganesha gets me NFS, but not cifs argh
[3:41] <sage> samba can now link directly to libcephfs
[3:41] <sage> or you can run samba on top of ceph-fuse
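
A minimal sketch of the second option — samba re-exporting a ceph-fuse mount; the monitor address, paths, and share name are assumptions:

    # mount cephfs with the userland client
    ceph-fuse -m mon1:6789 /mnt/cephfs

    # then share the mountpoint from smb.conf:
    # [cephshare]
    #     path = /mnt/cephfs
    #     read only = no
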
[3:42] <sage> dmick: when you have a minute.. wip-sysvinit ?
[3:43] <sage> https://github.com/ceph/ceph/pull/293
[3:47] * dgbaley27 (~matt@mrct45-133-dhcp.resnet.colorado.edu) has joined #ceph
[3:49] * coyo|2 (~unf@71.21.193.106) Quit (Quit: F*ck you, I'm a daemon.)
[3:50] <dgbaley27> Hi, I have a few servers with 32+ cores, 128 GiB RAM, and 12 2TB disks. I'd like to use 10 disks from each as OSDs. They will then be running VMs whose image is an RBD. If I have replication == # of servers, can I take advantage of the locality of the rbd image and the running VM?
[3:51] * Meths (rift@2.25.191.72) has joined #ceph
[3:51] * The_Bishop (~bishop@e179004062.adsl.alicedsl.de) has joined #ceph
[3:54] <joshd> dgbaley27: you can in the master branch, though it won't likely do you much good since the rbd images will be striped across objects stored on many different osds
[3:56] <dmick> sage: looking
[3:56] <elder> Thanks for the reviews Josh.
[3:57] <dmick> got no context, need to look at whole script
[3:59] <joshd> elder: no problem. I'm much more confident in the error handling now that I took a closer look at it
[4:00] <joshd> elder: I think the error handling ended up pretty clean
[4:00] <dmick> sage: so get_local_name_list must be called before get_name_list, and leaves results in a global? /me feels queasy
[4:04] <dmick> nothing dedups $allconf?
[4:06] <dgbaley27> joshd: couldn't the striping be across the set of local disks/osds for each replication?
[4:07] * The_Bishop (~bishop@e179004062.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[4:08] <joshd> dgbaley27: you could do that with a pool per host, but you lose the reliability and availability benefits of non-local storage
[4:09] <dmick> sage: clobbering $f in the middle of a for f in; loop?
[4:10] <dmick> (and none of them are files, anyway; should be better names)
[4:10] <dmick> what's this resolve?
[4:10] <joshd> dgbaley27: I guess you might still have faster recovery than raid, but it's not really how ceph is usually used
[4:12] <dgbaley27> joshd: you mean keep compute and storage separate?
[4:12] <dgbaley27> joshd: or do you mean raid local disks so ceph only sees one per host?
[4:14] * tkensiski (~tkensiski@86.sub-70-197-7.myvzw.com) has joined #ceph
[4:14] * tkensiski (~tkensiski@86.sub-70-197-7.myvzw.com) has left #ceph
[4:15] * tkensiski1 (~tkensiski@86.sub-70-197-7.myvzw.com) has joined #ceph
[4:15] * tkensiski1 (~tkensiski@86.sub-70-197-7.myvzw.com) has left #ceph
[4:15] <Orban> so, i'm trying to use ceph-fuse, but all i get is "starting ceph client" and it never detaches from the console, i don't get anything in the ceph logs and there doesn't seem to be a way to get debug logs from the ceph-fuse binary
[4:16] <dmick> sage: put comments in github instead
[4:17] <dmick> Orban: I'm no expert on ceph-fuse but I think it respects the standard FUSE debug -sd
[4:17] <dmick> sorry
[4:17] <dmick> -d
[4:18] <Orban> yea its not detaching, i think i got the CAPS wrong, i finally found the errors on the mon process it was connecting to
[4:22] <dmick> ah, -d means daemonize rather than debug, for ceph-fuse
[4:24] <dmick> option fuse_debug might help
[4:25] <Orban> i got it to mount once i fixed the mds caps, now trying to see how well it works
[4:25] <dmick> oh good.
[4:25] <dmick> I'll stop researching now then
[4:25] <Orban> oh sorry, thanks
[4:26] <Orban> ceph is slightly more confusing than anything else i've dealt with, but this is my first foray into distributed filesystems instead of clustered filesystems sharing a SAN
[4:31] <dmick> distributed setup is challenging
[4:34] * tkensiski (~tkensiski@139.sub-70-211-66.myvzw.com) has joined #ceph
[4:35] <Orban> alright, thanks for the help dmick, tomorrow shall be nfs/cifs time...
[4:35] <dmick> yw
[4:35] * tkensiski (~tkensiski@139.sub-70-211-66.myvzw.com) has left #ceph
[4:39] * john_barbee_ (~jbarbee@c-98-226-73-253.hsd1.in.comcast.net) has joined #ceph
[4:42] * dgbaley27 (~matt@mrct45-133-dhcp.resnet.colorado.edu) has left #ceph
[4:47] * Orban (~ruckc@173-167-202-19-ip-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[4:55] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[5:00] * tkensiski1 (~tkensiski@244.sub-70-197-7.myvzw.com) has joined #ceph
[5:02] * tkensiski1 (~tkensiski@244.sub-70-197-7.myvzw.com) has left #ceph
[5:02] * gregaf1 (~Adium@cpe-76-174-249-52.socal.res.rr.com) has joined #ceph
[5:05] * DarkAceZ (~BillyMays@50.107.54.92) Quit (Ping timeout: 480 seconds)
[5:17] * DarkAceZ (~BillyMays@50.107.54.92) has joined #ceph
[5:19] * gregaf1 (~Adium@cpe-76-174-249-52.socal.res.rr.com) Quit (Quit: Leaving.)
[5:27] <elder> joshd, I was only here briefly before.
[5:28] <elder> Yes, the cleanup was intended to be clean, or at least that was my hope. I tried to make everything building up and cleaning up symmetrical. There's still a long way to go but the last few weeks that's the sort of stuff I've been trying to do.
[5:28] <elder> When you do that you sometimes find problems that weren't obvious before.
[5:30] * Cube (~Cube@12.248.40.138) Quit (Read error: Operation timed out)
[5:31] * glowell (~glowell@ip-64-134-236-4.public.wayport.net) has joined #ceph
[5:32] <elder> That, and having everything avoid changing any externally visible state except when fully successful.
[5:35] <buck> if I use ceph-deploy to add mon's, osd's, etc., then where would the ceph.conf equivalent be? Specifically, I'm looking for the caps for different clients
[5:35] <dmick> caps are usually in keyrings, which usually end up in the default places in /var/lib/ceh
[5:35] <dmick> *ceph
[5:36] <sage> dmick: thanks, fixed the for variable.
[5:37] <dmick> terje-: missed the q before, but rbd-fuse exports each image in a pool as a file
[5:37] <dmick> doesn't mount any of them
[5:38] <dmick> sage, cool
[5:44] * The_Bishop (~bishop@e179004062.adsl.alicedsl.de) has joined #ceph
[5:45] <terje-> dmick: I see that
[5:45] <terje-> now, thanks.
[5:45] <terje-> I'm running RHEL and so I don't have rbd in the kernel.
[5:46] * tkensiski (~tkensiski@c-98-234-160-131.hsd1.ca.comcast.net) has joined #ceph
[5:46] <terje-> What is the best way to mount a volume locally without using rbd map?
[5:46] <elder> sage I'm going to update origin/for-linus to match your most recent pull request, OK?
[5:47] <elder> Then I was going to append a few recent bug fixes that should go this release. Does that match what you suggested today?
[5:47] * tkensiski (~tkensiski@c-98-234-160-131.hsd1.ca.comcast.net) has left #ceph
[5:52] <sage> perfect
[5:53] <sage> i suggest we accumulate reviewed+tested patches in either for-linus or master, and make testing include both those branches and new stuff. once a patch looks good, we decide whether it goes to for-linus or master
[5:56] <elder> So what gets nightly test coverage?
[5:57] <elder> At the moment, I've reset (locally) my for-linus branch to 638f5ab, which is what was in your last pull request.
[5:57] <sage> testing
[5:58] <sage> for master+next, and master for the stable bobtail/cuttlefish runs
[5:58] <elder> There are 8 new reviewed and tested patches since that commit. I'm rearranging them so the 5 that are bugs (3 old, 2 regressions from this release) are at the front.
[5:58] <elder> I was going to make those 5 be at the end of the for-linus branch, and then make testing be that plus the 3 non-bugs.
[6:00] <elder> I'm tired and I'm not following your for-linus, master, and testing logic, but I don't blame you for that.
[6:00] <elder> I just want to fire off tests before I go to bed...
[6:00] * coyo (~unf@71.21.193.106) has joined #ceph
[6:03] <elder> FYI I pushed "testing-next" that includes what I proposed above--5 bugs, followed by 3 cleanups. I'm going to run some teuthology stuff overnight against that.
[6:03] <elder> We can sort out what gets put where tomorrow.
[6:06] <dmick> terje-: if you want the volume to show up as a block device in the native kernel? Dunno about best
[6:06] <dmick> I think you could rbd-fuse and then mount the file, but I don't know about deadlocks
[6:07] <dmick> you definitely can stgt-export as iSCSI, and then mount the iSCSI, too
[6:07] <dmick> both are kinda weird
[6:07] <dmick> but you can run a VM and access them too
[6:09] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[6:20] <buck> super vague question, but are there common causes for a radosgw-admin command to return this error " -1 Couldn't init storage provider (RADOS)" ?
[6:23] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:27] <dmick> can't connect to the cluster at all. suspect bad .conf or bad auth/caps
[6:27] <dmick> (or dead mons)
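
One quick way to test that diagnosis is to reach for the cluster with the same identity radosgw-admin would use; the client name and keyring path here are hypothetical:

    ceph -s --name client.radosgw.gateway \
        --keyring /etc/ceph/keyring.radosgw.gateway
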
[6:30] <terje-> dmick: yea, I want the volume to show up as a block device
[6:31] <terje-> but I don't think I can use krbd and expect RH to support that system
[6:31] <terje-> so, I have to use some userland trick.
[6:32] <terje-> I may just go through a VM
[6:32] <terje-> that's probably easiest.
[6:32] <dmick> yeah. I haven't personally tested the two I mentioned but they have possibilities
[6:33] <terje-> I did just create a volume in a 'data' pool
[6:33] <dmick> I have sometimes wondered if there's a good userland-block-dev framework to use. I think FUSE does blkdevs; it might be worth investigating
[6:33] <terje-> well -
[6:33] <terje-> I used rbd-fuse to 'mount' all the volumes as files
[6:33] <dmick> would take some porting, but if it's anything like the filesystem form, it wouldn't be hard
[6:33] <terje-> then, I ran mkfs.ext4 on that
[6:33] <dmick> ok
[6:33] <terje-> and mount -o loop
[6:34] <dmick> right
[6:34] <terje-> and that worked
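
Putting terje-'s steps together, a sketch of the whole loop-mount workaround; pool, image, and mountpoint names are made up:

    rbd-fuse -p mypool /mnt/rbdimages               # every image in the pool appears as a file
    mkfs.ext4 /mnt/rbdimages/myvol                  # make a filesystem inside one image
    mount -o loop /mnt/rbdimages/myvol /mnt/myvol   # loop-mount it
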
[6:34] <terje-> but it's kinda janky
[6:34] <dmick> you mean behavior, or it just offends your sensibilities?
[6:34] <terje-> the latter
[6:34] <terje-> works perfectly
[6:34] <dmick> free your mind :)
[6:35] <terje-> :)
[6:35] <dmick> this is how you cope with a years-old kernel; you gotta be a little flexible :)
[6:35] <terje-> if I'm going to go through all that, I'd like to be able to mount it from fstab
[6:35] <dmick> you should be sure to bug RHEL support about including later rbd modules
[6:35] <dmick> maybe if enough people complain they'll get on it
[6:35] <terje-> oh I have
[6:36] <terje-> I had a couple of long conversations about it with them on the RDO mailing list
[6:36] <terje-> the answer was basically: have you given gluster a shot?
[6:37] <terje-> I have actually and it's one reason I appreciate ceph so much. :)
[6:38] <dmick> heh
[6:38] <terje-> I'll just go through a VM for now. I do have that working via rbd.
[6:39] <terje-> oh snap, what if I were to create a big volume. Run it as the root partition of a vm
[6:39] <terje-> then, run an nfs server and export it
[6:39] <terje-> it's just like krbd.
[6:39] <terje-> not
[6:39] <dmick> with 300% of the overhead! :)
[6:39] <dmick> but if you're willing to do nfs, there is ganesha...
[6:39] <terje-> oh?
[6:40] <dmick> yep
[6:42] <terje-> mount via fuse and export via nfs
[6:42] * davidzlap (~Adium@ip68-96-75-123.oc.oc.cox.net) Quit (Quit: Leaving.)
[6:42] <terje-> I'll have a look, thanks.
[6:42] <dmick> I believe that's how it works; again, never personally done it
[6:42] <dmick> but it seems to be regarded as workable
[6:43] <terje-> so close, yet so far with RHEL
[6:44] * gaveen (~gaveen@175.157.230.206) has joined #ceph
[6:45] <terje-> do you know if I can do: rbd import -p mypool /images/image.qcow2 myvol --size 102400
[6:45] <terje-> notice the .qcow2
[6:45] <dmick> no, import is a simple bag'o'bytes copy
[6:45] <terje-> so, only raw?
[6:46] <dmick> I'm not certain of the status of qemu-img; I think it still doesn't tie into the rbd backend
[6:46] <dmick> that would be the obvious answer here. but otherwise, yeah, convert to raw for importing
[6:46] <terje-> ok
[6:47] <terje-> I've noticed that my 2G qcow goes to a 100G raw file prior to the import
[6:47] <dmick> you can import from stdin and it'll try to compress 0s
[6:47] <terje-> so, I'm probably doing something wrong.. will google
[6:47] <dmick> as it will when importing from raw
[6:47] <dmick> but no, that's not surprising, there's probably a lot of wasted space in a blkdev typically
[6:48] <dmick> presumably you mean 2G actual size, but representing 100G disk
[6:48] <terje-> no problem, I notice that once the raw has been imported, rbd info tells me it's back to 2G vol
[6:48] <terje-> right
[6:48] <terje-> well, the qcow2 is 2G
[6:48] <terje-> when I use qemu-img convert -O raw
[6:48] <dmick> right. well be aware that you can maybe skip the tmpfile and pipe it straight
[6:48] <terje-> the raw file is 100G on disk
[6:48] <terje-> yea, that's cool I'll work on that
[6:49] <terje-> you're super helpful. :) thanks.
[6:50] <dmick> it tries to fill up blocks of rbd-image-blksize, and then scans them for zeros before actually consuming space
[6:50] <dmick> (you can set the blksize in the import command)
[6:51] <terje-> ok
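
Putting this together, a sketch of the two-step import being described. The --order flag (object size as a power of two) is a guess at the "blksize" option dmick mentions, and all names are hypothetical:

    # rbd import only understands raw bytes, so convert first
    qemu-img convert -O raw /images/image.qcow2 /tmp/image.raw
    # zero-filled blocks are skipped on import, keeping the image sparse
    rbd import --order 22 /tmp/image.raw mypool/myvol
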
[7:02] * alrs (~lars@cpe-142-129-65-37.socal.res.rr.com) has joined #ceph
[7:28] * buck (~buck@c-24-6-91-4.hsd1.ca.comcast.net) has left #ceph
[7:31] * dpippenger (~riven@cpe-76-166-221-185.socal.res.rr.com) has joined #ceph
[8:00] * sjusthm (~sam@71-83-191-116.dhcp.gldl.ca.charter.com) Quit (Read error: Operation timed out)
[8:07] * matt_ (~matt@220-245-1-152.static.tpgi.com.au) has joined #ceph
[8:07] * tnt (~tnt@91.177.224.32) has joined #ceph
[8:09] * ccourtaut (~ccourtaut@2001:41d0:1:eed3::1) has joined #ceph
[8:10] * eternaleye (~eternaley@2607:f878:fe00:802a::1) Quit (Ping timeout: 480 seconds)
[8:15] * bergerx_ (~bekir@78.188.204.182) has joined #ceph
[8:18] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Remote host closed the connection)
[8:18] * eternaleye (~eternaley@2607:f878:fe00:802a::1) has joined #ceph
[8:26] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[8:35] * coyo (~unf@00017955.user.oftc.net) Quit (Ping timeout: 480 seconds)
[8:42] * eternaleye (~eternaley@2607:f878:fe00:802a::1) Quit (Ping timeout: 480 seconds)
[8:43] * eternaleye (~eternaley@2607:f878:fe00:802a::1) has joined #ceph
[8:46] * ghartz (~ghartz@ill67-1-82-231-212-191.fbx.proxad.net) Quit (Read error: Connection reset by peer)
[8:52] * glowell (~glowell@ip-64-134-236-4.public.wayport.net) Quit (Ping timeout: 480 seconds)
[8:53] * andreask (~andreask@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[8:53] * ChanServ sets mode +v andreask
[9:11] * eschnou (~eschnou@85.234.217.115.static.edpnet.net) has joined #ceph
[9:13] * andreask (~andreask@h081217068225.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[9:13] * andreask (~andreask@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[9:13] * ChanServ sets mode +v andreask
[9:16] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Remote host closed the connection)
[9:31] * jgallard (~jgallard@gw-aql-129.aql.fr) has joined #ceph
[9:31] * ScOut3R (~ScOut3R@212.96.47.215) has joined #ceph
[9:32] * eternaleye (~eternaley@2607:f878:fe00:802a::1) Quit (Ping timeout: 480 seconds)
[9:39] * eternaleye (~eternaley@2607:f878:fe00:802a::1) has joined #ceph
[9:47] * LeaChim (~LeaChim@176.250.188.136) has joined #ceph
[9:48] * leseb (~Adium@83.167.43.235) has joined #ceph
[9:49] <bergerx_> hi, is there a best practice for upgrading all machines in our cluster? we are planning to create a 10 machine cluster using ubuntu 13.04, but the lifetime of 13.04 seems like 9 months, and after that 13.10 will again have a lifetime of 9 months
[9:50] <bergerx_> when upgrading between versions we need to reboot the machine
[9:50] <leseb> bergerx_: you should have a look at this: http://ceph.com/docs/master/release-notes/#v0-61-cuttlefish
[9:50] <bergerx_> every reboot seems like many migrations
[9:51] <bergerx_> leseb: not asking about updating ceph
[9:51] <bergerx_> updating the distribution needs a reboot
[9:51] <bergerx_> and a reboot ends up with a migration
[9:52] <leseb> bergerx_: so?
[9:52] <bergerx_> is there a best practice for this
[9:52] <leseb> bergerx_: so you're concerned about hardware reboot and ceph re-balancing?
[9:52] <bergerx_> yes
[9:53] <bergerx_> for 10 machines to reboot for a new version of the kernel, i need to plan 10 re-balances
[9:54] <bergerx_> i'm just asking if there is a best practice for this
[9:54] <bergerx_> i thought somebody had already faced this problem
[9:55] <leseb> bergerx_: by default there is a value set to 5min —> osd down out interval (mon)
[9:56] <leseb> thus this is only after those 5min that data will start re-balancing
[9:56] <leseb> you can also play with the flag 'noout'
[9:56] <bergerx_> hmm, can i restart an osd without need of re-balance
[9:57] <leseb> this flag prevents OSDs from being marked out of the crushmap; it's usually what I use when I have to do a hardware upgrade or something
[9:58] <bergerx_> maybe i can look into documentation, thanks for guidance
[9:58] <bergerx_> this is actually what i was asking for
[9:58] <leseb> bergerx_: np!
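
Spelled out, the noout workflow leseb is describing:

    ceph osd set noout     # down OSDs won't be marked out, so no re-balancing
    # ... reboot the node and wait for its OSDs to rejoin ...
    ceph osd unset noout
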
[9:59] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) has joined #ceph
[10:02] * loicd1 (~loic@3.46-14-84.ripe.coltfrance.com) has joined #ceph
[10:03] <joao> morning loicd
[10:03] <bergerx_> I've seen this now, thanks: http://www.sebastien-han.fr/blog/2012/08/17/ceph-storage-node-maintenance/
[10:04] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[10:07] * spicewiesel (~spicewies@2a01:4f8:191:316b:dcad:caff:feff:ee19) has joined #ceph
[10:07] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) Quit (Ping timeout: 480 seconds)
[10:12] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[10:13] * rustam (~rustam@90.216.255.245) has joined #ceph
[10:14] * spicewiesel (~spicewies@2a01:4f8:191:316b:dcad:caff:feff:ee19) Quit (Remote host closed the connection)
[10:14] * rustam (~rustam@90.216.255.245) Quit (Remote host closed the connection)
[10:16] * rustam (~rustam@90.216.255.245) has joined #ceph
[10:18] * rustam (~rustam@90.216.255.245) Quit (Remote host closed the connection)
[10:20] * spicewiesel (~spicewies@2a01:4f8:191:316b:dcad:caff:feff:ee19) has joined #ceph
[10:20] * rustam (~rustam@90.216.255.245) has joined #ceph
[10:22] * rustam (~rustam@90.216.255.245) Quit (Remote host closed the connection)
[10:24] * rustam (~rustam@90.216.255.245) has joined #ceph
[10:25] * dignus (~dignus@bastion.jkit.nl) Quit (Read error: Connection reset by peer)
[10:26] * mhu (~mhu@83.167.43.235) has joined #ceph
[10:33] * loicd1 (~loic@3.46-14-84.ripe.coltfrance.com) Quit (Ping timeout: 480 seconds)
[10:40] * dpippenger (~riven@cpe-76-166-221-185.socal.res.rr.com) Quit (Quit: Leaving.)
[10:43] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) has joined #ceph
[10:48] * rustam (~rustam@90.216.255.245) Quit (Remote host closed the connection)
[10:50] * vizh (~vizh@195.211.238.242) has joined #ceph
[10:56] * rustam (~rustam@90.216.255.245) has joined #ceph
[11:00] * rustam (~rustam@90.216.255.245) Quit (Remote host closed the connection)
[11:10] * jgallard (~jgallard@gw-aql-129.aql.fr) Quit (Remote host closed the connection)
[11:11] * jgallard (~jgallard@gw-aql-129.aql.fr) has joined #ceph
[11:15] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) Quit (Read error: Operation timed out)
[11:18] * Rocky (~r.nap@188.205.52.204) has left #ceph
[11:26] * frank9999 (~frank@kantoor.transip.nl) Quit (Remote host closed the connection)
[11:26] * frank9999 (~frank@kantoor.transip.nl) has joined #ceph
[11:28] * jgallard (~jgallard@gw-aql-129.aql.fr) Quit (Remote host closed the connection)
[11:28] * jgallard (~jgallard@gw-aql-129.aql.fr) has joined #ceph
[11:35] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) has joined #ceph
[11:35] * Kioob (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) has joined #ceph
[11:39] <loicd> ccourtaut: would you be so kind as to remind me the link to the geo replication description ?
[11:47] <jtang> good morning
[11:47] <ccourtaut> loicd: www.spinics.net/lists/ceph-devel/msg11905.html
[11:49] <jtang> oh, http://wiki.ceph.com/01Planning/02Blueprints/Dumpling/create_crush_library -- i hadnt seen that
[11:49] * saaby (~as@mail.saaby.com) has joined #ceph
[12:06] <madkiss> .
[12:06] <madkiss> woops
[12:14] <darkfader> jtang: that is a great proposal
[12:14] * darkfader still dreams of hot-plugging adjacent racks and things doing stuff to distribute load
[12:15] <darkfader> and that's where you'd need something to easily manipulate the map
[12:28] <jtang> darkfader: i was thinking more about using CRUSH for something lese
[12:28] <jtang> else
[12:29] <uli> does someone know why my deb wheezy gives an error on mounting cephfs? mount: error writing /etc/mtab: Invalid argument
[12:30] <uli> the mount works, but why this error message?
[12:30] * andreask1 (~andreask@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[12:30] * ChanServ sets mode +v andreask1
[12:30] * andreask is now known as Guest5737
[12:30] * andreask1 is now known as andreask
[12:30] * Guest5737 (~andreask@h081217068225.dyn.cm.kabsi.at) Quit (Read error: Connection reset by peer)
[12:32] * psieklFH (psiekl@wombat.eu.org) Quit (Ping timeout: 480 seconds)
[12:33] * psiekl (psiekl@wombat.eu.org) has joined #ceph
[12:35] * gaveen (~gaveen@175.157.230.206) Quit (Quit: Leaving)
[12:43] * andreask (~andreask@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[12:51] * rustam (~rustam@90.216.255.245) has joined #ceph
[12:51] * tnt (~tnt@91.177.224.32) Quit (Ping timeout: 480 seconds)
[13:05] * andreask (~andreask@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[13:05] * ChanServ sets mode +v andreask
[13:08] * DLange (~DLange@dlange.user.oftc.net) Quit (Quit: spring cleaning)
[13:10] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[13:11] * tnt (~tnt@212-166-48-236.win.be) has joined #ceph
[13:13] * john_barbee_ (~jbarbee@c-98-226-73-253.hsd1.in.comcast.net) Quit (Quit: ChatZilla 0.9.90 [Firefox 21.0/20130506154904])
[13:18] * DLange (~DLange@dlange.user.oftc.net) has joined #ceph
[13:22] * spicewiesel (~spicewies@2a01:4f8:191:316b:dcad:caff:feff:ee19) has left #ceph
[13:22] * spicewiesel (~spicewies@2a01:4f8:191:316b:dcad:caff:feff:ee19) has joined #ceph
[13:34] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[13:34] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Quit: Konversation terminated!)
[13:36] * esammy (~esamuels@host-2-102-68-228.as13285.net) has joined #ceph
[13:36] * esammy (~esamuels@host-2-102-68-228.as13285.net) has left #ceph
[13:38] * andreask (~andreask@h081217068225.dyn.cm.kabsi.at) Quit (Read error: Connection reset by peer)
[13:38] * andreask (~andreask@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[13:38] * ChanServ sets mode +v andreask
[13:44] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[13:50] * humbolt (~elias@62-46-149-101.adsl.highway.telekom.at) Quit (Ping timeout: 480 seconds)
[13:51] <jksM> hi guys... had to take a server down for a reboot... now when I start it up the osd come up for ~2 minutes then flop down, then go back up, down, etc.
[13:52] <jksM> In the logs I see that it starts backfilling on the osd... then after a short while I get "osd.3 reported failed by osd.xxx" for other osds... and then messages like this from the failed osd:
[13:52] <jksM> 10.0.0.1:6801/32163 >> 10.0.0.3:6801/32682 pipe(0x7f9328001ad0 sd=35 :39485 s=2 pgs=306417 cs=3 l=0).fault, initiating reconnect
[13:52] <jksM> thousands of log messages like that
[13:52] <jksM> and then the process starts over
[13:52] <jksM> I assume I have some kind of timeout set too low - any ideas?
[13:54] <nhm_> jksM: is it possible there could be anything preventing the OSDs from talking to each other? firewall? network issues?
[13:56] <nhm_> Also, recovery takes a lot of CPU and Memory... Could your OSDs be getting super bogged down?
[13:57] <jksM> nhm_, I don't think so, no... I have checked all the obvious stuff
[13:57] <jksM> yeah, I think that could be the problem... I tried starting one osd only (out of 4)... but the same thing happened there
[13:57] <jksM> it goes up very quickly... and then after 2-3 minutes it is kicked out again
[13:57] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[13:58] <nhm_> hrm. You could try bumping up the debugging and see if anything shows up
[13:59] <nhm_> beyond the thousands of messages you have now. ;)
[13:59] <nhm_> also watch the machine with top or collectl or something and see if you hit swap or something at that point.
[14:01] <nhm_> how much memory per node do you have (for how many OSDs?)
[14:01] * humbolt (~elias@91-113-46-139.adsl.highway.telekom.at) has joined #ceph
[14:03] <jksM> 12 GB for 4 OSDs
[14:04] <nhm_> ok, that should be fine
[14:04] <jksM> hmm, I'll try upping the debugging - start one osd only and see what happens
[14:04] <nhm_> ok
[14:14] <absynth> nhm_: do you have any idea if scrubbing is safe on 0.56.6?
[14:14] <absynth> or do you still have that high memory consumption thing
[14:15] <nhm_> absynth: don't know, there's been a lot of work on scrubbing recently.
[14:16] <nhm_> absynth: I logged some bugs for unnecessary scrubbing on cluster creation, but beyond that I've mostly been just focused on getting this bobtail vs cuttlefish performance testing done. So much data....
[14:17] <absynth> you need one of those "data nerd" t-shirts
[14:17] <absynth> http://newrelic.com/datanerd
[14:18] <nhm_> hehe
[14:18] <janos> where's the beard??!?
[14:18] <janos> that picture can't be right
[14:19] <nhm_> yes, he clearly hasn't been working hard enough.
[14:19] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Remote host closed the connection)
[14:24] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[14:25] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Remote host closed the connection)
[14:25] <jksM> nhm_, I have set the target transaction size down to 50... seems to work... one osd is up and stable now :)
[14:26] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) has joined #ceph
[14:27] <nhm_> jksM: ah, interesting. Based on Sage's comment on the mailing list?
[14:28] <nhm_> how fast are your CPUs?
[14:28] <jksM> nhm_, haven't read Sage's comment, no? (I haven't written to the mailing list about this)
[14:28] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[14:29] <jksM> nhm_, Xeon E5606, quad-core 2.13 ghz
[14:29] <nhm_> jksM: http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/12292
[14:29] <jksM> ah, okay - yes, it was probably from there I got the idea :-)
[14:29] <jksM> but that thread is back from January... so I had set them back to the default transaction size in the meantime
[14:30] <jksM> but I have upgraded to 0.56.6 in between, so I didn't think it was necessary anymore
[14:31] <nhm_> jksM: if it happens again with target transaction size = 50, I imagine Sage or Sam might want to see the debugging logs.
[14:32] <jksM> okay, I'll have that in mind! - seems to be working now though... just started up the third osd
[14:34] <nhm_> jksM: fyi: http://tracker.ceph.com/projects/ceph/repository/revisions/f47b2e8b607cc0d56a42ec7b1465ce6b8c0ca68c
[14:36] <jksM> oh.. I'll set mine to 30 and keep it there then!
[14:36] <jksM> perhaps some of my troubles come from the fact that I'm running btrfs on kernel 3.7 on this machine
[14:36] <nhm_> jksM: I imagine 50 is fine if you aren't having problems, but it looks like 300 is not optimal.
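
For reference, the setting under discussion goes in the [osd] section of ceph.conf; a sketch with the value jksM settled on:

    [osd]
        osd target transaction size = 30    ; 50 also reported to work; old default was 300
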
[14:37] <jksM> rebooted the machine to add an SSD for the journal... will change to XFS at the same time... hopefully this will give me a performance boost as well
[14:37] <nhm_> tough to say. BTRFS is usually faster at first, but can degrade over time.
[14:38] <nhm_> but yeah, you really don't want to run btrfs on anything 3.8
[14:38] <nhm_> anything prior to 3.8 that is
[14:38] <jksM> hmm, perhaps I should simply upgrade the kernel instead
[14:39] <jksM> it is my understanding that if the machine crashes for some reason, ceph will start up a lot faster on btrfs than on xfs
[14:39] <jksM> because it can just roll back to a known good snapshot
[14:40] <jksM> and I really like that... I have only very few servers in the cluster, so if one crashes and needs hours to start up again, it would probably impact performance quite a lot
[14:44] <nhm_> jksM: well, our official advice is that xfs is the most stable right now, but btrfs is likely the future. :)
[14:45] <nhm_> ext4 is viable too.
[14:57] * eschenal (~eschnou@85.234.217.115.static.edpnet.net) has joined #ceph
[14:58] <jksM> thanks for the advice... I think I'll convert these to xfs :-)
[15:02] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[15:03] <nhm_> jksM: XFS performance has improved a lot with cuttlefish as well.
[15:03] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[15:04] * andreask (~andreask@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[15:04] <wogri> bergerx_ yes
[15:09] <bergerx_> wogri: "yes" for what, did i miss something?
[15:11] <wogri> bergerx_ nope, you didn't. I scrolled up, forgot that I've scrolled up, saw your question some hours ago and answered it. IRSSI accident, sorry :)
[15:11] * jgallard (~jgallard@gw-aql-129.aql.fr) Quit (Remote host closed the connection)
[15:12] * jgallard (~jgallard@gw-aql-129.aql.fr) has joined #ceph
[15:13] * yanzheng (~zhyan@101.82.171.76) has joined #ceph
[15:14] * spekzor (~rens@90-145-135-59.bbserv.nl) has joined #ceph
[15:18] <spekzor> question about snapshots. Is it normal that when i create a snapshot of an rbd image (mapped and mounted) the filesystem upon it (ext4) becomes inaccessible? When i purge the snapshot all becomes normal again. (ubuntu 12.04 lts, cuttlefish latest stable, 1 client, 3 servers each osd and mon, VirtualBox)
[15:19] * tnt (~tnt@212-166-48-236.win.be) Quit (Ping timeout: 480 seconds)
[15:19] <Gugge-47527> No, a snapshot should not affect the "parent" image
[15:19] <wogri_risc> I know that this is not the case with rbd images that are attached through qemu, spekzor.
[15:20] <spekzor> Hmm, this one is mounted through the kernel module on the client.
[15:21] <spekzor> But it is supposed to be like zfs snapshots, right? You can just create them while in use. Ideally you shouldn't have any IO since unflushed buffers can lead to data loss, but that's the same with zfs. Or is the concept of snapshots different from that of zfs?
[15:22] <wogri_risc> spekzor: that's how I understood snapshots in ceph, yes.
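
Not from this conversation, but a common way to get a crash-consistent snapshot of a mapped, mounted image is to freeze the filesystem around the snapshot; all names here are hypothetical:

    fsfreeze -f /mnt/myvol                  # flush and block writes
    rbd snap create mypool/myimage@snap1
    fsfreeze -u /mnt/myvol                  # resume I/O
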
[15:22] <spekzor> hmm
[15:23] <wogri_risc> do you run the latest kernel?
[15:23] <wogri_risc> kernel development usually lags a little behind ceph itself.
[15:23] <spekzor> The problem is consistent across debian and ubuntu
[15:23] <spekzor> let me check
[15:23] <wogri_risc> stock kernels or 3.9-kernels?
[15:23] <spekzor> 3.5.0-30 generic
[15:23] <wogri_risc> the stock kernels are not a good idea to use.
[15:24] <wogri_risc> you should build a new kernel with the latest and greatest ceph version
[15:24] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[15:24] <spekzor> hmm it's been a while since i did that... You mean the whole make menuconfig, make, make install procedure?
[15:24] <wogri_risc> yay!
[15:25] <wogri_risc> or, better alternative, use libvirt
[15:25] <wogri_risc> or the iscsi-userland-ceph-thing, forgot its name
[15:25] <spekzor> kvm/qemu talks directly right, doesn't need the module?
[15:25] <wogri_risc> right
[15:25] <wogri_risc> very good performance, too.
[15:26] <wogri_risc> and the iscsi-userland-daemon also talks directly, no need for the module
[15:26] <wogri_risc> need to go, sorry. bye.
[15:26] <spekzor> thanks a lot!
[15:26] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has left #ceph
[15:36] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (Quit: Leaving.)
[15:37] <loicd> ccourtaut: https://objects.dreamhost.com/inktankcom/DreamCompute%20Architecture%20Blueprint.pdf
[15:37] * rustam (~rustam@90.216.255.245) Quit (Remote host closed the connection)
[15:42] * themgt (~themgt@96-37-28-221.dhcp.gnvl.sc.charter.com) has joined #ceph
[15:46] * jgallard (~jgallard@gw-aql-129.aql.fr) Quit (Remote host closed the connection)
[15:46] * jgallard (~jgallard@gw-aql-129.aql.fr) has joined #ceph
[15:56] * drokita (~drokita@199.255.228.128) has joined #ceph
[16:02] * alrs (~lars@cpe-142-129-65-37.socal.res.rr.com) Quit (Ping timeout: 481 seconds)
[16:02] <spekzor> when using ceph-deploy a ceph.conf gets generated containing the initial quorum members but no information about OSDs. Where is this information stored? And what happens if i simultaneously shut down all monitors and osds? Will the cluster know what's where when everything is powered up again?
[16:06] <spekzor> http://rensreinders.nl/shirt.png YEAH, the fat one is mine...
[16:07] <joao> nice
[16:07] <joao> those don't seem to have the awesome octopus though
[16:08] * rustam (~rustam@90.216.255.245) has joined #ceph
[16:09] <joao> spekzor, btw, where are those being sold?
[16:13] <spekzor> asked the guys at ceph (hello) for a high-res logo and used a do-it-yourself t-shirt webshop
[16:13] * tnt (~tnt@91.177.224.32) has joined #ceph
[16:13] <spekzor> can send you the .ai files if you'd like
[16:13] <joao> naa, that's okay, but thanks for offering :)
[16:13] <joao> was just wondering really
[16:14] <spekzor> or you could get a tattoo :)
[16:15] <spekzor> and wait for the next logo change, then you're the 'old logo guy'.
[16:15] <joao> eh, I'll leave that feat for Pete @ dreamhost :p
[16:16] <spekzor> any clues on my question above about ceph-deploy ?
[16:16] <joao> not really, sorry
[16:16] <spekzor> tnx :)
[16:16] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[16:18] <joao> spekzor, wrt the tattoo: http://www.inktank.com/wp-content/uploads/2013/04/cephfanboy.png
[16:19] * markbby (~Adium@168.94.245.2) has joined #ceph
[16:23] <darkfader> wow
[16:24] * alrs (~lars@209.144.63.76) has joined #ceph
[16:25] <jtang> just ordered me some dell boxes
[16:26] <jtang> looks like we might rollout some more ceph osd's!
[16:35] * markbby1 (~Adium@168.94.245.2) has joined #ceph
[16:35] * markbby (~Adium@168.94.245.2) Quit (Ping timeout: 480 seconds)
[16:38] <jerker> jtang: what boxes?
[16:39] <jtang> a pair of dell r720's
[16:39] <jerker> jtang: i am just talking to dell here in sweden regarding their r415 and r515 models.
[16:39] <jtang> yea i saw that the r515's were known machines for deploying ceph
[16:39] <jerker> jtang: how many drives in each?
[16:40] <jtang> i went with the 720's cause i need to run some VM's and the ceph osd/mon/radosgw
[16:40] <jtang> i plan on getting more machines next year as my project progresses
[16:40] <jtang> jerker: each box has 2x100gb ssd's and 4x4tb sata disks
[16:40] <jtang> i have some space to expand with a few more down the road, but not much
[16:42] <jtang> the plan is to start off small, then migrate the machines to being full-fledged ceph osd's next year when i replace them with other hosts to run vm's
[16:42] <jtang> im not sure if i have 2 or 4 disk slots free, either way, it should be pretty good and powerful for running stuff, they weren't massively expensive
[16:43] <jtang> i could have gotten 2tb or 3tb disks and just filled them
[16:44] * rustam (~rustam@90.216.255.245) Quit (Remote host closed the connection)
[16:44] <jerker> jtang: do they have vendor lock in for the HDD/SSD or do any SATA-drive fit and work?
[16:44] <jtang> jerker: any disks work, but you do need to get the "caddy" from dell
[16:45] <jtang> or if you are good with a welder and metal cutter, you can make your own caddy
[16:45] * zhyan_ (~zhyan@101.83.239.187) has joined #ceph
[16:45] * glowell (~glowell@ip-64-134-236-4.public.wayport.net) has joined #ceph
[16:46] <jerker> jtang: do they deliver free caddies like supermicro does with the empty slots? i hate the caddy lock-in...
[16:46] <jtang> jerker: one of the guys here in the office did that, they got some 4u dell boxes cheap with one disk, then they manufactured their own caddies to not have to pay ~400e per disk
[16:46] <jtang> jerker: not as far as i know, i dont think they do
[16:47] * amb (~amb@82-69-2-201.dsl.in-addr.zen.co.uk) has joined #ceph
[16:48] <jerker> jtang: ok will include that as a requisite when i do my renewed competition on nodes the next time. (Good I hate that but I guess it is good for anti-corruption etc.)
[16:49] <jerker> s/Good/God/
[16:49] <jtang> jerker: they seem good value enough if you get 2-3tb disks
[16:49] <jtang> the 4tb disks are pricey, but im planning ahead to not have balancing problems
[16:50] <jtang> for when i expand
[16:50] <jtang> btw i got mine with a 3yr warranty
[16:50] <jerker> I am more into 4 TB desktop drives for 172 USD/each and SSDs in front with bcache/dm-cache/zfs-log-cache (whatever it will be)
[16:52] * mistur (~yoann@kewl.mistur.org) Quit (Remote host closed the connection)
[16:52] * mistur (~yoann@kewl.mistur.org) has joined #ceph
[16:52] * yanzheng (~zhyan@101.82.171.76) Quit (Ping timeout: 480 seconds)
[16:53] <jerker> jtang: the balancing, you go for 4TB now and keep using 4 TB drives, or how do you mean? can't you just set a different weight on a different size osd?
[16:53] <jtang> jerker: i plan on just getting more 4tb disks next year for my current machines
[16:54] <jtang> i dont need the capacity now so i only half filled the machines
[16:54] <jtang> i could set the weights, but its just easier to not mess around
[16:55] <jtang> i dont have enough staff right now to learn and figure things out, the aim is to go for a pretty simplistic setup
[16:55] <jtang> its what works for me ;)
[16:55] <jtang> at least i hope it will
[16:57] <jtang> btw, we were pretty impressed with the radosgw "emulating" the s3 api
[16:57] * rustam (~rustam@90.216.255.245) has joined #ceph
[16:57] <jtang> plus it wasnt too hard to get going
[16:58] <jtang> well at least on ubuntu anyway, it wasnt pleasant on EL6, but i guess thats down to the lack of users on that platform
[16:59] <amb> Newbie question: If I am only using Ceph for RBD, and using OSD+MON (no MDS), what's the absolute minimum I need to do on each OSD (initially or when a new one is added) to add the OSD's disk in? I don't think I need mkcephfs, and that's deprecated anyway. So do I need to pull ceph-deploy apart?
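
For what it's worth, a hedged sketch of the manual add-an-osd sequence from the docs of that era; ids, devices, weights, and hostnames are made up, and the exact crush subcommand varies by version:

    ceph osd create                              # allocates a new id, say 5
    mkdir -p /var/lib/ceph/osd/ceph-5            # mount the OSD's disk here first
    ceph-osd -i 5 --mkfs --mkkey
    ceph auth add osd.5 osd 'allow *' mon 'allow rwx' \
        -i /var/lib/ceph/osd/ceph-5/keyring
    ceph osd crush add osd.5 1.0 host=myhost     # place it in the crush map
    service ceph start osd.5
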
[17:04] * yehuda_hm (~yehuda@2602:306:330b:1410:baac:6fff:fec5:2aad) Quit (Read error: No route to host)
[17:08] * markbby1 (~Adium@168.94.245.2) Quit (Remote host closed the connection)
[17:13] * glowell (~glowell@ip-64-134-236-4.public.wayport.net) Quit (Quit: Leaving.)
[17:13] * Esmil (esmil@horus.0x90.dk) has joined #ceph
[17:16] * portante (~user@66.187.233.206) has joined #ceph
[17:28] * bergerx_ (~bekir@78.188.204.182) Quit (Quit: Leaving.)
[17:29] * pconnelly (~pconnelly@71-93-233-229.dhcp.mdfd.or.charter.com) has joined #ceph
[17:29] * ScOut3R (~ScOut3R@212.96.47.215) Quit (Ping timeout: 480 seconds)
[17:30] * jgallard (~jgallard@gw-aql-129.aql.fr) Quit (Quit: Leaving)
[17:34] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Quit: Leaving.)
[17:38] * kyle_ (~kyle@216.183.64.10) has joined #ceph
[17:45] * rustam (~rustam@90.216.255.245) Quit (Remote host closed the connection)
[17:49] * senner (~Wildcard@68-113-232-90.dhcp.stpt.wi.charter.com) has joined #ceph
[17:50] * eschnou (~eschnou@85.234.217.115.static.edpnet.net) Quit (Remote host closed the connection)
[17:50] * eschenal (~eschnou@85.234.217.115.static.edpnet.net) Quit (Remote host closed the connection)
[17:52] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:54] * mynameisbruce_ (~mynameisb@tjure.netzquadrat.de) has joined #ceph
[17:54] * mynameisbruce_ (~mynameisb@tjure.netzquadrat.de) Quit (Remote host closed the connection)
[17:55] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[17:55] * ShaunR (~ShaunR@staff.ndchost.com) Quit (Ping timeout: 480 seconds)
[17:59] * ShaunR (~ShaunR@staff.ndchost.com) has joined #ceph
[18:01] * sjustlaptop (~sam@71-83-191-116.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[18:05] <Azrael> paravoid, sagewk: i've been suffering from that debian udev bug too
[18:05] * markbby (~Adium@168.94.245.1) has joined #ceph
[18:06] <Azrael> paravoid, sagewk: to workaround, i modified the chef cookbook to populate /dev/disk/by-partuuid manually. not preferred, but works.
[18:06] <paravoid> sage found the culprit
[18:06] <Azrael> paravoid, sagewk: hopefully the debian udev maintaner does indeed take notice
[18:06] <Azrael> yeah
[18:06] <Azrael> rule ordering
[18:06] <Azrael> whats the fix though; didn't see that
[18:07] <paravoid> the workaround is to reorder
[18:07] <paravoid> either move blkid upper or the partuuid rules to the bottom
[18:07] <paravoid> I believe sage is planning to ship rules.d with ceph to workaround the issue
[18:07] <paravoid> btw, the maintainer replied
[18:07] <paravoid> I am not sure if I want to spend time working on a new stable release
[18:07] <paravoid> considering that there is a big number of bugs to be fixed, but this
[18:07] <paravoid> has already been fixed in the future unstable release (and that will
[18:07] <paravoid> be backported to stable).
[18:07] * Fetch__ (fetch@gimel.cepheid.org) Quit (Read error: Connection reset by peer)
[18:07] * Fetch (fetch@gimel.cepheid.org) has joined #ceph
[18:08] <Azrael> haha
[18:08] <Azrael> yeah that was the gist i got from looking at debian udev
[18:08] <Azrael> didn't even bother filing a bug report
[18:08] <Azrael> knowing there's no way in hell it would be addressed within the next 3-5 years
[18:08] <Azrael> as this is indeed... debian
[18:09] <Azrael> so sage will ship new udev rules
[18:09] <Azrael> ok this is good
[18:09] <Azrael> hopefully that comes with cuttlefish?
[18:09] <Azrael> we are stabilizing on cuttlefish
[18:09] <Azrael> and btw paravoid .. we are referring to /lib/udev/rules.d/95-ceph-osd.rules right?
[18:09] <paravoid> no idea where he'll put them
[18:10] <Azrael> oh ok
[18:10] * spicewiesel (~spicewies@2a01:4f8:191:316b:dcad:caff:feff:ee19) has left #ceph
[18:10] <Azrael> thanks paravoid
[18:10] * ghartz (~ghartz@ill67-1-82-231-212-191.fbx.proxad.net) has joined #ceph
[18:10] <paravoid> no worries
[18:23] <kyle_> hello all, i have a quick question about using the ceph-deploy stuff. I have a RAID10 setup with the OS partitioned off separately. But there is only one disk due to the RAID setup. When using osd prepare the disk needs to be provided. Is this possible with my setup, since i already partitioned off the space i want to use for ceph and there is only one disk?
[18:25] * tkensiski (~tkensiski@173.sub-70-197-15.myvzw.com) has joined #ceph
[18:25] * tkensiski (~tkensiski@173.sub-70-197-15.myvzw.com) has left #ceph
[18:27] * leseb (~Adium@83.167.43.235) Quit (Quit: Leaving.)
[18:28] * yehudasa (~yehudasa@2607:f298:a:607:c1fc:1433:ca04:dc9e) Quit (Remote host closed the connection)
[18:30] * yehudasa (~yehudasa@2607:f298:a:607:fc7b:7397:97da:230e) has joined #ceph
[18:32] * rustam (~rustam@90.216.255.245) has joined #ceph
[18:33] <amb> If I lose a ceph mon, which has previously been part of a cluster, and I rebuild the machine from scratch, is it merely a question of copying over the keyring then doing: ceph-mon --mkfs -i <name> --mon-initial-hosts 'foo,bar,baz' --keyring <initial_keyring> --public-addr <ip>
[18:34] * vizh (~vizh@195.211.238.242) has left #ceph
[18:34] <amb> i.e. will it pull the cluster map from other mon hosts provided they are quorate?
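
A sketch of how this is usually done when the surviving mons are quorate — pull the current monmap and mon keyring from the live cluster, then mkfs the new mon with them; 'foo' stands in for the rebuilt mon's id:

    ceph mon getmap -o /tmp/monmap          # from any host that can reach the quorum
    ceph auth get mon. -o /tmp/mon.keyring
    ceph-mon -i foo --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
    service ceph start mon.foo              # it syncs the rest of its state from the quorum
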
[18:38] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) Quit (Ping timeout: 480 seconds)
[18:42] * dpippenger (~riven@cpe-76-166-221-185.socal.res.rr.com) has joined #ceph
[18:45] <pconnelly> hi Patrick...
[18:45] <scuttlemonkey> hey
[18:45] <scuttlemonkey> so upgraded bobtail -> cuttlefish on MDS but NFS mount hanging?
[18:45] <scuttlemonkey> that right?
[18:45] <pconnelly> yep
[18:46] <scuttlemonkey> oh crud, looks like gregaf took a sick day today
[18:46] * alram (~alram@38.122.20.226) has joined #ceph
[18:48] <scuttlemonkey> lemme see if any of the other MDS rangers are kicking around
[18:51] <pconnelly> ok
[18:51] * yaaic (~yaaic@mobile-166-137-213-193.mycingular.net) has joined #ceph
[18:53] <yaaic> in an object store like ceph how does a directory or bucket work?
[18:57] * sagelap (~sage@2600:1012:b020:8b8e:e837:e7e2:8d3f:d9a0) has joined #ceph
[18:57] <sagelap> pconnelly: nfs client is linux kernel?
[18:58] <sagelap> er, server rather?
[18:58] <pconnelly> yes, ubuntu
[18:58] <sagelap> and a linux kernel ceph mount?
[18:58] <sagelap> can you mount -t debugfs none /sys/kernel/debug
[18:58] <sagelap> and then cat /sys/kernel/debug/ceph/*/mdsc ?
[18:58] <pconnelly> # mount -t debugfs none /sys/kernel/debug
[18:58] <pconnelly> mount: none already mounted or /sys/kernel/debug busy
[18:59] <pconnelly> mount: according to mtab, none is already mounted on /sys/kernel/debug
[18:59] * mhu (~mhu@83.167.43.235) Quit (Remote host closed the connection)
[18:59] <pconnelly> want me to paste it here? there's approx. 30 lines
[18:59] <sagelap> can skip that step then :)
[18:59] <sagelap> fpaste.org or similar
[19:01] <pconnelly> http://ur1.ca/dwih3
[19:01] <sagelap> hmm. what happens if you restart ceph-mds?
[19:02] <pconnelly> nothing, still hung, that's what we used to do on bobtail
[19:02] <pconnelly> when the mount went away
[19:02] * andreask (~andreask@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[19:02] * ChanServ sets mode +v andreask
[19:03] <sagelap> is the mds logging turned up or at the default levels?
[19:04] <pconnelly> probably low...
[19:05] <sagelap> if you put 'debug mds = 20' and 'debug ms = 1' in the [mds] section of your ceph.conf and restart ceph-mds, we'll get a nice big log with the gory details of what went wrong..
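For reference, the suggested change is just two lines in ceph.conf; a minimal excerpt (all other settings elided):

    [mds]
        debug mds = 20
        debug ms = 1

Restarting ceph-mds afterwards makes the daemon log at those levels from startup, which is what captures the lead-up to the hang.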
[19:06] <paravoid> are there any plans to have a snapshot feature for radosgw?
[19:06] <sagelap> unfortunately the usual mds restart isn't sufficient to kick it in this case. restarting the client will obviously get around the issue
[19:07] <pconnelly> what's the command to turn up logging on the fly?
[19:07] <pconnelly> inject something?
[19:07] <sagelap> note that in both cases you may see some ESTALE from some clients.. this is a generic problem with reexporting ceph via nfs that we haven't addressed yet
[19:07] * rustam (~rustam@90.216.255.245) Quit (Remote host closed the connection)
[19:07] * loicd (~loic@magenta.dachary.org) has joined #ceph
[19:07] <pconnelly> when you say restart client, do you mean ceph or reboot?
[19:07] <sagelap> pconnelly: ceph mds tell \* injectargs '--debug-mds 20 --debug-ms 1'
[19:07] <kyle_> is "ceph-deploy osd prepare" needed if i already have the disk, partition and filesystem ready and mounted?
[19:07] <pconnelly> thanks sagelap
[19:07] <sagelap> but in this case that doesn't really help.. we want to see what happened leading up to the hang, not just after
[19:08] <sagelap> paravoid: nothing concrete at this point. the rados stuff has all the pieces to support it, though!
[19:08] <pconnelly> what's the \* for?
[19:08] <sagelap> all mds's, and the \ keeps the shell from expanding the * into a list of filenames in the current directory
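For reference, the backslash only stops the shell from glob-expanding the *; quoting it does the same thing. A minimal sketch of both forms:

    # escape the glob ...
    ceph mds tell \* injectargs '--debug-mds 20 --debug-ms 1'
    # ... or quote it; either way every mds receives the new debug levels
    ceph mds tell '*' injectargs '--debug-mds 20 --debug-ms 1'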
[19:09] <paravoid> sagelap: I noticed, I read a piece where it talked about how applications must support it :)
[19:09] <paravoid> that it's not transparent in that sense
[19:09] <pconnelly> gotcha… I think we only have 1 active mds, the other is a passive backup
[19:09] <sagelap> yeah
[19:09] <paravoid> the crazy idea that I had in a semi-unrelated discussion before was
[19:10] <paravoid> to essentially provide a torrent seeder over our radosgw objects
[19:10] <paravoid> in the mid-term that is
[19:10] <paravoid> possibly using librgw when that happens
[19:10] <paravoid> that's very crazy and abstract so far
[19:10] <paravoid> but for this to happen, you'd need a consistent state, i.e. a snapshot
[19:12] <sagelap> yeah, i think the ticket there is librgw and snapshots
[19:12] <paravoid> the backstory is that we provide dumps to people to fetch all of our files, currently rsync and tarballs
[19:15] <pconnelly> Patrick… so reboot one of the clients?
[19:15] * yaaic (~yaaic@mobile-166-137-213-193.mycingular.net) Quit (Read error: Connection reset by peer)
[19:15] <paravoid> oh, I should resume yesterday's debugging
[19:15] <sagelap> pconnelly: er.. ideally, set those debug options in ceph.conf and restart ceph-mds so we capture a log of the bad behavior
[19:19] <cjh_> has anyone tried s3backer with ceph?
[19:19] <cjh_> paravoid: i like it :)
[19:21] <pconnelly> Patrick?
[19:22] <scuttlemonkey> wha? sry, I need to put a 'bonk' in for patrick
[19:22] <scuttlemonkey> what's up?
[19:23] <loicd> sjust could you please rephrase your question regarding proc_replica_log ?
[19:23] <pconnelly> on the console of the server, it said ceph: mds0 hung
[19:23] <pconnelly> this is the hypervisor running ceph client
[19:23] <pconnelly> have restarted MDS and enabled logging...
[19:23] <pconnelly> am rebooting the client
[19:24] <pconnelly> will be a few minutes before we know if it can connect
[19:24] <scuttlemonkey> k
[19:24] <scuttlemonkey> and logging is turned up?
[19:26] <elder> sage, once again, here's what I'd like to do with the branches--at the moment. (We can modify this after today, but here's what I'm prepared to do.)
[19:26] <elder> Reset the "testing" branch to what I posted as "testing-next" yesterday, d48fa64
[19:27] <elder> Reset the "for-linus" branch to be 638f5ab
[19:27] <elder> That makes the for-linus branch be what you last sent as a pull request plus 5 new commits that are bugs worthy of inclusion in 3.10.
[19:27] <elder> And that makes the testing branch be for-linus plus 3 commits that aren't that critical.
[19:27] <elder> It rebases the testing branch.
[19:27] <elder> OK with you?
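For reference, a minimal git sketch of the proposed resets, assuming the shared remote is called origin (the commit ids are the ones quoted above):

    # point for-linus at the last pull request plus the 5 bug fixes,
    # and testing at yesterday's testing-next
    git branch -f for-linus 638f5ab
    git branch -f testing d48fa64
    # both branches are published and testing is rebased, so the push must be forced
    git push -f origin for-linus testing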
[19:29] <sagelap> elder: sounds good
[19:29] * tkensiski (~tkensiski@209.66.64.134) has joined #ceph
[19:29] <elder> I wasn't clear yesterday about what you meant to do with the "master" branch.
[19:29] <pconnelly> yes, logging is enabled,
[19:29] <elder> But future bug commits can be added to the end of for-linus.
[19:29] <pconnelly> server running client back up, can't connect to mds
[19:29] <sagelap> going forward, i think we can do a master branch parallel to for-linus with reviewed and stable stuff, and make testing merge them together but be frequently rebased as we add reviewed-by: etc.
[19:29] * tkensiski (~tkensiski@209.66.64.134) has left #ceph
[19:29] <elder> About to implement that proposal, thanks sagelap.
[19:30] <sagelap> then it's a conscious decision whether each patch is for-linus (fix) or for the next window.
[19:30] <sagelap> and we hopefully will have less trolling through old patches looking for stuff to send upstream for -rc's
[19:30] * glowell (~glowell@38.122.20.226) has joined #ceph
[19:31] <elder> Except for-linus needs to go to both, right?
[19:31] <elder> That is, bugs need to go to both master and for-linus
[19:31] * zhyan_ (~zhyan@101.83.239.187) Quit (Remote host closed the connection)
[19:31] <sagelap> only if we intend to run a kernel that has only master
[19:31] <sagelap> we can do all the testing on a branch that merges both together
[19:31] <elder> So master will lack certain critical bug fixes?
[19:32] <sagelap> maybe calling it for-next instead of master would be more clear
[19:32] <elder> That would make more sense to me I guess. I don't think duplicate commits are that big a deal, as long as they're not that common.
[19:32] <sagelap> that's my idea du jour.. i'm not sure there is a totally satisfying arrangement :)
[19:33] <elder> We can check back in another jour or two to see if you've changed your mind :)
[19:33] * jjgalvez (~jjgalvez@cpe-76-175-30-67.socal.res.rr.com) has joined #ceph
[19:33] <sagelap> hehe
[19:33] * davidzlap (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[19:34] <sagelap> pconnelly: i think you just need to reboot the client. but if you've generated that log, please send it our way so we can take a look
[19:35] <pconnelly> I rebooted client, no luck
[19:35] <sagelap> i mean the ceph client / nfs server ...
[19:35] <pconnelly> 40+ HV's running bobtail are hung, and 1 running cuttlefish, MDS seems dead
[19:35] <pconnelly> running but not responding...
[19:35] <pconnelly> where do you want logs?
[19:35] <pconnelly> fpaste.org?
[19:36] * Tamil (~tamil@38.122.20.226) has joined #ceph
[19:37] <sagelap> the sftp account i just /msg'd to you probably, since they're presumably big
[19:37] <sagelap> along with another cat of /sys/kernel/debug/ceph/*/mdsc that matches hung requests after the logged ceph-mds restart
[19:38] <kyle_> when i try to start a monitor on a fresh cluster i see "Starting ceph-create-keys on ceph-mon0..." which just hangs. On the monitor itself i can see that "/usr/bin/python /usr/sbin/ceph-create-keys -i 0" is also just hanging. In the log it says "accepter.accepter.bind unable to bind to 10.0.0.80:6789: Address already in use" which is the IP of the monitor on which this is happening. can anyone tell me please what I'm doing wrong?
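For reference, that bind error means something already holds 10.0.0.80:6789, almost always a ceph-mon left over from an earlier attempt; a minimal check, assuming standard tools on the mon host:

    # see which process owns the monitor port
    netstat -tlnp | grep 6789
    # look for a stray monitor daemon
    ps aux | grep '[c]eph-mon'
    # if one is running, stop it before retrying (mon id 0 per the -i 0 above)
    service ceph stop mon.0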
[19:39] * sjustlaptop (~sam@38.122.20.226) has joined #ceph
[19:39] <madkiss> oh, hello sagelap
[19:40] * loicd (~loic@magenta.dachary.org) Quit (Ping timeout: 480 seconds)
[19:41] <sagelap> hi!
[19:42] <kyle_> also accepter is spelled wrong
[19:43] * kyle_ (~kyle@216.183.64.10) Quit (Quit: Leaving)
[19:44] * kyle_ (~kyle@216.183.64.10) has joined #ceph
[19:45] * kyle_ is now known as kmekil
[19:46] * andreask (~andreask@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[19:47] * sjustlaptop (~sam@38.122.20.226) Quit (Ping timeout: 480 seconds)
[19:48] * markbby (~Adium@168.94.245.1) Quit (Remote host closed the connection)
[19:48] * markbby (~Adium@168.94.245.1) has joined #ceph
[19:50] * TiCPU__ (~jeromepou@190-130.cgocable.ca) Quit (Ping timeout: 480 seconds)
[19:51] <elder> sagelap, the testing and for-linus branches are updated. I ended up doing them twice because I forgot to insert the "Cc: stable@" lines, but they're there now.
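For reference, those lines are ordinary commit-message trailers, which is why adding them meant rewriting the affected commits; a minimal example of where they sit (the sign-off name and address here are placeholders):

    Cc: stable@vger.kernel.org
    Signed-off-by: Developer Name <dev@example.com>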
[19:57] * LeaChim (~LeaChim@176.250.188.136) Quit (Read error: Operation timed out)
[19:59] * TiCPU__ (~jeromepou@190-130.cgocable.ca) has joined #ceph
[20:00] <joao> sagelap, whenever you have the time, https://github.com/ceph/ceph/pull/299
[20:00] <joao> gregaf, ^
[20:01] <joao> don't merge it though; I really need another set of eyes on that patch to make sure it does indeed make sense, not just in my head
[20:01] <joao> I have to run, but will be available on email
[20:01] <joao> well, bbl
[20:01] <joao> o/
[20:02] <sagelap> joao: just clearing the pending values seems much simpler...
[20:08] <joao> are we willing to lose those pending values?
[20:08] <joao> if so, then it's trivial
[20:08] <sagelap> yeah
[20:08] * LeaChim (~LeaChim@176.250.188.136) has joined #ceph
[20:08] <joao> well, then that patch is overkill
[20:09] <sagelap> i think the main thing is that the requests get completed with EAGAIN.. i think that is all already there tho
[20:09] <joao> yeah, but that's only if the requests are queued
[20:09] <joao> if we delay the proposal during _dispatch() they don't get queued
[20:10] <joao> we just backoff the proposal; if the pending value is lost, those changes will be lost
[20:10] <sagelap> it's fine to lose the pending if the request gets retried, just like it does when we are midway through paxos and an election happens
[20:10] * loicd (~loic@magenta.dachary.org) has joined #ceph
[20:13] <joao> right
[20:13] <joao> well
[20:13] <joao> I'll look into it again tomorrow
[20:13] <joao> c ya
[20:13] <sagelap> sounds good, ttyl!
[20:18] * alrs (~lars@209.144.63.76) Quit (Ping timeout: 480 seconds)
[20:20] * sagelap (~sage@2600:1012:b020:8b8e:e837:e7e2:8d3f:d9a0) Quit (Quit: Leaving.)
[20:28] * sagelap (~sage@2600:1012:b020:8b8e:9d3b:a069:66e9:a44d) has joined #ceph
[20:29] * portante (~user@66.187.233.206) Quit (Quit: upgrading)
[20:31] * coyo (~unf@71.21.193.106) has joined #ceph
[20:41] * eschnou (~eschnou@249.73-201-80.adsl-dyn.isp.belgacom.be) has joined #ceph
[20:41] * rustam (~rustam@90.216.255.245) has joined #ceph
[20:46] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[20:46] * loicd (~loic@magenta.dachary.org) has joined #ceph
[20:51] * Tamil (~tamil@38.122.20.226) Quit (Quit: Leaving.)
[20:52] * sagelap (~sage@2600:1012:b020:8b8e:9d3b:a069:66e9:a44d) Quit (Ping timeout: 480 seconds)
[20:56] * Tamil (~tamil@38.122.20.226) has joined #ceph
[21:00] * rustam (~rustam@90.216.255.245) Quit (Remote host closed the connection)
[21:08] * rustam (~rustam@90.216.255.245) has joined #ceph
[21:12] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) has joined #ceph
[21:19] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[21:19] * loicd (~loic@magenta.dachary.org) has joined #ceph
[21:24] * ShaunR (~ShaunR@staff.ndchost.com) Quit ()
[21:27] * portante (~user@66.187.233.206) has joined #ceph
[21:27] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[21:34] * rustam (~rustam@90.216.255.245) Quit (Remote host closed the connection)
[21:36] * TiCPU__ (~jeromepou@190-130.cgocable.ca) Quit (Remote host closed the connection)
[21:45] * andreask (~andreask@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[21:45] * ChanServ sets mode +v andreask
[21:46] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) Quit (Quit: Leaving.)
[22:06] * sagelap (~sage@2600:1010:b000:ae47:9d3b:a069:66e9:a44d) has joined #ceph
[22:11] * senner (~Wildcard@68-113-232-90.dhcp.stpt.wi.charter.com) Quit (Quit: Leaving.)
[22:14] * rturk-away is now known as rturk
[22:17] * alrs (~lars@cpe-142-129-65-37.socal.res.rr.com) has joined #ceph
[22:19] * Wolff_John (~jwolff@vpn.monarch-beverage.com) has joined #ceph
[22:21] * pconnelly (~pconnelly@71-93-233-229.dhcp.mdfd.or.charter.com) Quit (Quit: pconnelly)
[22:21] * rturk is now known as rturk-away
[22:24] * buck (~buck@bender.soe.ucsc.edu) has joined #ceph
[22:25] * danieagle (~Daniel@177.99.135.75) has joined #ceph
[22:28] * themgt (~themgt@96-37-28-221.dhcp.gnvl.sc.charter.com) Quit (Quit: themgt)
[22:29] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[22:36] * Wolff_John (~jwolff@vpn.monarch-beverage.com) Quit (Quit: ChatZilla 0.9.90 [Firefox 20.0.1/20130409194949])
[22:39] * danieagle (~Daniel@177.99.135.75) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[22:48] * nhm (~nhm@174-20-107-121.mpls.qwest.net) has joined #ceph
[22:54] * nhm_ (~nhm@184-97-193-157.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[22:56] * rustam (~rustam@90.216.255.245) has joined #ceph
[23:00] * The_Bishop_ (~bishop@e179011252.adsl.alicedsl.de) has joined #ceph
[23:06] * The_Bishop (~bishop@e179004062.adsl.alicedsl.de) Quit (Read error: Operation timed out)
[23:07] * themgt (~themgt@24-177-232-33.dhcp.gnvl.sc.charter.com) has joined #ceph
[23:08] * rturk-away is now known as rturk
[23:09] <loicd> ccourtaut: still around ?
[23:10] <loicd> I'm 81% done recompiling / fixing ReplicatedPG in the context of http://tracker.ceph.com/issues/5046 . I keep my fingers crossed, no surprises so far ;-)
[23:11] * Tamil (~tamil@38.122.20.226) Quit (Quit: Leaving.)
[23:15] * sjustlaptop (~sam@38.122.20.226) has joined #ceph
[23:19] * rturk is now known as rturk-away
[23:26] * sagelap (~sage@2600:1010:b000:ae47:9d3b:a069:66e9:a44d) Quit (Ping timeout: 480 seconds)
[23:31] * drokita1 (~drokita@199.255.228.128) has joined #ceph
[23:32] * Tamil (~tamil@38.122.20.226) has joined #ceph
[23:33] <loicd> 86% recompiling / fixing ReplicatedPG
[23:33] * eschnou (~eschnou@249.73-201-80.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[23:34] <kmekil> can someone please tell me if, when using ceph-deploy, the "osd prepare" step is mandatory?
[23:35] * drokita (~drokita@199.255.228.128) Quit (Ping timeout: 480 seconds)
[23:36] <kmekil> I ask because i only have one disk (RAID10) so i'm not sure how to run "osd prepare" if i have to have the disk mounted for the OS and the /var/lib/ceph.... directory.
[23:38] <elder> Let me see if I can help, kmekil. I don't know the answer off hand but I'll try to get you one.
[23:38] <kmekil> okay thanks a ton
[23:39] <elder> If I understand you right... You want to run ceph, but you only have your system disk available for ceph storage?
[23:39] <kmekil> yes i have a six disk raid 10 array with the OS partitioned off
[23:40] * drokita1 (~drokita@199.255.228.128) Quit (Ping timeout: 482 seconds)
[23:40] <elder> Do you have other partitions with free space on them that you intend to use for ceph?
[23:41] <kmekil> i would like to have my data nodes this way for maximum performance while still having some disk redundancy. my tests show RAID10 with six disks performs better than, say, having the OS on two of the disks and raid 5 or 10 for the other four
[23:41] <kmekil> yes
[23:41] <kmekil> the problem is that i need that space to be mounted at /var/lib/ceph
[23:41] <dmick> "disk" in a lot of the ceph-deploy documentation really means "blockdev (whole disk or part) or path"
[23:41] <dmick> it's not made very clear in the docs
[23:42] <dmick> ceph-deploy will mount devs at /var/lib/ceph, or you can use a path wherever you like.
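For reference, a minimal sketch of the path form, assuming a node called node1 with the prepared filesystem mounted at /var/local/osd0:

    # prepare and activate an OSD backed by an existing directory rather than a raw disk
    ceph-deploy osd prepare node1:/var/local/osd0
    ceph-deploy osd activate node1:/var/local/osd0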
[23:43] <kmekil> hmm okay i'll try it from the start again without anything but the OS mounted. it seemed like it errored out the first time if /var/lib/ceph was not already in existence
[23:45] <loicd> 88% recompiling / fixing ReplicatedPG and I'm out for tonight. Have a nice week-end everyone :-)
[23:45] * rustam (~rustam@90.216.255.245) Quit (Remote host closed the connection)
[23:46] <dmick> /var/lib/ceph is installed by the packages, and should exist
[23:47] <dmick> but things in it may be symlinks to other locations
[23:53] <loicd> 95% recompiling / fixing ReplicatedPG ... could not resist, this is addictive.
[23:53] <elder> What does it say now?
[23:55] <loicd> 100% recompiling / fixing ReplicatedPG :-) I'm out for good now.
[23:56] <elder> Good for you! Time for a weekend.

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.