#ceph IRC Log

IRC Log for 2013-08-27

Timestamps are in GMT/BST.

[0:00] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[0:07] <sagewk> sjust: pushed new wip-6036... now its 'copy_get' with an explicit cursor
[0:07] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[0:07] <sjust> cool, will look in a bit
[0:07] <sagewk> also fixed the throttling; was a good cleanup anyway
[0:07] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[0:10] * dmsimard (~Adium@108.163.152.2) Quit (Quit: Leaving.)
[0:13] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[0:14] * ScOut3R_ (~scout3r@54026B73.dsl.pool.telekom.hu) has joined #ceph
[0:14] * ScOut3R (~scout3r@54026B73.dsl.pool.telekom.hu) Quit (Read error: Connection reset by peer)
[0:15] <MACscr> is it me or is Ceph less ram hungry than ZFS?
[0:15] * diegows (~diegows@host63.186-108-72.telecom.net.ar) Quit (Ping timeout: 480 seconds)
[0:16] * ishkabob (~c7a82cc0@webuser.thegrebs.com) Quit (Quit: TheGrebs.com CGI:IRC)
[0:17] <Kioob> MACscr: ZFS is hungry to handle dedup, no ?
[0:17] * Pauline (~middelink@2001:838:3c1:1:be5f:f4ff:fe58:e04) Quit (Ping timeout: 480 seconds)
[0:18] <MACscr> if you use that, but its also hungry just for caching
[0:19] * grepory (~Adium@50-115-70-146.static-ip.telepacific.net) Quit (Quit: Leaving.)
[0:21] * TiCPU (~jeromepou@190-130.cgocable.ca) has joined #ceph
[0:22] <TiCPU> I just added infiniband connections to my cluster and was wondering if it is possible for all mons to communicate with each other over this connection instead of ethernet, while keeping external resources on ethernet?
[0:22] * Pauline (~middelink@2001:838:3c1:1:be5f:f4ff:fe58:e04) has joined #ceph
[0:22] <TiCPU> to make sure localhost uses infiniband I had to define public/cluster network to be the same (infiniband)
[0:23] <sjust> gregaf: can you separate mon-mon from mon-other ports?
[0:23] <gregaf> umm, nope, sorry
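For context, a minimal ceph.conf sketch of the public/cluster network split TiCPU describes above (the subnets are placeholders, not from this log; the cluster network only carries OSD replication traffic, while monitors always bind on the public network, which matches gregaf's answer):

    [global]
        public network  = 10.0.0.0/24      # client- and mon-facing traffic (ethernet)
        cluster network = 192.168.0.0/24   # OSD-to-OSD replication traffic (IPoIB)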
[0:26] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[0:26] <MACscr> TiCPU: cant you setup your IB to use Ethernet?
[0:26] <MACscr> im pretty sure its possible
[0:27] <TiCPU> you mean, using routing and make ceph forget about the ethernet network?
[0:27] * grepory (~Adium@50-115-70-146.static-ip.telepacific.net) has joined #ceph
[0:28] <MACscr> well more just setting up different subnets and setting up IB to use ethernet. Thus its just like routing any other interfaces
[0:29] * BillK (~BillK-OFT@220-253-162-118.dyn.iinet.net.au) has joined #ceph
[0:32] <TiCPU> it seems changing the mon address is not enough, it still tries the old address
[0:34] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[0:36] <itatar> hello, I'd like to give ceph a try. I followed the preflight checklist (http://ceph.com/docs/master/start/quick-start-preflight/) and then 'storage cluster quick start' (http://ceph.com/docs/master/start/quick-ceph-deploy/) but it is not clear how to 1. make sure the cluster is deployed correctly and is running 2. proceed from here. What's the quickest way to write/read data from it? thank you
[0:37] * tnt (~tnt@91.177.230.140) Quit (Ping timeout: 480 seconds)
[0:38] <TiCPU> itatar, what do you want to write? block devices, filesystem, objects?
[0:38] <itatar> objects (I'd play with fs later as well)
[0:38] <itatar> or I can start with the file system if that is better to start with
[0:39] <TiCPU> I'd say filesystem/block device are easier to start with
[0:39] <itatar> would be nice to use an S3 client to interface with the cluster
[0:39] <itatar> ok
[0:40] <loicd> gregaf: are you around ? I would like a second opinion on http://tracker.ceph.com/issues/6117#note-4
[0:40] <TiCPU> just looked at the guide, I think it was way easier for a *fast* startup to use mkcephfs
[0:40] * madkiss (~madkiss@tmo-096-227.customers.d1-online.com) Quit (Read error: Operation timed out)
[0:40] <TiCPU> itatar, were you able to issue ceph -s with success?
[0:42] * doxavore (~doug@99-7-52-88.lightspeed.rcsntx.sbcglobal.net) Quit (Quit: :qa!)
[0:42] <gregaf> loicd: I haven't looked at that, what are the relevant parts beside the locking order?
[0:42] <itatar> that wasn't part of the two pages I referenced above so I didn't try that yet. Should I move on to 'cephfs quick start' page then?
[0:43] <loicd> gregaf: I'm trying to establish a possible scenario for the stack trace to happen.
[0:44] <gregaf> right, but what are the threads A,B,C you're referring to?
[0:45] <itatar> ceph -s fails. How do I "Ensure that the Ceph Storage Cluster is running and in an active + clean state"?
[0:45] <TiCPU> itatar, that is what ceph -s is for
[0:45] <itatar> :) ah ok
[0:45] <TiCPU> if that fails, ceph monitor is not running
[0:46] <TiCPU> does ps aux|grep ceph show any ceph processes? is your /var/lib/ceph/ directory (or equivalent) populated?
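A sketch of the basic checks TiCPU is walking through (paths assume a default install):

    ceph -s                                   # cluster status; fails if no monitor is reachable
    ps aux | grep ceph                        # look for ceph-mon / ceph-osd / ceph-mds processes
    ls /var/lib/ceph/mon /var/lib/ceph/osd    # daemon data directories should be populated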
[0:47] * gaveen (~gaveen@175.157.139.239) Quit (Remote host closed the connection)
[0:49] <itatar> it shows a lot of ceph processes but /var/lib/ceph is not there.. hm.. it was there when I followed the quick start page.. strange
[0:49] <gregaf> loicd: I don't think pthread_mutex_lock() can return EBUSY
[0:49] <gregaf> http://linux.die.net/man/3/pthread_mutex_trylock
[0:50] <TiCPU> itatar, what process? ceph-osd, ceph-mon, ceph-mds, ceph, python create-keys-something?
[0:50] <loicd> right, I should have written r != 0 because I really don't know ( that's not in the stack trace ) what error was returned
[0:50] <loicd> gregaf: ^
[0:50] <gregaf> I doubt the issue is with our handling of the pthreads atomicity stuff; it's probably a bad pointer deref or something :)
[0:52] <gregaf> the only error conditions pthread_mutex_lock() can return are if you've done something *very* naughty; probably one of them is what's been violated but it doesn't involve racing with other threads or anything that I can see?
[0:52] <gregaf> in particular that trylock/lock thing can't race as lock is blocking; the trylock is just to avoid doing perfcounters overhead on non-blocking mutex locks
[0:56] <itatar> TiCPU, sorry, misread the ps output earlier. the grep showed what the 'ceph' user was running. there are actually no ceph processes on the admin node. on the server node there is only: /usr/bin/ceph-mds --cluster=ceph -i cephserver -f
[0:57] <loicd> gregaf: thanks, I understand now, you're right. That's going to be a tough one :-)
[0:57] <TiCPU> well, the mds is the last service to start and is useless on its own. I'm really not comfortable with ceph-deploy :/
[0:57] <gregaf> loicd: that assert usually means we're looking at some deallocated memory for some reason, or another similar issue
[0:58] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:59] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[1:00] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:01] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[1:02] <itatar> /var/lib/ceph exists on the server (not admin). I think that's right
[1:02] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[1:09] <MACscr> ok, so does storage traffic go directly from the ceph-osd's or does it go through ceph-mon? I only have 10GbE links from my switch to my ceph-osd servers
[1:10] <sjust> MACscr: storage traffic goes directly to the osds
[1:10] <MACscr> great!
[1:10] <MACscr> thanks
[1:12] * AfC (~andrew@2407:7800:200:1011:f946:9508:d31e:c9fa) has joined #ceph
[1:12] * mtanski (~mtanski@69.193.178.202) Quit (Ping timeout: 480 seconds)
[1:16] <TiCPU> itatar, yes, this is right, it should contain mon/mds/osd depending on the node's roles, and mon should be populated
[1:23] * devoid (~devoid@130.202.135.234) Quit (Quit: Leaving.)
[1:25] <itatar> TiCPU, could you walk me through what I should verify on my admin node and on my server node after following the quick start?
[1:26] <gregaf> loicd: hmm, the unittest_sharedptr_registry is failing intermittently; made issue 6130 for you :)
[1:26] <kraken> gregaf might be talking about: http://tracker.ceph.com/issues/6130 [SharedPtrRegistry_all.wait_lookup_or_create fails inconsistently]
[1:28] <sagewk> loicd: if we're lucky that is a dup of bug 6117
[1:28] <kraken> sagewk might be talking about: http://tracker.ceph.com/issues/6117 [osd: bad mutex assert in ReplicatedPG::context_registry_on_change()]
[1:28] <gregaf> oh, didn't put that possibility together, but it's not spamming any assert failures so I don't think so?
[1:34] <itatar> when ceph stores objects onto disks, does it treat the disks as block devices or does it operate on top of a linux file system?
[1:35] <dmick> the OSDs sit on top of a local filesystem
[1:39] <itatar> thanks dmick
[1:40] <dmick> the journal can use a raw block device, but that's the only exception
[1:41] <itatar> do OSDs care what file system they are on top of?
[1:43] <lurbs> itatar: Yep: http://ceph.com/docs/master/rados/configuration/filesystem-recommendations/#filesystems
[1:43] <MACscr> the journal device is on the ceph-mon, right? Sorry for all the dumb questions. I havent looked at it in over a month and am just now starting to move forward with it
[1:43] <MACscr> sry, its on the ceph-osd
[1:44] * indeed (~indeed@206.124.126.33) has joined #ceph
[1:45] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[1:50] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[1:51] <MACscr> ok, this is my design concept for Ceph and Openstack http://content.screencast.com/users/MACscr/folders/Snagit/media/84cce9e8-1564-486e-94f9-1d021fed69d7/2013-08-26_18-51-00.png
[1:52] <MACscr> since im going to have 12 osd's on each of the two nodes, should i even be using SSD for cache?
[1:52] <MACscr> er, for the journals
[1:53] <Gugge-47527> i would use more than one ssd for journal
[1:53] <Gugge-47527> would hate to lose 12 osd's because a single ssd dies
[1:53] <Gugge-47527> and i would use more than two storage nodes
[1:53] <MACscr> why?
[1:53] <kraken> why is amazon so popular? (yanfali on 08/15/2013 05:26PM)
[1:54] <MACscr> i understand the need for more than one ssd
[1:54] <MACscr> but i dont get the need to have more than 2 storage nodes
[1:54] <MACscr> i will have 3 monitoring nodes
[1:54] <Gugge-47527> MACscr: because with two storage nodes a high percentage of my data is unprotected when one machine dies
[1:55] <sagewk> gregaf: http://ceph.com/docs/wip-6036/dev/cache-pool/
[1:55] <MACscr> Gugge-47527: how so? thats full redundancy
[1:56] <Gugge-47527> MACscr: when one machine dies 100% of your data is unprotected
[1:56] <Gugge-47527> you only have one copy online
[1:56] <MACscr> lol, well of course
[1:57] <Gugge-47527> and it cant fix itself
[1:57] <sagewk> gregaf: make that https://github.com/ceph/ceph/blob/wip-6036/doc/dev/cache-pool.rst
[1:57] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[1:58] <MACscr> cant fix itself? when the second node is "fixed", it wouldnt be able to "sync" the two systems again?
[1:58] <MACscr> thats pretty poor design if thats the case
[1:58] <Gugge-47527> MACscr: you will wake up to a degraded cluster if one machine dies
[1:58] <MACscr> Guest3115: thats a given
[1:59] <Gugge-47527> if you have 10 machines, you will wake up to a healthy cluster
[1:59] <Gugge-47527> because it just replicated the degraded data to other machines
[2:00] <MACscr> right, but im not going to store 3 copies of everything, only 2, so im not going to spend money for 3 x needed useable storage. Plus as you can see, the environment is pretty small. only 12 total systems.
[2:01] <Gugge-47527> who said 3 copies?
[2:01] <MACscr> isnt that the default replication?
[2:02] <Gugge-47527> no
[2:03] * dmsimard (~Adium@108.163.152.66) has joined #ceph
[2:04] <MACscr> but anyway, i cant afford right now to buy additional storage servers, so its a risk i am going to have to take. Why did you say it couldnt 'fix' itself?
[2:05] <Gugge-47527> because there is nowhere to replicate the second copy when there is only one machine
[2:06] <MACscr> understand, but once the second node is back up, it should sync back up, but will be degraded during that time. Correct?
[2:06] * ScOut3R_ (~scout3r@54026B73.dsl.pool.telekom.hu) Quit (Remote host closed the connection)
[2:06] <itatar> Can Ceph work in a hardware architecture where disks are shared between hosts? Let's say we have a system with two hosts that are physically connected to a disk, so the disk can be accessed by each host. Can ceph take advantage of such an architecture in any way? for example increased access speed by accessing it in parallel from two OSDs that are running on the two hosts? and/or seamlessly tolerate one of the hosts going down?
[2:06] <Gugge-47527> yes. but that is not fixing itself, that requires you to wake up and do something :)
[2:07] <Gugge-47527> itatar: no
[2:08] <MACscr> itatar: like a DAS with two hosts?
[2:10] <sagewk> gregaf: can you open up a pull req for that branch?
[2:10] <itatar> MACscr: yes, the disk (or set of disks) is connected to two hosts via sata
[2:10] <itatar> or some other system bus
[2:11] <sagewk> also, github.com/github/hub cli ftw
[2:11] <sagewk> hub pull-request -b ceph:master -h ceph:wip-mybranch
[2:11] <gregaf> sagewk: I'd rather nobody get tempted to click merge until I can back in the extra commits I added at the end, but if you promise not to I guess I can :)
[2:11] <sagewk> prefix it with DNM?
[2:12] <gregaf> then what's the point of the pull request?
[2:12] <gregaf> *confused*
[2:12] <sagewk> to capture comments/review
[2:12] * sherry (~sherry@wireless-nat-10.auckland.ac.nz) has joined #ceph
[2:12] <gregaf> I doubt you'll have anything new, but okay
[2:12] * smiley (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) Quit (Quit: smiley)
[2:13] <sherry> hi, https://github.com/ceph/ceph mentions that I need to install "dot", but I think it is part of graphviz! is that right?!
[2:13] <sagewk> gregaf: i just have 1 comment... hence my request :)
[2:14] <gregaf> heh
[2:14] <gregaf> https://github.com/ceph/ceph/pull/543
[2:14] * alfredo|afk (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) Quit (Remote host closed the connection)
[2:14] <MACscr> itatar: yeah, thats not a good use case for ceph. The only way they would be useful is if you split up the DAS into two different arrays and then shared one array with each host and then the hosts themselves were the actual ceph-osd storage nodes. So pretty much nothing else running on them
[2:14] <gregaf> and apparently it won't auto-merge so nobody will click that button anyway :(
[2:14] <MACscr> at least thats my theory with my limited knowledge
[2:16] <lurbs> sherry: In Ubuntu/Debian land dot is part of the graphviz package, yes.
[2:16] <sherry> lurbs: thanks
[2:18] <itatar> thanks MACcsr
[2:18] <MACscr> why are there two stable versions? Which one should i be starting out with?
[2:19] <janos> time to go from bobtail to dumpling. wish me luck
[2:19] <janos> crap, looking for the sequencing doc again
[2:20] * KindTwo (~KindOne@h20.56.186.173.dynamic.ip.windstream.net) has joined #ceph
[2:21] * kraken (~kraken@c-24-131-46-23.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[2:21] <lurbs> MACscr: For a new cluster, I'd use the 0.67.x release (Dumpling). The 'old' stable Cuttlefish still gets patches (which is why it's up to .8) but lacks recent features.
[2:21] <lurbs> http://ceph.com/docs/master/release-notes/#v0-67-dumpling
[2:21] * grepory (~Adium@50-115-70-146.static-ip.telepacific.net) Quit (Quit: Leaving.)
[2:22] <MACscr> doesnt dumpling have some big issues with cpu usage?
[2:22] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[2:22] * KindTwo is now known as KindOne
[2:22] <janos> thanks lurbs
[2:23] <lurbs> MACscr: Not sure about CPU usage. I haven't noticed it, but that doesn't mean it doesn't exist.
[2:24] <MACscr> it just appears to be a trend ive seen on the mailing list
[2:24] * silversurfer (~jeandanie@124x35x46x15.ap124.ftth.ucom.ne.jp) has joined #ceph
[2:24] * Pauline (~middelink@2001:838:3c1:1:be5f:f4ff:fe58:e04) Quit (Ping timeout: 480 seconds)
[2:26] * alram (~alram@38.122.20.226) Quit (Ping timeout: 480 seconds)
[2:33] <TiCPU> is there any reason why rados bench would give "0" bandwidth writing speed sometimes, I see hard disk 100% in use, journal 0% after only 4GB write (journal 10GB)
[2:33] * Pauline (~middelink@2001:838:3c1:1:be5f:f4ff:fe58:e04) has joined #ceph
[2:34] * yehudasa_ (~yehudasa@2602:306:330b:1410:ea03:9aff:fe98:e8ff) has joined #ceph
[2:36] * dmsimard (~Adium@108.163.152.66) Quit (Ping timeout: 480 seconds)
[2:36] <MACscr> hmm, i wonder how much speed i would lose by not using SSD's for journals. I guess i need to test that first before i decide if im going to use them or not
[2:36] <MACscr> they seem very touchy though in comparison to how zfs handles them
[2:36] <janos> when i restart the MON's after an upgrade, should they take a long time to come back up?
[2:36] <MACscr> zfs can failover to disk when the ssds arent available
[2:39] * yy-nm (~Thunderbi@122.233.46.4) has joined #ceph
[2:39] * sagelap1 (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) Quit (Ping timeout: 480 seconds)
[2:39] <janos> i'm seeing "failed: 'ulimit -n 32768; /usr/bin/ceph-mon" on restarted mons
[2:39] <janos> does anyone have any idea what this is?
[2:40] * xarses (~andreww@204.11.231.50.static.etheric.net) Quit (Ping timeout: 480 seconds)
[2:46] * sagelap (~sage@2600:1012:b024:c34b:10a4:efbc:77f3:da42) has joined #ceph
[2:47] * alphe (~alphe@0001ac6f.user.oftc.net) has joined #ceph
[2:47] <janos> well, my cluster seems to be boned
[2:47] <alphe> hello crowd !
[2:47] <janos> not sure where to start
[2:47] <janos> anyone available to help?
[2:47] <janos> just did a bobtail--> dumpling .67.2
[2:48] <janos> have only restarted the mons
[2:48] <alphe> after some fighting I succeeded in connecting an s3 client for windows through the SSL layer of my 100-continue special custom apache !
[2:48] * jaydee (~jeandanie@124x35x46x13.ap124.ftth.ucom.ne.jp) has joined #ceph
[2:49] <janos> i just get faults when trying to do things like 'ceph mon stat'
[2:50] <janos> is there some way for me to sorta restart the conversion process?
[2:51] * yanzheng (~zhyan@134.134.139.74) has joined #ceph
[2:52] <alphe> hi janos hum ...
[2:52] <joshd> janos: I think the conversions can take a while, but you may be able to use the admin socket to see that the mon is at least running correctly
[2:52] <janos> i don't appear to have any running correctly
[2:52] <gregaf> I assume this is just the change to the monitor command protocol; don't you need to update the ceph tool?
[2:52] <janos> but honestly i don't know how to tell
[2:52] <alphe> janos I think you should reinstall completely if you can ... the ceph stack should not affect the "disks"
[2:53] <janos> i did a full ceph update
[2:53] <janos> from the repo
[2:53] <janos> yum update ceph - which dragged in many files
[2:53] <alphe> so now your cluster is not configured ...
[2:53] <janos> hrm, what does that mean exactly?
[2:54] <janos> the ceph.conf is there, the /var/lib/ceph/* is there
[2:54] <alphe> or more to say your ceph apps will read and load deprecated config files / keys etc so coredump like crazy
[2:54] <janos> did the ceph.conf format change?
[2:54] <janos> i don't recall that
[2:54] * silversurfer (~jeandanie@124x35x46x15.ap124.ftth.ucom.ne.jp) Quit (Ping timeout: 480 seconds)
[2:54] <janos> i could very well be misunderstanding
[2:54] <alphe> janos between bobtail and the current version there is like a whole other world ...
[2:55] <janos> i really hope i'm not completely boned here
[2:55] <alphe> janos it is in the documentation that the ceph.conf changed to fit cephX requirements :)
[2:55] <janos> that was from argonaut to bobtail though, i thought
[2:55] <alphe> cephx is the lower layer for ceph-deploy tool :)
[2:55] * nerdtron (~kenneth@202.60.8.252) has joined #ceph
[2:55] <janos> i'm not using auth, but i have that explicitly declared in ceph.conf
[2:56] <alphe> janos normally if you erase your ceph files and start from scratch a ceph-deploy procedure you should be able to restick your disks together ...
[2:56] <janos> :O
[2:56] <janos> that sounds slightly frightening
[2:57] <alphe> janos ceph is in 0.SomethingEvolvingEveryDay so yes from time to time you will face a total reset stage
[2:57] <janos> sure
[2:58] * cofol1986 (~xwrj@110.90.119.113) Quit (Read error: Connection reset by peer)
[2:58] <janos> but i didn't get the impression from reading and being in here that this would be so catastrophic
[2:58] <janos> i wonder if i can just extract monmap, and scratch the mons
[2:58] <janos> and remake them
[2:58] <gregaf> I don't think that's quite right, guys :)
[2:58] <janos> i currently have no running cluster though
[2:58] <alphe> ok have to run out
[2:59] <gregaf> janos: what nodes did you update and what nodes are you trying to examine them from?
[2:59] <janos> i updated the mons
[2:59] <janos> restarted them in sequence
[2:59] <janos> 2 complained about keys
[2:59] <gregaf> "complained about keys"?
[2:59] <janos> looking to see if i still have the error, sorry
[3:00] <gregaf> what do you mean by "complained about keys"?
[3:00] <alphe> bye all I just wanted to share that s3/ssl rados gateway ceph object storage was working great from windows and that the radosgateway server is 95% idle cpu
[3:00] <janos> ah sorry, i was seeing wrong thing
[3:00] <janos> here's what i'm getting
[3:00] <janos> Starting Ceph mon.2 on mon2...
[3:00] <janos> failed: 'ulimit -n 32768; /usr/bin/ceph-mon -i 2 --pid-file /var/run/ceph/mon.2.pid -c /etc/ceph/ceph.conf '
[3:00] <janos> Starting ceph-create-keys on mon2...
[3:00] <alphe> compared to the 48% idle cpu of a samba/ceph-fuse proxy I'm quite happy
[3:01] <alphe> and I can transfer files 30 by 30 :)
[3:01] <alphe> which is something fantastic !
[3:01] <gregaf> wow, I didn't realize we were doing that in our normal packages...
[3:01] <gregaf> janos; update your ulimit -n hard limits and it should go fine
[3:01] * buck (~buck@c-24-6-91-4.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[3:01] <gregaf> gotta run though, night all
[3:02] <janos> i am sadly unfamiliar with that, but will look
[3:02] <janos> thank you
[3:04] * jluis (~JL@89.181.146.94) Quit (Ping timeout: 480 seconds)
[3:08] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[3:09] * alphe (~alphe@0001ac6f.user.oftc.net) Quit (Quit: Leaving)
[3:10] * Tamil (~tamil@38.122.20.226) Quit (Read error: Connection reset by peer)
[3:11] * Tamil (~tamil@38.122.20.226) has joined #ceph
[3:13] * gregmark (~Adium@68.87.42.115) Quit (Read error: Connection reset by peer)
[3:13] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) has joined #ceph
[3:15] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[3:15] <nerdtron> how do i increase the number of placement groups for an osd pool?
[3:19] <lurbs> Not sure if this is still considered experimental, so you may want to check: ceph osd pool set $pool pg_num $number
[3:19] <janos> i still don't seem to be able to change ulimit -n
[3:19] <lurbs> I just did it with a test pool, but have never done it in production or on a pool I care about.
[3:19] * alram (~alram@cpe-76-167-50-51.socal.res.rr.com) has joined #ceph
[3:20] <janos> i really boned my evening i think
[3:20] <janos> what do i do with a cluster with no living mons?
[3:22] <lurbs> nerdtron: http://paste.uber.geek.nz/aa83b6 <-- I used bogus numbers for the number of PGs, you'd want to choose depending on your environment (as per http://ceph.com/docs/master/rados/operations/placement-groups/)
[3:22] <nerdtron> ahhh thanks lurbs
[3:22] <lurbs> And I don't believe you can decrease the number of PGs.
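A sketch of the commands lurbs describes (the pool name and PG count are placeholders; choose pg_num for your environment per the placement-groups doc linked above):

    ceph osd pool set <pool> pg_num 256     # placement groups for the pool (can only be increased)
    ceph osd pool set <pool> pgp_num 256    # PGs used for data placement; usually raised to match pg_num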
[3:23] * sagelap (~sage@2600:1012:b024:c34b:10a4:efbc:77f3:da42) Quit (Ping timeout: 480 seconds)
[3:23] <nerdtron> i kept forgetting about the ceph osd pool set
[3:24] * smiley (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) has joined #ceph
[3:24] <joshd> janos: try putting "max open files = 8192" in your ceph.conf on the monitors - that shouldn't hit the hard limits
[3:25] * janos will try that
[3:25] <joshd> janos: in the [global] section
[3:25] <janos> ok
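A sketch of the two knobs being suggested here, using the values from the conversation (the limits.conf entries are one common way to raise the per-user hard limit and are an assumption, not from the log):

    # /etc/ceph/ceph.conf
    [global]
        max open files = 8192    # the init script runs 'ulimit -n <value>' before starting each daemon

    # /etc/security/limits.conf
    root  soft  nofile  32768
    root  hard  nofile  32768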
[3:26] <janos> strange, still hit it
[3:26] <janos> ulimit -a shows the -n as 1024
[3:26] <janos> i've tried repeatedly to increase it
[3:26] <janos> without luck
[3:26] <janos> changed in /etc/security/limits.conf
[3:26] <janos> and on cli
[3:26] <janos> at least i think i am
[3:27] <janos> interesting
[3:27] <janos> ulimit -n 8192 DID change the output of ulimit -a
[3:28] <janos> still got this when restarting the mon though
[3:28] <janos> failed: 'ulimit -n 8192; /usr/bin/ceph-mon -i 4 --pid-file /var/run/ceph/mon.4.pid -c /etc/ceph/ceph.conf '
[3:28] <joshd> anything in the mon log?
[3:28] <joshd> or did it still fail at the ulimit part
[3:28] * jlhawn (~jlhawn@208-90-212-77.PUBLIC.monkeybrains.net) has joined #ceph
[3:28] <janos> it seems to start
[3:29] <janos> well
[3:29] <janos> kinda
[3:29] <janos> Starting Ceph mon.4 on osd1...
[3:29] <janos> failed: 'ulimit -n 8192; /usr/bin/ceph-mon -i 4 --pid-file /var/run/ceph/mon.4.pid -c /etc/ceph/ceph.conf '
[3:29] <janos> Starting ceph-create-keys on osd1...
[3:29] <janos> not sure that counts ;/
[3:29] * lx0 is now known as lxo
[3:30] <janos> 2013-08-26 21:28:54.039327 7fe6171bd7c0 -1 failed to create new leveldb store
[3:30] <janos> hrm
[3:30] <janos> could that be the issue?
[3:31] <joshd> that does sound suspicious
[3:31] <janos> i have 1.7.0 leveldb installed (f18)
[3:32] <jlhawn> Can anyone here tell me if CephFS supports a uid/gid like mount option (not using the FUSE client)
[3:33] <janos> sadly i have zero idea how to debug further
[3:33] <joshd> janos: I'm afraid I'm not sure either, I'd suggest waiting for joao
[3:33] <joao> ?
[3:34] <janos> lol
[3:34] <joshd> oh, he's here!
[3:34] <janos> joao: i did an update from botail to dumpling
[3:34] <joao> I had just decided that I should call it a day and go to bed lol
[3:34] <janos> and restarted the mons
[3:34] <janos> and now i'm dead in the water
[3:34] <janos> thanks, joshd
[3:34] <joao> oh
[3:34] <joshd> yw
[3:34] <joao> logs?
[3:34] <joao> ah
[3:34] <janos> and i have no idea how to proceed
[3:34] <joao> wait
[3:35] <janos> k
[3:35] <joao> janos, does the mon dir exist?
[3:35] <joao> are you adding a new monitor or upgrading an existing one?
[3:35] <janos> like /var/lib/ceph/ etc?
[3:35] <janos> upgrading existing
[3:35] <joao> janos, more like /var/lib/ceph/mon/ceph-foo
[3:35] <joao> okay
[3:35] <janos> yep
[3:36] <janos> have that dir structure
[3:36] <joao> janos, set 'debug mon = 20', rerun the monitor, and send the log my way
[3:36] <joao> btw, what version of cuttlefish are you upgrading to?
[3:36] <janos> ok. sorry to sound so noobish - set that in the ceph.conf?
[3:36] <janos> dumpling
[3:36] <janos> 67.2
[3:36] <joao> janos, yes
[3:36] <joao> oh, okay
[3:36] <janos> global, or just on that mon section
[3:36] <joao> log please :p
[3:36] <joao> just that mon's section is fine
[3:36] <janos> ok
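What joao is asking for, as it would look in janos's ceph.conf (mon.4 is the monitor from the paste above; the line should be removed again once the logs are collected):

    [mon.4]
        debug mon = 20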
[3:37] * dmsimard (~Adium@69-165-206-93.cable.teksavvy.com) has joined #ceph
[3:37] * dmsimard (~Adium@69-165-206-93.cable.teksavvy.com) Quit ()
[3:37] <janos> not sure if it will show in logs, but when restarting i get this output
[3:37] <janos> Starting Ceph mon.4 on osd1...
[3:37] <janos> failed: 'ulimit -n 8192; /usr/bin/ceph-mon -i 4 --pid-file /var/run/ceph/mon.4.pid -c /etc/ceph/ceph.conf '
[3:37] <janos> Starting ceph-create-keys on osd1...
[3:38] <joao> pastebin the log please
[3:38] <janos> ok
[3:39] <joao> 'ceph-create-keys' will wait for the monitors to come up and establish a quorum; if they fail to do so for some reason, that command will hang in there waiting
[3:40] <janos> joao: here's since you asked me to add the debug
[3:40] <janos> http://paste.fedoraproject.org/34980/13775676
[3:40] <joao> ty
[3:40] <janos> yw
[3:40] <janos> i have 3 mons
[3:40] <janos> all down
[3:40] <joao> err
[3:41] <janos> this is just mon.4
[3:41] <joao> that doesn't look a lot like debug mon = 10
[3:41] <janos> i was expecting more
[3:41] <joao> crank it up to 'debug mon = 30'
[3:41] <janos> ok
[3:41] <joao> I can't recall what's the log level on what would be useful
[3:41] <joao> and just to be on the safe side, set it on [global]
[3:41] <janos> ok
[3:42] <joao> that will make it sure to affect whichever monitor you're starting
[3:42] <joao> wait
[3:42] <joao> janos, can you please 'ls /var/lib/ceph/mon/ceph-4' ?
[3:42] <janos> this output isn't much more verbose
[3:43] <janos> sure thing
[3:43] <janos> http://paste.fedoraproject.org/34981/67809137
[3:43] <janos> would ls -l be more helpful?
[3:44] <joao> no :)
[3:44] <janos> haha ok
[3:44] <joao> janos, make sure /var/lib/ceph/mon/ceph-4/store.db is not empty, please?
[3:45] <janos> it has files, just checked
[3:45] <joao> awesome; so that message is irrelevant
[3:45] <janos> .sst files, CURRENT, LOCK, etc
[3:45] <joao> I'll make sure to figure out where that's coming from in the morning
[3:45] <joao> janos, how many monitors do you have?
[3:45] <janos> 3
[3:46] <janos> this was the last one i restarted
[3:46] <joao> can you please bring a second one up?
[3:46] <janos> i can try ;)
[3:46] <joao> same as before, set 'debug mon = 10'
[3:46] <janos> ok
[3:46] <joao> (and you can set the other monitor to 10 as well)
[3:46] <joao> 30 is overkill
[3:46] <janos> ok
[3:47] <joao> whenever you're ready, please paste the log
[3:47] <janos> will do
[3:48] <janos> just since i restarted?
[3:48] <janos> oh this is more interesting
[3:48] <joao> that would be ideal, but the whole log is okay too
[3:49] <janos> i can paste these 3 lines here
[3:49] <joao> sure
[3:49] <janos> this repeats
[3:49] <joao> 3 lines != bazillions of lines
[3:49] <janos> 2013-08-26 21:47:34.857190 7fe8a055a7c0 -1 there is an on-going (maybe aborted?) conversion.
[3:49] <janos> 2013-08-26 21:47:34.857535 7fe8a055a7c0 -1 you should check what happened
[3:49] <janos> 2013-08-26 21:47:34.857597 7fe8a055a7c0 -1 remove store.db to restart conversion
[3:49] <joao> ah
[3:49] <janos> different than the other one
[3:49] <joao> nice :)
[3:49] <janos> should i follow its advice?
[3:49] <joao> so, basically, you must have killed a monitor after upgrading to dumpling
[3:49] <joao> or all the monitors
[3:50] <joao> maybe they were taking too long to form a quorum and you thought killing them would fix it? :p
[3:50] * janos hides in shame
[3:50] * janos clubs himself and then hides in shame
[3:50] <janos> not on mon.4 though
[3:50] <joao> sure, rm the store.db directory on your monitors, rerun ceph-mon and let it simmer
[3:50] <janos> ok
[3:50] <joao> just store.db
[3:51] <janos> will do
[3:51] <joao> backup your monitors first if you feel like it
[3:51] <janos> k
[3:51] <joao> might be a good idea to remove the 'debug mon = 10' option as well before starting
[3:51] <janos> ok
[3:52] <joao> the store conversion can populate the logs with a lot of irrelevant messages
[3:52] <janos> copying the mon to mon.bak still
[3:52] <joao> otoh, you can leave it there and tail -f just to make sure they're alive
[3:52] <joao> well, big stores can take their sweet time converting
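A sketch of the recovery joao describes, for mon.4 (paths assume the default layout and the sysvinit script seen earlier; the backup step is optional but cheap):

    cp -a /var/lib/ceph/mon/ceph-4 /var/lib/ceph/mon/ceph-4.bak   # back up the mon dir first, as suggested
    rm -rf /var/lib/ceph/mon/ceph-4/store.db                      # lets the bobtail->dumpling store conversion restart
    service ceph start mon.4                                      # rerun the monitor and let it simmer
    tail -f /var/log/ceph/ceph-mon.4.log                          # watch the conversion progress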
[3:52] <janos> i will learn more patience
[3:53] <janos> even if this doesn't get me rolling i still owe you
[3:53] <janos> much appreciated
[3:53] <joao> don't mention it
[3:53] <joao> I was idling in front of the computer anyway :p
[3:53] <janos> haha
[3:54] <joao> well, let me know how it goes
[3:54] <janos> will do
[3:54] <janos> tail -f in progress
[3:54] * indeed (~indeed@206.124.126.33) Quit (Remote host closed the connection)
[3:54] <janos> going to get some coffee i think
[3:55] <janos> brb
[3:55] <joao> and I'm going to bed
[3:55] * indeed (~indeed@206.124.126.33) has joined #ceph
[3:55] <joao> later
[3:56] <janos> back
[3:56] <janos> work from home!
[3:56] <janos> going to get this started on mon.1 as well
[3:57] * indeed_ (~indeed@206.124.126.33) has joined #ceph
[3:57] * indeed (~indeed@206.124.126.33) Quit (Read error: Connection reset by peer)
[4:01] <janos> !
[4:01] <joao> ?
[4:01] <janos> i seem to have 2/3 quorum
[4:01] <joao> there you go
[4:01] <janos> mon.1 still converting
[4:01] <janos> dang
[4:01] <joao> should be back on track when mon.1 finishes recovering
[4:01] <janos> really, i'll send you ribs, beer, whatever ;)
[4:01] <janos> a good book
[4:01] <joao> lol
[4:02] <joao> well, off to bed (this time for realsies)
[4:02] <janos> thank you again
[4:02] <joao> you're welcome
[4:02] * doxavore (~doug@108-85-233-208.lightspeed.rcsntx.sbcglobal.net) has joined #ceph
[4:03] <lxo> say I have a corrupted btrfs (can't remount ro) holding one replica of every pg, and a brand new disk that I want to use to increase the replication count by one
[4:03] <lxo> instead of letting ceph replicate everything to the new disk, I think it would be much faster to copy the data from the read-only btrfs to the new disk, and then create a new fs and copy the data back
[4:04] <lxo> but then, if I'm doing that... can I use the data I copied to the new disk to initialize the new osd in there? like, change a few bytes in the superblock and voila?
[4:04] * sherry (~sherry@wireless-nat-10.auckland.ac.nz) Quit (Quit: Konversation terminated!)
[4:05] * sherry (~sherry@wireless-nat-10.auckland.ac.nz) has joined #ceph
[4:10] * sjusthm (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[4:12] * wschulze1 (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[4:14] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (Read error: Operation timed out)
[4:19] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) has joined #ceph
[4:27] * alram (~alram@cpe-76-167-50-51.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[4:30] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[4:35] * Karcaw (~evan@68-186-68-219.dhcp.knwc.wa.charter.com) Quit (Ping timeout: 480 seconds)
[4:37] * sherry_ (~sherry@wireless-nat-10.auckland.ac.nz) has joined #ceph
[4:39] * sherry (~sherry@wireless-nat-10.auckland.ac.nz) Quit (Read error: Connection reset by peer)
[4:40] * compbio (~compbio@nssc.nextspace.us) Quit (Remote host closed the connection)
[4:50] * jlhawn (~jlhawn@208-90-212-77.PUBLIC.monkeybrains.net) Quit (Quit: jlhawn)
[4:51] * indeed_ (~indeed@206.124.126.33) Quit (Remote host closed the connection)
[4:55] * wschulze1 (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (Read error: Connection reset by peer)
[4:55] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[5:12] * rudolfsteiner (~federicon@181.21.162.194) has joined #ceph
[5:15] * alram (~alram@cpe-76-167-50-51.socal.res.rr.com) has joined #ceph
[5:16] * dpippenger (~riven@tenant.pas.idealab.com) Quit (Quit: Leaving.)
[5:22] * alram (~alram@cpe-76-167-50-51.socal.res.rr.com) Quit (Quit: leaving)
[5:23] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) Quit (Read error: Connection reset by peer)
[5:23] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph
[5:23] * rudolfsteiner (~federicon@181.21.162.194) Quit (Quit: rudolfsteiner)
[5:27] <yy-nm> hey, all. i have a question about log to syslog / err to syslog. how can i tell which syslog facility and level ceph's messages end up under?
[5:35] * rudolfsteiner (~federicon@181.21.160.181) has joined #ceph
[5:44] * rudolfsteiner (~federicon@181.21.160.181) Quit (Quit: rudolfsteiner)
[5:51] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (Quit: Leaving.)
[5:51] * wogri_risc (~Adium@85.233.126.167) has joined #ceph
[5:52] * sjusthm (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Remote host closed the connection)
[6:07] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) has joined #ceph
[6:12] * nerdtron (~kenneth@202.60.8.252) Quit (Quit: Leaving)
[6:19] * matt_ (~matt@220-245-1-152.static.tpgi.com.au) has joined #ceph
[6:36] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Ping timeout: 480 seconds)
[6:37] * janos (~janos@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (Read error: Operation timed out)
[6:42] * doxavore (~doug@108-85-233-208.lightspeed.rcsntx.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[6:44] * brzm (~medvedchi@node199-194.2gis.com) has joined #ceph
[6:44] * brzm (~medvedchi@node199-194.2gis.com) Quit ()
[6:45] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[6:54] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 23.0.1/20130814063812])
[7:07] * Karcaw (~evan@68-186-68-219.dhcp.knwc.wa.charter.com) has joined #ceph
[7:08] * houkouonchi-work (~linux@gw.sepia.ceph.com) Quit (Remote host closed the connection)
[7:33] * xarses (~andreww@c-50-136-199-72.hsd1.ca.comcast.net) has joined #ceph
[7:35] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[8:05] * tnt (~tnt@91.177.230.140) has joined #ceph
[8:07] * topro (~topro@host-62-245-142-50.customer.m-online.net) has joined #ceph
[8:08] * foosinn (~stefan@office.unitedcolo.de) has joined #ceph
[8:16] * odyssey4me (~odyssey4m@165.233.71.2) has joined #ceph
[8:22] * wogri_risc (~Adium@85.233.126.167) Quit (Quit: Leaving.)
[8:22] * wogri_risc (~Adium@85.233.126.167) has joined #ceph
[8:22] * wogri_risc (~Adium@85.233.126.167) has left #ceph
[8:25] * sleinen (~Adium@2001:620:0:25:d05c:9942:93fc:94e) has joined #ceph
[8:50] * ssejour (~sebastien@out-chantepie.fr.clara.net) has joined #ceph
[9:00] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[9:04] * sleinen (~Adium@2001:620:0:25:d05c:9942:93fc:94e) Quit (Quit: Leaving.)
[9:04] * tziOm (~bjornar@194.19.106.242) has joined #ceph
[9:06] * tnt (~tnt@91.177.230.140) Quit (Ping timeout: 480 seconds)
[9:10] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:11] * BillK (~BillK-OFT@220-253-162-118.dyn.iinet.net.au) Quit (Quit: ZNC - http://znc.in)
[9:12] * BillK (~BillK-OFT@220-253-162-118.dyn.iinet.net.au) has joined #ceph
[9:12] * BillK (~BillK-OFT@220-253-162-118.dyn.iinet.net.au) Quit ()
[9:12] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[9:13] * BillK (~BillK-OFT@220-253-162-118.dyn.iinet.net.au) has joined #ceph
[9:24] * sleinen (~Adium@ext-dhcp-187.eduroam.unibe.ch) has joined #ceph
[9:24] * topro (~topro@host-62-245-142-50.customer.m-online.net) Quit (Ping timeout: 480 seconds)
[9:24] * jlogan2 (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph
[9:26] * sleinen1 (~Adium@2001:620:0:26:6954:11bd:1cf6:654a) has joined #ceph
[9:27] * tnt (~tnt@212-166-48-236.win.be) has joined #ceph
[9:28] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) Quit (Ping timeout: 480 seconds)
[9:32] * sleinen (~Adium@ext-dhcp-187.eduroam.unibe.ch) Quit (Ping timeout: 480 seconds)
[9:35] * vipr (~vipr@office.loft169.be) has joined #ceph
[9:38] * Bada (~Bada@195.65.225.142) has joined #ceph
[9:48] * artwork_lv (~artwork_l@adsl.office.mediapeers.com) has joined #ceph
[9:49] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[9:49] * sherry_ (~sherry@wireless-nat-10.auckland.ac.nz) Quit (Remote host closed the connection)
[9:50] * artwork_lv (~artwork_l@adsl.office.mediapeers.com) Quit ()
[9:54] * artwork_lv (~artwork_l@adsl.office.mediapeers.com) has joined #ceph
[9:58] * topro (~topro@host-62-245-142-50.customer.m-online.net) has joined #ceph
[10:00] * yy-nm (~Thunderbi@122.233.46.4) Quit (Quit: yy-nm)
[10:03] * ScOut3R (~ScOut3R@catv-89-133-25-52.catv.broadband.hu) has joined #ceph
[10:09] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[10:09] * AfC (~andrew@2407:7800:200:1011:f946:9508:d31e:c9fa) Quit (Quit: Leaving.)
[10:13] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[10:23] <loicd> ccourtaut: http://ceph.com/docs/master/radosgw/s3/ your matrix is not here ? I can't remember where to find it
[10:24] <ccourtaut> loicd: http://ceph.com/docs/master/dev/radosgw/s3_compliance/
[10:25] * KindTwo (~KindOne@h173.45.28.71.dynamic.ip.windstream.net) has joined #ceph
[10:28] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[10:28] * KindTwo is now known as KindOne
[10:52] * sherry (~sherry@wireless-nat-10.auckland.ac.nz) has joined #ceph
[10:59] <sherry> what should be the result of this command > ~/ceph$ dpkg-checkbuilddeps
[10:59] <sherry> I got this > dpkg-checkbuilddeps: Unmet build dependencies: debhelper (>= 6.0.7~) javahelper libboost-system-dev (>= 1.42) libnss3-dev yasm
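A sketch of one way to clear that, with the package names taken straight from the dpkg-checkbuilddeps output above (assumes a Debian/Ubuntu build host):

    sudo apt-get install debhelper javahelper libboost-system-dev libnss3-dev yasm
    dpkg-checkbuilddeps    # prints nothing once all build dependencies are satisfied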
[11:02] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) Quit (Quit: Leaving.)
[11:03] * allsystemsarego (~allsystem@5-12-37-127.residential.rdsnet.ro) has joined #ceph
[11:09] <tnt> Is there a doc about the new radosgw region / bucket placement rule stuff ?
[11:12] <sherry> yes
[11:14] <tnt> do you have a link ? :) I'm browsing http://ceph.com/docs/master but don't see anything.
[11:17] <MACscr> so i noticed in a forum last night that some guy was running OSDs on the same nodes that he was using as openstack compute nodes. Now he only had 3 disks per node, but isnt it recommended not to mix those two roles since OSDs can be really resource hungry?
[11:18] <sherry> http://ceph.com/docs/master/rados/operations/crush-map/
[11:19] <sherry> is that what u want?!
[11:20] <tnt> sherry: no. Seems that radosgw has a feature to place buckets into pools.
[11:21] <tnt> MACscr: depends on your cost/perf tradeoff ... I share compute nodes with osd nodes and it's fine.
[11:23] <MACscr> tnt: in production?
[11:24] <tnt> MACscr: yes.
[11:25] <tnt> let me pull the graph of cpu usage on OSD.
[11:27] <MACscr> how many osd's per node? what kind of disks? are you using kvm for the vm's?
[11:27] <tnt> http://i.imgur.com/Ggh1SM7.png
[11:27] <tnt> this is the cpu usage (sys & user) for the 4 OSD processes on the machine.
[11:27] <tnt> I'm using RBD for the VM, but XEN not KVM.
[11:29] <tnt> Most VMs don't have much local data. Most of it is either in Postgres DBs (which are on dedicated db servers), or in S3 directly. The biggest RBD users are some VMs running a full text index using elasticsearch.
[11:30] <MACscr> tnt: SATA or SAS drives?
[11:30] <tnt> The graphs are from 1 machine running 4 OSDs (it has 6 disks but only 4 are dedicated to ceph).
[11:30] <tnt> those are 10k SAS drives.
[11:30] <MACscr> btw, what are you using for the graphing and i apologize for my 20 questions =P
[11:31] <MACscr> sry, for your cpu monitoring/charting
[11:31] <tnt> I use graphite to generate the graphs. The data are collected by a custom daemon I wrote (similar to collectd).
[11:32] <MACscr> tnt: this was my original ceph design that i havent implemented yet http://www.screencast.com/t/uvx3rfBg
[11:32] * sherry (~sherry@wireless-nat-10.auckland.ac.nz) Quit (Quit: Konversation terminated!)
[11:32] <tnt> If you start putting 24 disks in one server, it makes sense to have dedicated OSD nodes I guess, but in my case I wanted to have a more "homogeneous" setup with all the same nodes having both compute and disks.
[11:32] <MACscr> though im being told that i shouldnt do SSD's for journals unless i have at least 2 of them per node
[11:33] <tnt> you probably want more than 1 ssd for 12 disk yes.
[11:33] <MACscr> yeah, well my compute nodes only have 3 bays for drives and only one pci slot and i wanted to use those for extra nics
[11:33] <mattch> MACscr: If you put all your osds onto one journal disk, then if it fails, all your osds go offline
[11:33] <tnt> if you lose the journal, you lose the OSD.
[11:34] <MACscr> stinks that the journal just doesnt failover (degraded of course) to the spindles
[11:34] <MACscr> i originally planned my hardware around zfs until i found ceph
[11:34] <MACscr> zfs degrades a little better in that particular case
[11:34] <tnt> well ... it can't really do that since the whole point of the journal is to write on it first and consider the data "safe" once written on it.
[11:35] <mattch> MACscr: Do you need the performance that ssd journalling gives?
[11:35] <mattch> if not, just put the journals on the same disk as the osd
[11:35] <MACscr> well i want the best possible performance obviously for the money =P
[11:36] <MACscr> i could find another way to use the SSD's though
[11:37] <MACscr> ok, so lets say an SSD did fail that was being used as the journal, what would be the next step to recover the system?
[11:37] * sleinen1 (~Adium@2001:620:0:26:6954:11bd:1cf6:654a) Quit (Quit: Leaving.)
[11:37] * sleinen (~Adium@ext-dhcp-187.eduroam.unibe.ch) has joined #ceph
[11:37] <tnt> MACscr: you scrap the OSD and let it rebuild from scratch.
[11:38] <mattch> (assuming replication level of 2 of course)
[11:38] <MACscr> ok, so thats really not the end of the world
[11:38] <tnt> yeah. But it'd be crazy not to have lvl=2 at least and with only 2 physical hosts you can't have lvl=3 ...
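A hedged sketch of the "scrap and rebuild" tnt describes, for one OSD whose journal SSD died (the osd id, hostname and device paths are placeholders, not from the log):

    ceph osd out 12                 # stop placing data on the dead osd
    service ceph stop osd.12        # if the daemon is somehow still running
    ceph osd crush remove osd.12    # drop it from the crush map
    ceph auth del osd.12            # remove its auth key
    ceph osd rm 12                  # remove it from the osd map
    # then recreate it against the replacement journal device, e.g. with ceph-deploy:
    #   ceph-deploy osd create storage1:/dev/sdm:/dev/sdb1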
[11:39] <MACscr> right, the entire point of having two storage nodes is for complete replication
[11:39] <mattch> MACscr: Well, your pool will be degraded/at risk for as long as it takes to copy 3.6TB of data back from the other server (and while 1 server is handling all the pool load too)
[11:39] <tnt> MACscr: not unless one of the 12 spindles fails at the same time.
[11:39] <mattch> tnt: yep - just making sure that's clear :)
[11:39] <mattch> tnt: Which is the risk with a single ssd journal
[11:39] <MACscr> well i have backups of course if shit really hit the fan, but obviously we want to avoid that from possibly happening
[11:40] <MACscr> so what i should probably do is set it up without the ssd and benchmark it, then benchmark it with it and then see if its worth investing in a second ssd per storage node. Right?
[11:41] <MACscr> i know with ZFS, an SSD can (but not always) make a big difference. Any idea how much SSD's help with ceph?
[11:41] <tnt> MACscr: one down side with only 2 ceph OSD nodes is that if one fails, you have a significant part of the cluster down.
[11:41] <MACscr> yeah, i understand that. Is it that common with ceph? lol
[11:41] <MACscr> im used to storage nodes being up for a few years at a time
[11:42] <tnt> Well there are updates and various maintenance tasks ...
[11:43] <tnt> I think I have a reboot or planned shutdown every couple of months or so.
[11:43] * nwat (~nwat@46.189.28.147) has joined #ceph
[11:44] <mattch> MACscr: Benchmarking makes sense (http://ceph.com/w/index.php?title=Benchmark is as good a place to start as any)
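A sketch of a first rados-level test along those lines (pool name and duration are placeholders; --no-cleanup keeps the written objects so the read phase has something to read):

    rados bench -p testpool 60 write --no-cleanup    # 60 seconds of object writes
    rados bench -p testpool 60 seq                   # sequential reads of the objects written above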
[11:44] <MACscr> well thats no fun. I actually use ksplice to avoid having to do any reboots to apply kernel updates
[11:44] <mattch> MACscr: not sure if I missed it, but what's the network this pool is running on? 10G?
[11:45] * sleinen (~Adium@ext-dhcp-187.eduroam.unibe.ch) Quit (Ping timeout: 480 seconds)
[11:46] <MACscr> matt_: yes, dual 10GbE to the switch, then everything else is 2 x 1Gb vlans (the ceph mon, compute nodes, etc)
[11:46] <MACscr> then a whole other nic for wan and another for management
[11:46] <MACscr> every system has at least 4 nics
[11:46] <tnt> cabling must be fun :P
[11:47] <MACscr> lol, yep
[11:47] <MACscr> thank God for colored cables =P
[11:47] <mattch> MACscr: So your network isn't the replication bottleneck then, but it'll still be 1-2 hours I'd guess to replicate in the event of one OSD server being replaced
[11:48] * hug (~hug@nuke.abacus.ch) has joined #ceph
[11:49] <MACscr> hmm, so when i do have to do maintenance and reboot a storage node, can i just reboot one at a time, and it will just be degraded while that node is offline and then resync when it comes back?
[11:49] <tnt> mattch: in my experience even with a 1G network, the network is not the bottleneck ... so far ceph is.
[11:49] <mattch> tnt: single 1G network with SSD journals everywhere and it could max out the link I guess...
[11:50] <mattch> (not a hardware combo I've tried though, so just guessing)
[11:51] <tnt> mattch: depends on your average object size apparently. In my case I have lots of relatively small objects (like ~ 1M).
[11:52] * yanzheng (~zhyan@134.134.139.74) Quit (Remote host closed the connection)
[11:52] <MACscr> while i have your attention, my second storage node is an older dual xeon L5420 with only 16gb of ram and max is only 24gb. Think thats going to be ok?
[11:53] <MACscr> id prefer to remove one of the cpu's if possible to reduce the power usage, but obviously not required
[11:53] * jaydee (~jeandanie@124x35x46x13.ap124.ftth.ucom.ne.jp) Quit (Ping timeout: 480 seconds)
[11:53] <MACscr> i kind of got it for free when buying the hard drives
[11:54] <tnt> I use 1G per OSD process + 1G for OS stuff and it's running at less than half that in practice.
[11:54] <tnt> so 16G for 12 spindles should be fine.
[11:58] <MACscr> what do you think cpu wise?
[11:58] <MACscr> aka, would i be fine with one?
[11:58] <MACscr> their TDP is 50w, so they are pretty low power, but since im collocating, every watt counts =P
[12:02] <tnt> should be fine.
[12:03] <tnt> might be a bit loaded during recovery if one fails but for normal operation I think it should be ok.
[12:05] <MACscr> got a steal on the server, $560 and thats with 12 x 300gb 15k sas drives
[12:06] <MACscr> figured even if the server doesnt work out, it was still about $100 less than what the drives would have been alone
[12:06] <tnt> heh, yeah pretty good :)
[12:07] * fireD (~fireD@93-142-212-187.adsl.net.t-com.hr) has joined #ceph
[12:08] * nwat (~nwat@46.189.28.147) Quit (Quit: Lost terminal)
[12:17] * ScOut3R (~ScOut3R@catv-89-133-25-52.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[12:23] <loicd> ccourtaut: thanks for the link. Is it linked from the doc or just isolated ?
[12:25] * DarkAce-Z (~BillyMays@50.107.55.36) has joined #ceph
[12:26] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) has joined #ceph
[12:27] * DarkAceZ (~BillyMays@50.107.55.36) Quit (Read error: Operation timed out)
[12:38] * sleinen (~Adium@ext-dhcp-187.eduroam.unibe.ch) has joined #ceph
[12:40] * sleinen1 (~Adium@2001:620:0:25:6948:3af5:629d:c812) has joined #ceph
[12:42] * sleinen (~Adium@ext-dhcp-187.eduroam.unibe.ch) Quit (Read error: Operation timed out)
[12:49] * Bada (~Bada@195.65.225.142) Quit (Ping timeout: 480 seconds)
[12:51] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[12:58] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[12:58] * ChanServ sets mode +v andreask
[13:09] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[13:18] * diegows (~diegows@190.190.11.42) has joined #ceph
[13:19] * KindTwo (~KindOne@h239.23.131.174.dynamic.ip.windstream.net) has joined #ceph
[13:19] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[13:20] * KindTwo is now known as KindOne
[13:26] * rudolfsteiner (~federicon@181.21.141.232) has joined #ceph
[13:28] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[13:29] * jlogan2 (~Thunderbi@2600:c00:3010:1:1::40) Quit (Ping timeout: 480 seconds)
[13:30] * thorus (~jonas@212.114.160.100) has joined #ceph
[13:30] <thorus> 2013-08-27 13:29:54.199405 7faa58991700 0 mon.a@0(leader).data_health(304) update_stats avail 89% total 28740740 used 3109768 avail 25630972
[13:30] <thorus> I'm getting this every minute in my logs
[13:30] * artwork_lv (~artwork_l@adsl.office.mediapeers.com) Quit (Quit: artwork_lv)
[13:31] <thorus> what does it mean exactly? Is there a failure or just a stats message?
[13:31] <tnt> just stats
[13:33] * rudolfsteiner (~federicon@181.21.141.232) Quit (Quit: rudolfsteiner)
[13:35] * rudolfsteiner (~federicon@181.21.141.232) has joined #ceph
[13:35] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph
[13:40] * artwork_lv (~artwork_l@adsl.office.mediapeers.com) has joined #ceph
[13:40] * markbby (~Adium@168.94.245.1) has joined #ceph
[13:41] * Bada (~Bada@195.65.225.142) has joined #ceph
[13:42] * yanzheng (~zhyan@101.82.56.138) has joined #ceph
[13:44] * rudolfsteiner_ (~federicon@181.21.158.244) has joined #ceph
[13:45] * libsysguy (~libsysguy@2620:0:28a0:2004:dc25:a996:5367:3150) has joined #ceph
[13:45] * rudolfsteiner_ (~federicon@181.21.158.244) Quit ()
[13:45] <libsysguy> is anybody around that uses the perl Amazon::S3 client with the radosgw?
[13:47] * rudolfsteiner (~federicon@181.21.141.232) Quit (Ping timeout: 480 seconds)
[13:47] * markbby (~Adium@168.94.245.1) Quit (Remote host closed the connection)
[13:48] * smiley (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) Quit (Quit: smiley)
[13:52] * janos (~janos@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[13:53] * mozg (~andrei@host217-46-236-49.in-addr.btopenworld.com) has joined #ceph
[13:53] <mozg> hello guys
[13:53] <mozg> i've recently updated to ceph 0.67.2
[13:53] <mozg> the upgrade went very nicely and I didn't have any service interruption
[13:54] <mozg> many thanks for all your help
[13:54] <janos> you clearly were not a bonehead like me then
[13:54] <mozg> janos: did you have issues?
[13:54] <janos> yeah but of my own making
[13:54] <mozg> i've followed the procedure steps in release notes )))
[13:54] <janos> i thought the mon conversion was hung and cancelled it
[13:54] <mozg> yeah, i've also done mistakes in the past
[13:55] <libsysguy> is everyone using the ceph-deploy tool to do future upgrades?
[13:55] <mozg> However, I did notice an issue with my virtual machines which run disk benchmarks
[13:55] <mozg> i am using phoronix test suite
[13:55] <mozg> and their pts/disk benchmark collection consisting of 20 different benchmarks
[13:56] <janos> libsysguy: not yet. i haven't touched ceph-deploy
[13:56] <mozg> i've run it 3 times and had a kernel panic 2 out of 3 times
[13:56] <janos> libsysguy: but i just did bobtail to dumpling last night
[13:56] <mozg> whereas when I was using 0.61.7 I did not experience this behaviour.
[13:56] <mozg> however, i did see occasional hung task issues with 0.61.7
[13:56] * markbby (~Adium@168.94.245.4) has joined #ceph
[13:57] <yanzheng> rbd or cephfs ?
[13:57] <mozg> but not as severe as with dumpling
[13:57] <mozg> rbd
[13:57] <libsysguy> I have gotten the hung key creation with 0.61
[13:57] <mozg> and cache=none
[13:57] <libsysguy> but that could be my doing
[13:58] <mozg> has anyone else experienced issues when running heavy io tests?
[13:58] <mozg> who would be the best person from the ceph team to speak to?
[13:58] * libsysguy is afraid to tempt fate and run any heavy tests
[13:58] <mozg> libsysguy, hehe
[13:59] <mozg> i do not see the tests impacting ceph cluster
[13:59] <libsysguy> I feel like my ceph setup is pretty brittle running on centos6 and lighttpd
[13:59] <mozg> it doesn't fall over
[13:59] <mozg> vms on the other hand tend to fall over ((
[13:59] <janos> oh i liked lighttpd. is that still under development?
[13:59] * libsysguy is unsure
[14:00] <janos> no biggie
[14:00] <libsysguy> I just know it was the only webserver I could get ceph running with on cent
[14:00] <janos> ah
[14:00] <libsysguy> apache needed the mod_fastcgi rpm not in epel and nginx choked big time
[14:01] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[14:01] * ChanServ sets mode +v andreask
[14:01] <libsysguy> janos do you mean is the lighttpd server still under development or the support for it in ceph?
[14:02] <janos> lighttpd
[14:02] <janos> i thought it had kinda drifted off
[14:03] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[14:03] <libsysguy> it certainly seems like it has
[14:04] <libsysguy> the last release was from november 2012
[14:04] <janos> bummer
[14:04] <libsysguy> but their release cycle is also pretty long it seems
[14:05] <libsysguy> its okay with anything RHEL it'll be supported for the next 20 or so years and never get updated to the latest version, in true redhat fashion :p
[14:06] <janos> haha true
[14:10] <janos> odd i no longer seem to be getting responses from the cluster when i issue commands like "ceph osd crush reweight {name} {weight}"
[14:10] <janos> where weight is things like .4 or 0.4
[14:10] * markbby (~Adium@168.94.245.4) Quit (Quit: Leaving.)
[14:12] * markbby (~Adium@168.94.245.4) has joined #ceph
[14:12] * rudolfsteiner (~federicon@181.21.158.244) has joined #ceph
[14:14] * yanzheng (~zhyan@101.82.56.138) Quit (Ping timeout: 480 seconds)
[14:17] * mozg (~andrei@host217-46-236-49.in-addr.btopenworld.com) Quit (Ping timeout: 480 seconds)
[14:31] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[14:34] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit ()
[14:38] <ron-slc> I have an odd question: will a Dumpling v.68 radosgw function on a Cuttlefish v.61 cluster?
[14:49] * sglwlb (~sglwlb@221.12.27.202) Quit ()
[14:49] * sglwlb (~sglwlb@124.90.123.148) has joined #ceph
[14:50] <wogri_risc> ron-slc: maybe. what do the upgrade docs of dumpling say - is radosgw the last part to update? then it might be working.
[14:53] * nwat (~nwat@ext-40-205.eduroam.rwth-aachen.de) has joined #ceph
[14:55] * sleinen1 (~Adium@2001:620:0:25:6948:3af5:629d:c812) Quit (Quit: Leaving.)
[14:55] * sleinen (~Adium@ext-dhcp-187.eduroam.unibe.ch) has joined #ceph
[14:56] <ron-slc> wogri_risc: Upgrade docs always state 1) mon 2) osd 3) mds 4) radosgw, but radosgw is pretty much basic rados... I'm testing in a virtual cluster now. I need Regions, but we can't upgrade the production cluster to dumpling until we have ~2 months of dumpling success in the testing cluster.
[14:57] <wogri_risc> ron-slc: you're probably stuck then. sorry.
[14:57] <ron-slc> :) quite possible
[14:57] * zhyan_ (~zhyan@101.82.160.28) has joined #ceph
[15:01] * rudolfsteiner (~federicon@181.21.158.244) Quit (Quit: rudolfsteiner)
[15:02] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) Quit (Quit: wogri_risc)
[15:03] * sleinen (~Adium@ext-dhcp-187.eduroam.unibe.ch) Quit (Ping timeout: 480 seconds)
[15:05] * YD (YD@d.clients.kiwiirc.com) has joined #ceph
[15:06] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[15:06] <YD> hello everybody. I have some strange things with dumpling 0.67.2
[15:06] <YD> ceph -s gives this : health HEALTH_WARN 278 pgs stale; 278 pgs stuck stale
[15:07] <YD> so 278 pgs stuck stale. when I do ceph pg dump_stuck stale, I get pgs referring to down (& out) osds
[15:08] <YD> ceph osd tree still refers to those old (& removed) osds : 6 0 osd.6 down 0
[15:08] <YD> 7 0 osd.7 down 0
[15:08] <YD> 8 0 osd.8 down 0
[15:08] <YD> 21 0 osd.21 down 0
[15:08] <YD> 22 0 osd.22 down 0
[15:08] <YD> 23 0 osd.23 down 0
[15:09] <zhyan_> these 278 pgs are on the down osd
[15:09] <YD> ceph-mon-lmb-B-1:~# ceph osd crush remove 6
[15:09] <YD> device '6' does not appear in the crush map
[15:09] <YD> ceph-mon-lmb-B-1:~# ceph osd crush remove osd.6
[15:09] <YD> device 'osd.6' does not appear in the crush map
[15:09] <YD> is there something obviously wrong on my part ?
[15:10] <YD> yup, but those are old osds that don't exist anymore
[15:11] <YD> no reason why the pgs are stuck
[15:13] <YD> I mean, data should have moved
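For reference, the usual cleanup when removed OSDs still show up as down/out in ceph osd tree is roughly the following, with osd.6 as the example id; as it turns out further down, YD's stale pgs had a different cause, so this is only the generic procedure:

    # drop a dead OSD from the CRUSH map, delete its key, and remove it from the osdmap
    ceph osd crush remove osd.6
    ceph auth del osd.6
    ceph osd rm 6
    # it should no longer appear here
    ceph osd tree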
[15:18] * sleinen (~Adium@ext-dhcp-187.eduroam.unibe.ch) has joined #ceph
[15:20] * jmlowe1 (~Adium@c-98-223-198-138.hsd1.in.comcast.net) Quit (Quit: Leaving.)
[15:21] * sleinen1 (~Adium@2001:620:0:26:d93e:624a:2536:38ef) has joined #ceph
[15:21] * rendar (~s@host88-109-dynamic.49-79-r.retail.telecomitalia.it) has joined #ceph
[15:22] <YD> and if I try to know why the pg is stuck :
[15:22] <YD> ceph-mon-lmb-B-1:~# ceph pg 6.49d query
[15:22] <YD> Traceback (most recent call last):
[15:22] <YD> File "/usr/bin/ceph", line 774, in <module>
[15:22] <YD> sys.exit(main())
[15:22] <YD> File "/usr/bin/ceph", line 698, in main
[15:22] <YD> inbuf)
[15:22] <YD> File "/usr/lib/python2.7/dist-packages/ceph_argparse.py", line 1044, in send_command
[15:22] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[15:22] <YD> raise RuntimeError('"{0}": exception {1}'.format(cmd, e))
[15:22] <YD> RuntimeError: "['pg', '6.49d', 'query']": exception No JSON object could be decoded
[15:22] * sleinen (~Adium@ext-dhcp-187.eduroam.unibe.ch) Quit (Read error: Operation timed out)
[15:22] <YD> (a pg query on a non-stuck pg succeeds)
[15:27] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) has joined #ceph
[15:28] * rudolfsteiner (~federicon@181.21.135.221) has joined #ceph
[15:31] * paravoid (~paravoid@scrooge.tty.gr) Quit (Ping timeout: 480 seconds)
[15:33] * paravoid (~paravoid@scrooge.tty.gr) has joined #ceph
[15:37] * topro (~topro@host-62-245-142-50.customer.m-online.net) Quit (Quit: Konversation terminated!)
[15:38] <libsysguy> anybody have any recommendations/opinions on what the best S3 Driver is for ceph?
[15:42] * loicd has a lead on http://tracker.ceph.com/issues/6117 , writing a test to validate
[15:43] * jcfischer (~fischer@user-23-20.vpn.switch.ch) has joined #ceph
[15:44] * alfredo|afk (~alfredode@c-24-99-84-83.hsd1.ga.comcast.net) has joined #ceph
[15:44] * kraken (~kraken@c-24-99-84-83.hsd1.ga.comcast.net) has joined #ceph
[15:45] * torment2 (~torment@pool-72-64-192-26.tampfl.fios.verizon.net) Quit (Read error: Operation timed out)
[15:45] <jcfischer> hi there - I'm trying to follow http://ceph.com/docs/master/rados/operations/add-or-rm-mons/ to remove 2 of our 5 mons from a cluster that is in WARN state (2 objects unfound). However, the command: ceph-mon -i {mon-id} --extract-monmap {map-path} doesn't work / isn't implemented...?
[15:51] * jmlowe (~Adium@2001:18e8:2:28cf:f000::5ab8) has joined #ceph
[15:51] * vata (~vata@2607:fad8:4:6:d943:e724:8ecb:1dce) has joined #ceph
[15:52] * KindTwo (~KindOne@h145.44.28.71.dynamic.ip.windstream.net) has joined #ceph
[15:54] * doxavore (~doug@99-89-22-187.lightspeed.rcsntx.sbcglobal.net) has joined #ceph
[15:55] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[15:55] * KindOne (~KindOne@0001a7db.user.oftc.net) has joined #ceph
[15:57] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[15:58] <jcfischer> gnaw - sorry - I did "ceph-mon -h {mon-id}"
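With a healthy quorum, removing monitors normally doesn't need --extract-monmap at all; that path is for clusters that have lost quorum. A sketch of the simple case, with 'd' as a hypothetical mon id:

    # stop the monitor daemon on its host (sysvinit shown; upstart would be 'stop ceph-mon id=d')
    sudo service ceph stop mon.d
    # remove it from the monmap and check the remaining mons still agree
    ceph mon remove d
    ceph quorum_status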
[15:59] * loicd reproduced the problem http://tracker.ceph.com/issues/6117#note-12
[16:00] * KindTwo (~KindOne@h145.44.28.71.dynamic.ip.windstream.net) Quit (Ping timeout: 480 seconds)
[16:02] * jmlowe (~Adium@2001:18e8:2:28cf:f000::5ab8) Quit (Quit: Leaving.)
[16:02] * jmlowe (~Adium@2001:18e8:2:28cf:f000::5ab8) has joined #ceph
[16:03] * KindTwo (~KindOne@h80.43.186.173.dynamic.ip.windstream.net) has joined #ceph
[16:04] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[16:04] * KindTwo is now known as KindOne
[16:05] * nwat (~nwat@ext-40-205.eduroam.rwth-aachen.de) Quit (Ping timeout: 480 seconds)
[16:05] * zhyan_ (~zhyan@101.82.160.28) Quit (Ping timeout: 480 seconds)
[16:11] <joao> oh wow
[16:11] <joao> TIL the tracker has colored syntax
[16:15] <YD> ok, I've figured out my problem: the stuck pgs were from a test pool with the replica count set to 1 :D
[16:16] <YD> brown paper bag :D :D
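For anyone else who hits this: a pool with replica size 1 leaves its pgs with nowhere to recover to once the single copy's OSD is gone. Checking and fixing the size (pool name 'test' is a placeholder):

    # show the replication size of a pool
    ceph osd pool get test size
    # raise it so losing one OSD no longer leaves pgs stale
    ceph osd pool set test size 2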
[16:17] * topro (~topro@host-62-245-142-50.customer.m-online.net) has joined #ceph
[16:18] <alfredo|afk> joao: same here
[16:18] <joao> hey alfredo|afk :)
[16:18] <joao> how's it going over there?
[16:18] * alfredo|afk is now known as alfredodeza
[16:18] <alfredodeza> it is going
[16:18] <alfredodeza> :D
[16:19] <joao> lol
[16:19] * zhyan_ (~zhyan@101.82.165.28) has joined #ceph
[16:20] * topro (~topro@host-62-245-142-50.customer.m-online.net) Quit ()
[16:21] * jmlowe (~Adium@2001:18e8:2:28cf:f000::5ab8) Quit (Quit: Leaving.)
[16:22] <janos> @joao: thanks again for your help yesterday/last night
[16:22] <cephalobot> janos: Error: "joao:" is not a valid command.
[16:22] <janos> :O
[16:22] <joao> everything's working okay?
[16:22] <janos> yeah i think so. rebalancing it
[16:23] <joao> cool :)
[16:23] <janos> i had some old tunables i removed
[16:23] <janos> stripped the crushmap down to as stock as possible
[16:23] <janos> it's 1.6% degraded and should be done within 30 minutes
[16:24] <janos> i had one over-full osd which kept it from being happy when i woke up
[16:24] <janos> that is being rectified now
[16:25] <janos> love waking up to 0.265% degraded and not budging. so close
[16:26] <joao> could be worse
[16:27] <joao> you could have woken up and found out the datacenter had been blown away
[16:27] <janos> oh definitely!
[16:27] <janos> hahaha
[16:27] <janos> (don't say that)
[16:27] <loicd> ccourtaut: would you agree to review https://github.com/ceph/ceph/pull/545 ?
[16:27] * loicd thinks it fixes http://tracker.ceph.com/issues/6117 :-)
[16:30] * sglwlb (~sglwlb@124.90.123.148) Quit ()
[16:31] * topro (~topro@host-62-245-142-50.customer.m-online.net) has joined #ceph
[16:33] * YD slaps loicd around a bit with a large trout
[16:34] <loicd> YD: :-D
[16:36] * nwat (~nwat@ext-40-205.eduroam.rwth-aachen.de) has joined #ceph
[16:36] * tziOm (~bjornar@194.19.106.242) Quit (Remote host closed the connection)
[16:37] <loicd> YD: I actually deserve the blame, I added this prototype of get_next to SharedPtrRegistry
[16:42] * BillK (~BillK-OFT@220-253-162-118.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[16:42] * matt_ (~matt@220-245-1-152.static.tpgi.com.au) Quit (Read error: Connection reset by peer)
[16:43] * matt_ (~matt@220-245-1-152.static.tpgi.com.au) has joined #ceph
[16:43] * topro (~topro@host-62-245-142-50.customer.m-online.net) Quit (Quit: Konversation terminated!)
[16:45] * sleinen1 (~Adium@2001:620:0:26:d93e:624a:2536:38ef) Quit (Quit: Leaving.)
[16:45] * sleinen (~Adium@ext-dhcp-187.eduroam.unibe.ch) has joined #ceph
[16:46] * jmlowe (~Adium@2601:d:a800:97:e008:28ad:b281:62bc) has joined #ceph
[16:47] * topro (~topro@host-62-245-142-50.customer.m-online.net) has joined #ceph
[16:48] * topro (~topro@host-62-245-142-50.customer.m-online.net) Quit ()
[16:48] * turduks (~hddddhd@69.174.99.94) has joined #ceph
[16:48] * alfredodeza is now known as alfredo|noms
[16:52] <jcfischer> I'm looking to add 6 OSD (SSD) to our existing cluster of 64 SATA disks. I plan to have one SSD pool and leave the big pool on the SATA (according to this tutorial) http://www.sebastien-han.fr/blog/2012/12/07/ceph-2-speed-storage-with-crush/
[16:52] <jcfischer> I have prepared the OSDs and drives but not added them to ceph yet, as I don't want the cluster to start rebalancing onto the SSDs
[16:53] * sleinen (~Adium@ext-dhcp-187.eduroam.unibe.ch) Quit (Ping timeout: 480 seconds)
[16:54] <jcfischer> what is my best way forward? Edit the crush map before I do "ceph-osd --mkfs --mkkey…" and "ceph auth add osd…."
[16:55] <jcfischer> or does the fun start when I "ceph osd crush set n 1 root=default host=xx" ?
[16:56] * topro (~topro@host-62-245-142-50.customer.m-online.net) has joined #ceph
[16:57] <jcfischer> or can I add them to the crush set, then edit the crush map, and then everything will work when I start the OSD processes?
[16:57] <janos> jcfischer: i think you would need to actually start the new OSD for it to begin rebalancing. start and mark it in
[16:57] <janos> iirc, just starting a new one is not "in" yet
[16:58] <jcfischer> and the other question: when I have the rules for SSD and SATA disks, can I assign the existing pools to one of these new rules?
[16:59] * jcfischer is trying to tread very carefully :)
[17:00] <janos> i will defer to others on that
[17:02] <mattch> jcfischer: I think you can add the osds but leave them down, then update the crushmap, then bring them up
[17:03] <jcfischer> janos & mattch: thanks - reading up on editing the crush map next
[17:05] * matt_ (~matt@220-245-1-152.static.tpgi.com.au) Quit (Ping timeout: 480 seconds)
[17:05] * foosinn (~stefan@office.unitedcolo.de) Quit (Quit: Leaving)
[17:09] * turduks (~hddddhd@69.174.99.94) Quit (Ping timeout: 480 seconds)
[17:09] <jcfischer> hmm - Currently, there are 14 pools in my cluster. They are on crush_ruleset 1 or 2 (data and metadata) and the rulesets say "step take default". So if I add two pools in the crush map (one that contains all the SSDs and one that takes all the SATA disks), and update the three existing rules so that they do "step take sata", I should be safe… Anyone with more crush map foo to verify this?
[17:10] * topro (~topro@host-62-245-142-50.customer.m-online.net) Quit (Quit: Konversation terminated!)
[17:11] * sagelap (~sage@2600:1012:b002:5d15:10a4:efbc:77f3:da42) has joined #ceph
[17:11] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:12] * nwat (~nwat@ext-40-205.eduroam.rwth-aachen.de) Quit (Quit: Lost terminal)
[17:13] * vipr (~vipr@office.loft169.be) Quit (Remote host closed the connection)
[17:19] * topro (~topro@host-62-245-142-50.customer.m-online.net) has joined #ceph
[17:20] <mattch> jcfischer; I take it you've seen http://www.sebastien-han.fr/blog/2012/12/07/ceph-2-speed-storage-with-crush/ ?
[17:20] <jcfischer> that's what I'm working from
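A sketch of the workflow from that post as it applies here; bucket, rule, and pool names below are illustrative, not jcfischer's actual layout. The key point is to point the existing pools at a sata-only rule before the SSD OSDs are marked up/in, so nothing rebalances onto them:

    # dump and decompile the current crush map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt: add ssd and sata root buckets, plus rules whose
    # first step is 'step take ssd' or 'step take sata' respectively
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new
    # move an existing pool onto the sata rule (ruleset id 3 is hypothetical)
    ceph osd pool set rbd crush_ruleset 3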
[17:21] * nhm (~nhm@184-97-244-237.mpls.qwest.net) has joined #ceph
[17:21] * ChanServ sets mode +o nhm
[17:21] * DarkAce-Z is now known as DarkAceZ
[17:23] * rudolfsteiner (~federicon@181.21.135.221) Quit (Quit: rudolfsteiner)
[17:26] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[17:26] * ChanServ sets mode +v andreask
[17:31] <janos> woohoo. healthy cluster again (bobtail to dumpling)
[17:31] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[17:31] <nhm> janos: yay!
[17:31] * rendar (~s@host88-109-dynamic.49-79-r.retail.telecomitalia.it) Quit ()
[17:32] <janos> this seems to balance out osd's better too
[17:32] <janos> very nice
[17:36] * Bada (~Bada@195.65.225.142) Quit (Ping timeout: 480 seconds)
[17:36] <loicd> janos: congratulations :-)
[17:36] <janos> thanks. i can breathe easier
[17:36] <janos> you guys kick butt
[17:37] * jwilliams (~jwilliams@72.5.59.176) has joined #ceph
[17:41] <jwilliams> I recently upgraded one of our ceph clusters to dumpling, and now when I restart an osd it spits out "log bound mismatch" like this bug: http://tracker.ceph.com/issues/6057
[17:41] <jwilliams> my problem is that I also have some unfounds that have inexplicably come up and when I try to mark lost, the osd will crash with this error: FAILED assert(peer == backfill_target)
[17:42] <jwilliams> effectively preventing me from marking them lost
[17:42] * topro (~topro@host-62-245-142-50.customer.m-online.net) Quit (Quit: Konversation terminated!)
[17:42] * libsysguy (~libsysguy@2620:0:28a0:2004:dc25:a996:5367:3150) Quit (Quit: Leaving.)
[17:43] * libsysguy (~libsysguy@ng1.cptxoffice.net) has joined #ceph
[17:44] * odyssey4me (~odyssey4m@165.233.71.2) Quit (Ping timeout: 480 seconds)
[17:46] * turduks (~hddddhd@69.174.99.94) has joined #ceph
[17:46] * tnt (~tnt@212-166-48-236.win.be) Quit (Ping timeout: 480 seconds)
[17:47] * Tamil1 (~Adium@cpe-108-184-66-69.socal.res.rr.com) has joined #ceph
[17:48] * libsysguy (~libsysguy@ng1.cptxoffice.net) Quit (Read error: Connection reset by peer)
[17:50] * artwork_lv (~artwork_l@adsl.office.mediapeers.com) Quit (Quit: artwork_lv)
[17:50] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[17:51] * alfredo|noms is now known as alfredodeza
[17:54] * odyssey4me (~odyssey4m@165.233.71.2) has joined #ceph
[17:55] * sagelap1 (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) has joined #ceph
[17:56] * xarses (~andreww@c-50-136-199-72.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:57] <jmlowe> jwilliams: what did you upgrade from?
[17:57] * tnt (~tnt@91.177.230.140) has joined #ceph
[17:58] * sagelap (~sage@2600:1012:b002:5d15:10a4:efbc:77f3:da42) Quit (Ping timeout: 480 seconds)
[18:02] <jwilliams> it looks like we went from 0.56.3->0.56.4->0.66-1->0.67-rc3->0.67.1-1->0.67.2
[18:02] <jwilliams> though we might have skipped 0.66-1
[18:03] * grepory (~Adium@50-115-70-146.static-ip.telepacific.net) has joined #ceph
[18:07] * xarses (~andreww@204.11.231.50.static.etheric.net) has joined #ceph
[18:09] * topro (~topro@host-62-245-142-50.customer.m-online.net) has joined #ceph
[18:10] * topro (~topro@host-62-245-142-50.customer.m-online.net) Quit ()
[18:14] * odyssey4me (~odyssey4m@165.233.71.2) Quit (Ping timeout: 480 seconds)
[18:15] * odyssey4me (~odyssey4m@165.233.71.2) has joined #ceph
[18:17] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[18:19] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) has joined #ceph
[18:19] <odyssey4me> I'm getting (ProgrammingError) (1146, "Table 'cinder.iscsi_targets' doesn't exist when trying to delete a RBD volume from Cinder. Can anyone assist? Full stack error here: http://pastebin.com/cumGwVsK
[18:20] <odyssey4me> joshd, perhaps if you're around?
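That one looks like an OpenStack-side problem rather than ceph/rbd: the cinder database schema apparently was never created or migrated. A hedged guess at the fix, run on the cinder host:

    # create/upgrade the cinder database tables, then restart the volume service
    cinder-manage db sync
    sudo service cinder-volume restart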
[18:25] <loicd> joao: thanks for your reviews on https://github.com/ceph/ceph/pull/539, I've applied the suggested changes. Will you be the one merging them or should I ask someone else? I manually added the "Reviewed-by" but maybe there is an automatic way of doing this.
[18:26] <joao> I always add the Reviewed-by: manually as well
[18:27] <joao> I'd suggest you run the text by someone familiar with the technical background first; I'll be happy to merge it afterwards if no one else does
[18:28] <jcfischer> I have a pg that has had 2 unfound objects for over 2 weeks now. I can't mark them as lost because "pg has 2 objects but we haven't probed all sources, not marking lost". Restarting OSDs didn't help. Now this pg has become "active+recovering+degraded+remapped" - here's the result of "ceph pg … query": http://pastebin.com/bbEK2hb3
[18:28] <jcfischer> any ideas how I can recover from this? ceph version 0.61.5
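For context, these are the commands involved in chasing unfound objects; '2.5' is a placeholder pg id. The query output's might_have_unfound list shows which OSDs still need to be probed (or declared lost) before the cluster will accept a mark-lost:

    # list the unfound objects and see which OSDs might still hold copies
    ceph pg 2.5 list_missing
    ceph pg 2.5 query
    # once all possible sources have been probed or marked lost,
    # revert the unfound objects to their last known version
    ceph pg 2.5 mark_unfound_lost revert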
[18:31] * ScOut3R (~scout3r@54026B73.dsl.pool.telekom.hu) has joined #ceph
[18:33] * skm (~smiley@205.153.36.170) has joined #ceph
[18:35] * Vjarjadian (~IceChat77@176.254.37.210) Quit (Quit: Man who run behind car get exhausted)
[18:36] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[18:39] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[18:40] * turduks (~hddddhd@69.174.99.94) Quit (Remote host closed the connection)
[18:40] * ssejour (~sebastien@out-chantepie.fr.clara.net) Quit (Quit: Leaving.)
[18:41] * sleinen1 (~Adium@2001:620:0:25:294b:d7ab:1be3:9a7f) has joined #ceph
[18:42] * jcfischer (~fischer@user-23-20.vpn.switch.ch) Quit (Quit: jcfischer)
[18:42] * Tamil1 (~Adium@cpe-108-184-66-69.socal.res.rr.com) Quit (Quit: Leaving.)
[18:45] * turduks (~hddddhd@69.174.99.94) has joined #ceph
[18:45] * turduks (~hddddhd@69.174.99.94) Quit ()
[18:46] * buck (~buck@c-24-6-91-4.hsd1.ca.comcast.net) has joined #ceph
[18:47] * turduks (~hddddhd@69.174.99.94) has joined #ceph
[18:47] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[18:48] * libsysguy (~libsysguy@ng1.cptxoffice.net) has joined #ceph
[18:49] <libsysguy> so is the ceph-deploy tool designed to require root level access on all nodes to manage them or just for the install process?
[18:49] * gregaf1 (~Adium@2607:f298:a:607:94a1:e5b4:fc26:deb5) has joined #ceph
[18:49] * turduks (~hddddhd@69.174.99.94) Quit ()
[18:49] * turduks (~hddddhd@69.174.99.94) has joined #ceph
[18:50] <alfredodeza> libsysguy: there are a bunch of commands that ceph-deploy will need to execute on the remote end that need sudo
[18:50] <alfredodeza> starting/stopping services for one
[18:51] <alfredodeza> that on top of installing ceph of course :)
[18:51] <libsysguy> I was trying to figure out a way to manage the cluster without using the tool
[18:51] <libsysguy> I know I can install ceph wtih puppet
[18:51] <libsysguy> but it seems like you really need the admin node with ceph-deploy on it
[18:51] * turduks (~hddddhd@69.174.99.94) Quit ()
[18:51] <alfredodeza> oh absolutely, ceph-deploy is meant to be a guide to show you how to get things done, so you can later improve on that by using chef/puppet/ansible etc...
[18:52] <alfredodeza> oh really? how so?
[18:52] <libsysguy> well the best example I have is adding and removing monitors
[18:52] <libsysguy> I suppose that could be done with puppet
[18:52] <alfredodeza> sure
[18:53] <libsysguy> but it seems pretty complex with the mapping file
[18:53] <libsysguy> I guess nobody ever said distributed storage was easy :p
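As a rough idea of what such a puppet module would have to automate, adding a monitor by hand (per the add-or-rm-mons docs) looks roughly like this; the mon id 'b' and the address are made up:

    # fetch the mon keyring and current monmap from the running cluster
    ceph auth get mon. -o /tmp/mon.keyring
    ceph mon getmap -o /tmp/monmap
    # build the new monitor's data directory and add it to the map
    sudo ceph-mon -i b --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
    ceph mon add b 192.168.0.11:6789
    # start it and confirm it joins quorum
    sudo service ceph start mon.b
    ceph quorum_status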
[18:55] * sagelap1 (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) Quit (Quit: Leaving.)
[18:56] * gregaf (~Adium@2607:f298:a:607:99b4:5900:8f78:54a0) Quit (Ping timeout: 480 seconds)
[18:56] <ircolle> libsysguy - and if they did, they were lying
[18:57] <libsysguy> heh
[18:57] * Tamil1 (~Adium@cpe-108-184-66-69.socal.res.rr.com) has joined #ceph
[18:58] * turduks (~hddddhd@69.174.99.94) has joined #ceph
[19:00] * turduks (~hddddhd@69.174.99.94) Quit ()
[19:00] * turduks (~hddddhd@69.174.99.94) has joined #ceph
[19:01] * alram (~alram@38.122.20.226) has joined #ceph
[19:03] * turduks (~hddddhd@69.174.99.94) Quit ()
[19:03] * Tamil1 (~Adium@cpe-108-184-66-69.socal.res.rr.com) Quit (Quit: Leaving.)
[19:05] * zhyan__ (~zhyan@101.82.117.202) has joined #ceph
[19:12] * zhyan_ (~zhyan@101.82.165.28) Quit (Ping timeout: 480 seconds)
[19:13] * Tamil1 (~Adium@cpe-108-184-66-69.socal.res.rr.com) has joined #ceph
[19:15] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (Quit: Leaving.)
[19:18] * jluis (~JL@89.181.146.94) has joined #ceph
[19:18] * rturk-away is now known as rturk
[19:19] * jluis is now known as joao|lap
[19:30] * indeed (~indeed@206.124.126.33) has joined #ceph
[19:36] * imjustmatthew (~imjustmat@c-24-127-107-51.hsd1.va.comcast.net) has joined #ceph
[19:38] * imjustmatthew (~imjustmat@c-24-127-107-51.hsd1.va.comcast.net) Quit (Remote host closed the connection)
[19:38] * imjustmatthew (~imjustmat@c-24-127-107-51.hsd1.va.comcast.net) has joined #ceph
[19:39] * imjustmatthew (~imjustmat@c-24-127-107-51.hsd1.va.comcast.net) Quit (Remote host closed the connection)
[19:39] * imjustmatthew (~imjustmat@c-24-127-107-51.hsd1.va.comcast.net) has joined #ceph
[19:40] * themgt (~themgt@pc-236-196-164-190.cm.vtr.net) has joined #ceph
[19:42] * dpippenger (~riven@tenant.pas.idealab.com) has joined #ceph
[19:44] * rudolfsteiner (~federicon@181.21.151.146) has joined #ceph
[19:47] * Steki (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[19:51] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Ping timeout: 480 seconds)
[19:52] * Steki is now known as BManojlovic
[19:52] * zhyan__ (~zhyan@101.82.117.202) Quit (Read error: Connection timed out)
[19:52] * zhyan__ (~zhyan@101.82.117.202) has joined #ceph
[20:00] * Tamil1 (~Adium@cpe-108-184-66-69.socal.res.rr.com) Quit (Quit: Leaving.)
[20:02] * Cube (~Cube@12.248.40.138) has joined #ceph
[20:03] * joao|lap (~JL@89.181.146.94) Quit (Remote host closed the connection)
[20:19] * kraken (~kraken@c-24-99-84-83.hsd1.ga.comcast.net) Quit (Remote host closed the connection)
[20:19] * alfredodeza (~alfredode@c-24-99-84-83.hsd1.ga.comcast.net) Quit (Remote host closed the connection)
[20:19] * libsysguy (~libsysguy@ng1.cptxoffice.net) Quit (Quit: Leaving.)
[20:21] * kraken (~kraken@c-24-99-84-83.hsd1.ga.comcast.net) has joined #ceph
[20:21] * alfredodeza (~alfredode@c-24-99-84-83.hsd1.ga.comcast.net) has joined #ceph
[20:25] * indeed (~indeed@206.124.126.33) Quit (Remote host closed the connection)
[20:25] * indeed (~indeed@206.124.126.33) has joined #ceph
[20:25] * indeed (~indeed@206.124.126.33) Quit (Remote host closed the connection)
[20:28] * indeed (~indeed@206.124.126.33) has joined #ceph
[20:28] * jlhawn (~jlhawn@208-90-212-77.PUBLIC.monkeybrains.net) has joined #ceph
[20:29] * rudolfsteiner (~federicon@181.21.151.146) Quit (Quit: rudolfsteiner)
[20:32] * indeed (~indeed@206.124.126.33) Quit (Remote host closed the connection)
[20:38] * indeed (~indeed@206.124.126.33) has joined #ceph
[20:41] * indeed (~indeed@206.124.126.33) Quit (Remote host closed the connection)
[20:42] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[20:43] * indeed (~indeed@206.124.126.33) has joined #ceph
[20:44] * rudolfsteiner (~federicon@181.21.151.146) has joined #ceph
[20:59] * KindTwo (~KindOne@h227.170.17.98.dynamic.ip.windstream.net) has joined #ceph
[21:01] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[21:01] * KindTwo is now known as KindOne
[21:03] * odyssey4me (~odyssey4m@165.233.71.2) Quit (Ping timeout: 480 seconds)
[21:05] * markbby (~Adium@168.94.245.2) has joined #ceph
[21:06] * terje_ (~joey@174-16-125-70.hlrn.qwest.net) has joined #ceph
[21:06] * markbby (~Adium@168.94.245.2) Quit ()
[21:09] * markbby (~Adium@168.94.245.2) has joined #ceph
[21:10] * rudolfsteiner (~federicon@181.21.151.146) Quit (Quit: rudolfsteiner)
[21:17] <sagewk> joshd: can you look at https://github.com/ceph/ceph-client/commit/9c793a5bf228a7bc14219be1e1130b9f611325ab ?
[21:21] * Machske (~Bram@d5152D87C.static.telenet.be) has joined #ceph
[21:25] <joshd> sagewk: looks ok to me
[21:25] <sagewk> tahnks
[21:27] * sprachgenerator (~sprachgen@130.202.135.222) has joined #ceph
[21:29] * DarkAceZ (~BillyMays@50.107.55.36) Quit (Ping timeout: 480 seconds)
[21:30] * DarkAce-Z (~BillyMays@50.107.55.36) has joined #ceph
[21:31] * Tamil1 (~Adium@cpe-108-184-66-69.socal.res.rr.com) has joined #ceph
[21:36] <sagewk> gregaf1: does https://github.com/ceph/ceph/blob/wip-6036/doc/dev/cache-pool.rst look ok?
[21:38] <gregaf1> sagewk: just adding a tier subcommand to the syntax?
[21:38] <sagewk> s/cache/tier/ so that we can use it for ec too
[21:38] * zhyan__ (~zhyan@101.82.117.202) Quit (Ping timeout: 480 seconds)
[21:38] <gregaf1> yeah, looks fine
[21:42] * indeed (~indeed@206.124.126.33) Quit (Remote host closed the connection)
[21:42] * rturk is now known as rturk-away
[21:46] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[21:49] * danieagle (~Daniel@177.97.251.212) has joined #ceph
[21:52] * rudolfsteiner (~federicon@181.21.135.221) has joined #ceph
[21:52] * rudolfsteiner (~federicon@181.21.135.221) Quit ()
[21:57] * jluis (~joao@89.181.146.94) has joined #ceph
[21:57] * ChanServ sets mode +o jluis
[22:00] <mtanski> Sage, I got the updates for fscache https://bitbucket.org/adfin/linux-fs/commits/branch/wip-ceph-fscache
[22:00] <mtanski> I made one fix (8eeeba4), I wasn't cleaning up the re-validate work queue
[22:01] <mtanski> And the second patch gets rid of all the ifdefs in the c code now
[22:01] <mtanski> I checked that it compiles with the feature enabled and disabled, since from your email it sounds like I fumbled it last time
[22:01] <mtanski> I wasn't sure if you were planning to squash some of this stuff or if you wanted a different commit
[22:02] <mtanski> I'll resubmit it to the ML, in a bit
[22:02] <mtanski> First, I'm going to reboot and make sure it still works in a quick test
[22:04] <mtanski> It might be a while before I get to that due to a meeting on my end
[22:04] * joao (~joao@89-181-146-94.net.novis.pt) Quit (Ping timeout: 480 seconds)
[22:06] * themgt (~themgt@pc-236-196-164-190.cm.vtr.net) Quit (Quit: Pogoapp - http://www.pogoapp.com)
[22:11] * indeed (~indeed@206.124.126.33) has joined #ceph
[22:12] * wonkotheinsane (~jf@jf.ccs.usherbrooke.ca) Quit (Quit: WeeChat 0.3.7)
[22:14] * bandrus (~Adium@12.248.40.138) has joined #ceph
[22:19] * rudolfsteiner (~federicon@181.21.148.211) has joined #ceph
[22:21] * rudolfsteiner (~federicon@181.21.148.211) Quit ()
[22:23] * allsystemsarego (~allsystem@5-12-37-127.residential.rdsnet.ro) Quit (Quit: Leaving)
[22:29] * imjustmatthew_ (~imjustmat@pool-72-84-255-225.rcmdva.fios.verizon.net) has joined #ceph
[22:32] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[22:32] * jcfischer (~fischer@user-23-11.vpn.switch.ch) has joined #ceph
[22:38] * kraken (~kraken@c-24-99-84-83.hsd1.ga.comcast.net) Quit (Remote host closed the connection)
[22:38] * kraken (~kraken@c-24-99-84-83.hsd1.ga.comcast.net) has joined #ceph
[22:40] * kraken (~kraken@c-24-99-84-83.hsd1.ga.comcast.net) Quit (Remote host closed the connection)
[22:40] * kraken (~kraken@c-24-99-84-83.hsd1.ga.comcast.net) has joined #ceph
[22:42] * kraken (~kraken@c-24-99-84-83.hsd1.ga.comcast.net) Quit (Remote host closed the connection)
[22:42] * kraken (~kraken@c-24-99-84-83.hsd1.ga.comcast.net) has joined #ceph
[22:44] * kraken (~kraken@c-24-99-84-83.hsd1.ga.comcast.net) Quit (Remote host closed the connection)
[22:44] * kraken (~kraken@c-24-99-84-83.hsd1.ga.comcast.net) has joined #ceph
[22:44] * b1tbkt (~b1tbkt@24-217-192-155.dhcp.stls.mo.charter.com) has joined #ceph
[22:46] <itatar> hi, I tried to follow the ceph-deploy tutorial but my ceph -s gives me this: ceph@cephadmin:~/my-cluster$ ceph -s
[22:46] <itatar> 2013-08-27 13:45:24.536429 7f78c5efa700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
[22:46] <itatar> 2013-08-27 13:45:24.536436 7f78c5efa700 0 librados: client.admin initialization error (2) No such file or directory
[22:46] <itatar> Error connecting to cluster: ObjectNotFound
[22:46] <itatar> how do I fix the keyring problem?
[22:47] <alfredodeza> itatar: how did you got to this point?
[22:47] <xarses> try cepph-deploy gather keys
[22:47] <xarses> erm
[22:47] <xarses> ceph-deploy gatherkeys
[22:47] <xarses> if that fails, your mon's cant communicate
[22:47] <xarses> so the keys haven't been generated yet
[22:48] * jcfischer (~fischer@user-23-11.vpn.switch.ch) Quit (Ping timeout: 480 seconds)
[22:49] <itatar> I followed the instructions here http://ceph.com/howto/ and here http://ceph.com/docs/master/start/quick-ceph-deploy/. Things didn't work, then I tried I-don't-know-what :) and here I am.. I guess I could start over, but I'd rather try to fix whatever I am missing if someone can help me
[22:49] <xarses> itatar: see above
[22:50] <itatar> ceph@cephadmin:~/my-cluster$ ceph-deploy gatherkeys cephserver
[22:50] <itatar> [ceph_deploy.gatherkeys][DEBUG ] Have ceph.client.admin.keyring
[22:50] <itatar> [ceph_deploy.gatherkeys][DEBUG ] Have ceph.mon.keyring
[22:50] <itatar> [ceph_deploy.gatherkeys][DEBUG ] Have ceph.bootstrap-osd.keyring
[22:50] <itatar> [ceph_deploy.gatherkeys][DEBUG ] Have ceph.bootstrap-mds.keyring
[22:50] <itatar> ceph@cephadmin:~/my-cluster$
[22:50] <skm> is anyone here using 10GigE switches from Dell (either Force 10 or Power Connect series)?
[22:50] * indeed (~indeed@206.124.126.33) Quit (Remote host closed the connection)
[22:50] <itatar> do the messages mean that I already have keys so new keys were not gathered?
[22:51] <xarses> there should be ceph.*.keyring in the current directory now
[22:51] <alfredodeza> that looks like a successful run of gatherkeys to me
[22:51] * TiCPU (~jeromepou@190-130.cgocable.ca) Quit (Quit: Ex-Chat)
[22:51] <xarses> now try ceph -s again
[22:52] <itatar> same error :(
[22:52] * jcfischer (~fischer@port-212-202-245-234.static.qsc.de) has joined #ceph
[22:52] <itatar> (2013-08-27 13:51:51.552186 7fae0e86e700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication)
[22:52] * vata (~vata@2607:fad8:4:6:d943:e724:8ecb:1dce) Quit (Quit: Leaving.)
[22:53] <itatar> ceph@cephadmin:~/my-cluster$ ls *keyring
[22:53] <itatar> ceph.bootstrap-mds.keyring ceph.bootstrap-osd.keyring ceph.client.admin.keyring ceph.mon.keyring
[22:53] <xarses> odd
[22:53] <xarses> try ceph -s -k ceph.client.admin.keyring
[22:53] <sagewk> if nothing else emperor is going to teach me to spell tier
[22:54] * danieagle_ (~Daniel@177.99.133.157) has joined #ceph
[22:54] <itatar> ceph@cephadmin:~/my-cluster$ ceph -s -k ceph.client.admin.keyring
[22:54] <itatar> cluster a041ed8a-59b1-4639-8ff0-9e09d451c7a1
[22:54] <itatar> health HEALTH_WARN 192 pgs stale; 192 pgs stuck stale; 2/2 in osds are down
[22:54] <itatar> monmap e1: 1 mons at {cephserver=192.168.152.110:6789/0}, election epoch 1, quorum 0 cephserver
[22:54] <itatar> osdmap e9: 2 osds: 0 up, 2 in
[22:54] <itatar> pgmap v16: 192 pgs: 192 stale+active+clean; 0 bytes data, 12072 MB used, 24025 MB / 38043 MB avail
[22:54] <itatar> mdsmap e3: 1/1/1 up {0=cephserver=up:creating}
[22:54] <itatar> ceph@cephadmin:~/my-cluster$
[22:54] <itatar> that looks better, thank you
[22:55] <xarses> they appear to work OK without -k if they are in ~
[22:55] <xarses> or /etc/ceph/
[22:55] <xarses> im not sure which; im away from my cluster
[22:56] <itatar> not home dir and I don't have /etc/ceph on my system (should it be there or should I create it?)
[22:56] * kraken (~kraken@c-24-99-84-83.hsd1.ga.comcast.net) Quit (Remote host closed the connection)
[22:57] * kraken (~kraken@c-24-99-84-83.hsd1.ga.comcast.net) has joined #ceph
[22:57] <xarses> wherever ceph.conf is
[22:57] <xarses> there is another dir that's normal
[22:57] <xarses> i thought /etc/ceph was default
[22:57] <itatar> ok ceph.conf is in ~/my-cluster
[22:58] <itatar> (that's where I ran ceph-deploy from per tutorial)
[22:58] <xarses> there is a slightly strange ceph.conf in the working directory that you run ceph-deploy from
[22:58] <jmlowe> sagewk: i before e except after c and a few hundred exceptions http://cheezburger.com/7691772672
[22:59] <xarses> it doesn't appear to be the same as the config that the mon/osd's use
[22:59] * kraken (~kraken@c-24-99-84-83.hsd1.ga.comcast.net) Quit (Remote host closed the connection)
[22:59] * kraken (~kraken@c-24-99-84-83.hsd1.ga.comcast.net) has joined #ceph
[22:59] <sagewk> ha
[22:59] <itatar> ok, well, it works with -k and that's good enough for me for now. I just want to give ceph a try to read and write some data
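The reason -k is needed is that the ceph CLI looks for the conf and admin keyring in /etc/ceph by default. A sketch of the usual fix, run from the ceph-deploy working directory on the admin host (newer ceph-deploy versions can also push these with 'ceph-deploy admin <node>'):

    # put the cluster conf and admin keyring where the CLI expects them
    sudo mkdir -p /etc/ceph
    sudo cp ceph.conf ceph.client.admin.keyring /etc/ceph/
    # ceph -s should now work without -k
    ceph -s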
[23:00] <jmlowe> major reason I started using computers, appleworks spell check
[23:00] <itatar> does the health status that ceph -s return indicate I have a problem?
[23:00] <xarses> you don't have any osd's running
[23:00] <xarses> so yes, there is a problem
[23:00] * danieagle (~Daniel@177.97.251.212) Quit (Ping timeout: 480 seconds)
[23:01] <xarses> service ceph -a start
[23:01] <xarses> unless you haven't created them yet
[23:01] <itatar> on cephserver?
[23:01] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[23:01] <itatar> (I think I created them)
[23:01] <xarses> you did ceph-deploy osd [create|activate] at some point?
[23:02] <itatar> I did (per the tutorial)
[23:02] * indeed (~indeed@206.124.126.33) has joined #ceph
[23:02] <xarses> ok, then try the service ceph -a start
[23:03] <itatar> no output from that command
[23:04] <itatar> is there a specific process I can ps|grep for?
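A quick way to check whether any daemons are actually running on cephserver (standard daemon names assumed; the init flavour varies by distro):

    # look for running ceph daemons
    ps aux | grep -E 'ceph-(mon|osd|mds)' | grep -v grep
    # or ask the init script
    sudo service ceph status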
[23:07] * kraken (~kraken@c-24-99-84-83.hsd1.ga.comcast.net) Quit (Remote host closed the connection)
[23:09] * kraken (~kraken@c-24-99-84-83.hsd1.ga.comcast.net) has joined #ceph
[23:09] <itatar> maybe I screwed up the osd create/activate commands..
[23:11] * alfredodeza (~alfredode@c-24-99-84-83.hsd1.ga.comcast.net) Quit (Remote host closed the connection)
[23:12] <xarses> you can try doing a ceph-deploy activate again
[23:13] <xarses> i think its odd that you dont have /etc/ceph/ceph.conf
[23:13] <dmick> itatar: you should see ceph-osd processes, surely
[23:13] <dmick> what are you using for data and journal for the osds?
[23:13] <xarses> search for it in your filesystem (outside of your ceph-deploy) directory
[23:14] * dxd828 (~dxd828@host-92-24-118-99.ppp.as43234.net) has joined #ceph
[23:14] * dxd828 (~dxd828@host-92-24-118-99.ppp.as43234.net) Quit ()
[23:16] <itatar> per tutorial I have two servers - cephadmin and cephserver. I issue ceph-deploy commands from the admin. I now see that cephserver does have /etc/ceph/ceph.conf. But it (cephserver) doesn't have an osd process running
[23:16] <itatar> how does it get started?
[23:16] <itatar> (I executed 'service ceph -a start' on cephserver)
[23:17] * kraken (~kraken@c-24-99-84-83.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[23:17] <xarses> were there any ==osd.X== lines in there?
[23:17] <xarses> or just ==mon.X==?
[23:18] <itatar> in the conf file on cephserver?
[23:18] * indeed_ (~indeed@206.124.126.33) has joined #ceph
[23:18] <xarses> output from service
[23:19] <itatar> hm.. when I executed the service command there was no output but I just tried it again and got a permission error.. :
[23:19] <itatar> ceph@cephserver:/var/lib/ceph$ service ceph -a start
[23:19] <itatar> INFO:ceph-disk:Activating /dev/disk/by-parttypeuuid/4fbd7e29-9d25-41b8-afd0-062c0ceff05d.b8ca2910-cbae-41e5-9d8e-37dbf0c77288
[23:19] <itatar> Traceback (most recent call last):
[23:19] <itatar> File "/usr/sbin/ceph-disk", line 2327, in <module>
[23:19] <itatar> main()
[23:19] <itatar> File "/usr/sbin/ceph-disk", line 2316, in main
[23:19] <itatar> args.func(args)
[23:19] <itatar> File "/usr/sbin/ceph-disk", line 1815, in main_activate_all
[23:19] <itatar> activate_lock.acquire()
[23:19] <itatar> File "/usr/sbin/ceph-disk", line 125, in acquire
[23:19] <itatar> self.fd = file(self.fn, 'w')
[23:19] <itatar> IOError: [Errno 13] Permission denied: '/var/lib/ceph/tmp/ceph-disk.activate.lock'
[23:19] <itatar> ceph@cephserver:/var/lib/ceph$
[23:20] <itatar> that lock file is owned by root
[23:20] <itatar> -rw-r--r-- 1 root root 0 Aug 27 14:13 /var/lib/ceph/tmp/ceph-disk.activate.lock
[23:20] <xarses> throw a sudo up there
[23:21] * evil_ste1e (~evil_stev@irc-vm.nerdvana.net.au) has joined #ceph
[23:21] <dmick> yeah, anything starting services always needs root
[23:22] * evil_steve (~evil_stev@irc-vm.nerdvana.net.au) Quit (Read error: Connection reset by peer)
[23:22] <itatar> ceph@cephserver:/var/lib/ceph$ sudo service ceph -a start
[23:22] <itatar> INFO:ceph-disk:Activating /dev/disk/by-parttypeuuid/4fbd7e29-9d25-41b8-afd0-062c0ceff05d.b8ca2910-cbae-41e5-9d8e-37dbf0c77288
[23:22] <itatar> ERROR:ceph-disk:Failed to activate
[23:22] <itatar> ceph-disk: Error: No cluster conf found in /etc/ceph with fsid a041ed8a-59b1-4639-8ff0-9e09d451c7a1
[23:22] <itatar> ceph-disk: Error: One or more partitions failed to activate
[23:22] <itatar> ceph@cephserver:/var/lib/ceph$
[23:22] <itatar> I did try to create osd for /dev/sdb and /dev/sdc but the second one failed. so maybe that is the reason for the error..
[23:23] <itatar> can I make ceph forget about /dev/sdc?
[23:23] <itatar> (if that is indeed a problem)
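That activate error usually means the fsid recorded on the prepared OSD partition doesn't match the fsid in /etc/ceph/ceph.conf, e.g. because the cluster conf was regenerated after the disks were prepared. A quick comparison, assuming the keyring gathered earlier:

    # fsid in the local conf on cephserver
    grep fsid /etc/ceph/ceph.conf
    # fsid the monitors actually report (the 'cluster' line)
    ceph -s -k ceph.client.admin.keyring | grep cluster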
[23:24] * indeed (~indeed@206.124.126.33) Quit (Ping timeout: 480 seconds)
[23:28] * yanzheng (~zhyan@101.82.58.61) has joined #ceph
[23:29] <itatar> or maybe it has nothing to do with that..
[23:29] <xarses> the fsid is an issue
[23:30] <itatar> what creates fsid and can I redo that?
[23:30] <xarses> the fsid is generated during ceph-deploy new <monitors>
[23:30] * mschiff (~mschiff@46.59.142.56) has joined #ceph
[23:32] <itatar> is there a way to delete a monitor so I can create a new one?
[23:32] <xarses> you can "start over" if you're ok with that
[23:33] <itatar> I am fine with that :) I only need to know how to get to a clean state
[23:33] <xarses> ceph-deploy purgedata <nodes> ; ceph-deploy purge <nodes>
[23:33] <xarses> then start over with ceph-deploy install
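Spelled out, the reset-and-redeploy sequence being suggested looks roughly like this, assuming a single node called cephserver and /dev/sdb as the data disk:

    # wipe ceph data and packages from the node, and forget the old keys locally
    ceph-deploy purgedata cephserver
    ceph-deploy purge cephserver
    ceph-deploy forgetkeys
    # redeploy from scratch
    ceph-deploy new cephserver
    ceph-deploy install cephserver
    ceph-deploy mon create cephserver
    ceph-deploy gatherkeys cephserver
    ceph-deploy osd create cephserver:sdb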
[23:36] <itatar> done. do you think I should follow the instructions on http://ceph.com/docs/master/start/quick-start-preflight/ and http://ceph.com/docs/master/start/quick-ceph-deploy/
[23:36] <itatar> or is there a better tutorial I should be looking at?
[23:38] * rudolfsteiner (~federicon@200.41.133.175) has joined #ceph
[23:40] <xarses> nope, those should be fine
[23:40] * BillK (~BillK-OFT@220-253-162-118.dyn.iinet.net.au) has joined #ceph
[23:40] * indeed_ (~indeed@206.124.126.33) Quit (Remote host closed the connection)
[23:40] <xarses> with the osd's you can (create) OR (prepare and activate)
[23:41] * danieagle_ (~Daniel@177.99.133.157) Quit (Quit: inte+ e Obrigado Por tudo mesmo! :-D)
[23:41] <xarses> and unless you want to use cephFS, you can skip the mds create if you desire
[23:42] <itatar> the http://ceph.com/howto/ tuturial describes prepare/activate
[23:42] <xarses> both are valid
[23:42] <xarses> create does both if necessary
[23:42] <itatar> ok. I'll try to follow the steps again then
[23:43] <itatar> do I need to do anything on the cephadmin side (the host I execute ceph-deploy from) to clear the state?
[23:44] <itatar> before starting over. for example it has files in the ~/my-cluster dir
[23:46] <xarses> should be find
[23:47] <xarses> /find/fine
[23:47] <xarses> i run from ~
[23:47] * indeed (~indeed@206.124.126.33) has joined #ceph
[23:47] * rturk-away is now known as rturk
[23:48] <sagewk> sjust: i think we can merge https://github.com/ceph/ceph/pull/541. doesn't include copyfrom or some of the new tiering bits, but it has the osd objecter infrastructure
[23:49] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[23:50] <xarses> ceph-deploy 1.0.0 gatherkeys returns with an exit code of 0 (zero) when it fails to retrieve the keys? Is this expected, and is it different in the current version?
[23:51] * bstillwell (~bryan@bokeoa.com) has joined #ceph
[23:51] <lxo> (retry) say I have a corrupted btrfs (can't remount ro) holding one replica of every pg, and a brand new disk that I want to use to increase the replication count by one
[23:51] <lxo> instead of letting ceph replicate everything to the new disk, I think it would be much faster to copy the data from the read-only btrfs to the new disk, and then create a new fs and copy the data back
[23:51] <lxo> but then, if I'm doing that... can I use the data I copied to the new disk to initialize the new osd in there? like, change a few bytes in the superblock and voila?
[23:54] <bstillwell> How could I get more debugging output from ceph-deploy?
[23:54] <gregaf1> oof; I think actually you could but I'd have to check with Sam
[23:54] * jhujhiti (~jhujhiti@00012a8b.user.oftc.net) has joined #ceph
[23:54] <gregaf1> I'm not sure if recovery/backfill will cooperate with an unknown OSD knowing anything about the state of the cluster, lxo
[23:54] <jhujhiti> can i disable cephx authentication on a running cluster without interruption?
[23:56] <gregaf1> bstillwell: alfredodeza has been working on that, though he's not around right now
[23:56] <bstillwell> gregaf1: Ok, thanks.
[23:56] <lxo> gregaf1, the new disk had been running and recovering for a while already when the other btrfs got corrupted, so there is info in the osdmaps that the osd has partially-backfilled copies of all of the pgs. the only difference is how far the backfilling is
[23:56] <bstillwell> I'm trying to run 'ceph-deploy disk list den2ceph002', but it gives me this for an error:
[23:56] <bstillwell> [ceph_deploy.osd][ERROR ] disk list failed: [Errno 2] No such file or directory
[23:56] <bstillwell> Running 'ceph-disk list' on the den2ceph002 works fine though
[23:57] <gregaf1> jhujhiti: I think you'd have to restart a daemon at a time as I'm pretty sure disabling the auth requires restarting the messaging, but I could be wrong — try injectarg'ing the looser config options and connecting without them
[23:57] <lxo> my main concern is about the osd itself getting confused because its osdmap says it doesn't have stuff that the superblock says it does
[23:57] <jhujhiti> gregaf1: "injectarg"?
[23:58] <gregaf1> dmick, injectargs is still the syntax for doing that, right?
[23:58] <jhujhiti> ah, i found a doc on injectargs
[23:58] <gregaf1> bstillwell: I'm not sure that's the right syntax for that? ceph-deploy doesn't let you run arbitrary ceph-disk commands
[23:58] <gregaf1> but I've barely used ceph-deploy so I could be wrong
[23:58] <jhujhiti> gregaf1: alternatively, is there some way i can make cephx optional, so it will work for clients that present authentication and those that don't?
[23:59] <lxo> heck, I'm actually using a (known-good) snapshot from before the osd was added to the cluster, which might make things more interesting. but I guess if it comes to worst, I can let the re-created osd recover for a while and then resync it to the external drive, then it will have new-enough osdmap info
[23:59] <dmick> injectargs should still work. be aware that you almost certainly need to use '--' to separate ceph-cli args from the injected args
[23:59] <sjust> lxo: that's pretty tricky
[23:59] <gregaf1> jhujhiti: yeah, if you look at the cephx config reference in the docs I believe it covers that
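For reference, the knobs that reference describes live in ceph.conf; setting the 'required' options to none turns cephx off, and the daemons generally need a restart to pick the change up, as gregaf1 says. Whether clients that still present keys keep working in that mode is covered in the same doc, so treat this as a sketch rather than a guarantee:

    [global]
        ; disable cephx authentication cluster-wide
        auth cluster required = none
        auth service required = none
        auth client required = none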
[23:59] <sjust> you'd have better luck trying to resurrect the copied data as the same osd

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.