#ceph IRC Log

IRC Log for 2012-12-18

Timestamps are in GMT/BST.

[0:02] <KYann> dmick sorry i'm trying stuff
[0:02] <KYann> i'll post a email tomorrow on the mailing list with everything
[0:03] <dmick> np, I was just apologizing for missing your replies
[0:08] * cblack101 (8686894b@ircip1.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[0:13] * sagelap (~sage@38.122.20.226) Quit (Ping timeout: 480 seconds)
[0:18] * sagelap (~sage@38.122.20.226) has joined #ceph
[0:21] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[0:31] * gucki (~smuxi@46-126-114-222.dynamic.hispeed.ch) Quit (Remote host closed the connection)
[0:37] * rread is now known as Guest1775
[0:37] * rread (~rread@c-98-234-218-55.hsd1.ca.comcast.net) has joined #ceph
[0:40] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[0:40] * loicd (~loic@magenta.dachary.org) has joined #ceph
[0:41] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[0:44] * Guest1775 (~rread@c-98-234-218-55.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[0:51] * PerlStalker (~PerlStalk@72.166.192.70) Quit (Quit: ...)
[0:53] * noob2 (~noob2@pool-71-244-111-36.phlapa.fios.verizon.net) has joined #ceph
[1:03] * sagelap (~sage@38.122.20.226) Quit (Ping timeout: 480 seconds)
[1:13] * sagelap (~sage@2607:f298:a:607:f5f5:ee4f:6791:8406) has joined #ceph
[1:13] * Kioob (~kioob@luuna.daevel.fr) has joined #ceph
[1:15] * aliguori (~anthony@cpe-70-113-5-4.austin.res.rr.com) Quit (Remote host closed the connection)
[1:16] * BManojlovic (~steki@85.222.178.27) Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:19] * fc (~fc@home.ploup.net) Quit (Quit: leaving)
[1:45] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Quit: Leseb)
[1:58] * LeaChim (~LeaChim@5ad684ae.bb.sky.com) Quit (Quit: Leaving)
[1:58] * jlogan1 (~Thunderbi@72.5.59.176) Quit (Ping timeout: 480 seconds)
[2:06] * sagelap (~sage@2607:f298:a:607:f5f5:ee4f:6791:8406) Quit (Ping timeout: 480 seconds)
[2:08] * sagelap (~sage@83.sub-70-197-145.myvzw.com) has joined #ceph
[2:09] * Kioob (~kioob@luuna.daevel.fr) Quit (Remote host closed the connection)
[2:10] * Kioob (~kioob@luuna.daevel.fr) has joined #ceph
[2:16] * sagelap (~sage@83.sub-70-197-145.myvzw.com) Quit (Ping timeout: 480 seconds)
[2:16] * yasu` (~yasu`@dhcp-59-227.cse.ucsc.edu) Quit (Remote host closed the connection)
[2:27] * rweeks (~rweeks@c-98-234-186-68.hsd1.ca.comcast.net) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[2:28] * sagelap (~sage@25.sub-70-197-142.myvzw.com) has joined #ceph
[2:30] * sagelap (~sage@25.sub-70-197-142.myvzw.com) Quit ()
[2:42] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[2:48] * noob2 (~noob2@pool-71-244-111-36.phlapa.fios.verizon.net) Quit (Quit: Leaving.)
[2:51] <DrewBeer> anyone seen where a cephfs shows almost no storage?
[2:51] <DrewBeer> 1.2G 47M 1.1G 4% /mnt/cephfs
[2:51] <DrewBeer> 2012-12-17 17:51:03.641941 mon.0 [INF] pgmap v542: 1728 pgs: 1728 active+clean; 3681 MB data, 11785 MB used, 276 GB / 287 GB avail
[2:51] <DrewBeer> but ceph -w shows all of it?
[2:52] <DrewBeer> especially since I have about 5gb of data in the folder, but df only shows about 47M
[2:52] <DrewBeer> not that it matters that much, just curious if its a bug or something that is suppose to show that
[2:57] * slang (~slang@cpe-66-91-114-250.hawaii.res.rr.com) has left #ceph
[2:58] * slang (~slang@cpe-66-91-114-250.hawaii.res.rr.com) has joined #ceph
[3:04] <Kioob> # crushtool -c crush.plain -o crush.compiled2
[3:04] <Kioob> crush.plain:115 error: parse error at ''
[3:05] <Kioob> yes, very helpful error message
[3:05] <Kioob> :p
[3:13] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) has joined #ceph
[3:19] * mdxi_ (~mdxi@74-95-29-182-Atlanta.hfc.comcastbusiness.net) has joined #ceph
[3:19] * mdxi (~mdxi@74-95-29-182-Atlanta.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[3:23] <dmick> well at least you have a line number :)
[3:25] <Kioob> yes dmick ;)
[3:25] <Kioob> empty, but I have a number yes
[3:26] <dmick> oh, is that EOF?
[3:27] <Kioob> no, but it's an empty line
[3:27] <Kioob> and for now I didn't find what is the problem
[3:28] <dmick> I can give you another pair of eyes if you want to pastebin it somewhere
[3:28] <Kioob> thanks, I try to cleanup the file before... maybe I will find
[3:29] <dmick> sure
[3:31] <Kioob> so, same problem after cleanup. parse error line 70 in http://pastebin.com/1em8peRQ
[3:31] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[3:31] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[3:31] <Kioob> I tried to rename "net 188-165-14", but it doesn't help
[3:33] <dmick> did you try -v/--verbose?
[3:33] <Kioob> yes, and it didn't change the output
[3:34] <dmick> well I get the same error, if that's helpful :)
[3:34] <Kioob> great :p
[3:35] <dmick> hey, reproduction is a critical step in the scientific method
[3:35] <Kioob> I agree ;)
[3:36] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) Quit (Quit: Leaving.)
[3:46] <Kioob> so. Too tired, I'm going to sleep. I will looking at that tomorrow. Good night
[3:46] <dmick> ok. I was just staring at Boost Spirit debug output and not seeing much
[3:47] <dmick> maybe send email; someone else might have seen this
[3:48] <dmick> the one thing I see that's obvious is that this is one of the few names using '-'; when you say you tried removing it, did you try replacing that char with something else? (I will try that)
[3:49] <Kioob> i tried with a letter at first place, and removing "-"
[3:49] <dmick> yeah
[3:49] <dmick> it's like it doesn't believe "net" is a node type
[3:49] <Kioob> for example z18816515
[3:49] <Kioob> yes, I renamed it to «row» too
[3:49] <dmick> but same with room if I comment out the net entries.
[3:50] <dmick> weird.
[3:52] * Ryan_Lane (~Adium@216.38.130.167) Quit (Quit: Leaving.)
[3:53] <Kioob> If I comment the «rack» blocks, then the first «net» block seems to work (the error is now between the 2 net blocks)
[3:54] <Kioob> I don't understand... so. GN ;)
[4:02] * The_Bishop__ (~bishop@e179011086.adsl.alicedsl.de) has joined #ceph
[4:04] * The_Bishop_ (~bishop@e179011086.adsl.alicedsl.de) Quit (Read error: Operation timed out)
[4:46] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[4:50] <michaeltchapman> I tried to remove 2 of the three monitors from my cluster, and I'm getting a lot of faults and the last remaining mon won't start. The errors when starting mon look like this: 0 -- :/20645 >> 172.22.4.5:6789/0 pipe(0x7f8b340023b0 sd=4 :0 pgs=0 cs=0 l=1).fault
[4:54] <dmick> what procedure did you use to remove them?
[5:06] <michaeltchapman> the one here: http://ceph.com/docs/master/rados/operations/add-or-rm-mons/
[5:07] <michaeltchapman> ceph mon stop and then ceph mon remove
[5:07] <michaeltchapman> on one at a time
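
(For reference, the procedure on that page amounts to roughly the following per monitor; a sketch, assuming a monitor id of "b" and the stock init scripts:)

    service ceph stop mon.b   # stop the daemon on its host before touching the monmap
    ceph mon remove b         # drop it from the monitor map
    # then delete the [mon.b] section from ceph.conf so it is not started again
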
[5:11] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[5:15] * jlogan (~Thunderbi@2600:c00:3010:1:19e4:b73c:924b:79fd) has joined #ceph
[5:20] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) Quit (Ping timeout: 480 seconds)
[5:32] <dmick> that error is pretty generic. It's (probably obviously) trying to talk to 172.22.4.5:6789; is that the monitor that remains?
[5:32] <dmick> you can try setting debug ms higher, perhaps
[5:37] <michaeltchapman> I killed everything and restarted and now all the mons are back up. I'm still not sure what happened when I tried to remove them, though.
[5:38] <michaeltchapman> now I have an unfound pg, though. How do I mark it as lost? My ceph -s looks like this: health HEALTH_WARN 1 pgs recovering; 1 pgs stuck unclean; recovery 1/1215 degraded (0.082%); 1/1215 unfound (0.082%)
[5:40] * eternaleye (~eternaley@tchaikovsky.exherbo.org) Quit (Remote host closed the connection)
[5:42] <michaeltchapman> ah nevermind I found it.
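
(For anyone hitting the same thing later, the usual sequence is roughly the following; the pg id 2.5 below is purely illustrative:)

    ceph health detail                      # shows which pgs have unfound objects
    ceph pg 2.5 query                       # lists the unfound objects and the osds probed
    ceph pg 2.5 mark_unfound_lost revert    # give up on them and revert to prior versions
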
[5:49] <elder> nhm, any chance you're still up?
[5:50] <elder> nhm, wondering if lunch tomorrow will work. Not a big deal, but I'm contemplating going to Northfield to get my daughter so you're right on the way.
[5:52] * chutzpah (~chutz@199.21.234.7) Quit (Quit: Leaving)
[6:28] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:39] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[6:59] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[7:07] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[7:08] * loicd (~loic@magenta.dachary.org) has joined #ceph
[7:21] * jlogan (~Thunderbi@2600:c00:3010:1:19e4:b73c:924b:79fd) Quit (Ping timeout: 480 seconds)
[7:54] * ken_ (~chatzilla@118.97.180.124) has joined #ceph
[8:07] * agh (~2ee79308@2600:3c00::2:2424) has joined #ceph
[8:07] * rread (~rread@c-98-234-218-55.hsd1.ca.comcast.net) Quit (Quit: rread)
[8:08] * ken_ (~chatzilla@118.97.180.124) Quit (Quit: ChatZilla 0.9.87 [Firefox 8.0.1/20111120135848])
[8:11] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[8:12] * dmick (~dmick@2607:f298:a:607:343b:1e17:4acf:c62d) Quit (Quit: Leaving.)
[8:14] * low (~low@188.165.111.2) has joined #ceph
[8:15] * joshd1 (~jdurgin@2602:306:c5db:310:41a7:ad0e:fb84:9bc2) Quit (Read error: Operation timed out)
[8:26] <agh> Hello, is it possible to mount CephFS on a specific pool ?
[8:26] <agh> for instance, I want to mount /mnt/ceph1 on a SSD pool, /mnt/ceph2 on a SAS pool and /mnt/ceph3 on a SATA pool. How to do that ?
[8:26] <agh> Thankfs
[8:27] * LeaChim (~LeaChim@5ad684ae.bb.sky.com) has joined #ceph
[8:37] * gaveen (~gaveen@112.135.21.9) has joined #ceph
[8:45] <agh> hello to all. i'm playing with RadosGW. It seems to work but... not really.
[8:45] <agh> in fact I'm able to create buckets in Python with Boto
[8:46] * joshd1 (~jdurgin@2602:306:c5db:310:54e2:69af:e2fe:efd7) has joined #ceph
[8:46] <agh> but, I can't get into these buckets neither with S3Fox or s3cmd
[8:46] <agh> => The AWS Access Key Id you provided does not exist in our records.
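
(One thing worth checking in this situation is whether the key pair configured in S3Fox/s3cmd matches what radosgw actually has on record; a hedged sketch, assuming a user created via radosgw-admin, with "johndoe" as a placeholder uid:)

    radosgw-admin user info --uid=johndoe
    # compare access_key/secret_key against ~/.s3cfg and the S3Fox settings;
    # a secret key containing escaped slashes ("\/") is a common copy-paste trap
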
[8:46] * joshd1 (~jdurgin@2602:306:c5db:310:54e2:69af:e2fe:efd7) Quit ()
[8:53] * verwilst (~verwilst@d5152D6B9.static.telenet.be) has joined #ceph
[8:54] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[9:08] * loicd (~loic@90.84.144.45) has joined #ceph
[9:09] * loicd (~loic@90.84.144.45) Quit ()
[9:10] * yoshi (~yoshi@80.30.51.242) has joined #ceph
[9:11] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[9:11] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[9:12] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[9:19] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[9:19] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[9:20] <Kioob`Taff> Hi
[9:21] * l0nk (~alex@83.167.43.235) has joined #ceph
[9:22] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) Quit (Quit: Leaving.)
[9:22] * eternaleye (~eternaley@tchaikovsky.exherbo.org) has joined #ceph
[9:24] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:25] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Quit: Leseb)
[9:30] * loicd (~loic@178.20.50.225) has joined #ceph
[9:30] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (Remote host closed the connection)
[9:32] * fc (~fc@home.ploup.net) has joined #ceph
[9:33] * alexxy (~alexxy@79.173.81.171) has joined #ceph
[9:35] * IceGuest_75 (~IceChat7@buerogw01.ispgateway.de) has joined #ceph
[9:35] * IceGuest_75 is now known as norbi
[9:35] <norbi> good mornin #ceph
[9:38] <Kioob`Taff> !!! I found :D
[9:38] <Kioob`Taff> in my "rack" block, I have " item brontes weight 0.000 "
[9:38] <Kioob`Taff> instead of " item host brontes weight 0.000"
[9:38] <Kioob`Taff> ..
[9:40] <Kioob`Taff> well, "item NAME weight N" is the good one :p
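
(For comparison, a minimal well-formed pair of buckets, with made-up names and weights; the host is defined before the rack that references it, and items are referenced by bare name:)

    host brontes {
            id -2
            alg straw
            hash 0
            item osd.0 weight 1.000
    }
    rack rack-1 {
            id -3
            alg straw
            hash 0
            item brontes weight 1.000    # "item NAME weight N"
    }

    # round-trip to check the syntax:
    #   crushtool -c crush.plain -o crush.compiled
    #   crushtool -d crush.compiled -o crush.plain
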
[9:41] <joao> morning all
[9:41] <norbi> have some "strange" problem :)
[9:42] <norbi> have created a new osd with --mkfs and --mkkey, after that have done "ceph auth add ....", then removed the keyring via "rm -f"
[9:42] <norbi> created new keyringfile, "ceph auth add..."
[9:43] <norbi> and now i get on evey command 2012-12-18 09:41:45.945954 7fa376816700 0 -- :/19824 >> IPADRESS:6789/0 pipe(0x116c580 sd=4 :0 pgs=0 cs=0 l=1).fault
[9:43] <joao> that's not an error
[9:43] <norbi> and how can i remove that ? :)
[9:43] <joao> it's just verbose output
[9:44] <joao> how's the cluster status?
[9:44] <joao> anything wrong on that end?
[9:44] <norbi> now in warning
[9:44] <norbi> he is remapping, recovering and so on
[9:44] <joao> give it time then; that 'fault' message is not a problem
[9:45] <joao> it's just the messenger being verbose in a misleading way
[9:45] <norbi> ok, sounds good
[9:46] <norbi> btw. "ceph auth" gives a error on usage. thats very cool :)
[9:47] <Kioob`Taff> » osd.20 does not exist. create it before updating the crush map
[9:47] <Kioob`Taff> mmm
[9:48] <joao> norbi, is the error informative?
[9:48] <norbi> oh yes !
[9:48] <joao> Kioob`Taff, must 'ceph osd create' first
[9:48] <norbi> tells me what can i do with "ceph auth", i missed that with "ceph osd"
[9:49] <Kioob`Taff> yes joao, I'm reading that (on http://ceph.com/docs/master/rados/operations/add-or-rm-osds/), the doc seems up to date, thanks !
[9:50] <Kioob`Taff> oh... joao : how can I choose/determine the {uuid} to use ?
[9:51] <joao> the uuid is optional
[9:51] <Kioob`Taff> ok, so it's safe to add OSD by that way
[9:51] <Kioob`Taff> thanks
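
(The sequence on that page boils down to roughly the following; the ids, paths and crush location here are illustrative, and the exact flags differ a little between argonaut and bobtail:)

    ceph osd create                              # prints the id it allocated (lowest free one)
    ceph-osd -i 20 --mkfs --mkkey                # assuming it returned 20
    ceph auth add osd.20 osd 'allow *' mon 'allow rwx' \
        -i /var/lib/ceph/osd/ceph-20/keyring     # keyring path is an assumption
    ceph osd crush set 20 osd.20 1.0 pool=default rack=rack-1 host=brontes
    /etc/init.d/ceph start osd.20
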
[9:54] * Leseb (~Leseb@2001:980:759b:1:b816:98f1:5a4a:27a7) has joined #ceph
[9:56] * ScOut3R (~ScOut3R@dslC3E4E249.fixip.t-online.hu) has joined #ceph
[9:57] * Leseb_ (~Leseb@193.172.124.196) has joined #ceph
[9:58] <Kioob`Taff> joao : I still have the same error. Not sure if "ceph osd create" works : it only display the string "0"
[9:58] <joao> is that your first osd?
[9:58] <Kioob`Taff> no
[9:58] <Kioob`Taff> I have 8 OSD in production
[9:58] <joao> 'ceph osd create' only returns the allocated id for the osd
[9:59] <Kioob`Taff> :S
[9:59] <joao> well, that's weird
[9:59] <Kioob`Taff> mmm
[9:59] <joao> can you show me your 'ceph -s'?
[9:59] <norbi> if you delete osd.0 and you have osd 1-8 up, osd create will tell next id 1 :)
[9:59] <norbi> next id 0
[9:59] <Kioob`Taff> I didn't start my OSD at num 0, but 10... so the num 0 is «free»
[10:00] <joao> oh
[10:00] <Kioob`Taff> health HEALTH_OK
[10:00] <Kioob`Taff> monmap e1: 1 mons at {a=10.0.0.1:6789/0}, election epoch 1, quorum 0 a
[10:00] <Kioob`Taff> osdmap e144: 9 osds: 8 up, 8 in
[10:00] <Kioob`Taff> pgmap v327234: 2328 pgs: 2328 active+clean; 912 GB data, 2537 GB used, 4874 GB / 7412 GB avail
[10:00] <Kioob`Taff> mdsmap e1: 0/0/1 up
[10:00] <joao> Kioob`Taff, that's it then; it's returning the first available id, so it makes sense if you started at 10
[10:01] <Kioob`Taff> so, I should change my OSD nums
[10:01] <joao> we're dissociating ids from names though, but that's still a work in progress
[10:01] <Kioob`Taff> is there a way to do that ?
[10:02] <joao> Kioob`Taff, none that I can think of
[10:02] <Kioob`Taff> great :D
[10:02] <joao> others may have other ideas
[10:02] <norbi> it is normal that the MON kills himselve if he get to many "mons are laggy or clocks are too skewed" messages ?
[10:02] <joao> no
[10:03] * Leseb (~Leseb@2001:980:759b:1:b816:98f1:5a4a:27a7) Quit (Ping timeout: 480 seconds)
[10:03] * Leseb_ is now known as Leseb
[10:03] <joao> can you get me the backtrace and logs of that particular monitor?
[10:03] <Kioob`Taff> I will try to rename my new OSD to use the ID 0
[10:03] <norbi> query or email ?
[10:04] <joao> norbi, email would be best
[10:04] <joao> joao.luis@inktank.com
[10:04] <norbi> ok !
[10:04] <norbi> habe seen this many time in here
[10:04] <norbi> have
[10:06] <norbi> do you need the output "begin dump of recent events" too ?
[10:07] <joao> if you have previous output, then no
[10:07] <norbi> ok
[10:07] <joao> those would be duplicates on the file
[10:08] <joao> but feel free to send the whole log along
[10:08] <joao> if it's too big, I can try to figure out where our drop account is
[10:09] <joao> norbi, if you'd rather drop it somewhere instead of email, sftp to 'cephdrop@ceph.com'
[10:10] <norbi> no its no problem, its on the way
[10:11] * nosebleedkt (~kostas@213.140.128.74) has joined #ceph
[10:13] <norbi> can send u more crash backtraces if you want :) the last mon is crashed because i have change the mon_clock_drift_allowed in runtime
[10:19] * roald_ (~roaldvanl@139-63-21-176.nodes.tno.nl) has joined #ceph
[10:22] * nosebleedkt_ (~kostas@213.140.128.74) has joined #ceph
[10:28] * nosebleedkt (~kostas@213.140.128.74) Quit (Ping timeout: 480 seconds)
[10:42] * n-other (0249bb76@ircip2.mibbit.com) has joined #ceph
[10:44] <joao> norbi, have that monitor by any chance run out of disk space?
[10:45] <norbi> oh thath could be possible
[10:45] <n-other> hello! looking for help with stuck ceph clients
[10:45] <norbi> one osd is crashed because that
[10:45] <joao> norbi, the crash happens when trying to write down values to disk
[10:46] <n-other> i have simple setup with 3 servers each running osd, mon and mds
[10:46] <joao> hence my question
[10:46] <norbi> ok, then we need a better errormessage :)
[10:46] <norbi> thanks for help
[10:47] <n-other> i have 2 servers happily using ceph
[10:47] <n-other> but for some reason whenever I put additional clients, they can't read ceph
[10:47] <joao> norbi, that can sure be arranged, but I believe that crashing your monitor is the way to go anyway
[10:47] <joao> n-other, how's your 'ceph -s' looking?
[10:48] <nosebleedkt_> hello everybody
[10:48] <joao> hi
[10:48] <nosebleedkt_> joao, I was wondering if instead of backing up whole images
[10:48] <nosebleedkt_> just to move a diff file
[10:49] <n-other> it mounts the fs properly, but got stuck on stat* operations
[10:49] <n-other> just at the moment I started 'strace ls -al' and it's stuck with: lstat64("data",
[10:49] <n-other> ceph is mounted to the "data"
[10:49] <n-other> I am running 0.55 release
[10:50] <norbi> n-other ceph -w is running ? i have had the same problem, and the problem was the MONs
[10:50] <n-other> joao, it's fine. no warnings
[10:50] <norbi> if the client cant reach the MONs or to many MONs down, then you get this
[10:51] <norbi> hm ok
[10:51] <nosebleedkt_> The scenario is to have a normal cluster and a recovery cluster. Every day RBD images are exported from normal cluster and transferred to recovery cluster where they get imported. But those data could be immense. So if we could just transfer a diff file which depicts the difference in time between two RBD images.... ?
[10:51] <joao> nosebleedkt_, can't really advise you on rbd
[10:52] <joao> sorry
[10:52] * The_Bishop_ (~bishop@e179012174.adsl.alicedsl.de) has joined #ceph
[10:52] <nosebleedkt_> :P
[10:52] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[10:52] <joao> n-other, given you said you mounted the fs, I'm assuming your mds are up; is this right?
[10:53] <joao> up and active, I mean
[10:53] <joao> well, brb
[10:55] * n-other (0249bb76@ircip2.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[10:59] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[10:59] * Leseb_ (~Leseb@193.172.124.196) has joined #ceph
[10:59] * The_Bishop__ (~bishop@e179011086.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[10:59] * Leseb (~Leseb@193.172.124.196) Quit (Read error: Connection reset by peer)
[10:59] * Leseb_ is now known as Leseb
[11:12] <renzhi> how many threads can the librados create during a normal session? say, only one client.
[11:13] <renzhi> it depends on the number of osds or something?
[11:17] * kYann839 (~kYann@tui75-3-88-168-236-26.fbx.proxad.net) has joined #ceph
[11:18] <kYann839> Hi
[11:26] <Kioob`Taff> joao: so, for by ID problem. It's solved. I had to create the new OSD with the num 0, and it's ok
[11:28] <Kioob`Taff> question, on that page http://ceph.com/docs/master/rados/operations/add-or-rm-osds/ in the "Argonaut (v0.48) Best Practices" we can read "Note that this practice will no longer be necessary in Bobtail and subsequent releases.". So, with the v0.55 I can reweight to "1" directly, without having I/O problems ?
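
(The practice being referred to is adding the OSD with a very low crush weight and raising it in steps; a sketch, with osd.20 and the step size as placeholders:)

    ceph osd crush reweight osd.20 0.2
    # wait for the cluster to settle back to active+clean, then repeat
    # with 0.4, 0.6, ... until the target weight is reached
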
[11:30] * yoshi (~yoshi@80.30.51.242) Quit (Remote host closed the connection)
[11:43] <Kioob`Taff> so... it works very well, but adding OSD is very slow :p
[11:44] * KindOne (~KindOne@50.96.87.48) Quit (Remote host closed the connection)
[11:53] <Kioob`Taff> is there a doc/wiki with explanation of all PG states ?
[11:54] <Kioob`Taff> I see : HEALTH_WARN 59 pgs backfill; 21 pgs backfilling; 1 pgs recovery_wait; 81 pgs stuck unclean; recovery 48713/684742 degraded (7.114%)
[11:54] <Kioob`Taff> so, it seems to be recovering
[11:54] <Kioob`Taff> but what is «backfill» and «recovery_wait» ?
[11:54] <Kioob`Taff> for «stuck unclean» I found in the wiki
[11:56] <Kioob`Taff> of, some explanation here : http://ceph.com/docs/master/dev/osd_internals/recovery_reservation/
[12:07] * tryggvil_ (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[12:11] * kbad_ (~kbad@malicious.dreamhost.com) has joined #ceph
[12:11] * fc_ (~fc@home.ploup.net) has joined #ceph
[12:11] * Meths_ (~meths@2.25.214.88) has joined #ceph
[12:11] * cclien (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) has joined #ceph
[12:11] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (resistance.oftc.net graviton.oftc.net)
[12:11] * fc (~fc@home.ploup.net) Quit (resistance.oftc.net graviton.oftc.net)
[12:11] * agh (~2ee79308@2600:3c00::2:2424) Quit (resistance.oftc.net graviton.oftc.net)
[12:11] * lxo (~aoliva@lxo.user.oftc.net) Quit (resistance.oftc.net graviton.oftc.net)
[12:11] * mdxi_ (~mdxi@74-95-29-182-Atlanta.hfc.comcastbusiness.net) Quit (resistance.oftc.net graviton.oftc.net)
[12:11] * mengesb (~bmenges@servepath-gw3.servepath.com) Quit (resistance.oftc.net graviton.oftc.net)
[12:11] * joao (~JL@89.181.148.171) Quit (resistance.oftc.net graviton.oftc.net)
[12:11] * yehudasa (~yehudasa@2607:f298:a:607:f417:6a39:eebd:1d71) Quit (resistance.oftc.net graviton.oftc.net)
[12:11] * sagewk (~sage@2607:f298:a:607:6df9:7a80:af99:5918) Quit (resistance.oftc.net graviton.oftc.net)
[12:11] * flakrat_ (~flakrat@eng-bec264la.eng.uab.edu) Quit (resistance.oftc.net graviton.oftc.net)
[12:11] * kbad (~kbad@malicious.dreamhost.com) Quit (resistance.oftc.net graviton.oftc.net)
[12:11] * michaeltchapman (~mxc900@150.203.248.116) Quit (resistance.oftc.net graviton.oftc.net)
[12:11] * wer (~wer@wer.youfarted.net) Quit (resistance.oftc.net graviton.oftc.net)
[12:11] * Meths (~meths@2.25.214.88) Quit (resistance.oftc.net graviton.oftc.net)
[12:11] * cclien_ (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) Quit (resistance.oftc.net graviton.oftc.net)
[12:11] * tryggvil_ is now known as tryggvil
[12:11] * mdxi (~mdxi@74-95-29-182-Atlanta.hfc.comcastbusiness.net) has joined #ceph
[12:14] * tziOm (~bjornar@194.19.106.242) has joined #ceph
[12:16] * The_Bishop_ (~bishop@e179012174.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[12:17] * xdccFrien (~maravilla@193.144.61.240) has joined #ceph
[12:18] <kYann839> My cluster is stuck in the recovery process
[12:18] <kYann839> nothing move
[12:18] <kYann839> but there is no pg stuck
[12:18] <kYann839> no load on osd
[12:19] <kYann839> And I clearly don't know what to do :/
[12:19] * xdccFrien (~maravilla@193.144.61.240) Quit (Remote host closed the connection)
[12:21] * yehudasa (~yehudasa@2607:f298:a:607:f417:6a39:eebd:1d71) has joined #ceph
[12:22] * mengesb (~bmenges@servepath-gw3.servepath.com) has joined #ceph
[12:22] * flakrat_ (~flakrat@eng-bec264la.eng.uab.edu) has joined #ceph
[12:22] * wer (~wer@wer.youfarted.net) has joined #ceph
[12:22] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[12:22] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[12:23] * sagewk (~sage@2607:f298:a:607:6df9:7a80:af99:5918) has joined #ceph
[12:23] * joao (~JL@89-181-148-171.net.novis.pt) has joined #ceph
[12:23] * ChanServ sets mode +o joao
[12:24] * michaeltchapman (~mxc900@150.203.248.116) has joined #ceph
[12:26] <Kioob`Taff> kYann839 : stupid question, but are OSD running ?
[12:27] * The_Bishop_ (~bishop@e179012174.adsl.alicedsl.de) has joined #ceph
[12:27] <kYann839> yes
[12:28] <kYann839> but they are going up and down
[12:28] <kYann839> because either they crash (seg fault) or other osd report them dead but there not
[12:28] <Kioob`Taff> well... «seg fault» is not normal at all
[12:28] <Kioob`Taff> it's probably the main problem, no ?
[12:29] * mgalkiewicz (~mgalkiewi@toya.hederanetworks.net) Quit (Quit: Ex-Chat)
[12:29] <kYann839> yes, I send the stacktrace on the mailing list
[12:29] <kYann839> well it may be, but when the osd don't crash they are still reported as down by other osd
[12:37] <kYann839> and when I restart osd, some other osd report connection refused
[12:40] <kYann839> it's like osd doesn't know that another osd has been rebooted and that they have to do request on another port
[12:52] <renzhi> hi, my mons keep dying after calling an election and winning the election
[12:53] * gregorg (~Greg@78.155.152.6) Quit (Quit: Quitte)
[12:53] <renzhi> as soon as mon.a won the election, it crashes, then mon.b and mon.c are not in the quorum
[12:54] <norbi> with logfiles/backtrace, i think joao can help u
[12:54] <joao> renzhi, which version are you using?
[12:54] <kYann839> 0.55
[12:55] <renzhi> joao: 0.48.2
[12:55] <joao> renzhi, your log files would be great to check out what the issue is
[12:56] <renzhi> ceph version 0.48.2argonaut (commit:3e02b2fad88c2a95d9c0c86878f10d1beb780bfe)
[12:56] <renzhi> hang on
[12:56] <joao> renzhi, there were a couple of bugs fixed since 0.48.2 related with election and general monitor crashes
[12:56] <joao> actually, I mean 0.48.2; am not sure if any of those were backported to .2
[12:56] <joao> erm
[12:57] <joao> s/I mean 0.48.2/I mean 0.48.1
[13:00] * tziOm (~bjornar@194.19.106.242) Quit (Remote host closed the connection)
[13:04] <renzhi> joao: sorry for the delay, had been trying to fetch my passwd from pastebin
[13:04] <renzhi> here it is: http://pastebin.com/2Ed0SD7a
[13:04] <joao> thanks
[13:05] <renzhi> as soon as I start up mon.a, it calls an election and wins it, and then it crashes
[13:06] <renzhi> with only mon.b and mon.c, nothing works. mon.b reports it's not in qorum
[13:08] <renzhi> actually, mon.c is reporting it's not in quorum
[13:10] * tezra (~rolson@116.226.37.139) has joined #ceph
[13:11] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) Quit (Ping timeout: 480 seconds)
[13:12] <renzhi> joao: I'm trying to understand, is there a specific order that mon.a, mon.b and mon.c must be started?
[13:12] <joao> nope
[13:13] <joao> renzhi, so, you have mon.a down and mon.b and mon.c were not able to form a quorum?
[13:14] <joao> can you please also make mon.b's and mon.c's logs available?
[13:16] <renzhi> yeah, but I restarted them all, and now, seems to be going. I had to start mon.a first, then mon.b, and mon.c
[13:16] <renzhi> in that order
[13:19] * match (~mrichar1@pcw3047.see.ed.ac.uk) has joined #ceph
[13:21] <renzhi> joao: is there another way I can get those logs to you? there might information that does not fit in pastebin
[13:21] <joao> sure
[13:21] <joao> compress them and drop them at cephdrop@ceph.com
[13:21] <joao> sftp
[13:21] <renzhi> k
[13:21] <joao> pass is 'asdf' I think
[13:23] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[13:24] * triqon (~chatzilla@2001:980:a085:1:9013:4823:c100:a6a2) has joined #ceph
[13:28] <renzhi> joao: uploading mon.log.tar.gz
[13:29] <renzhi> 3 of them
[13:31] <joao> thanks
[13:31] <joao> going for lunch; bbiab
[13:31] <renzhi> thnx
[13:51] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) has joined #ceph
[14:02] * yoshi (~yoshi@80.30.51.242) has joined #ceph
[14:03] * yoshi (~yoshi@80.30.51.242) Quit (Read error: Connection reset by peer)
[14:06] * KindOne (~KindOne@50.96.87.48) has joined #ceph
[14:09] * yoshi (~yoshi@80.30.51.242) has joined #ceph
[14:12] * The_Bishop__ (~bishop@e179012174.adsl.alicedsl.de) has joined #ceph
[14:13] * yoshi (~yoshi@80.30.51.242) Quit (Read error: Connection reset by peer)
[14:15] * yoshi (~yoshi@80.30.51.242) has joined #ceph
[14:17] * yoshi (~yoshi@80.30.51.242) Quit (Read error: Connection reset by peer)
[14:18] * The_Bishop__ (~bishop@e179012174.adsl.alicedsl.de) Quit (Quit: Wer zum Teufel ist dieser Peer? Wenn ich den erwische dann werde ich ihm mal die Verbindung resetten!)
[14:19] * yoshi (~yoshi@80.30.51.242) has joined #ceph
[14:19] * The_Bishop_ (~bishop@e179012174.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[14:22] * yoshi (~yoshi@80.30.51.242) Quit (Read error: Connection reset by peer)
[14:23] * yoshi (~yoshi@80.30.51.242) has joined #ceph
[14:29] * yoshi (~yoshi@80.30.51.242) Quit (Remote host closed the connection)
[14:36] * loicd (~loic@178.20.50.225) Quit (Read error: Operation timed out)
[15:00] * aliguori (~anthony@cpe-70-113-5-4.austin.res.rr.com) has joined #ceph
[15:04] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[15:09] * tezra (~rolson@116.226.37.139) Quit (Read error: Operation timed out)
[15:18] * loicd (~loic@83.167.43.235) has joined #ceph
[15:18] * tezra (~rolson@122.226.73.136) has joined #ceph
[15:19] * nhorman (~nhorman@nat-pool-rdu.redhat.com) has joined #ceph
[15:26] <renzhi> why can't the 3 mons have a stable quorum?
[15:28] <Psi-jack> renzhi: Same way a tribunal works. 3 judges.,
[15:28] <renzhi> two of them keeps saying " we are not in quorum", the other reports "/0 pipe(0x67cfa00 sd=50 pgs=0 cs=0 l=0).accept failed to getpeername 107 Transport endpoint is not connected"
[15:30] <joao> renzhi, sorry, haven't had the chance to look into that
[15:30] <joao> will look in a few moments
[15:33] * flash (~user1@host86-164-217-4.range86-164.btcentralplus.com) has joined #ceph
[15:33] * flash (~user1@host86-164-217-4.range86-164.btcentralplus.com) Quit ()
[15:34] * dosaboy (~user1@host86-164-217-4.range86-164.btcentralplus.com) has joined #ceph
[15:35] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[15:41] <joao> renzhi, from your logs it looks like you hit one election loop bug that has been fixed in latest releases
[15:41] * loicd (~loic@83.167.43.235) Quit (Quit: Leaving.)
[15:41] * gregorg (~Greg@78.155.152.6) has joined #ceph
[15:43] <renzhi> joao: ok, sounds great, which version is it fixed in?
[15:43] <joao> looking :)
[15:43] <renzhi> my 3 mons are at total loss now
[15:43] <renzhi> can't hold up the quorum for more than 5 minutes
[15:45] <joao> the one you're hitting should be fixed in 0.55; there was another election-related bug that I fixed last week or so, that should be only on master
[15:45] * tezra (~rolson@122.226.73.136) Quit (Read error: Operation timed out)
[15:45] <joao> don't think it made it into 0.55.1
[15:45] <renzhi> is the one you just fixed critical?
[15:46] <renzhi> our cluster has been down for hours now
[15:46] * noob2 (~noob2@ext.cscinfo.com) has joined #ceph
[15:48] <renzhi> joao: is the bug listed in the bug tracking system? you have the bug number?
[15:49] <joao> renzhi, the latest one is 3587; you won't hit it unless one of your monitors is killed and brought back before the an election is triggered
[15:49] <joao> so, not really critical; it's hard to hit it
[15:49] <joao> the other one... still looking for it; it has been a while :\
[15:50] <joao> the other one is 3252
[15:50] <joao> it's a couple months old, so it should be fixed in 0.55 for sure
[15:50] <renzhi> ok
[15:50] <renzhi> I'll take a look
[15:51] <renzhi> but isn't 0.55 not long term stable?
[15:51] <joao> it's not; bobtail, 0.56, will be though
[15:51] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[15:51] <joao> I'm still looking if 3252 has been backported to argonaut
[15:51] <joao> (I'm far from a git power user)
[15:52] <renzhi> ok
[15:53] <renzhi> joao: so you said upgrading to 0.55 should have this unstable quorum issue fixed?
[15:53] <joao> yes
[15:53] <joao> renzhi, upgrading to 0.55 also brings cephx enabled by default
[15:54] <renzhi> that should be fine, we have it enabled anyway
[15:54] <joao> if you're not using cephx, before considering upgrading, please note that you should either enable it or disable it
[15:54] <joao> ok
[15:54] <joao> cool
[15:54] * dosaboy (~user1@host86-164-217-4.range86-164.btcentralplus.com) Quit (Quit: Leaving.)
[15:55] * gaveen (~gaveen@112.135.21.9) Quit (Remote host closed the connection)
[15:55] * norbi (~IceChat7@buerogw01.ispgateway.de) Quit (Quit: Give a man a fish and he will eat for a day. Teach him how to fish, and he will sit in a boat and drink beer all day)
[15:57] <joao> yeah, it wasn't backported to argonaut
[15:58] * tezra (~rolson@116.226.37.139) has joined #ceph
[15:58] * PerlStalker (~PerlStalk@72.166.192.70) has joined #ceph
[15:58] <renzhi> err... neither is 3587?
[16:00] <joao> no
[16:01] <renzhi> joao: is it safe to upgrade? :) we are on live
[16:01] * ircolle (~ircolle@c-67-172-132-164.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[16:03] <joao> it should be, but I can cherry-pick those commits to a 0.48.2 tree if you prefer
[16:04] <joao> you could spin the monitors up and make sure that fixed your problem before going on a full upgrade to 0.55
[16:05] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (Quit: Leaving)
[16:05] <renzhi> i.e. just upgrade the mon to see?
[16:06] <joao> i.e., patch the mon while maintaining your argonaut installation
[16:06] <renzhi> oh
[16:06] <joao> there have been a lot of fixes since argonaut, but there have been some other changes too, and it would be a shame if you'd hit some other issue
[16:07] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[16:07] * ChanServ sets mode +o scuttlemonkey
[16:09] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[16:09] <renzhi> ok, we are planning to upgrade now, and let's see. The cluster has been down for too long anyway
[16:13] * loicd (~loic@magenta.dachary.org) has joined #ceph
[16:15] <renzhi> joao: is 0.55 in the debian repo?
[16:16] <joao> should be, as should 0.55.1
[16:17] * occ (~onur@38.103.149.209) has joined #ceph
[16:17] * jmlowe1 (~Adium@c-71-201-31-207.hsd1.in.comcast.net) has joined #ceph
[16:18] <joao> renzhi, then again, maybe not
[16:18] <joao> glowell, do you know if we have debian packages for 0.55.x?
[16:18] <renzhi> not sure it's there
[16:19] <jmlowe1> sjust: I just finished reading the new blog post about bobtail, nicely done
[16:21] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[16:22] <renzhi> joao: we looked up the debian repo, it doesn't seem to be there
[16:22] <renzhi> we actually had it mirrored locally here
[16:22] <renzhi> only 0.48.2
[16:22] * ircolle (~ircolle@c-67-172-132-164.hsd1.co.comcast.net) has joined #ceph
[16:22] <joao> renzhi, looks that way, yes
[16:22] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[16:22] <joao> not sure if it is somewhere else :\
[16:22] <joao> and by not sure I mean I don't know
[16:22] <renzhi> where can we get the deb packages fast?
[16:23] <joao> our gitbuilder might be the best place
[16:24] <renzhi> :(
[16:24] <joao> I have no idea how to get them though
[16:24] <renzhi> anyway to put it up there real fast?
[16:24] <jmlowe1> Aren't there 0.55.x packages in debian-testing?
[16:25] <jmlowe1> deb http://ceph.com/debian-testing/ quantal main
[16:25] <joao> yep, looks like there are
[16:25] <renzhi> oh
[16:25] <joao> totally forgot about 'debian-testing'
[16:26] <joao> sorry about that
[16:26] <renzhi> np
[16:26] <joao> jmlowe1, thanks
[16:26] * nosebleedkt_ (~kostas@213.140.128.74) Quit (Quit: Leaving)
[16:27] <jmlowe1> np
[16:27] <jmlowe1> least I could do
[16:27] <renzhi> jmlowel: thanks
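
(For the record, picking the 0.55.x packages up from that repository is something like the following on Debian/Ubuntu; "quantal" comes from the line jmlowe1 pasted and has to match the local release:)

    echo "deb http://ceph.com/debian-testing/ quantal main" > /etc/apt/sources.list.d/ceph.list
    apt-get update && apt-get install ceph
    # (plus importing the ceph.com release key if apt complains about signatures)
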
[16:29] <Kioob`Taff> about 0.55, is it normal to still have warnings about «syncfs» ?
[16:31] <Kioob`Taff> mount syncfs(2) syscall not supported // mount no syncfs(2), must use sync(2). // mount WARNING: multiple ceph-osd daemons on the same host will be slow
[16:45] * stxShadow (~jens@p4FD06953.dip.t-dialin.net) has joined #ceph
[16:47] * yoshi_ (~yoshi@80.30.51.242) has joined #ceph
[16:48] <wer> Kioob`Taff: I had to run 3.2.0 to get rid of that...
[16:54] * jlogan1 (~Thunderbi@2600:c00:3010:1:19e4:b73c:924b:79fd) has joined #ceph
[16:55] * low (~low@188.165.111.2) Quit (Quit: bbl)
[17:16] * aliguori (~anthony@cpe-70-113-5-4.austin.res.rr.com) Quit (Remote host closed the connection)
[17:22] * verwilst (~verwilst@d5152D6B9.static.telenet.be) Quit (Quit: Ex-Chat)
[17:23] * roald_ (~roaldvanl@139-63-21-176.nodes.tno.nl) Quit (Ping timeout: 480 seconds)
[17:24] * stxShadow (~jens@p4FD06953.dip.t-dialin.net) Quit (Remote host closed the connection)
[17:24] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[17:25] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:29] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) has joined #ceph
[17:33] * miroslav (~miroslav@c-98-248-210-170.hsd1.ca.comcast.net) has joined #ceph
[17:36] * jdarcy (~quassel@66.187.233.206) has joined #ceph
[17:37] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) Quit (Read error: Operation timed out)
[17:37] * jdarcy (~quassel@66.187.233.206) Quit ()
[17:38] <wer> after running ceph osd $id reweight .1 the health is HEALTH_WARN 276 pgs stuck unclean.... How to I fix this?
[17:39] <wer> Or what is this telling me exactly :)
[17:40] <wer> Cause if I put them back to 1 instead of .1 things are healthy again....
[17:41] * aliguori (~anthony@32.97.110.59) has joined #ceph
[17:42] <wer> Should I just have taken it "out" instead of reweighting it first? I am upgrading this node and just need to get them all out and in a good state... and move a mon, before I put them back in.
[17:46] <wer> either way I can see the data migrate away... but ceph isn't happy about it. Do I need to remove them completely from the crush in order to be completely healthy again? Or should I just be unhealthy until the osd come back up after the upgrade?
[17:46] * loicd (~loic@magenta.dachary.org) Quit (Ping timeout: 480 seconds)
[17:47] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[17:49] <wer> ok. I removed them from the crush and am good.
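
(What wer ended up doing corresponds to the drain-and-remove sequence from the add-or-rm-osds page; a sketch for a single osd, with id 20 illustrative. For a temporary drain, "ceph osd out" on its own, reversed later with "ceph osd in", is usually enough:)

    ceph osd out 20                 # stop mapping data to it and let migration finish
    /etc/init.d/ceph stop osd.20
    ceph osd crush remove osd.20    # only if it is really going away
    ceph auth del osd.20
    ceph osd rm 20
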
[17:51] * rweeks (~rweeks@c-98-234-186-68.hsd1.ca.comcast.net) has joined #ceph
[17:52] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[17:52] * ScOut3R (~ScOut3R@dslC3E4E249.fixip.t-online.hu) Quit (Remote host closed the connection)
[17:52] <renzhi> wer: after reweighting, it's probably rebalancing and remapping
[17:53] <wer> renzhi: it never recovered to a healthy state.... taking them out of the crush all together finally did it....
[17:54] * The_Bishop (~bishop@2001:470:50b6:0:3532:6a57:4b2e:1ad2) has joined #ceph
[17:54] <renzhi> joao: after upgrading to 0.55.1, mons are doing fine, but the osds are not, they seem to timeout a lot, then are marked as down
[17:56] <wer> I got hit with the 0.55.1 in the middle of deploying a test :P I definately have had weirdness throughout with osd's getting in a really strange state. IT seems like once things settle they eventually started acting ok. But now I am updating the node that is running 0.55.....
[17:57] <paravoid> yehudasa: around?
[17:58] <wer> I had to restart them more then once.... and random osd's (out of 24 per node) were being marked down. I ended up starting them one by one and they came up.
[18:02] * gaveen (~gaveen@112.135.11.5) has joined #ceph
[18:04] <wer> If I only have one mon, and need to move it, is it better to add another and take the other one down? Or can I move it.
[18:04] <wer> ?
[18:05] <yehudasa> paravoid: yeah
[18:05] <paravoid> hi
[18:06] <paravoid> quick question
[18:06] <yehudasa> yeah
[18:06] <paravoid> I found an old thread that mentioned some scalability issues with radosgw and specifically with container listings
[18:06] * match (~mrichar1@pcw3047.see.ed.ac.uk) has left #ceph
[18:06] * match (~mrichar1@pcw3047.see.ed.ac.uk) has joined #ceph
[18:06] <paravoid> something about radosgw needing to traverse the whole pool to do a container/bucket listing
[18:07] <yehudasa> paravoid: that's not relevant anymore
[18:07] <paravoid> because tmap was replaced?
[18:07] <yehudasa> we changed that long time ago
[18:07] <rweeks> wer: you always want an odd number of monitors active
[18:07] <rweeks> http://ceph.com/docs/master/architecture/#monitor-quorums
[18:07] <wer> I know :) but I am just trying to move one.
[18:07] <yehudasa> paravoid: no, because we changed the 1:1 pool/bucket ratio and introduced the bucket index
[18:07] <wer> I only have one :)
[18:07] * match (~mrichar1@pcw3047.see.ed.ac.uk) Quit (Quit: Leaving.)
[18:08] <yehudasa> paravoid: tmap replaced by omap is a different scalability issue
[18:08] <paravoid> I thought the 1:1 pool/bucket ratio was related to scaling the number of buckets, not objects?
[18:08] <wer> rweeks: I am waiting on 3 machines to run my mons.... until then I have just been running a single mon on 1 of my four nodes. And I need to upgrade the OS on the node :)
[18:08] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) has joined #ceph
[18:09] <yehudasa> paravoid: both were a problem with that design
[18:09] <rweeks> well one is an odd number. But I'm not sure of the behavior if you bring up a second one
[18:09] <yehudasa> paravoid: bucket creation was problematic, and you could only have that many buckets in the system
[18:09] <yehudasa> paravoid: but also listing objects didn't work right
[18:10] <paravoid> I'm wondering if radosgw would work with a single bucket holding ~150 million files or so
[18:10] <wer> right :) If I shut it down and just move the directory to another host do you think it will start ok? Or should I bring the entire cluster down to do that?
[18:10] <paravoid> not exactly your average setup I suppose :-)
[18:10] <yehudasa> parvoid: I don't see why not, you just have to set your pool correctly (e.g., enough number of pgs)
[18:10] * BManojlovic (~steki@85.222.178.27) has joined #ceph
[18:11] <renzhi> joao: still around?
[18:11] <paravoid> yeah that was the setup were we tried to set 64k pgs and failed miserably :)
[18:11] <paravoid> now it's at 16k, which are probably going to be enough
[18:11] <paravoid> we'll see about that :)
[18:11] <paravoid> (64k pg fail = bug #3617)
[18:11] <yehudasa> hmm.. 64k failed? was that the 16 bit issue?
[18:11] <paravoid> possibly
[18:11] * l0nk (~alex@83.167.43.235) Quit (Quit: Leaving.)
[18:11] <rweeks> wer: I think one of the devs will have to answer that.
[18:13] <joao> <wer> If I only have one mon, and need to move it, is it better to add another and take the other one down? Or can I move it.
[18:13] <joao> fyi, this is a really bad idea
[18:13] <joao> you'll break quorum as soon as you remove the monitor
[18:13] <wer> LOL
[18:13] <yehudasa> paravoid: that's strange, we have a cluster that has 64k pgs set up, I wonder why it's working
[18:14] <joao> you should add 2 other monitors in order to remove the one you want
[18:14] <joao> I mean, get 3 monitors up and running, then remove the one you want to get rid of
[18:14] <wer> joao: ok. That is what I thought I might need to do. Then just remove it and add a third again?
[18:15] <joao> renzhi, sorry, was looking into other stuff; how is your 'ceph -s' looking?
[18:15] <rweeks> that's what I suspected, joao, wer, but I wanted someone to verify
[18:15] <joao> wer, if you only have one monitor and want to remove it, then you should add 2 new monitors to the cluster, bring them up, let them form quorum; only then remove the one you want
[18:16] <joao> that way you'll maintain quorum
[18:16] <joao> if you really want to keep only one monitor around, then you'll have to remove another monitor, but that might get messy
[18:16] <joao> rule of thumbs: use three monitors
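
(The add-or-rm-mons page covers the "add two more" half of this; roughly, per new monitor, with the id "b", the paths and the address all illustrative:)

    mkdir -p /var/lib/ceph/mon/ceph-b              # mon data dir; path is an assumption
    ceph auth get mon. -o /tmp/mon.keyring
    ceph mon getmap -o /tmp/monmap
    ceph-mon -i b --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
    ceph mon add b 10.5.0.191:6789
    ceph-mon -i b --public-addr 10.5.0.191:6789    # or start it via the init script
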
[18:17] <renzhi> joao: not good :(
[18:17] <joao> renzhi, what's happening?
[18:17] <renzhi> mons are doing well, but osds keep on crashing
[18:17] <joao> oh
[18:17] <joao> that's not good, at all
[18:18] <renzhi> no
[18:18] <joao> renzhi, can you get us a log sample?
[18:18] <renzhi> hang on
[18:18] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[18:18] <wer> joao: ok thanks! Yeah I plan on having three... but I didn't yet. But I quess I will :) I will eventually have to move all three once my dedicated mon machines are available. Good stuff. I'll get a quorum going before I move this one.
[18:18] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) has joined #ceph
[18:19] <glowell> joao: Are there still questions about rpms ?
[18:19] <joao> glowell, were about debs, and no, thanks ;)
[18:19] <joao> jmlowe1 pointed out we stashed the debs on 'debian-testing'
[18:20] * janos hopes the ceph repo gets the bobtail love
[18:20] <joao> wer, once you add the new servers, if you already have the monitors set up on other machines, you'll want to redo the same process (adding new monitors, establish quorum, remove monitors) to move the monitors to the other servers
[18:21] <joao> and that reminded me I forgot to update the docs with that kind of info (damn)
[18:22] <renzhi> joao: uploading ceph-osd.69.log.gz
[18:22] <wer> joao: cool thanks. I assumed that was the process. Well I will let you know well that works :)
[18:23] <noob2> i know a few of you guys said you messed around with the intel 910 ssd's right?
[18:23] <noob2> have you seen an instance where ubuntu 12.10 doesn't pick up more than 1 drive? i installed 9 in a box
[18:23] <noob2> it sees 1
[18:23] <renzhi> joao: upload done
[18:24] <joao> k thanks
[18:24] <rweeks> noob2: I suspect the disk controller
[18:24] <rweeks> is it a RAID controller?
[18:24] <noob2> well they are pci express cards
[18:25] <noob2> it *seems* like the raid controller is clobbering the detection of the pci slots
[18:25] <rweeks> I've had a number of linuxes choke on a raid controller that they didn't like
[18:25] <noob2> ah
[18:25] <noob2> this is a dl585 hp raid controller with flash cache
[18:25] <rweeks> so there's onboard RAID on the motherboard as well as PCI express cards?
[18:25] <noob2> right
[18:25] <rweeks> see if you can disable the onboard RAID entirely
[18:26] <noob2> ok i'll give that a shot. the raid card is an external pci card also
[18:26] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[18:26] <rweeks> ah, so not in the motherboard bios?
[18:26] <noob2> there's another one on the motherboard i think
[18:26] <rweeks> yeah
[18:26] <noob2> i saw some sata ports on there
[18:26] <rweeks> one of those two is likely interfering
[18:26] <jmlowe1> you may also want to enable drive cache
[18:26] <rweeks> I had nothing but trouble with onboard RAID vs PCI raid cards on some supermicro motherboards
[18:26] <noob2> yeah i saw it disabled in the bios
[18:27] * n-other (024a4254@ircip1.mibbit.com) has joined #ceph
[18:28] <n-other> hello
[18:28] <n-other> anyone here?
[18:29] <rweeks> no one here but us chickens
[18:29] <noob2> haha
[18:30] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[18:30] <n-other> ))
[18:30] <joao> sjust, around?
[18:31] <n-other> I am getting spurious mons sigterms
[18:31] <n-other> http://pastebin.com/5XX7LBf8
[18:31] <n-other> just as an example
[18:32] <joao> n-other, does you monitor have enough disk space available?
[18:32] <joao> I think it's the second time I see this error today
[18:33] <n-other> joao, hmm. you are right
[18:33] <n-other> damn
[18:33] <joao> what were the odds of both having the same problem on the same day? ;)
[18:34] <n-other> var is used 100%
[18:34] <n-other> heh ))
[18:39] * Leseb (~Leseb@193.172.124.196) Quit (Quit: Leseb)
[18:41] <n-other> I guess this var 100% usage also caused the issue with stuck stat* I referred several hours ago
[18:41] * loicd (~loic@jem75-2-82-233-234-24.fbx.proxad.net) has joined #ceph
[18:41] <wer> step 7 of http://ceph.com/docs/master/rados/operations/add-or-rm-mons/ 'ceph mon add <name> <ip>[:<port>]' is name the host in the config host entry for the mon?
[18:45] <wer> if my config has an entry for [mon.c] is the name mon.c... hmm.
[18:46] <wer> nope... name is just c. The docs drive me crazy.
[18:46] <wer> .... added mon.mon.b at
[18:47] <n-other> wer, just delete it and re-create
[18:48] <n-other> joao, I guess the problem with stuck stat* and reading is not gone
[18:48] <n-other> it's not var related
[18:48] <n-other> tried to start simple copy from the 3rd node
[18:48] <joao> wer> .... added mon.mon.b at
[18:49] <wer> n-other: ceph mon delete mon.mon.b ? give me a fault.
[18:49] <joao> wer, now that's going to be tricky
[18:49] <wer> what have I done!!! :)
[18:49] <wer> I need to add key I bet :)
[18:49] * kYann839 (~kYann@tui75-3-88-168-236-26.fbx.proxad.net) Quit (Quit: irc2go)
[18:49] <joao> wer, the solution is rather simple
[18:49] <wer> listening.
[18:49] <joao> you just have to create a monitor named 'mon.mon.b'
[18:49] <joao> that should work
[18:49] <wer> of course!
[18:50] <wer> ok lemmie try that.
[18:50] <joao> if that doesn't work, we'll have to do it manually
[18:50] <joao> by recreating a new monmap and injecting it into the old monitor
[18:50] <joao> and repeat the new monitor addition
[18:50] <wer> joao: define create? I already added a mon.mon.b :)
[18:51] <n-other> pr02:~# strace -p 6807 Process 6807 attached - interrupt to quit read(4,
[18:51] <n-other> copying is stuck on read
[18:51] <joao> wer, have you brought that monitor up? has quorum been established?
[18:52] <wer> joao: I don't think so. I did ceph mon add... never did ceph-mon -i....
[18:52] <joao> oh
[18:52] <n-other> and ceph -w says: 2012-12-18 23:49:55.237733 mds.0 [INF] closing stale session client.6798 192.168.7.4:0/119698983 after 301.107647
[18:54] <wer> oh my, I can't run any ceph commands any more. I think I broke it.
[18:54] <joao> you broke quorum; you need to bring the new monitor up
[18:54] <wer> I think the config now refers to a mon that doesn't exist.
[18:54] <wer> ok so run ceph-mon -i ???? what is newname?
[18:55] * Cube (~Cube@12.248.40.138) has joined #ceph
[18:55] <joao> wer, have you kept the monmap you obtained around?
[18:55] <wer> yeah, I have it in tmp
[18:55] * benpol (~benp@garage.reed.edu) has joined #ceph
[18:56] <joao> wer, best solution right now would be to kill mon.a, and 'ceph-mon -i a --inject-monmap /tmp/monmap'
[18:56] <joao> and repeat the 'ceph mon add b ...'
[18:57] <wer> ok. I am confused. Is mon.a all screwed up now too? So I tell it to use the map?
[18:59] <joao> mon.a is not screwed up; it knows about a given monitor 'mon.mon.b' that is not up
[18:59] <joao> thus mon.a can't establish quorum, and without quorum your cluster won't work
[18:59] <wer> joao: ok. So we kill it, and the old map will not have that new mon?
[19:00] <joao> no
[19:01] <joao> you'll just have to inject the old map back into mon.a, and it will take off where it was left *prior* to adding the new monitor
[19:01] * drokita (~drokita@199.255.228.10) has joined #ceph
[19:01] <wer> ok. So I don't need to actually kill it. just inject tje old monmap?
[19:01] <joao> you could also recreate mon.b's fs with id 'mon.mon.b' instead
[19:02] <joao> you can't communicate with it without quorum
[19:02] <joao> and I don't think the admin socket supports monmap injection
[19:02] <joao> so you'll have to kill it
[19:02] <joao> and bring it back up with the --inject-monmap option
[19:03] <joao> without quorum that monitor is no good anyway, so killing it and bringing it back up won't make a difference
[19:03] * chutzpah (~chutz@199.21.234.7) has joined #ceph
[19:04] <wer> Ok. I will do that. Then will I still need to clean up the mon.mon.b mess? Or just add it again with the correct <name> which is just "c"
[19:04] <wer> err "b" I meant.
[19:04] <joao> there should be no mess to clean up after injecting the new map
[19:04] <wer> joao: ok. I follow you.
[19:05] <wer> I wish the ceph.conf didn't exist :) It would make understanding these roles a lot easier :)
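
(Pulling joao's recovery recipe together, roughly; the monmap path is the one from the conversation and the address is a placeholder:)

    /etc/init.d/ceph stop mon.a
    ceph-mon -i a --inject-monmap /tmp/monmap   # rewrites the stored monmap and exits
    /etc/init.d/ceph start mon.a
    # then redo the addition with the bare id so the daemon comes up as mon.b:
    ceph mon add b <ip>:6789
    ceph-mon -i b --public-addr <ip>:6789
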
[19:07] <noob2> rweeks: no luck with disabling the hp raid card :(
[19:07] <noob2> i'm thinking maybe i need to isntall the hp tools to get it to show up
[19:07] <wer> joao: mon.a didn't start.
[19:08] <noob2> ubuntu is detecting the raid disk as /dev/sda instead of the usual /dev/ccis like on rhel
[19:08] <wer> ok. I started it. healthy again :)
[19:08] <wer> joao: so now I try again :)
[19:09] <rweeks> I have zero experience with current HP hardware
[19:10] <jmlowe1> what card is it?
[19:10] <noob2> lemme check
[19:11] <jmlowe1> <- uses some hp gear
[19:12] <wer> joao: you should change 'ceph mon add <name> <ip>[:<port>]' to 'ceph mon add <mon-letter> <ip>[:<port>]' in the docs.
[19:12] <noob2> i believe it's an hp p410i
[19:12] <sstan> Does anyone know if there is a safe way to export RBD blocks by iSCSI ?
[19:12] <noob2> with 512mb flash cache
[19:12] <noob2> sstan: http://www.hastexo.com/resources/hints-and-kinks/turning-ceph-rbd-images-san-storage-devices
[19:12] <jmlowe1> I've got some of those
[19:13] <jmlowe1> what's the trouble?
[19:13] <noob2> well i installed 9 intel 910 ssd's into the pci slots
[19:13] <noob2> it only detects 1
[19:13] <noob2> it looks like the hp raid controller clobbers the detection on startup
[19:13] <sstan> noob2: thanks! Before I read that, do you know if it's feasible in a fault-tolerant manner?
[19:13] <joao> wer, yeah, noted; thanks
[19:13] <noob2> i tried disabling it but it didn't change anything
[19:13] <noob2> sstan: yes it's feasible with pacemaker :)
[19:13] <jmlowe1> hmm
[19:13] <sstan> thank you!
[19:13] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[19:14] <noob2> it won't be the fastest thing on the planet but it'll work
[19:14] <wer> joao: so will I not have a quorum until I bring up the third mon?
[19:15] <joao> you'll have quorum as soon as you bring the second monitor up
[19:15] <noob2> jmlowe1: does your 410i's usually show up in ubuntu as /dev/sd* ?
[19:16] * yasu` (~yasu`@soenat3.cse.ucsc.edu) has joined #ceph
[19:17] <houkouonchi-work> noob2: the block devices? yeah they are /dev/sdX
[19:17] <houkouonchi-work> they have a LSI 1068e based SAS/sata controller on them
[19:17] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[19:17] <jmlowe1> noob2: yes, they switched from the cciss driver to the new hpsa driver
[19:18] <noob2> ahh
[19:18] <noob2> interesting
[19:19] <noob2> intel doesn't seem to have any special driver for these things either
[19:19] <jmlowe1> https://www.kernel.org/doc/Documentation/scsi/hpsa.txt
[19:19] <wer> joao: I am confused on how to start mon.b. I think 'ceph-mon -i b --public-addr ip:port but is public-add equivalent to 'mon add' in ceph.conf. IE can I just use init to start it at this point if the config has the mon I am trying to start?
[19:20] <wer> s/public-add/public-port/ sorry I can' type.
[19:20] <joao> init should be able to bring it up
[19:20] <noob2> thanks for the link
[19:21] <noob2> how hard is it to swap to the ccsis driver?
[19:21] <jmlowe1> noob2: when you disabled it did you disable the option rom in the bios?
[19:21] * drokita (~drokita@199.255.228.10) Quit (Read error: Connection reset by peer)
[19:22] * Ryan_Lane (~Adium@216.38.130.167) has joined #ceph
[19:22] <jmlowe1> noob2: depends on the card, for a p410 I'd say impossible
[19:22] <noob2> that'll require a kernel rebuild i'm guessing
[19:23] <wer> joao: weird. mon.a quit complaining... and ceph return ok health... but mon.b isn't actually running.
[19:24] <jmlowe1> http://cciss.sourceforge.net/ by my read, as of 2.6.36 if it loaded the hpsa driver then then the cciss driver won't work
[19:24] <wer> joao: when I started it using init.....
[19:24] <joao> wer, what does 'ceph quorum_status' say?
[19:24] <joao> can you pastebin it?
[19:25] <wer> joao: weird. No now I am getting a fault.... I have some mon.b logs if that will help. I don't think mon.b started. This is weird.
[19:26] <joao> wer, 'ps xau | grep ceph-mon' ?
[19:27] <wer> yeah it isn't running. I could have sworn ceph health returned nothing... then I started using init... then ceph helth returned and mon.a quit complaining. Then when I looked to see if mon.b was running... it wasn't.
[19:28] * n-other (024a4254@ircip1.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[19:28] <noob2> jmlowe1: yeah i disabled it in the bios
[19:28] <wer> joao: heh. yeah it isn't running. but if I start it... a hung ceph health will return OK once :)
[19:29] <joao> wer, can you pastebin mon.b's log?
[19:29] <wer> yeah I am working on it.
[19:32] <wer> mon/OSDMonitor.cc: 1293: FAILED assert(0)... http://pastebin.com/mLj5JyRu
[19:33] <jmlowe1> noob2: so it doesn't pop up with the menu where you can configure it during boot?
[19:34] <joao> wth
[19:34] <joao> wer, give me a minute here
[19:34] <wer> no worries.
[19:35] * verwilst (~verwilst@dD5769628.access.telenet.be) has joined #ceph
[19:35] <occ> hi all- is there a problem w/ using different size osds?
[19:37] <wer> joao: I must have screwed something up.... cause this is strange. I started it again and it is running.... but both a/b are complaining that they are not in a quorum.
[19:37] <joao> wer, can you rerun mon.b again with '--debug-mon 20 --debug-paxos 20 --debug-ms 10' ?
[19:38] <wer> sure. But I was still unclear on <newname> and --public-addr :)
[19:39] <wer> sorry.
[19:40] <joao> I have to step away for a bit
[19:40] <joao> brb
[19:40] <wer> ceph-mon -i b --public-addr 10.5.0.191:6789 if my config has 'mon addr = 10.5.0.190:6789' defined. Sorry I am just unclear what public-addr and name newname mean.
[19:40] <wer> ok
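The sequence wer is working through roughly follows the add-a-monitor procedure from the docs. A sketch, with illustrative paths and the addresses from the discussion above; --public-addr only tells the new daemon which address to bind, and should match what the monitor was added to the monmap with:

    # on the new monitor host: build mon.b's data directory from the current monmap and key
    ceph auth get mon. -o /tmp/keyring
    ceph mon getmap -o /tmp/monmap
    ceph-mon -i b --mkfs --monmap /tmp/monmap --keyring /tmp/keyring
    # register the new monitor and start it
    ceph mon add b 10.5.0.191:6789
    ceph-mon -i b --public-addr 10.5.0.191:6789
    # if it will not join quorum, rerun it with the debug flags joao asked for above
    # (--debug-mon 20 --debug-paxos 20 --debug-ms 10) and check quorum from any client:
    ceph quorum_status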
[19:43] * loicd (~loic@jem75-2-82-233-234-24.fbx.proxad.net) Quit (Quit: Leaving.)
[19:45] <noob2> jmlowe1: i just tried ubuntu 12.04 and all the drives show up :D
[19:46] <noob2> which puts me in a pickle. i need a 3.5+ kernel for lio utils
[19:46] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[19:46] <jmlowe1> huh, what kernel were you using before?
[19:47] <noob2> well i was trying to use ubuntu 12.10
[19:47] <noob2> something broke between 12.04 and 12.10
[19:47] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[19:47] <jmlowe1> how about this one? http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.7.1-raring/
[19:47] <noob2> is that a ppa i can just install?
[19:47] <noob2> oh nice he has debs
[19:47] <noob2> lemme try to install 12.04 and then see if i can install that deb
[19:47] <noob2> best of both worlds :D
[19:48] <jmlowe1> the ubuntu kernel guys auto build all of the stock kernels to check regressions
[19:48] <noob2> oh ok
[19:48] <jmlowe1> that's what that ppa is
[19:48] <noob2> maybe it was a tool in the newer version that broke something
[19:48] <jmlowe1> I'm using the 3.7.0 from that ppa
[19:48] <noob2> the thing lights up like a christmas tree with 12.04's live disk
[19:48] <jmlowe1> with quantal
[19:48] <noob2> cool
[19:48] <wer> joao: hmm... I got it going. trying another.
[19:48] <noob2> i see
[19:49] <noob2> i could try that first, that would be faster
[19:49] <noob2> install that with quantal
[19:49] <noob2> if that bombs, try to use 12.04 with that kernel
[19:49] <jmlowe1> yeah, I just grab the debs and dpkg -i *3.7.1*
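The kernel-ppa URL above is a directory of pre-built mainline kernel packages rather than an apt PPA, so the usual route is to download the debs and install them directly. A sketch; the exact filenames in the v3.7.1-raring directory vary by build date:

    # download the linux-image generic amd64 deb plus the matching linux-headers
    # debs (generic amd64 and the 'all' package) from v3.7.1-raring, then:
    sudo dpkg -i linux-*3.7.1*.deb
    sudo reboot     # boot into the newly installed kernel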
[19:49] <noob2> nice
[19:49] <noob2> alright lemme try that. you the man :D
[19:50] <jmlowe1> I think the be2net driver was broken sometime after 12.04, that and btrfs is why I try to stick to the main line
[19:53] <noob2> yeah probably best to do that
[19:53] <sjust> renzhi: can can you post the osd log from one of the crashed nodes?
[19:54] <joao> sjust, renzhi sent me this one earlier today http://tracker.newdream.net/attachments/596/ceph-osd.69.log.gz
[19:56] * wschulze (~wschulze@static-108-12-138-23.nycmny.east.verizon.net) has joined #ceph
[19:57] * stxShadow (~Jens@ip-178-201-147-146.unitymediagroup.de) has joined #ceph
[19:57] <noob2> jmlowe1: do you know where the headers are that go with that kernel? dpkg is complaining
[19:57] <noob2> nvm i think it was just a warning
[20:01] * wschulze (~wschulze@static-108-12-138-23.nycmny.east.verizon.net) Quit ()
[20:02] <wer> joao: hmmm... I guess mon.b keeps dying. It was working for a little while now .... I am going to try to restore to the single mon again. This is super scary.
[20:04] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Quit: tryggvil)
[20:04] <jmlowe1> noob2: install *amd64.deb and *all.deb
[20:04] <jmlowe1> noob2: 4 debs in total
[20:05] * loicd (~loic@jem75-2-82-233-234-24.fbx.proxad.net) has joined #ceph
[20:08] * nhorman (~nhorman@nat-pool-rdu.redhat.com) Quit (Quit: Leaving)
[20:18] <wer> joao: I think one of the nodes was blocking its port 6789..... perhaps that is why it died eventually. I have a quorum of three now. weird.
[20:19] <joao> cool
[20:19] <joao> I have to go now
[20:19] <joao> later #ceph
[20:19] <houkouonchi-work> joao: get back to work!
[20:19] <wer> ty joao !
[20:19] <houkouonchi-work> =P
[20:20] <houkouonchi-work> all work and no irc makes jack a dull boy
[20:20] <noob2> jmlowe1: yeah i think i missed all.deb
[20:21] <noob2> would you suggest 12.04 for the new ceph cluster i'm setting up?
[20:21] * Meths_ is now known as Meths
[20:21] <noob2> i think bobcat is due out next week if i remember right
[20:21] <Kioob> bobtail ?
[20:22] <noob2> that's it
[20:22] <noob2> starts with a b :)
[20:23] * verwilst (~verwilst@dD5769628.access.telenet.be) Quit (Quit: Ex-Chat)
[20:23] <noob2> i'm thinking i will wait until bobtail lands for the build
[20:26] * benpol (~benp@garage.reed.edu) has left #ceph
[20:26] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[20:32] * loicd (~loic@jem75-2-82-233-234-24.fbx.proxad.net) Quit (Quit: Leaving.)
[20:33] * yoshi_ (~yoshi@80.30.51.242) Quit (Remote host closed the connection)
[20:34] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) Quit (Quit: Leaving.)
[20:35] <noob2> the iop improvement in the blog post is impressive!
[20:43] * noob2 (~noob2@ext.cscinfo.com) Quit (Read error: Connection reset by peer)
[20:44] * noob2 (~noob2@ext.cscinfo.com) has joined #ceph
[20:46] * noob2 (~noob2@ext.cscinfo.com) Quit ()
[20:47] <wer> I can't remove a monitor once I stop it because I get a fault.... hmm. This is tricky.
[20:50] * jjgalvez (~jjgalvez@12.248.40.138) has joined #ceph
[20:51] * noob2 (~noob2@ext.cscinfo.com) has joined #ceph
[20:51] * dmick (~dmick@2607:f298:a:607:55f2:b245:82d:3a20) has joined #ceph
[20:52] <noob2> jmlowe1: i found the regression. it's the kernel. 12.04 works until i update the kernel to 3.7.1
[20:52] <noob2> i'm going to try 3.5 and pray it works haha
[20:52] <jmlowe1> I'm sure the ubuntu kernel guys would like to hear it
[20:52] <noob2> yeah i should file a bug
[20:52] <janos> woah there's a 3.71 out?
[20:52] <janos> 3.7.1
[20:53] <noob2> ya :)
[20:53] * janos will avoid for a little while
[20:53] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[20:53] <jmlowe1> can't put out a new kernel without somebody having a panic attack and throwing out a couple of late bug fixes
[20:53] <noob2> hehe
[20:53] <stxShadow> 12.04 with 3.6.10 works fine here
[20:54] <dmick> "but wait, just the one more fix!"
[20:54] <noob2> stxShadow: yeah. my problem is with a pci card not detecting
[20:54] <stxShadow> ah .... i see
[20:54] <stxShadow> which one ?
[20:54] <noob2> intel 910 ssd's
[20:54] <noob2> one card will detect and nothing else on the newer kernel
[20:54] <noob2> the old one sees all 9 of them
[20:55] <noob2> so is bobtail going to be the christmas special ceph ? :)
[20:55] <stxShadow> i dont think so
[20:56] <noob2> getting pushed back to jan?
[20:56] <noob2> i'm eager to build my new ceph cluster
[20:56] <stxShadow> -> 0.55 has only been out a short time -> they will have to do a lot of qa for bobtail
[20:56] <noob2> just got the hardware racked today
[20:56] <noob2> so i should stick with argonaut then?
[20:57] <jmlowe1> I'd do 0.55
[20:57] <noob2> so i'd need 12.10 ubuntu
[20:57] <noob2> think that's stable enough for a production build?
[20:57] <stxShadow> depends on your workload and pressure to get it running ;)
[20:57] <noob2> not too much pressure but people are excited to use it
[20:57] <stxShadow> all testing is done with 12.04
[20:57] <stxShadow> and 12.04 is recommended
[20:57] <noob2> yeah i thought that was the case
[20:57] <stxShadow> cause its long term stable
[20:58] <noob2> i'd imagine once bobtail comes out there would be an upgrade path right?
[20:58] <stxShadow> yes ..... but not for format 1 to 2 images
[20:59] <noob2> format 1 to 2?
[20:59] <stxShadow> if you use rbd
[20:59] <jmlowe1> stxShadow: that's just rbd
[20:59] <noob2> yeah that's all i'd use
[20:59] <stxShadow> format 1 -> old format (argonaut)
[20:59] <dmick> well they don't auto-upgrade, no, but you can export/import, and you can continue to use format 1 images with bobtail and later
[20:59] <jmlowe1> planning on cloning block devices?
[20:59] <dmick> you may want the newer format 2 features, but, if so, you can move the images
[20:59] <noob2> probably don't need cloning that i can think of
[21:00] <stxShadow> format 2 -> cloning etc
[21:00] <noob2> gotcha
[21:00] <jmlowe1> won't make much difference to you then
[21:00] <noob2> ok
[21:00] <stxShadow> cloning will be a nice feature for rolling out larger amounts of rbd images
[21:00] <stxShadow> or even faster install
[21:01] <noob2> for cloudstack ?
[21:01] <jmlowe1> dmick: move the images? you mean rbd mv can change the version?
[21:01] <stxShadow> don't think that will work
[21:02] <stxShadow> sage wrote on the devel list, that there is no migration path
[21:02] <jmlowe1> noob2: quick vm provisioning is the best use of cloning I can think of
[21:02] <stxShadow> except -> export and reimport
[21:02] <dmick> jmlowe1: no, export/import
[21:02] <jmlowe1> yeah that makes more sense
[21:02] <stxShadow> ;)
[21:02] <noob2> yeah that's all i can think to use it for also
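As dmick notes, there is no in-place conversion between image formats; migrating means exporting the format 1 image and importing it as a new format 2 image. A sketch with illustrative names, assuming a bobtail-era rbd that accepts --format on import:

    rbd export rbd/vm-disk /tmp/vm-disk.img          # dump the format 1 image to a file
    rbd import --format 2 /tmp/vm-disk.img rbd/vm-disk-v2
    # after verifying the new image, drop the old one and rename
    rbd rm rbd/vm-disk
    rbd rename rbd/vm-disk-v2 rbd/vm-disk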
[21:02] * Psi-jack (~psi-jack@psi-jack.user.oftc.net) Quit (Ping timeout: 480 seconds)
[21:03] * zynzel (zynzel@spof.pl) Quit (Remote host closed the connection)
[21:03] <stxShadow> even to save space
[21:03] <stxShadow> if you dont flatten the snapshot
[21:06] * Kioob (~kioob@luuna.daevel.fr) Quit (Remote host closed the connection)
[21:06] <dmick> quick provisioning and space savings, yes, those are the wins
[21:07] <dmick> which also translate to "inexpensive experimentation"
[21:07] <stxShadow> noob2 -> are you going to host ceph on ssd drives only ?
[21:07] * Kioob (~kioob@luuna.daevel.fr) has joined #ceph
[21:07] <stxShadow> dmick: right
[21:08] <stxShadow> our deployment time drops from 2 minutes to 10 Seconds for a 50 G VM
[21:08] <janos> is format 2 something that's only in bobtail?
[21:08] <stxShadow> (resizing etc included)
[21:08] <janos> ie: no lower version yet. i've been testing .52 since that's the most the ceph fedora repo has
[21:09] <stxShadow> janos ... no 0.55 has this feature too
[21:09] * janos hopes for ceph repo update
[21:09] <stxShadow> maybe even earlier versions
[21:10] <janos> ah, i should go back and look at release notes
[21:10] <stxShadow> build your own packages ;)
[21:10] <janos> i'm trying to avoid that
[21:10] <dmick> 54 iirc
[21:10] <dmick> not sure
[21:10] <janos> this is such a small fraction of my duties that i'm trying to keep it in check
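Going back to the cloning workflow stxShadow described: the usual pattern for fast VM provisioning is to snapshot a golden image once, protect the snapshot, and then create a copy-on-write clone per VM, which is where both the deployment-time and space savings come from. A sketch with illustrative names (needs format 2 images):

    # one-time: snapshot and protect the golden image
    rbd snap create rbd/ubuntu-golden@base
    rbd snap protect rbd/ubuntu-golden@base
    # per VM: clone the snapshot and resize as needed
    rbd clone rbd/ubuntu-golden@base rbd/vm0042-disk
    rbd resize --size 51200 rbd/vm0042-disk      # grow the clone to 50 GB (size in MB)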
[21:13] * gaveen (~gaveen@112.135.11.5) Quit (Remote host closed the connection)
[21:15] <stxShadow> hehe ... ok
[21:19] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[21:29] * loicd (~loic@magenta.dachary.org) has joined #ceph
[21:32] * triqon (~chatzilla@2001:980:a085:1:9013:4823:c100:a6a2) Quit (Quit: ChatZilla 0.9.89 [Firefox 17.0.1/20121128204232])
[21:42] * Psi-jack (~psi-jack@psi-jack.user.oftc.net) has joined #ceph
[21:49] * fzylogic (~fzylogic@69.170.166.146) has joined #ceph
[21:49] * Steki (~steki@85.222.179.85) has joined #ceph
[21:49] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[21:50] * loicd (~loic@magenta.dachary.org) has joined #ceph
[21:51] * gucki (~smuxi@46-126-114-222.dynamic.hispeed.ch) has joined #ceph
[21:51] <gucki> hey guys
[21:52] * roald (~Roald@87.209.150.214) has joined #ceph
[21:52] <gucki> is there any new release date for bobtail? i'm asking because i'll need to setup a new ceph cluster soon, but i'd wait for bobtail if it'll be available within the next 1-2 weeks... :)
[21:55] * BManojlovic (~steki@85.222.178.27) Quit (Ping timeout: 480 seconds)
[22:10] <noob2> stxShadow: i wasn't planning on it no
[22:11] <noob2> stxShadow: although the thought has crossed my mind :D
[22:12] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[22:13] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[22:19] <stxShadow> :)
[22:22] <noob2> esp after reading that blog post about ceph supporting 22K iops now on the osd's
[22:27] <sjust> noob2: only on the lowest level against a ramdisk!
[22:27] <sjust> not in real life
[22:27] <noob2> ah
[22:27] <noob2> i see :)
[22:28] <noob2> progress is progress though right?
[22:28] <sjust> that said, against an ssd, there probably is a genuine improvement
[22:28] * miroslav (~miroslav@c-98-248-210-170.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[22:39] <Psi-jack> hmm
[22:39] * Psi-jack is now known as Psi-Jack
[22:39] <Psi-Jack> That was weird..
[22:40] <Psi-Jack> Several of my VM's just became unresponsive out of nowhere, connecting via network resulted in "No route to host", then, several minutes later.. It's all running again. like. WTF? ;)
[22:40] <jmlowe1> what kind of nic?
[22:41] * aliguori (~anthony@32.97.110.59) Quit (Quit: Ex-Chat)
[22:42] <jmlowe1> I had trouble with the older firmware of hp nc522sfp, card would crash and reset giving similar symptoms
[22:42] <Psi-Jack> virtio. :)
[22:42] <Psi-Jack> It was my virtualized cluster that was acting up/
[22:45] <Psi-Jack> 42m of downtime, just out of the blue.
[22:46] <Psi-Jack> Before that was 12m downtime.
[22:46] * roald_ (~Roald@87.209.150.214) has joined #ceph
[22:48] * vjarjadian (~IceChat7@5ad6d005.bb.sky.com) has joined #ceph
[22:51] <vjarjadian> so anything been happening with ceph recently?
[22:51] * stxShadow (~Jens@ip-178-201-147-146.unitymediagroup.de) Quit (Read error: Connection reset by peer)
[22:52] <ircolle> http://ceph.com/dev-notes/whats-new-in-the-land-of-osd/
[22:53] * steve (~astalsi@c-69-255-38-71.hsd1.md.comcast.net) has joined #ceph
[22:54] * roald (~Roald@87.209.150.214) Quit (Ping timeout: 480 seconds)
[22:54] <wer> I removed and attempted to delete 24 osd's and they still show as up....
[22:54] * steve is now known as Guest1894
[22:54] <wer> following this http://ceph.com/docs/master/rados/operations/add-or-rm-osds/
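The procedure in that doc boils down to the steps below (osd.12 is illustrative); an OSD only disappears from the maps after the crush remove and osd rm steps, which may be why the removed OSDs still showed up:

    ceph osd out 12                    # stop placing data on it; triggers backfill
    /etc/init.d/ceph stop osd.12       # on the host that carries it
    ceph osd crush remove osd.12       # remove it from the CRUSH map
    ceph auth del osd.12               # delete its authentication key
    ceph osd rm 12                     # remove it from the OSD map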
[22:54] * Guest1894 is now known as astalsi
[22:57] * roald_ (~Roald@87.209.150.214) Quit (Read error: Connection reset by peer)
[22:57] * roald_ (~Roald@87.209.150.214) has joined #ceph
[22:58] * roald__ (~Roald@87.209.150.214) has joined #ceph
[23:01] * roald (~Roald@87.209.150.214) has joined #ceph
[23:05] * roald_ (~Roald@87.209.150.214) Quit (Read error: Operation timed out)
[23:06] <vjarjadian> the dev blog looks good...
[23:07] * roald__ (~Roald@87.209.150.214) Quit (Ping timeout: 480 seconds)
[23:08] <wer> I mark the osd down and it still shows as up. This is tremendous fun.
[23:09] <janos> sounds like an Edgar Allen Poe story
[23:09] <janos> the stain of that dead OSD will haunt you
[23:09] <vjarjadian> sounds great... especially when that can mean your data isn't shuffled around and reduplicated
[23:09] <wer> :)
[23:09] * roald_ (~Roald@87.209.150.214) has joined #ceph
[23:09] <wer> yeah. I am displeased.
[23:09] <janos> wer - what version?
[23:10] <wer> 0.55 on the node I am trying to take out.
[23:10] <vjarjadian> i'm looking forward to when Ceph can tolerate being run with multiple sites over WAN...
[23:10] <wer> janos: scratch that. actually 0.55-1
[23:12] <wer> They are all down now...... weird.
[23:12] <janos> they are trolling you
[23:12] <wer> LOL this whole thing is a troll! :)
[23:13] <wer> I hate having to bug devs just to figure out syntax and methodology :)
[23:13] <dmick> wer: you can't just mark it down; it'll come back up on its own if there are no problems
[23:13] * noob2 (~noob2@ext.cscinfo.com) has left #ceph
[23:13] <dmick> you have to set noup on the cluster if you want down to stick on a healthy running osd
[23:13] <ircolle> "I'm not dead yet!"
[23:13] <dmick> "shut up, you'll be stone dead in a week"
[23:13] <vjarjadian> it went out for a night out and came back 5 mins late :)
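In other words, 'ceph osd down' only flips the OSD's state in the map; a healthy daemon will report in and be marked up again unless the cluster-wide noup flag is set first. A sketch (OSD id illustrative):

    ceph osd set noup       # cluster-wide: OSDs may not mark themselves up
    ceph osd down 12        # the down state now sticks even though the daemon is healthy
    # ... do whatever required the OSD to stay down ...
    ceph osd unset noup     # allow OSDs to come back up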
[23:13] <wer> Well I had completely removed it AFAIK.
[23:14] * rweeks (~rweeks@c-98-234-186-68.hsd1.ca.comcast.net) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[23:14] <wer> you guys are funny.
[23:14] <ircolle> This from the guy whose name is wer@youfarted?
[23:14] <wer> :) you saw that huh?
[23:14] <ircolle> :-)
[23:14] <vjarjadian> the roadmap page says 'coming soon'... any idea what the devs are working on at the moment?
[23:15] <ircolle> yes
[23:15] <ircolle> talking to people in IRC
[23:15] <wer> and thank god for that!
[23:15] <dmick> lol
[23:16] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[23:16] * roald (~Roald@87.209.150.214) Quit (Ping timeout: 480 seconds)
[23:18] * roald_ (~Roald@87.209.150.214) Quit (Read error: Operation timed out)
[23:18] <ircolle> vjarjadian - check out this slideshare, especially later slides: http://www.slideshare.net/Inktank_Ceph
[23:18] <wer> anyone want to tell me I did this all wrong? http://pastebin.com/VjxQY0LM
[23:22] <wer> http://pastebin.com/SDUaVCL1 actually... is what I did.
[23:23] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) has joined #ceph
[23:23] * wschulze (~wschulze@199.108.68.9) has joined #ceph
[23:26] * aliguori (~anthony@cpe-70-113-5-4.austin.res.rr.com) has joined #ceph
[23:26] <wer> devops are friendly.....
[23:29] <vjarjadian> well... nothing obvious on the roadmap that would make it more WAN friendly. maybe they'll add something in a later version
[23:30] <ircolle> vjarjadian - is there a specific issue you're worried about?
[23:32] <vjarjadian> well, all the ceph stuff i've read says it writes to all the blocks before saying the write is complete... in my environment currently i have 2 sites... one is offsite backup/testing network and the other is the production network... if it had to write all data to both sites before continuing it would be extremely slow...
[23:34] <vjarjadian> having it write one copy and then continuing would be better for me... obviously not ideal for all, but having it sync up later on as part of the self healing would be nice for me...
[23:35] <wer> vjarjadian: are you managing replication with your client on writes?
[23:35] <vjarjadian> currently i'm using a sync program once a day... having something like ceph would be an advantage... especially if we expanded to a third site
[23:35] <vjarjadian> the self repair facility would be very powerful to have...
[23:39] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[23:39] <vjarjadian> if a site went down i could just join another server and it would immediately rebalance onto it... at WAN speeds but still automatic :)
[23:40] <ircolle> http://www.slideshare.net/Inktank_Ceph/ceph-lisa12-presentation Slide 72
[23:41] <vjarjadian> hmm... how did i miss that ROFL
[23:41] * Ryan_Lane (~Adium@216.38.130.167) Quit (Quit: Leaving.)
[23:42] * Ryan_Lane (~Adium@216.38.130.167) has joined #ceph
[23:42] <ircolle> That's why we're here :-)
[23:42] <vjarjadian> well, at least i can keep my eye on ceph and start running some virtual tests before that comes out
[23:45] * dpippenger (~riven@cpe-76-166-221-185.socal.res.rr.com) Quit (Remote host closed the connection)
[23:45] <dmick> wer: the procedure doesn't say to reweight for out; just out, and watch to see the data move
[23:46] <wer> dmick: can't that be explosive?
[23:46] <wer> :)
[23:47] <lurbs> dmick: I've seen OSDs bounce back into the cluster, with 'wrongly marked me down' messages after a 'ceph osd out $x'.
[23:47] <slang> Around?
[23:48] <slang> Psi-Jack: around?
[23:48] * slang eyes himself warily
[23:48] <dmick> and I'm not sure what /etc/init.d/ceph stop does with no args, but I *think* it's "stop all ceph daemons on this host"; if the host only had osds that were already marked out on it, that would be safe, but that's not obvious from your notes
[23:49] <wer> dmick: it will not stop the mon.
[23:49] <dmick> wer: it shouldn't be, according to the docs
[23:49] <dmick> wer: why not?
[23:49] * wschulze (~wschulze@199.108.68.9) has left #ceph
[23:49] <dmick> lurbs: are you sure it was after ceph osd out? or ceph osd down?
[23:49] <dmick> with down, you have to set noup on the cluster, or they will rise from the dead
[23:50] <dmick> with out, I don't think they're supposed to rejoin until told
[23:50] <lurbs> Yeah, sorry, down.
[23:50] <dmick> yeah. that's different.
[23:50] <lurbs> out triggered an immediate backfill, which I didn't want.
[23:50] <dmick> and perhaps not all that obvious.
[23:50] <wer> dmick: I guess the mon likes to be there? I have not dug into it... I think I just noticed and ran stop again and then it took down the mon on that host.
[23:50] <dmick> and, yes; with out, it starts moving data right away
[23:50] * gucki (~smuxi@46-126-114-222.dynamic.hispeed.ch) Quit (Remote host closed the connection)
[23:50] <dmick> wer: if you have no mon, you have no cluster to talk to
[23:51] <dmick> or if you have too few mons for a quorum
[23:51] <lurbs> Which is fair enough, it's a different sort of state.
[23:51] <wer> dmick: right. So I think init with no args does not stop the mon.
[23:51] <dmick> I dunno. I think it does. I'll try it
[23:51] <lurbs> And noup is a per cluster, not a per OSD, setting?
[23:52] <lurbs> Looks that way.
[23:52] <dmick> wer: it sure does. gets a list of all daemons on this host by parsing ceph.conf
[23:52] <dmick> if you want to stop a particular osd, or all osds on the host, you can give it args
[23:52] <dmick> but otherwise it's "every daemon on this host"
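For reference, the sysvinit script takes an optional daemon type or type.id, so it can be more selective than stopping everything on the host; it reads ceph.conf to find which daemons the host carries. A sketch:

    /etc/init.d/ceph stop              # every ceph daemon defined for this host (mon, osd, mds)
    /etc/init.d/ceph stop osd          # only the OSDs on this host
    /etc/init.d/ceph stop osd.12       # a single named daemon
    /etc/init.d/ceph stop mon          # only the monitor(s) on this host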
[23:53] <dmick> noup is a cluster setting, yes. refers to the OSD rejoining the cluster, applies to all
[23:55] <wer> dmick: you sure :) My mon would never die. I was running it on the same host as a bunch of osd's. /etc/init.d/ceph stop always left the mon running?
[23:56] <wer> root
[23:56] <dmick> well, you can try -v, but it certainly tries to kill it
[23:56] <dmick> at least mine does
[23:56] <dmick> the mon's in the ceph.conf, and was started with init.d?
[23:57] <dmick> (and you can read the script, of course)
[23:57] <wer> yeah. I will take another look :) I had to read the script to make radosgw work....

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.