#ceph IRC Log

IRC Log for 2013-12-19

Timestamps are in GMT/BST.

[0:04] * DarkAceZ (~BillyMays@50-32-43-159.drr01.hrbg.pa.frontiernet.net) Quit (Ping timeout: 480 seconds)
[0:13] * dmsimard (~Adium@ap02.wireless.co.mtl.iweb.com) has joined #ceph
[0:13] * dmsimard (~Adium@ap02.wireless.co.mtl.iweb.com) Quit ()
[0:16] * sarob (~sarob@205.234.30.69) has joined #ceph
[0:19] * dmsimard1 (~Adium@108.163.152.2) Quit (Ping timeout: 480 seconds)
[0:20] * yanzheng (~zhyan@134.134.139.74) has joined #ceph
[0:23] * sarob (~sarob@205.234.30.69) Quit (Remote host closed the connection)
[0:26] * yanzheng (~zhyan@134.134.139.74) Quit (Remote host closed the connection)
[0:26] * clayb (~kvirc@199.172.169.97) Quit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
[0:30] * gregsfortytwo (~Adium@2607:f298:a:607:b4d6:27ea:1e5a:65ee) has joined #ceph
[0:31] * mwarwick (~mwarwick@2407:7800:400:1011:6e88:14ff:fe48:57e4) has joined #ceph
[0:32] * mwarwick (~mwarwick@2407:7800:400:1011:6e88:14ff:fe48:57e4) has left #ceph
[0:40] <loicd> http://camlistore.org looks sexy I wonder how ceph fits in this. Just discovered camlistore tonight ;-)
[0:41] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) has left #ceph
[0:42] * yanzheng (~zhyan@134.134.139.74) has joined #ceph
[0:42] * darkfader (~floh@88.79.251.60) Quit (Read error: Connection reset by peer)
[0:43] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:45] * darkfader (~floh@88.79.251.60) has joined #ceph
[0:45] * DarkAce-Z is now known as DarkAceZ
[0:48] * loicd lags a lot
[0:54] * markbby (~Adium@168.94.245.1) Quit (Quit: Leaving.)
[0:56] * linuxkidd__ (~linuxkidd@cpe-066-057-020-180.nc.res.rr.com) has joined #ceph
[0:59] * linuxkidd_ (~linuxkidd@rrcs-70-62-120-189.midsouth.biz.rr.com) Quit (Ping timeout: 480 seconds)
[1:00] * ScOut3R (~scout3r@4E5C7421.dsl.pool.telekom.hu) Quit ()
[1:11] * xmltok_ (~xmltok@cpe-23-240-222-226.socal.res.rr.com) has joined #ceph
[1:12] * xmltok_ (~xmltok@cpe-23-240-222-226.socal.res.rr.com) Quit ()
[1:12] * yanzheng (~zhyan@134.134.139.74) Quit (Remote host closed the connection)
[1:14] <ircolle> loicd - still around?
[1:14] <loicd> yes
[1:14] <ircolle> http://video.renater.fr/jres/2013/index.php?play=jres2013_article_48_720p.mp4
[1:15] <loicd> would you like me to translate ? :-)
[1:15] <ircolle> It's becoming clear to me that we could use an official version of the architecture docs translated into French and other languages. Perhaps a good thing for the User Committee to look into?
[1:15] * Pedras (~Adium@216.207.42.134) Quit (Quit: Leaving.)
[1:16] <loicd> certainly worth trying to see if there is interest indeed
[1:17] <ircolle> Seems it would be nice to offer it for download so multiple users don't have to repeat the pain :-)
[1:18] * flaxy (~afx@78.130.171.68) has left #ceph
[1:19] * DarkAce-Z (~BillyMays@50-32-40-56.drr01.hrbg.pa.frontiernet.net) has joined #ceph
[1:22] <loicd> ircolle: I sent a mail to Yann to invite him to share a link to his talk.
[1:22] * mattbenjamin (~matt@aa2.linuxbox.com) Quit (Quit: Leaving.)
[1:23] <loicd> after looking around I'm under the impression that camlistore tries to do too many things at once. It's a nice pet project though.
[1:24] * DarkAceZ (~BillyMays@50-32-44-201.drr01.hrbg.pa.frontiernet.net) Quit (Ping timeout: 480 seconds)
[1:24] * linuxkidd__ (~linuxkidd@cpe-066-057-020-180.nc.res.rr.com) Quit (Ping timeout: 480 seconds)
[1:25] * ikla (~lbz@c-71-237-62-220.hsd1.co.comcast.net) has joined #ceph
[1:25] * ikla (~lbz@c-71-237-62-220.hsd1.co.comcast.net) Quit (Remote host closed the connection)
[1:25] <ircolle> Thanks, loicd - I'm curious why he's using RAID5 on the OSDs (from what I can garner not knowing French)
[1:25] * ikla (~lbz@c-71-237-62-220.hsd1.co.comcast.net) has joined #ceph
[1:26] * xmltok (~xmltok@cpe-23-240-222-226.socal.res.rr.com) Quit (Quit: Leaving...)
[1:26] <ikla> anyone avail. ?
[1:26] <loicd> http://dachary.org/?p=2087 ; In each machine, the physical disks are grouped using RAID5 to minimize the probability of a failure. For instance, 12 disks are divided into 3 RAID5 arrays, each containing 4 disks. The operating system only sees three disks. This is redundant with the service provided by Ceph, and if new hardware had to be bought it would be recommended not to purchase a RAID5 controller, to reduce the cost. However, the hardware is already
[1:26] <loicd> available and using RAID5 reduces the probability of a Ceph failure. If Ceph or the underlying components are to fail, chances are they will do so while recovering from an OSD failure. By taking advantage of the existing RAID5, the odds of this event happening are reduced. It can save a few hours of work over a year.
[1:26] <loicd> ircolle: ^
[1:27] * linuxkidd__ (~linuxkidd@cpe-066-057-020-180.nc.res.rr.com) has joined #ceph
[1:27] <ircolle> loicd - thank you!
[1:29] * ircolle (~Adium@2601:1:8380:2d9:64e1:b0e8:1b45:4345) Quit (Quit: Leaving.)
[1:30] <ikla> i got 10 servers with 8 4TB disks and 2 small drives for the os, what would be the recommended setup for ceph?
[1:32] <pmatulis2> ikla: 8 OSDs per server, you would also need 3 MONs but i wouldn't waste those large servers on them
[1:33] <pmatulis2> ikla: if you don't have any other h/w then you could run the MONs on 3 of those servers as well
[1:33] <loicd> this goes back to http://tracker.ceph.com/issues/6301 which turns out to be an xfs bug that was probably fixed two weeks ago http://oss.sgi.com/archives/xfs/2013-12/msg00087.html
[1:33] <ikla> I got other servers
[1:33] <ikla> :)
[1:33] <pmatulis2> ikla: oh goodie
[1:36] <loicd> ikla: like pmatulis2 says ;-)
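As a rough sketch of the layout pmatulis2 describes (the hostnames mon1-mon3 and node1 and the disk paths sdb-sdi are assumptions, not from the log), a ceph-deploy run with one OSD per data disk might look like:

    ceph-deploy new mon1 mon2 mon3
    ceph-deploy install mon1 mon2 mon3 node1
    ceph-deploy mon create-initial
    # one OSD per 4TB data disk; the two small drives stay for the OS
    ceph-deploy osd create node1:sdb node1:sdc node1:sdd node1:sde
    ceph-deploy osd create node1:sdf node1:sdg node1:sdh node1:sdi

The osd create step would then be repeated for each of the ten storage servers.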
[1:45] * zerick (~eocrospom@190.187.21.53) Quit (Remote host closed the connection)
[1:47] * jks (~jks@3e6b5724.rev.stofanet.dk) Quit (Ping timeout: 480 seconds)
[1:48] <nwf> Two dumb questions for the channel: how hard is it to move a MDS from one machine to another (it's OK if the move is not live), and are replicated MDSes going to be supported in production "soon"?
[1:50] * tsnider (~tsnider@ip68-102-128-87.ks.ok.cox.net) has joined #ceph
[1:53] * tsnider1 (~tsnider@198.95.226.40) has joined #ceph
[1:53] <iggy> I imagine it's probably safe to run multiple MDSes for the time it takes to move from one server to another
[1:55] <nwf> What's unsafe about multiple MDSes anyway?
[1:56] <iggy> iirc, recovery
[1:56] <iggy> which you hopefully wouldn't hit in the short span of moving MDSes
[1:57] <iggy> but you may want to get a second opinion, I've been somewhat out of the loop for a bit ceph wise
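A sketch of the move iggy is describing, assuming a hypothetical new host named mds2 managed with ceph-deploy; one possible sequence rather than an official procedure:

    # bring up a second MDS; it joins as a standby while the old one stays active
    ceph-deploy mds create mds2
    # wait until the new daemon shows up as a standby
    ceph mds stat
    # then stop the old ceph-mds on the original host (sysvinit shown; upstart/systemd differ)
    service ceph stop mds

The standby should take over shortly after the old active daemon goes away.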
[1:58] * tsnider (~tsnider@ip68-102-128-87.ks.ok.cox.net) Quit (Ping timeout: 480 seconds)
[1:58] * JC (~JC@nat-dip5.cfw-a-gci.corp.yahoo.com) Quit (Quit: Leaving.)
[2:00] * portante (~portante@nat-pool-bos-t.redhat.com) has joined #ceph
[2:03] * yanzheng (~zhyan@jfdmzpr04-ext.jf.intel.com) has joined #ceph
[2:06] * mattbenjamin (~matt@76-206-42-105.lightspeed.livnmi.sbcglobal.net) has joined #ceph
[2:06] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[2:08] * gregsfortytwo (~Adium@2607:f298:a:607:b4d6:27ea:1e5a:65ee) Quit (Ping timeout: 480 seconds)
[2:12] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[2:17] * LeaChim (~LeaChim@host81-159-251-38.range81-159.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:18] * angdraug (~angdraug@12.164.168.116) Quit (Quit: Leaving)
[2:24] * xarses (~andreww@64-79-127-122.static.wiline.com) Quit (Ping timeout: 480 seconds)
[2:25] <via> has anyone seen giant floods in the kernel log starting with "front: " that look like packet dumps when using ceph?
[2:25] * sagelap (~sage@243.sub-70-197-80.myvzw.com) has joined #ceph
[2:26] <via> perhaps related to getting errors doing listings on giant directories repeatedly
[2:26] <via> where the error is often the folder in question claiming it doesn't exist
[2:29] <via> also, does anyone know what impact readdir_max_entries has?
[2:29] <via> i'm wondering if it's relevant to my problem, seeing as i'm running a program that just gets massive directory listings
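For context, readdir_max_entries is a mount option of the kernel cephfs client that appears to cap how many directory entries are fetched per readdir request. A hedged example of setting it explicitly (the monitor address, secret file path, and value are placeholders):

    mount -t ceph 192.168.0.10:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret,readdir_max_entries=4096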
[2:29] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[2:30] * flaxy (~afx@78.130.171.68) has joined #ceph
[2:31] * _Tassadar (~tassadar@tassadar.xs4all.nl) Quit (Ping timeout: 480 seconds)
[2:31] * flaxy (~afx@78.130.171.68) Quit (Quit: WeeChat 0.4.2)
[2:32] * flaxy (~afx@78.130.171.68) has joined #ceph
[2:33] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[2:34] <yanzheng> via kernel and ceph-mds version?
[2:35] <via> mds.alpha: running {"version":"0.72.1"}
[2:35] <via> 3.12.1-1.el6.elrepo.x86_64
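One way to gather the same details on another system (these commands are assumed, not taken from the log):

    ceph-mds --version   # on the MDS host
    uname -r             # kernel version on the cephfs client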
[2:36] * _Tassadar (~tassadar@tassadar.xs4all.nl) has joined #ceph
[2:41] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[2:49] * fred_ (~fred@c83-248-221-150.bredband.comhem.se) has joined #ceph
[2:50] * haomaiwang (~haomaiwan@118.186.151.36) has joined #ceph
[2:51] <fred_> anyone else experienced an active mds going to standby but the other standby or cluster not recognizing it, resulting in a hung cephfs?
[2:57] <gregsfortytwo1> fred_: what kind of timeline are you talking about? does the other standby eventually take over?
[2:58] <yanzheng> via, did you see any other messages in the kernel log
[2:58] * haomaiwang (~haomaiwan@118.186.151.36) Quit (Ping timeout: 480 seconds)
[2:59] * cmdrk (~lincoln@c-24-12-206-91.hsd1.il.comcast.net) Quit (Ping timeout: 480 seconds)
[2:59] <yanzheng> via, I encountered a similar error when available memory is tight
[3:02] * cmdrk (~lincoln@c-24-12-206-91.hsd1.il.comcast.net) has joined #ceph
[3:03] <via> hmm
[3:03] <via> yanzheng: this isn't a tight-on-memory situation afaict
[3:04] * hemantb (~hemantb@14.99.218.7) has joined #ceph
[3:04] <via> its a machine with 32 gigs of ram
[3:05] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[3:06] * yanzheng (~zhyan@jfdmzpr04-ext.jf.intel.com) Quit (Remote host closed the connection)
[3:08] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) has joined #ceph
[3:09] * haomaiwang (~haomaiwan@211.155.113.224) has joined #ceph
[3:10] * sagelap (~sage@243.sub-70-197-80.myvzw.com) Quit (Ping timeout: 480 seconds)
[3:12] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) Quit (Remote host closed the connection)
[3:16] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Ping timeout: 480 seconds)
[3:20] * tsnider1 (~tsnider@198.95.226.40) has left #ceph
[3:23] * Dark-Ace-Z (~BillyMays@50-32-20-151.drr01.hrbg.pa.frontiernet.net) has joined #ceph
[3:25] * sherry (~ssha@en-279303.engad.foe.auckland.ac.nz) has joined #ceph
[3:25] * yanzheng (~zhyan@134.134.139.72) has joined #ceph
[3:26] <sherry> what is the best tool to benchmark cephfs? IOzone, FFSB or etc?!
[3:27] * DarkAce-Z (~BillyMays@50-32-40-56.drr01.hrbg.pa.frontiernet.net) Quit (Ping timeout: 480 seconds)
[3:30] * Dark-Ace-Z is now known as DarkAceZ
[3:35] <bkero> sherry: the vfs? bonnie maybe. The object store? swift-bench maybe.
[3:35] <bkero> rbd? probably phoronix disk-test
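Two hedged examples of the kinds of runs bkero mentions; the pool name, mount point, and sizes are placeholders:

    # raw RADOS write throughput from any client with a keyring
    rados bench -p testpool 60 write
    # file-level benchmark against a mounted cephfs
    bonnie++ -d /mnt/cephfs -s 8g -n 128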
[3:35] * yanzheng (~zhyan@134.134.139.72) Quit (Ping timeout: 480 seconds)
[3:37] * jcsp (~jcsp@0001bf3a.user.oftc.net) Quit (Ping timeout: 480 seconds)
[3:39] <fred_> gregsfortytwo1: No, the last entry in the logs on the active mds is that it goes to standby; however, ceph status reports it as active.. all clients hang/block and eventually need to be unmounted and remounted
[3:40] <sherry> bkero: distributed file system, bonnie will get the CDF?
[3:41] * joao|lap (~JL@a79-168-11-205.cpe.netcabo.pt) Quit (Ping timeout: 480 seconds)
[3:44] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) has joined #ceph
[3:52] * mozg (~andrei@host86-184-120-168.range86-184.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[3:52] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) Quit (Remote host closed the connection)
[3:56] * sankey (~sankey@cpe-76-93-167-254.san.res.rr.com) Quit (Ping timeout: 481 seconds)
[3:58] * diegows (~diegows@190.190.17.57) Quit (Ping timeout: 480 seconds)
[4:06] * JC (~JC@71-94-44-243.static.trlk.ca.charter.com) has joined #ceph
[4:11] * shang (~ShangWu@175.41.48.77) has joined #ceph
[4:11] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[4:20] * mxmln (~mxmln@212.79.49.66) Quit (Quit: mxmln)
[5:05] * fireD_ (~fireD@93-139-139-129.adsl.net.t-com.hr) has joined #ceph
[5:07] * fireD (~fireD@93-142-235-66.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[5:08] * Hakisho_ (~Hakisho@p4FC268CC.dip0.t-ipconnect.de) has joined #ceph
[5:10] * yanzheng (~zhyan@jfdmzpr04-ext.jf.intel.com) has joined #ceph
[5:10] <yanzheng> via, still there ?
[5:11] <via> i am
[5:11] <yanzheng> how many files in the directory
[5:11] <via> lot of variety but some have in the millions
[5:12] <yanzheng> how much ram does the client have
[5:12] <via> 32g
[5:12] * hemantb (~hemantb@14.99.218.7) Quit (Quit: hemantb)
[5:12] <yanzheng> ok
[5:13] * Hakisho (~Hakisho@0001be3c.user.oftc.net) Quit (Ping timeout: 480 seconds)
[5:13] * Hakisho_ is now known as Hakisho
[5:13] <yanzheng> please send the kernel message to zheng.z.yan@intel.com
[5:13] <via> okay
[5:13] <via> for some reason it didn't log things to the /var/log/kern file
[5:13] <via> so i only have what is in dmesg, its not complete
[5:15] <yanzheng> /var/log/messages should contain the messages
[5:15] <via> it does not, centos doesn't log kernel messages there by default
[5:16] <yanzheng> ok, please send the dmesg to me
[5:22] <via> sent, thank you
[5:28] * hemantb (~hemantb@117.192.243.253) has joined #ceph
[5:37] * Vacum_ (~vovo@88.130.202.167) has joined #ceph
[5:44] * Vacum (~vovo@i59F792C2.versanet.de) Quit (Ping timeout: 480 seconds)
[5:53] * JC (~JC@71-94-44-243.static.trlk.ca.charter.com) Quit (Quit: Leaving.)
[5:56] * hemantb (~hemantb@117.192.243.253) Quit (Quit: hemantb)
[6:02] * ponyofdeath (~vladi@cpe-75-80-165-117.san.res.rr.com) Quit (Quit: leaving)
[6:15] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:17] * sarob (~sarob@2601:9:7080:13a:f1d1:52f1:68de:78a2) has joined #ceph
[6:19] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:23] * Cube1 (~Cube@66-87-67-132.pools.spcsdns.net) has joined #ceph
[6:23] * Cube (~Cube@66-87-66-177.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[6:24] <yanzheng> aarontc, do you still have log for the missing object issue
[6:25] * Cube1 (~Cube@66-87-67-132.pools.spcsdns.net) Quit (Read error: No route to host)
[6:25] * Cube (~Cube@66-87-67-132.pools.spcsdns.net) has joined #ceph
[6:33] * mancdaz_away (~mancdaz@2a00:1a48:7807:102:94f4:6b56:ff08:7ca6) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * sileht (~sileht@gizmo.sileht.net) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * tobru (~quassel@2a02:41a:3999::94) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * guerby (~guerby@ip165-ipv6.tetaneutral.net) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * SubOracle (~quassel@00019f1e.user.oftc.net) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * Pauline (~middelink@2001:838:3c1:1:be5f:f4ff:fe58:e04) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * asmaps (~quassel@2a03:4000:2:3c5::80) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * godog (~filo@0001309c.user.oftc.net) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * al (d@niel.cx) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * bauruine (~bauruine@2a01:4f8:150:6381::545) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * Svedrin (svedrin@ketos.funzt-halt.net) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * fireD_ (~fireD@93-139-139-129.adsl.net.t-com.hr) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * haomaiwang (~haomaiwan@211.155.113.224) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * briancline (~bc@taco.sh) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * dis (~dis@109.110.66.29) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * yeled (~yeled@spodder.com) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * ofu (ofu@dedi3.fuckner.net) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * Husky (~sam@host81-138-206-9.in-addr.btopenworld.com) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * dalgaaf (uid15138@id-15138.ealing.irccloud.com) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * `10_ (~10@juke.fm) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * musca (musca@tyrael.eu) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * clayg (~clayg@ec2-54-235-44-253.compute-1.amazonaws.com) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * Esmil (esmil@horus.0x90.dk) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * wogri (~wolf@nix.wogri.at) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * chutz (~chutz@rygel.linuxfreak.ca) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * psomas (~psomas@inferno.cc.ece.ntua.gr) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * Djinh (~alexlh@ardbeg.funk.org) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * dlan (~dennis@116.228.88.131) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * nyerup_ (irc@jespernyerup.dk) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * brother (foobaz@vps1.hacking.dk) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * yo61_ (~yo61@lin001.yo61.net) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * liiwi (liiwi@idle.fi) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * Daviey (~DavieyOFT@bootie.daviey.com) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * fred_ (~fred@c83-248-221-150.bredband.comhem.se) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * eternaleye (~eternaley@c-24-17-202-252.hsd1.wa.comcast.net) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * jdmason (~jon@134.134.137.75) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * saturnine (~saturnine@66.219.20.211) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * ZyTer_ (~ZyTer@ghostbusters.apinnet.fr) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * Underbyte (~jerrad@206.222.208.4) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * paradon (~thomas@60.234.66.253) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * Kioob (~kioob@luuna.daevel.fr) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * Qu310 (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * amospalla (~amospalla@0001a39c.user.oftc.net) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * philips (~philips@ec2-23-22-175-220.compute-1.amazonaws.com) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * toabctl (~toabctl@toabctl.de) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * EWDurbin (~ernestd@ewd3do.ernest.ly) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * nwf (~nwf@67.62.51.95) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * warrenu (~Warren@2607:f298:a:607:2c5f:a706:2dcb:3163) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * joao (~joao@a79-168-11-205.cpe.netcabo.pt) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * jochen (~jochen@laevar.de) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * CAPSLOCK2000 (~oftc@2001:610:748:1::8) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * TheBittern (~thebitter@195.10.250.233) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * jefferai (~quassel@corkblock.jefferai.org) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * peetaur (~peter@CPE788df73fb301-CM788df73fb300.cpe.net.cable.rogers.com) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * gregorg (~Greg@78.155.152.6) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * rturk-away (~rturk@ds2390.dreamservers.com) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * vhasi (vhasi@vha.si) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * raso (~raso@deb-multimedia.org) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * dwm (~dwm@northrend.tastycake.net) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * miniyo (~miniyo@0001b53b.user.oftc.net) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * elmo (~james@faun.canonical.com) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * Meyer^ (meyer@c64.org) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * tomaw (tom@tomaw.netop.oftc.net) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * Elbandi (~ea333@elbandi.net) Quit (resistance.oftc.net reticulum.oftc.net)
[6:33] * psieklFH (psiekl@wombat.eu.org) Quit (resistance.oftc.net reticulum.oftc.net)
[6:37] * wogri (~wolf@nix.wogri.at) has joined #ceph
[6:37] * liiwi (liiwi@idle.fi) has joined #ceph
[6:37] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[6:37] * psomas (~psomas@inferno.cc.ece.ntua.gr) has joined #ceph
[6:37] * Daviey (~DavieyOFT@bootie.daviey.com) has joined #ceph
[6:37] * Djinh (~alexlh@ardbeg.funk.org) has joined #ceph
[6:37] * dlan (~dennis@116.228.88.131) has joined #ceph
[6:37] * yo61_ (~yo61@lin001.yo61.net) has joined #ceph
[6:37] * nyerup_ (irc@jespernyerup.dk) has joined #ceph
[6:37] * brother (foobaz@vps1.hacking.dk) has joined #ceph
[6:37] * Esmil (esmil@horus.0x90.dk) has joined #ceph
[6:37] * clayg (~clayg@ec2-54-235-44-253.compute-1.amazonaws.com) has joined #ceph
[6:37] * musca (musca@tyrael.eu) has joined #ceph
[6:37] * `10_ (~10@juke.fm) has joined #ceph
[6:37] * dalgaaf (uid15138@id-15138.ealing.irccloud.com) has joined #ceph
[6:37] * Husky (~sam@host81-138-206-9.in-addr.btopenworld.com) has joined #ceph
[6:37] * ofu (ofu@dedi3.fuckner.net) has joined #ceph
[6:37] * yeled (~yeled@spodder.com) has joined #ceph
[6:37] * dis (~dis@109.110.66.29) has joined #ceph
[6:37] * briancline (~bc@taco.sh) has joined #ceph
[6:37] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) has joined #ceph
[6:37] * haomaiwang (~haomaiwan@211.155.113.224) has joined #ceph
[6:37] * fireD_ (~fireD@93-139-139-129.adsl.net.t-com.hr) has joined #ceph
[6:37] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[6:37] * eternaleye (~eternaley@c-24-17-202-252.hsd1.wa.comcast.net) has joined #ceph
[6:37] * jdmason (~jon@134.134.137.75) has joined #ceph
[6:37] * saturnine (~saturnine@66.219.20.211) has joined #ceph
[6:37] * mancdaz_away (~mancdaz@2a00:1a48:7807:102:94f4:6b56:ff08:7ca6) has joined #ceph
[6:37] * ZyTer_ (~ZyTer@ghostbusters.apinnet.fr) has joined #ceph
[6:37] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[6:37] * Underbyte (~jerrad@206.222.208.4) has joined #ceph
[6:37] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[6:37] * paradon (~thomas@60.234.66.253) has joined #ceph
[6:37] * Kioob (~kioob@luuna.daevel.fr) has joined #ceph
[6:37] * Qu310 (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[6:37] * amospalla (~amospalla@0001a39c.user.oftc.net) has joined #ceph
[6:37] * philips (~philips@ec2-23-22-175-220.compute-1.amazonaws.com) has joined #ceph
[6:37] * toabctl (~toabctl@toabctl.de) has joined #ceph
[6:37] * EWDurbin (~ernestd@ewd3do.ernest.ly) has joined #ceph
[6:37] * nwf (~nwf@67.62.51.95) has joined #ceph
[6:37] * sileht (~sileht@gizmo.sileht.net) has joined #ceph
[6:37] * tobru (~quassel@2a02:41a:3999::94) has joined #ceph
[6:37] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) has joined #ceph
[6:37] * guerby (~guerby@ip165-ipv6.tetaneutral.net) has joined #ceph
[6:37] * SubOracle (~quassel@00019f1e.user.oftc.net) has joined #ceph
[6:37] * bauruine (~bauruine@2a01:4f8:150:6381::545) has joined #ceph
[6:37] * Pauline (~middelink@2001:838:3c1:1:be5f:f4ff:fe58:e04) has joined #ceph
[6:37] * asmaps (~quassel@2a03:4000:2:3c5::80) has joined #ceph
[6:37] * godog (~filo@0001309c.user.oftc.net) has joined #ceph
[6:37] * al (d@niel.cx) has joined #ceph
[6:37] * Svedrin (svedrin@ketos.funzt-halt.net) has joined #ceph
[6:37] * warrenu (~Warren@2607:f298:a:607:2c5f:a706:2dcb:3163) has joined #ceph
[6:37] * joao (~joao@a79-168-11-205.cpe.netcabo.pt) has joined #ceph
[6:37] * jochen (~jochen@laevar.de) has joined #ceph
[6:37] * CAPSLOCK2000 (~oftc@2001:610:748:1::8) has joined #ceph
[6:37] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[6:37] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) has joined #ceph
[6:37] * jefferai (~quassel@corkblock.jefferai.org) has joined #ceph
[6:37] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[6:37] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[6:37] * peetaur (~peter@CPE788df73fb301-CM788df73fb300.cpe.net.cable.rogers.com) has joined #ceph
[6:37] * gregorg (~Greg@78.155.152.6) has joined #ceph
[6:37] * rturk-away (~rturk@ds2390.dreamservers.com) has joined #ceph
[6:37] * vhasi (vhasi@vha.si) has joined #ceph
[6:37] * raso (~raso@deb-multimedia.org) has joined #ceph
[6:37] * dwm (~dwm@northrend.tastycake.net) has joined #ceph
[6:37] * miniyo (~miniyo@0001b53b.user.oftc.net) has joined #ceph
[6:37] * elmo (~james@faun.canonical.com) has joined #ceph
[6:37] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[6:37] * Meyer^ (meyer@c64.org) has joined #ceph
[6:37] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[6:37] * tomaw (tom@tomaw.netop.oftc.net) has joined #ceph
[6:37] * Elbandi (~ea333@elbandi.net) has joined #ceph
[6:37] * psieklFH (psiekl@wombat.eu.org) has joined #ceph
[6:40] * ChanServ sets mode +v joao
[6:49] * fred2 (~fred@c83-248-221-150.bredband.comhem.se) has joined #ceph
[6:50] * KindOne (KindOne@0001a7db.user.oftc.net) has joined #ceph
[6:53] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[6:57] * jcsp (~jcsp@0001bf3a.user.oftc.net) has joined #ceph
[6:57] * jcsp (~jcsp@0001bf3a.user.oftc.net) Quit ()
[7:00] * bandrus1 (~Adium@108.246.13.166) Quit (Quit: Leaving.)
[7:02] * sarob (~sarob@2601:9:7080:13a:f1d1:52f1:68de:78a2) Quit (Remote host closed the connection)
[7:02] * hemantb (~hemantb@182.71.241.130) has joined #ceph
[7:02] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[7:04] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[7:06] * ponyofdeath (~vladi@cpe-75-80-165-117.san.res.rr.com) has joined #ceph
[7:07] * bandrus (~Adium@108.246.13.166) has joined #ceph
[7:10] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[7:21] * yanzheng (~zhyan@jfdmzpr04-ext.jf.intel.com) Quit (Quit: Leaving)
[7:39] * yanzheng (~zhyan@jfdmzpr04-ext.jf.intel.com) has joined #ceph
[7:53] * yanzheng (~zhyan@jfdmzpr04-ext.jf.intel.com) Quit (Remote host closed the connection)
[7:53] * sleinen (~Adium@2001:620:0:26:b9c5:b7a4:4341:791) Quit (Quit: Leaving.)
[7:53] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[8:01] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[8:03] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[8:11] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[8:11] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:29] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[8:30] * Sysadmin88 (~IceChat77@90.208.9.12) Quit (Quit: Always try to be modest, and be proud about it!)
[8:35] * jks (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[8:36] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[8:41] * sleinen (~Adium@2001:620:0:26:c085:b1cc:b919:a0c8) has joined #ceph
[8:43] * Siva (~sivat@generalnat.eglbp.corp.yahoo.com) has joined #ceph
[8:48] * haomaiwa_ (~haomaiwan@117.79.232.155) has joined #ceph
[8:49] * fouxm (~fouxm@185.23.92.11) has joined #ceph
[8:55] * haomaiwang (~haomaiwan@211.155.113.224) Quit (Ping timeout: 480 seconds)
[8:55] * Kioob (~kioob@luuna.daevel.fr) Quit (Quit: Leaving.)
[9:00] * haomaiwa_ (~haomaiwan@117.79.232.155) Quit (Remote host closed the connection)
[9:00] * haomaiwang (~haomaiwan@199.30.140.94) has joined #ceph
[9:04] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[9:05] * rendar (~s@87.1.177.0) has joined #ceph
[9:05] * haomaiwa_ (~haomaiwan@106.120.176.71) has joined #ceph
[9:07] * haomaiwa_ (~haomaiwan@106.120.176.71) Quit (Remote host closed the connection)
[9:08] * haomaiwa_ (~haomaiwan@199.30.140.94) has joined #ceph
[9:08] * haomaiwang (~haomaiwan@199.30.140.94) Quit (Read error: Connection reset by peer)
[9:12] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[9:12] * Siva (~sivat@generalnat.eglbp.corp.yahoo.com) Quit (Quit: Siva)
[9:18] * garphy`aw is now known as garphy
[9:18] * Siva (~sivat@generalnat.eglbp.corp.yahoo.com) has joined #ceph
[9:19] * sarob (~sarob@2601:9:7080:13a:2868:a08f:8f3f:366e) has joined #ceph
[9:19] * yanzheng (~zhyan@134.134.137.73) has joined #ceph
[9:22] <aarontc> yanzheng: yes, I do have the logs from the MDS about the missing object
[9:25] * matt__ (~matt@ccpc-mwr.bath.ac.uk) has joined #ceph
[9:25] * matt__ is now known as _matt
[9:27] * _matt (~matt@ccpc-mwr.bath.ac.uk) Quit ()
[9:27] * sarob (~sarob@2601:9:7080:13a:2868:a08f:8f3f:366e) Quit (Ping timeout: 480 seconds)
[9:27] <yanzheng> aarontc, please send it to me
[9:27] <yanzheng> zheng.z.yan@intel.com
[9:28] * yanzheng (~zhyan@134.134.137.73) Quit (Quit: Leaving)
[9:29] * _matt (~matt@ccpc-mwr.bath.ac.uk) has joined #ceph
[9:33] * haomaiwang (~haomaiwan@106.120.176.71) has joined #ceph
[9:38] * Siva (~sivat@generalnat.eglbp.corp.yahoo.com) Quit (Quit: Siva)
[9:39] * haomaiwa_ (~haomaiwan@199.30.140.94) Quit (Ping timeout: 480 seconds)
[9:58] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) has joined #ceph
[10:00] * hjjg (~hg@p3EE322AA.dip0.t-ipconnect.de) has joined #ceph
[10:01] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[10:01] * ChanServ sets mode +v andreask
[10:03] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[10:05] * yanzheng (~zhyan@jfdmzpr04-ext.jf.intel.com) has joined #ceph
[10:07] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) Quit (Ping timeout: 480 seconds)
[10:08] <_matt> hi, does anybody know what client permissions are needed to map an rbd block device? If i use the admin key it maps fine but if I use a client key I get: rbd: add failed: (34) Numerical result out of range
[10:09] <_matt> I currently have: caps osd = "allow class-read object_prefix rbd_children, allow rwx pool=data"
[10:10] <_matt> oh wait, wouldn't pool be rbd ..
[10:11] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[10:12] * yanzheng (~zhyan@jfdmzpr04-ext.jf.intel.com) Quit (Quit: Leaving)
[10:12] <_matt> ok nm, adding allow rwx pool=rbd works :)
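_matt's working setup, written out as the commands one might run (the client name and keyring path are assumptions):

    ceph auth get-or-create client.rbduser \
        mon 'allow r' \
        osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd' \
        -o /etc/ceph/ceph.client.rbduser.keyring
    rbd map myimage --id rbduser --keyring /etc/ceph/ceph.client.rbduser.keyring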
[10:13] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Ping timeout: 480 seconds)
[10:17] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[10:20] * LeaChim (~LeaChim@host81-159-251-38.range81-159.btcentralplus.com) has joined #ceph
[10:21] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[10:28] * shang_ (~ShangWu@175.41.48.77) has joined #ceph
[10:33] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:33] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[10:35] * thomnico (~thomnico@2a01:e35:8b41:120:9935:fe41:68ca:e870) has joined #ceph
[10:36] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit ()
[10:36] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:48] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[10:54] * Siva (~sivat@117.192.50.168) has joined #ceph
[10:56] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[10:58] * hemantb_ (~hemantb@182.71.241.130) has joined #ceph
[10:58] * hemantb (~hemantb@182.71.241.130) Quit (Read error: Connection reset by peer)
[10:58] * hemantb_ is now known as hemantb
[10:58] * Siva_ (~sivat@vpnnat.eglbp.corp.yahoo.com) has joined #ceph
[11:03] * sarob (~sarob@2601:9:7080:13a:49be:ac8f:a863:cc57) has joined #ceph
[11:04] * Siva (~sivat@117.192.50.168) Quit (Ping timeout: 480 seconds)
[11:04] * Siva_ is now known as Siva
[11:24] * hemantb (~hemantb@182.71.241.130) Quit (Ping timeout: 480 seconds)
[11:28] * sarob (~sarob@2601:9:7080:13a:49be:ac8f:a863:cc57) Quit (Ping timeout: 480 seconds)
[11:32] * cronix (~cronix@5.199.139.166) Quit (Read error: Operation timed out)
[11:36] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[11:39] * Siva (~sivat@117.192.50.168) has joined #ceph
[11:39] * yanzheng (~zhyan@jfdmzpr05-ext.jf.intel.com) has joined #ceph
[11:39] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has left #ceph
[11:40] * Siva (~sivat@117.192.50.168) Quit ()
[11:45] * hemantb (~hemantb@182.71.241.130) has joined #ceph
[11:46] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[11:51] * xdeller (~xdeller@91.218.144.129) has joined #ceph
[11:58] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[12:02] * haomaiwang (~haomaiwan@106.120.176.71) Quit (Remote host closed the connection)
[12:03] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[12:05] * shang (~ShangWu@175.41.48.77) Quit (Read error: Operation timed out)
[12:06] * shang_ (~ShangWu@175.41.48.77) Quit (Ping timeout: 480 seconds)
[12:07] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[12:09] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[12:10] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[12:11] * Cube (~Cube@66-87-67-132.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[12:19] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[12:20] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[12:23] * diegows (~diegows@190.190.17.57) has joined #ceph
[12:23] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) Quit (Read error: Connection reset by peer)
[12:24] * mancdaz_away is now known as mancdaz
[12:26] * KindTwo (KindOne@h77.23.131.174.dynamic.ip.windstream.net) has joined #ceph
[12:27] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[12:27] * KindTwo is now known as KindOne
[12:31] * i_m (~ivan.miro@deibp9eh1--blueice2n1.emea.ibm.com) has joined #ceph
[12:40] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[12:51] * thomnico (~thomnico@2a01:e35:8b41:120:9935:fe41:68ca:e870) Quit (Quit: Ex-Chat)
[12:56] * haomaiwang (~haomaiwan@117.79.232.164) has joined #ceph
[12:58] * yanzheng (~zhyan@jfdmzpr05-ext.jf.intel.com) Quit (Remote host closed the connection)
[13:03] * sarob (~sarob@2601:9:7080:13a:55ad:c0ec:8e63:6f4d) has joined #ceph
[13:06] * tobru (~quassel@2a02:41a:3999::94) Quit (Remote host closed the connection)
[13:07] * tobru (~quassel@2a02:41a:3999::94) has joined #ceph
[13:10] * hemantb (~hemantb@182.71.241.130) Quit (Quit: hemantb)
[13:10] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[13:14] * zhyan_ (~zhyan@134.134.139.72) has joined #ceph
[13:19] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[13:21] * MrNPP_ (~MrNPP@216.152.240.194) Quit (Ping timeout: 480 seconds)
[13:23] * yanzheng (~zhyan@101.229.190.24) has joined #ceph
[13:26] * garphy is now known as garphy`aw
[13:27] * zhyan_ (~zhyan@134.134.139.72) Quit (Remote host closed the connection)
[13:30] * garphy`aw is now known as garphy
[13:36] * thomnico (~thomnico@2a01:e35:8b41:120:9935:fe41:68ca:e870) has joined #ceph
[13:40] * sarob (~sarob@2601:9:7080:13a:55ad:c0ec:8e63:6f4d) Quit (Ping timeout: 480 seconds)
[13:43] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[13:47] * LeaChim (~LeaChim@host81-159-251-38.range81-159.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[13:49] * dzianis (~dzianis@86.57.255.91) has joined #ceph
[13:50] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[13:55] <dzianis> Hi, what is the difference between the placement group number and the placement group for placement number? I can see in the docs (http://ceph.com/docs/master/rados/operations/placement-groups) that they should be equal.
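For reference, pg_num is how many placement groups a pool has, while pgp_num is how many are used when computing placement, which is why the docs keep them equal (newly split PGs are only rebalanced once pgp_num catches up). A hedged example of checking and raising both on a hypothetical pool named data:

    ceph osd pool get data pg_num
    ceph osd pool get data pgp_num
    ceph osd pool set data pg_num 256
    ceph osd pool set data pgp_num 256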
[14:03] * sarob (~sarob@2601:9:7080:13a:d57a:80fa:c8e9:e7fd) has joined #ceph
[14:22] * yanzheng (~zhyan@101.229.190.24) Quit (Ping timeout: 480 seconds)
[14:26] * haomaiwang (~haomaiwan@117.79.232.164) Quit (Remote host closed the connection)
[14:26] * haomaiwang (~haomaiwan@199.30.140.94) has joined #ceph
[14:28] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[14:29] * tsnider (~tsnider@nat-216-240-30-23.netapp.com) has joined #ceph
[14:29] * BillK (~BillK-OFT@106-68-227-246.dyn.iinet.net.au) Quit (Ping timeout: 481 seconds)
[14:30] * BillK (~BillK-OFT@106-68-227-246.dyn.iinet.net.au) has joined #ceph
[14:30] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Quit: Leaving.)
[14:31] * haomaiwa_ (~haomaiwan@118.186.151.36) has joined #ceph
[14:37] * allsystemsarego (~allsystem@188.26.167.169) has joined #ceph
[14:38] * haomaiwang (~haomaiwan@199.30.140.94) Quit (Ping timeout: 480 seconds)
[14:39] * hemantb (~hemantb@117.192.243.253) has joined #ceph
[14:39] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[14:39] * thebigm (~thebigm@2001:8d8:1fe:7:a6ba:dbff:fefc:c429) Quit (Remote host closed the connection)
[14:40] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[14:40] * sarob (~sarob@2601:9:7080:13a:d57a:80fa:c8e9:e7fd) Quit (Ping timeout: 480 seconds)
[14:41] * markbby (~Adium@168.94.245.1) has joined #ceph
[14:42] * elder_ (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[14:42] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Read error: Connection reset by peer)
[14:46] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) has joined #ceph
[14:53] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[14:54] * Enigmagic (~nathan@c-98-234-189-23.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[15:02] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[15:03] * sarob (~sarob@2601:9:7080:13a:1472:fea8:a6a6:1bd) has joined #ceph
[15:06] * JC (~JC@71-94-44-243.static.trlk.ca.charter.com) has joined #ceph
[15:08] * linuxkidd__ is now known as linuxkidd
[15:08] * TiCPU (~jeromepou@190-130.cgocable.ca) has joined #ceph
[15:11] * sarob (~sarob@2601:9:7080:13a:1472:fea8:a6a6:1bd) Quit (Ping timeout: 480 seconds)
[15:14] * Enigmagic (~nathan@c-98-234-189-23.hsd1.ca.comcast.net) has joined #ceph
[15:17] * JC (~JC@71-94-44-243.static.trlk.ca.charter.com) Quit (Quit: Leaving.)
[15:19] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) Quit (Remote host closed the connection)
[15:22] * hemantb (~hemantb@117.192.243.253) Quit (Read error: Connection timed out)
[15:23] * hemantb (~hemantb@117.192.243.253) has joined #ceph
[15:26] * haomaiwang (~haomaiwan@117.79.232.254) has joined #ceph
[15:30] * haomaiwa_ (~haomaiwan@118.186.151.36) Quit (Ping timeout: 480 seconds)
[15:32] <_matt> does anybody know of any windows / android / ios apps that will automatically sync local folders and work with radosgw?
[15:35] * gertux (~kvirc@200.0.230.234) has joined #ceph
[15:37] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[15:42] * root (~ganders@200.0.230.234) has joined #ceph
[15:42] * gertux (~kvirc@200.0.230.234) Quit (Quit: KVIrc 4.1.3 Equilibrium http://www.kvirc.net/)
[15:44] * clayb (~kvirc@proxy-ny1.bloomberg.com) has joined #ceph
[15:46] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) has joined #ceph
[15:46] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[15:47] <root> q
[15:47] * root (~ganders@200.0.230.234) Quit (Quit: WeeChat 0.4.0)
[15:48] * yanzheng (~zhyan@jfdmzpr05-ext.jf.intel.com) has joined #ceph
[15:48] * hemantb (~hemantb@117.192.243.253) Quit (Read error: Connection timed out)
[15:49] * hemantb (~hemantb@117.192.243.253) has joined #ceph
[15:50] * ganders (~gertux@200.0.230.234) has joined #ceph
[15:50] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (Ping timeout: 480 seconds)
[15:56] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[15:56] * haomaiwang (~haomaiwan@117.79.232.254) Quit (Remote host closed the connection)
[15:57] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:59] * yanzheng (~zhyan@jfdmzpr05-ext.jf.intel.com) Quit (Ping timeout: 480 seconds)
[15:59] * gregsfortytwo (~Adium@cpe-172-250-69-138.socal.res.rr.com) has joined #ceph
[16:03] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[16:03] * sarob (~sarob@2601:9:7080:13a:1082:f2c9:883c:3076) has joined #ceph
[16:07] * DarkAce-Z (~BillyMays@50-32-22-236.drr01.hrbg.pa.frontiernet.net) has joined #ceph
[16:12] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[16:13] * DarkAceZ (~BillyMays@50-32-20-151.drr01.hrbg.pa.frontiernet.net) Quit (Ping timeout: 480 seconds)
[16:15] * sarob (~sarob@2601:9:7080:13a:1082:f2c9:883c:3076) Quit (Ping timeout: 480 seconds)
[16:16] * mattbenjamin (~matt@76-206-42-105.lightspeed.livnmi.sbcglobal.net) Quit (Quit: Leaving.)
[16:16] * i_m (~ivan.miro@deibp9eh1--blueice2n1.emea.ibm.com) Quit (Quit: Leaving.)
[16:16] * i_m (~ivan.miro@deibp9eh1--blueice4n1.emea.ibm.com) has joined #ceph
[16:16] * i_m (~ivan.miro@deibp9eh1--blueice4n1.emea.ibm.com) Quit ()
[16:17] * i_m (~ivan.miro@deibp9eh1--blueice4n1.emea.ibm.com) has joined #ceph
[16:21] * danieagle (~Daniel@179.176.54.173.dynamic.adsl.gvt.net.br) has joined #ceph
[16:22] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[16:24] * hemantb (~hemantb@117.192.243.253) Quit (Quit: hemantb)
[16:30] * i_m (~ivan.miro@deibp9eh1--blueice4n1.emea.ibm.com) Quit (Read error: Connection reset by peer)
[16:36] * mattbenjamin (~matt@aa2.linuxbox.com) has joined #ceph
[16:43] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has joined #ceph
[16:46] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Ping timeout: 480 seconds)
[16:54] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[16:55] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[16:55] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) Quit (Quit: Leaving.)
[16:56] * yanzheng (~zhyan@101.229.190.24) has joined #ceph
[16:56] * alphe (~alphe@0001ac6f.user.oftc.net) has joined #ceph
[17:00] * gregsfortytwo (~Adium@cpe-172-250-69-138.socal.res.rr.com) Quit (Quit: Leaving.)
[17:01] <alphe> so in the end I have an xfs filesystem that refuses to grow anymore
[17:01] <alphe> and I have to restart my ceph cluster from scratch
[17:02] <janos> :O
[17:02] <alphe> yep, the 6th xfs_growfs returns an error ...
[17:03] <alphe> and xfs is not super fantastique in the resize particion field ...
[17:03] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[17:03] <alphe> and xfs is not super fantastic in the resize partition field ...
[17:04] * sagelap (~sage@2600:1012:b021:1e0f:74c1:f9cf:bb2:c587) has joined #ceph
[17:04] <alphe> as my ceph cluster is planned to grow as I recycle my current storage servers into ceph boxes
[17:05] <alphe> I need a file system that is rock solid and allows resizing by any amount at any time
[17:05] <alphe> xfs can't shrink
[17:05] <alphe> and growing xfs has a big problem
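If one wanted ceph-deploy to put new OSDs on ext4 instead of xfs, two possible approaches (the flag and option names are from memory; worth checking against ceph-deploy osd create --help and the OSD config reference) are:

    # ask ceph-deploy for ext4 at OSD creation time
    ceph-deploy osd create --fs-type ext4 node1:sdb

    # or set it in ceph.conf before creating the OSDs
    [osd]
    osd mkfs type = ext4
    osd mount options ext4 = user_xattr,rw,noatime
    # ext4's smaller xattr limit usually also calls for:
    filestore xattr use omap = true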
[17:06] * mancdaz is now known as mancdaz_away
[17:07] * julien_ (~julien@213-245-29-151.rev.numericable.fr) has joined #ceph
[17:07] * julien_ is now known as Discard
[17:07] * Discard (~julien@213-245-29-151.rev.numericable.fr) Quit ()
[17:07] * Discard (~discard@213-245-29-151.rev.numericable.fr) has joined #ceph
[17:08] <Discard> hi there
[17:08] <Discard> could anyone help me to understand my mistake ?
[17:09] <alphe> Discard if I understand it sure
[17:09] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[17:09] <Discard> ok
[17:10] <Discard> i've tried to use ceph-deploy mon create s4.13h.com
[17:10] <alphe> how can I tell ceph-deploy to use ext4fs instead of xfs
[17:10] <Discard> and I have an error like this
[17:10] <Discard> s4][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[17:11] <Discard> in logs I have :
[17:11] <alphe> Discard the hostname is without the FQDN
[17:11] <Discard> i've tried too
[17:11] <alphe> Discard you need to set the hostnames in the local /etc/hosts file
[17:11] <Discard> yep
[17:11] <Discard> ceph@p1:~$ ceph-deploy mon create s4
[17:11] <Discard> [ceph_deploy.cli][INFO ] Invoked (1.3.3): /usr/bin/ceph-deploy mon create s4
[17:12] <Discard> [ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts s4
[17:12] <Discard> [ceph_deploy.mon][DEBUG ] detecting platform for host s4 ...
[17:12] <Discard> [s4][DEBUG ] connected to host: s4
[17:12] <Discard> [s4][DEBUG ] detect platform information from remote host
[17:12] <Discard> [s4][DEBUG ] detect machine type
[17:12] <alphe> and then you need to create a public ssh key that you will share to all your nodes
[17:12] <Discard> [ceph_deploy.mon][INFO ] distro info: Ubuntu 12.04 precise
[17:12] <Discard> [s4][DEBUG ] determining if provided host has same hostname in remote
[17:12] <Discard> [s4][DEBUG ] get remote short hostname
[17:12] <Discard> [s4][DEBUG ] deploying mon to s4
[17:12] <Discard> [s4][DEBUG ] get remote short hostname
[17:12] <Discard> [s4][DEBUG ] remote hostname: s4
[17:12] <Discard> [s4][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[17:12] <alphe> Discard !
[17:12] <alphe> Discard !
[17:12] <Discard> [s4][DEBUG ] create the mon path if it does not exist
[17:12] <Discard> [s4][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-s4/done
[17:12] * garphy is now known as garphy`aw
[17:12] <Discard> [s4][DEBUG ] create a done file to avoid re-doing the mon deployment
[17:12] <Discard> [s4][DEBUG ] create the init path if it does not exist
[17:12] <Discard> [s4][DEBUG ] locating the `service` executable...
[17:12] <Discard> [s4][INFO ] Running command: sudo initctl emit ceph-mon cluster=ceph id=s4
[17:12] <Discard> [s4][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.s4.asok mon_status
[17:12] <Discard> [s4][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[17:12] <Discard> [s4][WARNIN] monitor: mon.s4, might not be running yet
[17:12] <Discard> [s4][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.s4.asok mon_status
[17:12] <alphe> ... stop that flood ...
[17:12] <Discard> [s4][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[17:12] <Discard> [s4][WARNIN] monitor s4 does not exist in monmap
[17:12] <Discard> [s4][WARNIN] neither `public_addr` nor `public_network` keys are defined for monitors
[17:12] <Discard> [s4][WARNIN] monitors may not be able to form quorum
[17:12] <Discard> ?
[17:12] <Discard> sorry
[17:13] <alphe> discard, mankind has invented a fabulous website for sharing copy-pastes from terminals, and it is called www.pastebin.com
[17:13] <alphe> you can create an account there in 2 secs
[17:14] <alphe> and put all your copy-pastes there, re-edit them, and share them via a tiny url
[17:14] <Discard> ok
[17:14] <alphe> take your time I will stay around some more hours
[17:15] <Discard> http://pastebin.com/CRzX2YAe
[17:17] <Discard> alphe: I've made an update
[17:17] <Discard> with logs on host s4
[17:19] <alphe> ok see that
[17:19] <alphe> thanks much more clear
[17:19] <alphe> what I see is that in your ceph.conf file you didn't create public and cluster network fields
[17:20] <alphe> public 192.168.0.0/24
[17:20] <alphe> cluster 10.0.0.0/24
[17:20] <alphe> in your ceph.conf
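The ceph.conf keys alphe is referring to would look something like this (the subnets here are placeholders):

    [global]
    public network = 192.168.0.0/24
    cluster network = 10.0.0.0/24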
[17:20] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[17:20] <Discard> you're right
[17:21] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[17:21] <alphe> that ceph.conf is the one that is created in your local dir (i.e. /root/myceph-install/ ) and not the one in /etc/ceph/ceph.conf
[17:21] <Discard> but public ips are not on the same subnet
[17:21] <alphe> Discard they have to be on the same subnet
[17:21] <Discard> public ips ?
[17:21] <alphe> yes
[17:22] <alphe> and cluster ip too
[17:22] <alphe> you give ceph a range of ips to look for
[17:22] <alphe> and it is implicit if you don't give anything
[17:23] <alphe> so if I create a monitor with ip 192.169.3.10
[17:23] <Discard> ok but i have deployed a cluster without problem
[17:23] <Discard> initial is ok
[17:23] <Discard> but when i want to add
[17:23] <alphe> ceph expects to find the other elements of its cluster in that ip subnet 192.169.3.X
[17:24] <alphe> see your ceph.conf file and look how it is set
[17:24] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:25] <alphe> on the log of your monitor line 36 of your pastbin there is a problem with the creation of a new store.db
[17:25] <alphe> leveldb store
[17:25] <alphe> so the mon service cannot write, for some odd reason, in /var/lib/ceph/
[17:26] <alphe> so the mon service cannot write, for some odd reason, in /var/lib/ceph/mon/store.db
[17:26] <Discard> i've updated the pastebin
[17:26] <Discard> i have already 3 mons ok
[17:27] <alphe> hum in your ceph.conf file the monitors are on the same subnet
[17:27] <alphe> mon_host = 10.90.50.30,10.90.52.30,10.90.52.85
[17:29] <Discard> yep but not my public ips
[17:29] <Discard> but all is private
[17:29] <janos> uh, that first one...
[17:29] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[17:29] <janos> 10.90.50 versus the other two 10.90.52?
[17:31] <Discard> janos it's a /16 subnet
[17:33] * Sysadmin88 (~IceChat77@90.208.9.12) has joined #ceph
[17:34] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[17:34] <mtanski> Sage: is there a way you can help me navigate that fscache patch upstream? It seems to have gotten lost and it certainly does impact ceph / fscache users.
[17:35] * ircolle (~Adium@2601:1:8380:2d9:6c5f:7132:ca76:5b5d) has joined #ceph
[17:35] <alphe> janos yes ...
[17:35] <alphe> they have all to be in the same subnet
[17:36] <alphe> 1 subnet for public 1 subnet for cluster replication talking
[17:36] <alphe> public is from where the data will come to your ceph nodes from ceph clients
[17:36] * BillK (~BillK-OFT@106-68-227-246.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[17:36] <Discard> alphe: ok but in my case my mon and replication are on the same subnets
[17:37] <alphe> Discard that is not the case
[17:37] <alphe> mon_host = 10.90.50.30,10.90.52.30,10.90.52.85
[17:37] <Discard> 255.255.0.0
[17:37] <alphe> your first monitor has an ip of 10.90.X.X
[17:37] <Discard> ok
[17:37] <alphe> discard, you're confusing subnet and netmask
[17:37] <alphe> :)
[17:38] <Discard> right
[17:38] <Discard> but all my nodes are on the same subnet right ?
[17:38] <alphe> actually nope
[17:38] <Discard> i have no public ips
[17:38] <Discard> my subnet is 10.0.0.0/16
[17:39] <alphe> you have first host in 10.90.50.x and the others on 10.90.52.x
[17:39] <Discard> ok it's not a /24 subnet it's a /16 subsets
[17:39] <Discard> subnet
[17:40] <alphe> even if there is a gateway allowing data to pass from 10.90.50.x to 10.90.52.x they are not on the same network
[17:40] <Discard> nope
[17:40] <kraken> http://i.imgur.com/xKYs9.gif
[17:40] <Discard> it is full layer 2
[17:41] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:41] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) has joined #ceph
[17:41] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[17:41] <alphe> kraken, finally, on the glorious day of december the 19th of AD 2013, mankind discovers that dogs dislike sour things !
[17:42] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has left #ceph
[17:42] <alphe> discard, put everyone on 10.90.50 and restart the installation after ceph-deploy purge
[17:42] <alphe> and ceph-deploy purge all of your hosts
[17:42] * hemantb (~hemantb@117.192.243.253) has joined #ceph
[17:42] <Discard> it's not possible
[17:42] <alphe> ip are ip ...
[17:42] <Discard> i have already 10TB online
[17:43] <Discard> and there is no problem
[17:43] <alphe> discard ?
[17:43] <Discard> yep
[17:43] <Discard> I just want to add another node
[17:43] <Discard> i have already 3 nodes ok
[17:44] <Discard> and there is no problem
[17:44] <Gugge-47527> alphe: why do you think 10.90.50.x is not on the same subnet as 10.90.52.x in a /16 ?
[17:44] <Discard> is it really a problem of ips ?
[17:44] <Gugge-47527> Discard: no
[17:45] <alphe> the problem could be that you are trying to push a new monitor into a monitor map that already exists
[17:45] <alphe> after the initial keyring exchange phase
[17:46] <Gugge-47527> add the new mon ip to the mon_host config
[17:46] <alphe> mon.s4 does not exist in monmap, will attempt to join an existing cluster
[17:46] <Gugge-47527> try again :)
[17:46] <Discard> already done
[17:46] <alphe> no public_addr or public_network specified, and mon.s4 not present in monmap or ceph.conf
[17:46] <alphe> mon_initial_members = s1, s2, s3
[17:46] <alphe> discard you have to add s4
[17:47] <alphe> at that line of your ceph.conf, then push it to all your existing nodes
[17:47] <alphe> of course you have to add the ip of s4 in that line too
[17:47] * sagelap1 (~sage@38.122.20.226) has joined #ceph
[17:47] <alphe> mon_host = 10.90.50.30,10.90.52.30,10.90.52.85
[17:48] <Discard> done
[17:48] <alphe> you modify your /root/mycephdepfiles/ceph.conf
[17:48] <Discard> yep
[17:48] <alphe> then you do ceph-deploy config push s{1..4}
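Put together, the sequence alphe is suggesting is roughly the following (hostnames and IPs are from Discard's paste; the --overwrite-conf flag is an assumption, usually needed when the remote /etc/ceph/ceph.conf already exists):

    # in the ceph.conf next to ceph-deploy:
    mon_initial_members = s1, s2, s3, s4
    mon_host = 10.90.50.30,10.90.52.30,10.90.52.85,10.90.80.191

    ceph-deploy --overwrite-conf config push s1 s2 s3 s4
    ceph-deploy mon create s4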
[17:48] <Discard> just done and pushed
[17:49] <Discard> without error
[17:49] <alphe> ok, then you wait a bit and it should say "I joined the quorum of the monmap and I am such a happy peon"
[17:49] * sagelap (~sage@2600:1012:b021:1e0f:74c1:f9cf:bb2:c587) Quit (Ping timeout: 480 seconds)
[17:49] * markbby (~Adium@168.94.245.1) Quit (Quit: Leaving.)
[17:50] <Discard> alphe: don't think so
[17:50] <Discard> because i've not start mon on s4
[17:50] <Gugge-47527> add it again with ceph-deploy :)
[17:51] <Discard> same error
[17:51] <Gugge-47527> the _same_ error ?
[17:51] <Gugge-47527> telling you s4 is not in ceph.conf?
[17:52] <alphe> it should be an error of keyring or something like that
[17:52] <Discard> and mon.s4 not present in monmap or ceph.conf
[17:53] <Discard> how could I regen monmap ?
[17:53] <alphe> discard it is automatically done
[17:53] <Gugge-47527> try manually starting the mon on s4 and check the mon log
[17:54] * DarkAce-Z is now known as DarkAceZ
[17:54] <Gugge-47527> paste the mon log after manually starting it too :)
[17:54] <alphe> and copy us the log content
[17:54] <alphe> heheh ..
[17:55] * markbby (~Adium@168.94.245.1) has joined #ceph
[17:55] <alphe> why does removing an RBD image have to be soooo slow !?
[17:55] <Discard> 2013-12-19 17:49:43.386389 7f670d064780 0 mon.s4 does not exist in monmap, will attempt to join an existing cluster
[17:55] <Discard> 2013-12-19 17:49:43.386836 7f670d064780 -1 no public_addr or public_network specified, and mon.s4 not present in monmap or ceph.conf
[17:55] <Discard> 2013-12-19 17:54:49.228523 7feb5b27a780 0 mon.s4 does not exist in monmap, will attempt to join an existing cluster
[17:55] <Discard> 2013-12-19 17:54:49.228935 7feb5b27a780 -1 no public_addr or public_network specified, and mon.s4 not present in monmap or ceph.conf
[17:56] <Gugge-47527> you forget quickly dont you?
[17:56] * vata (~vata@2607:fad8:4:6:d594:116f:e4be:aece) has joined #ceph
[17:56] <alphe> has the local /etc/ceph/ceph.conf on s4 been updated ?
[17:56] <Discard> mon_initial_members = s1, s2, s3, s4
[17:56] <Discard> mon_host = 10.90.50.30,10.90.52.30,10.90.52.85, 10.90.80.191
[17:56] <Gugge-47527> when you paste stuff, use pastebin :)
[17:57] <Discard> ok
[17:57] <Discard> sorry again
[17:57] <Gugge-47527> Discard: why did you put a space in the mon_host line?
[17:57] <Discard> because i'm stupid ?
[17:58] <alphe> discard or because you are human
[17:58] <Gugge-47527> i dont know if its allowed, but i would not do it when the original line does not contain spaces :)
[17:58] <Discard> alphe: :-)
[17:58] <Gugge-47527> remove the space, and try starting the mon again :)
[17:59] <alphe> Gugge-47527 don't we need to restart the existing monitors for them to reload the modified ceph.conf ?
[17:59] <Gugge-47527> no
[17:59] <Gugge-47527> well yes, to reload the new config he needs to restart them
[17:59] <Gugge-47527> but they don't need the new config for the new mon to be able to join :)
[17:59] * hemantb (~hemantb@117.192.243.253) Quit (Read error: Connection timed out)
[18:00] <Discard> ok i have the same error
[18:00] <Discard> with modifs
[18:00] <Gugge-47527> and 10.90.80.191 is the only ip on s4?
[18:00] <alphe> ... weird ... they will trust any newcomer that says "hey, my ceph.conf tells me I can play with you, so accept me!"
[18:00] <Gugge-47527> alphe: any new one with the correct keys yes :)
[18:00] * hemantb (~hemantb@117.192.243.253) has joined #ceph
[18:01] <alphe> Gugge-47527 and getting the key is done by ceph-deploy when it creates the new mon
[18:01] * angdraug (~angdraug@12.164.168.116) has joined #ceph
[18:02] <Discard> Gugge-47527:
[18:02] <Discard> Gugge-47527: nope
[18:02] <Discard> i have a public ip
[18:03] <Discard> on the host
[18:03] <alphe> only 5% of my 19TB rbd image removal done ...
[18:03] <alphe> it's going to run a whole week
[18:03] <alphe> I should purge and zap the whole thing
[18:04] <alphe> it will be faster
[18:04] <janos> alphe: on my first test cluster in bobtail i math'd wrong and made a petabyte rbd... i ended up blowing it all away - was thankfully a test cluster
[18:04] <janos> went to remove and you can imagine how long that looked like it was going to take
[18:04] * hemantb (~hemantb@117.192.243.253) Quit ()
[18:05] <Gugge-47527> Discard: add public_network = 10.90.0.0/16 to ceph.conf then
[18:05] <alphe> janos hehehe sorry for your loss ...
[18:05] <janos> lol
[18:05] <Gugge-47527> Discard: to tell it what ip it needs to use
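(Putting the pieces together, the [global] section of ceph.conf would end up looking roughly like this, with the IPs quoted in this conversation and no spaces in the mon_host list:)
    [global]
    mon_initial_members = s1, s2, s3, s4
    mon_host = 10.90.50.30,10.90.52.30,10.90.52.85,10.90.80.191
    public_network = 10.90.0.0/16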
[18:05] <alphe> but yes I imagine ...how the heck is that so slow ...
[18:06] <Gugge-47527> alphe: it removes each object one at a time :)
[18:06] <alphe> creating an image is fast, expanding it is lightning quick
[18:06] <Gugge-47527> and there is no info anywhere about what blocks are there, so it tries them all :)
[18:06] <janos> making a sparse image is WAY easier than making sure every possible bit of an image is gone
[18:06] <alphe> Gugge-47527 what a marvelous idea ...
[18:06] * lai (~lai@200.144.254.28) has joined #ceph
[18:06] <janos> yeah, it has to check all the possibles
[18:06] <Gugge-47527> alphe: well, better than maintaining a list of objects ever used
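(Rough arithmetic for why the removal is so slow, assuming the default 4 MB, order 22, object size:)
    19 TB / 4 MB per object  ~  19 * 1024 * 1024 / 4  ~  5 million objects to probe and delete, one at a time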
[18:07] <Discard> Gugge-47527: you're my god :P
[18:07] <Gugge-47527> Discard: no problem. :P
[18:07] <Discard> and to create a new odd do I have to add another config line ?
[18:07] <Discard> ods
[18:07] <alphe> osd
[18:07] <Gugge-47527> no
[18:07] <Discard> osd
[18:07] <Discard> ok
[18:07] <Discard> great, let's try now :-)
[18:08] <Gugge-47527> it should only be the mon ips that are needed in the config
[18:08] <alphe> no, osds don't need any special stuff
[18:08] <Gugge-47527> remember to add a 5th mon too :)
[18:08] <Gugge-47527> 4 is no better than 3 :)
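(The quorum arithmetic behind that remark: monitors need a strict majority to form a quorum, so an even count buys nothing:)
    3 mons -> quorum of 2 -> tolerates 1 mon failure
    4 mons -> quorum of 3 -> still tolerates only 1 mon failure
    5 mons -> quorum of 3 -> tolerates 2 mon failures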
[18:08] <alphe> Gugge-47527 but if i zap the disk instead of removing the image is that good ?
[18:08] <Gugge-47527> alphe: what disk?
[18:09] <alphe> the osd related one ...
[18:09] <Gugge-47527> i do not understand your question :)
[18:09] <alphe> hum but still somehow the image settings will remain ...
[18:09] <alphe> gugge-47527 i want to remove a rbd image ... and it is very slow
[18:10] * lai (~lai@200.144.254.28) Quit ()
[18:10] <Gugge-47527> if its a new image you can manually remove an object (i don't remember the name)
[18:10] <Gugge-47527> if its a used image, you just have to wait
[18:10] <alphe> so my idea is: let's skip that silly action, zap the disks of the osds, then purge-data, purge and reinstall the cluster from scratch
[18:10] <Gugge-47527> or upgrade ceph, to get a version that uses multiple threads to delete rbd images :)
[18:11] <alphe> i have the emperor
[18:11] <Gugge-47527> yes, if you make a new cluster, the old rbd will be gone :)
[18:11] <alphe> 0.72.1-8 to be precise
[18:11] <alphe> fantastic
[18:11] <alphe> a welcome time saving !
[18:11] * BillK (~BillK-OFT@106-68-227-246.dyn.iinet.net.au) has joined #ceph
[18:12] <Gugge-47527> im sure i saw something about quicker rbd delete on the mailing list
[18:12] <Gugge-47527> but i dont really remember :P
[18:12] * lai (~lai@200.144.254.28) has joined #ceph
[18:12] <Gugge-47527> its never really been a problem to wait the few minutes it takes to destroy my 10TB images :)
[18:12] * glambert (~glambert@37.157.50.80) has joined #ceph
[18:13] <alphe> when I reinstall the cluster, can I specify in the ceph.conf file the file system I want for my osds, or do I have to use the --fs-type parameter of ceph-deploy ?
[18:13] <lai> when i try to mount cephfs i get this error: mount error 22 = Invalid argument
[18:13] <alphe> Gugge-47527 you are lucky, it takes me 1 hour to do 6% ...
[18:13] <alphe> lai because you dont have the key installed
[18:14] <Gugge-47527> alphe: using "order 25" on my images (32MB objects) helps :P
[18:14] * hemantb (~hemantb@117.192.243.253) has joined #ceph
[18:15] <alphe> Gugge-47527 sure
[18:15] <alphe> Gugge-47527 order 25 ?
[18:15] <alphe> in xfs ?
[18:15] <Discard> Gugge-47527: i've made a prepare and activate
[18:15] <Discard> on s4
[18:16] <Gugge-47527> alphe: the rbd image
[18:16] <Discard> but I don't see it in ceph -w
[18:16] <Gugge-47527> Discard: once again, check the osd log :)
[18:17] <Discard> is it possible that I have made a mistake because i've put s4.13h.com and not s4 ?
[18:17] <lai> alphe the key?
[18:17] <alphe> Gugge-47527 can I modify my rbd image to have order 25 ?
[18:18] <jnq> if i put new nodes with SSDs into my cluster, for example, and changed the rules so my virtual machine images are stored only on machines with SSDs, would old images be moved automagically or just new stuff?
[18:18] * sleinen (~Adium@2001:620:0:26:c085:b1cc:b919:a0c8) Quit (Quit: Leaving.)
[18:18] * sleinen (~Adium@130.59.94.132) has joined #ceph
[18:18] <Gugge-47527> alphe: its a create option
[18:18] <alphe> lai I imagine you try to mount a cephfs device right ?
[18:19] <Gugge-47527> alphe: but dont just change the object size, it could give you worse performance.
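(A sketch of what Gugge-47527 means: the object size is fixed at creation time via --order, so it only applies to newly created images. The pool/image name and size here are made up:)
    rbd create rbd/bigimage --size 102400 --order 25   # 100 GB image with 32 MB (2^25 byte) objects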
[18:19] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[18:19] <lai> alphe my command: mount -t ceph 10.2.2.2:6789:/ /mnt/mycephfs
[18:19] <alphe> something like mount -t ceph 192.168.0.15:6789:/ /mnt/myceph -o name=admin,secretfile=/etc/ceph/secret
[18:20] * hemantb (~hemantb@117.192.243.253) Quit (Quit: hemantb)
[18:20] <alphe> ok you need to specify what user is accessing and what key file to use
[18:20] <lai> alphe ok, will try
[18:20] <alphe> lai you use ceph-deploy ?
[18:21] <lai> alphe yes ceph-deploy
[18:21] <alphe> ok, so first give the ssh id_rsa.pub to your client from the machine you use to run ceph-deploy
[18:22] <lai> alphe ok
[18:22] <alphe> put the content of that ssh id_rsa.pub into .ssh/authorized_keys in your client's /root directory
[18:22] <alphe> then from your ceph-deploy machine you do ceph-deploy config push clienthostname
[18:22] <alphe> then from your ceph-deploy machine you do ceph-deploy admin clienthostname
[18:23] <alphe> then on your client machine you edit the file /etc/ceph/ceph.client.admin.keyring
[18:23] * ScOut3R (~scout3r@4E5C7421.dsl.pool.telekom.hu) has joined #ceph
[18:23] <lai> alphe ok i will try this
[18:24] <alphe> you copy all the ZASV121ZXZ!3ASDF21 big line
[18:24] <alphe> and you paste it to your file /etc/ceph/secret
[18:24] <alphe> or whatever name you want to give it
[18:24] <lai> alphe ok
[18:25] <alphe> then you do your mount command, not forgetting -o name=admin,secretfile=/etc/ceph/secret
[18:25] <alphe> it should work
[18:25] <lai> alphe ok i will do this
[18:25] <alphe> of course on your "admin ceph-deploy machine" you need to be in the directory that contains the keyrings, ceph.log and ceph.conf files
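(A condensed sketch of the procedure alphe outlines, reusing the client hostname and monitor IP mentioned above; the /etc/ceph/secret path is an arbitrary choice:)
    # on the ceph-deploy admin machine, from the directory holding ceph.conf and the keyrings:
    ceph-deploy config push clienthostname
    ceph-deploy admin clienthostname
    # on the client, as root, extract just the base64 key from the admin keyring:
    grep key /etc/ceph/ceph.client.admin.keyring | awk '{print $3}' > /etc/ceph/secret
    mount -t ceph 10.2.2.2:6789:/ /mnt/mycephfs -o name=admin,secretfile=/etc/ceph/secret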
[18:26] * sleinen (~Adium@130.59.94.132) Quit (Ping timeout: 480 seconds)
[18:26] <lai> alphe thank you
[18:27] <lai> \quit
[18:27] * lai (~lai@200.144.254.28) Quit (Quit: leaving)
[18:31] <Discard> Gugge-47527: I have another error: when I try to do ceph -w I get: Error initializing cluster client: Error
[18:33] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[18:36] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[18:36] * ChanServ sets mode +v andreask
[18:36] * JC (~JC@nat-dip5.cfw-a-gci.corp.yahoo.com) has joined #ceph
[18:36] <Discard> anyone got an idea ?
[18:37] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) has joined #ceph
[18:39] <Discard> alphe: ?
[18:39] * jcsp (~jcsp@0001bf3a.user.oftc.net) has joined #ceph
[18:41] <alphe> yes ?
[18:41] * ScOut3R (~scout3r@4E5C7421.dsl.pool.telekom.hu) Quit ()
[18:43] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit (Quit: mtanski)
[18:44] <Discard> alphe: Error initializing cluster client: Error when I type ceph -w
[18:44] <Discard> any idea ?
[18:45] <alphe> cluster client is the osd
[18:45] <alphe> look at the osd log
[18:45] <alphe> copy paste its content and share the url
[18:46] * hjjg (~hg@p3EE322AA.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[18:47] * pmatulis2 (~peter@64.34.151.178) has left #ceph
[18:47] * pmatulis2 (~peter@64.34.151.178) has joined #ceph
[18:48] <alphe> any idea why this happens to ceph-deploy ?
[18:48] <alphe> http://pastebin.com/Ftt5yCPc
[18:48] <alphe> seems like sgdisk has a problem ...
[18:50] <Discard> alphe: which log ?
[18:50] <alphe> on s4, /var/log/ceph/ceph-osd.<id>.log
[18:50] <alphe> where <id> is the osd id ...
[18:51] <alphe> if that is empty, get the ceph-mon log on the master node
[18:53] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:55] <Discard> which is the master node ? Admin node ?
[18:59] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[18:59] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) has joined #ceph
[19:02] * linuxkidd (~linuxkidd@cpe-066-057-020-180.nc.res.rr.com) Quit (Quit: Konversation terminated!)
[19:02] * sarob_ (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) has joined #ceph
[19:03] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[19:03] * TheBittern (~thebitter@195.10.250.233) Quit ()
[19:06] * linuxkidd (~linuxkidd@cpe-066-057-020-180.nc.res.rr.com) has joined #ceph
[19:07] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[19:10] * sarob_ (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[19:10] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) has joined #ceph
[19:10] * ScOut3R (~scout3r@4E5C7421.dsl.pool.telekom.hu) has joined #ceph
[19:10] <Discard> alphe: http://pastebin.com/mqxXQMJA
[19:12] <alphe> verify_authorizer could not decrypt ticket info: error: NSS AES final round failed: -8190
[19:12] <alphe> that is the only problem I see
[19:12] <alphe> and it seems to me related to keyring
[19:12] <Discard> ok
[19:14] <Discard> alphe: and how could i fix the keyring problem ?
[19:15] <alphe> how did you create the osd on s4 ?
[19:15] <Discard> i haven't done that yet
[19:16] * sarob_ (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) has joined #ceph
[19:16] * BillK (~BillK-OFT@106-68-227-246.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[19:17] * sjusthm (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[19:17] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[19:20] <stj> hi folks, I'm having some pretty consistent problems with ceph-deploy hanging when I try to add a 5th host (with 3 OSDs) to my cluster
[19:21] <stj> the ceph-deploy disk zap runs fine
[19:21] <stj> then when I do the `osd prepare', it claims to finish setting things up
[19:21] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) has joined #ceph
[19:21] * sarob_ (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[19:21] <stj> but it leaves the osd partition mounted in /var/lib/ceph/tmp/mnt*
[19:21] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) has joined #ceph
[19:22] <stj> and if I try to activate the new OSD, it hangs indefinitely
[19:22] <stj> all I can see in the logs is " journal read_header error decoding journal header"
[19:22] <stj> which is referring to the new disk/OSD I just tried to activate
[19:22] <stj> anyone seen this?
[19:22] * houkouonchi-work (~linux@12.248.40.138) has joined #ceph
[19:23] * xarses (~andreww@64-79-127-122.static.wiline.com) has joined #ceph
[19:23] * danieagle (~Daniel@179.176.54.173.dynamic.adsl.gvt.net.br) Quit (Quit: inte+ e Obrigado Por tudo mesmo! :-D)
[19:24] * Sysadmin88 (~IceChat77@90.208.9.12) Quit (Quit: Easy as 3.14159265358979323846... )
[19:24] <stj> I also see some udev messages in the syslog that claim to be killing the ceph-disk-activate scripts after they time out
[19:25] <Discard> alphe: it's very strange because the rados command is ok and i can access the cluster, but not with ceph
[19:26] <alphe> stj are the related disks mounted ?
[19:27] <alphe> Discard indeed
[19:27] <alphe> I would do a full restart of all of my nodes
[19:27] <stj> alphe: I get the same hangs whether or not the disks are mounted
[19:27] <alphe> a stop ceph-all on all my nodes
[19:28] <alphe> then wait a little, see if all ceph related services are gone, and then start ceph-all
[19:28] <alphe> stj disks have to have the right format
[19:28] <alphe> and they have to be mounted
[19:28] * fouxm (~fouxm@185.23.92.11) Quit (Remote host closed the connection)
[19:28] <stj> yeah, I'm fairly confident that they are the right format
[19:28] <stj> as I'm formatting them with ceph-deploy disk zap, and then osd prepare
[19:29] <alphe> fdisk -l to be sure
[19:29] * fouxm (~fouxm@185.23.92.11) has joined #ceph
[19:29] <stj> the osd prepare seems to fail to unmount the disk from its temporary mount, and then mount it correctly in /var/lib/ceph/osd/*
[19:29] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[19:30] * fouxm (~fouxm@185.23.92.11) Quit (Read error: Connection reset by peer)
[19:31] <alphe> stj this could explain the problems if they are mounted in the wrong place
[19:31] <stj> ok
[19:31] <stj> I've also tried unmounting the disk from the temporary location, and mounting it by hand in the right spot before activating, and ceph-disk-activate still hangs :/
[19:31] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has left #ceph
[19:32] <stj> what's strange is I didn't see this issue on any of the other 4 nodes I've deployed
[19:33] <stj> ...maybe there's something up with the hardware
[19:33] * joao|lap (~JL@a79-168-11-205.cpe.netcabo.pt) has joined #ceph
[19:33] * ChanServ sets mode +o joao|lap
[19:34] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) has joined #ceph
[19:34] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit (Read error: Connection reset by peer)
[19:34] <alphe> something is strange ...
[19:34] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[19:34] <alphe> that is for sure
[19:35] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit ()
[19:40] <stj> hmm, fdisk and parted seem to disagree about the partition layout on the disk that I'm trying to zap
[19:43] * scalability-junk (uid6422@id-6422.ealing.irccloud.com) has joined #ceph
[19:44] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[19:45] <dmick> stj: maybe gpt vs fdisk?
[19:46] <stj> yeah, I think that's it
[19:46] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[19:46] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) has joined #ceph
[19:46] <stj> yeah, same thing happens for all disks on this node
[19:46] <stj> osd prepare finishes, and says that the host is ready for osd use
[19:46] <stj> disk is still mounted in the wrong spot in /var/lib/ceph/tmp
[19:47] <stj> and the ceph create process on the osd node is still running/hung :/
[19:47] * thomnico (~thomnico@2a01:e35:8b41:120:9935:fe41:68ca:e870) Quit (Quit: Ex-Chat)
[19:48] <Discard> alphe: strange thing: i can't use ceph without sudo
[19:49] <alphe> sudo ? but you are root no ?
[19:49] <alphe> discard whoami ?
[19:50] <Discard> i've created a user ceph
[19:50] <Discard> i'm not root
[19:50] <alphe> ceph is a root command
[19:50] <Discard> ok
[19:51] <alphe> since it is always accompanied by sudo
[19:51] <alphe> in the official docs
[19:51] <alphe> if you can't use ceph as a normal user it is because they can't read the files in /etc/ceph
[19:52] <alphe> i think it is something like that ..
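(That is usually the reason: /etc/ceph/ceph.client.admin.keyring is typically installed readable only by root, so ceph works with sudo but not as an ordinary user. A quick, though not security-minded, workaround is to make the keyring readable:)
    sudo chmod +r /etc/ceph/ceph.client.admin.keyring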
[19:53] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[19:53] * sarob_ (~sarob@2001:4998:effd:600:319e:9085:8b8f:f06c) has joined #ceph
[19:54] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[19:55] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[19:55] * noob2 (~cjh@173.252.71.189) has joined #ceph
[19:57] <noob2> ceph: i'm having an issue with my cluster where i migrated drives to another server and i'm getting this error when i try to bring them up: .connect claims to be 192.168.1.20:6813/4861 not 192.168.1.20:6813/5183 - wrong node!
[19:57] <noob2> i'm running the latest 0.72 code on ubuntu 13.04
[19:59] * sagelap1 (~sage@38.122.20.226) Quit (Ping timeout: 480 seconds)
[20:01] * sarob_ (~sarob@2001:4998:effd:600:319e:9085:8b8f:f06c) Quit (Remote host closed the connection)
[20:01] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) has joined #ceph
[20:03] * joao|lap (~JL@a79-168-11-205.cpe.netcabo.pt) Quit (Ping timeout: 480 seconds)
[20:05] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[20:05] * sarob_ (~sarob@2001:4998:effd:600:5898:cade:17e3:b65e) has joined #ceph
[20:06] * jcsp1 (~jcsp@2607:f298:a:607:cd42:5518:5a2e:8ae1) has joined #ceph
[20:07] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[20:12] * jcsp (~jcsp@0001bf3a.user.oftc.net) Quit (Ping timeout: 480 seconds)
[20:16] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[20:16] * LeaChim (~LeaChim@host81-159-251-38.range81-159.btcentralplus.com) has joined #ceph
[20:18] * noob2 (~cjh@173.252.71.189) has left #ceph
[20:23] * aliguori (~anthony@74.202.210.82) has joined #ceph
[20:43] * illya (~illya_hav@16-158-133-95.pool.ukrtel.net) has joined #ceph
[20:44] <illya> hi
[20:44] * sarob_ (~sarob@2001:4998:effd:600:5898:cade:17e3:b65e) Quit (Remote host closed the connection)
[20:44] <alphe> is there a problem with installing the osd disks with xfs and then creating a rados block device and formatting it to ext4 ?
[20:44] <alphe> illya hi
[20:44] * sarob (~sarob@2001:4998:effd:600:5898:cade:17e3:b65e) has joined #ceph
[20:45] <dmsimard> ircolle: :D
[20:45] * wwang001 (~wwang001@fbr.reston.va.neto-iss.comcast.net) has joined #ceph
[20:46] <alphe> is there a problem with installing the osd disks with xfs and then creating a rados block device and formatting it to ext4 ?
[20:46] <illya> i tried to set up another ceph cluster with the Chef cookbook, had several issues during deployment, so was redoing several steps
[20:47] <illya> finally I got it up
[20:47] <ircolle> dmsimard - see my reply ;-)
[20:47] <alphe> is it better to make the osd disks ext4 and then an ext4 rbd ?
[20:47] <illya> but not too healthy
[20:47] <illya> health HEALTH_WARN 192 pgs degraded; 192 pgs stale; 192 pgs stuck stale; 192 pgs stuck unclean; 8 requests are blocked > 32 sec; mds cluster is degraded
[20:48] <illya> any ideas what I should check
[20:48] <illya> I tried to grep all docs - no luck
[20:48] <illya> ceph version 0.67.4
[20:49] <illya> full output here
[20:49] <illya> http://pastebin.com/RUHWW6yb
[20:50] <illya> I'm ready to recreate all from the beginning if this would help
[20:51] * xmltok (~xmltok@216.103.134.250) has joined #ceph
[20:52] * sarob (~sarob@2001:4998:effd:600:5898:cade:17e3:b65e) Quit (Ping timeout: 480 seconds)
[21:01] <bandrus> are all your OSDs on one host?
[21:02] <illya> no
[21:03] <bandrus> are all OSDs started after cluster creation?
[21:03] <illya> yes
[21:04] <illya> i tried put them out then in
[21:04] <illya> no luck
[21:05] <ganders> hi all, i want to set up a node with 12 OSDs (12 x 73GB disks) and 2 journals (1 x 146GB disk); what would be the command to do that?
[21:05] <illya> just found some bench command
[21:05] * tsnider (~tsnider@nat-216-240-30-23.netapp.com) Quit (Ping timeout: 480 seconds)
[21:05] <bandrus> what size are your pools set to ilya?
[21:05] <bandrus> default of 2?
[21:06] <ganders> ceph-deploy osd prepare ceph-node01:sdb:/dev/sdc1 (for example for OSD#1)
[21:06] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) has joined #ceph
[21:06] <bandrus> that should work, ganders
[21:06] <ganders> ceph-deploy osd prepare ceph-node01:sdd:/dev/sdc2 (for OSD#2) and so on... ?
[21:06] <bandrus> and then ceph-deploy activate
[21:07] <ganders> so first i need to partition the journal disk in two, right? and then associate 6 OSDs with one partition
[21:07] <ganders> and then the other 6 with the other partition?
[21:07] <illya> my config is very simple
[21:07] <bandrus> correct
[21:07] <illya> http://pastebin.com/vZWsvzqy
[21:07] <ganders> oh ok thanks a lot bandrus
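(One caveat on the plan above: in practice each OSD gets its own journal partition or file, so 12 OSDs sharing one journal disk means 12 journal partitions, not 2. If ceph-deploy is given the whole journal device it carves out one partition per OSD; the device names below are hypothetical:)
    ceph-deploy osd create ceph-node01:sdb:/dev/sdc ceph-node01:sdd:/dev/sdc   # journal partitions on sdc are created per OSD
    # or, with pre-made partitions, point each OSD at its own partition:
    ceph-deploy osd prepare ceph-node01:sdb:/dev/sdc1 ceph-node01:sdd:/dev/sdc2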
[21:08] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[21:08] * sarob (~sarob@2001:4998:effd:600:9014:92af:bc5d:1586) has joined #ceph
[21:11] <illya> what does the following mean in pg dump
[21:12] <illya> pool 0 0 0 0 0 0 0 0
[21:12] <illya> pool 1 0 0 0 0 0 0 0
[21:12] <illya> pool 2 0 0 0 0 0 0 0
[21:12] <illya> sum 0 0 0 0 0 0 0
[21:12] <bandrus> post a ceph health detail and a ceph osd tree (to pastebin)
[21:13] <illya> sec
[21:14] <illya> http://pastebin.com/Xs2ApEd0
[21:14] <Discard> Hi there
[21:14] <illya> osd tree
[21:14] <illya> http://pastebin.com/1FZh0XPM
[21:16] <Discard> I have a strange problem creating mons: I already have 3 monitors and they're ok, but when I want to deploy another one the installation produces a strange config: you can see it here : http://pastebin.com/0t3ivSqv
[21:16] <Discard> if anyone has an idea :-) ?
[21:17] * SvenPHX (~Adium@wsip-174-79-34-244.ph.ph.cox.net) Quit (Quit: Leaving.)
[21:18] <bandrus> ilya, have you tried restarting your cluster?
[21:19] <illya> osd's or all daemons ?
[21:19] <bandrus> all daemons
[21:20] <bandrus> if possible
[21:22] * SvenPHX (~Adium@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[21:27] <illya> restarted
[21:27] <bandrus> same thing?
[21:28] <illya> health HEALTH_WARN 192 pgs degraded; 192 pgs stale; 192 pgs stuck stale; 192 pgs stuck unclean; mds cluster is degraded
[21:28] <illya> mon.0 [INF] pgmap v154: 192 pgs: 192 stale+active+degraded; 0 bytes data, 10401 MB used, 10162 GB / 10716 GB avail
[21:28] * ScOut3R (~scout3r@4E5C7421.dsl.pool.telekom.hu) Quit ()
[21:28] <bandrus> try a ceph pg force_create_pg 0.3f
[21:29] <bandrus> see if that number goes down to 191
[21:29] <illya> ceph pg force_create_pg 0.3f
[21:29] <illya> pg 0.3f now creating, ok
[21:29] <illya> yes it is
[21:29] <illya> now 191
[21:30] <illya> pgmap v158: 192 pgs: 1 creating, 191 stale+active+degraded;
[21:30] <bandrus> did it create successfully?
[21:31] <illya> not sure how to check
[21:31] <bandrus> does it still say "1 creating"?
[21:32] <illya> yes
[21:34] <illya> and still yes :(
[21:34] <bandrus> how about a "ceph pg send_pg_creates"
[21:35] <bandrus> does that put them all into a creating state?
[21:35] <illya> no
[21:36] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[21:36] <illya> I'm ready to rebuild all if it helps :(
[21:37] <illya> but I can probably get the same results
[21:38] <bandrus> I'm no OSD expert unfortunately
[21:38] <bandrus> did you initially only have one OSD in this cluster?
[21:39] <illya> yes
[21:39] <illya> really I have nothing in config
[21:40] <illya> but I started 1 osd
[21:40] <illya> and started second in 10-15mins
[21:40] <illya> I can do the same but start with 2
[21:40] <bandrus> wondering what your crush map looks like
[21:42] * ganders (~gertux@200.0.230.234) Quit (Quit: WeeChat 0.4.0)
[21:42] * sleinen (~Adium@2001:620:0:25:889:69f8:a668:b55c) has joined #ceph
[21:46] <illya> not sure I did all right
[21:46] <illya> but please take a look
[21:46] <illya> http://pastebin.com/p69q6PWW
[21:52] <illya> some news
[21:53] <bandrus> oh?
[21:53] <illya> creating -> active+clean now
[21:53] <bandrus> excellent
[21:53] <illya> should I write a simple script to force them all ?
[21:54] <bandrus> yeah, for i in `ceph health detail | grep stuck | awk '{print $1}'`; do blah blah blah
[21:54] <bandrus> something like that
[21:55] <bandrus> there very well may be an easier way to do that, I thought perhaps send_pg_creates would do it, but apparently that is not its purpose
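(A completed version of the loop bandrus is sketching; in `ceph health detail` the pg id is the second field of lines like "pg 0.3f is stuck stale ...", and matching only lines that start with "pg " keeps the HEALTH_WARN summary line out of the loop:)
    for pg in $(ceph health detail | awk '/^pg / && /stuck/ {print $2}'); do
        ceph pg force_create_pg "$pg"
    done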
[21:58] <Discard> hey bandrus, could you help me with monitors ?
[21:58] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit (Quit: mtanski)
[21:59] <bandrus> Discard: I don't quite understand where your problem is, which mon was giving you issues, and where in your pastebin do you encounter these issues?
[21:59] <illya> fyi
[21:59] <illya> for i in `ceph health detail|grep stuck| cut -d ' ' -f 2`; do ceph pg force_create_pg $i; done
[22:00] <bandrus> I see, your IPs
[22:00] <bandrus> cool, thanks illya
[22:00] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[22:00] <Discard> bandrus: I find errors
[22:00] <bandrus> Discard: is s4 able to properly resolve the other hosts?
[22:02] <Discard> yep bandrus
[22:02] <illya> bandrus: can you describe what could be the reason for my situation ?
[22:02] <Discard> i've updated http://pastebin.com/0t3ivSqv
[22:02] <flaxy> [Kryvyy Rih, Ukraine] Fog. Temp is 0*C but feels like -4*C. SW wind: 11 kph. Humidity: 100%.
[22:03] <bandrus> Illya, you may understand why they were in that state initially - they cannot become *active (i think)* with only one OSD (unless you modify your crush map)
[22:04] <bandrus> as far as why they did not automatically become active after adding other OSDs, I can't say
[22:04] <illya> so better always start from >= 2 OSDs
[22:05] <bandrus> well if we went by this particular case, then that would be an almost accurate statement. >= 2 OSDs *on two separate hosts
[22:05] <illya> I'm not sure how to deploy it
[22:05] <bandrus> but I'm under the impression that should have become healthy automatically. I just tested it and ceph became healthy after I simply started the OSDs on the other node
[22:06] <illya> probably deploy OSDs first and only then start MON
[22:06] <bandrus> illya: do you use ceph-deploy?
[22:06] <illya> nope
[22:06] <kraken> http://i.imgur.com/zCtbl.gif
[22:06] <Discard> bandrus: have you already seen this type of error ?
[22:06] <bandrus> okay, that might be one solution, but I can't say for sure.
[22:07] <bandrus> Discard, I have seen similar errors, but I am not aware of a solution off the top of my head
[22:07] <illya> I'm using this https://github.com/ceph/ceph-cookbooks
[22:07] <bandrus> unfortunately I can't stay much longer
[22:07] <illya> bandrus: thx a lot
[22:07] <Discard> bandrus: no problem thanks
[22:09] * mozg (~andrei@host86-184-120-168.range86-184.btcentralplus.com) has joined #ceph
[22:12] <illya> any specific config settings
[22:13] <illya> if I want to join 5 OSDs of 3 TB each ?
[22:13] * sarob (~sarob@2001:4998:effd:600:9014:92af:bc5d:1586) Quit (Remote host closed the connection)
[22:13] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) has joined #ceph
[22:15] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[22:19] * AfC (~andrew@203-219-79-122.static.tpgi.com.au) has joined #ceph
[22:21] * zhyan_ (~zhyan@134.134.139.70) has joined #ceph
[22:21] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[22:29] * yanzheng (~zhyan@101.229.190.24) Quit (Ping timeout: 480 seconds)
[22:32] * illya (~illya_hav@16-158-133-95.pool.ukrtel.net) has left #ceph
[22:37] * rendar (~s@87.1.177.0) Quit (Read error: Connection reset by peer)
[22:39] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Quit: Leaving)
[22:46] * madkiss (~madkiss@p4FFCDE42.dip0.t-ipconnect.de) has joined #ceph
[22:50] * Cube (~Cube@12.248.40.138) has joined #ceph
[22:51] * sjm (~Adium@rtp-isp-nat1.cisco.com) has joined #ceph
[22:53] * zhyan_ (~zhyan@134.134.139.70) Quit (Remote host closed the connection)
[22:53] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[22:54] * Hakisho (~Hakisho@0001be3c.user.oftc.net) Quit (Ping timeout: 480 seconds)
[22:58] * allsystemsarego (~allsystem@188.26.167.169) Quit (Quit: Leaving)
[22:59] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[23:00] * sarob (~sarob@2001:4998:effd:600:d1cc:e60c:1304:8086) has joined #ceph
[23:03] * madkiss (~madkiss@p4FFCDE42.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[23:04] * markbby (~Adium@168.94.245.1) Quit (Quit: Leaving.)
[23:10] * BillK (~BillK-OFT@106-68-227-246.dyn.iinet.net.au) has joined #ceph
[23:11] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[23:11] * ChanServ sets mode +v andreask
[23:11] * Sysadmin88 (~IceChat77@90.208.9.12) has joined #ceph
[23:13] <andreask> ceph osd crush remove osd.0
[23:13] <andreask> device 'osd.0' does not appear in the crush map
[23:13] <andreask> anyone seeing an error here?
[23:13] * sarob (~sarob@2001:4998:effd:600:d1cc:e60c:1304:8086) Quit (Remote host closed the connection)
[23:14] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) has joined #ceph
[23:17] * sarob_ (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) has joined #ceph
[23:17] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[23:18] * mozg (~andrei@host86-184-120-168.range86-184.btcentralplus.com) Quit (Quit: Ex-Chat)
[23:18] * mozg (~andrei@host86-184-120-168.range86-184.btcentralplus.com) has joined #ceph
[23:23] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[23:26] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) Quit (Quit: Leaving)
[23:28] * sprachgenerator (~sprachgen@130.202.135.213) has joined #ceph
[23:42] * nmtadam (~oftc-webi@pat.hitachigst.com) has joined #ceph
[23:43] * nmtadam (~oftc-webi@pat.hitachigst.com) Quit ()
[23:45] <alphe> is there a problem with installing the osd disks with xfs and then creating a rados block device and formatting it to ext4 ?
[23:45] <alphe> is it better to make the osd disks ext4 and then an ext4 rbd ?
[23:47] <alphe> ?
[23:47] * grepory (foopy@lasziv.reprehensible.net) Quit (Read error: Connection reset by peer)
[23:50] <angdraug> alphe: no more a problem than creating a loopback device from a file on xfs and formatting _that_ to ext4
[23:50] <angdraug> in other words, no problem at all
[23:50] <alphe> ok
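(A minimal sketch of that combination with made-up names: the filesystem on the OSDs, xfs here, is independent of whatever filesystem you put inside an rbd image:)
    rbd create rbd/myvol --size 102400   # 100 GB image
    rbd map rbd/myvol                    # shows up as /dev/rbd0 on the client
    mkfs.ext4 /dev/rbd0
    mount /dev/rbd0 /mnt/myvol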
[23:51] <alphe> yesterday I had a problem extending my rbd image formatted with xfs
[23:52] <alphe> it triggered a weird XIOC_ error and I couldn't fix it even with xfs_check
[23:52] <alphe> so as my rbd image is supposed to grow in the future, what would be the best choice, ext4 or xfs ?
[23:53] <alphe> to be on the safe side I always use xfs_growfs -d /mountpoint
[23:54] <alphe> but suddenly, after an extension, it went back and shrank
[23:54] <alphe> and then the XIOC errors started
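(For reference, the usual grow sequence for an xfs-formatted rbd image, reusing the made-up names above; older kernel clients may need the image unmapped and remapped before the new size is visible:)
    rbd resize rbd/myvol --size 204800   # grow the image to 200 GB
    xfs_growfs -d /mnt/myvol             # grow the mounted filesystem to fill the new size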
[23:54] * grepory (foopy@lasziv.reprehensible.net) has joined #ceph
[23:54] * sjm (~Adium@rtp-isp-nat1.cisco.com) Quit (Quit: Leaving.)
[23:56] * sleinen (~Adium@2001:620:0:25:889:69f8:a668:b55c) Quit (Quit: Leaving.)
[23:56] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[23:59] * mwarwick (~mwarwick@2407:7800:400:1011:3e97:eff:fe91:d9bf) has joined #ceph
[23:59] <Pedras> cache
[23:59] <Pedras> The cache mode to be used. The host pagecache provides cache memory. The cache value can be 'none', 'writethrough', or 'writeback'.
[23:59] <Pedras> 'writethrough' provides read caching. 'writeback' provides read and write caching.

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.