#ceph IRC Log

IRC Log for 2015-06-12

Timestamps are in GMT/BST.

[0:00] <flaf> TheSov: you said "i noticed a 4 percentish speed difference by seperating out the monitors from OSD servers". But when the monitors were on the OSD servers, where had you put the working directory of the monitors? On a dedicated disk?
[0:00] * arbrandes (~arbrandes@191.254.207.134) has joined #ceph
[0:02] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[0:08] * Shesh (~Shadow386@8Q4AABGFI.tor-irc.dnsbl.oftc.net) Quit ()
[0:08] * Frymaster (~spidu_@185.77.129.88) has joined #ceph
[0:14] * georgem (~Adium@184.151.190.34) Quit (Quit: Leaving.)
[0:14] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[0:14] * yanzheng (~zhyan@125.71.107.110) Quit (Quit: This computer has gone to sleep)
[0:18] * jwilkins (~jwilkins@2601:9:4580:f4c:ea2a:eaff:fe08:3f1d) Quit (Ping timeout: 480 seconds)
[0:22] * jks (~jks@178.155.151.121) Quit (Ping timeout: 480 seconds)
[0:24] * arbrandes (~arbrandes@191.254.207.134) Quit (Quit: Leaving)
[0:25] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[0:27] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[0:32] * sleinen (~Adium@2001:620:0:82::106) Quit (Ping timeout: 480 seconds)
[0:38] * Frymaster (~spidu_@9S0AAA0H3.tor-irc.dnsbl.oftc.net) Quit ()
[0:38] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[0:39] * ircolle is now known as ircolle-afk
[0:46] * bobrik_ (~bobrik@83.243.64.45) Quit (Remote host closed the connection)
[0:50] * jwilkins (~jwilkins@2601:9:703:f100:ea2a:eaff:fe08:3f1d) has joined #ceph
[0:51] * sjm (~sjm@pool-173-70-76-86.nwrknj.fios.verizon.net) has joined #ceph
[0:51] * nardial (~ls@dslb-178-009-182-197.178.009.pools.vodafone-ip.de) Quit (Quit: Leaving)
[0:53] * rlrevell (~leer@184.52.129.221) has joined #ceph
[0:58] * bandrus1 (~brian@234.sub-70-211-67.myvzw.com) Quit (Quit: Leaving.)
[1:00] * sjm (~sjm@pool-173-70-76-86.nwrknj.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[1:01] * rlrevell (~leer@184.52.129.221) has left #ceph
[1:04] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[1:06] * moore (~moore@64.202.160.88) Quit (Remote host closed the connection)
[1:08] * Random (~rushworld@vmi24716.contabo.host) has joined #ceph
[1:08] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[1:11] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) Quit (Ping timeout: 480 seconds)
[1:15] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has joined #ceph
[1:19] * Concubidated (~Adium@192.41.52.12) Quit (Quit: Leaving.)
[1:27] * alram (~alram@192.41.52.12) Quit (Ping timeout: 480 seconds)
[1:30] * evilrob00 (~evilrob00@cpe-72-179-14-118.austin.res.rr.com) has joined #ceph
[1:37] * moore (~moore@97-124-90-185.phnx.qwest.net) has joined #ceph
[1:38] * moore (~moore@97-124-90-185.phnx.qwest.net) Quit (Remote host closed the connection)
[1:38] * Random (~rushworld@2FBAACBF9.tor-irc.dnsbl.oftc.net) Quit ()
[1:38] * Frostshifter (~storage@patti.ge.ieiit.cnr.it) has joined #ceph
[1:38] * moore (~moore@64.202.160.233) has joined #ceph
[1:43] * Concubidated (~Adium@66.87.127.21) has joined #ceph
[1:47] * alram (~alram@64.134.221.151) has joined #ceph
[1:50] * sjm (~sjm@pool-173-70-76-86.nwrknj.fios.verizon.net) has joined #ceph
[2:00] * oms101 (~oms101@p20030057EA0A7800C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[2:03] * vata (~vata@cable-21.246.173-197.electronicbox.net) has joined #ceph
[2:04] * debian112 (~bcolbert@24.126.201.64) Quit (Quit: Leaving.)
[2:08] * Frostshifter (~storage@5NZAADNK9.tor-irc.dnsbl.oftc.net) Quit ()
[2:08] * dusti (~Dinnerbon@176.10.104.240) has joined #ceph
[2:09] * oms101 (~oms101@p20030057EA096F00C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[2:10] * jrankin (~jrankin@d53-64-170-236.nap.wideopenwest.com) Quit (Quit: Leaving)
[2:17] <flaf> Hi, I think I have something wrong after the upgrade to Hammer 0.94.2 and the fix for rgw objects.
[2:17] * Concubidated (~Adium@66.87.127.21) Quit (Read error: Connection reset by peer)
[2:18] * Concubidated (~Adium@66-87-127-21.pools.spcsdns.net) has joined #ceph
[2:18] <flaf> For a bucket, the cmd `radosgw-admin --id=radosgw.gw1 bucket check --check-head-obj-locator --bucket=$bucket` displays objects with "status" == "needs_fixing"
[2:19] * yguang11 (~yguang11@nat-dip30-wl-d.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[2:19] <flaf> If I run the cmd again with the --fix option, I get errors.
[2:20] <flaf> like that => 7f8051c8a840 -1 ERROR: ioctx.operate(oid=default.763616.1___multipart_registry/images/1483a2ea4c3f5865d4d583fb484bbe11afe709a6f3d1baef102904d4d9127909/layer.2~QorD8QaGiDc4HPUP7VVpx4LS-e_7f0u.meta) returned ret=-2
[2:20] <flaf> And I still have objects with "status" == "needs_fixing".
[2:21] <flaf> Have I done something wrong?
[2:24] * yguang11 (~yguang11@2001:4998:effd:600:8ee:c661:a9d7:68b4) has joined #ceph
[2:26] <flaf> It seems to me that the --fix option triggers errors for me.
[2:26] * Concubidated (~Adium@66-87-127-21.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[2:26] * Concubidated (~Adium@66.87.127.21) has joined #ceph
[2:28] * espeer (~quassel@phobos.isoho.st) Quit (Remote host closed the connection)
[2:29] <flaf> In fact, the typical pair of consecutive errors is:
[2:29] <flaf> ERROR: fix_head_object_locator() returned ret=-2
[2:29] <flaf> 2015-06-12 02:28:33.915092 7f857f1ed840 -1 ERROR: ioctx.operate(oid=default.763616.1___multipart_registry/images/1483a2ea4c3f5865d4d583fb484bbe11afe709a6f3d1baef102904d4d9127909/layer.2~QorD8QaGiDc4HPUP7VVpx4LS-e_7f0u.meta) returned ret=-2
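For reference, the failing fix pass flaf describes is presumably the check command from above rerun with --fix; a minimal sketch, reusing the --id and bucket placeholder from his earlier message:

    # report objects whose head locator needs fixing
    radosgw-admin --id=radosgw.gw1 bucket check --check-head-obj-locator --bucket=$bucket
    # attempt to rewrite the locators in place
    radosgw-admin --id=radosgw.gw1 bucket check --check-head-obj-locator --fix --bucket=$bucket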
[2:30] * espeer (~quassel@phobos.isoho.st) has joined #ceph
[2:30] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[2:31] * Concubidated (~Adium@66.87.127.21) Quit (Read error: No route to host)
[2:31] * Concubidated (~Adium@66.87.127.21) has joined #ceph
[2:31] * xarses (~andreww@166.175.56.12) Quit (Ping timeout: 480 seconds)
[2:32] * Concubidated (~Adium@66.87.127.21) Quit (Read error: Connection reset by peer)
[2:32] * alram (~alram@64.134.221.151) Quit (Ping timeout: 480 seconds)
[2:36] * mookins (~mookins@induct3.lnk.telstra.net) has joined #ceph
[2:37] * scuttlemonkey is now known as scuttle|afk
[2:38] * dusti (~Dinnerbon@5NZAADNM1.tor-irc.dnsbl.oftc.net) Quit ()
[2:38] * chrisinajar (~Nanobot@5.61.34.63) has joined #ceph
[2:43] * haomaiwa_ (~haomaiwan@118.244.254.29) Quit (Remote host closed the connection)
[2:49] * scuttle|afk is now known as scuttlemonkey
[2:49] * evilrob00 (~evilrob00@cpe-72-179-14-118.austin.res.rr.com) Quit (Remote host closed the connection)
[2:50] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Ping timeout: 480 seconds)
[3:01] * kefu (~kefu@114.86.215.22) has joined #ceph
[3:03] * sjm (~sjm@pool-173-70-76-86.nwrknj.fios.verizon.net) has left #ceph
[3:06] * georgem (~Adium@65-110-212-116.cpe.pppoe.ca) has joined #ceph
[3:08] * chrisinajar (~Nanobot@5NZAADNN8.tor-irc.dnsbl.oftc.net) Quit ()
[3:08] * Phase (~ChauffeR@5NZAADNPJ.tor-irc.dnsbl.oftc.net) has joined #ceph
[3:08] * Phase is now known as Guest1320
[3:17] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[3:27] * moore_ (~moore@97-124-90-185.phnx.qwest.net) has joined #ceph
[3:30] * owasserm (~owasserm@206.169.83.146) Quit (Ping timeout: 480 seconds)
[3:31] * evilrob00 (~evilrob00@cpe-72-179-14-118.austin.res.rr.com) has joined #ceph
[3:34] * moore (~moore@64.202.160.233) Quit (Ping timeout: 480 seconds)
[3:36] * bjornar_ (~bjornar@93.187.84.175) Quit (Ping timeout: 480 seconds)
[3:36] * LeaChim (~LeaChim@host86-175-32-176.range86-175.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[3:38] * Guest1320 (~ChauffeR@5NZAADNPJ.tor-irc.dnsbl.oftc.net) Quit ()
[3:38] * QuantumBeep (~ahmeni@destiny.enn.lu) has joined #ceph
[3:42] * calvinx (~calvin@101.100.172.246) Quit (Quit: calvinx)
[3:43] * xarses (~andreww@12.10.113.130) has joined #ceph
[3:51] * jclm (~jclm@122.181.21.134) Quit (Quit: Leaving.)
[3:53] * evilrob00 (~evilrob00@cpe-72-179-14-118.austin.res.rr.com) Quit (Remote host closed the connection)
[3:56] * yanzheng (~zhyan@125.71.107.110) has joined #ceph
[3:57] * yguang11 (~yguang11@2001:4998:effd:600:8ee:c661:a9d7:68b4) Quit (Remote host closed the connection)
[4:05] <TheSov> i have an idea
[4:05] <TheSov> lets put ceph on a bunch of raspberry pi's!
[4:06] <TheSov> you could use normal machines as monitors, and use raspberry pi's as "1 off" osds
[4:06] <TheSov> simply connect a usb hard drive!
[4:06] <TheSov> it already has a ssd as its primary disk :)
[4:08] * QuantumBeep (~ahmeni@9S0AAA0O1.tor-irc.dnsbl.oftc.net) Quit ()
[4:08] * ylmson (~dux0r@lumumba.torservers.net) has joined #ceph
[4:09] * flisky (~Thunderbi@106.39.60.34) has joined #ceph
[4:10] * fam is now known as fam_away
[4:11] * fam_away is now known as fam
[4:12] <TheSov> nevermind its been done
[4:13] <destrudo> I use a radxa rock as a mon
[4:13] <destrudo> I want to get ahold of the rock 2 with the full baseboard tho
[4:14] <destrudo> 4 cores, 4 gigs of memory and dual gigabit that's technically limited to 400 megabit
[4:14] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[4:15] <destrudo> http://store.radxa.com/collections/frontpage/products/rock2-square
[4:16] <destrudo> they use linaro to roll up ubuntu and it works damned well
[4:16] <TheSov> heh nice
[4:16] <TheSov> im trying to take my virtual setup and move it to hardware
[4:17] <destrudo> I was going to get 4 of em' and stick em' in a 1U chassis, but then they started to tease with the rock2 and I thought it was stupid to buy in when I saw dual gigabit and sata
[4:18] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[4:18] <destrudo> I've been tempted to just go apeshit and slam a bunch of 1U's with 4x3.5 and 2x2.5 and an avoton proc with 8 gigs of ddr3 ecc
[4:19] <destrudo> just stack em' up for a proper amount of failover at home
[4:20] <destrudo> Right now I'm rolling with 2 OSD nodes, 12 drives a pop
[4:20] <destrudo> it just ain't very good
[4:22] <destrudo> anyways, I'll probably do the 4 nodes in a 1U when they release the full board
[4:22] <destrudo> I'll need to prepare a little rear I/O panel PCB, but that's not really a big issue
[4:25] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) Quit (Ping timeout: 480 seconds)
[4:26] * CAPSLOCK2000 (~oftc@2001:984:3be3:1::8) Quit (Ping timeout: 480 seconds)
[4:29] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[4:30] <TheSov> wait 2 osds with 12 drives?
[4:30] <TheSov> u running raid cards?
[4:31] * shang (~ShangWu@175.41.48.77) has joined #ceph
[4:32] <TheSov> i wonder if its possible to install ceph osd on a seagate kinetic's system board
[4:32] <TheSov> it has an ethernet out on it aleady
[4:32] <TheSov> already...
[4:32] <TheSov> you could literally build petabytes in a single rack
[4:34] * Concubidated (~Adium@66.87.127.21) has joined #ceph
[4:34] <TheSov> holy shit... https://storageservers.wordpress.com/2014/05/09/hgst-open-ethernet-vs-seagate-kinetic-open-storage/
[4:34] <TheSov> hgst's open ethernet already supports user created firmware
[4:36] <gleam> there's already work being done to make ceph osd work with the kinetics
[4:36] <gleam> not sure of the state
[4:36] <gleam> apparently 0.84 has a prototype
[4:36] <TheSov> so all you need is basically monitors....
[4:36] <TheSov> and a fuckton of backplanes
[4:37] <gleam> with the kinetics i don't believe the osd process runs on the drives themselves
[4:37] <gleam> you still need boxes to run the osd processes, but actual writes are via the api to the drive
[4:37] <gleam> i might be misremembering though
[4:37] <TheSov> well im saying u wipe the stupid seagate firmware and install a microlinux with osd on it
[4:38] <gleam> i'd be concerned about cpu performance given osd process requirements
[4:38] <gleam> but who knows
[4:38] <TheSov> would it really matter?
[4:38] * ylmson (~dux0r@9S0AAA0PS.tor-irc.dnsbl.oftc.net) Quit ()
[4:38] * demonspork (~roaet@ns330308.ip-37-187-119.eu) has joined #ceph
[4:38] <TheSov> you have so many disks attached that it would be fast nonetheless. journals would be a problem
[4:39] <TheSov> unless you could write them to RAMfs directly on the cache of the disk
[4:39] <TheSov> but then you could never turn the drive off heh
[4:40] <TheSov> basically the underside of those kinetic drives is 2 arm processors memory and an ethernet card
[4:40] <TheSov> some ssd for firmware
[4:40] * alram (~alram@64.134.221.151) has joined #ceph
[4:40] <TheSov> though i imagine that ssd is not of high quality
[4:41] <destrudo> OSD /hosts/ not OSD's
[4:41] <destrudo> I'm not crazy enough to stick raid on the bottom. It's rolling on HBA's
[4:41] <destrudo> random mixed drives
[4:42] <destrudo> all the same size
[4:42] <destrudo> it oddly does not screw with performance in any noticeably significant way
[4:43] <destrudo> I had been experimenting with cache tiering, but then my backplane blew out and took all of my poor SSD samples with it
[4:49] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[4:52] * evilrob00 (~evilrob00@cpe-72-179-14-118.austin.res.rr.com) has joined #ceph
[4:53] * nsoffer (~nsoffer@bzq-79-180-80-86.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[4:53] * ghartz_ (~ghartz@AStrasbourg-651-1-211-89.w109-221.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[4:57] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[4:58] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[4:58] * evilrob00 (~evilrob00@cpe-72-179-14-118.austin.res.rr.com) Quit (Quit: Leaving...)
[5:02] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[5:03] <mongo> kinetics didn't support atomic transactions in an exposed way IIRC.
[5:04] <mongo> at least in its current form
[5:06] * fxmulder (~fxmulder@cpe-24-55-6-128.austin.res.rr.com) Quit ()
[5:08] * overclk (~overclk@121.244.87.117) has joined #ceph
[5:08] * demonspork (~roaet@9S0AAA0QM.tor-irc.dnsbl.oftc.net) Quit ()
[5:08] * Tarazed (~VampiricP@marylou.nos-oignons.net) has joined #ceph
[5:10] * jwilkins (~jwilkins@2601:9:703:f100:ea2a:eaff:fe08:3f1d) Quit (Ping timeout: 480 seconds)
[5:11] <steveeJ> it has been a while since I've looked into this, but I've been wondering for a while: can I set a preferred primary OSD per client and pool?
[5:12] <TheSov> how do you list the contents of a pool
[5:12] <TheSov> my google fu is lacking
[5:12] <steveeJ> are you talking about images?
[5:12] <TheSov> i know ceph osd lspools
[5:12] <TheSov> yes
[5:12] <steveeJ> rbd -p <pool> ls
[5:12] <TheSov> ahhh thanks!
[5:12] <steveeJ> yw
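A quick aside on the listing commands: rbd ls shows the images in a pool, while rados ls (not mentioned above, but the standard way to see the raw objects) lists what backs them; the pool name here is just an example:

    rbd -p rbd ls      # RBD images in the pool
    rados -p rbd ls    # underlying RADOS objects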
[5:13] <TheSov> so i have full cluster deployed as vm's to work on a test lab
[5:13] <TheSov> its all running on 1 proxmox and 1 freenas box
[5:14] <TheSov> i gotta say its pretty amazing, cant wait to see it on hardware
[5:14] <TheSov> i have a deploy server, 3 monitors servers, and 3 osd servers with 1 osd each
[5:14] <TheSov> so a production deployment is at least 7 servers
[5:14] <TheSov> thats a helluva upfront cost
[5:14] <mongo> steveeJ: per pool but I don't think per client.
[5:15] <mongo> steveeJ: good blog on it http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
[5:15] * calvinx (~calvin@203.126.171.206) has joined #ceph
[5:16] <steveeJ> mongo: thanks, I've seen that post already. the per client part is actually important in this case ;)
[5:17] * midnight_ (~midnightr@c-67-174-241-112.hsd1.ca.comcast.net) has joined #ceph
[5:17] <TheSov> how do you limit a client to a specific image?
[5:17] <TheSov> ceph auth get-or-create client.cephclientname osd 'allow rwx' mon 'allow r' -o /etc/ceph/ceph.client.cephclientname.keyring - i use this command to create a client keyring but how do i make sure the client doesn't use the wrong image
[5:18] * Vacuum__ (~Vacuum@i59F79A26.versanet.de) has joined #ceph
[5:18] <mongo> that may be hard, I think the design of crush is antithetical to that option
[5:18] <lurbs> Auth is per pool, not per image.
[5:18] <TheSov> ok then
[5:18] <TheSov> how do i limit per pool?
[5:19] <mongo> you can mask objects but I couldn't get masking of rbd devices to work.
[5:19] * midnigh__ (~midnightr@216.113.160.71) has joined #ceph
[5:19] <mongo> allow rwx pool=volumes
[5:19] <TheSov> ahhh nice!
[5:19] <TheSov> thanks!
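Putting mongo's cap together with TheSov's original command, a keyring restricted to one pool would look roughly like this (client name and pool name are the placeholders from the discussion above):

    ceph auth get-or-create client.cephclientname \
        mon 'allow r' \
        osd 'allow rwx pool=volumes' \
        -o /etc/ceph/ceph.client.cephclientname.keyring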
[5:20] <TheSov> ok thats enough ceph for one day
[5:20] <TheSov> have a good night gentlemen
[5:20] <TheSov> thanks for all the help
[5:22] <mongo> steveeJ: While there are many options on how to do distributed systems, crush is really meant to allow all cluster members to know the current and correct location for all replicas. I may be wrong but it would be hard for systems to have different placement rules in a single pool.
[5:24] * midnightrunner (~midnightr@216.113.160.71) Quit (Ping timeout: 480 seconds)
[5:24] * squ (~Thunderbi@46.109.36.167) has joined #ceph
[5:24] * alram (~alram@64.134.221.151) Quit (Quit: leaving)
[5:24] <steveeJ> mongo: it's a valid point. having multiple clients with different primary OSDs would defeat the algorithm
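For what Ceph does support: there is a cluster-wide, per-OSD primary affinity weight (not per client, and not mentioned in the exchange above); a sketch, assuming a Firefly-or-later cluster:

    # make osd.3 half as likely to be chosen as primary (1.0 is the default, 0 means never)
    ceph osd primary-affinity osd.3 0.5
    # older releases may additionally need 'mon osd allow primary affinity = true' in ceph.conf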
[5:25] * Vacuum_ (~Vacuum@i59F79560.versanet.de) Quit (Ping timeout: 480 seconds)
[5:25] * fxmulder (~fxmulder@cpe-24-55-6-128.austin.res.rr.com) has joined #ceph
[5:25] * midnight_ (~midnightr@c-67-174-241-112.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:28] * vbellur (~vijay@122.167.138.43) Quit (Ping timeout: 480 seconds)
[5:28] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[5:31] * sjm (~sjm@pool-173-70-76-86.nwrknj.fios.verizon.net) has joined #ceph
[5:34] * guillermo (~guillermo@200.77.224.239) has joined #ceph
[5:35] <guillermo> Hello, anybody there?
[5:35] * jclm (~jclm@121.244.87.117) has joined #ceph
[5:36] * Sysadmin88 (~IceChat77@054527d3.skybroadband.com) Quit (Ping timeout: 480 seconds)
[5:36] <guillermo> Just looking for some guidance with an mds issue, i have been searching all day but I am not able to fix my cluster. I am using ceph with openstack.
[5:38] * Tarazed (~VampiricP@9S0AAA0RJ.tor-irc.dnsbl.oftc.net) Quit ()
[5:38] * Sami345 (~straterra@37.157.195.143) has joined #ceph
[5:40] <guillermo> So, this is my log:
[5:40] <guillermo> 2015-06-11 22:39:23.568012 7f4099338700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption
[5:40] <guillermo> 2015-06-11 22:39:23.568023 7f4099338700 0 -- 172.19.2.31:6800/208210 >> 172.19.2.35:6819/121640 pipe(0x50f9000 sd=21 :8047 s=1 pgs=0 cs=0 l=1 c=0x50355a0).failed verifying authorize reply
[5:40] <guillermo> 2015-06-11 22:40:11.036923 7f409b343700 -1 mds.0.17 *** got signal Terminated ***
[5:40] <guillermo> 2015-06-11 22:40:11.036960 7f409b343700 1 mds.0.17 suicide. wanted down:dne, now up:replay
[5:40] <guillermo> -------------------------------------------------------------------------------------------
[5:44] <guillermo> This is my ceph -s
[5:44] <guillermo> ceph -s
[5:44] <guillermo> cluster 26f862bf-87b2-4f2c-b541-6fcb1b37b21a
[5:44] <guillermo> health HEALTH_WARN
[5:44] <guillermo> 42 pgs peering
[5:44] <guillermo> 42 pgs stuck inactive
[5:44] <guillermo> 42 pgs stuck unclean
[5:45] <guillermo> mds cluster is degraded
[5:45] <guillermo> monmap e9: 3 mons at {qrof5-pod00-b17-kl02-ctrl-net02=172.19.2.31:6789/0,qrof5-pod00-b17-kl02-stor01=172.19.2.35:6789/0,qrof5-pod00-b17-kl02-stor03=172.19.2.37:6789/0}
[5:45] <guillermo> election epoch 108, quorum 0,1,2 qrof5-pod00-b17-kl02-ctrl-net02,qrof5-pod00-b17-kl02-stor01,qrof5-pod00-b17-kl02-stor03
[5:45] <guillermo> mdsmap e48: 1/1/1 up {0=qrof5-pod00-b17-kl02-ctrl-net02=up:replay}
[5:45] <guillermo> osdmap e1588: 22 osds: 22 up, 22 in
[5:45] <guillermo> pgmap v706369: 1472 pgs, 5 pools, 297 GB data, 79801 objects
[5:45] <guillermo> 597 GB used, 15789 GB / 16386 GB avail
[5:45] <guillermo> 1430 active+clean
[5:45] <guillermo> 42 peering
[5:45] <gleam> wowspam
[5:45] <gleam> not that anyone is here to be offended
[5:46] <guillermo> any advice would be appreciated
[5:46] <gleam> wish i could help
[5:46] <guillermo> Is this not the right channel for these questions?
[5:47] <gleam> no, it is
[5:47] <gleam> i just don't know the answer
[5:48] <guillermo> I see..., thank
[5:48] <guillermo> Thanks
[5:49] <lurbs> Looks like a cephx auth issue. Check your keyrings, the permissions on them, and also that your NTP's working and syncing time correctly.
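One concrete way to do the keyring check lurbs suggests is to compare the key the monitors hold with the one the daemon reads from disk; a sketch, assuming the default data-directory layout (daemon name taken from the mdsmap pasted above):

    ceph auth get mds.qrof5-pod00-b17-kl02-ctrl-net02
    cat /var/lib/ceph/mds/ceph-qrof5-pod00-b17-kl02-ctrl-net02/keyring
    ntpq -p    # verify NTP peers are actually syncing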
[5:52] <guillermo> Ok, ntp is fixed
[5:52] <guillermo> I am checking the keyrings and permissions on them
[5:52] <guillermo> Thanks a lot
[5:56] <guillermo> 4.0K -rw------- 1 root root 71 Jun 11 21:51 ceph.bootstrap-mds.keyring
[5:56] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has left #ceph
[5:59] <guillermo> I only have one MDS server
[5:59] <guillermo> I have 3 data nodes, and they are mons
[5:59] <mongo> Not sure about the cephfs part but the peering issues can look like that if there are networking issues, especially around MTU.
[6:00] <mongo> make sure all nodes can hit all other nodes with whatever MTU you are using if you are on jumbo frames
[6:00] <guillermo> OK
[6:00] <guillermo> Btw I lose one node
[6:00] <guillermo> *lost
[6:00] <mongo> you didn't delete the OSD's did you, it looks like they are all up.
[6:01] <guillermo> before was 33
[6:01] <guillermo> 11 per node
[6:01] <guillermo> Now I have 22 osds in 2 nodes
[6:02] <mongo> oh, the issue may be with the replica count and placement, two nodes is never enough but you can force it to work.
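One way to "force it to work" with only two OSD hosts is to lower the replica count to what the surviving hardware can place, at the cost of redundancy; a sketch, with the pool name as a placeholder:

    ceph osd pool get rbd size        # current replica count
    ceph osd pool set rbd size 2      # one copy per surviving host
    ceph osd pool set rbd min_size 1  # keep accepting writes with a single copy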
[6:02] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[6:03] <mongo> http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/
[6:03] <mongo> make sure to check out the info about the stuck pgs
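The troubleshooting page linked above walks through commands along these lines (the pg id is a placeholder):

    ceph health detail            # lists the stuck pg ids
    ceph pg dump_stuck inactive
    ceph pg dump_stuck unclean
    ceph pg 1.2f query            # detailed state of one stuck pg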
[6:03] <guillermo> But, I think the problem is the MDS deamon
[6:03] <guillermo> 2015-06-11 22:39:23.568012 7f4099338700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption
[6:03] <guillermo> I will check that link
[6:03] <guillermo> Thanks
[6:04] <mongo> the mds has nothing to do with your placement issues
[6:04] <mongo> and may be caused by the placement issues but I don't have enough cephfs experience to say.
[6:04] <guillermo> root@qrof5-pod00-b17-kl02-ctrl-net02:/etc/ceph# tail -f /var/log/ceph/ceph-mds.qrof5-pod00-b17-kl02-ctrl-net02.log
[6:04] <guillermo> 2015-06-11 23:03:50.766608 7fe29abb4700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption
[6:04] <guillermo> 2015-06-11 23:03:50.766616 7fe29abb4700 0 -- 172.19.2.31:6800/220532 >> 172.19.2.35:6819/121640 pipe(0x3bdd000 sd=21 :22299 s=1 pgs=0 cs=0 l=1 c=0x3b195a0).failed verifying authorize reply
[6:04] <guillermo> 2015-06-11 23:03:55.766487 7fe29abb4700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption
[6:04] <guillermo> 2015-06-11 23:03:55.766494 7fe29abb4700 0 -- 172.19.2.31:6800/220532 >> 172.19.2.35:6819/121640 pipe(0x3bdd000 sd=21 :22354 s=1 pgs=0 cs=0 l=1 c=0x3b195a0).failed verifying authorize reply
[6:04] <guillermo> 2015-06-11 23:04:00.766546 7fe29abb4700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption
[6:04] <guillermo> 2015-06-11 23:04:00.766554 7fe29abb4700 0 -- 172.19.2.31:6800/220532 >> 172.19.2.35:6819/121640 pipe(0x3bdd000 sd=21 :22396 s=1 pgs=0 cs=0 l=1 c=0x3b195a0).failed verifying authorize reply
[6:04] <guillermo> 2015-06-11 23:04:05.766835 7fe29abb4700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption
[6:04] <guillermo> 2015-06-11 23:04:05.766844 7fe29abb4700 0 -- 172.19.2.31:6800/220532 >> 172.19.2.35:6819/121640 pipe(0x3bdd000 sd=21 :22449 s=1 pgs=0 cs=0 l=1 c=0x3b195a0).failed verifying authorize reply
[6:04] <guillermo> 2015-06-11 23:04:10.766933 7fe29abb4700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption
[6:05] <guillermo> 2015-06-11 23:04:10.766940 7fe29abb4700 0 -- 172.19.2.31:6800/220532 >> 172.19.2.35:6819/121640 pipe(0x3bdd000 sd=21 :22493 s=1 pgs=0 cs=0 l=1 c=0x3b195a0).failed verifying authorize reply
[6:05] <guillermo> 2015-06-11 23:04:15.767195 7fe29abb4700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption
[6:05] <guillermo> 2015-06-11 23:04:15.767203 7fe29abb4700 0 -- 172.19.2.31:6800/220532 >> 172.19.2.35:6819/121640 pipe(0x3bdd000 sd=21 :22544 s=1 pgs=0 cs=0 l=1 c=0x3b195a0).failed verifying authorize reply
[6:05] <guillermo> I will check my placement issues
[6:05] <guillermo> Thanks
[6:05] * jamespage (~jamespage@culvain.gromper.net) Quit (Quit: Coyote finally caught me)
[6:06] <mongo> depending on your config it may not have enough replicas to allow writes, the stuck pg issue should be worked on before the MDS.
[6:06] <gleam> try restarting the relevant ceph-osd processes if you haven't
[6:06] <gleam> but yes, that would be my concern to
[6:06] <gleam> o
[6:06] * jamespage (~jamespage@culvain.gromper.net) has joined #ceph
[6:08] <guillermo> Ok
[6:08] * Sami345 (~straterra@5NZAADNWW.tor-irc.dnsbl.oftc.net) Quit ()
[6:08] <guillermo> I will work first with the pg issue
[6:08] <guillermo> Thanks for your help
[6:08] * kiasyn (~rcfighter@5.61.34.63) has joined #ceph
[6:08] * MrHeavy (~MrHeavy@pool-108-54-190-117.nycmny.fios.verizon.net) has joined #ceph
[6:09] <guillermo> I'll keep trying
[6:11] <mongo> Is the other host completely dead? adding in a 3rd node would be ideal (actually more)
[6:13] * sjm (~sjm@pool-173-70-76-86.nwrknj.fios.verizon.net) has left #ceph
[6:15] * sankarshan (~sankarsha@121.244.87.116) has joined #ceph
[6:16] <guillermo> i am facing hardware problems, in the dead node the disks became readonly
[6:17] <MrHeavy> http://gitbuilder.ceph.com/apache2-deb-trusty-x86_64-basic/ seems to be gone now. Is that intentional?
[6:21] * xarses (~andreww@12.10.113.130) Quit (Ping timeout: 480 seconds)
[6:22] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:27] * calvinx (~calvin@203.126.171.206) Quit (Quit: calvinx)
[6:31] * frickler (~jens@v1.jayr.de) Quit (Remote host closed the connection)
[6:33] * georgem (~Adium@65-110-212-116.cpe.pppoe.ca) Quit (Quit: Leaving.)
[6:38] * kiasyn (~rcfighter@8Q4AABGK7.tor-irc.dnsbl.oftc.net) Quit ()
[6:38] * OODavo (~Ian2128@e4-10.rana.at) has joined #ceph
[6:39] * kefu (~kefu@114.86.215.22) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[6:41] * guillermo (~guillermo@200.77.224.239) Quit (Quit: Leaving)
[6:54] * rdas (~rdas@121.244.87.116) has joined #ceph
[6:54] * frickler (~jens@v1.jayr.de) has joined #ceph
[7:00] * yguang11 (~yguang11@12.31.82.125) has joined #ceph
[7:02] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[7:02] * flisky1 (~Thunderbi@106.39.60.34) has joined #ceph
[7:05] * flisky (~Thunderbi@106.39.60.34) Quit (Read error: Connection reset by peer)
[7:08] * OODavo (~Ian2128@5NZAADNZQ.tor-irc.dnsbl.oftc.net) Quit ()
[7:08] * rf` (~ulterior@195.169.125.226) has joined #ceph
[7:09] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[7:17] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[7:19] * sleinen1 (~Adium@2001:620:0:82::104) has joined #ceph
[7:19] * owasserm (~owasserm@216.1.187.164) has joined #ceph
[7:25] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[7:26] * raw (~raw@37.48.65.169) has joined #ceph
[7:29] * vbellur (~vijay@121.244.87.124) has joined #ceph
[7:33] * puffy (~puffy@c-50-131-179-74.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[7:34] * treenerd (~treenerd@cpe90-146-100-181.liwest.at) has joined #ceph
[7:36] * yguang11 (~yguang11@12.31.82.125) Quit (Remote host closed the connection)
[7:36] * yguang11 (~yguang11@12.31.82.125) has joined #ceph
[7:38] * rf` (~ulterior@3DDAAA342.tor-irc.dnsbl.oftc.net) Quit ()
[7:38] * PierreW (~Gibri@politkovskaja.torservers.net) has joined #ceph
[7:44] * yguang11 (~yguang11@12.31.82.125) Quit (Ping timeout: 480 seconds)
[7:46] * fred`` (fred@earthli.ng) Quit (Ping timeout: 480 seconds)
[7:49] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[7:50] * calvinx (~calvin@101.100.172.246) has joined #ceph
[7:54] * kefu (~kefu@114.86.215.22) has joined #ceph
[7:57] * fred`` (fred@earthli.ng) has joined #ceph
[8:01] * nils_ (~nils@doomstreet.collins.kg) has joined #ceph
[8:05] * cok (~chk@2a02:2350:18:1010:19c4:600:f40d:dbb5) has joined #ceph
[8:07] * haomaiwang (~haomaiwan@223.104.3.194) has joined #ceph
[8:08] * PierreW (~Gibri@3DDAAA36K.tor-irc.dnsbl.oftc.net) Quit ()
[8:08] * Knuckx (~Aethis@tor-exit-node.7by7.de) has joined #ceph
[8:15] * yguang11 (~yguang11@12.31.82.125) has joined #ceph
[8:16] * shylesh (~shylesh@121.244.87.118) has joined #ceph
[8:16] * dopesong (~dopesong@88-119-94-55.static.zebra.lt) has joined #ceph
[8:23] * sleinen1 (~Adium@2001:620:0:82::104) Quit (Ping timeout: 480 seconds)
[8:24] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[8:25] * tganguly (~tganguly@121.244.87.117) has joined #ceph
[8:27] <tganguly> loicd, ping
[8:29] * yguang11 (~yguang11@12.31.82.125) Quit (Remote host closed the connection)
[8:32] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[8:33] <treenerd> Hi, is there a possibility to get information on how full a journal is? We use the first 8GB of our spinning disks for the journal. For example /dev/sdb1 is the ceph journal, /dev/sdb2 is the ceph data partition.
[8:33] * oro (~oro@188-143-118-55.pool.digikabel.hu) has joined #ceph
[8:33] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit ()
[8:34] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[8:35] * yguang11 (~yguang11@12.31.82.125) has joined #ceph
[8:38] * Knuckx (~Aethis@0SGAABC85.tor-irc.dnsbl.oftc.net) Quit ()
[8:38] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) has joined #ceph
[8:38] * Zombiekiller (~Tumm@ncc-1701-a.tor-exit.network) has joined #ceph
[8:42] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[8:42] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[8:45] * yguang11 (~yguang11@12.31.82.125) Quit (Remote host closed the connection)
[8:46] * yguang11 (~yguang11@12.31.82.125) has joined #ceph
[8:53] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[8:54] * yguang11 (~yguang11@12.31.82.125) Quit (Ping timeout: 480 seconds)
[9:03] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[9:03] * xarses (~andreww@12.10.113.130) has joined #ceph
[9:04] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[9:04] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[9:05] * CAPSLOCK2000 (~oftc@2001:984:3be3:1::8) has joined #ceph
[9:05] <Be-El> hi
[9:05] <raw> hi
[9:05] <steveeJ> 'ceph status' segfaults after displaying its knowledge. that's probably not good
[9:08] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) Quit (Remote host closed the connection)
[9:08] * Zombiekiller (~Tumm@9S0AAA0Y1.tor-irc.dnsbl.oftc.net) Quit ()
[9:08] * richardus1 (~rapedex@7R2AABNSA.tor-irc.dnsbl.oftc.net) has joined #ceph
[9:09] <loicd> tganguly: how can I be of service ?
[9:13] * amote (~amote@121.244.87.116) Quit (Quit: Leaving)
[9:13] * amote (~amote@121.244.87.116) has joined #ceph
[9:14] * b0e (~aledermue@213.95.25.82) has joined #ceph
[9:14] * branto (~branto@178-253-140-142.3pp.slovanet.sk) has joined #ceph
[9:18] * analbeard (~shw@support.memset.com) has joined #ceph
[9:23] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[9:23] * oro (~oro@188-143-118-55.pool.digikabel.hu) Quit (Ping timeout: 480 seconds)
[9:27] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[9:28] <loicd> flaf: hi ! could you please open an urgent bug report regarding the problem you have with rgw on v0.94.2 ? http://tracker.ceph.com/projects/rgw/issues/new
[9:28] * Flynn (~stefan@ip-81-30-69-189.fiber.nl) Quit (Quit: Flynn)
[9:29] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:30] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[9:31] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[9:31] * dugravot61 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) has joined #ceph
[9:31] * Flynn (~stefan@ip-81-30-69-189.fiber.nl) has joined #ceph
[9:31] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) has joined #ceph
[9:31] * zW (~wesley@spider.pfoe.be) Quit (Quit: leaving)
[9:32] * kefu (~kefu@114.86.215.22) Quit (Max SendQ exceeded)
[9:32] * kefu (~kefu@114.86.215.22) has joined #ceph
[9:33] * Flynn (~stefan@ip-81-30-69-189.fiber.nl) Quit ()
[9:33] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[9:36] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[9:37] * frickler_ (~jens@v1.jayr.de) has joined #ceph
[9:37] * frickler_ (~jens@v1.jayr.de) Quit ()
[9:38] * richardus1 (~rapedex@7R2AABNSA.tor-irc.dnsbl.oftc.net) Quit ()
[9:38] * tritonx (~jacoo@ncc-1701-d.tor-exit.network) has joined #ceph
[9:40] * owasserm (~owasserm@216.1.187.164) Quit (Ping timeout: 480 seconds)
[9:40] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[9:42] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[9:44] * macjack (~macjack@122.146.93.152) has joined #ceph
[9:52] * kefu (~kefu@114.86.215.22) Quit (Max SendQ exceeded)
[9:52] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[9:52] * kefu (~kefu@114.86.215.22) has joined #ceph
[9:54] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[9:58] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[9:58] * The1w (~jens@node3.survey-it.dk) has joined #ceph
[9:59] * T1w (~jens@node3.survey-it.dk) Quit (Remote host closed the connection)
[10:01] * mookins (~mookins@induct3.lnk.telstra.net) Quit ()
[10:08] * tritonx (~jacoo@5NZAADN7T.tor-irc.dnsbl.oftc.net) Quit ()
[10:14] * yuanz (~yzhou67@192.102.204.38) has joined #ceph
[10:16] * yuan (~yzhou67@shzdmzpr02-ext.sh.intel.com) Quit (Read error: Connection reset by peer)
[10:24] * MACscr (~Adium@2601:d:c800:de3:dc30:1f08:4fb8:6970) Quit (Quit: Leaving.)
[10:27] * madkiss (~madkiss@2001:6f8:12c3:f00f:2c77:f2ed:c6bc:462f) Quit (Quit: Leaving.)
[10:31] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[10:37] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[10:38] * hgjhgjh (~superdug@9S0AAA03Y.tor-irc.dnsbl.oftc.net) has joined #ceph
[10:46] * oro (~oro@deibp9eh1--blueice1n1.emea.ibm.com) has joined #ceph
[10:50] * frickler (~jens@v1.jayr.de) Quit (Remote host closed the connection)
[10:50] * frickler (~jens@v1.jayr.de) has joined #ceph
[10:50] * bitserker (~toni@88.87.194.130) has joined #ceph
[10:50] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (Remote host closed the connection)
[10:51] * Zethrok (~martin@95.154.26.34) Quit (Remote host closed the connection)
[10:52] * dostrow_ (~dostrow@bunker.bloodmagic.com) Quit (Remote host closed the connection)
[10:52] * dostrow (~dostrow@bunker.bloodmagic.com) has joined #ceph
[10:52] * Zethrok (~martin@95.154.26.34) has joined #ceph
[10:52] * danieagle (~Daniel@191.205.91.50) has joined #ceph
[10:53] * destrudo (~destrudo@64.142.74.180) Quit (Remote host closed the connection)
[10:54] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[10:54] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Read error: Connection reset by peer)
[10:55] * Vacuum__ (~Vacuum@i59F79A26.versanet.de) Quit (Remote host closed the connection)
[10:55] * Vacuum_ (~Vacuum@i59F79A26.versanet.de) has joined #ceph
[10:56] * rotbeard (~redbeard@185.32.80.238) Quit (Ping timeout: 480 seconds)
[10:56] * kawa2014 (~kawa@89.184.114.246) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * xarses (~andreww@12.10.113.130) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * calvinx (~calvin@101.100.172.246) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * flisky1 (~Thunderbi@106.39.60.34) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * midnigh__ (~midnightr@216.113.160.71) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * espeer (~quassel@phobos.isoho.st) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * janos_ (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * dlan_ (~dennis@116.228.88.131) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * tserong (~tserong@203-214-92-220.dyn.iinet.net.au) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * lcurtis (~lcurtis@47.19.105.250) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * ircolle-afk (~Adium@2601:1:a580:1735:914f:6dfb:ef0a:e28c) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * fdmanana__ (~fdmanana@bl13-130-142.dsl.telepac.pt) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * kraken (~kraken@gw.sepia.ceph.com) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * ifur (~osm@0001f63e.user.oftc.net) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * beardo_ (~sma310@207-172-244-241.c3-0.atw-ubr5.atw.pa.cable.rcn.com) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * davidz1 (~davidz@cpe-23-242-27-128.socal.res.rr.com) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * JohnPreston78 (sid31393@id-31393.charlton.irccloud.com) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * kklimonda_ (sid72883@id-72883.highgate.irccloud.com) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * kamalmarhubi (sid26581@id-26581.highgate.irccloud.com) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * martineg (~martin@shell01.copyleft.no) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * burley (~khemicals@cpe-98-28-239-78.cinci.res.rr.com) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * stj (~stj@2604:a880:800:10::2cc:b001) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * dmick (~dmick@206.169.83.146) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * bjornar (~bjornar@109.247.131.38) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * Jahkeup (Jahkeup@gemini.ca.us.panicbnc.net) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * Tene (~tene@173.13.139.236) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * infinity1 (~brendon@web2.artsopolis.com) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * eqhmcow_ (~eqhmcow@adsl-74-242-202-15.rmo.bellsouth.net) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * gsilvis (~andovan@c-73-159-49-122.hsd1.ma.comcast.net) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * JoeJulian (~JoeJulian@shared.gaealink.net) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * danderson (~dave@atlas.natulte.net) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * lbarfiel1 (~logan@ralvm.saturne.in) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * real (~lalelu@invincible.the-real.org) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * kingcu_ (~kingcu@kona.ridewithgps.com) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * nigwil (~Oz@li1101-124.members.linode.com) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * zerick (~zerick@irc.quassel.zerick.me) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * wolsen (~wolsen@152.34.213.162.lcy-01.canonistack.canonical.com) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * Meths (~meths@2.27.78.187) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * _robbat21irssi (nobody@www2.orbis-terrarum.net) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * _nick (~nick@zarquon.dischord.org) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * palmeida (~palmeida@gandalf.wire-consulting.com) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * alexbligh1 (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * Georgyo (~georgyo@shamm.as) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * BranchPredictor (branch@predictor.org.pl) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * ktdreyer (~kdreyer@polyp.adiemus.org) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * zz_hitsumabushi (~hitsumabu@175.184.30.148) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * sbadia (~sbadia@marcellin.sebian.fr) Quit (synthon.oftc.net weber.oftc.net)
[10:56] * funnel (~funnel@0001c7d4.user.oftc.net) Quit (synthon.oftc.net weber.oftc.net)
[10:57] * thomnico (~thomnico@2a01:e35:8b41:120:345c:b52c:84e9:e5a5) has joined #ceph
[10:57] * raw (~raw@37.48.65.169) Quit (Ping timeout: 480 seconds)
[10:58] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[10:58] * bobrik (~bobrik@83.243.64.45) has joined #ceph
[11:02] * raw (~raw@37.48.65.169) has joined #ceph
[11:03] * karnan (~karnan@121.244.87.117) has joined #ceph
[11:05] * treenerd (~treenerd@cpe90-146-100-181.liwest.at) Quit (Read error: Connection timed out)
[11:05] * thomnico (~thomnico@2a01:e35:8b41:120:345c:b52c:84e9:e5a5) Quit (Ping timeout: 480 seconds)
[11:05] * bjornar_ (~bjornar@93.187.84.175) has joined #ceph
[11:05] * MK_FG (~MK_FG@188.226.62.174) has joined #ceph
[11:05] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[11:05] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) has joined #ceph
[11:05] * xarses (~andreww@12.10.113.130) has joined #ceph
[11:05] * calvinx (~calvin@101.100.172.246) has joined #ceph
[11:05] * flisky1 (~Thunderbi@106.39.60.34) has joined #ceph
[11:05] * midnigh__ (~midnightr@216.113.160.71) has joined #ceph
[11:05] * espeer (~quassel@phobos.isoho.st) has joined #ceph
[11:05] * janos_ (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[11:05] * dlan_ (~dennis@116.228.88.131) has joined #ceph
[11:05] * tserong (~tserong@203-214-92-220.dyn.iinet.net.au) has joined #ceph
[11:05] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[11:05] * ircolle-afk (~Adium@2601:1:a580:1735:914f:6dfb:ef0a:e28c) has joined #ceph
[11:05] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[11:05] * fdmanana__ (~fdmanana@bl13-130-142.dsl.telepac.pt) has joined #ceph
[11:05] * kraken (~kraken@gw.sepia.ceph.com) has joined #ceph
[11:05] * ifur (~osm@0001f63e.user.oftc.net) has joined #ceph
[11:05] * beardo_ (~sma310@207-172-244-241.c3-0.atw-ubr5.atw.pa.cable.rcn.com) has joined #ceph
[11:05] * davidz1 (~davidz@cpe-23-242-27-128.socal.res.rr.com) has joined #ceph
[11:05] * JohnPreston78 (sid31393@id-31393.charlton.irccloud.com) has joined #ceph
[11:05] * kklimonda_ (sid72883@id-72883.highgate.irccloud.com) has joined #ceph
[11:05] * kamalmarhubi (sid26581@id-26581.highgate.irccloud.com) has joined #ceph
[11:05] * martineg (~martin@shell01.copyleft.no) has joined #ceph
[11:05] * burley (~khemicals@cpe-98-28-239-78.cinci.res.rr.com) has joined #ceph
[11:05] * stj (~stj@2604:a880:800:10::2cc:b001) has joined #ceph
[11:05] * dmick (~dmick@206.169.83.146) has joined #ceph
[11:05] * real (~lalelu@invincible.the-real.org) has joined #ceph
[11:05] * bjornar (~bjornar@109.247.131.38) has joined #ceph
[11:05] * Jahkeup (Jahkeup@gemini.ca.us.panicbnc.net) has joined #ceph
[11:05] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[11:05] * Tene (~tene@173.13.139.236) has joined #ceph
[11:05] * infinity1 (~brendon@web2.artsopolis.com) has joined #ceph
[11:05] * eqhmcow_ (~eqhmcow@adsl-74-242-202-15.rmo.bellsouth.net) has joined #ceph
[11:05] * gsilvis (~andovan@c-73-159-49-122.hsd1.ma.comcast.net) has joined #ceph
[11:05] * JoeJulian (~JoeJulian@shared.gaealink.net) has joined #ceph
[11:05] * danderson (~dave@atlas.natulte.net) has joined #ceph
[11:05] * lbarfiel1 (~logan@ralvm.saturne.in) has joined #ceph
[11:05] * kingcu_ (~kingcu@kona.ridewithgps.com) has joined #ceph
[11:05] * nigwil (~Oz@li1101-124.members.linode.com) has joined #ceph
[11:05] * zerick (~zerick@irc.quassel.zerick.me) has joined #ceph
[11:05] * wolsen (~wolsen@152.34.213.162.lcy-01.canonistack.canonical.com) has joined #ceph
[11:05] * Georgyo (~georgyo@shamm.as) has joined #ceph
[11:05] * BranchPredictor (branch@predictor.org.pl) has joined #ceph
[11:05] * ktdreyer (~kdreyer@polyp.adiemus.org) has joined #ceph
[11:05] * funnel (~funnel@0001c7d4.user.oftc.net) has joined #ceph
[11:05] * Meths (~meths@2.27.78.187) has joined #ceph
[11:05] * _robbat21irssi (nobody@www2.orbis-terrarum.net) has joined #ceph
[11:05] * _nick (~nick@zarquon.dischord.org) has joined #ceph
[11:05] * palmeida (~palmeida@gandalf.wire-consulting.com) has joined #ceph
[11:05] * zz_hitsumabushi (~hitsumabu@175.184.30.148) has joined #ceph
[11:05] * sbadia (~sbadia@marcellin.sebian.fr) has joined #ceph
[11:05] * alexbligh1 (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) has joined #ceph
[11:05] * ChanServ sets mode +o dmick
[11:08] * guerby (~guerby@ip165-ipv6.tetaneutral.net) Quit (Quit: Leaving)
[11:08] * hgjhgjh (~superdug@9S0AAA03Y.tor-irc.dnsbl.oftc.net) Quit ()
[11:08] * andrew_m (~jwandborg@60.248.162.179) has joined #ceph
[11:08] * vbellur (~vijay@121.244.87.117) has joined #ceph
[11:10] * garphy`aw is now known as garphy
[11:14] * The1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[11:14] * mivaho_ (~quassel@xternal.xs4all.nl) has joined #ceph
[11:16] * mivaho (~quassel@xternal.xs4all.nl) Quit (Read error: Connection reset by peer)
[11:16] * MACscr (~Adium@2601:d:c800:de3:b9b7:a177:fabe:de82) has joined #ceph
[11:17] * kaisan_ (~kai@zaphod.xs4all.nl) has joined #ceph
[11:17] * raw (~raw@37.48.65.169) Quit (Read error: Connection timed out)
[11:17] * haomaiwang (~haomaiwan@223.104.3.194) Quit (Remote host closed the connection)
[11:18] * kaisan (~kai@zaphod.xs4all.nl) Quit (Read error: Connection reset by peer)
[11:19] * guerby (~guerby@ip165-ipv6.tetaneutral.net) has joined #ceph
[11:19] * rotbeard (~redbeard@185.32.80.238) Quit (Ping timeout: 480 seconds)
[11:19] * KevinPerks (~Adium@2606:a000:80ad:1300:f8de:96ea:4cc0:8237) Quit (Quit: Leaving.)
[11:20] * treenerd (~treenerd@cpe90-146-100-181.liwest.at) has joined #ceph
[11:20] * raw (~raw@37.48.65.169) has joined #ceph
[11:20] * jclm (~jclm@121.244.87.117) Quit (Quit: Leaving.)
[11:22] * The1w (~jens@node3.survey-it.dk) has joined #ceph
[11:33] * ade (~abradshaw@tmo-109-191.customers.d1-online.com) has joined #ceph
[11:36] * flisky1 (~Thunderbi@106.39.60.34) Quit (Quit: flisky1)
[11:36] * bjornar_ (~bjornar@93.187.84.175) Quit (Ping timeout: 480 seconds)
[11:38] * andrew_m (~jwandborg@5NZAADODP.tor-irc.dnsbl.oftc.net) Quit ()
[11:41] * cooldharma06 (~chatzilla@14.139.180.40) has joined #ceph
[11:42] * Mika_c (~Mk@122.146.93.152) has joined #ceph
[11:45] * Concubidated1 (~Adium@66-87-126-128.pools.spcsdns.net) has joined #ceph
[11:48] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[11:49] * Concubidated (~Adium@66.87.127.21) Quit (Ping timeout: 480 seconds)
[11:50] * nardial (~ls@dslb-178-009-182-197.178.009.pools.vodafone-ip.de) has joined #ceph
[11:51] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[11:52] * toabctl (~toabctl@toabctl.de) Quit (Quit: Adios)
[11:52] * toabctl (~toabctl@toabctl.de) has joined #ceph
[11:53] * avib (~Ceph@al.secure.elitehosts.com) Quit (Ping timeout: 480 seconds)
[11:53] * Kioob`Taff (~plug-oliv@2a01:e35:2e8a:1e0::42:10) has joined #ceph
[11:58] * ksperis (~ksperis@46.218.42.103) has joined #ceph
[12:03] <todin> does anyone know this sandisk box, the if500? does it really use ceph internally?
[12:07] <jcsp> todin: sandisk have announced that it does, why are you in doubt?
[12:07] <todin> jcsp: I am not in doubt. I am just curious.
[12:08] <todin> does this box export rbd volumes?
[12:08] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[12:13] * dugravot61 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[12:14] <Mika_c> hello everyone, has anyone ever tried to set a pool quota? I have a question: when a quota with max_bytes = 1024 is already set on pool "rbd", I found that I can still create an image larger than the quota limit.
[12:16] * nsoffer (~nsoffer@bzq-79-180-80-86.red.bezeqint.net) has joined #ceph
[12:17] <Mika_c> After I mount this rbd image and format it to ext4, I can put a 10G file into this partition. I'm so confused.
[12:17] * Mika_c (~Mk@122.146.93.152) Quit (Quit: Konversation terminated!)
[12:19] <jcsp> Mika_c: if you're doing a "set-quota max_bytes", that's setting the number of bytes that can be stored in the pool. It's not an RBD setting, and it's not per-image
[12:20] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[12:20] <jcsp> haven't checked RBD's behaviour on a full system, but it's likely that it will respond by pausing ops
[12:21] <jcsp> so you can write to the RBD device from a client, but when it gets around to flushing anything to the RADOS pool, it will block.
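For reference, the pool-level quota jcsp describes is set like this (pool and values are only examples); it caps the whole pool, not individual RBD images:

    ceph osd pool set-quota rbd max_bytes $((10 * 1024**3))   # 10 GiB across the whole pool
    ceph osd pool set-quota rbd max_objects 100000
    ceph osd pool get-quota rbd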
[12:22] * steveeJ (~junky@ip174-67-192-62.oc.oc.cox.net) Quit (Ping timeout: 480 seconds)
[12:29] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[12:30] * kefu (~kefu@114.86.215.22) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[12:37] * oro (~oro@deibp9eh1--blueice1n1.emea.ibm.com) Quit (Read error: Connection reset by peer)
[12:37] * oro (~oro@deibp9eh1--blueice1n2.emea.ibm.com) has joined #ceph
[12:39] * narthollis (~Pommesgab@tor-exit4-readme.dfri.se) has joined #ceph
[12:45] * oro (~oro@deibp9eh1--blueice1n2.emea.ibm.com) Quit (Read error: Connection reset by peer)
[12:45] * oro (~oro@deibp9eh1--blueice4n1.emea.ibm.com) has joined #ceph
[12:48] * kefu (~kefu@114.86.215.22) has joined #ceph
[12:51] * cooldharma06 (~chatzilla@14.139.180.40) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 21.0/20130515140136])
[12:51] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[12:55] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[12:57] * kefu (~kefu@114.86.215.22) Quit (Read error: Connection reset by peer)
[12:57] * kefu (~kefu@114.86.215.22) has joined #ceph
[12:57] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[12:57] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[12:58] * shang (~ShangWu@175.41.48.77) has joined #ceph
[12:58] * destrudo (~destrudo@64.142.74.180) has joined #ceph
[13:04] <flaf> loicd: hello, sorry, I have just read your message. Ok, I'll create an issue now. Thx.
[13:06] * KevinPerks (~Adium@2606:a000:80ad:1300:b011:f633:5741:7631) has joined #ceph
[13:08] <tganguly> loicd, ping
[13:08] * narthollis (~Pommesgab@3OZAAB7QA.tor-irc.dnsbl.oftc.net) Quit ()
[13:08] * FierceForm (~Catsceo@136.ip-167-114-114.net) has joined #ceph
[13:09] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[13:11] <flaf> loicd: issue created => http://tracker.ceph.com/issues/11984 (no problem for me it's a testing cluster ;))
[13:17] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[13:18] * MrHeavy (~MrHeavy@pool-108-54-190-117.nycmny.fios.verizon.net) Quit (Read error: Connection reset by peer)
[13:21] * Kioob`Taff (~plug-oliv@2a01:e35:2e8a:1e0::42:10) Quit (Quit: Leaving.)
[13:22] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[13:24] * Kioob`Taff (~plug-oliv@2a01:e35:2e8a:1e0::42:10) has joined #ceph
[13:25] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit (Quit: Leaving.)
[13:26] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Ping timeout: 480 seconds)
[13:27] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[13:27] * karnan (~karnan@121.244.87.117) has joined #ceph
[13:33] * oro (~oro@deibp9eh1--blueice4n1.emea.ibm.com) Quit (Read error: Connection reset by peer)
[13:33] * oro (~oro@deibp9eh1--blueice2n1.emea.ibm.com) has joined #ceph
[13:38] * FierceForm (~Catsceo@5NZAADOLM.tor-irc.dnsbl.oftc.net) Quit ()
[13:38] * MJXII (~Thayli@exit1.telostor.ca) has joined #ceph
[13:40] <loicd> tganguly: pong
[13:40] <loicd> flaf: thanks :-)
[13:40] * KevinPerks (~Adium@2606:a000:80ad:1300:b011:f633:5741:7631) Quit (Quit: Leaving.)
[13:41] <flaf> Oh, you're welcome... ;)
[13:41] <loicd> tganguly: give me a minute to try ceph osd erasure-code-profile set myprofile k=4 m=2 l=3 plugin=lrc mapping=__DD__DD layers='[[ "_cDD_cDD", "" ],[ "cDDD____", "" ],[ "____cDDD", "" ],]' ruleset-failure-domain=osd ruleset-root=default ruleset-locality=datacenter
[13:42] * maxxware (~maxx@149.210.133.105) Quit (Remote host closed the connection)
[13:42] <loicd> tganguly: what error do you see ?
[13:42] <loicd> the layers look good
[13:42] <tganguly> while creating pool i am seeing
[13:42] <tganguly> [root@Cluster1 ~]# ceph osd pool create ecpool5 12 12 erasure myprofile
[13:42] <tganguly> Traceback (most recent call last):
[13:42] <tganguly> File "/usr/bin/ceph", line 896, in <module>
[13:42] <tganguly> retval = main()
[13:42] <tganguly> File "/usr/bin/ceph", line 852, in main
[13:42] <tganguly> print >> sys.stderr, prefix + 'Error {0}: {1}'.format(errno.errorcode[ret], outs)
[13:42] <tganguly> KeyError: 4113
[13:43] <loicd> tganguly: ok
[13:44] <loicd> 4113 is actually an error from the lrc plugin. It's my fault that it's so cryptic and I've fixed that shortly after hammer. Let me dig the meaning of that.
[13:46] <loicd> tganguly: what ceph version are you using precisely ?
[13:46] <tganguly> ceph -v
[13:46] <tganguly> ceph version 0.94.1 (e4bfad3a3c51054df7e537a724c8d0bf9be972ff)
[13:47] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[13:47] <treenerd> Hi, is there a possibility to get information on how full a journal is? We use the first 8GB of our spinning disks for the journal. For example /dev/sdb1 is the ceph journal, /dev/sdb2 is the ceph data partition.
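The question goes unanswered above; one place journal activity does show up is the OSD admin socket perf counters, though these are queue/throughput figures rather than a literal fill level. A sketch (osd id is a placeholder, counter names vary by release):

    ceph daemon osd.0 perf dump | python -m json.tool | grep -i journal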
[13:47] * adrian (~abradshaw@tmo-108-58.customers.d1-online.com) has joined #ceph
[13:47] * adrian is now known as Guest1382
[13:47] * ganders (~root@190.2.42.21) has joined #ceph
[13:49] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[13:54] * ade (~abradshaw@tmo-109-191.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[13:55] <guerby> loicd, we're upgrading from 0.87.1 to 0.94.2 no issue so far (tetaneutral.net)
[13:55] <guerby> loicd, hopefully we didn't try 0.94.1 : "'could not find map for epoch 15650 on pg 78.9, but the pool is not present in the current map, so this is probably a result of bug 10617. Skipping the pg for now, you can use ceph_objectstore_tool to clean it up later"
[13:55] <loicd> tganguly: https://github.com/ceph/ceph/blob/master/src/erasure-code/lrc/ErasureCodeLrc.h#L42 is 4113 - 4095 https://github.com/ceph/ceph/blob/master/src/include/err.h#L7
[13:55] <kraken> guerby might be talking about http://tracker.ceph.com/issues/10617 [osd: pgs for deleted pools don't finish getting removed if osd restarts]
[13:56] <loicd> tganguly: note however that there should be a more informative message in the OSD logs
[13:57] <loicd> tganguly: something that says https://github.com/ceph/ceph/blob/master/src/erasure-code/lrc/ErasureCodeLrc.cc#L310
[13:58] <loicd> tganguly: ceph osd erasure-code-profile set myprofile plugin=lrc mapping=__DD__DD layers='[[ "_cDD_cDD", "" ],[ "cDDD____", "" ],[ "____cDDD", "" ],]' ruleset-failure-domain=osd ruleset-root=default ruleset-locality=datacenter
[13:58] <loicd> that should work (no k/m/l, they are mutually exclusive)
[13:59] * nardial (~ls@dslb-178-009-182-197.178.009.pools.vodafone-ip.de) Quit (Quit: Leaving)
[13:59] <loicd> tganguly: the change I made for infernalis is that you won't have to dig through the monitor logs for a good error message
[13:59] <loicd> tganguly: it will be displayed instead of 4113
[14:00] <loicd> tganguly: and contrary to what I suggested a few lines above, the error message should be in the monitor logs, not in the osd logs. Because the monitor will try to use the erasure code profile even before the osd.
[14:03] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) has joined #ceph
[14:03] * ganders (~root@190.2.42.21) Quit (Quit: WeeChat 0.4.2)
[14:04] <tganguly> loicd, I tried to remove the k,m,l parameters and tried again
[14:04] <tganguly> now getting and Error
[14:04] <tganguly> ceph osd pool create ecpool6 12 12 erasure myprofileError EINVAL: unknown crush type host
[14:04] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[14:06] * macjack (~macjack@122.146.93.152) has left #ceph
[14:06] * bene (~ben@pool-71-181-38-232.cncdnh.fast.myfairpoint.net) has joined #ceph
[14:06] <tganguly> loicd, refer the crush map
[14:06] <tganguly> http://paste2.org/18JKbPLw
[14:08] * MJXII (~Thayli@8Q4AABGYE.tor-irc.dnsbl.oftc.net) Quit ()
[14:08] * Misacorp (~Kizzi@exit1.ipredator.se) has joined #ceph
[14:08] <loicd> hum
[14:09] <loicd> tganguly: is there more information in the mon logs ?
[14:10] <tganguly> loicd, dont see any message on mon log
[14:10] <tganguly> related to 4113
[14:10] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[14:11] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[14:11] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[14:12] * marrusl (~mark@nat-pool-rdu-u.redhat.com) has joined #ceph
[14:12] * brunoleon (~quassel@2a01:e35:8a42:b9d0:9555:4ed9:c739:c9a1) has joined #ceph
[14:14] * oro (~oro@deibp9eh1--blueice2n1.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[14:15] <loicd> https://github.com/ceph/ceph/blob/hammer/src/erasure-code/lrc/ErasureCodePluginLrc.cc#L45 prints the error in the logs. Can you grep all the logs looking for "when k, m, l are set" ? It must be somewhere.
[14:15] <loicd> tganguly: ^
[14:16] <loicd> https://github.com/ceph/ceph/blob/hammer/src/erasure-code/lrc/ErasureCodeLrc.cc#L311 sets it in https://github.com/ceph/ceph/blob/hammer/src/erasure-code/lrc/ErasureCodeLrc.cc#L285 which is called by https://github.com/ceph/ceph/blob/hammer/src/erasure-code/lrc/ErasureCodeLrc.cc#L482 which is called by https://github.com/ceph/ceph/blob/hammer/src/erasure-code/lrc/ErasureCodePluginLrc.cc#L43
[14:17] <loicd> there is little room for not having it in the logs
[14:17] * ganders (~root@190.2.42.21) has joined #ceph
[14:21] <tganguly> loicd, got it
[14:21] <tganguly> refer the link
[14:22] <tganguly> http://paste2.org/mXye1Zb5
[14:22] <tganguly> loicd, again seeing the same error
[14:22] <tganguly> ceph osd pool create ecpool7 12 12 erasure myprofile -> Error EINVAL: unknown crush type host
[14:23] * Hemanth (~Hemanth@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:25] <loicd> tganguly: did you find "when k, m, l are set" in the logs ?
[14:25] * Guest1382 is now known as ade_b
[14:26] <tganguly> i tried this without k,l,m
[14:26] <tganguly> so you want me to try with those parameters
[14:26] <tganguly> loicd, ^^
[14:27] <loicd> tganguly: no, I'm still trying to figure out why you could not find the error message of the first error. If you don't have it, something is wrong.
[14:28] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[14:28] <tganguly> loicd, what should be the log level ?
[14:28] * bjornar_ (~bjornar@ti0099a430-1131.bb.online.no) has joined #ceph
[14:30] <loicd> tganguly: it's an error, it must show regardless of the log level
[14:30] <tganguly> loicd, ok
[14:33] <tganguly> got the error
[14:33] <tganguly> 2015-06-12 08:32:46.426356 7f802bf93700 -1 ErasureCodePluginLrc: The mapping parameter cannot be set when k, m, l are set in {directory=/usr/lib64/ceph/erasure-code,k=4,l=3,layers=[[ "_cDD_cDD", "" ],[ "cDDD____", "" ],[ "____cDDD", "" ],],m=2,mapping=__DD__DD,plugin=lrc,ruleset-failure-domain=osd,ruleset-locality=datacenter,ruleset-root=default}
[14:33] <tganguly> http://paste2.org/It9VNeFd
[14:35] <loicd> tganguly: coool !
[14:35] <loicd> ruleset-steps='[ [ "choose", "datacenter", 2 ], [ "chooseleaf", "osd", 4] ]' is what you need instead of ruleset-failure-domain=osd ruleset-root=default ruleset-locality=datacenter. There should be a sanity check.
[14:36] <tganguly> loicd, cool, let me try it
[14:36] <loicd> the general idea is that if you want the low-level interface, you pay the full price: everything is more complicated and you can't mix it with the helpers
[14:37] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[14:37] <loicd> and the benefit is that it allows you to do things that the higher-level k,m,l interface won't let you do
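Putting loicd's suggestion together, the whole low-level LRC setup would look roughly like this; a sketch under the same assumptions as above (profile myprofile, two datacenters, osd failure domain):

    # low-level LRC profile: mapping/layers instead of k/m/l,
    # ruleset-steps instead of the ruleset-* helpers
    ceph osd erasure-code-profile set myprofile \
        plugin=lrc \
        mapping=__DD__DD \
        layers='[[ "_cDD_cDD", "" ],[ "cDDD____", "" ],[ "____cDDD", "" ]]' \
        ruleset-steps='[ [ "choose", "datacenter", 2 ], [ "chooseleaf", "osd", 4 ] ]'
    # then create the erasure-coded pool on top of it
    ceph osd pool create ecpool7 12 12 erasure myprofile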
[14:38] * Misacorp (~Kizzi@5NZAADOOY.tor-irc.dnsbl.oftc.net) Quit ()
[14:38] <tganguly> loicd, got it
[14:39] <tganguly> loicd, Thanks
[14:39] <tganguly> I am able to create the pool
[14:39] <tganguly> loicd, ^^
[14:40] * bitserker (~toni@88.87.194.130) Quit (Quit: Leaving.)
[14:40] * squ (~Thunderbi@46.109.36.167) Quit (Quit: squ)
[14:42] <loicd> tganguly: thanks for testing this; I created a ticket to address it: http://tracker.ceph.com/issues/11987
[14:43] * bitserker (~toni@88.87.194.130) has joined #ceph
[14:44] * avib (~Ceph@al.secure.elitehosts.com) has joined #ceph
[14:46] * smerz (~ircircirc@37.74.194.90) has joined #ceph
[14:47] * linuxkidd (~linuxkidd@207.236.250.131) has joined #ceph
[14:49] * tganguly (~tganguly@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:50] * kefu (~kefu@114.86.215.22) Quit (Max SendQ exceeded)
[14:51] * bitserker (~toni@88.87.194.130) Quit (Ping timeout: 480 seconds)
[14:51] * kefu (~kefu@114.86.215.22) has joined #ceph
[14:52] * treenerd (~treenerd@cpe90-146-100-181.liwest.at) Quit (Quit: Verlassend)
[14:54] * vbellur (~vijay@122.167.138.43) has joined #ceph
[14:55] * bene (~ben@pool-71-181-38-232.cncdnh.fast.myfairpoint.net) Quit (Quit: Konversation terminated!)
[14:56] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) has joined #ceph
[14:57] * sjm (~sjm@pool-173-70-76-86.nwrknj.fios.verizon.net) has joined #ceph
[14:58] * bjornar_ (~bjornar@ti0099a430-1131.bb.online.no) Quit (Ping timeout: 480 seconds)
[14:59] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[15:02] * tupper (~tcole@2001:420:2280:1272:8900:f9b8:3b49:567e) has joined #ceph
[15:03] * kefu_ (~kefu@114.86.215.22) has joined #ceph
[15:03] * kefu (~kefu@114.86.215.22) Quit (Read error: Connection reset by peer)
[15:05] * kefu_ (~kefu@114.86.215.22) Quit (Max SendQ exceeded)
[15:06] * oleg (~oleg@178.72.71.206) has joined #ceph
[15:06] * kefu (~kefu@192.154.200.66) has joined #ceph
[15:07] * yanzheng (~zhyan@125.71.107.110) Quit (Quit: This computer has gone to sleep)
[15:08] * notarima (~SquallSee@justus.impium.de) has joined #ceph
[15:08] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[15:09] * vbellur (~vijay@122.167.138.43) Quit (Remote host closed the connection)
[15:09] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[15:13] * alram (~alram@209.63.137.130) has joined #ceph
[15:13] * vbellur (~vijay@122.167.138.43) has joined #ceph
[15:16] * thomnico (~thomnico@2a01:e35:8b41:120:345c:b52c:84e9:e5a5) has joined #ceph
[15:17] * midnightrunner (~midnightr@c-67-174-241-112.hsd1.ca.comcast.net) has joined #ceph
[15:18] * oleg (~oleg@178.72.71.206) has left #ceph
[15:18] * dyasny (~dyasny@198.251.58.23) has joined #ceph
[15:19] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[15:20] <infernix> is there an easy way to determine progress, or even status, of data transferred off an OSD when it is marked as out but it is still up?
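There is no single per-OSD progress bar, but the cluster-wide recovery counters and the per-pg state give a rough picture; a sketch, assuming the drained OSD is the hypothetical osd.12:

    # overall recovery progress (degraded/misplaced object counts and percentages)
    ceph status
    ceph -w                       # same information, streaming
    # pgs whose up/acting sets still reference osd.12 (crude text filter)
    ceph pg dump | grep '\[12,'
    # space still in use on each OSD (available in hammer and later)
    ceph osd df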
[15:21] * alram (~alram@209.63.137.130) Quit (Remote host closed the connection)
[15:21] * The1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[15:21] * kefu (~kefu@192.154.200.66) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[15:21] * alram (~alram@209.63.137.130) has joined #ceph
[15:22] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:22] * bene (~ben@c-24-60-237-191.hsd1.nh.comcast.net) has joined #ceph
[15:23] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[15:24] * amote (~amote@121.244.87.116) Quit (Quit: Leaving)
[15:24] * midnigh__ (~midnightr@216.113.160.71) Quit (Ping timeout: 480 seconds)
[15:28] * yanzheng (~zhyan@125.71.107.110) has joined #ceph
[15:29] * thomnico (~thomnico@2a01:e35:8b41:120:345c:b52c:84e9:e5a5) Quit (Read error: No route to host)
[15:30] * thomnico (~thomnico@2a01:e35:8b41:120:345c:b52c:84e9:e5a5) has joined #ceph
[15:32] * alram (~alram@209.63.137.130) Quit (Ping timeout: 480 seconds)
[15:32] * arbrandes (~arbrandes@191.254.207.134) has joined #ceph
[15:32] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) Quit (Quit: Leaving)
[15:33] * jrankin (~jrankin@d53-64-170-236.nap.wideopenwest.com) has joined #ceph
[15:33] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) has joined #ceph
[15:33] * dyasny (~dyasny@198.251.58.23) Quit (Ping timeout: 480 seconds)
[15:34] * kefu (~kefu@114.86.215.22) has joined #ceph
[15:37] * kefu (~kefu@114.86.215.22) Quit (Max SendQ exceeded)
[15:37] * bitserker (~toni@88.87.194.130) has joined #ceph
[15:38] * notarima (~SquallSee@9S0AAA1ED.tor-irc.dnsbl.oftc.net) Quit ()
[15:38] * ghostnote (~AG_Scott@192.42.115.101) has joined #ceph
[15:39] * thomnico (~thomnico@2a01:e35:8b41:120:345c:b52c:84e9:e5a5) Quit (Ping timeout: 480 seconds)
[15:39] * kefu (~kefu@114.86.215.22) has joined #ceph
[15:39] * yanzheng (~zhyan@125.71.107.110) Quit (Quit: This computer has gone to sleep)
[15:39] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) has joined #ceph
[15:39] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[15:40] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[15:40] * flakrat (~flakrat@fttu-216-41-245-223.btes.tv) has joined #ceph
[15:42] * ade_b (~abradshaw@tmo-108-58.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[15:42] * flakrat (~flakrat@fttu-216-41-245-223.btes.tv) Quit (Read error: Connection reset by peer)
[15:46] * sankarshan (~sankarsha@121.244.87.116) Quit (Quit: Are you sure you want to quit this channel (Cancel/Ok) ?)
[15:47] * bjornar_ (~bjornar@ti0099a430-1131.bb.online.no) has joined #ceph
[15:48] * Concubidated1 (~Adium@66-87-126-128.pools.spcsdns.net) Quit (Quit: Leaving.)
[15:50] * derjohn_mob (~aj@88.128.80.128) has joined #ceph
[15:51] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) has joined #ceph
[15:54] * kefu (~kefu@114.86.215.22) Quit (Max SendQ exceeded)
[15:54] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) Quit ()
[15:55] * kefu (~kefu@114.86.215.22) has joined #ceph
[16:00] * kefu (~kefu@114.86.215.22) Quit (Max SendQ exceeded)
[16:01] * kefu (~kefu@114.86.215.22) has joined #ceph
[16:03] * xarses (~andreww@12.10.113.130) Quit (Ping timeout: 480 seconds)
[16:08] * ghostnote (~AG_Scott@8Q4AABG1X.tor-irc.dnsbl.oftc.net) Quit ()
[16:10] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[16:10] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[16:12] * thomnico (~thomnico@cro38-2-88-180-16-18.fbx.proxad.net) has joined #ceph
[16:13] * kefu (~kefu@114.86.215.22) Quit (Max SendQ exceeded)
[16:14] * Concubidated (~Adium@192.41.52.12) has joined #ceph
[16:14] * kefu (~kefu@114.86.215.22) has joined #ceph
[16:14] * alram (~alram@192.41.52.12) has joined #ceph
[16:16] * kefu (~kefu@114.86.215.22) Quit (Max SendQ exceeded)
[16:17] * kefu (~kefu@114.86.215.22) has joined #ceph
[16:18] * madkiss (~madkiss@2001:6f8:12c3:f00f:9195:a3e6:fa22:e3bd) has joined #ceph
[16:19] * ircolle (~Adium@c-71-229-136-109.hsd1.co.comcast.net) has joined #ceph
[16:20] * ircolle-afk (~Adium@2601:1:a580:1735:914f:6dfb:ef0a:e28c) Quit (Ping timeout: 480 seconds)
[16:21] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[16:21] * kefu (~kefu@114.86.215.22) Quit (Max SendQ exceeded)
[16:22] * kefu (~kefu@114.86.215.22) has joined #ceph
[16:23] * Miouge (~Miouge@94.136.92.20) Quit (Ping timeout: 480 seconds)
[16:26] * kefu (~kefu@114.86.215.22) Quit (Max SendQ exceeded)
[16:27] * kefu (~kefu@114.86.215.22) has joined #ceph
[16:28] * bitserker (~toni@88.87.194.130) Quit (Remote host closed the connection)
[16:29] * yanzheng (~zhyan@125.71.107.110) has joined #ceph
[16:39] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Ping timeout: 480 seconds)
[16:39] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[16:39] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[16:40] * moore_ (~moore@97-124-90-185.phnx.qwest.net) Quit (Remote host closed the connection)
[16:42] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[16:42] * kawa2014 (~kawa@2001:67c:1560:8007::aac:c1a6) has joined #ceph
[16:46] * derjohn_mob (~aj@88.128.80.128) Quit (Ping timeout: 480 seconds)
[16:50] * yanzheng (~zhyan@125.71.107.110) Quit (Quit: This computer has gone to sleep)
[16:50] <TheSov> where can i find a good tutorial on crush maps?
[16:52] * yanzheng (~zhyan@125.71.107.110) has joined #ceph
[16:52] * capri_on (~capri@212.218.127.222) has joined #ceph
[16:54] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[16:55] * capri (~capri@212.218.127.222) Quit (Ping timeout: 480 seconds)
[16:57] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[16:58] * xarses (~andreww@166.175.56.24) has joined #ceph
[16:58] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:01] * yanzheng (~zhyan@125.71.107.110) Quit (Quit: This computer has gone to sleep)
[17:01] * Redcavalier (~Redcavali@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[17:02] * branto (~branto@178-253-140-142.3pp.slovanet.sk) has left #ceph
[17:02] * yanzheng (~zhyan@125.71.107.110) has joined #ceph
[17:03] <Redcavalier> Hi guys, quick question. Is it possible to change the size of journals after the OSDs have been provisioned? I set the option to 10 GB by default, with the journals on a dedicated Intel SSD split into 3 journal partitions.
[17:06] * dyasny (~dyasny@173.231.115.59) has joined #ceph
[17:08] * Lattyware (~uhtr5r@ncc-1701-a.tor-exit.network) has joined #ceph
[17:08] * kawa2014 (~kawa@2001:67c:1560:8007::aac:c1a6) Quit (Ping timeout: 480 seconds)
[17:10] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[17:12] * danieagle (~Daniel@191.205.91.50) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[17:12] * shaunm (~shaunm@74.215.76.114) Quit (Ping timeout: 480 seconds)
[17:13] * xarses (~andreww@166.175.56.24) has left #ceph
[17:13] * xarses (~xarses@166.175.56.24) has joined #ceph
[17:14] * bitserker (~toni@88.87.194.130) has joined #ceph
[17:14] * bitserker (~toni@88.87.194.130) Quit ()
[17:14] * xarses (~xarses@166.175.56.24) Quit (Remote host closed the connection)
[17:14] * bitserker (~toni@88.87.194.130) has joined #ceph
[17:14] * bitserker (~toni@88.87.194.130) Quit ()
[17:14] * xarses (~xarses@166.175.56.24) has joined #ceph
[17:15] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[17:15] * ksperis (~ksperis@46.218.42.103) Quit (Quit: Leaving)
[17:15] * bitserker (~toni@88.87.194.130) has joined #ceph
[17:16] <flaf> Redcavalier: I'm not an expert, but I think it's possible; you have to stop the osd during the "resize", of course.
[17:17] <flaf> Redcavalier: warning, I have never tested this, but: 1. you stop your osd daemon.
[17:17] <flaf> 2. you "flush" the journal: ceph-osd -i $id --flush-journal
[17:18] <flaf> 3. And after that you can resize your journal partition (or create a new journal partition).
[17:18] <flaf> I suppose that in your osd working dir, the journal is just a symlink to the partition.
[17:18] <flaf> Voilà. ;)
[17:19] * Kioob`Taff (~plug-oliv@2a01:e35:2e8a:1e0::42:10) Quit (Quit: Leaving.)
[17:20] * kefu is now known as kefu|afk
[17:24] * kawa2014 (~kawa@212.110.41.244) has joined #ceph
[17:24] * yanzheng (~zhyan@125.71.107.110) Quit (Quit: This computer has gone to sleep)
[17:25] * raso (~raso@deb-multimedia.org) Quit (Quit: WeeChat 1.1.1)
[17:27] * yanzheng (~zhyan@125.71.107.110) has joined #ceph
[17:27] * raso (~raso@deb-multimedia.org) has joined #ceph
[17:28] * kefu|afk (~kefu@114.86.215.22) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[17:29] * cok (~chk@2a02:2350:18:1010:19c4:600:f40d:dbb5) Quit (Quit: Leaving.)
[17:30] * jrocha (~jrocha@vagabond.cern.ch) Quit (Remote host closed the connection)
[17:31] * owasserm (~owasserm@216.1.187.164) has joined #ceph
[17:32] * fretb_ (frederik@november.openminds.be) Quit (Quit: leaving)
[17:32] * fretb (frederik@november.openminds.be) has joined #ceph
[17:35] <mongo> Redcavalier: stopping the osd, resizing the journal, and updating the uuid works. Type `mount` and change into the directory where the OSD drive is mounted; you will see a symbolic link named `journal` which points at the external SSD journal device.
[17:36] * championofcyrodi (~championo@50-205-35-98-static.hfc.comcastbusiness.net) Quit (Quit: Leaving.)
[17:36] <mongo> I would highly recommend testing it with Vagrant beforehand, using ceph-ansible or some other method, to get used to the procedure.
[17:38] <mongo> The recovery procedure here documents it better than I can: http://www.sebastien-han.fr/blog/2014/11/27/ceph-recover-osds-after-ssd-journal-failure/
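Condensing flaf's and mongo's steps, a journal resize would look roughly like this; an untested sketch, assuming osd.2 whose journal symlink points at an SSD partition and a sysvinit-style init:

    # 1. stop the OSD and flush its journal into the data store
    service ceph stop osd.2
    ceph-osd -i 2 --flush-journal
    # 2. resize or recreate the SSD journal partition (parted/sgdisk/...),
    #    making sure /var/lib/ceph/osd/ceph-2/journal still points at it
    # 3. initialise the new journal and start the OSD again
    ceph-osd -i 2 --mkjournal
    service ceph start osd.2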
[17:38] * Lattyware (~uhtr5r@5NZAADO34.tor-irc.dnsbl.oftc.net) Quit ()
[17:38] * ricin (~Zyn@37.187.129.166) has joined #ceph
[17:42] <cmdrk> i have a cache tier + erasure coded pool for CephFS. I filled up the cache with lots of files (enough to start evicting to the erasure coded pool) and then deleted them, but the objects in the EC pool have stuck around, it seems. any idea how I should debug this?
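A first debugging step is usually to force the cache tier to flush and evict, then compare object counts; a rough sketch, assuming the tiers are named cachepool and ecpool (deleted files can linger in the base pool until the cache's delete ops are flushed down):

    # object counts per pool
    rados df
    # force the cache tier to write back and evict everything it can
    rados -p cachepool cache-flush-evict-all
    # see what is still left in the EC base pool
    rados -p ecpool ls | head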
[17:42] * yanzheng (~zhyan@125.71.107.110) Quit (Quit: This computer has gone to sleep)
[17:43] * daviddcc (~dcasier@LCaen-656-1-144-187.w217-128.abo.wanadoo.fr) has joined #ceph
[17:44] <TheSov> man sometimes i hate bureaucracy.... no wait.. always. the word i was looking for was always
[17:44] * haomaiwang (~haomaiwan@114.111.166.250) has joined #ceph
[17:45] <TheSov> mongo, if an osd journal fails, why not scrap that osd, add a new journal disk and rebuild?
[17:45] * yanzheng (~zhyan@125.71.107.110) has joined #ceph
[17:45] <mongo> That is the process; there is no real difference between deleting the partition to resize it and replacing the disk, as far as the OSD is concerned.
[17:46] * moore (~moore@64.202.160.88) has joined #ceph
[17:46] * moore (~moore@64.202.160.88) Quit (Remote host closed the connection)
[17:46] <TheSov> so you destroy the osds involved with the bad journal disk... add a new journal disk and create new osds, correct?
[17:46] * moore (~moore@64.202.160.88) has joined #ceph
[17:46] * shaunm (~shaunm@mbd5a36d0.tmodns.net) has joined #ceph
[17:47] * midnight_ (~midnightr@216.113.160.71) has joined #ceph
[17:47] * loicd is now known as dscully
[17:48] * dscully is now known as loicd
[17:49] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:50] <Redcavalier> flaf thanks for the reply. So, from what you and mongo are saying, does that mean that ceph takes the whole partition, disregarding osd_journal_size in ceph.conf? If it takes the whole partition, that's fine. The issue is if it's limited to osd_journal_size even in that case.
[17:50] * yanzheng (~zhyan@125.71.107.110) Quit ()
[17:51] * shylesh (~shylesh@121.244.87.118) Quit (Remote host closed the connection)
[17:51] <flaf> Redcavalier: good question. I don't think so. http://ceph.com/docs/master/rados/configuration/osd-config-ref/#journal-settings
[17:52] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Remote host closed the connection)
[17:52] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[17:52] <Redcavalier> flaf, oh right, I'd never noticed the second part of the description there. I guess that solves things then. Thank you
[17:52] <flaf> Redcavalier: personally I put 0 for this parameter and the osd takes the entire block device that is the target of the symlink /var/lib/ceph/osd/$cluster-$id/journal (not sure of the path)
[17:53] * midnightrunner (~midnightr@c-67-174-241-112.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:53] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[17:53] <flaf> (checked, the path is correct ;))
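In ceph.conf that setting looks like this; a small sketch of flaf's setup, assuming the journal symlink points at a dedicated partition:

    [osd]
    # 0 = use the whole block device the journal symlink points at
    osd journal size = 0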
[17:59] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[18:00] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[18:00] * championofcyrodi (~championo@50-205-35-98-static.hfc.comcastbusiness.net) has joined #ceph
[18:03] * owasserm (~owasserm@216.1.187.164) Quit (Ping timeout: 480 seconds)
[18:04] * yanzheng (~zhyan@125.71.107.110) has joined #ceph
[18:05] * sjm (~sjm@pool-173-70-76-86.nwrknj.fios.verizon.net) Quit (Quit: Leaving.)
[18:05] * burley (~khemicals@cpe-98-28-239-78.cinci.res.rr.com) Quit (Quit: burley)
[18:05] * sjm (~sjm@pool-173-70-76-86.nwrknj.fios.verizon.net) has joined #ceph
[18:08] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[18:08] * ricin (~Zyn@5NZAADO6K.tor-irc.dnsbl.oftc.net) Quit ()
[18:08] * Quackie (~Kizzi@9S0AAA1KK.tor-irc.dnsbl.oftc.net) has joined #ceph
[18:11] * nsoffer (~nsoffer@bzq-79-180-80-86.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[18:13] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) Quit (Quit: Leaving.)
[18:14] * frickler_ (~jens@v1.jayr.de) has joined #ceph
[18:14] * frickler_ (~jens@v1.jayr.de) Quit ()
[18:14] * thomnico (~thomnico@cro38-2-88-180-16-18.fbx.proxad.net) Quit (Quit: Ex-Chat)
[18:15] * dopesong (~dopesong@88-119-94-55.static.zebra.lt) Quit (Remote host closed the connection)
[18:15] * yguang11 (~yguang11@nat-dip30-wl-d.cfw-a-gci.corp.yahoo.com) has joined #ceph
[18:27] * smerz (~ircircirc@37.74.194.90) Quit (Remote host closed the connection)
[18:27] * jwilkins (~jwilkins@2601:9:4580:f4c:ea2a:eaff:fe08:3f1d) has joined #ceph
[18:31] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) has joined #ceph
[18:33] * jskinner (~jskinner@host-95-2-129.infobunker.com) has joined #ceph
[18:33] * Krazypoloc (~Krazypolo@rrcs-67-52-43-151.west.biz.rr.com) has joined #ceph
[18:34] <Krazypoloc> Hey guys - I see the trusty repo got pulled?
[18:34] <gleam> maybe because of the broken ceph-common package?
[18:34] * owasserm (~owasserm@206.169.83.146) has joined #ceph
[18:35] <Krazypoloc> Oh?
[18:35] <Krazypoloc> I wasn't aware of that
[18:35] <gleam> http://tracker.ceph.com/issues/11388,
[18:36] <gleam> i dunno
[18:37] <gleam> ah it looks like 0.94.2 just got tagged
[18:37] * mattch (~mattch@pcw3047.see.ed.ac.uk) Quit (Quit: Leaving.)
[18:37] * raw (~raw@37.48.65.169) Quit (Remote host closed the connection)
[18:38] * Quackie (~Kizzi@9S0AAA1KK.tor-irc.dnsbl.oftc.net) Quit ()
[18:39] <Krazypoloc> Thanks for the info gleam
[18:41] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[18:43] * sleinen1 (~Adium@2001:620:0:82::102) has joined #ceph
[18:43] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) Quit (Quit: Leaving.)
[18:44] * jwilkins (~jwilkins@2601:9:4580:f4c:ea2a:eaff:fe08:3f1d) Quit (Remote host closed the connection)
[18:45] * mschiff (~mschiff@mx10.schiffbauer.net) Quit (Remote host closed the connection)
[18:45] * mschiff (~mschiff@mx10.schiffbauer.net) has joined #ceph
[18:46] * bene (~ben@c-24-60-237-191.hsd1.nh.comcast.net) Quit (Quit: Konversation terminated!)
[18:46] * yanzheng (~zhyan@125.71.107.110) Quit (Quit: This computer has gone to sleep)
[18:47] * _prime_ (~oftc-webi@199.168.44.192) Quit (Quit: Page closed)
[18:47] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has joined #ceph
[18:47] * bjornar_ (~bjornar@ti0099a430-1131.bb.online.no) Quit (Ping timeout: 480 seconds)
[18:48] * kawa2014 (~kawa@212.110.41.244) Quit (Ping timeout: 480 seconds)
[18:49] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) Quit (Quit: Leaving)
[18:50] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[18:50] * Debesis (~0x@104.202.38.86.mobile.mezon.lt) has joined #ceph
[18:52] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has left #ceph
[18:53] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[18:54] <TheSov> gleam, what does that mean?
[18:54] * linuxkidd (~linuxkidd@207.236.250.131) Quit (Ping timeout: 480 seconds)
[18:55] <gleam> which?
[18:56] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[18:56] <gleam> 0.94.2 came out yesterday
[18:56] * garphy is now known as garphy`aw
[19:00] * shaunm (~shaunm@mbd5a36d0.tmodns.net) Quit (Ping timeout: 480 seconds)
[19:07] * Debesis (~0x@104.202.38.86.mobile.mezon.lt) Quit (Quit: Leaving)
[19:07] * Debesis (~0x@104.202.38.86.mobile.mezon.lt) has joined #ceph
[19:08] * notmyname (~pakman__@176.10.104.240) has joined #ceph
[19:08] * Debesis (~0x@104.202.38.86.mobile.mezon.lt) Quit ()
[19:09] * Debesis (~0x@104.202.38.86.mobile.mezon.lt) has joined #ceph
[19:09] * Debesis (~0x@104.202.38.86.mobile.mezon.lt) Quit (Remote host closed the connection)
[19:10] * Debesis (~0x@104.202.38.86.mobile.mezon.lt) has joined #ceph
[19:10] * Debesis (~0x@104.202.38.86.mobile.mezon.lt) Quit (Remote host closed the connection)
[19:11] <jidar> anybody know what this means?
[19:11] <jidar> [node2][WARNIN] 2015-06-12 17:10:43.316122 7f36ec4a6700 0 -- :/1003810 >> 192.168.50.11:6789/0 pipe(0x7f36e8026050 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f36e8022c20).fault
[19:11] * Debesis (~0x@104.202.38.86.mobile.mezon.lt) has joined #ceph
[19:11] * Debesis (~0x@104.202.38.86.mobile.mezon.lt) Quit (Remote host closed the connection)
[19:11] <jidar> oh, it's a firewall issue
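For reference, that fault is the client failing to reach the monitor at 192.168.50.11:6789; opening the Ceph ports looks roughly like this, assuming firewalld on EL7 (use the iptables/ufw equivalent elsewhere):

    # on the monitor host: allow the ceph-mon port
    firewall-cmd --zone=public --add-port=6789/tcp --permanent
    # on OSD hosts: OSDs bind to 6800-7300 by default
    firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
    firewall-cmd --reload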
[19:11] <jidar> what about this one
[19:11] <jidar> [node2][WARNIN] ceph-disk: Error: ceph osd create failed: Command '/usr/bin/ceph' returned non-zero exit status 1:
[19:11] <jidar> [node2][ERROR ] RuntimeError: command returned non-zero exit status: 1
[19:11] * linuxkidd (~linuxkidd@207.236.250.131) has joined #ceph
[19:12] <jidar> [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk -v activate --mark-init sysvinit --mount /var/local/osd0
[19:12] * Debesis (~0x@104.202.38.86.mobile.mezon.lt) has joined #ceph
[19:13] * Debesis (~0x@104.202.38.86.mobile.mezon.lt) Quit ()
[19:15] * ibravo (~ibravo@72.198.142.104) has joined #ceph
[19:16] * Hemanth (~Hemanth@117.221.97.102) has joined #ceph
[19:17] * Debesis (~0x@104.202.38.86.mobile.mezon.lt) has joined #ceph
[19:19] <jidar> it looks like maybe the quickstart guide misses the step where you have to overwrite the bootstrap-osd.keyring file?
[19:21] * brunoleon (~quassel@2a01:e35:8a42:b9d0:9555:4ed9:c739:c9a1) Quit (Ping timeout: 480 seconds)
[19:21] * vsi (vsi@kapsi.fi) has joined #ceph
[19:27] * jwilkins (~jwilkins@2601:9:4580:f4c:ea2a:eaff:fe08:3f1d) has joined #ceph
[19:33] * bitserker (~toni@88.87.194.130) Quit (Quit: Leaving.)
[19:37] * shaunm (~shaunm@74.215.76.114) has joined #ceph
[19:38] * notmyname (~pakman__@5NZAADPDP.tor-irc.dnsbl.oftc.net) Quit ()
[19:38] * Sophie1 (~Dysgalt@5NZAADPF5.tor-irc.dnsbl.oftc.net) has joined #ceph
[19:47] * jiyer (~chatzilla@63.229.31.161) has joined #ceph
[19:52] * naga1 (~oftc-webi@idp01webcache2-z.apj.hpecore.net) has joined #ceph
[19:53] <naga1> i configured ceph with openstack cinder, and i am facing some issues when i restart the cinder-volume service
[19:54] <naga1> File "/opt/stack/venv/cinder-20150611T084659Z/lib/python2.7/site-packages/rados.py", line 293, in conf_read_file
[19:54] <naga1> raise make_ex(ret, "error calling conf_read_file")
[19:54] <naga1> Error: error calling conf_read_file: errno EACCES
[19:54] <naga1> can somebody help me out
[19:55] <naga1> all above errors are from cinder-volume.log
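errno EACCES from conf_read_file usually means the cinder service user cannot read /etc/ceph/ceph.conf or its keyring; a quick check-and-fix sketch, assuming the client.cinder keyring name from a standard RBD/Cinder setup:

    # can the cinder user read the config?
    sudo -u cinder cat /etc/ceph/ceph.conf > /dev/null
    ls -l /etc/ceph/ceph.client.cinder.keyring
    # if not, make the keyring readable by the cinder group
    chgrp cinder /etc/ceph/ceph.client.cinder.keyring
    chmod 0640 /etc/ceph/ceph.client.cinder.keyring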
[20:01] * pressureman (~pressurem@62.217.45.26) Quit (Quit: Ex-Chat)
[20:03] * JFQ (~ghartz@AStrasbourg-651-1-211-89.w109-221.abo.wanadoo.fr) has joined #ceph
[20:08] * steveeJ (~steveeJ@virthost3.stefanjunker.de) has joined #ceph
[20:08] * Sophie1 (~Dysgalt@5NZAADPF5.tor-irc.dnsbl.oftc.net) Quit ()
[20:18] <ircolle> http://files.meetup.com/10524692/Relocatable%20Docker%20Containers%20with%20CEPH.pdf - Presentation on Docker Containers with Ceph
[20:19] * daviddcc (~dcasier@LCaen-656-1-144-187.w217-128.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[20:30] * rhysidris (~oftc-webi@192.30.60.201) has joined #ceph
[20:31] * dopesong (~dopesong@lan126-981.elekta.lt) has joined #ceph
[20:34] * rhysidris is now known as Guest1431
[20:34] * linuxkidd (~linuxkidd@207.236.250.131) Quit (Quit: Leaving)
[20:35] * Guest1431 (~oftc-webi@192.30.60.201) has left #ceph
[20:37] * rhysidris (~oftc-webi@192.30.60.201) has joined #ceph
[20:38] * Moriarty (~nupanick@h-42-226.a357.priv.bahnhof.se) has joined #ceph
[20:39] * dopesong_ (~dopesong@lb1.mailer.data.lt) has joined #ceph
[20:40] * marrusl (~mark@nat-pool-rdu-u.redhat.com) Quit (Remote host closed the connection)
[20:40] <rhysidris> is it possible for a cache pool to overlay more than one cold pools ?
[20:40] * naga1 (~oftc-webi@idp01webcache2-z.apj.hpecore.net) Quit (Remote host closed the connection)
[20:43] * dopesong (~dopesong@lan126-981.elekta.lt) Quit (Ping timeout: 480 seconds)
[20:46] * rwheeler (~rwheeler@pool-173-48-214-9.bstnma.fios.verizon.net) has joined #ceph
[20:48] * mgolub (~Mikolaj@91.225.200.209) has joined #ceph
[20:55] * civik (~civik@161.225.196.41) has joined #ceph
[21:07] * jrankin (~jrankin@d53-64-170-236.nap.wideopenwest.com) Quit (Quit: Leaving)
[21:08] * Moriarty (~nupanick@9S0AAA1Q8.tor-irc.dnsbl.oftc.net) Quit ()
[21:08] * Doodlepieguy (~AotC@exit1.torproxy.org) has joined #ceph
[21:15] * ibravo (~ibravo@72.198.142.104) Quit (Quit: This computer has gone to sleep)
[21:16] * ibravo (~ibravo@72.198.142.104) has joined #ceph
[21:16] * ibravo (~ibravo@72.198.142.104) Quit ()
[21:17] * dopesong_ (~dopesong@lb1.mailer.data.lt) Quit (Remote host closed the connection)
[21:21] * Bosse (~bosse@rifter2.klykken.com) Quit (Remote host closed the connection)
[21:21] * Bosse (~bosse@rifter2.klykken.com) has joined #ceph
[21:23] * bene (~ben@50.153.148.156) has joined #ceph
[21:23] * scuttlemonkey is now known as scuttle|afk
[21:25] * bene is now known as bene_at-car-dealer
[21:25] * civik (~civik@161.225.196.41) Quit ()
[21:30] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[21:32] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[21:35] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[21:35] * andrew (~oftc-webi@32.97.110.54) has joined #ceph
[21:36] <andrew> does anyone know if ceph-deploy is supported on rhel 7.1?
[21:36] <andrew> im getting this error: [ceph_deploy][ERROR ] UnsupportedPlatform: Platform is not supported: Red Hat Enterprise Linux Server 7.17.17.17.17.17.17.1
[21:36] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Ping timeout: 480 seconds)
[21:36] <gleam> haha
[21:36] <alfredodeza> andrew: that is weird
[21:37] <alfredodeza> mind pasting the whole output somewhere?
[21:37] <andrew> im doing ceph-deploy new <hostname>
[21:37] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[21:37] <alfredodeza> andrew: the output is going to be super useful :)
[21:38] <andrew> http://pastebin.com/4JZCn21Y
[21:38] * Doodlepieguy (~AotC@3DDAAA5QY.tor-irc.dnsbl.oftc.net) Quit ()
[21:40] <alfredodeza> andrew: what is the output of the following command on that server: python -c "import platform; print platform.linux_distribution()"
[21:40] <andrew> ('Red Hat Enterprise Linux Server', '7.17.17.17.17.17.17.1', '')
[21:41] <alfredodeza> heh, python got weird there I guess
[21:41] * rlrevell (~leer@vbo1.inmotionhosting.com) Quit (Quit: Leaving.)
[21:42] * ganders (~root@190.2.42.21) Quit (Quit: WeeChat 0.4.2)
[21:44] <TheSov> where can i find a newbie's guide to crush maps
[21:44] <TheSov> i have no idea how ceph knows which osds to keep data on
[21:44] <alfredodeza> andrew: one sec, investigating
[21:44] <TheSov> i'm guessing it knows not to keep the same replicas on the same host
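The short answer is the CRUSH rules: the default replicated rule places each replica under a different host. Dumping and reading your own map looks roughly like this; a sketch, the file names are arbitrary:

    # extract and decompile the current CRUSH map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # in crushmap.txt the default rule ends with something like
    #   step chooseleaf firstn 0 type host
    # i.e. each replica goes to a leaf (osd) under a different host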
[21:45] * wer (~wer@2600:1003:b852:96c4:ad60:f2f7:8bc7:e9f3) has joined #ceph
[21:46] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[21:46] * nsoffer (~nsoffer@bzq-79-180-80-86.red.bezeqint.net) has joined #ceph
[21:47] <andrew> alfredodeza, i changed /etc/redhat-release to "Red Hat Enterprise Linux Server release 7.1 (Maipo)" and it worked
[21:47] <alfredodeza> yeah but that shouldn't happen though
[21:47] <alfredodeza> something is not right
[21:49] <alfredodeza> andrew: sorry, this is a bug
[21:50] <alfredodeza> would you mind creating a ticket in the tracker?
[21:50] <alfredodeza> just confirmed it
[21:50] <andrew> sure
[21:50] <alfredodeza> one sec, let me get you a link
[21:50] <alfredodeza> http://tracker.ceph.com/projects/ceph-deploy/issues/new
[21:50] <alfredodeza> andrew: ^ ^
[21:50] <andrew> thanks
[21:53] <alfredodeza> mind pinging me the ticket when you create it?
[21:53] <alfredodeza> I have a fix for it
[21:54] <andrew> sure
[21:54] <andrew> 1 sec
[21:55] * vbellur (~vijay@122.167.138.43) Quit (Ping timeout: 480 seconds)
[21:56] <andrew> http://tracker.ceph.com/issues/12001
[22:06] * telnoratti (~telnoratt@pr0n.vtluug.org) has joined #ceph
[22:08] * Moriarty (~BlS@edwardsnowden1.torservers.net) has joined #ceph
[22:08] <telnoratti> I hate to be that guy, but following the steps at http://docs.ceph.com/docs/master/start/quick-start-preflight/ the gpg key for the rpm repo is hosted on an ipv4-only server as far as I can tell
[22:10] <telnoratti> I solved my issue, but git.ceph.com should probably have an AAAA record if possible
[22:14] * andrew (~oftc-webi@32.97.110.54) Quit (Remote host closed the connection)
[22:16] <dmick> telnoratti: I would assume most of that infrastructure is v4
[22:16] <dmick> as it is everywhere; does this key URL seem to be an outlier?
[22:17] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[22:18] <telnoratti> I haven't really looked too closely at the rest of ceph, the machine I was trying to do this on only has ipv6 (and will only ever be provided that)
[22:20] * tupper (~tcole@2001:420:2280:1272:8900:f9b8:3b49:567e) Quit (Ping timeout: 480 seconds)
[22:20] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[22:21] <telnoratti> It would be nice if at least the key and packages were on v6 for v6 only hosts
[22:24] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[22:25] * derjohn_mob (~aj@x590cae37.dyn.telefonica.de) has joined #ceph
[22:25] * moore (~moore@64.202.160.88) Quit (Remote host closed the connection)
[22:27] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[22:28] <TheSov> can you disable journals for SSD tiers?
[22:28] <TheSov> or is that bad
[22:29] <dmick> telnoratti: I see. Yeah, sadly, v6 is not available in lots of places yet. my current internet connection does not support it at all, for example
[22:33] <telnoratti> Yeah, neither does mine, but work is IP-starved so most new hosts don't get ipv4 unless they absolutely need it. It looks like the gpg key is coming straight from git and the repos are hosted on ceph.com; maybe the gpg key should be hosted with them anyway
[22:33] <dmick> git.ceph.com is "them" (us)
[22:34] <dmick> it's a separate server for load reasons
[22:35] <dmick> in general I'm a fan of v6 and want to push it where possible though, so I'll send a note
[22:35] * derjohn_mob (~aj@x590cae37.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[22:38] * Moriarty (~BlS@3DDAAA5UW.tor-irc.dnsbl.oftc.net) Quit ()
[22:40] * mgolub (~Mikolaj@91.225.200.209) Quit (Quit: away)
[22:40] * bene_at-car-dealer is now known as bene
[22:40] * bene (~ben@50.153.148.156) Quit (Quit: Konversation terminated!)
[22:41] * georgem (~Adium@207.164.79.38) has joined #ceph
[22:46] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Remote host closed the connection)
[22:47] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[22:47] * moore (~moore@wsip-98-174-204-205.ph.ph.cox.net) has joined #ceph
[22:48] * derjohn_mob (~aj@x590cae37.dyn.telefonica.de) has joined #ceph
[22:54] * georgem (~Adium@207.164.79.38) has left #ceph
[22:55] * Redcavalier (~Redcavali@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Quit: Leaving)
[22:55] * beardo_ (~sma310@207-172-244-241.c3-0.atw-ubr5.atw.pa.cable.rcn.com) Quit (Remote host closed the connection)
[22:58] * dyasny (~dyasny@173.231.115.59) Quit (Ping timeout: 480 seconds)
[22:59] * Hemanth (~Hemanth@117.221.97.102) Quit (Ping timeout: 480 seconds)
[23:05] * puffy (~puffy@64.191.206.83) has joined #ceph
[23:06] * nsoffer (~nsoffer@bzq-79-180-80-86.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[23:08] * TehZomB (~brannmar@5.61.34.63) has joined #ceph
[23:10] * infinity_ (~brendon@web2.artsopolis.com) has joined #ceph
[23:12] * infinity1 (~brendon@web2.artsopolis.com) Quit (Ping timeout: 480 seconds)
[23:12] * kingcu (~kingcu@kona.ridewithgps.com) has joined #ceph
[23:14] * kingcu_ (~kingcu@kona.ridewithgps.com) Quit (Ping timeout: 480 seconds)
[23:21] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) Quit (Quit: Leaving.)
[23:21] * derjohn_mob (~aj@x590cae37.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[23:28] * visbits (~textual@8.29.138.28) has joined #ceph
[23:31] * derjohn_mob (~aj@x590e001c.dyn.telefonica.de) has joined #ceph
[23:32] * bjornar_ (~bjornar@ti0099a430-1131.bb.online.no) has joined #ceph
[23:34] * sjm (~sjm@pool-173-70-76-86.nwrknj.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[23:38] * nsoffer (~nsoffer@bzq-79-180-80-86.red.bezeqint.net) has joined #ceph
[23:38] * Sysadmin88 (~IceChat77@2.124.164.69) has joined #ceph
[23:38] * TehZomB (~brannmar@5NZAADPVY.tor-irc.dnsbl.oftc.net) Quit ()
[23:38] * Malcovent (~raindog@tortest.ip-eend.nl) has joined #ceph
[23:45] * rlrevell (~leer@184.52.129.221) has joined #ceph
[23:50] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[23:51] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[23:54] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:56] * moore (~moore@wsip-98-174-204-205.ph.ph.cox.net) Quit (Read error: Connection reset by peer)
[23:57] * moore (~moore@wsip-98-174-204-205.ph.ph.cox.net) has joined #ceph
[23:58] * The_Ball (~ballen@42.80-202-192.nextgentel.com) Quit (Quit: Leaving)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.