#ceph IRC Log

IRC Log for 2014-08-24

Timestamps are in GMT/BST.

[0:02] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[0:04] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[0:06] * angdraug (~angdraug@131.252.204.134) Quit (Quit: Leaving)
[0:24] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[0:26] * angdraug (~angdraug@131.252.204.134) has joined #ceph
[0:30] * rendar (~I@host56-176-dynamic.37-79-r.retail.telecomitalia.it) Quit ()
[0:32] <tremon> ok, now I'm stuck again. An in-place upgrade from 0.56.7 to 0.61.9 fails with "unable to read magic from mon data", and I can't seem to add a new 0.61 monitor either, it gives a lot of "couldn't decrypt" errors
[0:33] <tremon> but I've verified the keyring files of the monitors, and they have matching md5sums
[0:33] <tremon> anyone know what I've missed there?
[0:37] <tremon> the "monitor changes" blog post by joao describes the monitor upgrades for 0.58, noting that it entails moving all separate gv's into a leveldb
[0:39] <tremon> could it be that my earlier attempt at upgrading from 0.48 to 0.80 broke that upgrade process? I see I do have a store.db directory, could I retrigger the 0.58 conversion by deleting that dir?
[0:44] * fdmanana (~fdmanana@bl13-150-5.dsl.telepac.pt) has joined #ceph
[0:47] <tremon> oh well, no time like the present. Manually removing store.db and re-doing the upgrade solved both issues: the existing monitor upgraded its disk format, and the new monitor no longer complains about decryption errors
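For readers hitting the same "unable to read magic from mon data" wall, a minimal sketch of the fix tremon describes, assuming a default cluster layout (the mon id "a" and the paths are hypothetical; move the store aside rather than deleting it, so you keep a backup):

    # stop the monitor before touching its data directory
    /etc/init.d/ceph stop mon.a
    # move the partially-converted leveldb store out of the way
    mv /var/lib/ceph/mon/ceph-a/store.db /var/lib/ceph/mon/ceph-a/store.db.bak
    # restarting the upgraded monitor re-runs the on-disk format conversion
    /etc/init.d/ceph start mon.a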
[0:48] * fdmanana (~fdmanana@bl13-150-5.dsl.telepac.pt) Quit ()
[0:52] * loicd (~loicd@cmd179.fsffrance.org) Quit (Ping timeout: 480 seconds)
[0:55] * sage__ is now known as sage
[0:57] * linuxkidd (~linuxkidd@cpe-066-057-017-151.nc.res.rr.com) Quit (Quit: Leaving)
[1:15] * loicd (~loicd@cmd179.fsffrance.org) has joined #ceph
[1:23] * loicd (~loicd@cmd179.fsffrance.org) Quit (Ping timeout: 480 seconds)
[1:23] * Nacer (~Nacer@2001:41d0:fe82:7200:f0a4:603d:cd4a:d5c6) Quit (Remote host closed the connection)
[1:28] * oms101 (~oms101@p20030057EA32B100C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:28] <tremon> monitors are now at 0.67, but the arm boxes are still at 0.48, and upgrading them requires upgrading to debian testing
[1:36] * oms101 (~oms101@p20030057EA5A1A00C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:37] * diegows (~diegows@190.190.5.238) has joined #ceph
[1:40] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[1:45] * angdraug (~angdraug@131.252.204.134) Quit (Quit: Leaving)
[2:07] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[2:08] * angdraug (~angdraug@131.252.204.134) has joined #ceph
[2:16] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) Quit (Quit: Thanks for everything! :-) see you! :-))
[2:24] * smiley_ (~smiley@pool-173-66-4-176.washdc.fios.verizon.net) Quit (Quit: smiley_)
[2:42] * sage (~quassel@cpe-172-248-35-102.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[2:45] * LeaChim (~LeaChim@host86-159-115-162.range86-159.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:49] * Concubidated1 (~Adium@66-87-130-109.pools.spcsdns.net) has joined #ceph
[2:49] * Concubidated (~Adium@66-87-130-109.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[2:52] * Concubidated (~Adium@66-87-130-109.pools.spcsdns.net) has joined #ceph
[2:52] * Concubidated1 (~Adium@66-87-130-109.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[2:59] * loicd (~loicd@cmd179.fsffrance.org) has joined #ceph
[3:05] * hflai (~hflai@alumni.cs.nctu.edu.tw) Quit (Remote host closed the connection)
[3:08] * smiley_ (~smiley@pool-173-66-4-176.washdc.fios.verizon.net) has joined #ceph
[3:11] * angdraug (~angdraug@131.252.204.134) Quit (Quit: Leaving)
[3:12] * angdraug (~angdraug@131.252.204.134) has joined #ceph
[3:13] * angdraug (~angdraug@131.252.204.134) Quit ()
[3:28] * Concubidated (~Adium@66-87-130-109.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[3:28] * Concubidated1 (~Adium@66-87-130-109.pools.spcsdns.net) has joined #ceph
[3:32] <smiley_> I am wondering if there is an easy way to get per bucket disk usage info on each of the radosgw buckets… from the cli… radosgw-admin, etc.
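radosgw-admin can report this directly; a sketch, with a hypothetical bucket name:

    # usage (size and object counts) for one bucket
    radosgw-admin bucket stats --bucket=mybucket
    # or stats for every bucket at once
    radosgw-admin bucket stats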
[3:34] * Concubidated1 (~Adium@66-87-130-109.pools.spcsdns.net) Quit (Read error: No route to host)
[3:34] * Concubidated (~Adium@66-87-130-109.pools.spcsdns.net) has joined #ceph
[4:22] * Concubidated (~Adium@66-87-130-109.pools.spcsdns.net) Quit (Read error: No route to host)
[4:23] * Concubidated (~Adium@66.87.130.109) has joined #ceph
[4:31] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[4:35] * Concubidated1 (~Adium@66-87-130-109.pools.spcsdns.net) has joined #ceph
[4:35] * Concubidated (~Adium@66.87.130.109) Quit (Read error: Connection reset by peer)
[4:40] * Concubidated1 (~Adium@66-87-130-109.pools.spcsdns.net) Quit (Read error: No route to host)
[4:49] * lupu (~lupu@93.114.114.176) Quit (Ping timeout: 480 seconds)
[4:58] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[5:03] * Vacum_ (~vovo@i59F79B65.versanet.de) has joined #ceph
[5:04] * angdraug (~angdraug@131.252.204.134) has joined #ceph
[5:10] * Vacum__ (~vovo@i59F79D23.versanet.de) Quit (Ping timeout: 480 seconds)
[5:11] * angdraug (~angdraug@131.252.204.134) Quit (Quit: Leaving)
[5:18] * hasues (~hazuez@108.236.232.243) has joined #ceph
[5:18] * RonaldReagan (~ronaldrea@pool-108-48-115-173.washdc.fios.verizon.net) has joined #ceph
[5:20] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Ping timeout: 480 seconds)
[5:29] * RonaldReagan (~ronaldrea@pool-108-48-115-173.washdc.fios.verizon.net) Quit ()
[5:31] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[5:39] * zack_dol_ (~textual@p843a3d.tokynt01.ap.so-net.ne.jp) has joined #ceph
[5:42] * zack_dolby (~textual@p8505b4.tokynt01.ap.so-net.ne.jp) Quit (Ping timeout: 480 seconds)
[5:46] * mtl2 (~Adium@c-98-245-49-17.hsd1.co.comcast.net) has joined #ceph
[5:53] * mtl1 (~Adium@c-98-245-49-17.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[5:56] * pfsense_rookie (~socram@bl19-247-139.dsl.telepac.pt) has joined #ceph
[6:08] * Concubidated (~Adium@66.87.65.66) has joined #ceph
[6:08] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[6:08] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[6:16] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[6:17] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[6:20] * alexxy[home] (~alexxy@79.173.81.171) has joined #ceph
[6:20] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (Ping timeout: 480 seconds)
[6:25] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[6:32] * tinklebear (~tinklebea@108-61-56-203ch.openskytelcom.net) Quit (Quit: Nettalk6 - www.ntalk.de)
[6:43] * manohar (~manohar@14.99.12.146) has joined #ceph
[6:44] * haomaiwa_ (~haomaiwan@203.69.59.199) has joined #ceph
[6:44] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Read error: Connection reset by peer)
[6:51] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[6:51] * haomaiwa_ (~haomaiwan@203.69.59.199) Quit (Read error: Connection reset by peer)
[6:59] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Ping timeout: 480 seconds)
[7:09] * i_m1 (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[7:12] * hasues (~hazuez@108.236.232.243) Quit (Quit: Leaving.)
[7:14] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[7:25] * michalefty (~micha@ip250461f1.dynamic.kabel-deutschland.de) has joined #ceph
[7:35] * manohar (~manohar@14.99.12.146) Quit (Quit: manohar)
[7:43] * lupu (~lupu@176.223.2.250) has joined #ceph
[7:46] * markl (~mark@knm.org) Quit (Ping timeout: 480 seconds)
[7:51] * pfsense_rookie (~socram@bl19-247-139.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[8:28] * cok (~chk@46.30.211.29) has joined #ceph
[8:30] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[8:54] * Concubidated (~Adium@66.87.65.66) Quit (Quit: Leaving.)
[8:55] * i_m1 (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[9:22] * jrcresawn (~jrcresawn@ip24-251-38-21.ph.ph.cox.net) has joined #ceph
[9:22] * jrcresawn (~jrcresawn@ip24-251-38-21.ph.ph.cox.net) Quit (Remote host closed the connection)
[9:49] * michalefty (~micha@ip250461f1.dynamic.kabel-deutschland.de) has left #ceph
[9:52] * cok (~chk@46.30.211.29) has left #ceph
[9:53] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[9:55] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[9:58] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[9:59] * lupu (~lupu@176.223.2.250) Quit (Quit: Leaving.)
[9:59] * lupu (~lupu@176.223.2.250) has joined #ceph
[10:05] * lupu1 (~lupu@176.223.2.250) has joined #ceph
[10:05] * lupu (~lupu@176.223.2.250) Quit (Read error: Connection reset by peer)
[10:09] * rendar (~I@87.19.183.131) has joined #ceph
[10:10] * lupu1 (~lupu@176.223.2.250) Quit (Read error: No route to host)
[10:10] * lupu (~lupu@176.223.2.250) has joined #ceph
[10:11] * BManojlovic (~steki@95.180.4.243) has joined #ceph
[10:20] <tremon> ok, now having upgraded the osd's from 0.48 to 0.67, they stay down. The ceph-osd processes start up fine, and give "updating collection" messages, but they never go up
[10:21] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Ping timeout: 480 seconds)
[10:41] <tremon> should I wait out that process? It has been running for multiple hours already, and doesn't seem to get past "186/216 processed"
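A way to tell whether the conversion is still progressing rather than hung, assuming default Debian log paths and osd id 0:

    # watch the osd log for fresh "updating collection" progress lines
    tail -f /var/log/ceph/ceph-osd.0.log
    # watch overall cluster state from a monitor node
    ceph -w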
[10:42] * cok (~chk@46.30.211.29) has joined #ceph
[10:52] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Leaving)
[10:58] * oro (~oro@84-75-253-80.dclient.hispeed.ch) has joined #ceph
[10:58] * oro_ (~oro@84-75-253-80.dclient.hispeed.ch) has joined #ceph
[11:11] * Zoup (~Zoup@37.32.28.28) has joined #ceph
[11:11] <Zoup> i have a question, is ceph basically a raid 1 over network?
[11:11] <Zoup> with 4 servers each having 10tb (raid0), i'll have total of 10tb of storage, right?
[11:11] * cok (~chk@46.30.211.29) has left #ceph
[11:12] <Zoup> can i change replication from 4 to 2, and gain 20tb of total storage?
[11:12] <tremon> you'd gain 10, and have 20tb of storage
[11:13] <Zoup> in the second case? when i change replication to 2?
[11:13] <Vacum_> Zoup: with ceph, each pool you create on the cluster can have a different number of replicas (or with erasure code a different number of shards / checksums)
[11:14] <Zoup> Vacum_: so, total storage is not limited by the smallest storage, much unlike traditional raid, right?
[11:14] <Vacum_> Zoup: default replication size is 3. so with 40 TB gross available storage you will get 40TB/3 = 13TB net storage
[11:14] <kraken> http://i.imgur.com/XEEI0Rn.gif
[11:14] <Vacum_> Zoup: correct. you can put osds with different sizes into the cluster. their "weight" then influences how many placement groups are put on each osd
[11:15] <Zoup> Vacum_: very interesting, and thank you for 'the future of storage' :)
[11:15] <Vacum_> Zoup: I discourage using a raid0 on the osd hosts, btw
[11:15] <Vacum_> Zoup: just use each drive directly as one OSD
[11:28] * LeaChim (~LeaChim@host86-159-115-162.range86-159.btcentralplus.com) has joined #ceph
[11:35] <Zoup> Vacum_: thanks for the advice
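For reference, the per-pool replica count Vacum_ describes is adjusted like this (the pool name is hypothetical):

    # show the current number of replicas for a pool
    ceph osd pool get mypool size
    # change it, e.g. from 3 to 2
    ceph osd pool set mypool size 2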
[11:45] * vbellur (~vijay@111.93.151.94) has joined #ceph
[11:50] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[12:05] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[12:08] * Zoup_ (~Zoup@5.237.202.24) has joined #ceph
[12:09] * Zoup_ (~Zoup@5.237.202.24) Quit ()
[12:11] * Zoup (~Zoup@37.32.28.28) Quit (Ping timeout: 480 seconds)
[12:23] <s3an2> Hi,
[12:23] * vbellur (~vijay@111.93.151.94) Quit (Quit: Leaving.)
[12:24] <s3an2> I am looking at our cluster this morning and I see 85 pgs stuck unclean, I can't really see why these are not recovering - any tips where to look?
[12:49] * Venturi (~Venturi@89-212-99-37.dynamic.t-2.net) has joined #ceph
[12:53] * Vacum_ (~vovo@i59F79B65.versanet.de) Quit (Quit: leaving)
[13:05] <tremon> wut... ceph --admin-daemon config show on the just-upgraded (and still out) osd gives osd_uuid and fsid both as "00000000-0000-0000-0000-000000000000", is that normal?
[13:06] <steveeJ> s3an2: what's the state of the PGs? unclean includes several distinct conditions
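A few standard commands for narrowing down stuck pgs like s3an2's (the pg id is hypothetical):

    # list stuck pgs with their exact states
    ceph pg dump_stuck unclean
    # health detail names each problem pg
    ceph health detail
    # query one pg for its up/acting sets and recovery state
    ceph pg 2.1f query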
[13:06] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[13:37] * jordanP (~jordan@78.193.36.209) has joined #ceph
[13:38] * jordanP (~jordan@78.193.36.209) Quit ()
[14:12] * sleinen (~Adium@2001:620:1000:3:7ed1:c3ff:fedc:3223) has joined #ceph
[14:24] * vbellur (~vijay@122.166.169.103) has joined #ceph
[14:34] * JC (~JC@AMontpellier-651-1-385-190.w81-251.abo.wanadoo.fr) has joined #ceph
[14:37] * JC (~JC@AMontpellier-651-1-385-190.w81-251.abo.wanadoo.fr) Quit ()
[14:42] * aschuring (~aschuring@d594e6a3.dsl.concepts.nl) has joined #ceph
[14:42] * tremon (~aschuring@d594e6a3.dsl.concepts.nl) Quit (Read error: Connection reset by peer)
[14:43] * aschuring is now known as tremon
[14:43] * fdmanana (~fdmanana@bl13-150-5.dsl.telepac.pt) has joined #ceph
[14:59] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) Quit (Quit: Leaving)
[15:03] * BManojlovic (~steki@95.180.4.243) Quit (Remote host closed the connection)
[15:15] * vbellur (~vijay@122.166.169.103) Quit (Ping timeout: 480 seconds)
[15:16] <dignus> h
[15:19] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[15:26] * vbellur (~vijay@122.178.234.250) has joined #ceph
[15:34] * vbellur (~vijay@122.178.234.250) Quit (Ping timeout: 480 seconds)
[15:46] * dneary (~dneary@96.237.180.105) has joined #ceph
[15:52] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[15:53] * diegows (~diegows@190.190.5.238) has joined #ceph
[15:54] * dneary (~dneary@96.237.180.105) Quit (Ping timeout: 480 seconds)
[15:56] * jerker (jerker@Psilocybe.Update.UU.SE) Quit (Quit: leaving)
[16:02] * ikrstic (~ikrstic@109-93-112-236.dynamic.isp.telekom.rs) has joined #ceph
[16:09] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[16:21] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) has joined #ceph
[16:43] * analbeard (~shw@support.memset.com) has joined #ceph
[16:44] * oro (~oro@84-75-253-80.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[16:44] * oro_ (~oro@84-75-253-80.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[17:02] * dmsimard_away is now known as dmsimard
[17:09] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[17:09] * mtl2 (~Adium@c-98-245-49-17.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[17:28] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:30] <Midnightmyth> is there any checksumming built into ceph yet?
[17:42] * oro (~oro@84-75-253-80.dclient.hispeed.ch) has joined #ceph
[17:42] * oro_ (~oro@84-75-253-80.dclient.hispeed.ch) has joined #ceph
[17:48] * dmsimard is now known as dmsimard_away
[18:06] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[18:06] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit ()
[18:09] <jiffe> alright so I had the mds server back up again, this time I did catch logs for it
[18:15] * sironside (~sironside@2a00:a600:603:0:8e70:5aff:fec9:ee14) has joined #ceph
[18:20] * pentabular (~Adium@2601:9:4980:17d5:81fe:b22a:9c7d:46df) has joined #ceph
[18:21] * pentabular (~Adium@2601:9:4980:17d5:81fe:b22a:9c7d:46df) Quit ()
[18:21] * tom2 is now known as tom42
[18:24] * markbby (~Adium@199.21.242.141) has joined #ceph
[18:27] * markbby1 (~Adium@168.94.245.2) has joined #ceph
[18:29] * dmsimard_away is now known as dmsimard
[18:33] * markbby (~Adium@199.21.242.141) Quit (Ping timeout: 480 seconds)
[18:42] * BManojlovic (~steki@95.180.4.243) has joined #ceph
[18:45] * markbby1 (~Adium@168.94.245.2) Quit (Quit: Leaving.)
[18:50] * Nacer_ (~Nacer@2001:41d0:fe82:7200:c032:7627:a535:319e) has joined #ceph
[18:50] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) Quit (Read error: Connection reset by peer)
[18:53] * dmsimard is now known as dmsimard_away
[19:00] * dmsimard_away is now known as dmsimard
[19:24] * pfsense_rookie (~socram@2.80.247.139) has joined #ceph
[19:28] * dmsimard is now known as dmsimard_away
[19:50] * BManojlovic (~steki@95.180.4.243) Quit (Ping timeout: 480 seconds)
[19:55] * mtl1 (~Adium@c-67-174-109-212.hsd1.co.comcast.net) has joined #ceph
[20:05] * dmsimard_away is now known as dmsimard
[20:10] * dmsimard is now known as dmsimard_away
[20:12] * dmsimard_away is now known as dmsimard
[20:13] * serencus (~fbloggs@host86-135-39-203.range86-135.btcentralplus.com) has joined #ceph
[20:26] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[20:27] * lupu (~lupu@176.223.2.250) Quit (Quit: Leaving.)
[20:39] * dmsimard is now known as dmsimard_away
[20:43] * madkiss (~madkiss@2001:6f8:12c3:f00f:9071:30a3:96f1:dbda) has joined #ceph
[20:48] * dmsimard_away is now known as dmsimard
[20:49] * dmsimard is now known as dmsimard_away
[20:49] * madkiss1 (~madkiss@2001:6f8:12c3:f00f:205c:d7e:3b17:806) Quit (Ping timeout: 480 seconds)
[21:12] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[21:21] * BManojlovic (~steki@95.180.4.243) has joined #ceph
[21:28] * vovo (~vovo@i59F79B65.versanet.de) has joined #ceph
[21:29] * dmsimard_away is now known as dmsimard
[21:42] * steki (~steki@95.180.4.243) has joined #ceph
[21:42] * BManojlovic (~steki@95.180.4.243) Quit (Remote host closed the connection)
[21:44] * rendar (~I@87.19.183.131) Quit (Ping timeout: 480 seconds)
[21:46] * rendar (~I@87.19.183.131) has joined #ceph
[21:51] * angdraug (~angdraug@131.252.204.134) has joined #ceph
[21:55] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[22:06] * madkiss (~madkiss@2001:6f8:12c3:f00f:9071:30a3:96f1:dbda) Quit (Quit: Leaving.)
[22:10] * dmsimard is now known as dmsimard_away
[22:14] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[22:18] <tremon> hi all, back again. I'm still trying to upgrade my cluster from debian stable to debian testing, i.e. from 0.48 to 0.80. Using the ceph.com packages I have been able to upgrade the monitors stepwise to 0.67. That still worked with the 0.48 osd's; however, the first osd that I upgraded to 0.67 is no longer coming up
[22:20] <tremon> the osd took its time updating the collection, but that seems to have finished now, and the osd is no longer starting at all
[22:20] <tremon> there are no error messages; the osd just exits a few seconds after starting
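One common way to see why an osd exits silently is to run it in the foreground with verbose logging; a sketch, assuming osd id 0:

    # -d keeps ceph-osd in the foreground and sends log output to stderr
    ceph-osd -i 0 -d --debug-osd 20 --debug-filestore 20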
[22:34] <Venturi>
[22:35] <aarontc> any idea what to do about PG stuck "active+remapped"?
[22:37] <aarontc> tried restarting the affected OSDs, which has cleared that in the past, but no luck
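active+remapped usually means CRUSH cannot place all replicas where it wants them; comparing a pg's up and acting sets against the osd layout can narrow it down (pg id hypothetical):

    # confirm all osds are up and carry the expected weights
    ceph osd tree
    # show the up set and acting set for one pg
    ceph pg map 2.1f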
[22:47] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[22:49] * joao (~joao@a79-168-5-220.cpe.netcabo.pt) Quit (Quit: Leaving)
[22:54] * todayman (~quassel@magellan.acm.jhu.edu) Quit (Remote host closed the connection)
[22:56] * sleinen (~Adium@2001:620:1000:3:7ed1:c3ff:fedc:3223) Quit (Quit: Leaving.)
[22:57] * Nacer_ (~Nacer@2001:41d0:fe82:7200:c032:7627:a535:319e) Quit (Remote host closed the connection)
[22:59] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[23:01] * oro_ (~oro@84-75-253-80.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[23:02] * oro (~oro@84-75-253-80.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[23:05] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[23:15] <Venturi> From the perspective of Ceph consistency, how does reading work in case of an OSD failure? Let's say that I have 3 copies of data on three different OSDs and some app wants to read that data. Does Ceph always check whether all three copies are the same when an object is read? What happens if not all three objects are accessible?
[23:18] <lurbs> Reads are (currently) always from the primary.
[23:19] <lurbs> A read doesn't check for consistency between the copies, there's a periodic check for that.
[23:20] <lurbs> ( http://ceph.com/docs/master/rados/configuration/osd-config-ref/#scrubbing )
[23:22] <lurbs> As to exactly what's passed to a client if there are no copies available, I'm not sure.
[23:27] <Venturi> what happens in case the primary osd is not accessible?
[23:33] * dis (~dis@109.110.66.158) Quit (Ping timeout: 480 seconds)
[23:35] <lurbs> If the primary's down then another will be promoted.
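The periodic consistency check lurbs mentions is scrubbing, which can also be triggered by hand (pg id hypothetical):

    # light scrub: compares object sizes/metadata between replicas
    ceph pg scrub 2.1f
    # deep scrub: also reads and compares the object data
    ceph pg deep-scrub 2.1f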
[23:36] * michaelg (~michaelg@208.87.60.83) has joined #ceph
[23:39] * dmsimard_away is now known as dmsimard
[23:42] * michaelg (~michaelg@208.87.60.83) has left #ceph
[23:44] * dmsimard is now known as dmsimard_away
[23:48] * kfei (~root@36-238-177-40.dynamic-ip.hinet.net) Quit (Ping timeout: 480 seconds)
[23:50] <Venturi> thx lurbs
[23:51] * ikrstic (~ikrstic@109-93-112-236.dynamic.isp.telekom.rs) Quit (Quit: Konversation terminated!)
[23:51] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) Quit (Remote host closed the connection)
[23:59] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.