#ceph IRC Log

IRC Log for 2013-10-16

Timestamps are in GMT/BST.

[0:00] * AfC (~andrew@101.119.15.205) has joined #ceph
[0:04] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[0:05] * rturk-away is now known as rturk
[0:07] * tsnider (~tsnider@nat-216-240-30-23.netapp.com) Quit (Quit: Leaving.)
[0:16] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[0:17] <dmsimard> xarses: ping
[0:18] * JoeGruher (~JoeGruher@134.134.137.71) Quit (Remote host closed the connection)
[0:18] <dmsimard> This might be of interest - there is a discussion and blueprint about ceph, puppet and openstack: https://groups.google.com/a/puppetlabs.com/forum/?fromgroups=#!topic/puppet-openstack/ibnrmXBAxVg https://wiki.openstack.org/wiki/Puppet-openstack/ceph-blueprint
[0:21] * AfC (~andrew@101.119.15.205) Quit (Quit: Leaving.)
[0:23] * onizo (~onizo@wsip-70-166-5-159.sd.sd.cox.net) has joined #ceph
[0:25] <angdraug> dmsimard: thanks, me and xarses are following this
[0:26] <dmsimard> :)
[0:33] <saaby> guys, can any of you explain to me, when a pg changes state from "recovering" to "backfilling"?
[0:35] <saaby> it looks as if "recovering" will recover, or copy, a certain amount of objects, but if there is more than a certain amount it will go into "backfilling"?
[0:36] * sprachgenerator (~sprachgen@130.202.135.137) Quit (Quit: sprachgenerator)
[0:36] <dmick> saaby: I think that's the difference, yes; if the OSD decides it just needs too much, it says "just give me the whole thing at once rather than piecemeal"
[0:37] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Read error: Connection reset by peer)
[0:37] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[0:38] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[0:38] <saaby> right, ok. - so backfill is actually the same as recovery - just probably recovering all objects rather than an incremental sync.
[0:39] <saaby> which makes having two different queues, and max concurrent values, for them a bit.. something to think about.
[0:39] <saaby> dmick: thanks.
[0:41] <dmick> can't find a good reference that says that, but I believe it to be true
[0:46] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) Quit (Quit: Leaving.)
[0:48] <gregaf> fyi, I believe the limit for recovery moving into backfill is generally just "whoops, we don't have overlapping logs"
[0:49] * BManojlovic (~steki@198.199.65.141) has joined #ceph
[0:52] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:53] <xarses> dmsimard: third person to bring it up :)
[0:53] <saaby> gregaf: right. makes sense.
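The two queues and max-concurrency values saaby refers to are ordinary OSD options; a minimal sketch of tuning them, with illustrative values (injected settings are not persisted across restarts):

    ceph tell osd.0 injectargs '--osd-max-backfills 2 --osd-recovery-max-active 4'
    # or persist under [osd] in ceph.conf:
    #   osd max backfills = 2
    #   osd recovery max active = 4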
[0:53] * Steki (~steki@fo-d-130.180.254.37.targo.rs) Quit (Ping timeout: 480 seconds)
[0:53] * sarob (~sarob@206.117.102.4) has joined #ceph
[0:54] * rturk is now known as rturk-away
[0:55] * dmsimard (~Adium@2607:f748:9:1666:9940:5a6e:7539:82e2) Quit (Ping timeout: 480 seconds)
[1:00] * BManojlovic (~steki@198.199.65.141) Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:00] * sarob (~sarob@206.117.102.4) Quit (Read error: Operation timed out)
[1:00] * mschiff (~mschiff@85.182.236.82) Quit (Remote host closed the connection)
[1:03] * onizo (~onizo@wsip-70-166-5-159.sd.sd.cox.net) Quit (Remote host closed the connection)
[1:04] * onizo (~onizo@wsip-70-166-5-159.sd.sd.cox.net) has joined #ceph
[1:06] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) Quit (Ping timeout: 480 seconds)
[1:06] * onizo (~onizo@wsip-70-166-5-159.sd.sd.cox.net) Quit (Read error: Connection reset by peer)
[1:07] * ScOut3R (~scout3r@dsl51B61603.pool.t-online.hu) Quit (Remote host closed the connection)
[1:14] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[1:15] * rturk-away is now known as rturk
[1:18] * Jakdaw (~chris@puma-mxisp.mxtelecom.com) Quit (Ping timeout: 480 seconds)
[1:22] * mattbenjamin (~matt@76-206-42-105.lightspeed.livnmi.sbcglobal.net) has joined #ceph
[1:23] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Quit: Ex-Chat)
[1:28] * danieagle (~Daniel@186.214.58.78) Quit (Quit: inte+ e Obrigado Por tudo mesmo! :-D)
[1:28] * Jakdaw (~chris@puma-aaisp.mxtelecom.com) has joined #ceph
[1:39] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) Quit (Ping timeout: 480 seconds)
[1:46] * alfredodeza (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) Quit (Remote host closed the connection)
[1:56] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[1:58] * LeaChim (~LeaChim@host86-174-76-26.range86-174.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:01] * nwat (~nwat@c-24-5-146-110.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[2:03] * mattbenjamin (~matt@76-206-42-105.lightspeed.livnmi.sbcglobal.net) Quit (Read error: Connection reset by peer)
[2:05] * dmsimard (~Adium@69-165-206-93.cable.teksavvy.com) has joined #ceph
[2:05] * dmsimard (~Adium@69-165-206-93.cable.teksavvy.com) Quit ()
[2:12] * Guest2459 (~a@209.12.169.218) Quit (Quit: This computer has gone to sleep)
[2:14] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[2:14] * mattbenjamin (~matt@76-206-42-105.lightspeed.livnmi.sbcglobal.net) has joined #ceph
[2:16] * freedomhui (~freedomhu@117.79.232.251) has joined #ceph
[2:17] * sagelap (~sage@2600:1012:b015:4b57:1d48:79b9:e61d:8732) has joined #ceph
[2:22] * nhm (~nhm@184-97-129-163.mpls.qwest.net) has joined #ceph
[2:22] * ChanServ sets mode +o nhm
[2:24] * freedomhui (~freedomhu@117.79.232.251) Quit (Ping timeout: 480 seconds)
[2:26] * sagelap (~sage@2600:1012:b015:4b57:1d48:79b9:e61d:8732) Quit (Ping timeout: 480 seconds)
[2:27] * sagelap (~sage@2600:1012:b021:603d:1d48:79b9:e61d:8732) has joined #ceph
[2:30] * sagelap (~sage@2600:1012:b021:603d:1d48:79b9:e61d:8732) Quit (Read error: Connection reset by peer)
[2:34] * MoRoSKiT (~jean@115.7.139.88.rev.sfr.net) has joined #ceph
[2:42] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit (Quit: mtanski)
[2:43] * yanzheng (~zhyan@134.134.137.73) has joined #ceph
[2:46] * sagelap (~sage@2600:1012:b021:603d:8d93:3a94:94d9:906c) has joined #ceph
[2:46] * yy-nm (~Thunderbi@122.224.154.38) has joined #ceph
[2:47] * xmltok (~xmltok@216.103.134.250) Quit (Read error: Operation timed out)
[2:50] * MoRoSKiT (~jean@115.7.139.88.rev.sfr.net) Quit (Quit: Quitte)
[2:51] * freedomhui (~freedomhu@117.79.232.219) has joined #ceph
[2:54] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[3:01] * The_Bishop (~bishop@2a02:2450:102f:4:1c9a:7388:96b9:613e) Quit (Ping timeout: 480 seconds)
[3:01] * rturk is now known as rturk-away
[3:02] * sagelap (~sage@2600:1012:b021:603d:8d93:3a94:94d9:906c) Quit (Read error: Connection reset by peer)
[3:04] * The_Bishop (~bishop@2001:470:50b6:0:1c9a:7388:96b9:613e) has joined #ceph
[3:08] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[3:08] * yehudasa (~yehudasa@2607:f298:a:607:ea03:9aff:fe98:e8ff) Quit (Ping timeout: 480 seconds)
[3:20] * angdraug (~angdraug@64-79-127-122.static.wiline.com) Quit (Quit: Leaving)
[3:21] * diegows (~diegows@190.190.11.42) Quit (Ping timeout: 480 seconds)
[3:28] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[3:28] * KindTwo (KindOne@h24.32.28.71.dynamic.ip.windstream.net) has joined #ceph
[3:28] * KindTwo is now known as KindOne
[3:29] * houkouonchi-work (~linux@12.248.40.138) Quit (Quit: Client exiting)
[3:30] * glzhao (~glzhao@118.195.65.67) has joined #ceph
[3:33] * freedomhui (~freedomhu@117.79.232.219) Quit (Quit: Leaving...)
[3:36] * The_Bishop (~bishop@2001:470:50b6:0:1c9a:7388:96b9:613e) Quit (Ping timeout: 480 seconds)
[3:44] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[3:45] * The_Bishop (~bishop@2001:470:50b6:0:6915:da3e:ce9d:d5df) has joined #ceph
[3:48] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[3:50] * onizo (~onizo@cpe-24-94-21-246.san.res.rr.com) has joined #ceph
[3:50] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit ()
[3:51] * onizo (~onizo@cpe-24-94-21-246.san.res.rr.com) Quit (Remote host closed the connection)
[3:53] * cofol1986 (~xwrj@110.90.119.113) Quit (Read error: Connection reset by peer)
[3:57] * aliguori (~anthony@74.202.210.82) Quit (Remote host closed the connection)
[3:57] * haomaiwa_ (~haomaiwan@117.79.232.243) Quit (Read error: Connection reset by peer)
[3:58] * haomaiwang (~haomaiwan@117.79.232.204) has joined #ceph
[4:00] * xarses (~andreww@64-79-127-122.static.wiline.com) Quit (Ping timeout: 480 seconds)
[4:06] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[4:11] * haomaiwa_ (~haomaiwan@211.155.113.208) has joined #ceph
[4:14] * xarses (~andreww@c-71-202-167-197.hsd1.ca.comcast.net) has joined #ceph
[4:15] <skullone> has anyone tried loading ceph with billions of small objects (like 4KB in size)?
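A sketch of the kind of small-object load test skullone is asking about, using rados bench (the pool name testpool is hypothetical, and flag availability varies by release):

    rados -p testpool bench 60 write -b 4096 -t 16 --no-cleanup   # 60 s of 4 KB writes, 16 in flight
    rados -p testpool bench 60 seq                                # sequential reads of what was just written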
[4:15] * ircolle (~Adium@2601:1:8380:2d9:7449:13ba:da7d:15c4) Quit (Quit: Leaving.)
[4:18] * haomaiwang (~haomaiwan@117.79.232.204) Quit (Ping timeout: 480 seconds)
[4:21] * freedomhui (~freedomhu@117.79.232.251) has joined #ceph
[4:22] * Andes (~oftc-webi@183.62.249.162) has joined #ceph
[4:22] <Andes> helo~??
[4:22] * diegows (~diegows@190.190.11.42) has joined #ceph
[4:22] <Andes> has someone tried amanda backup with radosgw??
[4:26] <yy-nm> hello, i have a problem with the ceph mon pid file and socket file: they are missing. how can i fix it?
[4:27] * a (~a@pool-173-55-143-200.lsanca.fios.verizon.net) has joined #ceph
[4:27] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Remote host closed the connection)
[4:28] * yguang11 (~yguang11@corp-nat.peking.corp.yahoo.com) has joined #ceph
[4:28] * a is now known as Guest2515
[4:31] * xarses (~andreww@c-71-202-167-197.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[4:31] * diegows (~diegows@190.190.11.42) Quit (Ping timeout: 480 seconds)
[4:31] * xarses (~andreww@c-71-202-167-197.hsd1.ca.comcast.net) has joined #ceph
[4:36] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) has joined #ceph
[4:43] * freedomhui (~freedomhu@117.79.232.251) Quit (Quit: Leaving...)
[4:44] * xarses (~andreww@c-71-202-167-197.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:50] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[4:55] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[4:56] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:57] * Andes (~oftc-webi@183.62.249.162) Quit (Remote host closed the connection)
[5:01] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[5:03] * freedomhui (~freedomhu@117.79.232.251) has joined #ceph
[5:05] * fireD (~fireD@93-139-179-118.adsl.net.t-com.hr) has joined #ceph
[5:07] * fireD_ (~fireD@93-142-197-139.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[5:11] * sarob (~sarob@mobile-198-228-210-242.mycingular.net) has joined #ceph
[5:19] * sarob (~sarob@mobile-198-228-210-242.mycingular.net) Quit (Ping timeout: 480 seconds)
[5:24] * xarses (~andreww@c-71-202-167-197.hsd1.ca.comcast.net) has joined #ceph
[5:33] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[5:59] * yy-nm (~Thunderbi@122.224.154.38) Quit (Quit: yy-nm)
[6:04] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Bye!)
[6:04] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[6:10] * gregaf1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) has joined #ceph
[6:10] * houkouonchi-home (~linux@houkouonchi-1-pt.tunnel.tserv15.lax1.ipv6.he.net) Quit (Ping timeout: 480 seconds)
[6:15] * houkouonchi-home (~linux@2001:470:c:c69::2) has joined #ceph
[6:29] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[6:30] * gregaf1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) Quit (Quit: Leaving.)
[6:30] * gregaf1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) has joined #ceph
[6:35] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[6:36] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[6:36] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[6:41] * glzhao (~glzhao@118.195.65.67) Quit (Ping timeout: 480 seconds)
[6:41] * portante (~portante@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[6:42] * portante (~portante@nat-pool-bos-t.redhat.com) has joined #ceph
[6:45] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Read error: Operation timed out)
[7:07] * mattbenjamin (~matt@76-206-42-105.lightspeed.livnmi.sbcglobal.net) Quit (Quit: Leaving.)
[7:16] * gregaf1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) Quit (Quit: Leaving.)
[7:41] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[7:48] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) Quit (Ping timeout: 480 seconds)
[7:49] * rendar (~s@host249-181-dynamic.20-87-r.retail.telecomitalia.it) has joined #ceph
[7:50] * j1mbo (~jimbo@host213-123-216-210.in-addr.btopenworld.com) has joined #ceph
[7:53] * themgt_ (~themgt@201-223-204-131.baf.movistar.cl) has joined #ceph
[7:56] * themgt (~themgt@201-223-195-217.baf.movistar.cl) Quit (Ping timeout: 480 seconds)
[7:56] * themgt_ is now known as themgt
[8:00] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[8:03] * shig (~davidb@faith.oztechninja.com) has joined #ceph
[8:13] * wenjianhn (~wenjianhn@114.245.46.123) has joined #ceph
[8:17] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[8:20] * j1mbo (~jimbo@host213-123-216-210.in-addr.btopenworld.com) Quit (Ping timeout: 480 seconds)
[8:24] * themgt (~themgt@201-223-204-131.baf.movistar.cl) Quit (Quit: themgt)
[8:24] * glzhao (~glzhao@118.195.65.67) has joined #ceph
[8:26] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[8:26] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[8:30] * shang (~ShangWu@175.41.48.77) Quit (Remote host closed the connection)
[8:34] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:38] * onizo (~onizo@cpe-24-94-21-246.san.res.rr.com) has joined #ceph
[8:40] * Vjarjadian (~IceChat77@94.1.37.151) Quit (Quit: Oops. My brain just hit a bad sector)
[8:42] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[8:54] * phil (~quassel@chello062178179058.16.14.vie.surfer.at) has joined #ceph
[8:54] * phil is now known as Guest2528
[8:56] * foosinn (~stefan@office.unitedcolo.de) has joined #ceph
[9:10] * mschiff (~mschiff@p4FD7C107.dip0.t-ipconnect.de) has joined #ceph
[9:11] * mattt_ (~textual@94.236.7.190) has joined #ceph
[9:13] * Cube1 (~Cube@66-87-65-227.pools.spcsdns.net) has joined #ceph
[9:13] * Cube (~Cube@66-87-65-201.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[9:15] * Cube (~Cube@66-87-67-16.pools.spcsdns.net) has joined #ceph
[9:15] * Cube1 (~Cube@66-87-65-227.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[9:25] * mschiff_ (~mschiff@tmo-107-82.customers.d1-online.com) has joined #ceph
[9:25] * yy-nm (~Thunderbi@122.224.154.38) has joined #ceph
[9:28] * Guest2528 (~quassel@chello062178179058.16.14.vie.surfer.at) Quit (Remote host closed the connection)
[9:32] * mschiff (~mschiff@p4FD7C107.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[9:33] * odyssey4me (~odyssey4m@165.233.71.2) has joined #ceph
[9:36] * sarob (~sarob@2601:9:7080:13a:81a5:9ae0:e7f:4a2c) has joined #ceph
[9:43] * ScOut3R (~ScOut3R@catv-89-133-21-203.catv.broadband.hu) has joined #ceph
[9:46] * sarob (~sarob@2601:9:7080:13a:81a5:9ae0:e7f:4a2c) Quit (Ping timeout: 480 seconds)
[9:50] * The_Bishop (~bishop@2001:470:50b6:0:6915:da3e:ce9d:d5df) Quit (Ping timeout: 480 seconds)
[9:52] * shang (~ShangWu@175.41.48.77) has joined #ceph
[9:54] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) Quit (Ping timeout: 480 seconds)
[9:57] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[9:57] * ChanServ sets mode +v andreask
[10:00] * The_Bishop (~bishop@2001:470:50b6:0:1c9a:7388:96b9:613e) has joined #ceph
[10:02] * thomnico (~thomnico@2a01:e35:8b41:120:39c6:84fb:f7bf:bd77) has joined #ceph
[10:05] * Qu310 (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) Quit (Read error: Connection reset by peer)
[10:12] * topro (~topro@host-62-245-142-50.customer.m-online.net) Quit (Quit: Konversation terminated!)
[10:13] * Qu310 (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[10:13] * yy-nm (~Thunderbi@122.224.154.38) Quit (Quit: yy-nm)
[10:18] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[10:23] * freedomhui (~freedomhu@117.79.232.251) Quit (Quit: Leaving...)
[10:33] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[10:35] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[10:42] * LeaChim (~LeaChim@host86-174-76-26.range86-174.btcentralplus.com) has joined #ceph
[10:44] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[10:56] * yanzheng (~zhyan@134.134.137.73) Quit (Quit: Leaving)
[11:09] * hijacker (~hijacker@213.91.163.5) Quit (Remote host closed the connection)
[11:15] * capri (~capri@212.218.127.222) Quit (Ping timeout: 480 seconds)
[11:17] * hijacker (~hijacker@bgva.sonic.taxback.ess.ie) has joined #ceph
[11:20] * allsystemsarego (~allsystem@5-12-37-46.residential.rdsnet.ro) has joined #ceph
[11:22] * onizo (~onizo@cpe-24-94-21-246.san.res.rr.com) Quit (Remote host closed the connection)
[11:24] * freedomhui (~freedomhu@117.79.232.251) has joined #ceph
[11:35] * mattt__ (~textual@92.52.76.140) has joined #ceph
[11:37] * mattt_ (~textual@94.236.7.190) Quit (Read error: Connection reset by peer)
[11:43] * thomnico (~thomnico@2a01:e35:8b41:120:39c6:84fb:f7bf:bd77) Quit (Ping timeout: 480 seconds)
[11:43] * fabioFVZ (~fabiofvz@213.187.20.119) has joined #ceph
[11:43] * fabioFVZ (~fabiofvz@213.187.20.119) Quit (Remote host closed the connection)
[11:43] * fabioFVZ (~fabiofvz@213.187.20.119) has joined #ceph
[11:51] * thomnico (~thomnico@2a01:e35:8b41:120:39c6:84fb:f7bf:bd77) has joined #ceph
[11:59] * topro (~topro@host-62-245-142-50.customer.m-online.net) has joined #ceph
[12:01] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[12:02] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[12:08] * guerby (~guerby@ip165-ipv6.tetaneutral.net) Quit (Read error: Connection reset by peer)
[12:10] * guerby (~guerby@ip165-ipv6.tetaneutral.net) has joined #ceph
[12:18] * guerby (~guerby@ip165-ipv6.tetaneutral.net) Quit (Ping timeout: 480 seconds)
[12:23] * Meths_ (~meths@2.25.214.231) has joined #ceph
[12:28] * Meths (~meths@2.27.72.113) Quit (Ping timeout: 480 seconds)
[12:30] * thomnico (~thomnico@2a01:e35:8b41:120:39c6:84fb:f7bf:bd77) Quit (Ping timeout: 480 seconds)
[12:30] * BillK (~BillK-OFT@58-7-67-236.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[12:32] * mozg (~andrei@host217-46-236-49.in-addr.btopenworld.com) has joined #ceph
[12:32] <mozg> hello guys
[12:32] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) has joined #ceph
[12:32] <mozg> does anyone know a good howto for integrating ceph and xenserver?
[12:33] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[12:33] * ChanServ sets mode +v andreask
[12:36] <mozg> has anyone here experimented with ceph and xenserver?
[12:37] <HauM1> https://www.youtube.com/watch?v=qgsQE71Hhgg
[12:37] <fabioFVZ> Hello, someone know why when i set a read permission for all on a bucket i can read only the bucket... when i try to access directories or files i receive the "access denied" error...
[12:37] <HauM1> this is a video about ceph and xenserver
[12:38] <HauM1> mozg: based on centos
[12:39] <mozg> HauM1, cheers man!
[12:39] * BillK (~BillK-OFT@58-7-67-236.dyn.iinet.net.au) has joined #ceph
[12:39] <mozg> I will give it a go
[12:39] <mozg> any idea if it is close to being production ready?
[12:40] <HauM1> mozg: in terms of being part of centos-repos: NO
[12:40] * Cube (~Cube@66-87-67-16.pools.spcsdns.net) Quit (Quit: Leaving.)
[12:41] <HauM1> mozg: in terms of stable enough to use it in production: maybe
[12:41] <mozg> thanks
[12:42] <mozg> have you tried it yourself?
[12:42] <HauM1> its a fast moving target
[12:42] <mozg> yeah, i've read they were planning to have it production ready for the next xenserver release
[12:42] * guerby (~guerby@ip165-ipv6.tetaneutral.net) has joined #ceph
[12:42] <mozg> not sure if it is still the case
[12:42] <kraken> ≖_≖
[12:43] <HauM1> i haven't tried it yet (no time)
[12:49] * yanzheng (~zhyan@134.134.137.75) has joined #ceph
[12:57] * michalefty (~oftc-webi@proxy1.t-online.net) has joined #ceph
[13:06] * michalefty_ (~oftc-webi@88.128.80.3) has joined #ceph
[13:07] * michalefty (~oftc-webi@proxy1.t-online.net) Quit (Remote host closed the connection)
[13:09] * yanzheng (~zhyan@134.134.137.75) Quit (Ping timeout: 480 seconds)
[13:10] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[13:12] * Shmouel1 (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) has joined #ceph
[13:13] * michalefty (~micha@88.128.80.3) has joined #ceph
[13:15] * michalefty_ (~oftc-webi@88.128.80.3) Quit (Quit: Page closed)
[13:17] * Shmouel (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[13:21] * thomnico (~thomnico@2a01:e35:8b41:120:39c6:84fb:f7bf:bd77) has joined #ceph
[13:24] * thomnico (~thomnico@2a01:e35:8b41:120:39c6:84fb:f7bf:bd77) Quit ()
[13:31] * mschiff (~mschiff@p4FD7C107.dip0.t-ipconnect.de) has joined #ceph
[13:37] * Cube (~Cube@12.248.40.138) has joined #ceph
[13:38] * mschiff_ (~mschiff@tmo-107-82.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[13:46] * mschiff (~mschiff@p4FD7C107.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[13:55] * aarontc (~aaron@static-50-126-79-226.hlbo.or.frontiernet.net) Quit (Ping timeout: 480 seconds)
[13:57] * marrusl_ (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[13:58] * marrusl_ (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit ()
[13:58] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[13:59] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[14:05] * thomnico (~thomnico@2a01:e35:8b41:120:39c6:84fb:f7bf:bd77) has joined #ceph
[14:07] * alfredodeza (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[14:07] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[14:11] * mschiff (~mschiff@p4FD7C107.dip0.t-ipconnect.de) has joined #ceph
[14:20] * mschiff_ (~mschiff@p4FD7C107.dip0.t-ipconnect.de) has joined #ceph
[14:20] * mschiff (~mschiff@p4FD7C107.dip0.t-ipconnect.de) Quit (Read error: Connection reset by peer)
[14:20] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[14:21] * diegows (~diegows@190.190.11.42) has joined #ceph
[14:26] * j1mbo (~jimbo@31.221.83.224) has joined #ceph
[14:31] * mschiff (~mschiff@p4FD7C107.dip0.t-ipconnect.de) has joined #ceph
[14:31] * mschiff_ (~mschiff@p4FD7C107.dip0.t-ipconnect.de) Quit (Read error: Connection reset by peer)
[14:33] * tsnider (~tsnider@nat-216-240-30-23.netapp.com) has joined #ceph
[14:34] * mschiff_ (~mschiff@p4FD7C107.dip0.t-ipconnect.de) has joined #ceph
[14:36] * mschiff (~mschiff@p4FD7C107.dip0.t-ipconnect.de) Quit (Read error: Connection reset by peer)
[14:42] * mschiff_ (~mschiff@p4FD7C107.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[14:46] * j1mbo (~jimbo@31.221.83.224) Quit (Ping timeout: 480 seconds)
[14:46] * mschiff (~mschiff@p4FD7C107.dip0.t-ipconnect.de) has joined #ceph
[14:49] * ChoppingBrocoli (~quassel@rrcs-74-218-204-10.central.biz.rr.com) has joined #ceph
[14:51] * ChoppingBrocoli (~quassel@rrcs-74-218-204-10.central.biz.rr.com) has left #ceph
[14:52] <wenjianhn> is it possible to use rbd in a linux container?
[14:53] * Grasshopper (~quassel@rrcs-74-218-204-10.central.biz.rr.com) has joined #ceph
[14:53] <Grasshopper> Does increasing the journal size increase read performance or only write?
[14:58] * mschiff_ (~mschiff@tmo-107-82.customers.d1-online.com) has joined #ceph
[15:02] <andreask> Grasshopper: only writes ... assuming it is now too small and fills up regularly
[15:03] * mschiff (~mschiff@p4FD7C107.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[15:09] <andreask> wenjianhn: I don't think so ... read http://goo.gl/BXxRBb , they mention LXC and lack of network kernel namespaces support in rbd
[15:12] * gucki (~smuxi@p549F8B9E.dip0.t-ipconnect.de) has joined #ceph
[15:12] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) Quit (Quit: wogri_risc)
[15:15] * xoJIog (~xoJIog@195.13.218.197) Quit (Quit: Konversation terminated!)
[15:17] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[15:17] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) Quit (Remote host closed the connection)
[15:18] * BillK (~BillK-OFT@58-7-67-236.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[15:20] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[15:20] * liiwi (liiwi@idle.fi) Quit (Ping timeout: 480 seconds)
[15:23] * liiwi (liiwi@idle.fi) has joined #ceph
[15:29] * ajazdzewski (~quassel@lpz-66.sprd.net) has joined #ceph
[15:29] <ajazdzewski> hi
[15:31] <ajazdzewski> i have some questions about fixing an inconsistent pg, can someone give me some hints? thanks
[15:31] <wenjianhn> andreask, thanks. i will have a test
[15:37] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[15:37] * yanzheng (~zhyan@101.82.102.135) has joined #ceph
[15:41] <andreask> ajazdzewski: just ask
[15:43] * thomnico (~thomnico@2a01:e35:8b41:120:39c6:84fb:f7bf:bd77) Quit (Quit: Ex-Chat)
[15:48] * themgt (~themgt@201-223-204-131.baf.movistar.cl) has joined #ceph
[15:48] <ajazdzewski> andreask: i tried to fix the broken pg, i tried to stop the pg with the inconsistent object in it, but after the rebuild it is still inconsistent
[15:48] * thomnico (~thomnico@2a01:e35:8b41:120:39c6:84fb:f7bf:bd77) has joined #ceph
[15:48] <ajazdzewski> i tried a "ceph pg deep-scrub 2.5" but that did not fix it either
[15:49] <fabioFVZ> someone know something about radosgw permission?
[15:50] <andreask> ajazdzewski: can you pastebin "ceph pg 2.5 query"?
[15:52] * ksingh1 (~Adium@2001:708:10:10:85ad:6fb7:33ff:d3e5) has joined #ceph
[15:54] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:55] <ajazdzewski> andreask: http://pastebin.com/wnpuv08V
[15:57] * j1mbo (~jimbo@31.221.83.224) has joined #ceph
[15:57] <skm> Does anyone know: if you have for example osds 1 - 10 and you remove number 4 (from the crush map, etc), then add in another osd at a later date, will it get named osd.4 or osd.11?
[15:58] * anyinchen (~anyinchen@183.225.56.21) has joined #ceph
[15:58] <anyinchen> There is no Italian girl ??
[15:59] <andreask> ajazdzewski: you did a "ceph pg repair 2.5" ?
[16:04] <ajazdzewski> yes but i will try it one more time
[16:05] <anyinchen> hehe
[16:06] <peetaur> skm: I played with something like that and they would get the first available number... so eg. I had 3 nodes with 3 xfs osds, and converted them to 4 btrfs osds, and I ended up with 0,1,2,9 3,4,5,10 6,7,8,11 even though I removed them all and recreated
[16:06] <peetaur> skm: with ceph-deploy
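What peetaur observes matches how ids are handed out: ceph osd create returns the lowest free id, so a fully removed osd.4 is reused before an osd.11 is minted. A sketch of the remove/re-add cycle (ids are illustrative):

    ceph osd out 4                 # drain it first
    # stop the daemon, then unregister it everywhere:
    ceph osd crush remove osd.4
    ceph auth del osd.4
    ceph osd rm 4
    ceph osd create                # prints the lowest free id -- here, 4 again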
[16:08] <anyinchen> Has been imitated, never surpassed
[16:08] <ajazdzewski> something is going on in the cluster: pg 2.5 is active+clean+scrubbing+deep+inconsistent+repair, acting [9,4], so i will give an update in a minute
[16:09] * topro (~topro@host-62-245-142-50.customer.m-online.net) Quit (Quit: Konversation terminated!)
[16:10] * dmsimard (~Adium@2607:f748:9:1666:312a:4191:cfe2:1de3) has joined #ceph
[16:10] <andreask> ajazdzewski: any error messages in the logs?
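The triage sequence implied by this exchange, pulled together as a sketch (the log path is the default location, so adjust to your layout):

    ceph health detail                        # names the inconsistent pgs
    ceph pg deep-scrub 2.5                    # re-verify the pg
    grep ERR /var/log/ceph/ceph-osd.*.log     # scrub errors identify the bad object(s)
    ceph pg repair 2.5                        # then attempt the repair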
[16:10] <skm> ok thank you
[16:11] <mozg> hello guys
[16:11] <anyinchen> lol
[16:11] <mozg> i currently have 5 mons and I would like to remove one mon making the total 4
[16:11] <mozg> would this give me an issue in terms of quorum?
[16:12] <mozg> would my ceph cluster function with 4 mons (for a short term)?
[16:12] <andreask> mozg: it would work yes
[16:13] * skm (~smiley@205.153.36.170) Quit (Remote host closed the connection)
[16:13] <mozg> can i remove the mon from the cluster with ceph-deploy?
[16:13] <mozg> or is it something that i need to do manually?
[16:13] <alfredodeza> mozg: you can do a mon destroy with ceph-deploy
[16:14] <alfredodeza> but that depends on what you mean by remove a mon from a cluster
[16:14] <mozg> would this work if the mon server I am about to destroy is down?
[16:14] <alfredodeza> mon destroy does a bunch of stuff
[16:14] <alfredodeza> including removal of data iirc
[16:17] <mozg> okay
[16:17] <mozg> so, i guess if the mon server is already down it will fail somewhere
[16:17] <anyinchen>
[16:17] * freedomhui (~freedomhu@117.79.232.251) Quit (Quit: Linkinus - http://linkinus.com)
[16:17] <mozg> and I will need to manually remove it
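For reference, the two removal paths being discussed; the manual one works even when the mon host is dead (the mon name "e" is a placeholder):

    ceph-deploy mon destroy <host>    # the ceph-deploy path, needs the host reachable
    ceph mon remove e                 # drops the mon from the monmap directly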
[16:17] <anyinchen>
[16:18] <mozg> anyinchen, have you fallen asleep on your keyboard?
[16:18] <anyinchen>
[16:18] <fabioFVZ> :(
[16:19] <anyinchen> no
[16:19] * j1mbo (~jimbo@31.221.83.224) Quit (Ping timeout: 480 seconds)
[16:20] * Tagi (~Tagi@145.33.225.241) has joined #ceph
[16:20] <alfredodeza> anyinchen: please do not flood the channel
[16:20] <Tagi> Hi guys
[16:20] <anyinchen> hello
[16:21] * The_Bishop (~bishop@2001:470:50b6:0:1c9a:7388:96b9:613e) Quit (Ping timeout: 480 seconds)
[16:22] <Tagi> I've got a problem with setting up a client using ceph-common. The cluster of my projectteam (testing purposes) is not using authentication (cephx). I've downloaded the ceph.conf and ceph.client.admin.keyring from one of the nodes to my client-system. However, I can't issue commands through the command 'ceph'.
[16:22] * joao sets mode +b *!*anyinchen@183.225.56.*
[16:22] <Tagi> ceph -k /etc/ceph/ceph.client.admin.keyring -c /etc/ceph/ceph.conf -m 10.1.1.53 status
[16:23] <Tagi> is the command I'm issuing.
[16:23] * anyinchen was kicked from #ceph by joao
[16:23] <Tagi> 2013-10-16 14:17:05.770579 mon <- [status]
[16:23] <Tagi> 2013-10-16 14:17:05.803696 mon.2 -> 'unparseable JSON status' (-22)
[16:23] <Tagi> Is my result..
[16:23] * joao sets mode +o alfredodeza
[16:24] <Tagi> I've been trying to Google for a solution, but it seems that there are no similar issues
[16:24] <Tagi> Does someone have an idea where to look for?
[16:25] <joao> Tagi, are you using the same version between the client (ceph-common) and the monitors?
[16:25] <joao> also, try -s
[16:25] <joao> can't honestly recall if 'status' is a thing
[16:25] <joao> should be
[16:26] <Tagi> I assume so, I believe my teammate installed it using ceph-deploy and the package from ubuntu's apt-get
[16:26] <Tagi> Hmm, this is indeed more helpful.
[16:27] <Tagi> Although I get an enormous flood telling me that it's hunting for new mon
[16:27] <Tagi> Odd
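A quick way to chase joao's version question and isolate a misbehaving monitor, along the lines of what peetaur suggests later in this log (the IP is from Tagi's own command; test each mon in turn):

    ceph --version        # on the client, then on each mon host -- they should match
    ceph -s -m 10.1.1.53  # aim at one specific monitor; repeat per mon to find a bad one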
[16:33] * mozg (~andrei@host217-46-236-49.in-addr.btopenworld.com) Quit (Quit: Ex-Chat)
[16:33] * michalefty (~micha@88.128.80.3) Quit (Quit: Leaving.)
[16:35] * gsaxena (~gsaxena@pool-71-178-225-182.washdc.fios.verizon.net) has joined #ceph
[16:35] * gsaxena (~gsaxena@pool-71-178-225-182.washdc.fios.verizon.net) Quit (Remote host closed the connection)
[16:36] <Tagi> http://tracker.ceph.com/issues/3647 is the only thing I can find for now
[16:36] * yanzheng (~zhyan@101.82.102.135) Quit (Ping timeout: 480 seconds)
[16:40] * mattbenjamin (~matt@aa2.linuxbox.com) has joined #ceph
[16:41] * The_Bishop (~bishop@2a02:2450:102f:4:1c9a:7388:96b9:613e) has joined #ceph
[16:42] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[16:42] * diegows (~diegows@190.190.11.42) Quit (Ping timeout: 480 seconds)
[16:43] * Tagi1 (~Tagi@145.37.95.110) has joined #ceph
[16:44] * Tagi (~Tagi@145.33.225.241) Quit (Ping timeout: 480 seconds)
[16:44] * Tagi1 is now known as Tagi
[16:46] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Remote host closed the connection)
[16:47] <ajazdzewski> so the ceph pg repair 2.5 did not have any effect
[16:48] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[16:48] <fabioFVZ> Hello, I set "set_canned_acl('public-read')" on my bucket but i receive "permission denied" when i try to open http://bucket.xx.yy/dir1/myfile.txt in my browser. i need "read only" http access to the files in the bucket... before version 0.67 everything worked fine.. what changed?
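In S3 semantics, public-read on the bucket only grants anonymous listing; each object needs its own ACL before anonymous GETs succeed. One hedged way to apply that in bulk with s3cmd (assuming s3cmd is configured against the gateway; fabioFVZ's actual client may differ):

    s3cmd setacl --acl-public --recursive s3://bucket   # public-read on every existing object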
[16:49] * sjm (~sjm@rrcs-24-103-116-8.nyc.biz.rr.com) has joined #ceph
[16:51] * ksingh1 (~Adium@2001:708:10:10:85ad:6fb7:33ff:d3e5) Quit (Quit: Leaving.)
[16:59] * ajazdzewski (~quassel@lpz-66.sprd.net) Quit (Remote host closed the connection)
[16:59] * ksingh (~Adium@teeri.csc.fi) has joined #ceph
[17:03] * sjm (~sjm@rrcs-24-103-116-8.nyc.biz.rr.com) Quit (Ping timeout: 480 seconds)
[17:04] * Tagi1 (~Tagi@145.37.95.110) has joined #ceph
[17:05] * Tagi (~Tagi@145.37.95.110) Quit (Read error: Operation timed out)
[17:05] <ksingh> geeks need help
[17:05] <ksingh> [root@ceph-client ~]# service ceph -a start osd.0
[17:05] <ksingh> === osd.0 ===
[17:05] <ksingh> No filesystem type defined!
[17:06] <ksingh> i have removed osd.0 and manually created using ceph documentation , and i landed up with this error
[17:06] <ksingh> i also tried creating the OSD using ceph-deploy, and it gave me this error too
[17:06] <ksingh> For your information my ONE osd is UP and Fine
[17:07] <ksingh> pls help
[17:07] <ksingh> i tried mkfs.xfs -f /dev/sdb1 command but its not helping
[17:08] <peetaur> ksingh: is there a symlink in /var/lib/ceph/osd that points to your osd file system?
[17:12] <ksingh> peetaur : nopes there is no link
[17:13] <ksingh> output from ceph status : osdmap e92: 2 osds: 0 up, 2 in
[17:13] <ksingh> so my OSD are IN but NOT UP
[17:13] <ksingh> and while bringing OSD service UP its throwing error
[17:13] <ksingh> === osd.6 ===
[17:13] <ksingh> No filesystem type defined!
[17:14] <peetaur> ksingh: if you are using ceph-deploy, I believe there should be a link there that goes to the osd file system
[17:14] <peetaur> and so having no link is like saying "No filesystem type defined" (defined by link?)
[17:14] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[17:15] <ksingh> earlier i tried with ceph-deploy, it was giving me the same problem, then i removed the OSD and manually added it (with ceph-deploy)
[17:15] <ksingh> still no progress
[17:15] <ksingh> i cannot see any symbolic link here
[17:16] * sprachgenerator (~sprachgen@c-50-141-192-36.hsd1.il.comcast.net) has joined #ceph
[17:16] <ksingh> wait a minute i found inside OSD directory
[17:16] <ksingh> lrwxrwxrwx. 1 root root 58 Oct 15 07:40 journal -> /dev/disk/by-partuuid/5b1fb635-ea74-4d8a-a847-ec0b43ad39e6
[17:17] <ksingh> is this the one you are talking about
[17:18] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:20] <Grasshopper> what is the best way to increase read speed? (short of making all OSD's SSD)
[17:20] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[17:21] <Gugge-47527> Grasshopper: add more of them :)
[17:23] <peetaur> Grasshopper: more NICs, more OSDs, more nodes
[17:23] <peetaur> Grasshopper: and with rbd, there are some caching options you can put in the ceph.conf
[17:23] * Vjarjadian (~IceChat77@94.1.37.151) has joined #ceph
[17:23] <Grasshopper> peetaur do you have a link to those options?
[17:24] <peetaur> network: http://ceph.com/docs/master/rados/configuration/network-config-ref/
[17:24] <peetaur> and for osds and nodes ... just install more; you don't need a link
[17:24] <peetaur> and not sure if that link has bonding
[17:24] <peetaur> and the separate public vs cluster networks is probably more/only for writes
[17:25] <peetaur> but if writes are happening, obviously it slows reads if the NIC is bottlenecked, so has similar effect
[17:25] <peetaur> and if you have multiple racks and switches, you can also tune your CRUSH map to create a better balance between your switches
[17:26] <ksingh> peetaur : any suggestion for my problem
[17:27] <peetaur> ksingh: well I think the mkfs would do more harm than good... now it has an osd registered that no longer exists
[17:27] <peetaur> ksingh: and the link missing means that it isn't registered in the system properly I think (but can't be sure... can only give you examples on my test cluster)
[17:28] <peetaur> ksingh: so if you had a clean system simply missing a node, first you'd back up your data if you cared about it, then maybe you should try properly cleaning up the osd so it is properly removed, then add it again as a new osd
[17:28] <ksingh> well i am just testing ceph , so actually i do not have any data in OSD , i can run any command :^)
[17:28] <peetaur> if you ran mkfs, then the better option of adding the old OSD again is gone
[17:29] <ksingh> ok so is this the correct way
[17:29] <ksingh> mkfs.xfs -f /dev/sdb1
[17:29] <ksingh> mount -t xfs /dev/sdb1 /var/lib/ceph/osd/ceph-0
[17:29] * haomaiwa_ (~haomaiwan@211.155.113.208) Quit (Remote host closed the connection)
[17:29] <ksingh> i tried this , it has mounted successfully with no errors
[17:29] <ksingh> but still on restarting the ceph osd service, it says "no filesystem type defined"
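My reading of the sysvinit script (so treat this as an assumption) is that "No filesystem type defined!" fires when an osd section lists a devs entry to mount but ceph.conf never says which filesystem to use; a sketch of conf entries that satisfy it:

    [osd]
        osd mkfs type = xfs
        osd mount options xfs = rw,noatime
    [osd.0]
        devs = /dev/sdb1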
[17:30] * haomaiwang (~haomaiwan@211.155.113.208) has joined #ceph
[17:30] <Grasshopper> cool looks good! last question peetaur before I do this I have 5 hosts/mons...If I stop 1 mon I need to stop 2 of them correct?
[17:31] <gchristensen> oh damn, I hadn't even considered that
[17:31] <ksingh> Anyone else who might have encountered the "No filesystem type defined"
[17:31] <ksingh> error
[17:31] <peetaur> Grasshopper: you mean because of the "odd number of mons" requirement?
[17:31] <Grasshopper> yes
[17:32] <peetaur> Grasshopper: I'm not sure on the official reason for that, but it might go like this: imagine you have 3 mons. One dies, now you have quorum with 2/3. Imagine instead you decide you want 4 for more reliability, so now if 1 dies, 3/4 is quorate, but still 2/4 is not. So you actually gain nothing, but you lose some because now more servers are risking dying.
[17:32] <peetaur> Grasshopper: so there is no technical reason I can see why Ceph will blow up with an even number (my test lab cluster has 4 at the moment by the way, and works fine)
[17:32] <peetaur> Grasshopper: just that it's a useless bad idea to plan it that way; but temporarily is fine enough
[17:33] <Grasshopper> perfect thanks, yea it will just be down while I pop a few more disks in
[17:33] <peetaur> Grasshopper: and in your case it's better... 4/5 is way better than 3/5 (but 3/4 is no better than 2/3)
[17:33] <gchristensen> (totally inexperienced) the big issue I perceived from the docs is that having an even number means there is no possibility of a tie-breaking vote.
[17:33] <gchristensen> and that was the long and short of the odd vs. even
[17:33] <peetaur> right, the tie breaking vote is what I tried to explain in my 2/3 vs 3/4 comparison
[17:34] <gchristensen> ah ... I ... *drinks more coffee*
[17:34] <peetaur> well your statement compliments with mine
[17:34] <peetaur> they are just different wording for different audience
[17:35] <peetaur> so if you judged the audience better, your answer is better ;)
[17:35] <gchristensen> :)
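The arithmetic behind the 2/3 vs 3/4 comparison, spelled out (quorum is a strict majority, floor(n/2)+1):

    for n in 3 4 5; do echo "$n mons: quorum $((n/2+1)), tolerates $((n-n/2-1)) down"; done
    # 3 mons: quorum 2, tolerates 1 down
    # 4 mons: quorum 3, tolerates 1 down   <- no gain over 3
    # 5 mons: quorum 3, tolerates 2 down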
[17:37] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[17:39] * wenjianhn (~wenjianhn@114.245.46.123) Quit (Ping timeout: 480 seconds)
[17:45] * foosinn (~stefan@office.unitedcolo.de) Quit (Quit: Leaving)
[17:48] * aliguori (~anthony@74.202.210.82) has joined #ceph
[17:49] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[17:51] <Grasshopper> If an OSD does not show up in "sudo initctl list | grep ceph" does that mean it is not running? ceph -s says all OSD's are up and in...
[17:52] * Tagi1 (~Tagi@145.37.95.110) Quit (Ping timeout: 480 seconds)
[17:52] <peetaur> Grasshopper: only time I've seen ceph -s lie is when ALL osds are down. The monitors only tell you what the OSDs reported to them. ... if no OSDs are up, no reports ;)
[17:52] * ScOut3R (~ScOut3R@catv-89-133-21-203.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[17:52] <peetaur> OSDs report their own status and the result of their queries with others
[17:53] <peetaur> but I guess monitors don't test the OSDs themselves
[17:53] <Grasshopper> peetaur so if it doesn't show up I should not worry?
[17:53] <peetaur> not sure... I'd also check ps instead of initctl
[17:53] <peetaur> I don't know what initctl is ;) (an upstart thing?)
[17:53] <Grasshopper> what is the ps command?
[17:54] <peetaur> maybe it is a list of standing orders rather than actual processes... so if it is restarting the OSDs every minute when they die, maybe it'll say it's up but it's not, but the standing order is "up/running"
[17:54] <peetaur> ps is the normal Linux process command ... also on Unix, solaris, freeBSD, etc.
[17:54] <peetaur> so for example: ps -ef | grep ceph
[17:54] <peetaur> or the BSD style: ps aux | grep ceph
[17:55] <peetaur> I wouldn't trust initctl... on my admin node, it says: ceph-mds-all start/running
[17:55] <peetaur> but I have no mds at all, let alone on the admin node :D
[17:55] <Grasshopper> init is upstart. It looks though that ps does not list individual daemons
[17:55] <peetaur> ps lists processes
[17:55] * mschiff_ (~mschiff@tmo-107-82.customers.d1-online.com) Quit (Remote host closed the connection)
[17:55] <peetaur> so if it doesn't list them, I would say they are not running
[17:56] <Grasshopper> right thats what I mean, yet ceph -s says they are
[17:56] <peetaur> well does it say 1 is up? or all are up?
[17:56] <peetaur> because I said that when i stopped all of mine one at a time, it said 1/4 are up even though all were down
[17:57] <Grasshopper> All up
[17:57] <Grasshopper> maybe same issue? I will restart them and see
[17:57] <peetaur> are monitors okay?
[17:57] <peetaur> it might be dangerous to restart monitors if they are messed slightly
[17:57] <peetaur> my monitors are on my osd nodes... don't know what you have
[17:58] <Grasshopper> Yea all mons are fine, same as you all hosts have mon/osd
[17:58] <peetaur> to test monitors, my procedure is to test ceph -s -m $ipofnodehere
[17:58] <peetaur> on all nodes
[17:58] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[17:59] <peetaur> and md5sum to compare them
[17:59] <peetaur> one time my test cluster died because one monitor was strange and I then did the cuttlefish to dumpling upgrade and it all died, and I blame the monitor problem
[17:59] <peetaur> randomly it would print some weird line ending in "fault" and I ignored it (if it always did that, it must be normal, right? ;) answer: noooo)
[18:00] <Grasshopper> strange, yea mine all look fine and restart worked
[18:00] <peetaur> and the -m test would make it time out and fail on the bad one instead of randomly getting the fault only when it happened to pick that node when I ran a ceph command
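peetaur's per-monitor test as a loop (IPs are placeholders; the md5sum comparison assumes the status output is byte-identical across healthy mons, so eyeballing the output also works):

    for ip in 10.0.0.1 10.0.0.2 10.0.0.3; do
        echo "== mon at $ip =="
        ceph -s -m $ip | md5sum    # a faulty mon hangs, times out, or hashes differently
    done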
[18:02] * sagelap (~sage@169.sub-70-197-81.myvzw.com) has joined #ceph
[18:02] * onizo (~onizo@wsip-184-182-190-131.sd.sd.cox.net) has joined #ceph
[18:05] * thomnico (~thomnico@2a01:e35:8b41:120:39c6:84fb:f7bf:bd77) Quit (Read error: Connection reset by peer)
[18:05] * mattt__ (~textual@92.52.76.140) Quit (Read error: Connection reset by peer)
[18:05] * thomnico (~thomnico@2a01:e35:8b41:120:39c6:84fb:f7bf:bd77) has joined #ceph
[18:07] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[18:09] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[18:09] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[18:14] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[18:14] * nwat (~nwat@eduroam-232-148.ucsc.edu) has joined #ceph
[18:14] * fabioFVZ (~fabiofvz@213.187.20.119) Quit (Remote host closed the connection)
[18:16] * j1mbo (~jimbo@31.221.83.224) has joined #ceph
[18:17] * mschiff (~mschiff@85.182.236.82) has joined #ceph
[18:17] * j1mbo (~jimbo@31.221.83.224) Quit ()
[18:18] * j1mbo (~jimbo@31.221.83.224) has joined #ceph
[18:20] * xarses (~andreww@c-71-202-167-197.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:22] * odyssey4me (~odyssey4m@165.233.71.2) Quit (Quit: odyssey4me)
[18:24] * Guest2515 (~a@pool-173-55-143-200.lsanca.fios.verizon.net) Quit (Quit: This computer has gone to sleep)
[18:27] * sjusthm (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[18:30] * mikedawson (~chatzilla@23-25-46-107-static.hfc.comcastbusiness.net) has joined #ceph
[18:31] * ircolle (~Adium@2601:1:8380:2d9:a042:caca:d54f:47ec) has joined #ceph
[18:32] * sagelap (~sage@169.sub-70-197-81.myvzw.com) Quit (Quit: Leaving.)
[18:38] * xarses (~andreww@64-79-127-122.static.wiline.com) has joined #ceph
[18:38] * angdraug (~angdraug@64-79-127-122.static.wiline.com) has joined #ceph
[18:44] * thomnico (~thomnico@2a01:e35:8b41:120:39c6:84fb:f7bf:bd77) Quit (Ping timeout: 480 seconds)
[18:48] * a (~a@209.12.169.218) has joined #ceph
[18:49] * a is now known as Guest2566
[18:49] * ScOut3R (~scout3r@dsl51B61603.pool.t-online.hu) has joined #ceph
[18:51] * glzhao (~glzhao@118.195.65.67) Quit (Ping timeout: 480 seconds)
[18:52] * Tamil1 (~Adium@cpe-142-136-96-212.socal.res.rr.com) has joined #ceph
[18:52] * sagelap (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) has joined #ceph
[18:55] * yehudasa (~yehudasa@2602:306:330b:1980:ea03:9aff:fe98:e8ff) has joined #ceph
[19:02] <Grasshopper> ok big question, if I make a change in the global section do I need to restart the cluster for it to take effect?
[19:04] * sagewk (~sage@2607:f298:a:607:219:b9ff:fe40:55fe) Quit (Quit: Leaving.)
[19:04] * j1mbo (~jimbo@31.221.83.224) Quit (Ping timeout: 480 seconds)
[19:04] * sagewk (~sage@2607:f298:a:607:219:b9ff:fe40:55fe) has joined #ceph
[19:05] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[19:05] * ChanServ sets mode +v andreask
[19:06] * Vjarjadian (~IceChat77@94.1.37.151) Quit (Quit: A fine is a tax for doing wrong. A tax is a fine for doing well)
[19:08] <peetaur> Grasshopper: I think I had to restart something for the cluster network to get used when I added it in ceph.conf.
[19:08] <peetaur> probably just OSDs for that
[19:09] <nwat> `
[19:09] * WarrenUsui (~Warren@2607:f298:a:607:39b4:930:e5b7:e58b) Quit (Ping timeout: 480 seconds)
[19:09] * papamoose1 (~kauffman@hester.cs.uchicago.edu) has joined #ceph
[19:09] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[19:11] * onizo (~onizo@wsip-184-182-190-131.sd.sd.cox.net) Quit (Remote host closed the connection)
[19:14] <Grasshopper> what I want to do is increase journal size for all new osds, so if I put it in the global section it should affect all new ones, correct?
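For the journal-size case specifically: osd journal size (in megabytes) is read when a journal is created, so a [global] or [osd] entry affects OSDs built after the change and leaves existing journals alone; a sketch:

    [osd]
        osd journal size = 10240    # 10 GB journals for newly created OSDs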
[19:14] <peetaur> does ceph scrub do more than just what an fsck would do (if not btrfs)?
[19:14] <peetaur> this ancient document says very little http://ceph.com/uncategorized/scrubbing/
[19:15] <Cube> peetaur: http://ceph.com/docs/master/rados/configuration/osd-config-ref/#scrubbing
[19:15] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Read error: Connection reset by peer)
[19:15] <Cube> Bit more info there
[19:16] * JoeGruher (~JoeGruher@jfdmzpr02-ext.jf.intel.com) has joined #ceph
[19:16] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) Quit (Ping timeout: 480 seconds)
[19:17] <peetaur> that page doesn't even mention btrfs; does this mean that checksums are also used on XFS?
[19:17] <peetaur> and thanks for the link ... it's much more up to date I think
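The tunables on the page Cube links look like this in ceph.conf; the values below are illustrative, so verify defaults against that page for your release:

    [osd]
        osd max scrubs = 1                   # concurrent scrub ops per OSD
        osd scrub min interval = 86400       # seconds; don't light-scrub a pg more often than this
        osd deep scrub interval = 604800     # seconds; deep (read-and-checksum) scrub weekly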
[19:18] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[19:18] * ChanServ sets mode +v andreask
[19:18] * The_Bishop (~bishop@2a02:2450:102f:4:1c9a:7388:96b9:613e) Quit (Ping timeout: 480 seconds)
[19:20] * yehudasa (~yehudasa@2602:306:330b:1980:ea03:9aff:fe98:e8ff) Quit (Ping timeout: 480 seconds)
[19:23] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[19:23] * Gamekiller77 (~oftc-webi@128-107-239-233.cisco.com) has joined #ceph
[19:24] <Gamekiller77> Good day, i'm having problems with cinder creating volumes. I do not see any errors in the cinder logs that tell me anything.
[19:25] * wusui (~Warren@2607:f298:a:607:c960:8ea:5374:da97) has joined #ceph
[19:27] <Gamekiller77> i added a line in the ceph.conf file in the volume section for logging but we see nothing
[19:27] <xarses> gamekiller: are you using syslog for cinder.conf?
[19:28] * The_Bishop (~bishop@2001:470:50b6:0:6915:da3e:ce9d:d5df) has joined #ceph
[19:29] <xarses> gamekiller77^
[19:31] * allsystemsarego (~allsystem@5-12-37-46.residential.rdsnet.ro) Quit (Quit: Leaving)
[19:33] * skm (~smiley@205.153.36.170) has joined #ceph
[19:34] * onizo (~onizo@wsip-184-182-190-131.sd.sd.cox.net) has joined #ceph
[19:39] * j1mbo (~jimbo@94.117.209.218) has joined #ceph
[19:39] <Gamekiller77> no just file
[19:40] <Gamekiller77> we do have logstash collecting logs
[19:40] <Gamekiller77> all services in openstack are in verbose and debug on
[19:41] <xarses> i find that if you have any log config, syslog options that it blocks the python traces
[19:41] <xarses> try to use just log_file option and no others
[19:43] <xarses> also, what "problem" are you seeing with creating volume
[19:43] <xarses> ?
[19:43] <Gamekiller77> that what i have
[19:43] <Gamekiller77> when doing CLI
[19:43] <Gamekiller77> create volume it goes
[19:43] <Gamekiller77> but then do list
[19:43] <Gamekiller77> just says error
[19:44] <Gamekiller77> first time check showed the volumes pool was bad so deleted it and made it again
[19:45] <Gamekiller77> rbd -ls volumes was erroring out
[19:45] <Gamekiller77> now it fine
[19:46] <Gamekiller77> my dev setup has no cephx this new setup does
[19:46] <xarses> ok
[19:47] * lanfear2667 (~oftc-webi@rtp-isp-nat1.cisco.com) has joined #ceph
[19:47] * j1mbo (~jimbo@94.117.209.218) Quit (Ping timeout: 480 seconds)
[19:48] <xarses> and ceph -s returns HEALTH_OK?
[19:49] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[19:50] <Gamekiller77> xarses: yes all good
[19:50] <Gamekiller77> glance is working great
[19:51] <xarses> key file is owned by cinder?
[19:51] <xarses> which OS?
[19:51] * ksingh (~Adium@teeri.csc.fi) Quit (Quit: Leaving.)
[19:51] <Gamekiller77> yes all files keys and .conf
[19:51] <Gamekiller77> centos 6.4
[19:52] <Gamekiller77> using RDO repo
[19:52] <Gamekiller77> ceph installed
[19:52] <Gamekiller77> with core conf files in place
[19:52] <xarses> did you create the /etc/sysconfig/openstack-cinder-volume file?
[19:53] <Gamekiller77> no i did not
[19:53] <Gamekiller77> my team mate is here he working on the new setup
[19:53] <Gamekiller77> lanfear2667: you see what xarses is saying
[19:54] <Gamekiller77> i never saw a doc about this creation of that files
[19:54] <lanfear2667> looking now
[19:54] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[19:54] <xarses> export CEPH_ARGS='--id volumes'
[19:54] <xarses> where volumes is the ceph pool name you want cinder to use
[19:55] <Gamekiller77> yah see i saw that but it was stated for ubuntu
[19:56] <xarses> yep
[19:56] <Gamekiller77> was not sure that was needed for RHEL
[19:57] <xarses> issue 6127
[19:57] <kraken> xarses might be talking about: http://tracker.ceph.com/issues/6127 [CEPH_ARGS example for RHEL]
[19:57] <Gamekiller77> that may be it
[19:58] <Gamekiller77> lanfear2667: you see that
[20:00] <xarses> i guess i should just pr the docs for that
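The fix xarses describes, as the file would look on RHEL/CentOS per issue 6127 ("volumes" here is the cephx id cinder authenticates as, conventionally matching the pool name):

    # /etc/sysconfig/openstack-cinder-volume
    export CEPH_ARGS="--id volumes"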
[20:04] * Meths_ is now known as Meths
[20:07] * houkouonchi-work (~linux@12.248.40.138) has joined #ceph
[20:07] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) has joined #ceph
[20:12] <xarses> Gamekiller77: any luck?
[20:13] * yehudasa (~yehudasa@2607:f298:a:607:ea03:9aff:fe98:e8ff) has joined #ceph
[20:13] <Gamekiller77> yes
[20:13] <Gamekiller77> it worked
[20:13] <Gamekiller77> just got the message from lanfear2667
[20:13] <lanfear2667> and we have a volume...thanks xarses!
[20:14] <xarses> good luck
[20:15] <Gamekiller77> now for me to hit havana
[20:20] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[20:20] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) has joined #ceph
[20:21] * gregmark (~Adium@68.87.42.115) has joined #ceph
[20:23] * gregmark (~Adium@68.87.42.115) Quit ()
[20:23] * gregmark (~Adium@68.87.42.115) has joined #ceph
[20:24] * onizo (~onizo@wsip-184-182-190-131.sd.sd.cox.net) Quit (Remote host closed the connection)
[20:24] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) Quit (Read error: Operation timed out)
[20:25] * nwat (~nwat@eduroam-232-148.ucsc.edu) Quit (Read error: Operation timed out)
[20:28] * LPG_ (~LPG@c-76-104-197-224.hsd1.wa.comcast.net) has joined #ceph
[20:28] * LPG (~LPG@c-76-104-197-224.hsd1.wa.comcast.net) Quit (Read error: Connection reset by peer)
[20:29] * BillK (~BillK-OFT@58-7-67-236.dyn.iinet.net.au) has joined #ceph
[20:33] * JoeGruher (~JoeGruher@jfdmzpr02-ext.jf.intel.com) Quit (Remote host closed the connection)
[20:37] <loicd> wusui: would you be interested in joining #ceph-devel ?
[20:42] * sjm (~sjm@pool-96-234-124-66.nwrknj.fios.verizon.net) has joined #ceph
[20:42] * themgt (~themgt@201-223-204-131.baf.movistar.cl) Quit (Read error: Connection reset by peer)
[20:43] * themgt (~themgt@201-223-204-131.baf.movistar.cl) has joined #ceph
[20:46] * JoeGruher (~JoeGruher@jfdmzpr02-ext.jf.intel.com) has joined #ceph
[20:52] * skm (~smiley@205.153.36.170) Quit (Remote host closed the connection)
[20:54] * onizo (~onizo@wsip-184-182-190-131.sd.sd.cox.net) has joined #ceph
[21:11] * Gamekiller77 (~oftc-webi@128-107-239-233.cisco.com) Quit (Quit: Page closed)
[21:11] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) has joined #ceph
[21:19] * nwat (~nwat@eduroam-232-148.ucsc.edu) has joined #ceph
[21:22] * xarses (~andreww@64-79-127-122.static.wiline.com) Quit (Quit: Leaving.)
[21:22] * xarses (~andreww@64-79-127-122.static.wiline.com) has joined #ceph
[21:33] * BillK (~BillK-OFT@58-7-67-236.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[21:34] * mikedawson (~chatzilla@23-25-46-107-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[21:36] * xarses (~andreww@64-79-127-122.static.wiline.com) Quit (Quit: Leaving.)
[21:36] * xarses (~andreww@64-79-127-122.static.wiline.com) has joined #ceph
[21:38] * xarses (~andreww@64-79-127-122.static.wiline.com) Quit ()
[21:38] * xarses (~andreww@64-79-127-122.static.wiline.com) has joined #ceph
[21:39] * ScOut3R (~scout3r@dsl51B61603.pool.t-online.hu) Quit (Remote host closed the connection)
[21:40] * xarses (~andreww@64-79-127-122.static.wiline.com) Quit ()
[21:40] * xarses (~andreww@64-79-127-122.static.wiline.com) has joined #ceph
[21:42] * xarses (~andreww@64-79-127-122.static.wiline.com) Quit ()
[21:42] * xarses (~andreww@64-79-127-122.static.wiline.com) has joined #ceph
[21:44] * Henson_D (~kvirc@lord.uwaterloo.ca) has joined #ceph
[21:45] <Henson_D> hi everyone, I just noticed the following error after a deep scrub on my system following the failure of 1 of the 2 nodes "deep-scrub 3.9 3082bd09/rb.0.102c.2ae8944a.000000003020/head//3 on disk size (4194304) does not match object info size (4096)".
[21:45] * Vjarjadian (~IceChat77@94.1.37.151) has joined #ceph
[21:45] <Henson_D> in this mailing list posting: http://thread.gmane.org/gmane.comp.file-systems.ceph.user/557 the poster says simply to truncate the replicas of the file on the backing filesystem to correct the problem. Is that still recommended?
[21:46] <Henson_D> what if the file size is correct, but the object size is wrong and needs to be changed?
[21:47] <Henson_D> the object is part of a doubly-replicated pool that contains a bunch of RBD devices with EXT4 filesystems on them.
[21:51] * xarses (~andreww@64-79-127-122.static.wiline.com) Quit (Quit: Leaving.)
[21:51] * xarses (~andreww@64-79-127-122.static.wiline.com) has joined #ceph
[21:53] * ksingh (~Adium@91-157-122-80.elisa-laajakaista.fi) has joined #ceph
[21:55] <davidz> Henson_D: I would see if the data after 4096 is all zeros anyway. Also, if this is the only error, then it must mean that the other copy of this object has a size of 4096. You should verify that. I would be very comfortable truncating the file if this checks out.
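davidz's check, sketched against the object file on one replica (this assumes you have already located the file under the OSD's current/<pg>_head directory; repeat on the other replica):

    stat -c %s "$OBJ"                           # compare sizes across both copies
    tail -c +4097 "$OBJ" | tr -d '\0' | wc -c   # 0 means everything past byte 4096 is zeros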
[22:00] <Henson_D> davidz: the files and MD5sums are the same on both backing filesystems, and the file is not empty after 4096
[22:00] <Henson_D> davidz: the files are the same, and are of size 4194304 and not 4096
[22:04] <davidz> Henson_D: hmm… so we should assume that this is all good data and we don't want to truncate, then.
[22:07] * ishkabob (~c7a82cc0@webuser.thegrebs.com) has joined #ceph
[22:07] <Henson_D> davidz: is there a way to figure out which RBD image an object maps onto? I know what pool it's in, but there are a bunch of images in the pool. I would be interested in trying to figure out what file or files the object belongs to.
[22:08] <Henson_D> davidz: or maybe resetting the object info size is the solution?
[22:10] <Henson_D> davidz: or maybe I should unmount the filesystems, "rados get" the object into a file, "rados rm" it, then "rados put" it back?
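The round-trip Henson_D proposes would look roughly like this; the pool name rbd is a stand-in for whichever pool actually holds the object:

    # read the object out, delete it, then write it back (pool name is hypothetical)
    rados -p rbd get rb.0.102c.2ae8944a.000000003020 /tmp/obj.bak
    rados -p rbd rm rb.0.102c.2ae8944a.000000003020
    rados -p rbd put rb.0.102c.2ae8944a.000000003020 /tmp/obj.bak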
[22:11] <dmick> Henson_D: the objects are named in a way that's related to the image
[22:12] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[22:12] <dmick> rbd info <imagename> will show you the block_name_prefix
[22:13] <dmick> (rbd ls will list all the imagenames if you have a lot)
[22:13] * ksingh (~Adium@91-157-122-80.elisa-laajakaista.fi) Quit (Quit: Leaving.)
[22:14] <Henson_D> dmick: ahh, ok, I found out which image it corresponds to. Does the filename tell me anything else about where it's located in the image?
[22:14] * xarses (~andreww@64-79-127-122.static.wiline.com) Quit (Quit: Leaving.)
[22:15] * xarses (~andreww@64-79-127-122.static.wiline.com) has joined #ceph
[22:15] <dmick> the last digits are a block number; multiply that by the blocksize (also from rbd info)
[22:15] <dmick> I believe the block number is in hex
[22:15] <dmick> this is assuming you're not doing any fancy striping
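Putting dmick's recipe together for this object: the suffix 000000003020 is a hex block index, and with the default 4 MiB objects (order 22) the byte offset falls out of shell arithmetic. The pool and image names here are stand-ins:

    rbd ls rbd                   # list images; match block_name_prefix to rb.0.102c.2ae8944a
    rbd -p rbd info myimage      # shows block_name_prefix and order (22 -> 4194304-byte objects)
    echo $((0x3020))             # hex block index -> 12320
    echo $((0x3020 * 4194304))   # byte offset into the image -> 51673825280 (~48 GiB)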
[22:15] * xarses (~andreww@64-79-127-122.static.wiline.com) Quit ()
[22:16] * xarses (~andreww@64-79-127-122.static.wiline.com) has joined #ceph
[22:18] <Henson_D> dmick: perfect! I found out where it is.
[22:19] <Henson_D> dmick: now to find out what in the ext4 filesystem is in that 4MB chunk.
[22:19] <dmick> hoo, good luck with that one :)
[22:19] <dmick> it's probably faster to search files for a hunk of binary
[22:19] <Henson_D> dmick: I did an fsck of the filesystem a little while ago, and everything checked out
[22:20] <Henson_D> dmick: it's a 250 GB filesystem :-)
[22:20] * Shmouel (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) has joined #ceph
[22:20] <dmick> so, small enough that a good computer search will terminate in reasonable time
[22:20] * Shmouel (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) Quit ()
[22:20] <Henson_D> dmick: :-)
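One way to map that 4 MiB chunk back to ext4 files, sketched under the assumptions that the image is mapped at /dev/rbd0 and the filesystem uses 4096-byte blocks: debugfs's icheck request translates filesystem block numbers to inodes, and ncheck translates inodes to paths.

    OFF=51673825280                          # byte offset of the chunk (computed above)
    BS=4096                                  # confirm with: dumpe2fs -h /dev/rbd0
    START=$((OFF / BS))                      # first fs block of the chunk; it spans 1024 blocks
    debugfs -R "icheck $START" /dev/rbd0     # block -> inode (repeat for further blocks)
    debugfs -R "ncheck <inode>" /dev/rbd0    # inode -> path; <inode> from the line above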
[22:21] <ishkabob> hey guys, has anyone tried to connect an RBD image to a linux container (docker)?
[22:21] * mozg (~andrei@host86-184-120-113.range86-184.btcentralplus.com) has joined #ceph
[22:24] <Henson_D> dmick: It looks like the file contains a bunch of different things.
[22:24] <dmick> ?
[22:25] <Henson_D> dmick: I mean, it doesn't map onto a single file, probably several files or filesystem data. So truncating it could screw things up.
[22:26] <Henson_D> unfortunately I have to run. I'll come back on tomorrow to ask this question again.
[22:26] <Henson_D> davidz, dmick: thank you for your help
[22:26] * Shmouel1 (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[22:27] <dmick> ah. yes.
[22:27] <dmick> k, gl
[22:28] * Henson_D (~kvirc@lord.uwaterloo.ca) Quit (Quit: KVIrc KVIrc Equilibrium 4.1.3, revision: 5988, sources date: 20110830, built on: 2011-12-05 12:15:22 UTC http://www.kvirc.net/)
[22:30] * BillK (~BillK-OFT@58-7-67-236.dyn.iinet.net.au) has joined #ceph
[22:30] * xarses (~andreww@64-79-127-122.static.wiline.com) Quit (Quit: Leaving.)
[22:31] * xarses (~andreww@64-79-127-122.static.wiline.com) has joined #ceph
[22:31] * xarses (~andreww@64-79-127-122.static.wiline.com) Quit (Read error: Connection reset by peer)
[22:34] <hughsaunders> hey all, how can I delete a pool by ID rather than name?
[22:35] <hughsaunders> am running into http://tracker.ceph.com/issues/6046 where I have a pool named ''
[22:36] <hughsaunders> https://gist.github.com/hughsaunders/7014391
[22:38] * xarses (~andreww@64-79-127-122.static.wiline.com) has joined #ceph
[22:39] <dmsimard> leseb: Ping
[22:40] * yanzheng (~zhyan@101.83.61.114) has joined #ceph
[22:45] * bandrus1 is now known as bandrus
[22:46] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[22:47] * zhyan_ (~zhyan@101.83.100.174) has joined #ceph
[22:47] <dmick> hughsaunders: I can't think of a good way
[22:47] * ksingh (~Adium@91-157-122-80.elisa-laajakaista.fi) has joined #ceph
[22:48] <hughsaunders> dmick: to the blat mobile then!
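For the record, the CLI addresses pools by name only, so there is no delete-by-ID. For the empty-name pool in that tracker issue, one untested idea is to pass the empty string, or rename the pool to something deletable first; both commands exist, but whether they accept '' is exactly the open question:

    ceph osd pool rename '' rescue-pool
    ceph osd pool delete rescue-pool rescue-pool --yes-i-really-really-mean-it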
[22:53] * yanzheng (~zhyan@101.83.61.114) Quit (Read error: Operation timed out)
[22:54] * rudolfsteiner (~federicon@200.68.116.185) has joined #ceph
[22:56] * JoeGruher (~JoeGruher@jfdmzpr02-ext.jf.intel.com) Quit (Remote host closed the connection)
[22:57] * sjm (~sjm@pool-96-234-124-66.nwrknj.fios.verizon.net) has left #ceph
[22:59] * JoeGruher (~JoeGruher@jfdmzpr02-ext.jf.intel.com) has joined #ceph
[23:00] * sprachgenerator (~sprachgen@c-50-141-192-36.hsd1.il.comcast.net) Quit (Quit: sprachgenerator)
[23:09] * ksingh (~Adium@91-157-122-80.elisa-laajakaista.fi) has left #ceph
[23:12] * aarontc (~aaron@static-50-126-79-226.hlbo.or.frontiernet.net) has joined #ceph
[23:12] * sprachgenerator (~sprachgen@c-50-141-192-36.hsd1.il.comcast.net) has joined #ceph
[23:12] * sjm (~sjm@pool-96-234-124-66.nwrknj.fios.verizon.net) has joined #ceph
[23:16] * zhyan_ (~zhyan@101.83.100.174) Quit (Ping timeout: 480 seconds)
[23:20] * danieagle (~Daniel@179.176.61.26.dynamic.adsl.gvt.net.br) has joined #ceph
[23:22] * diegows (~diegows@190.190.11.42) has joined #ceph
[23:24] * MACscr1 (~Adium@c-98-214-103-147.hsd1.il.comcast.net) has joined #ceph
[23:24] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) Quit (Read error: Connection reset by peer)
[23:32] * rudolfsteiner (~federicon@200.68.116.185) Quit (Quit: rudolfsteiner)
[23:33] * JoeGruher (~JoeGruher@jfdmzpr02-ext.jf.intel.com) Quit (Remote host closed the connection)
[23:34] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[23:37] * yanzheng (~zhyan@101.82.164.110) has joined #ceph
[23:41] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) Quit (Ping timeout: 480 seconds)
[23:41] * rudolfsteiner (~federicon@200.68.116.185) has joined #ceph
[23:42] * sjm (~sjm@pool-96-234-124-66.nwrknj.fios.verizon.net) Quit (Remote host closed the connection)
[23:43] * rudolfsteiner (~federicon@200.68.116.185) Quit ()
[23:45] * JoeGruher (~JoeGruher@jfdmzpr02-ext.jf.intel.com) has joined #ceph
[23:45] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[23:45] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[23:46] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) has joined #ceph
[23:46] * yanzheng (~zhyan@101.82.164.110) Quit (Ping timeout: 480 seconds)
[23:47] * yanzheng (~zhyan@101.82.164.110) has joined #ceph
[23:49] * xarses (~andreww@64-79-127-122.static.wiline.com) Quit (Read error: No route to host)
[23:50] * albionandrew (~albionand@64.25.15.100) has joined #ceph
[23:51] <tsnider> Is it possible to add external journal devices if the cluster and osds were originally created without explicit journals?
[23:52] <albionandrew> Hi, if I run ceph -s I get the following - health HEALTH_WARN 392 pgs degraded; 392 pgs stuck unclean; recovery 5143/10286 degraded (50.000%)
[23:52] <dmick> tsnider: journals have been required for a long time now; chances are good you have them
[23:52] <dmick> albionandrew: how many OSDs?
[23:53] <tsnider> dmick: I could've phrased the question better. Can the default osd journals be migrated to a separate external device?
[23:53] <albionandrew> 2
[23:53] <albionandrew> [root@node-60 ceph-1]# /etc/init.d/ceph -a start osd.1
[23:53] <albionandrew> === osd.1 ===
[23:53] <albionandrew> 2013-10-16 21:52:26.030152 7f84a6b6e760 -1 unable to authenticate as osd.1
[23:53] <albionandrew> 2013-10-16 21:52:26.030490 7f84a6b6e760 -1 ceph_tool_common_init failed.
[23:53] <albionandrew> Starting Ceph osd.1 on node-60...
[23:53] <albionandrew> starting osd.1 at :/0 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
[23:54] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[23:56] <dmick> tsnider: possibly. I know there's a --flush-journal option to the OSDs, and if you do that, I think you can then change the journal dev and use --mkjournal to recreate it based on the filestore. I've never done this, but it seems possible.
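Spelled out, the procedure dmick outlines might look like this; the OSD id and the target device are examples, and it is worth rehearsing on a test cluster first:

    /etc/init.d/ceph stop osd.0                # quiesce the OSD
    ceph-osd -i 0 --flush-journal              # flush pending journal entries to the filestore
    rm /var/lib/ceph/osd/ceph-0/journal
    ln -s /dev/sdb1 /var/lib/ceph/osd/ceph-0/journal   # point at the new journal device
    ceph-osd -i 0 --mkjournal                  # recreate the journal there
    /etc/init.d/ceph start osd.0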
[23:56] <dmick> albionandrew: 1) don't paste multiple lines; if you must, use pastebin or similar. 2) does ceph osd tree show both running and happy?
[23:56] * ScOut3R (~scout3r@dsl51B61603.pool.t-online.hu) has joined #ceph
[23:56] * ishkabob (~c7a82cc0@webuser.thegrebs.com) Quit (Quit: TheGrebs.com CGI:IRC)
[23:56] * gucki (~smuxi@p549F8B9E.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[23:57] <albionandrew> sorry - http://pastebin.com/GWqYVNFt
[23:57] <dmick> ok, there's your problem; with 1 osd you can't replicate or become healthy
[23:57] <albionandrew> We are trying to set up openstack with 2 cinder+CEPH nodes
[23:59] <albionandrew> So I must add another node?
[23:59] * sputnik13 (~sputnik13@64-73-250-90.static-ip.telepacific.net) has joined #ceph
[23:59] <albionandrew> Sorry, might seem a dumb question, but this is my first openstack/ceph deployment
[23:59] <dmick> you have two nodes; none of the osds are running on the second node
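As a follow-up to the "unable to authenticate as osd.1" paste above, a short checklist (default paths assumed) for getting the second node's OSD running:

    ceph osd tree                          # confirm osd.1 is down/out on node-60
    ceph auth get osd.1                    # the key the monitors expect
    cat /var/lib/ceph/osd/ceph-1/keyring   # the key the daemon presents; the two must match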

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.