#ceph IRC Log

IRC Log for 2015-02-22

Timestamps are in GMT/BST.

[0:12] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[0:14] * yehudasa_ (~yehudasa@2607:f298:a:607:cd77:18f1:8c32:62c2) Quit (Ping timeout: 480 seconds)
[0:15] * eternaleye (~eternaley@50.245.141.77) Quit (Quit: Quit)
[0:22] * Miouge (~Miouge@h-72-233.a163.priv.bahnhof.se) Quit (Quit: Miouge)
[0:23] * yehudasa_ (~yehudasa@2607:f298:a:607:548b:86d1:f0e4:6ac5) has joined #ceph
[0:37] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) Quit (Quit: Leaving.)
[0:57] * Manshoon (~Manshoon@ip-64-134-229-24.public.wayport.net) Quit (Remote host closed the connection)
[0:58] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) has joined #ceph
[1:04] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[1:05] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit ()
[1:26] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) has left #ceph
[1:33] * slopshid (~oftc-webi@104-54-225-178.lightspeed.austtx.sbcglobal.net) has joined #ceph
[1:34] <slopshid> hi all, does anyone have a few minutes to help me? my cluster is all screwed up and I'm not sure I'm doing the right things to fix it
[1:49] * slopshid (~oftc-webi@104-54-225-178.lightspeed.austtx.sbcglobal.net) Quit (Remote host closed the connection)
[1:59] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:01] * MACscr (~Adium@2601:d:c800:de3:585:c05:5a04:2db6) has joined #ceph
[2:04] * kefu (~kefu@114.92.100.153) has joined #ceph
[2:19] * kefu (~kefu@114.92.100.153) Quit (Max SendQ exceeded)
[2:25] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[2:38] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[2:39] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has left #ceph
[2:40] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[2:41] * cronix1 (~cronix@5.199.139.166) Quit (Ping timeout: 480 seconds)
[2:48] * kefu (~kefu@114.92.100.153) has joined #ceph
[2:49] * cronix1 (~cronix@5.199.139.166) has joined #ceph
[3:01] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[3:12] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[3:15] * jclm1 (~jclm@ip24-253-45-236.lv.lv.cox.net) Quit (Quit: Leaving.)
[3:15] * [Leeloo] (~Leeloo@ec2-54-88-140-156.compute-1.amazonaws.com) Quit ()
[3:16] * jclm (~jclm@ip24-253-45-236.lv.lv.cox.net) has joined #ceph
[3:22] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[3:24] * kefu (~kefu@114.92.100.153) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[3:49] * jclm (~jclm@ip24-253-45-236.lv.lv.cox.net) Quit (Quit: Leaving.)
[3:50] * jclm (~jclm@ip24-253-45-236.lv.lv.cox.net) has joined #ceph
[4:00] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) has joined #ceph
[4:02] * smokedmeets (~smokedmee@c-67-174-241-112.hsd1.ca.comcast.net) Quit (Quit: smokedmeets)
[4:04] * smokedmeets (~smokedmee@c-67-174-241-112.hsd1.ca.comcast.net) has joined #ceph
[4:21] * ohnomrbill (~ohnomrbil@c-67-174-241-112.hsd1.ca.comcast.net) Quit (Quit: ohnomrbill)
[4:21] * smokedmeets (~smokedmee@c-67-174-241-112.hsd1.ca.comcast.net) Quit (Quit: smokedmeets)
[4:21] * ohnomrbill (~ohnomrbil@c-67-174-241-112.hsd1.ca.comcast.net) has joined #ceph
[4:28] * Manshoon (~Manshoon@ip-64-134-229-24.public.wayport.net) has joined #ceph
[4:33] * L2SHO_ (~L2SHO@2001:19f0:1000:5123:8c84:23f:8ca:f675) has joined #ceph
[4:34] * LeaChim (~LeaChim@host86-147-114-247.range86-147.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[4:35] * saltlake (~saltlake@pool-71-244-62-208.dllstx.fios.verizon.net) has joined #ceph
[4:36] * Manshoon (~Manshoon@ip-64-134-229-24.public.wayport.net) Quit (Ping timeout: 480 seconds)
[4:38] * L2SHO (~L2SHO@2001:19f0:1000:5123:8c84:23f:8ca:f675) Quit (Ping timeout: 480 seconds)
[4:43] * saltlake (~saltlake@pool-71-244-62-208.dllstx.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[4:43] * Nacer (~Nacer@c2s31-2-83-152-89-17.fbx.proxad.net) has joined #ceph
[4:49] * zack_dolby (~textual@p2104-ipbf6307marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[4:57] * Nacer (~Nacer@c2s31-2-83-152-89-17.fbx.proxad.net) Quit (Remote host closed the connection)
[5:16] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:16] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[5:18] * MACscr (~Adium@2601:d:c800:de3:585:c05:5a04:2db6) Quit (Quit: Leaving.)
[5:21] * Vacuum_ (~vovo@i59F79238.versanet.de) has joined #ceph
[5:23] * zack_dolby (~textual@p2104-ipbf6307marunouchi.tokyo.ocn.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[5:25] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[5:28] * Vacuum (~vovo@i59F79E6B.versanet.de) Quit (Ping timeout: 480 seconds)
[6:05] * stephan (~Adium@dslb-178-008-020-087.178.008.pools.vodafone-ip.de) has joined #ceph
[6:27] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[6:52] * stephan (~Adium@dslb-178-008-020-087.178.008.pools.vodafone-ip.de) Quit (Quit: Leaving.)
[7:03] * danieagle (~Daniel@200-148-39-11.dsl.telesp.net.br) has joined #ceph
[7:13] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:18] * stephan (~Adium@dslb-178-008-020-087.178.008.pools.vodafone-ip.de) has joined #ceph
[7:20] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) Quit (Quit: Leaving.)
[7:24] * stephan (~Adium@dslb-178-008-020-087.178.008.pools.vodafone-ip.de) Quit (Quit: Leaving.)
[7:26] * logan__ (~logan@63.143.49.103) has joined #ceph
[7:26] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[7:26] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit ()
[7:58] * Concubidated (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[8:01] * mykola (~Mikolaj@91.225.201.255) has joined #ceph
[8:22] * linjan_ (~linjan@176.195.72.185) has joined #ceph
[8:24] * stephan (~Adium@dslb-178-008-020-087.178.008.pools.vodafone-ip.de) has joined #ceph
[8:34] * Miouge (~Miouge@h-72-233.a163.priv.bahnhof.se) has joined #ceph
[8:41] * rljohnsn (~rljohnsn@c-73-15-126-4.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:44] * jclm (~jclm@ip24-253-45-236.lv.lv.cox.net) Quit (Quit: Leaving.)
[8:47] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) has joined #ceph
[8:51] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[8:58] * Concubidated (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[8:59] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[9:01] * Concubidated (~Adium@71.21.5.251) Quit ()
[9:02] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[9:58] * danieagle (~Daniel@200-148-39-11.dsl.telesp.net.br) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[9:59] * Miouge (~Miouge@h-72-233.a163.priv.bahnhof.se) Quit (Read error: Connection reset by peer)
[10:04] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[10:05] * Miouge (~Miouge@h-72-233.a163.priv.bahnhof.se) has joined #ceph
[10:20] * alfredodeza (~alfredode@198.206.133.89) Quit (Ping timeout: 480 seconds)
[10:27] * MentalRay (~MentalRay@107.171.161.165) has joined #ceph
[10:27] * MentalRay (~MentalRay@107.171.161.165) Quit ()
[10:32] * MentalRay (~MentalRay@107.171.161.165) has joined #ceph
[10:35] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[10:43] * alfredodeza (~alfredode@198.206.133.89) Quit (Ping timeout: 480 seconds)
[10:49] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[11:03] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[11:05] * alfredodeza (~alfredode@198.206.133.89) Quit (Ping timeout: 480 seconds)
[11:13] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) has joined #ceph
[11:23] * tuxcrafter (~jelle@ebony.powercraft.nl) has joined #ceph
[11:30] * linjan_ (~linjan@176.195.72.185) Quit (Ping timeout: 480 seconds)
[11:33] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[11:38] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Remote host closed the connection)
[11:40] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[11:50] * linjan_ (~linjan@195.110.41.9) has joined #ceph
[11:52] * avozza (~avozza@83.162.204.36) has joined #ceph
[11:56] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[12:10] * linjan_ (~linjan@195.110.41.9) Quit (Quit: ?????????? ?? ???? ??????)
[12:11] * linjan (~linjan@176.195.72.185) has joined #ceph
[12:12] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[12:19] * MentalRay (~MentalRay@107.171.161.165) Quit (Quit: This computer has gone to sleep)
[12:30] * fmanana (~fdmanana@bl13-157-248.dsl.telepac.pt) Quit (Quit: Leaving)
[12:40] * avozza_ (~avozza@83.162.204.36) has joined #ceph
[12:40] * avozza (~avozza@83.162.204.36) Quit (Read error: Connection reset by peer)
[12:56] * avozza (~avozza@83.162.204.36) has joined #ceph
[12:56] * avozza_ (~avozza@83.162.204.36) Quit (Read error: Connection reset by peer)
[12:58] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[13:01] * alfredodeza (~alfredode@198.206.133.89) Quit (Ping timeout: 480 seconds)
[13:08] * linjan (~linjan@176.195.72.185) Quit (Ping timeout: 480 seconds)
[13:18] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[13:18] * Sysadmin88 (~IceChat77@2.125.213.8) Quit (Quit: Light travels faster then sound, which is why some people appear bright, until you hear them speak)
[13:18] * Sysadmin88 (~IceChat77@2.125.213.8) has joined #ceph
[13:29] * linjan (~linjan@195.110.41.9) has joined #ceph
[13:51] * zack_dolby (~textual@p2104-ipbf6307marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[13:51] * zack_dolby (~textual@p2104-ipbf6307marunouchi.tokyo.ocn.ne.jp) Quit ()
[14:26] * todin (tuxadero@kudu.in-berlin.de) Quit (Remote host closed the connection)
[14:26] * sh (~sh@2001:6f8:1337:0:50f0:a8fe:9b20:7f3e) Quit (Ping timeout: 480 seconds)
[14:31] * todin (tuxadero@kudu.in-berlin.de) has joined #ceph
[14:34] * sh (~sh@2001:6f8:1337:0:6059:4d33:2454:d16d) has joined #ceph
[14:37] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[14:41] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[14:42] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[14:47] * LeaChim (~LeaChim@host86-159-234-113.range86-159.btcentralplus.com) has joined #ceph
[14:48] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[14:54] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[15:00] * vbellur (~vijay@122.167.249.176) has joined #ceph
[15:09] * JaCe` (~JaCe@212-83-161-112.rev.poneytelecom.eu) has joined #ceph
[15:09] <JaCe`> hello
[15:10] <JaCe`> i'm really new to ceph and i'm following the ceph.com documentation
[15:10] <JaCe`> i have a question about production environments
[15:10] <JaCe`> is ceph-deploy a reliable tool for production use? or is it better to manage the ceph cluster manually?
[15:14] <kevinkevin> I guess ceph-deploy is no more harmful to production than ceph itself
[15:14] <kevinkevin> works well, in usual cases.
[15:15] <kevinkevin> not with LVM-based journals, ... but anything could be arranged later on
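
For reference, a typical ceph-deploy bootstrap from the quick-start docs of this era looks roughly like the sketch below; the node names and the sdb data disk are placeholders:

    # assumes passwordless ssh from the admin box to node1..node3
    ceph-deploy new node1                       # write initial ceph.conf with node1 as monitor
    ceph-deploy install node1 node2 node3       # install ceph packages on all nodes
    ceph-deploy mon create-initial              # bring up the monitor(s) and gather keys
    ceph-deploy osd create node2:sdb node3:sdb  # prepare and activate one OSD per data disk
    ceph-deploy admin node1 node2 node3         # push ceph.conf and the admin keyring
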
[15:17] <JaCe`> thanks kevinkevin
[15:18] <JaCe`> i have to read more documentation
[15:18] <JaCe`> some people told me the ceph founder said that the block device mode wasn't ready for production
[15:18] <JaCe`> and that the more reliable option was object storage
[15:19] <JaCe`> (fosdem source)
[15:19] <JaCe`> and i don't find any warning about this in the documentation
[15:19] <JaCe`> is that a fact or a rumour?
[15:19] <Sysadmin88> what date is on your source?
[15:20] <Sysadmin88> all the stuff i've seen says everything except cephfs is 'awesome' and cephfs is 'nearly awesome'
[15:22] <kevinkevin> I've already lost a few VMs running from rbd. Shit happened, ... in a size:2 pool, a PG was not replicated when I lost a disk. 5 days later, I still have my PG 'creating', slow osd requests, ... and I'm rebuilding my cluster within a new pool. Awesome, in a way. Production-ready: depends on your use case.
[15:23] <Kingrat> and this is why you use 3 replicas
[15:24] <kevinkevin> if a 2-replica pool can't ensure the 2 replicas are there, I see no point in growing to 3
[15:24] <Sysadmin88> depends on your cluster
[15:24] <Sysadmin88> and how fast it was set to 'repair'
[15:24] <Sysadmin88> which depends how many OSD and hosts you have
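
Replica count is a per-pool setting, so a pool can be moved to three copies after the fact; a minimal sketch, assuming a pool named rbd:

    ceph osd pool set rbd size 3      # keep three copies of every object
    ceph osd pool set rbd min_size 2  # keep serving I/O while two copies are available
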
[15:27] <JaCe`> i have to store some PB of flat files
[15:27] <JaCe`> that's why i'm looking at ceph
[15:27] <JaCe`> i tried rozofs
[15:27] <JaCe`> but the master drbd redundancy is weird
[15:34] * mlausch (~mlausch@2001:8d8:1fe:7:893:30c1:d742:22fb) Quit (Ping timeout: 480 seconds)
[15:37] * Sysadmin88 (~IceChat77@2.125.213.8) Quit (Read error: Connection reset by peer)
[15:43] * mlausch (~mlausch@2001:8d8:1fe:7:3862:2d1d:6bd7:5984) has joined #ceph
[15:52] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (Quit: Leaving)
[16:10] * bkopilov (~bkopilov@bzq-109-67-165-243.red.bezeqint.net) has joined #ceph
[16:14] * vbellur (~vijay@122.167.249.176) Quit (Ping timeout: 480 seconds)
[16:16] * Miouge (~Miouge@h-72-233.a163.priv.bahnhof.se) Quit (Quit: Miouge)
[16:17] * Miouge (~Miouge@h-72-233.a163.priv.bahnhof.se) has joined #ceph
[16:18] * cookednoodles (~eoin@89-93-153-201.hfc.dyn.abo.bbox.fr) has joined #ceph
[16:24] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) has joined #ceph
[16:26] * vbellur (~vijay@122.167.70.195) has joined #ceph
[16:31] * RayTracer (~RayTracer@153.19.7.39) has joined #ceph
[16:49] * Manshoon (~Manshoon@50.95.221.217) has joined #ceph
[16:50] * vbellur (~vijay@122.167.70.195) Quit (Ping timeout: 480 seconds)
[17:00] * vbellur (~vijay@122.167.221.109) has joined #ceph
[17:01] * RayTracer (~RayTracer@153.19.7.39) Quit (Remote host closed the connection)
[17:04] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[17:07] * linjan (~linjan@195.110.41.9) Quit (Ping timeout: 480 seconds)
[17:07] * fdmanana (~fdmanana@bl13-157-248.dsl.telepac.pt) has joined #ceph
[17:20] * Sysadmin88 (~IceChat77@2.125.213.8) has joined #ceph
[17:22] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[17:26] * MACscr (~Adium@2601:d:c800:de3:4deb:427e:189f:c9a9) has joined #ceph
[17:43] * Nacer (~Nacer@c2s31-2-83-152-89-17.fbx.proxad.net) has joined #ceph
[17:45] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[18:07] * Nacer (~Nacer@c2s31-2-83-152-89-17.fbx.proxad.net) Quit (Remote host closed the connection)
[18:08] * TomB_ (~tom@167.88.45.146) Quit (Ping timeout: 480 seconds)
[18:10] * Manshoon (~Manshoon@50.95.221.217) Quit (Remote host closed the connection)
[18:11] * vbellur (~vijay@122.167.221.109) Quit (Ping timeout: 480 seconds)
[18:11] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[18:15] * BManojlovic (~steki@cable-89-216-233-224.dynamic.sbb.rs) has joined #ceph
[18:15] * Manshoon (~Manshoon@ip-64-134-229-24.public.wayport.net) has joined #ceph
[18:22] * vbellur (~vijay@122.178.232.5) has joined #ceph
[18:23] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[18:23] * Manshoon (~Manshoon@ip-64-134-229-24.public.wayport.net) Quit (Ping timeout: 480 seconds)
[18:24] * kawa2014 (~kawa@2001:67c:1560:8007::aac:c1a6) has joined #ceph
[18:28] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[18:29] * cookednoodles (~eoin@89-93-153-201.hfc.dyn.abo.bbox.fr) Quit (Quit: Ex-Chat)
[18:35] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[18:41] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) has joined #ceph
[18:55] * vbellur (~vijay@122.178.232.5) Quit (Ping timeout: 480 seconds)
[19:06] * kawa2014 (~kawa@2001:67c:1560:8007::aac:c1a6) Quit (Ping timeout: 480 seconds)
[19:15] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) has joined #ceph
[19:23] * RayTracer (~RayTracer@host-81-190-2-156.gdynia.mm.pl) has joined #ceph
[19:23] * RayTracer (~RayTracer@host-81-190-2-156.gdynia.mm.pl) Quit (Remote host closed the connection)
[19:25] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[19:35] * rljohnsn (~rljohnsn@c-73-15-126-4.hsd1.ca.comcast.net) has joined #ceph
[19:40] * linjan (~linjan@80.179.241.26) has joined #ceph
[19:56] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[19:56] * MentalRay (~MentalRay@107.171.161.165) has joined #ceph
[20:10] * stephan (~Adium@dslb-178-008-020-087.178.008.pools.vodafone-ip.de) Quit (Quit: Leaving.)
[20:17] * jwilkins (~jwilkins@c-67-180-123-48.hsd1.ca.comcast.net) has joined #ceph
[20:23] * linjan (~linjan@80.179.241.26) Quit (Ping timeout: 480 seconds)
[20:25] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[20:26] * Manshoon (~Manshoon@ip-64-134-229-24.public.wayport.net) has joined #ceph
[20:37] * derjohn_mob (~aj@ip-95-223-126-17.hsi16.unitymediagroup.de) Quit (Ping timeout: 480 seconds)
[20:42] * Concubidated (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[21:16] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[21:21] * RayTracer (~RayTracer@host-81-190-2-156.gdynia.mm.pl) has joined #ceph
[21:21] * Manshoon (~Manshoon@ip-64-134-229-24.public.wayport.net) Quit (Remote host closed the connection)
[21:32] * dyasny (~dyasny@198.251.59.151) has joined #ceph
[21:51] * dyasny (~dyasny@198.251.59.151) Quit (Ping timeout: 480 seconds)
[21:52] * stephan (~Adium@dslb-178-008-020-087.178.008.pools.vodafone-ip.de) has joined #ceph
[21:53] * garphy`aw is now known as garphy
[21:54] * pi (~pi@host-81-190-2-156.gdynia.mm.pl) has joined #ceph
[21:57] <RayTracer> Hi. What is an acceptable difference in OSD usage within a cluster? We're noticing about a 10-20% difference in usage across some of our osd capacity. Is there a way to better handle that balance? I read that recalculating the pg_num value for our pools might help with it a little.
[22:00] <via> does ceph osd tree show that the weights are all relatively equal?
[22:00] * haomaiwa_ (~haomaiwan@115.218.158.93) has joined #ceph
[22:02] <Kingrat> RayTracer, how many osd do you have and how many pgs do you have per pool? usually imbalance is caused by not having enough pgs
[22:05] <RayTracer> Kingrat: we have 14 OSDs with 3 pools with 512 pgp_num each.
[22:06] <Kingrat> RayTracer, and what replica size? 3?
[22:07] <RayTracer> Kingrat: 2 replicas
[22:07] * haomaiwang (~haomaiwan@115.218.153.142) Quit (Ping timeout: 480 seconds)
[22:08] <Kingrat> are your weights appropriate for the size of the drive?
[22:08] <RayTracer> Oh. I just spotted that one of our pools has only 128 pg_num.
[22:09] <Kingrat> yeah, 512 is ok, 128 is a bit low for that many osd
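
The rule of thumb usually quoted is roughly (OSDs x 100) / replica size placement groups in total, so 14 OSDs at size 2 suggests around 700, rounded to a nearby power of two. Raising an existing pool looks roughly like this, with <pool> as a placeholder:

    ceph osd pool get <pool> pg_num       # check the current value
    ceph osd pool set <pool> pg_num 512   # split the PGs (triggers data movement)
    ceph osd pool set <pool> pgp_num 512  # let CRUSH actually place the new PGs
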
[22:10] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[22:14] * Miouge (~Miouge@h-72-233.a163.priv.bahnhof.se) Quit (Quit: Miouge)
[22:14] * mykola (~Mikolaj@91.225.201.255) Quit (Quit: away)
[22:16] <RayTracer> Kingrat: Is changing the pg value very stressful for the cluster?
[22:18] <Kingrat> it will rebalance, it depends on your settings, in a smaller cluster you may see more stress with the default settings
[22:19] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[22:19] * MACscr (~Adium@2601:d:c800:de3:4deb:427e:189f:c9a9) Quit (Ping timeout: 480 seconds)
[22:20] <Kingrat> i have 16 osds in mine, and i'm using osd max backfills = 2 and osd recovery max active = 2
[22:20] <Kingrat> the default settings are much higher, and were causing me some issues
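
Those throttles can live in ceph.conf or be injected into running OSDs without a restart; a minimal sketch:

    # equivalent ceph.conf entries under [osd]: osd max backfills = 2, osd recovery max active = 2
    ceph tell osd.\* injectargs '--osd-max-backfills 2 --osd-recovery-max-active 2'
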
[22:21] * MACscr (~Adium@2601:d:c800:de3:4deb:427e:189f:c9a9) has joined #ceph
[22:22] <RayTracer> When you mention this i remembered that some other time i met this settings before. Afair there is also osd_recovery_op_priority that can adjust recovery upon rebalance.
[22:22] <RayTracer> \some other time i met this setings before.. 'guess it is time for sleep to me.
[22:22] <RayTracer> :P
[22:24] <RayTracer> Kingrat: Ok! I'll schedule this rebalance then. Thank you very much for help.
[22:24] <RayTracer> :]
[22:24] <Kingrat> np, if it isn't enough you could probably go to 1024, or do a reweight by utilization
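
ceph can do that reweight itself from current utilization; the argument is the percent above the average at which an OSD gets reweighted down (120 is the usual default):

    ceph osd reweight-by-utilization 120  # only touches OSDs more than 20% above average usage
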
[22:24] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[22:27] * PaulC (~paul@122-60-36-115.jetstream.xtra.co.nz) Quit (Ping timeout: 480 seconds)
[22:29] * Manshoon (~Manshoon@ip-64-134-229-24.public.wayport.net) has joined #ceph
[22:29] * MACscr1 (~Adium@2601:d:c800:de3:4deb:427e:189f:c9a9) has joined #ceph
[22:30] * MACscr (~Adium@2601:d:c800:de3:4deb:427e:189f:c9a9) Quit (Ping timeout: 480 seconds)
[22:33] <RayTracer> Kingrat: One more question. Changing pg_num will not affect the overall space usage on the cluster, only the object distribution across the OSDs?
[22:34] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[22:36] <Kingrat> correct
[22:38] <RayTracer> Ok. Thank you very much and cya. :]
[22:38] * RayTracer (~RayTracer@host-81-190-2-156.gdynia.mm.pl) Quit (Quit: Leaving...)
[22:39] <halbritt> am I missing something
[22:40] <halbritt> looking for QEMU packages for el7 with rbd support.
[22:40] <halbritt> they don't exist in the ceph-extras repo
[22:40] * derjohn_mob (~aj@tmo-100-99.customers.d1-online.com) has joined #ceph
[22:41] <halbritt> and I'm wondering if that's because they haven't been built yet, or there's support with the standard QEMU and I just haven't enabled it somehow
[22:58] <Anticimex> halbritt: the regular packages have support in el7
[22:59] * volter (~volter@geofrogger.net) has joined #ceph
[23:07] * delatte (~cdelatte@67.197.3.123) Quit (Quit: This computer has gone to sleep)
[23:10] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) Quit (Quit: Leaving.)
[23:12] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[23:13] <halbritt> Anticimex: thanks
[23:17] <halbritt> Are you certain?
[23:17] <halbritt> Centos 7 qemu out of the box does not support rbd.
[23:17] <halbritt> "Centos 7 qemu out of the box does not support rbd.
[23:17] <halbritt> http://www.spinics.net/lists/ceph-users/msg12153.html
[23:17] * Manshoon (~Manshoon@ip-64-134-229-24.public.wayport.net) Quit (Remote host closed the connection)
[23:18] <halbritt> keep seeing comments like that.
[23:18] <Anticimex> hmm
[23:18] <Anticimex> i thought it did, and that i tested, maybe i'm wrong
[23:18] <halbritt> well
[23:18] <halbritt> I'm working in a test environment.
[23:18] <Anticimex> the machine where i tested is reinstalled. i do think it was there
[23:18] <halbritt> one solution I read suggested to use the ovirt repo
[23:19] <Anticimex> let me know what you figure out :]
[23:19] <halbritt> same versions of qemu, just supposedly compiled with enable-rbd
[23:19] <halbritt> yeah, I'll try it and blow it away if it doesn't work.
[23:20] <Anticimex> how can i test for a qemu's rbd support?
[23:20] <halbritt> qemu-img -f
[23:20] <halbritt> I get:
[23:20] <halbritt> Supported formats: vvfat vpc vmdk vhdx vdi sheepdog sheepdog sheepdog raw host_cdrom host_floppy host_device file qed qcow2 qcow parallels nbd nbd nbd iscsi gluster gluster gluster gluster dmg cow cloop bochs blkverify blkdebug
[23:20] <Anticimex> i see, no rbd there no
[23:20] <Anticimex> (in mine neither)
[23:21] <Anticimex> then i dont remember what i did
[23:21] <halbritt> maybe used the kernel module?
[23:21] <Anticimex> no
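
A quick way to check a given build, since qemu-img lists its supported formats in its help output (the grep pattern here is just one way to do it):

    qemu-img --help | grep -q '\brbd\b' && echo "rbd supported" || echo "no rbd support"
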
[23:22] <halbritt> hopefully, this doesn't suck for tracking dependencies.
[23:23] <halbritt> ovirt has cleverly named everything "_0.2"
[23:23] <halbritt> 1.5.3-60.el7_0.2
[23:23] <halbritt> as such
[23:23] <halbritt> and now:
[23:23] <halbritt> Supported formats: vvfat vpc vmdk vhdx vdi sheepdog sheepdog sheepdog rbd raw host_cdrom host_floppy host_device file qed qcow2 qcow parallels nbd nbd nbd iscsi gluster gluster gluster gluster dmg cow cloop bochs blkverify blkdebug
[23:23] <Anticimex> \o/
[23:24] <halbritt> well, that seems to work
[23:24] <halbritt> [root@openstack tmp]# sudo qemu-img convert -O raw rbd:images/af5600e9-30bd-4700-a28d-7db777797cbc rbd:images/$(uuidgen)
[23:24] <Kingrat> why run vms from centos anyway? just wondering why not use some other linux based vm platform, like proxmox, ovirt, opennebula, etc
[23:25] <halbritt> I have a very large investment in terms of time and automation in CentOS
[23:25] <Kingrat> so you have your own management platform that you set up then
[23:26] <halbritt> in this case, I intend to use openstack
[23:26] <halbritt> RDO
[23:27] * Sysadmin88 (~IceChat77@2.125.213.8) has left #ceph
[23:32] <halbritt> If this makes it into production, it probably makes more sense to just build RPMs from source and add them to our internal repo, rather than use ovirt.
[23:32] <Anticimex> yeah
[23:32] <halbritt> it would, rather.
[23:33] <Anticimex> but i'm curious if there aren't official qemu pkgs
[23:33] <halbritt> I'm not finding 'em
[23:33] <halbritt> seems strange.
[23:33] <Anticimex> maybe 7.1 will contain some stuff
[23:33] <Anticimex> yes
[23:34] <halbritt> from what I can tell, all that's required is --enable-rbd
[23:34] <halbritt> given that gluster is in there.
[23:34] <halbritt> no reason that ceph couldn't be
[23:34] <Anticimex> aren't there qemu pkgs in rdo?
[23:35] <halbritt> lemme look
[23:35] <Anticimex> https://openstack.redhat.com/Using_Ceph_for_Cinder_with_RDO_Havana some 6.5 variant
[23:35] <halbritt> https://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7/
[23:35] <halbritt> nope
[23:35] <kraken> http://i.imgur.com/zCtbl.gif
[23:35] <halbritt> there are in ceph-extras repo for el6
[23:35] <halbritt> but not el7
[23:36] <halbritt> lol'd at that gif
[23:36] <Anticimex> heh
[23:37] <Anticimex> so basically, take srpm of regular qemu, reconfigure with --enable-rbd, recompile
[23:37] <Anticimex> wonder what rh's plan is with that, having bought inktank and all..
[23:38] <halbritt> that's what I gather to be the correct methodology.
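
A rough sketch of that rebuild on el7; the exact spec edits (the configure flag placement and the librbd1-devel BuildRequires) are assumptions that may need adjusting per qemu version:

    yum install -y yum-utils rpm-build           # tools for fetching and rebuilding the srpm
    yumdownloader --source qemu-kvm              # grab the source rpm
    rpm -ivh qemu-kvm-*.src.rpm                  # unpack into ~/rpmbuild
    # edit ~/rpmbuild/SPECS/qemu-kvm.spec: add --enable-rbd to the configure line
    # and add "BuildRequires: librbd1-devel"
    yum-builddep -y ~/rpmbuild/SPECS/qemu-kvm.spec
    rpmbuild -ba ~/rpmbuild/SPECS/qemu-kvm.spec  # build the rbd-enabled packages
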
[23:38] <halbritt> as for RH
[23:38] <halbritt> I wager there's no plan
[23:38] <Anticimex> https://encrypted.google.com/search?hl=en&q=qemu%20with%20rbd%20for%20el7 , you're not the first to wonder
[23:39] <halbritt> "RHEL 7.1 also includes Ceph userspace components and the Ceph RADOS Block Devices (RBD) kernel module for easier access to Ceph block storage devices."
[23:39] <Anticimex> great
[23:40] <halbritt> that could be meaningful
[23:40] <Anticimex> rebuild howto https://ask.openstack.org/en/question/59480/how-can-i-get-kvm-rpm-package-which-support-ceph-rbd-for-centos7-or-rhel-7/?answer=60209#post-id-60209
[23:41] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) has joined #ceph
[23:42] <Anticimex> another variant: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-June/040760.html
[23:42] <halbritt> yup
[23:43] <Anticimex> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7-Beta/html/7.1_Release_Notes/index.html
[23:49] <halbritt> heh
[23:49] <halbritt> btrfs "supported as a technology preview"
[23:49] <halbritt> any day now....
[23:54] * MentalRay (~MentalRay@107.171.161.165) Quit (Quit: This computer has gone to sleep)
[23:56] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.