#ceph IRC Log

IRC Log for 2014-08-27

Timestamps are in GMT/BST.

[0:00] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Quit: Exeunt dneary)
[0:04] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[0:07] * elder (~elder@50.250.6.142) Quit (Quit: Leaving)
[0:12] * ikrstic (~ikrstic@109-93-112-236.dynamic.isp.telekom.rs) Quit (Quit: Konversation terminated!)
[0:12] * joef (~Adium@2620:79:0:131:c9dc:7c4b:11a2:e334) Quit (Quit: Leaving.)
[0:15] * angdraug_ (~angdraug@131.252.204.134) has joined #ceph
[0:16] * angdraug (~angdraug@131.252.204.134) Quit (Ping timeout: 480 seconds)
[0:19] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[0:21] <Venturi> is it possible to add some other features to Ceph RadosGW, let's say to compute something over objects within RadosGW?
[0:23] <Venturi> Or let's say some form-post middleware for users to upload data within an HTML form, so they do not have to bother with some sort of authorization & stuff..
[0:24] <Venturi> to add some web-based functionality to Ceph RadosGW
[0:24] <Venturi> I guess this would be possible through Ceph FastCGI interface?
[0:26] <Venturi> similar to how other storage solutions use a WSGI pipeline together with some Python-written middleware...
[0:30] * jobewan (~jobewan@snapp.centurylink.net) Quit (Quit: Leaving)
[0:31] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[0:32] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[0:45] * smiley_ (~smiley@pool-173-66-4-176.washdc.fios.verizon.net) has joined #ceph
[0:48] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:51] <qhartman> It seems that increasing the number of disks and OSDs in a cluster would improve performance. I assume there's some amount of overhead that imposes some diminishing returns though
[0:53] <qhartman> If I have a cluster with 7 nodes, each running three OSDs, each OSD managing a single TB disk, and I add an eighth node with a similar configuration, what should my expectations be around performance? 1/7 faster?
[0:56] * yanzheng (~zhyan@171.221.137.238) has joined #ceph
[0:58] * lightspeed (~lightspee@2001:8b0:16e:1:8326:6f70:89f:8f9c) has joined #ceph
[1:02] <Sysadmin88> probably depends on too many factors to give a definitive answer
[1:02] <Sysadmin88> network, server specs, crush rules
[1:03] * zack_dolby (~textual@p843a3d.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:04] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:06] * sprachgenerator_ (~sprachgen@130.202.135.20) has joined #ceph
[1:09] * sprachgenerator (~sprachgen@130.202.135.20) Quit (Ping timeout: 480 seconds)
[1:09] * sprachgenerator_ is now known as sprachgenerator
[1:10] * yanzheng (~zhyan@171.221.137.238) Quit (Quit: This computer has gone to sleep)
[1:11] * alram (~alram@38.122.20.226) Quit (Ping timeout: 480 seconds)
[1:17] * fsimonce (~simon@host135-17-dynamic.8-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[1:19] * nwat (~textual@eduroam-238-17.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:24] * oms101 (~oms101@p20030057EA003A00C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:25] * ircolle (~Adium@2601:1:a580:145a:d34:8769:496a:13e3) Quit (Quit: Leaving.)
[1:26] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit (Quit: Computer has gone to sleep.)
[1:26] * valeech (~valeech@ip72-205-7-86.dc.dc.cox.net) has joined #ceph
[1:32] * bandrus (~oddo@216.57.72.205) Quit (Quit: Leaving.)
[1:33] * oms101 (~oms101@p20030057EA520E00C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:43] * joef (~Adium@2601:9:280:f2e:2c8e:5900:7909:cf5f) has joined #ceph
[1:51] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) has joined #ceph
[1:51] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) Quit ()
[1:51] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[1:51] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[1:56] * zack_dolby (~textual@e0109-114-22-3-49.uqwimax.jp) has joined #ceph
[2:00] * sjustwork (~sam@2607:f298:a:607:7c12:c0ee:7ade:8759) Quit (Quit: Leaving.)
[2:00] * yanzheng (~zhyan@171.221.137.238) has joined #ceph
[2:01] * yanzheng (~zhyan@171.221.137.238) Quit (Remote host closed the connection)
[2:02] * yanzheng (~zhyan@171.221.137.238) has joined #ceph
[2:05] <Venturi> what's the biggest object uploaded to RadosGW?
[2:07] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:09] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[2:13] * yanzheng (~zhyan@171.221.137.238) Quit (Quit: This computer has gone to sleep)
[2:18] * Concubidated (~Adium@66-87-131-234.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[2:19] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[2:19] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:20] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[2:23] * yanzheng (~zhyan@171.221.137.238) has joined #ceph
[2:25] * Pedras1 (~Adium@216.207.42.140) has joined #ceph
[2:26] * Concubidated (~Adium@66.87.66.180) has joined #ceph
[2:27] * JayJ_ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[2:27] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[2:30] * Venturi (~Venturi@93-103-91-169.dynamic.t-2.net) Quit ()
[2:30] * Pedras (~Adium@216.207.42.140) Quit (Ping timeout: 480 seconds)
[2:31] * angdraug_ (~angdraug@131.252.204.134) Quit (Quit: Leaving)
[2:31] * xarses (~andreww@12.164.168.117) Quit (Read error: Operation timed out)
[2:34] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[2:39] * Concubidated (~Adium@66.87.66.180) Quit (Ping timeout: 480 seconds)
[2:39] * MACscr (~Adium@c-98-214-170-53.hsd1.il.comcast.net) has joined #ceph
[2:44] * JayJ_ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit (Quit: Computer has gone to sleep.)
[2:45] * longguang (~chatzilla@123.126.33.253) Quit (Read error: Connection reset by peer)
[2:45] * longguang (~chatzilla@123.126.33.253) has joined #ceph
[2:46] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[2:49] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[2:49] * huangjun (~kvirc@111.174.12.80) has joined #ceph
[2:50] * JayJ_ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[2:50] * hijacker (~hijacker@213.91.163.5) Quit (Ping timeout: 480 seconds)
[2:50] <huangjun> i have a cluster with the data pool at replica 2; when i set it to 3, some osds get 100% used, and i can not restart those osds again
[2:55] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Ping timeout: 480 seconds)
[2:55] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[2:58] * elder (~elder@216-238-60-2.tncionline.net) has joined #ceph
[2:58] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:59] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[3:00] * longguang (~chatzilla@123.126.33.253) Quit (Read error: Connection reset by peer)
[3:00] * Chendi_Xue (~oftc-webi@134.134.139.74) has joined #ceph
[3:00] * longguang (~chatzilla@123.126.33.253) has joined #ceph
[3:00] <Chendi_Xue> hi, can I ask ceph-deploy questions here
[3:01] <Chendi_Xue> wanna know, if I have some tuning in ceph.conf, how do I apply these settings with ceph-deploy? I used to use mkcephfs to rebuild the ceph cluster or restart the osd daemons
[3:03] <huangjun> i think you can use ceph-deploy config push HOST to update your conf in /etc/ceph/ceph.conf
[3:03] <Chendi_Xue> just push? did you try this before? it seems it just copies my ceph.conf from the admin node to the ceph cluster
[3:04] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[3:04] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) Quit (Quit: Thanks for Everything! :-) See you! :-))
[3:05] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[3:05] <huangjun> yes, you can modify your ceph.conf first (adding the tuning config items) and then push it to the cluster
[3:05] <huangjun> and reboot all cluster daemons (osd, mds, mon)
[3:05] * scuttlemonkey is now known as scuttle|afk
[3:07] <Chendi_Xue> Ok, that will be great. Then, if I need to modify some mkfs settings, like mkfs.xfs with a bigger sector size etc, do you think I can set it in ceph.conf first, then ceph-deploy osd create?
[3:07] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Ping timeout: 480 seconds)
[3:12] * hijacker (~hijacker@bgva.sonic.taxback.ess.ie) has joined #ceph
[3:13] * joef (~Adium@2601:9:280:f2e:2c8e:5900:7909:cf5f) Quit (Quit: Leaving.)
[3:15] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[3:15] * JayJ_ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit (Remote host closed the connection)
[3:16] * JayJ_ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[3:19] <huangjun> yes, you can, just add it into ceph.conf before you create the osd
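
For reference, a minimal sketch of the workflow huangjun describes (hostnames node1..node3 are placeholders, and the exact option names should be double-checked against your release):

    # on the admin node: add the tuning / mkfs items to ceph.conf, e.g.
    [osd]
    osd mkfs type = xfs
    osd mkfs options xfs = -f -s size=4096

    # push the updated conf to the cluster nodes, then restart their daemons
    ceph-deploy --overwrite-conf config push node1 node2 node3
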
[3:21] * lucas1 (~Thunderbi@222.247.57.50) has joined #ceph
[3:23] * kevinc (~kevinc__@client64-180.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[3:24] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[3:28] * Concubidated (~Adium@66.87.66.180) has joined #ceph
[3:29] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[3:30] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[3:30] * aldavud is now known as dgurtner
[3:30] * dgurtner (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit ()
[3:30] * JayJ_ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit (Remote host closed the connection)
[3:31] * JayJ_ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[3:35] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[3:35] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit ()
[3:36] * JayJ_ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit ()
[3:48] * nhm (~nhm@107-1-123-195-ip-static.hfc.comcastbusiness.net) has joined #ceph
[3:48] * ChanServ sets mode +o nhm
[3:52] * JayJ_ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[3:56] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[4:00] * Concubidated (~Adium@66.87.66.180) Quit (Ping timeout: 480 seconds)
[4:03] * dneary (~dneary@96.237.180.105) has joined #ceph
[4:04] * zhaochao (~zhaochao@111.204.252.1) has joined #ceph
[4:07] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[4:07] * longguang_ (~chatzilla@123.126.33.253) has joined #ceph
[4:08] * longguang (~chatzilla@123.126.33.253) Quit (Ping timeout: 480 seconds)
[4:08] * longguang_ is now known as longguang
[4:18] * JayJ_ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit (Remote host closed the connection)
[4:19] * JayJ_ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[4:20] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[4:24] * elder (~elder@216-238-60-2.tncionline.net) Quit (Quit: Leaving)
[4:27] * haomaiwang (~haomaiwan@223.223.183.114) has joined #ceph
[4:28] * valeech (~valeech@ip72-205-7-86.dc.dc.cox.net) Quit (Quit: valeech)
[4:31] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[4:34] * dgbaley27 (~matt@c-98-245-167-2.hsd1.co.comcast.net) has joined #ceph
[4:34] * haomaiwang (~haomaiwan@223.223.183.114) Quit (Remote host closed the connection)
[4:35] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[4:40] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[4:43] * haomaiwa_ (~haomaiwan@223.223.183.114) has joined #ceph
[4:45] * apolloJess (~Thunderbi@202.60.8.252) has joined #ceph
[4:48] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Read error: Operation timed out)
[4:50] * fmanana (~fdmanana@bl4-181-106.dsl.telepac.pt) has joined #ceph
[4:50] * Concubidated (~Adium@66-87-66-180.pools.spcsdns.net) has joined #ceph
[4:56] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[4:57] * fdmanana (~fdmanana@bl13-150-5.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[5:10] * dneary (~dneary@96.237.180.105) Quit (Ping timeout: 480 seconds)
[5:12] * Chendi_Xue (~oftc-webi@134.134.139.74) Quit (Quit: Page closed)
[5:14] * zhaochao (~zhaochao@111.204.252.1) has left #ceph
[5:21] * yanzheng (~zhyan@171.221.137.238) Quit (Quit: This computer has gone to sleep)
[5:21] * shang (~ShangWu@175.41.48.77) has joined #ceph
[5:21] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[5:26] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[5:30] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[5:34] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[5:47] * JC1 (~JC@46.189.28.234) has joined #ceph
[5:47] * drankis (~drankis__@89.111.13.198) Quit (Read error: Connection reset by peer)
[5:53] * vbellur (~vijay@122.172.63.53) has joined #ceph
[5:54] * JC (~JC@46.189.28.234) Quit (Ping timeout: 480 seconds)
[5:54] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[5:56] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[5:58] * Vacuum_ (~vovo@88.130.210.224) has joined #ceph
[5:59] * yanzheng (~zhyan@171.221.143.132) has joined #ceph
[6:00] * nhm (~nhm@107-1-123-195-ip-static.hfc.comcastbusiness.net) Quit (Read error: Operation timed out)
[6:05] * Vacuum (~vovo@88.130.193.115) Quit (Ping timeout: 480 seconds)
[6:06] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[6:15] * JayJ_ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit (Quit: Computer has gone to sleep.)
[6:16] * vbellur (~vijay@122.172.63.53) Quit (Quit: Leaving.)
[6:17] * yanzheng (~zhyan@171.221.143.132) Quit (Quit: This computer has gone to sleep)
[6:27] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[6:33] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[6:42] * RameshN (~rnachimu@121.244.87.117) has joined #ceph
[6:46] * JayJ_ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[6:46] * Nats__ (~Nats@2001:8000:200c:0:19dd:866a:4f13:861e) Quit (Quit: Leaving)
[6:49] * ashishchandra (~ashish@49.32.0.250) has joined #ceph
[6:49] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[6:49] * KevinPerks (~Adium@2606:a000:80a1:1b00:119e:eaf3:e2a0:9451) Quit (Quit: Leaving.)
[6:50] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[6:54] * JayJ_ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit (Read error: Operation timed out)
[7:04] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[7:06] * bkopilov (~bkopilov@213.57.16.103) has joined #ceph
[7:10] * yanzheng (~zhyan@171.221.143.132) has joined #ceph
[7:11] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:18] * vbellur (~vijay@121.244.87.117) has joined #ceph
[7:29] * adamcrume (~quassel@2601:9:6680:47:45e8:53c8:8a75:fb5b) Quit (Remote host closed the connection)
[7:36] * michalefty (~micha@p20030071CF0301001D5319909125892E.dip0.t-ipconnect.de) has joined #ceph
[7:37] * dgbaley27 (~matt@c-98-245-167-2.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[7:45] * rdas (~rdas@110.227.40.66) has joined #ceph
[7:46] * rendar (~I@host27-176-dynamic.35-79-r.retail.telecomitalia.it) has joined #ceph
[7:49] * Pedras1 (~Adium@216.207.42.140) Quit (Ping timeout: 480 seconds)
[7:51] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[7:55] * blackmen (~Ajit@121.244.87.115) has joined #ceph
[7:55] * gregmark1 (~Adium@68.87.42.115) has joined #ceph
[7:57] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) Quit (Read error: Operation timed out)
[7:57] * haomaiwa_ (~haomaiwan@223.223.183.114) Quit (Ping timeout: 480 seconds)
[8:00] <longguang> how to get and print mdsmap?
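
For reference, a likely answer on a firefly-era release (subcommands are version-dependent):

    ceph mds dump                  # print the decoded mdsmap
    ceph mds getmap -o ./mdsmap    # write the raw binary map to a file
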
[8:01] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[8:03] * karnan (~karnan@121.244.87.117) has joined #ceph
[8:07] * Concubidated1 (~Adium@66-87-131-216.pools.spcsdns.net) has joined #ceph
[8:07] * zultron (~zultron@99-190-134-148.lightspeed.austtx.sbcglobal.net) has joined #ceph
[8:08] * JC1 (~JC@46.189.28.234) Quit (Quit: Leaving.)
[8:10] * longguang_ (~chatzilla@123.126.33.253) has joined #ceph
[8:10] * Concubidated1 (~Adium@66-87-131-216.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[8:13] * Concubidated (~Adium@66-87-66-180.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[8:16] * longguang (~chatzilla@123.126.33.253) Quit (Ping timeout: 480 seconds)
[8:16] * longguang_ is now known as longguang
[8:20] * haomaiwang (~haomaiwan@223.223.183.114) has joined #ceph
[8:20] * Concubidated (~Adium@66.87.65.19) has joined #ceph
[8:20] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[8:21] * shang (~ShangWu@175.41.48.77) has joined #ceph
[8:24] * [fred] (fred@earthli.ng) has joined #ceph
[8:31] * lcavassa (~lcavassa@89.184.114.246) has joined #ceph
[8:34] * rdas (~rdas@110.227.40.66) Quit (Quit: Leaving)
[8:35] * linjan (~linjan@86.62.114.202) has joined #ceph
[8:37] * fghaas (~florian@91-119-223-7.dynamic.xdsl-line.inode.at) has joined #ceph
[8:40] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[8:42] * karnan (~karnan@121.244.87.117) Quit (Quit: Leaving)
[8:42] * karnan (~karnan@121.244.87.117) has joined #ceph
[8:47] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[8:48] <longguang> hi
[8:48] <longguang> if i use xfs, is omap still needed?
[8:49] * JayJ_ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[8:54] * Nats_ (~natscogs@2001:8000:200c:0:c11d:117a:c167:16df) Quit (Read error: Connection reset by peer)
[8:54] * oblu (~o@62.109.134.112) Quit (Ping timeout: 480 seconds)
[8:57] * JayJ_ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[8:57] * JC (~JC@195.127.188.220) has joined #ceph
[8:59] * saurabh (~saurabh@121.244.87.117) has joined #ceph
[9:03] * danieljh (~daniel@0001b4e9.user.oftc.net) Quit (Quit: leaving)
[9:03] * rdas (~rdas@121.244.87.115) has joined #ceph
[9:04] * sleinen1 (~Adium@2001:620:0:68::100) Quit (Ping timeout: 480 seconds)
[9:09] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[9:15] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) has joined #ceph
[9:16] * analbeard (~shw@support.memset.com) has joined #ceph
[9:19] * steki (~steki@91.195.39.5) has joined #ceph
[9:24] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[9:26] * jordanP (~jordan@185.23.92.11) has joined #ceph
[9:26] * vbellur (~vijay@209.132.188.8) has joined #ceph
[9:26] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[9:29] * kalleeh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[9:30] * fsimonce (~simon@host135-17-dynamic.8-79-r.retail.telecomitalia.it) has joined #ceph
[9:34] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[9:38] * davidz (~Adium@cpe-23-242-12-23.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[9:38] * davidz (~Adium@cpe-23-242-12-23.socal.res.rr.com) has joined #ceph
[9:39] * gregsfortytwo (~Adium@cpe-107-184-64-126.socal.res.rr.com) has joined #ceph
[9:40] * linjan (~linjan@86.62.114.202) Quit (Read error: Operation timed out)
[9:43] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:43] * gregsfortytwo1 (~Adium@cpe-107-184-64-126.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[9:44] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[9:44] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:46] * oro_ (~oro@2001:620:20:16:3196:689e:5894:cae9) has joined #ceph
[9:46] * oro (~oro@2001:620:20:16:3196:689e:5894:cae9) has joined #ceph
[9:47] * cok (~chk@2a02:2350:18:1012:d98f:ee86:15fe:5815) has joined #ceph
[9:50] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Ping timeout: 480 seconds)
[9:50] * JayJ_ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[9:52] * lucas1 (~Thunderbi@222.247.57.50) Quit (Quit: lucas1)
[9:55] * linjan (~linjan@86.62.114.202) has joined #ceph
[9:58] * JayJ_ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[9:59] * vbellur (~vijay@209.132.188.8) Quit (Ping timeout: 480 seconds)
[10:00] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[10:03] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit ()
[10:05] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[10:05] * kalleeh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[10:09] * maxxware (~maxx@149.210.133.105) Quit (Quit: leaving)
[10:09] * maxxware (~maxx@149.210.133.105) has joined #ceph
[10:10] * michalefty (~micha@p20030071CF0301001D5319909125892E.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[10:10] * AfC (~andrew@CPE-124-184-163-168.lns16.cht.bigpond.net.au) has joined #ceph
[10:11] * vbellur (~vijay@121.244.87.117) has joined #ceph
[10:14] * haomaiwang (~haomaiwan@223.223.183.114) Quit (Remote host closed the connection)
[10:15] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[10:16] * b0e (~aledermue@juniper1.netways.de) Quit (Ping timeout: 480 seconds)
[10:18] * AfC (~andrew@CPE-124-184-163-168.lns16.cht.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[10:21] * michalefty (~micha@p20030071CF06E2001D5319909125892E.dip0.t-ipconnect.de) has joined #ceph
[10:24] * haomaiwa_ (~haomaiwan@223.223.183.114) has joined #ceph
[10:24] * b0e (~aledermue@213.95.15.4) has joined #ceph
[10:26] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[10:30] * drankis (~drankis__@89.111.13.198) has joined #ceph
[10:30] * haomaiwa_ (~haomaiwan@223.223.183.114) Quit (Read error: Operation timed out)
[10:31] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Ping timeout: 480 seconds)
[10:32] * cok (~chk@2a02:2350:18:1012:d98f:ee86:15fe:5815) has left #ceph
[10:34] * haomaiwang (~haomaiwan@223.223.183.114) has joined #ceph
[10:36] * oro (~oro@2001:620:20:16:3196:689e:5894:cae9) Quit (Ping timeout: 480 seconds)
[10:36] * oro_ (~oro@2001:620:20:16:3196:689e:5894:cae9) Quit (Ping timeout: 480 seconds)
[10:44] * lucas1 (~Thunderbi@222.247.57.50) has joined #ceph
[10:45] * oro (~oro@2001:620:20:222:8818:2a3:ca0a:4319) has joined #ceph
[10:45] * oro_ (~oro@2001:620:20:222:8818:2a3:ca0a:4319) has joined #ceph
[10:50] * JayJ_ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[10:54] * zack_dolby (~textual@e0109-114-22-3-49.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[10:57] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) Quit (Quit: Leaving.)
[10:58] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) has joined #ceph
[10:58] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[10:58] * JayJ_ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[11:03] * jks (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[11:04] * b0e (~aledermue@213.95.15.4) Quit (Quit: Leaving.)
[11:07] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[11:09] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[11:24] * CephLogBot (~PircBot@92.63.168.213) has joined #ceph
[11:24] * Topic is 'http://ceph.com/get || dev channel #ceph-devel || Calamari is Open Source! http://ceph.com/?p=5862'
[11:24] * Set by scuttlemonkey!~scuttle@nat-pool-rdu-t.redhat.com on Fri Jun 20 18:43:30 CEST 2014
[11:24] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) has joined #ceph
[11:25] * sleinen (~Adium@130.59.94.121) Quit (Ping timeout: 480 seconds)
[11:28] * haomaiwang (~haomaiwan@223.223.183.114) Quit (Ping timeout: 480 seconds)
[11:31] <joao> wido, ping
[11:32] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[11:33] * RameshN (~rnachimu@121.244.87.117) Quit (Read error: Operation timed out)
[11:33] * joerocklin (~joe@cpe-65-185-149-56.woh.res.rr.com) Quit (Ping timeout: 480 seconds)
[11:35] * lpabon (~lpabon@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[11:36] * portante (~portante@nat-pool-bos-t.redhat.com) Quit (Read error: Operation timed out)
[11:36] * gregsfortytwo (~Adium@cpe-107-184-64-126.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[11:36] * rturk|afk (~rturk@nat-pool-rdu-t.redhat.com) Quit (Read error: Operation timed out)
[11:37] * linuxkidd (~linuxkidd@cpe-066-057-017-151.nc.res.rr.com) Quit (Ping timeout: 480 seconds)
[11:37] * haomaiwang (~haomaiwan@223.223.183.114) has joined #ceph
[11:38] * ponyofde1th (~vladi@cpe-66-27-98-26.san.res.rr.com) Quit (Ping timeout: 480 seconds)
[11:38] * sleinen1 (~Adium@2001:620:0:68::101) Quit (Ping timeout: 480 seconds)
[11:38] * sage (~quassel@cpe-172-248-35-102.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[11:39] * davidz (~Adium@cpe-23-242-12-23.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[11:40] * yanzheng (~zhyan@171.221.143.132) Quit (Quit: This computer has gone to sleep)
[11:41] * portante (~portante@nat-pool-bos-t.redhat.com) has joined #ceph
[11:42] * RameshN (~rnachimu@121.244.87.117) has joined #ceph
[11:44] * vbellur (~vijay@209.132.188.8) has joined #ceph
[11:44] * yanzheng (~zhyan@171.221.143.132) has joined #ceph
[11:48] * madkiss1 (~madkiss@2001:6f8:12c3:f00f:24b3:e763:61ad:a33e) has joined #ceph
[11:49] * vmx (~vmx@dslb-084-056-058-177.084.056.pools.vodafone-ip.de) has joined #ceph
[11:50] * yanzheng (~zhyan@171.221.143.132) Quit (Quit: This computer has gone to sleep)
[11:51] * JayJ_ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[11:54] * madkiss (~madkiss@2001:6f8:12c3:f00f:59f0:806d:d6e3:fb4d) Quit (Ping timeout: 480 seconds)
[11:58] * sleinen (~Adium@130.59.94.121) has joined #ceph
[11:59] * JayJ_ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[12:00] * sleinen1 (~Adium@2001:620:0:68::101) has joined #ceph
[12:02] * madkiss (~madkiss@213162068027.public.t-mobile.at) has joined #ceph
[12:03] <huangjun> how to avoid heartbeat check suicide on an osd?
[12:04] <huangjun> should we fail an assert when the suicide timeout arrives?
[12:06] * lpabon (~lpabon@nat-pool-bos-t.redhat.com) has joined #ceph
[12:06] * sleinen (~Adium@130.59.94.121) Quit (Ping timeout: 480 seconds)
[12:07] * rturk|afk (~rturk@nat-pool-rdu-t.redhat.com) has joined #ceph
[12:07] * madkiss1 (~madkiss@2001:6f8:12c3:f00f:24b3:e763:61ad:a33e) Quit (Ping timeout: 480 seconds)
[12:08] <flaf> Hi @all, is there a pdf version of the ceph online documentation (http://ceph.com/docs/master)?
[12:28] * linjan (~linjan@86.62.114.202) Quit (Ping timeout: 480 seconds)
[12:32] * yanzheng (~zhyan@171.221.143.132) has joined #ceph
[12:32] * yanzheng (~zhyan@171.221.143.132) Quit ()
[12:33] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[12:35] * vbellur (~vijay@209.132.188.8) Quit (Ping timeout: 480 seconds)
[12:46] * kippi_ (~oftc-webi@host-4.dxi.eu) has joined #ceph
[12:46] <kippi_> hey
[12:46] <kippi_> I had ceph running, I now have stopped ceph, re-mounted my osd and now it won't start
[12:46] <kippi_> osd.0 is up
[12:46] <kippi_> the other two won't start
[12:49] * kippi_ (~oftc-webi@host-4.dxi.eu) Quit ()
[12:49] * kippi_ (~oftc-webi@host-4.dxi.eu) has joined #ceph
[12:52] * JayJ_ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[12:56] * linuxkidd (~linuxkidd@cpe-066-057-017-151.nc.res.rr.com) has joined #ceph
[12:57] * joerocklin (~joe@cpe-65-185-149-56.woh.res.rr.com) has joined #ceph
[12:59] * davidz (~Adium@cpe-23-242-12-23.socal.res.rr.com) has joined #ceph
[13:00] * huangjun (~kvirc@111.174.12.80) Quit (Ping timeout: 480 seconds)
[13:00] * JayJ_ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[13:00] * ponyofdeath (~vladi@cpe-66-27-98-26.san.res.rr.com) has joined #ceph
[13:00] * oblu (~o@62.109.134.112) has joined #ceph
[13:01] * haomaiwang (~haomaiwan@223.223.183.114) Quit (Remote host closed the connection)
[13:01] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Quit: Leaving.)
[13:01] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[13:05] * haomaiwa_ (~haomaiwan@203.69.59.199) has joined #ceph
[13:05] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Read error: Connection reset by peer)
[13:05] * jcsp (~Adium@0001bf3a.user.oftc.net) has joined #ceph
[13:06] * gregsfortytwo (~Adium@cpe-107-184-64-126.socal.res.rr.com) has joined #ceph
[13:06] * haomaiwang (~haomaiwan@223.223.183.114) has joined #ceph
[13:06] * phoenix (~phoenix@vpn1.safedata.ru) Quit ()
[13:08] <kippi_> how can I do service ceph start osd.2 from the osd?
[13:09] <fghaas> sudo start ceph-osd id=2 (if on ubuntu)
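
For reference, the rough equivalents across the init systems of the day (osd id 2 is a placeholder; run on the host that carries the osd unless noted):

    sudo start ceph-osd id=2               # ubuntu / upstart
    sudo service ceph start osd.2          # sysvinit
    sudo /etc/init.d/ceph -a start osd.2   # sysvinit; -a reaches remote hosts over ssh
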
[13:10] * b0e (~aledermue@juniper1.netways.de) Quit (Ping timeout: 480 seconds)
[13:12] * b0e (~aledermue@213.95.15.4) has joined #ceph
[13:13] * haomaiwa_ (~haomaiwan@203.69.59.199) Quit (Ping timeout: 480 seconds)
[13:15] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[13:19] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[13:27] * cok (~chk@2a02:2350:1:1203:2d94:a42f:e8da:c2dc) has joined #ceph
[13:28] * cooldharma06 (~chatzilla@218.248.24.19) has joined #ceph
[13:28] <cooldharma06> hi all
[13:28] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[13:28] <cooldharma06> i am making experiments in ceph + xen
[13:29] * analbeard (~shw@support.memset.com) Quit (Remote host closed the connection)
[13:29] <cooldharma06> anybody have any started guide for this one..
[13:30] * analbeard (~shw@support.memset.com) has joined #ceph
[13:30] * vbellur (~vijay@121.244.87.117) has joined #ceph
[13:33] * b0e (~aledermue@213.95.15.4) Quit (Quit: Leaving.)
[13:36] <cooldharma06> anybody there..
[13:36] <janos_> probably, but not with any comment on that
[13:38] <fghaas> cooldharma06: http://wiki.xen.org/wiki/Ceph_and_libvirt_technology_preview is the second Google hit for "ceph xen"; does that not help?
[13:39] <tnt> I use xen and ceph ...
[13:41] <janos_> i need to revise my statement ;)
[13:41] <tnt> well, I'm not sure I have anything to comment :p
[13:42] * nhm (~nhm@107-1-123-195-ip-static.hfc.comcastbusiness.net) has joined #ceph
[13:42] * ChanServ sets mode +o nhm
[13:42] <tnt> I used several ways: rbd.ko kernel driver, a custom made blktap driver for rbd, and the qemu driver through qdisk and qemu-upstream.
[13:45] * madkiss (~madkiss@213162068027.public.t-mobile.at) Quit (Ping timeout: 480 seconds)
[13:46] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[13:48] * huangjun (~kvirc@117.151.47.97) has joined #ceph
[13:49] <steveeJ> has anyone seen a performance comparison on librbd vs. kernel rbd?
[13:50] <tnt> well ... librbd in itself is just a library.
[13:52] * apolloJess (~Thunderbi@202.60.8.252) Quit (Quit: apolloJess)
[13:52] <steveeJ> which runs in userspace and comes with its own caching mechanism. kernel rbd runs in kernel space and uses native caching. i'd expect it to be better handled by the io-scheduler too
[13:54] <tnt> What I mean is that librbd is useless by itself. Where and how the app uses it and how it communicates will have an impact as well. For example, for xen, both my blktap driver and the qdisk driver use the same librbd, but my blktap driver is faster ...
[13:54] <cooldharma06> fghaas
[13:55] <cooldharma06> any guide to configure xen + ceph? that one is for xenserver
[13:56] <cooldharma06> tnt, do you have any guide for doing ceph + xen?
[13:57] <steveeJ> looking at it, blktap+librbd provide a block device, like the kernel rbd does. is there a good reason to prefer two userspace-libs instead of one kernel module?
[13:59] <tnt> steveeJ: yeah, 1) easier to update and more up-to-date in general. 2) if you have your OSDs as DomUs on the same xen machines, things go to hell with weird crashes (even though the classic mem issue of rbd.ko on the OSD doesn't apply because of xen separation, other stuff breaks).
[13:59] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[14:00] <tnt> cooldharma06: well, if you use the latest xen and have a qemu with rbd support installed, you can just use stuff like 'backendtype=qdisk, vdev=xvdb, target=rbd:rbd/test:id=rbd' as the disk spec.
[14:00] <tnt> there isn't much to it ...
[14:02] * yanzheng (~zhyan@171.221.143.132) has joined #ceph
[14:03] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit ()
[14:03] * sleinen1 (~Adium@2001:620:0:68::101) Quit (Read error: Connection reset by peer)
[14:03] <cooldharma06> i am using xen 4.4 and am also new to ceph. right now i am just googling about this stuff (ceph)
[14:03] * diegows (~diegows@190.190.5.238) has joined #ceph
[14:04] <tnt> Then it should "just work".
[14:04] <tnt> you need to use the xl toolstack though
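
For reference, a minimal sketch of how tnt's disk spec sits in an xl domain config (pool rbd, image test and cephx id rbd are just the example values from the line above):

    # guest.cfg -- qdisk backend; qemu talks to the cluster via its librbd support
    disk = [ 'backendtype=qdisk, vdev=xvdb, target=rbd:rbd/test:id=rbd' ]
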
[14:04] <steveeJ> tnt: interesting that you mention crashes. i'm experiencing kernel lockups lately but the backtrace is not so clear. maybe i'll give librbd a try
[14:05] <cooldharma06> tnt: yeah, i am installing it and trying to experiment with ceph but i need some guides
[14:09] * sleinen (~Adium@2001:620:0:68::100) has joined #ceph
[14:09] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[14:09] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has left #ceph
[14:12] * nhm (~nhm@107-1-123-195-ip-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[14:18] * ade (~abradshaw@193.202.255.218) has joined #ceph
[14:19] <kippi_> how can I do service ceph start osd.2 from the osd?
[14:20] * cok (~chk@2a02:2350:1:1203:2d94:a42f:e8da:c2dc) Quit (Quit: Leaving.)
[14:21] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[14:22] <kippi_> the issue is I have lost my admin server
[14:24] <kippi_> is there a way to remove my config?
[14:25] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[14:25] * sage (~quassel@cpe-172-248-35-102.socal.res.rr.com) has joined #ceph
[14:25] * ChanServ sets mode +o sage
[14:30] * D-Spair (~dphillips@cpe-74-130-79-134.swo.res.rr.com) has joined #ceph
[14:38] * fghaas (~florian@91-119-223-7.dynamic.xdsl-line.inode.at) has left #ceph
[14:43] * blackmen (~Ajit@121.244.87.115) Quit (Remote host closed the connection)
[14:44] * KevinPerks (~Adium@2606:a000:80a1:1b00:415d:db78:8f52:19d4) has joined #ceph
[14:44] * steki (~steki@91.195.39.5) Quit (Quit: I'm off, you all do what you want...)
[14:45] <flaf> Is it possible to mount the same RADOS block device on 2 different OSes?
[14:47] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[14:48] <flaf> (to have I/O from 2 different nodes on the same RADOS block device)
[14:52] * ashishchandra (~ashish@49.32.0.250) Quit (Quit: Leaving)
[14:54] * JayJ_ (~jayj@157.130.21.226) has joined #ceph
[14:54] <Sysadmin88> flaf, hypervisors in a cluster?
[14:55] * ganders (~root@200-127-158-54.net.prima.net.ar) has joined #ceph
[14:55] <flaf> Sysadmin88: No, just mount the same rados block device on 2 different servers.
[14:57] <flaf> 2 servers use the same rados block device.
[14:57] <Sysadmin88> i could be thinking of the wrong level of ceph... but if you do that with a non-cluster-aware file system inside you can mess things up.
[14:59] <flaf> Ok, so if I use the same RADOS block device in 2 different nodes and if I use ext4 fs, I will have problems.
[15:01] <flaf> Ok, indeed I mixed up the ceph level and the fs level. Sorry.
[15:01] <Sysadmin88> what workload do you need to mount on 2 nodes at once?
[15:02] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[15:04] <flaf> Sysadmin88: I'm thinking about 2 web nodes (1 master and 1 slave) which share the same Rados block device (the data, php scripts, files etc).
[15:06] <flaf> And if the master node is down -> failover and the slave becomes the master.
[15:06] <flaf> (and uses the same Rados block device)
[15:06] <flaf> Is something like that possible with ceph?
[15:08] * markbby (~Adium@168.94.245.2) has joined #ceph
[15:08] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[15:09] <flaf> (on the web nodes, the web application needs a "classical" fs mounted in a directory)
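
For reference, a sketch of the single-writer failover pattern this implies (image name, device and mountpoint are placeholders): with a non-cluster-aware fs such as ext4, only one node may have the image mapped and mounted at a time, so a failover looks like

    # on the old master, if still reachable
    umount /var/www && rbd unmap /dev/rbd0
    # on the promoted slave
    rbd map rbd/webdata && mount /dev/rbd0 /var/www
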
[15:10] * hflai (~hflai@alumni.cs.nctu.edu.tw) has joined #ceph
[15:13] * thomnico (~thomnico@192.165.183.201) has joined #ceph
[15:13] * BranchPredictor (branch@predictor.org.pl) has joined #ceph
[15:16] * JayJ_ (~jayj@157.130.21.226) Quit (Ping timeout: 480 seconds)
[15:16] * markbby (~Adium@168.94.245.2) Quit (Remote host closed the connection)
[15:16] * markbby (~Adium@168.94.245.2) has joined #ceph
[15:17] * garphy`aw is now known as garphy
[15:17] * yanzheng (~zhyan@171.221.143.132) Quit (Quit: This computer has gone to sleep)
[15:18] * nhm (~nhm@nat-pool-bos-u.redhat.com) has joined #ceph
[15:18] * ChanServ sets mode +o nhm
[15:18] * slo_ (~oftc-webi@194.249.247.164) has joined #ceph
[15:21] * gchristensen (~gchristen@li65-6.members.linode.com) has left #ceph
[15:21] * yanzheng (~zhyan@171.221.143.132) has joined #ceph
[15:21] * yanzheng (~zhyan@171.221.143.132) Quit ()
[15:22] <erice> steveeJ: You may want to look at http://tracker.ceph.com/issues/9192 on some testing of kRBD vs librbd
[15:23] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Quit: Leaving.)
[15:23] * thomnico (~thomnico@192.165.183.201) Quit (Quit: Ex-Chat)
[15:23] * thomnico (~thomnico@192.165.183.201) has joined #ceph
[15:25] * aldavud (~aldavud@217.192.177.51) has joined #ceph
[15:25] * aldavud is now known as dgurtner
[15:26] * Eco (~Eco@107.43.84.86) Quit (Ping timeout: 480 seconds)
[15:27] * yanzheng (~zhyan@171.221.143.132) has joined #ceph
[15:27] * oblu (~o@62.109.134.112) Quit (Quit: ~)
[15:30] * dgurtner (~aldavud@217.192.177.51) Quit (Quit: leaving)
[15:30] * aldavud (~aldavud@217.192.177.51) has joined #ceph
[15:31] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[15:34] * oblu (~o@62.109.134.112) has joined #ceph
[15:37] * RameshN (~rnachimu@121.244.87.117) Quit (Read error: Operation timed out)
[15:38] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) has joined #ceph
[15:39] * nhm (~nhm@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[15:39] * aldavud (~aldavud@217.192.177.51) Quit (Quit: leaving)
[15:43] * saurabh (~saurabh@121.244.87.117) Quit (Quit: Leaving)
[15:44] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[15:48] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[15:48] * garphy is now known as garphy`aw
[15:50] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) has joined #ceph
[15:55] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:56] * garphy`aw is now known as garphy
[16:00] * oblu (~o@62.109.134.112) Quit (Read error: Connection reset by peer)
[16:00] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[16:02] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit ()
[16:02] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[16:03] * oblu (~o@62.109.134.112) has joined #ceph
[16:03] * elder (~elder@50.250.6.142) has joined #ceph
[16:06] * michalefty (~micha@p20030071CF06E2001D5319909125892E.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[16:08] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Read error: Operation timed out)
[16:09] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[16:14] * tinklebear (~tinklebea@cpe-066-057-253-171.nc.res.rr.com) has joined #ceph
[16:16] <ganders> hi all, i have the journals of a cluster on ramdisk (tmpfs), and i would like to know if there's a way to back up those journals, so that if i lose power across the whole datacenter and my ceph cluster goes down, i still have a way to recover the data.
[16:16] * yanzheng (~zhyan@171.221.143.132) Quit (Quit: This computer has gone to sleep)
[16:19] <Anticimex> nope
[16:19] <kraken> http://i.imgur.com/c4gTe5p.gif
[16:20] * gregsfortytwo (~Adium@cpe-107-184-64-126.socal.res.rr.com) Quit (Quit: Leaving.)
[16:20] <Anticimex> you need some battery-backed nvram
[16:20] <Anticimex> there are cards, regular ram can't be made to not lose data w/o losing perf
[16:21] <Anticimex> ganders: only if you know for sure there are sync periods where no writes occur, and you can get a consistent copy of the journal, could it be done, right
[16:22] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[16:23] * yanzheng (~zhyan@171.221.143.132) has joined #ceph
[16:24] <ganders> anticimex: got it, yeah the thing is that i can't guarantee that period for sure
[16:26] <huangjun> can we respawn instead of assert(0) when the threadpool check finds suicide_timeout threads in the OSD?
[16:29] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[16:29] <Anticimex> ganders: if you need ram-like perf, pci-e like p3700 is a good bet
[16:29] <Anticimex> that's ssd. i think there are nvram cards too
[16:30] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[16:34] * yanzheng (~zhyan@171.221.143.132) Quit (Quit: ??????)
[16:36] <ganders> and what models of nvram cards good fit?
[16:36] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[16:37] * linuxkidd_ (~linuxkidd@2001:420:2280:1272:b0bd:57cb:f985:4d95) has joined #ceph
[16:39] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[16:45] * newbie|2 (~kvirc@117.151.47.175) has joined #ceph
[16:45] * diegows (~diegows@190.190.5.238) Quit (Read error: Operation timed out)
[16:47] * markbby1 (~Adium@168.94.245.4) has joined #ceph
[16:47] * ikrstic (~ikrstic@109-93-112-236.dynamic.isp.telekom.rs) has joined #ceph
[16:51] * huangjun (~kvirc@117.151.47.97) Quit (Read error: Operation timed out)
[16:51] * markbby1 (~Adium@168.94.245.4) Quit ()
[16:52] * markbby1 (~Adium@168.94.245.4) has joined #ceph
[16:53] * markbby (~Adium@168.94.245.2) Quit (Remote host closed the connection)
[16:54] * markl_ (~mark@knm.org) has joined #ceph
[16:55] <seapasul1i> ganders: While you could back it up, I believe the journal for osds is just the current writes to the osds. Kind of like a staging area for data. So if you were to reboot your server and all of your journals die, you should be able to ceph-osd --mkjournal and specify a path again to redo the journal based on the current object repo on the osd.
[16:55] <seapasul1i> --mkjournal
[16:55] <seapasul1i> Create a new journal file to match an existing object repository. This is useful if the journal device or file is wiped out due to a disk or file system failure.
[16:56] * tinklebear (~tinklebea@cpe-066-057-253-171.nc.res.rr.com) Quit (Quit: Nettalk6 - www.ntalk.de)
[16:57] <seapasul1i> lost Anticimex among the join/leave lines. Ignore me.
[16:57] * angdraug (~angdraug@50-196-3-97-static.hfc.comcastbusiness.net) has joined #ceph
[16:58] <ganders> seapasul1i: so what you are saying is that if a power outage occurs in the datacenter and the whole cluster goes down, then i could power on the nodes, 'recreate' the journals with --mkjournal, and all the data that was on the osds would still be there?
[16:58] * zack_dolby (~textual@p843a3d.tokynt01.ap.so-net.ne.jp) has joined #ceph
[16:58] <ganders> i ask you this since i wonder whether, if you lose the journal, you lose the osd
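
For reference, a minimal sketch of seapasul1i's suggestion (osd id 2 is a placeholder; note that with a volatile tmpfs journal, writes that had only reached the journal are gone, which is exactly the risk ganders is asking about):

    sudo stop ceph-osd id=2      # the osd must not be running
    ceph-osd -i 2 --mkjournal    # recreate the journal configured in ceph.conf
    sudo start ceph-osd id=2
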
[16:59] * rdas (~rdas@121.244.87.115) Quit (Quit: Leaving)
[17:00] * gregsfortytwo (~Adium@2600:1012:b00a:916d:e8c6:cbf:e46e:2d11) has joined #ceph
[17:00] * rdas (~rdas@121.244.87.115) has joined #ceph
[17:01] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:02] * markbby (~Adium@168.94.245.2) has joined #ceph
[17:03] * gregsfortytwo2 (~Adium@2607:f298:a:607:218c:9514:5a00:f003) Quit (Ping timeout: 480 seconds)
[17:07] * markbby1 (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[17:12] * markl_ (~mark@knm.org) Quit (Quit: leaving)
[17:17] * thomnico (~thomnico@192.165.183.201) Quit (Ping timeout: 480 seconds)
[17:21] <ganders> has anyone tried the fusion-io cards? like the PX600 Atomic series?
[17:21] <ganders> would that card work fine for journals? i guess it would survive a power outage in the dc
[17:22] * gregsfortytwo (~Adium@2600:1012:b00a:916d:e8c6:cbf:e46e:2d11) Quit (Read error: Connection reset by peer)
[17:22] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[17:23] * ade (~abradshaw@193.202.255.218) Quit (Quit: Too sexy for his shirt)
[17:24] * alram (~alram@38.122.20.226) has joined #ceph
[17:24] * gregsfortytwo (~Adium@38.122.20.226) has joined #ceph
[17:27] * nhm (~nhm@nat-pool-bos-u.redhat.com) has joined #ceph
[17:27] * ChanServ sets mode +o nhm
[17:29] * yuriw (~Adium@c-76-126-35-111.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:29] * yuriw (~Adium@c-76-126-35-111.hsd1.ca.comcast.net) has joined #ceph
[17:30] * yuriw (~Adium@c-76-126-35-111.hsd1.ca.comcast.net) Quit ()
[17:31] * xarses (~andreww@12.164.168.117) has joined #ceph
[17:31] * yuriw (~Adium@c-76-126-35-111.hsd1.ca.comcast.net) has joined #ceph
[17:32] * angdraug (~angdraug@50-196-3-97-static.hfc.comcastbusiness.net) Quit (Remote host closed the connection)
[17:35] * blackmen (~Ajit@42.104.12.76) has joined #ceph
[17:40] * kevinc (~kevinc__@client64-180.sdsc.edu) has joined #ceph
[17:40] <ganders> anyone?
[17:41] * JC (~JC@195.127.188.220) Quit (Quit: Leaving.)
[17:42] * dmsimard_away is now known as dmsimard
[17:43] * angdraug (~angdraug@50-196-3-97-static.hfc.comcastbusiness.net) has joined #ceph
[17:43] * newbie|2 (~kvirc@117.151.47.175) Quit (Ping timeout: 480 seconds)
[17:44] * adamcrume (~quassel@50.247.81.99) has joined #ceph
[17:46] * jordanP (~jordan@185.23.92.11) Quit (Quit: Leaving)
[17:47] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[17:47] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[17:51] * oro (~oro@2001:620:20:16:3196:689e:5894:cae9) Quit (Ping timeout: 480 seconds)
[17:51] * oro_ (~oro@2001:620:20:16:3196:689e:5894:cae9) Quit (Ping timeout: 480 seconds)
[17:55] * dis (~dis@109.110.67.120) Quit (Ping timeout: 480 seconds)
[17:56] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[17:57] * JC (~JC@46.189.28.185) has joined #ceph
[17:58] * dis (~dis@109.110.67.53) has joined #ceph
[18:00] * sjustlaptop (~sam@24-205-54-233.dhcp.gldl.ca.charter.com) has joined #ceph
[18:00] * scuttle|afk is now known as scuttlemonkey
[18:00] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Read error: Connection reset by peer)
[18:01] * rdas (~rdas@121.244.87.115) Quit (Quit: Leaving)
[18:01] * kevinc (~kevinc__@client64-180.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[18:04] * linuxkidd_ (~linuxkidd@2001:420:2280:1272:b0bd:57cb:f985:4d95) Quit (Quit: Leaving)
[18:04] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[18:06] * bandrus (~Adium@4.31.55.106) has joined #ceph
[18:10] * bandrus (~Adium@4.31.55.106) has left #ceph
[18:12] * joef (~Adium@2620:79:0:131:ad74:906e:f096:7bee) has joined #ceph
[18:15] * kevinc (~kevinc__@client64-180.sdsc.edu) has joined #ceph
[18:18] * angdraug (~angdraug@50-196-3-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[18:18] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[18:19] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[18:23] * sleinen (~Adium@2001:620:0:68::100) Quit (Ping timeout: 480 seconds)
[18:24] * RameshN (~rnachimu@101.222.242.11) has joined #ceph
[18:29] * danieljh (~daniel@0001b4e9.user.oftc.net) has joined #ceph
[18:29] * thomnico (~thomnico@212.214.9.162) has joined #ceph
[18:30] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[18:30] * thomnico (~thomnico@212.214.9.162) Quit ()
[18:37] * adamcrume (~quassel@50.247.81.99) Quit (Read error: Connection reset by peer)
[18:39] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:41] * blackmen (~Ajit@42.104.12.76) Quit (Quit: Leaving)
[18:42] * dis (~dis@109.110.67.53) Quit (Ping timeout: 480 seconds)
[18:43] * adamcrume (~quassel@50.247.81.99) has joined #ceph
[18:43] * lcavassa (~lcavassa@89.184.114.246) Quit (Quit: Leaving)
[18:44] * kevinc (~kevinc__@client64-180.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[18:45] * bkopilov (~bkopilov@213.57.16.103) Quit (Ping timeout: 480 seconds)
[18:46] * vbellur (~vijay@122.172.63.53) has joined #ceph
[18:46] * bkopilov (~bkopilov@213.57.67.62) has joined #ceph
[18:46] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[18:46] * RameshN (~rnachimu@101.222.242.11) Quit (Quit: Quit)
[18:47] * kevinc (~kevinc__@client64-180.sdsc.edu) has joined #ceph
[18:48] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[18:50] * dis (~dis@109.110.67.173) has joined #ceph
[18:52] * BManojlovic (~steki@91.195.39.5) Quit (Quit: I'm off, you all do what you want...)
[18:55] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[18:58] * kevinc (~kevinc__@client64-180.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[18:58] * kevinc (~kevinc__@client64-180.sdsc.edu) has joined #ceph
[18:59] * jobewan (~jobewan@snapp.centurylink.net) has joined #ceph
[19:02] * adamcrume (~quassel@50.247.81.99) Quit (Remote host closed the connection)
[19:04] <grepory> I have calamari up and running and the salt minions have apparently registered with the salt master, but they don't appear to have done anything else. So, calamari says it sees 4 hosts, but doesn't see the ceph cluster running on them. I looked at /opt/calamari/salt and see that salt should be doing things, but nothing has been done. Is there a way I can
[19:04] <grepory> manually force them to do the things they need to do?
[19:05] * vbellur (~vijay@122.172.63.53) Quit (Quit: Leaving.)
[19:12] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[19:14] * sleinen1 (~Adium@2001:620:0:68::103) has joined #ceph
[19:15] <dmick> grepory: when you say "have registered", what's the evidence you're seeing (and what's the output of sudo salt-key -L on the master)?
[19:16] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[19:17] <grepory> dmick: shows the 4 machines i've run ceph-deploy calamari connect against. calamari web says: 4 Ceph servers are connected to Calamari, but no Ceph cluster has been created yet.
[19:20] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:21] <dmick> and......salt-key -L on the master?
[19:21] <grepory> dmick: sorry, that's what i meant by the first part of that. i see the four accepted keys i expect.
[19:21] <dmick> ok, how about "salt '*' test.ping"
[19:22] <grepory> dmick: true for all four
[19:22] <dmick> ok, digging...
[19:22] <grepory> <3
[19:23] <grepory> dmick: everything in calamari.py appears to have completed successfully. so i'm guessing it is something to do with the salt side of things. i've been trying to read up on salt, but you know how it goes.
[19:28] * adamcrume (~quassel@2601:9:6680:47:b48c:cc28:4b5e:4c8d) has joined #ceph
[19:28] <grepory> dmick: it looks like it might be safe to do salt '*' state.highstate -- i.e. it doesn't appear to modify ceph directly.
[19:28] <grepory> dmick: and would, i think, install diamond and get it running and configured... which i guess means that calamari would see the cluster?
[19:28] <dmick> how about salt <pickaminion> ceph.get_heartbeats
[19:29] <grepory> that is a good start
[19:29] <dmick> (the highstate should have happened when the minion connected)
[19:29] <grepory> dmick: it didn't
[19:29] <grepory> 'ceph.get_heartbeats' is not available.
[19:29] <dmick> so I would first try restarting salt-minion on the minions
[19:29] <grepory> ok
[19:30] <grepory> i think i tried that on friday, but i'll try again
[19:32] <grepory> dmick: 2014-08-27 12:51:01,190 [salt.state ][INFO ] Loading fresh modules for state activity
[19:32] <grepory> then no more logging
[19:32] <grepory> salt-minion just polling forever.
[19:39] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[19:39] <grepory> dmick: hmmm... i think that salt-master doesn't know about /opt/calamari/salt/salt/top.sls
[19:39] <dmick> /etc/salt/master.d/calamari.conf?
[19:39] <grepory> yeah that's what i was just looking at
[19:40] <grepory> state_top isn't set
[19:40] * zerick (~eocrospom@190.187.21.53) Quit (Ping timeout: 480 seconds)
[19:40] <grepory> so i'm going to guess it assumes /etc/salt/top.sls or something
[19:40] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) has joined #ceph
[19:40] <grepory> but file_roots is set though
[19:40] <dmick> yes, that should be enough
[19:40] <dmick> test.ping still works, ceph.get_heartbeats still fails?
[19:41] <grepory> correct
[19:41] <grepory> wait
[19:41] <grepory> i spoke too soon
[19:41] * danieljh (~daniel@0001b4e9.user.oftc.net) Quit (Quit: Lost terminal)
[19:42] <grepory> it didn't fail this time
[19:42] <grepory> so maybe restarting salt-minion made that pass.
[19:42] <grepory> trying another box to verify
[19:42] <grepory> okay nevermind. now it is saying not available again.
[19:45] <grepory> dmick: sorry about the confusion. there was one time that ceph.get_heartbeats just sat there for a little bit and then nothing was output
[19:45] * rendar (~I@host27-176-dynamic.35-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[19:47] * rendar (~I@host27-176-dynamic.35-79-r.retail.telecomitalia.it) has joined #ceph
[19:49] * KevinPerks (~Adium@2606:a000:80a1:1b00:415d:db78:8f52:19d4) has left #ceph
[19:50] * KevinPerks (~Adium@2606:a000:80a1:1b00:415d:db78:8f52:19d4) has joined #ceph
[19:50] <grepory> dmick: pillar/schedules.sls mentions ceph.heartbeat; i tried that as well to no avail (both from the master and on a minion directly)
[19:50] <dmick> highstate is probably reasonable to try
[19:51] <dmick> salt '*' state.highstate from the master
[19:51] * michalefty (~micha@ip2504598c.dynamic.kabel-deutschland.de) has joined #ceph
[19:51] <grepory> yeah. trying that now
[19:51] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[19:52] <grepory> Comment: No Top file or external nodes data matches found
[19:52] * alram (~alram@38.122.20.226) Quit (Quit: leaving)
[19:53] * alram (~alram@38.122.20.226) has joined #ceph
[19:55] <grepory> it looks as though 2 of the nodes didn't respond as well
[19:57] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[19:57] * dgbaley27 (~matt@c-98-245-167-2.hsd1.co.comcast.net) has joined #ceph
[20:01] * lalatenduM (~lalatendu@122.171.103.80) has joined #ceph
[20:02] * _slo (~oftc-webi@93-103-91-169.dynamic.t-2.net) has joined #ceph
[20:03] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[20:03] * kevinc (~kevinc__@client64-180.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[20:07] * maxxware_ (~maxx@149.210.133.105) has joined #ceph
[20:08] * kevinc (~kevinc__@client64-180.sdsc.edu) has joined #ceph
[20:10] * dmatson (~david.mat@216.51.73.42) has joined #ceph
[20:10] * smiley (~smiley@205.153.36.170) has joined #ceph
[20:10] * michalefty (~micha@ip2504598c.dynamic.kabel-deutschland.de) has left #ceph
[20:12] * maxxware (~maxx@149.210.133.105) Quit (Ping timeout: 480 seconds)
[20:12] <grepory> dmick: 'saltutil.pillar_refresh' is not available. --- a little concerning
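
For reference, the master-side wiring being checked above looks roughly like this in /etc/salt/master.d/calamari.conf (paths inferred from the discussion; they may differ between calamari versions):

    file_roots:
      base:
        - /opt/calamari/salt/salt/
    pillar_roots:
      base:
        - /opt/calamari/salt/pillar/
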
[20:16] * sjusthm (~sam@24-205-54-233.dhcp.gldl.ca.charter.com) has joined #ceph
[20:16] <smiley> Hello...does anyone have any exp with Opennebula + ceph?
[20:17] <dmatson> I have two nodes mounting CephFS via the kernel driver. One node writes a file, then roughly a hundred milliseconds later the other node tries to read the file, but it isn't there yet. Is there any way I can force the kernel driver to "refresh" before trying to read?
[20:18] <smiley> we are looking at either cloudstack or opennebula....and I was wondering if anyone here has used the two?
[20:21] * angdraug (~angdraug@131.252.204.134) has joined #ceph
[20:26] * sjustlaptop (~sam@24-205-54-233.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[20:26] * jmlowe (~Adium@2601:d:a800:511:104b:e8f4:27af:312d) has left #ceph
[20:28] * lalatenduM (~lalatendu@122.171.103.80) Quit (Quit: Leaving)
[20:37] * gregmark1 (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[20:44] * zigo (quasselcor@ipv6-ftp.gplhost.com) Quit (Remote host closed the connection)
[20:46] * zigo (quasselcor@atl.apt-proxy.gplhost.com) has joined #ceph
[20:56] * kevinc (~kevinc__@client64-180.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[21:01] * Nacer (~Nacer@2001:41d0:fe82:7200:a4b7:25c9:9ed7:2ac3) has joined #ceph
[21:04] * millsu2 (~bgardner@fw.oremut02.us.wh.verio.net) has joined #ceph
[21:05] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz...)
[21:06] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[21:18] * fghaas (~florian@91-119-223-7.dynamic.xdsl-line.inode.at) has joined #ceph
[21:19] * ircolle is now known as ircolle-afk
[21:19] * Nacer (~Nacer@2001:41d0:fe82:7200:a4b7:25c9:9ed7:2ac3) Quit (Remote host closed the connection)
[21:19] <fghaas> nhm or gregsfortytwo or joao: if any of you guys are around, I wondered if the fact that the "ceph tell osd.X bench" defaults are no longer sane is a known issue
[21:19] <fghaas> as in, "ceph tell osd.0 bench" without arguments would lead to EINVAL
[21:20] <gregsfortytwo> uh, not known to me
[21:21] <fghaas> ceph tell osd.0 bench
[21:21] <fghaas> Error EINVAL: 'count' values greater than 750 for a block size of 4096 kB, assuming 102400 kB/s, for 30 seconds, can cause ill effects on osd. Please adjust 'osd_bench_large_size_max_throughput' with a higher value if you wish to use a higher 'count'.
[21:21] <gregsfortytwo> joao was the last one to touch that stuff that I'm aware of
[21:21] <gregsfortytwo> fghaas: what version are you running?
[21:21] <gregsfortytwo> and are the OSDs and mons the same?
[21:22] <fghaas> 0.80.5-1~bpo70+1 (on debian), and yes
[21:23] <fghaas> this is, obviously, with "osd_bench_large_size_max_throughput" untouched
[21:23] <gregsfortytwo> hmm, that actually does sound sort of familiar; did you search the tracker or scan the changelog on newer firefly releases?
[21:26] <fghaas> as per git blame https://github.com/ceph/ceph/commit/25a9bd3251ceb805c1cdcd7b470b939ab4dd2514 is the commit that introduced that error message or last touched it, back in March
[21:28] <fghaas> ah! there 'tis
[21:28] <fghaas> https://github.com/ceph/ceph/commit/7dc93a9651f602d9c46311524fc6b54c2f1ac595
[21:29] <gregsfortytwo> yeah
[21:29] * kevinc (~kevinc__@client64-180.sdsc.edu) has joined #ceph
[21:29] <gregsfortytwo> hrm, looks like it's in the firefly branch but not released yet
[21:29] <gregsfortytwo> (the fix, that is)
[21:29] <gregsfortytwo> 7f9fe22a1c73d5f2783c3303fb9f3a7cfcea61c5
[21:29] <gregsfortytwo> cherry-picked from the commit you found
[21:29] <fghaas> I'm trying to wrap my head around how I need to tweak the args to work around that though :)
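For the record, the firefly-era syntax is "ceph tell osd.N bench [TOTAL_BYTES] [BLOCK_SIZE_BYTES]", which suggests two workarounds until the fix ships, both sketches (the numbers are illustrative, and if the broken check still trips on a small write, raising the cap is the surer route):

    # write 100 MB in 4 MB blocks, keeping 'count' well under the limit
    ceph tell osd.0 bench 104857600 4194304

    # or raise the throughput cap the check derives its limit from
    ceph tell osd.0 injectargs '--osd_bench_large_size_max_throughput 209715200'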
[21:31] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[21:31] * joef (~Adium@2620:79:0:131:ad74:906e:f096:7bee) Quit (Quit: Leaving.)
[21:32] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz...)
[21:32] <_slo> Is there any open-source app that would connect to RadosGW objects and do some data mining over the data?
[21:33] <runfromnowhere> smiley: I've used (and am using) those two together
[21:33] * BManojlovic (~steki@95.180.4.243) has joined #ceph
[21:33] <runfromnowhere> smiley: Ping me when you're around and I'll gladly talk about my experiences :)
[21:33] <angdraug> is clock skew on mon/osd still able to irrecoverably break the cluster?
[21:34] <angdraug> or can it recover once clock skew is eliminated?
[21:36] <fghaas> thanks though, gregsfortytwo :)
[21:36] <gregsfortytwo> np
[21:37] <gregsfortytwo> angdraug: it will prevent availability but definitely can't damage anything permanently
[21:38] <angdraug> what I'm seeing is that when I resume a snapshot of a vm running osd, osd remains offline until I restart it by hand
[21:38] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[21:39] <angdraug> the problem is that until the woken vm has synced its clock, it's got hours of skew
[21:39] * rotbart (~redbeard@2a02:908:df10:d300:6267:20ff:feb7:c20) has joined #ceph
[21:39] <angdraug> s/offline/dead/
[21:39] <angdraug> http://paste.openstack.org/show/94917/
[21:40] <gregsfortytwo> yeah, that'll do it
[21:40] <angdraug> any way to make it recover automatically, without a restart?
[21:41] <gregsfortytwo> I'm not sure
[21:41] <angdraug> having a test script restart the services it's supposed to be testing for reliability smells wrong
[21:41] <gregsfortytwo> we haven't put much effort into making it friendly for running in VMs like that; if the host OS is getting the right time but the OSD isn't, then I can't think of anything
[21:42] <gregsfortytwo> but perhaps there are some timer settings or whatever you can futz with, because the OSD is just using normal syscalls for its time checks and maintenance
[21:42] <angdraug> you think in our case osd is using its own time and not system time?
[21:42] <angdraug> that's why it's not recovering when system time is synced?
[21:43] <angdraug> hm no that question sounds wrong too
[21:43] <angdraug> if osd is using syscalls it should get the same time the OS has
[21:43] <gregsfortytwo> I can't think of any mechanism for it to do that, but clocks are tricky things and VM systems spend a lot of time lying about them to deal with suspended or turned-off VMs, so I dunno
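Until a restart-free recovery exists, one pragmatic shape for a post-resume hook, strictly as a sketch (ntpdate, the pool server, and the sysvinit script are all assumptions about the environment): step the guest clock the moment the VM wakes, then bounce the local OSDs.

    # post-resume hook sketch: fix the clock first, then kick the OSDs
    ntpdate -u 0.pool.ntp.org        # steps immediately; ntpd alone slews too slowly
    /etc/init.d/ceph restart osd     # firefly-era sysvinit: restarts this host's OSDs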
[21:44] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[21:45] <angdraug> a shame, resuming a snapshot is much faster than redeploying a new cluster every time you want to break something
[21:45] <angdraug> do you redeploy when testing ceph ha?
[21:46] <gregsfortytwo> uh...?
[21:46] <angdraug> do you have ci tests that check failover and other node outage recovery scenarios?
[21:46] <gregsfortytwo> many of them
[21:47] <gregsfortytwo> we mostly run on real hardware
[21:47] <gregsfortytwo> and yes, we deploy between every (CI or nightly) test
[21:47] <angdraug> thanks, that's what I was asking about
[21:48] <angdraug> we're trying to increase our coverage and run more tests more often
[21:48] <gregsfortytwo> awesome
[21:49] <angdraug> ceph seems to be the only part of our stack that doesn't take kindly to being resumed from a snapshot
[21:49] <gregsfortytwo> I'm not sure what the problem with restarting the daemons is for that kind of test system, though
[21:49] <gregsfortytwo> ah, except for that, heh
[21:50] <gregsfortytwo> if this is just a testing setup, you could try "auth = none" instead of using cephx...
[21:50] * nico (~nico@200.68.116.185) has joined #ceph
[21:50] <gregsfortytwo> I think that's the bit that's breaking
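For completeness, firefly spells that as three ceph.conf options rather than a single "auth" key; disabling cephx on a throwaway test cluster would look roughly like this:

    [global]
    auth cluster required = none
    auth service required = none
    auth client required = none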
[21:50] <angdraug> one more problem is that when a test fails, we'd like to bottle that failed env in a snapshot, to be resumed and investigated later by a dev
[21:51] <angdraug> good advice, we'll try that
[21:51] <angdraug> thanks!
[21:51] * adamcrume (~quassel@2601:9:6680:47:b48c:cc28:4b5e:4c8d) Quit (Remote host closed the connection)
[21:51] <nico> Hi, anyone know the line to put in ceph.conf to change the minimal number of OSDs?
[21:51] <angdraug> pool size?
[21:52] <angdraug> nico: ^
[21:52] <nico> mmm, it's necessary to run with 2 OSDs only, not three
[21:52] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[21:55] <angdraug> you can even set it to 1, and have just 1 osd and no data redundancy
[21:55] <angdraug> that's if you don't care about losing your data, of course :)
[21:56] * BManojlovic (~steki@95.180.4.243) Quit (Ping timeout: 480 seconds)
[21:57] <nico> angdraug, you said this line... "osd pool default size = 2" ?
[21:57] <angdraug> yup
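Worth noting: the ceph.conf line only affects pools created after the change; existing pools need the CLI. A sketch (the pool name "rbd" is just an example):

    [global]
    osd pool default size = 2        # replicas for newly created pools
    osd pool default min size = 1    # writes still accepted with one replica up

    # for a pool that already exists:
    ceph osd pool set rbd size 2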
[21:58] <angdraug> btw http://irclogs.ceph.widodh.nl/ returns 503
[21:59] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz...)
[21:59] * ircolle-afk is now known as ircolle
[22:00] * rotbart (~redbeard@2a02:908:df10:d300:6267:20ff:feb7:c20) Quit (Quit: Leaving)
[22:00] <seapasul1i> angdraug: this happened to me, and although the cluster did break, once the skew was resolved the cluster did return to HEALTH_OK
[22:01] <seapasul1i> aah damn ignore me
[22:01] <seapasul1i> didn't scroll all the way down
[22:05] * fghaas (~florian@91-119-223-7.dynamic.xdsl-line.inode.at) has left #ceph
[22:05] <angdraug> seapasul1i: thanks, that's a useful data point
[22:05] <angdraug> how long did it take your cluster to recover?
[22:06] <angdraug> maybe our test scripts should simply wait longer after resume
[22:07] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[22:15] <seapasul1i> Off and on, about a day. It was healthy once I figured out the issues (it was an NTP issue that caused the osds to go down, and a python issue that stopped them from coming back after I tried rebooting them)
[22:15] * oblu (~o@62.109.134.112) Quit (Quit: ~)
[22:15] <seapasul1i> http://tracker.ceph.com/issues/8797
[22:16] <seapasul1i> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=754341
[22:17] * BManojlovic (~steki@net50-190-245-109.mbb.telenor.rs) has joined #ceph
[22:18] * Nacer (~Nacer@2001:41d0:fe82:7200:2c:9d5e:1f89:e179) has joined #ceph
[22:19] <seapasul1i> I only have a 2PB cluster though
[22:20] * Nacer (~Nacer@2001:41d0:fe82:7200:2c:9d5e:1f89:e179) Quit (Remote host closed the connection)
[22:21] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[22:21] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[22:22] * oblu (~o@62.109.134.112) has joined #ceph
[22:23] * ganders (~root@200-127-158-54.net.prima.net.ar) Quit (Quit: WeeChat 0.4.1)
[22:25] * oblu (~o@62.109.134.112) Quit (Read error: Connection reset by peer)
[22:29] * oblu (~o@62.109.134.112) has joined #ceph
[22:30] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[22:40] * oblu (~o@62.109.134.112) Quit (Quit: ~)
[22:41] * adamcrume (~quassel@eduroam-242-18.ucsc.edu) has joined #ceph
[22:42] * Concubidated (~Adium@66.87.65.19) Quit (Ping timeout: 480 seconds)
[22:46] * oblu (~o@62.109.134.112) has joined #ceph
[22:46] * Concubidated (~Adium@66.87.65.90) has joined #ceph
[22:50] * qhartman (~qhartman@den.direwolfdigital.com) Quit (Remote host closed the connection)
[23:00] * adamcrume (~quassel@eduroam-242-18.ucsc.edu) Quit (Remote host closed the connection)
[23:02] * dgbaley27 (~matt@c-98-245-167-2.hsd1.co.comcast.net) Quit (Remote host closed the connection)
[23:05] * nico (~nico@200.68.116.185) Quit (Quit: Saliendo)
[23:06] * angdraug_ (~angdraug@131.252.204.134) has joined #ceph
[23:06] * steki (~steki@net62-129-245-109.mbb.telenor.rs) has joined #ceph
[23:10] <chuffpdx> hey all.. if I wanted to renumber my ceph storage network... what is the proper method?
[23:10] * nhm (~nhm@nat-pool-bos-u.redhat.com) Quit (Read error: Operation timed out)
[23:11] * angdraug (~angdraug@131.252.204.134) Quit (Ping timeout: 480 seconds)
[23:13] * BManojlovic (~steki@net50-190-245-109.mbb.telenor.rs) Quit (Ping timeout: 480 seconds)
[23:13] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) Quit (Remote host closed the connection)
[23:18] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[23:21] * D-Spair (~dphillips@cpe-74-130-79-134.swo.res.rr.com) Quit (Ping timeout: 480 seconds)
[23:22] * oblu (~o@62.109.134.112) Quit (Quit: ~)
[23:23] * sleinen1 (~Adium@2001:620:0:68::103) Quit (Ping timeout: 480 seconds)
[23:30] * BManojlovic (~steki@net83-166-245-109.mbb.telenor.rs) has joined #ceph
[23:33] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[23:34] <seapasul1i> chuffpdx: can you explain a bit more? renumber how?
[23:35] <seapasul1i> renumber osds? change ips? change host names?
[23:36] <chuffpdx> hiya.. Yes, change the IP addresses on the storage network.
[23:36] <seapasul1i> So monitor nodes are not supposed to have their IPs changed, but docs exist on how to do it here: http://ceph.com/docs/master/rados/operations/add-or-rm-mons/#changing-a-monitor-s-ip-address
[23:37] <seapasul1i> which basically says: add a new monitor with the correct IP, make sure the cluster can talk to both, and remove the old monitor once the new one is established
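In command form, that add-then-remove dance looks roughly like the following sketch (the mon ids "b" and "a" and the address are placeholders):

    ceph mon add b 10.0.1.2:6789    # register the new address in the monmap
    # deploy and start the new mon daemon at that address, wait for quorum
    ceph mon remove a               # then drop the old monitor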
[23:38] * steki (~steki@net62-129-245-109.mbb.telenor.rs) Quit (Ping timeout: 480 seconds)
[23:38] <seapasul1i> Same goes for OSDs, I believe. So you would need to weight out and remove the affected osds, then add new osds with the correct IP.
[23:38] <seapasul1i> not sure on the osd procedure though
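One hedged addendum: unlike monitors, OSDs re-register their addresses with the cluster each time they boot, so re-IPing them is usually a config change plus a rolling restart rather than a remove-and-re-add. A sketch (the subnets are examples):

    [global]
    public network  = 192.168.0.0/24    # new client-facing subnet
    cluster network = 192.168.1.0/24    # new replication subnet

    # then, one node at a time:
    /etc/init.d/ceph restart osd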
[23:38] <chuffpdx> ah.. thanks for that link. I was searching with wrong keywords.
[23:38] * KevinPerks1 (~Adium@2606:a000:80a1:1b00:4d0c:31a2:deba:a74a) has joined #ceph
[23:39] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[23:40] * kfei (~root@61-227-16-64.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[23:41] * oblu (~o@62.109.134.112) has joined #ceph
[23:42] * angdraug_ (~angdraug@131.252.204.134) Quit (Quit: Leaving)
[23:44] * KevinPerks (~Adium@2606:a000:80a1:1b00:415d:db78:8f52:19d4) Quit (Ping timeout: 480 seconds)
[23:45] * ikrstic (~ikrstic@109-93-112-236.dynamic.isp.telekom.rs) Quit (Quit: Konversation terminated!)
[23:51] * joef (~Adium@2601:9:280:f2e:d01a:1d51:e4f9:dc81) has joined #ceph
[23:51] * vmx (~vmx@dslb-084-056-058-177.084.056.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[23:53] * kfei (~root@61-227-15-21.dynamic.hinet.net) has joined #ceph
[23:53] * danieljh (~daniel@0001b4e9.user.oftc.net) has joined #ceph
[23:55] * erice (~erice@50.245.231.209) Quit (Read error: No route to host)
[23:56] * erice (~erice@50.245.231.209) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.