#ceph IRC Log

IRC Log for 2015-02-18

Timestamps are in GMT/BST.

[0:04] * _br_ (~bjoern_of@213-239-215-232.clients.your-server.de) has joined #ceph
[0:04] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[0:12] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[0:14] * VisBits (~textual@8.29.138.28) Quit (Quit: Textual IRC Client: www.textualapp.com)
[0:18] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[0:19] * VisBits (~textual@8.29.138.28) has joined #ceph
[0:20] * Nats_ (~natscogs@114.31.195.238) Quit (Read error: Connection reset by peer)
[0:21] * VisBits_ (~textual@8.29.132.75) has joined #ceph
[0:21] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[0:22] * VisBits (~textual@8.29.138.28) Quit (Read error: Connection reset by peer)
[0:23] * Nats (~natscogs@114.31.195.238) has joined #ceph
[0:24] * _br_ (~bjoern_of@213-239-215-232.clients.your-server.de) Quit (Ping timeout: 480 seconds)
[0:26] <cholcombe973> does anyone know how to display what is in a crush bucket without dumping the entire crush ruleset?
[0:26] <cholcombe973> i mean the entire crush map
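
For reference on cholcombe973's question, a rough sketch of two commands that expose bucket contents without hand-editing the map (filtering the JSON output is left as an exercise):

    ceph osd tree          # prints the CRUSH hierarchy, so each bucket's children are visible
    ceph osd crush dump    # JSON dump of the CRUSH map; its "buckets" array lists each bucket's items
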
[0:27] * _br_ (~bjoern_of@213-239-215-232.clients.your-server.de) has joined #ceph
[0:30] * dmsimard is now known as dmsimard_away
[0:31] * VisBits_ (~textual@8.29.132.75) Quit (Quit: Textual IRC Client: www.textualapp.com)
[0:36] * joshd (~joshd@sccc-66-78-236-243.smartcity.com) Quit (Quit: Leaving.)
[0:40] * nitti (~nitti@162.222.47.218) Quit (Ping timeout: 480 seconds)
[0:49] * fsimonce (~simon@host217-37-dynamic.30-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[0:53] * rotbart (~redbeard@2a02:908:df10:d300:6267:20ff:feb7:c20) has joined #ceph
[0:57] * zack_dolby (~textual@nfmv001162072.uqw.ppp.infoweb.ne.jp) has joined #ceph
[0:58] * Concubidated (~Adium@2607:f298:b:635:68b4:7a8:5742:d6ec) Quit (Ping timeout: 480 seconds)
[0:59] * joshd (~joshd@sccc-66-78-236-243.smartcity.com) has joined #ceph
[1:03] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:03] * davidzlap1 (~Adium@cpe-23-242-189-171.socal.res.rr.com) has joined #ceph
[1:03] * davidzlap (~Adium@cpe-23-242-189-171.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[1:07] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[1:08] <vilobhmm> how do you boot from volume when your image type is ami ?
[1:09] <vilobhmm> because it looks like in order to create a volume from an image, the image needs to be in "raw" format
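
For reference, a hedged sketch of one way to handle this: convert the image to raw and upload that copy to Glance, then create the boot volume from it (the image names and the qcow2 source format are assumptions, not taken from the discussion):

    qemu-img convert -f qcow2 -O raw myimage.qcow2 myimage.raw
    glance image-create --name myimage-raw --disk-format raw --container-format bare --file myimage.raw
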
[1:10] * _br_ (~bjoern_of@213-239-215-232.clients.your-server.de) Quit (Ping timeout: 480 seconds)
[1:10] * puffy (~puffy@50.185.218.255) has joined #ceph
[1:13] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[1:17] * puffy (~puffy@50.185.218.255) Quit (Quit: Leaving.)
[1:20] * davidzlap1 (~Adium@cpe-23-242-189-171.socal.res.rr.com) Quit (Quit: Leaving.)
[1:21] * puffy (~puffy@50.185.218.255) has joined #ceph
[1:25] * zack_dol_ (~textual@nfmv001162072.uqw.ppp.infoweb.ne.jp) has joined #ceph
[1:25] * zack_dolby (~textual@nfmv001162072.uqw.ppp.infoweb.ne.jp) Quit (Read error: Connection reset by peer)
[1:25] * lcurtis (~lcurtis@47.19.105.250) Quit (Ping timeout: 480 seconds)
[1:27] * davidzlap (~Adium@2605:e000:1313:8003:142d:7638:b814:1cc9) has joined #ceph
[1:29] * zack_dolby (~textual@nfmv001174051.uqw.ppp.infoweb.ne.jp) has joined #ceph
[1:33] * zack_dol_ (~textual@nfmv001162072.uqw.ppp.infoweb.ne.jp) Quit (Ping timeout: 480 seconds)
[1:58] * avozza_ (~avozza@83.162.204.36) Quit (Remote host closed the connection)
[1:58] * avozza (~avozza@83.162.204.36) has joined #ceph
[1:58] * reed (~reed@75-101-54-131.dsl.static.fusionbroadband.com) Quit (Quit: Ex-Chat)
[1:59] * oms101 (~oms101@p20030057EA07FE00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:59] * rljohnsn (~rljohnsn@ns25.8x8.com) has left #ceph
[2:01] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:03] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[2:06] * avozza (~avozza@83.162.204.36) Quit (Ping timeout: 480 seconds)
[2:06] * togdon (~togdon@74.121.28.6) Quit (Read error: Connection reset by peer)
[2:06] * togdon_ (~togdon@74.121.28.6) has joined #ceph
[2:08] * oms101 (~oms101@p20030057EA090100EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[2:12] * joshd (~joshd@sccc-66-78-236-243.smartcity.com) Quit (Ping timeout: 480 seconds)
[2:15] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:18] * SPACESHIP (~chatzilla@d24-141-231-126.home.cgocable.net) has joined #ceph
[2:20] * sputnik13 (~sputnik13@74.202.214.170) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:24] * cholcombe973 (~chris@7208-76ef-ff1f-ed2f-329a-f002-3420-2062.6rd.ip6.sonic.net) Quit (Remote host closed the connection)
[2:26] * zack_dolby (~textual@nfmv001174051.uqw.ppp.infoweb.ne.jp) Quit (Read error: Connection reset by peer)
[2:26] * zack_dol_ (~textual@nfmv001174051.uqw.ppp.infoweb.ne.jp) has joined #ceph
[2:28] * rmoe (~quassel@173-228-89-134.dsl.static.fusionbroadband.com) has joined #ceph
[2:29] * LeaChim (~LeaChim@host86-159-114-39.range86-159.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:30] * joshd (~joshd@8.25.222.10) has joined #ceph
[2:33] * sudocat (~davidi@192.185.1.20) Quit (Ping timeout: 480 seconds)
[2:36] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[2:37] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) Quit (Quit: Away)
[2:42] * OutOfNoWhere (~rpb@76.8.45.216) has joined #ceph
[2:54] * smokedmeets (~smokedmee@c-67-174-241-112.hsd1.ca.comcast.net) has joined #ceph
[2:56] * togdon_ (~togdon@74.121.28.6) Quit (Quit: Textual IRC Client: www.textualapp.com)
[2:59] * avozza (~avozza@83.162.204.36) has joined #ceph
[3:01] * joshd (~joshd@8.25.222.10) Quit (Quit: Leaving.)
[3:05] * vasu_desk (~vasu@c-50-185-132-102.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[3:09] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) Quit (Quit: Leaving.)
[3:09] * snakamoto (~snakamoto@157.254.210.31) has joined #ceph
[3:11] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) has joined #ceph
[3:15] * Kupo1 (~tyler.wil@23.111.254.159) Quit (Read error: Connection reset by peer)
[3:16] * debian112 (~bcolbert@24.126.201.64) Quit (Quit: Leaving.)
[3:18] * zerick_ (~zerick@179.7.77.196) has joined #ceph
[3:20] <snakamoto> Hi, is there any way to have one radosgw respond to multiple domain names?
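
On snakamoto's radosgw question, a sketch only: ceph.conf carries a single rgw dns name for virtual-hosted-style buckets, so serving several domains generally means handling the extra aliases in the HTTP frontend (or, in later releases, in the region/zonegroup hostnames list); the section name below is illustrative.

    [client.radosgw.gateway]
    rgw dns name = s3.example.com
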
[3:33] * nitti (~nitti@c-66-41-30-224.hsd1.mn.comcast.net) has joined #ceph
[3:41] * nitti (~nitti@c-66-41-30-224.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[3:46] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[3:52] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[3:54] * avozza (~avozza@83.162.204.36) Quit (Ping timeout: 480 seconds)
[4:05] * snakamoto (~snakamoto@157.254.210.31) Quit (Ping timeout: 480 seconds)
[4:05] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[4:06] * ctd (~root@00011932.user.oftc.net) has joined #ceph
[4:08] * ctd_ (~root@00011932.user.oftc.net) Quit (Ping timeout: 480 seconds)
[4:14] * lcurtis (~lcurtis@47.19.105.250) Quit (Ping timeout: 480 seconds)
[4:21] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[4:23] * lcurtis (~lcurtis@ool-18bfec0b.dyn.optonline.net) has joined #ceph
[4:24] * priane (~amy@116.251.192.71) has joined #ceph
[4:27] * OutOfNoWhere (~rpb@76.8.45.216) Quit (Ping timeout: 480 seconds)
[4:35] * vakulkar (~vakulkar@c-50-185-132-102.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[4:36] * ircolle (~Adium@2601:1:a580:145a:ac76:bd70:8e4:3caf) has joined #ceph
[4:42] * pmxceph (~pmxceph@208.98.194.163) has joined #ceph
[4:43] <pmxceph> can somebody tell me please how to change the default max mds setting from 1 to 2 so i can have two MDS servers?
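
For reference, a sketch of the setting pmxceph is asking about (the syntax has changed across releases, and as noted later in this log, multiple active MDS daemons were not considered production-ready at the time):

    ceph mds set_max_mds 2    # pre-Jewel syntax; newer releases use: ceph fs set <fsname> max_mds 2
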
[4:45] * lcurtis (~lcurtis@ool-18bfec0b.dyn.optonline.net) Quit (Ping timeout: 480 seconds)
[4:55] * clayb (~clayb@cpe-172-254-27-101.nyc.res.rr.com) has joined #ceph
[4:55] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) Quit (Quit: segutier)
[5:02] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[5:02] * zack_dol_ (~textual@nfmv001174051.uqw.ppp.infoweb.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[5:06] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[5:06] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) has joined #ceph
[5:06] * clayb (~clayb@cpe-172-254-27-101.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[5:06] * SPACESHIP (~chatzilla@d24-141-231-126.home.cgocable.net) Quit (Ping timeout: 480 seconds)
[5:08] * SPACESHIP (~chatzilla@d24-141-231-126.home.cgocable.net) has joined #ceph
[5:12] * zack_dolby (~textual@nfmv001174051.uqw.ppp.infoweb.ne.jp) has joined #ceph
[5:12] * zack_dolby (~textual@nfmv001174051.uqw.ppp.infoweb.ne.jp) Quit (Read error: Connection reset by peer)
[5:12] * zack_dolby (~textual@nfmv001174051.uqw.ppp.infoweb.ne.jp) has joined #ceph
[5:13] * ircolle (~Adium@2601:1:a580:145a:ac76:bd70:8e4:3caf) Quit (Quit: Leaving.)
[5:17] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[5:19] * rotbart (~redbeard@2a02:908:df10:d300:6267:20ff:feb7:c20) Quit (Quit: Leaving)
[5:20] * clayb (~clayb@2604:2000:e1a9:ca00:3199:c0b2:35cc:bf0f) has joined #ceph
[5:20] * clayb (~clayb@2604:2000:e1a9:ca00:3199:c0b2:35cc:bf0f) Quit ()
[5:23] * cooldharma06 (~chatzilla@14.139.180.52) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 21.0/20130515140136])
[5:26] * Vacuum_ (~vovo@88.130.217.86) has joined #ceph
[5:33] * Vacuum (~vovo@i59F79BB8.versanet.de) Quit (Ping timeout: 480 seconds)
[5:35] * saltlake (~saltlake@pool-71-244-62-208.dllstx.fios.verizon.net) has joined #ceph
[5:36] <tserong> thanks sage :)
[5:36] <tserong> </lag>
[5:45] * zack_dolby (~textual@nfmv001174051.uqw.ppp.infoweb.ne.jp) Quit (Read error: Connection reset by peer)
[5:46] * zack_dolby (~textual@nfmv001074146.uqw.ppp.infoweb.ne.jp) has joined #ceph
[5:58] * OutOfNoWhere (~rpb@76.8.45.216) has joined #ceph
[6:04] * swami1 (~swami@49.32.0.202) has joined #ceph
[6:07] * OutOfNoWhere (~rpb@76.8.45.216) Quit (Ping timeout: 480 seconds)
[6:14] * segutier_ (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) has joined #ceph
[6:16] * fghaas (~florian@185.15.236.4) has joined #ceph
[6:18] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) Quit (Quit: Leaving.)
[6:19] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[6:19] * segutier_ is now known as segutier
[6:19] <pmxceph> can somebody help me figure out why my second MDS daemon is not becoming active please
[6:20] <pmxceph> my cephfs is working just fine with one MDS, but for MDS daemon redundancy i would like to add a second MDS. but it is not coming active
[6:23] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[6:23] * vbellur (~vijay@122.167.82.25) has joined #ceph
[6:32] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[6:32] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit ()
[6:33] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[6:33] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[6:36] * zack_dolby (~textual@nfmv001074146.uqw.ppp.infoweb.ne.jp) Quit (Read error: Connection reset by peer)
[6:37] * zack_dolby (~textual@nfmv008018.uqw.ppp.infoweb.ne.jp) has joined #ceph
[6:37] * overclk (~overclk@121.244.87.117) has joined #ceph
[6:39] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[6:51] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[6:51] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[6:52] * sleinen1 (~Adium@2001:620:0:82::100) has joined #ceph
[6:53] * woleium (~woleium@104-37-62-160.dyn.novuscom.net) has joined #ceph
[6:54] * zack_dol_ (~textual@nfmv008018.uqw.ppp.infoweb.ne.jp) has joined #ceph
[6:54] * zack_dolby (~textual@nfmv008018.uqw.ppp.infoweb.ne.jp) Quit (Read error: Connection reset by peer)
[6:56] * saltlake (~saltlake@pool-71-244-62-208.dllstx.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[6:59] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[6:59] * swami2 (~swami@49.32.0.202) has joined #ceph
[7:01] * bandrus1 (~brian@184.sub-70-211-83.myvzw.com) Quit (Quit: Leaving.)
[7:03] * priane (~amy@116.251.192.71) has left #ceph
[7:03] * swami1 (~swami@49.32.0.202) Quit (Ping timeout: 480 seconds)
[7:05] * avozza (~avozza@83.162.204.36) has joined #ceph
[7:07] <jclm1> pmxceph: By default the second MDS will always be standby if I remember correctly. It will act as a failover for the only active (the first one deployed)
[7:08] * sleinen1 (~Adium@2001:620:0:82::100) Quit (Ping timeout: 480 seconds)
[7:09] <jclm1> pmxceph: Active/Active configuration is not recommended due to potential problems that could arise. Future releases will make the active/active configuration bullet-proof
[7:12] * rljohnsn (~rljohnsn@c-73-15-126-4.hsd1.ca.comcast.net) has joined #ceph
[7:13] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[7:14] * avozza (~avozza@83.162.204.36) Quit (Ping timeout: 480 seconds)
[7:18] * joshd (~joshd@8.25.222.10) has joined #ceph
[7:19] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[7:19] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit ()
[7:20] * fghaas (~florian@185.15.236.4) Quit (Quit: Leaving.)
[7:20] * amote (~amote@121.244.87.116) has joined #ceph
[7:22] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[7:23] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit ()
[7:23] * zack_dolby (~textual@pw126253104087.6.panda-world.ne.jp) has joined #ceph
[7:27] * zack_dol_ (~textual@nfmv008018.uqw.ppp.infoweb.ne.jp) Quit (Ping timeout: 480 seconds)
[7:28] * rdas (~rdas@121.244.87.116) has joined #ceph
[7:29] * davidzlap (~Adium@2605:e000:1313:8003:142d:7638:b814:1cc9) Quit (Ping timeout: 480 seconds)
[7:29] * davidzlap (~Adium@cpe-23-242-189-171.socal.res.rr.com) has joined #ceph
[7:29] * woleium (~woleium@104-37-62-160.dyn.novuscom.net) Quit (Remote host closed the connection)
[7:31] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:37] * overclk (~overclk@121.244.87.117) Quit (Ping timeout: 480 seconds)
[7:37] * rossmartyn04 (~rnm@support.memset.com) Quit (Quit: Leaving.)
[7:39] * davidzlap (~Adium@cpe-23-242-189-171.socal.res.rr.com) Quit (Quit: Leaving.)
[7:40] * zack_dolby (~textual@pw126253104087.6.panda-world.ne.jp) Quit (Read error: Connection reset by peer)
[7:40] * davidzlap (~Adium@cpe-23-242-189-171.socal.res.rr.com) has joined #ceph
[7:53] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[7:53] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[7:54] * nils_ (~nils@doomstreet.collins.kg) has joined #ceph
[7:55] * zack_dolby (~textual@nfmv008018.uqw.ppp.infoweb.ne.jp) has joined #ceph
[7:55] * linjan (~linjan@176.195.3.113) has joined #ceph
[8:01] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) Quit (Quit: segutier)
[8:03] * fghaas (~florian@185.15.236.4) has joined #ceph
[8:07] * puffy (~puffy@50.185.218.255) Quit (Quit: Leaving.)
[8:08] * puffy (~puffy@50.185.218.255) has joined #ceph
[8:08] * puffy (~puffy@50.185.218.255) Quit ()
[8:08] * davidzlap (~Adium@cpe-23-242-189-171.socal.res.rr.com) Quit (Quit: Leaving.)
[8:08] * alram (~alram@ppp-seco11pa2-46-193-132-162.wb.wifirst.net) has joined #ceph
[8:30] * nardial (~ls@ipservice-092-209-178-132.092.209.pools.vodafone-ip.de) has joined #ceph
[8:33] * ddvip (~ddvip@171.43.100.30) has joined #ceph
[8:34] * overclk (~overclk@121.244.87.117) has joined #ceph
[8:41] * fghaas (~florian@185.15.236.4) Quit (Quit: Leaving.)
[8:41] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) has joined #ceph
[8:41] * alram (~alram@ppp-seco11pa2-46-193-132-162.wb.wifirst.net) Quit (Quit: leaving)
[8:42] * ddvip (~ddvip@171.43.100.30) Quit (Quit: ??????)
[8:44] * zerick_ (~zerick@179.7.77.196) Quit (Read error: Connection reset by peer)
[8:45] * oro_ (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[8:46] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[8:48] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) Quit (Quit: Leaving...)
[8:48] * _zerick_ (~zerick@179.7.77.196) has joined #ceph
[8:49] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) has joined #ceph
[8:50] * fghaas (~florian@185.15.236.4) has joined #ceph
[8:51] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[8:52] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[8:55] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) has joined #ceph
[8:57] * rossmartyn04 (~rnm@support.memset.com) has joined #ceph
[9:09] * fghaas (~florian@185.15.236.4) Quit (Ping timeout: 480 seconds)
[9:09] * zack_dolby (~textual@nfmv008018.uqw.ppp.infoweb.ne.jp) Quit (Read error: Connection reset by peer)
[9:09] * zack_dolby (~textual@nfmv008018.uqw.ppp.infoweb.ne.jp) has joined #ceph
[9:09] * alram (~alram@LAubervilliers-656-1-17-4.w217-128.abo.wanadoo.fr) has joined #ceph
[9:13] * analbeard (~shw@support.memset.com) has joined #ceph
[9:17] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[9:23] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) Quit (Quit: Leaving.)
[9:23] * dgurtner (~dgurtner@178.197.231.49) has joined #ceph
[9:24] * joshd (~joshd@8.25.222.10) Quit (Quit: Leaving.)
[9:27] * bjornar (~bjornar@ns3.uniweb.no) has joined #ceph
[9:28] * __NiC (~kristian@aeryn.ronningen.no) Quit (Quit: leaving)
[9:29] * _NiC (~kristian@aeryn.ronningen.no) has joined #ceph
[9:29] * zack_dolby (~textual@nfmv008018.uqw.ppp.infoweb.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[9:30] * vbellur (~vijay@122.167.82.25) Quit (Remote host closed the connection)
[9:32] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[9:35] * oro_ (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[9:35] * sleinen (~Adium@130.59.94.208) has joined #ceph
[9:36] <swami2> loicd: Ping...
[9:37] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[9:37] * sleinen1 (~Adium@2001:620:0:82::100) has joined #ceph
[9:37] * _zerick_ (~zerick@179.7.77.196) Quit (Remote host closed the connection)
[9:37] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[9:39] * thb (~me@2a02:2028:1c3:4041:7c94:2ec4:4c5f:8bb1) has joined #ceph
[9:41] * mivaho_ (~quassel@2001:983:eeb4:1:c0de:69ff:fe2f:5599) Quit (Quit: Going)
[9:41] * mivaho (~quassel@2001:983:eeb4:1:c0de:69ff:fe2f:5599) has joined #ceph
[9:43] * sleinen (~Adium@130.59.94.208) Quit (Ping timeout: 480 seconds)
[9:49] * orion195 (~oftc-webi@91.204.80.85) has joined #ceph
[9:50] <orion195> hi all
[9:50] <orion195> are you guys aware that some dependencies between ceph-giant (el7) and the epel 7.5 release are broken?
[9:51] <orion195> in particular, the package: python-rados 0.80 gets priority on top of librados2 = 0.87
[9:51] <orion195> Error: Package: 1:python-rados-0.80.7-0.4.el7.x86_64 (epel) Requires: librados2 = 1:0.80.7 Installed: 1:librados2-0.87-0.el7.centos.x86_64 (@Ceph) librados2 = 1:0.87-0.el7.centos Available: 1:librados2-0.80.7-0.4.el7.x86_64 (epel) librados2 = 1:0.80.7-0.4.el7 Available: 1:librados2-0.86-0.el7.centos.x86_64 (Ceph) librados2 = 1:0.86-0.el7.centos Error: Package: 1:python-
[9:52] * cok (~chk@2a02:2350:18:1010:80f2:28a9:89f9:12d2) has joined #ceph
[9:54] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) has joined #ceph
[9:55] * sleinen (~Adium@130.59.94.208) has joined #ceph
[9:56] * karnan (~karnan@121.244.87.117) has joined #ceph
[9:56] * sleinen1 (~Adium@2001:620:0:82::100) Quit (Read error: Connection reset by peer)
[9:57] * sleinen1 (~Adium@2001:620:0:82::100) has joined #ceph
[10:03] <badone_> orion195: do you have priority=2 in your repo files?
[10:03] <orion195> I do
[10:03] <orion195> epel, should be enabled
[10:03] * sleinen (~Adium@130.59.94.208) Quit (Ping timeout: 480 seconds)
[10:03] <orion195> on a centos 7 -- anyway
[10:04] * fsimonce (~simon@host217-37-dynamic.30-79-r.retail.telecomitalia.it) has joined #ceph
[10:04] <orion195> I do have also openstack rdo juno
[10:04] <orion195> repository
[10:04] <orion195> but is not causing any problem
[10:05] * aszeszo (~aszeszo@dnq24.neoplus.adsl.tpnet.pl) has joined #ceph
[10:06] <orion195> badone_: http://ur1.ca/jr24s
[10:07] <badone_> so your upgrading...
[10:07] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[10:08] <badone_> *you're*
[10:09] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[10:09] <orion195> badone_: and here I am, installing:
[10:09] <orion195> http://fpaste.org/186910/25056814/
[10:10] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[10:10] <Be-El> hi
[10:11] * aszeszo (~aszeszo@dnq24.neoplus.adsl.tpnet.pl) Quit (Quit: Leaving.)
[10:12] <badone_> orion195: don't suppose --disablerepo=epel helps?
[10:12] <orion195> badone_: if I disable epel, it works, but, the question is: why this now?
[10:13] <badone_> orion195: not a question for me. Glad I could get it going for you
[10:13] <orion195> yeah, If I disable epel... I can install but... don't really know where to ask
[10:14] <orion195> and/or who would know anything about it
[10:14] <badone_> orion195: keep asking here. Just not me because I don't know :)
[10:14] <badone_> orion195: closer to NA hours you might get an answer
[10:15] <badone_> orion195: it's quite possible epel is no longer required. i know we were heading in that direction
[10:16] <orion195> you mean: epel not required to install ceph or, not required to use on a Centos7 system?
[10:17] <badone_> orion195: to install Ceph
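
A sketch of two ways to keep epel's older python-rados from winning the dependency resolution, assuming yum-plugin-priorities is installed; these are illustrations rather than an officially recommended fix:

    # in /etc/yum.repos.d/ceph.repo: give the Ceph repo the higher priority (lower number)
    priority=1
    # or in /etc/yum.repos.d/epel.repo: exclude the conflicting packages
    exclude=python-rados* python-rbd* librados2* librbd1*
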
[10:18] <loicd> swami2: pong
[10:19] * Dasher (~oftc-webi@46.218.69.130) Quit (Remote host closed the connection)
[10:22] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:24] * badone__ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[10:26] <swami2> loicd: I could see a ping on DNM: v092 organizationmap (#3651)...
[10:26] <swami2> loicd: Sorry I missed that ping yesterday...Please let me know..
[10:27] <loicd> yes, did you notice https://github.com/ceph/ceph/pull/3651#commitcomment-9690886 ?
[10:31] * linjan (~linjan@176.195.3.113) Quit (Ping timeout: 480 seconds)
[10:31] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[10:34] <swami2> loicd: Checked it...and added my comment...can you plz check now??
[10:36] <loicd> swami2: it is https://github.com/swamireddy/ceph/commit/e86cccc55960e0de2eda974273433c13f48ce2a4#diff-dab6a7efce64a6e066009e19abaf8da1R240 that is incorrect
[10:36] <loicd> Unaffiliated <no@organization.net> Jerry7X <875016668@qq.com>
[10:38] <swami2> loicd: But the commit message has - Jerry7X <875016668@qq.com>...
[10:38] <swami2> loicd: Sure, Can update it soon
[10:40] * nljmo (~nljmo@5ED6C263.cm-7-7d.dynamic.ziggo.nl) Quit (Quit: Textual IRC Client: www.textualapp.com)
[10:43] * shaunm (~shaunm@nat-pool-brq-t.redhat.com) has joined #ceph
[10:44] * nljmo (~nljmo@5ED6C263.cm-7-7d.dynamic.ziggo.nl) has joined #ceph
[10:45] * linjan (~linjan@195.110.41.9) has joined #ceph
[10:46] * Sysadmin88 (~IceChat77@94.12.240.104) Quit (Quit: Make it idiot proof and someone will make a better idiot.)
[10:46] * avozza (~avozza@a83-160-116-36.adsl.xs4all.nl) has joined #ceph
[10:46] * Sysadmin88 (~IceChat77@94.12.240.104) has joined #ceph
[10:46] <loicd> swami2: Jerry7X is going to be remapped via .mailmap before it reaches the .organizationmap
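
For reference, git's .mailmap maps the identity seen in commits (right-hand side) to a canonical identity (left-hand side); the canonical name below is a placeholder, not Jerry7X's actual affiliation:

    Canonical Name <canonical@example.com> Jerry7X <875016668@qq.com>
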
[10:58] * oro_ (~oro@2001:620:20:16:9858:9d3:b3e9:fc05) has joined #ceph
[10:59] * _br_ (~bjoern_of@213-239-215-232.clients.your-server.de) has joined #ceph
[11:00] * oro (~oro@2001:620:20:16:9858:9d3:b3e9:fc05) has joined #ceph
[11:02] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[11:03] * jtang (~jtang@109.255.42.21) Quit (Remote host closed the connection)
[11:03] * Concubidated (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[11:08] * badone__ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[11:11] * jtang (~jtang@109.255.42.21) has joined #ceph
[11:13] * cooldharma06 (~chatzilla@14.139.180.52) has joined #ceph
[11:38] * capri_on (~capri@212.218.127.222) Quit (Ping timeout: 480 seconds)
[11:48] * sugoruyo (~sug_@00014f5c.user.oftc.net) Quit (Quit: Leaving)
[12:00] * OutOfNoWhere (~rpb@76.8.45.216) has joined #ceph
[12:11] * dmick (~dmick@2607:f298:a:607:c937:1916:c8b2:a4ee) Quit (Ping timeout: 480 seconds)
[12:15] * cooldharma06 (~chatzilla@14.139.180.52) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 21.0/20130515140136])
[12:21] * dmick (~dmick@2607:f298:a:607:c5ec:52cf:f46:69f5) has joined #ceph
[12:26] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[12:30] <swami2> loicd: OK...I have updated it...did you receive the pull request??
[12:33] * nitti (~nitti@c-66-41-30-224.hsd1.mn.comcast.net) has joined #ceph
[12:36] * cooldharma06 (~chatzilla@14.139.180.52) has joined #ceph
[12:37] * sugoruyo (~sug_@00014f5c.user.oftc.net) has joined #ceph
[12:37] * jnq (~jnq@95.85.22.50) Quit (Quit: WeeChat 0.3.7)
[12:41] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[12:42] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[12:45] * danieagle (~Daniel@200-148-39-11.dsl.telesp.net.br) has joined #ceph
[12:49] * Sysadmin88_ (~IceChat77@94.12.240.104) has joined #ceph
[12:50] * macjack1 (~Thunderbi@123.51.160.200) has joined #ceph
[12:53] * vivcheri (~vivcheri@117.194.140.59) has joined #ceph
[12:55] * macjack (~Thunderbi@123.51.160.200) Quit (Ping timeout: 480 seconds)
[12:56] * bitserker (~toni@213.229.187.103) has joined #ceph
[12:57] * Sysadmin88 (~IceChat77@94.12.240.104) Quit (Ping timeout: 480 seconds)
[12:57] * ganders (~root@200.32.121.70) has joined #ceph
[12:57] * Sysadmin88 (~IceChat77@94.12.240.104) has joined #ceph
[12:57] * Sysadmin88_ (~IceChat77@94.12.240.104) Quit (Ping timeout: 480 seconds)
[13:00] * cok (~chk@2a02:2350:18:1010:80f2:28a9:89f9:12d2) Quit (Quit: Leaving.)
[13:01] * ctd (~root@00011932.user.oftc.net) Quit (Quit: END OF LINE)
[13:02] * ctd (~root@00011932.user.oftc.net) has joined #ceph
[13:03] * jnq (~jnq@95.85.22.50) has joined #ceph
[13:04] * sugoruyo (~sug_@00014f5c.user.oftc.net) Quit (Quit: Leaving)
[13:04] * sugoruyo (~sug_@00014f5c.user.oftc.net) has joined #ceph
[13:06] * orion195 (~oftc-webi@91.204.80.85) Quit (Remote host closed the connection)
[13:08] * avozza (~avozza@a83-160-116-36.adsl.xs4all.nl) Quit (Remote host closed the connection)
[13:09] * avozza (~avozza@62.140.132.32) has joined #ceph
[13:11] * avozza_ (~avozza@a83-160-116-36.adsl.xs4all.nl) has joined #ceph
[13:11] * avozza (~avozza@62.140.132.32) Quit (Read error: Connection reset by peer)
[13:15] * alram (~alram@LAubervilliers-656-1-17-4.w217-128.abo.wanadoo.fr) Quit (Read error: No route to host)
[13:15] * alram_ (~alram@LAubervilliers-656-1-17-4.w217-128.abo.wanadoo.fr) has joined #ceph
[13:17] * sugoruyo (~sug_@00014f5c.user.oftc.net) Quit (Remote host closed the connection)
[13:24] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[13:36] * cooldharma06 (~chatzilla@14.139.180.52) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 21.0/20130515140136])
[13:44] * vivcheri_ (~vivcheri@72.163.220.28) has joined #ceph
[13:48] * vivcheri__ (~vivcheri@117.194.140.59) has joined #ceph
[13:48] * vivcheri_ (~vivcheri@72.163.220.28) Quit (Read error: Connection reset by peer)
[13:50] * vivcheri (~vivcheri@117.194.140.59) Quit (Ping timeout: 480 seconds)
[13:57] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[13:57] <MentalRay> Hello
[13:58] * vivcheri__ (~vivcheri@117.194.140.59) Quit (Ping timeout: 480 seconds)
[13:59] <MentalRay> We are experiencing an issue with OpenStack and Ceph. When we migrate or resize instances, Nova tries to import the image, and that is causing a problem.
[14:00] * shaunm (~shaunm@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[14:00] <MentalRay> Is anyone using Icehouse or Juno who was able to "fix" this situation?
[14:04] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) Quit (Quit: Leaving)
[14:07] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[14:10] * OutOfNoWhere (~rpb@76.8.45.216) Quit (Ping timeout: 480 seconds)
[14:19] * shaunm (~shaunm@nat-pool-brq-u.redhat.com) has joined #ceph
[14:30] * OutOfNoWhere (~rpb@76.8.45.216) has joined #ceph
[14:31] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) has joined #ceph
[14:34] * nitti (~nitti@c-66-41-30-224.hsd1.mn.comcast.net) Quit (Remote host closed the connection)
[14:35] * adrian15b (~kvirc@149.Red-83-47-234.dynamicIP.rima-tde.net) has joined #ceph
[14:36] <adrian15b> Hello. Is it technically possible to build and use a ceph storage cluster with one node ? (Yes, the idea is to use three nodes but in the future). Thank you.
[14:38] <flaf> Yes adrian15b. You must install at least one monitor and one osd. And you must set the number of replicas <= the number of osd daemons.
[14:38] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[14:39] <adrian15b> flaf: Thank you!
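
A minimal single-node sketch following flaf's advice (one mon, one or more OSDs, replicas <= OSD count); the chooseleaf line is an extra assumption so CRUSH places replicas across OSDs instead of across hosts:

    [global]
    osd pool default size = 1
    osd pool default min size = 1
    osd crush chooseleaf type = 0    # needed when all OSDs live on a single host
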
[14:41] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[14:44] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[14:47] <Sysadmin88> if your server has multiple HDDs you could have multiple OSD on it
[14:48] <Gugge-47527> you could have multiple osd's on a single disk too :)
[14:48] <Gugge-47527> But for anything other than test, that would be strange :)
[14:48] * madkiss1 (~madkiss@ip5b418369.dynamic.kabel-deutschland.de) has joined #ceph
[14:49] <adrian15b> Well, yeah, it's a budget problem not a technical one.
[14:49] <adrian15b> We will probably have one OSD for SSD disks and another one for SATA disks.
[14:50] * bitserker (~toni@213.229.187.103) Quit (Ping timeout: 480 seconds)
[14:50] <adrian15b> Once you use a server for non-ceph storage then it's more difficult to convert it to Ceph storage, so I want to use Ceph storage as early as possible.
[14:51] * _prime_ (~oftc-webi@199.168.44.192) has joined #ceph
[14:52] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[14:53] * madkiss (~madkiss@2001:6f8:12c3:f00f:fd35:6e05:790b:8755) Quit (Ping timeout: 480 seconds)
[14:54] * thb (~me@89.204.139.84) has joined #ceph
[14:56] * dyasny (~dyasny@198.251.59.151) has joined #ceph
[14:57] * sjm (~sjm@pool-98-109-11-113.nwrknj.fios.verizon.net) has joined #ceph
[14:58] * swami2 (~swami@49.32.0.202) Quit (Quit: Leaving.)
[15:05] <nardial> i have a SAS cache tier of 28 osds on top of 56 SATA osds, how can i measure/visualize the improved parallel reads/writes with rados bench?
[15:06] <nardial> i always get slower results with the SAS pool, i think because of the lower number of OSDs
[15:09] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[15:10] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has left #ceph
[15:12] <darkfader> nardial: not sure about the measurement, but i'd like to recommend you compare the write cache settings - sata disks usually have it turned on by default, and sas have it usually off
[15:13] <nardial> locally i can see the better performance, and also write caching is handled by the controller
[15:13] <darkfader> okk
[15:14] <nardial> i even started parallel rados bench write/seq threads but there was no speed improvement
[15:15] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[15:18] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[15:21] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[15:22] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) has joined #ceph
[15:24] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[15:28] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[15:28] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[15:31] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[15:32] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[15:33] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[15:38] <avozza_> can a ceph-backed block device be mounted by two clients?
[15:39] <dwm> avozza_: I don't think anything inherently prevents it, though I believe safety features exist to make it hard to do accidentally.
[15:40] * jeffmcdonald-msi1 (~Thunderbi@claudia.msi.umn.edu) has joined #ceph
[15:40] <dwm> (Most filesystems corrupt very rapidly indeed when mounted concurrently.)
[15:40] <avozza_> dwm: thanks, sure, then all hell breaks loose if the fs doesn't support concurrent access
[15:43] <oro_> Hello. Can I set a _CRUSH_ rule's min_size and max_size from cli? I know that I can tune these via editing crushmap and compiling and setting this.
[15:43] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Remote host closed the connection)
[15:44] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) has joined #ceph
[15:44] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[15:44] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) has joined #ceph
[15:45] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[15:45] * tupper_ (~tcole@rtp-isp-nat1.cisco.com) Quit (Remote host closed the connection)
[15:47] * rotbeard (~redbeard@2a02:908:df10:d300:6267:20ff:feb7:c20) has joined #ceph
[15:48] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[15:56] <MentalRay> We are experiencing an issue with OpenStack and Ceph. When we migrate or resize instances, Nova tries to import the image, and that is causing a problem.
[15:56] <MentalRay> Is anyone using Icehouse or Juno who was able to "fix" this situation?
[15:59] * dc_mattj (~matt@office-fw0.sal01.datacentred.net) has joined #ceph
[16:02] * dneary (~dneary@96.237.180.105) has joined #ceph
[16:02] * tupper_ (~tcole@rtp-isp-nat-pool1-1.cisco.com) has joined #ceph
[16:02] * ircolle (~Adium@2601:1:a580:145a:5c3a:5c37:5056:f47) has joined #ceph
[16:08] * jtang (~jtang@109.255.42.21) Quit (Ping timeout: 480 seconds)
[16:10] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[16:11] * nitti (~nitti@162.222.47.218) has joined #ceph
[16:16] * CephTestC (~CephTestC@199.91.185.156) Quit ()
[16:16] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[16:16] <beardo> oro_, ceph osd pool set {pool-name} min_size {size}
[16:17] <beardo> likewise for max_size
[16:17] <oro_> I know how to set it for the pool
[16:17] <oro_> What I can't find is how to set it for the ruleset
[16:18] <bilco105_> MentalRay: Yes
[16:18] <beardo> does it behave differently? The ruleset is applied at the pool level
[16:18] <bilco105_> MentalRay: There are some patches available for IceHouse to make it "behave" properly with ceph rbds
[16:18] <MentalRay> oh ok
[16:18] <beardo> ahh
[16:18] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[16:19] * linjan (~linjan@195.110.41.9) Quit (Ping timeout: 480 seconds)
[16:19] <MentalRay> because according to this bug: https://bugs.launchpad.net/cinder/+bug/1348811
[16:19] <MentalRay> it wouldn't be fixed for another 1-2 months
[16:19] <MentalRay> you have a link for the patch you are talking about?
[16:19] <oro_> beardo, according to this conversation (https://www.mail-archive.com/ceph-users@lists.ceph.com/msg16734.html) these values aren't the same
[16:20] <bilco105_> MentalRay: I can link you to our version, which includes the patches
[16:20] * Redcavalier (~Redcavali@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[16:20] <MentalRay> I would really appreciate it and I will try it in our lab
[16:21] <beardo> oro_, yeah, I was just rereading the docs. The pool command sets the min_size, the crush option is used to enforce rule selection based on that size
[16:21] <beardo> my mistake
[16:21] <bilco105_> MentalRay: https://github.com/datacentred/nova/tree/rbd-ephemeral-clone-stable-icehouse
[16:21] <MentalRay> Ok I will have my team take a look
[16:22] <MentalRay> Thank a million
[16:22] <bilco105_> Sure. PM me if you have problems
[16:22] <beardo> I don't know of way to change that without decompiling and modifying the crush map
[16:22] <oro_> beardo, it's not crucial now, can live with the default 1, 10 values, just was curious how to implement our deployment script by using only the cli, not the download-decompile-edit-compile-upload roundtrip
[16:24] * archiestengol (~chatzilla@c-50-183-112-236.hsd1.co.comcast.net) has joined #ceph
[16:25] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[16:26] <beardo> oro_, I would expect that if there isn't a cli way to handle it, you could keep a copy of the modified crush map (created before adding any nodes or osds), and have the deploy script upload that when you create a new cluster
[16:27] <oro_> now the tricky part comes: our deploy script writes crushmap.txt itself :) so yeah this is very ugly, I'd prefer setting these values through the cli
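
For completeness, a sketch of where the two fields sit in a decompiled crush map rule; this is what the download-decompile-edit-compile-upload roundtrip edits:

    rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
    }
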
[16:27] * archiestengol (~chatzilla@c-50-183-112-236.hsd1.co.comcast.net) Quit (Read error: Connection reset by peer)
[16:28] * linjan (~linjan@195.110.41.9) has joined #ceph
[16:29] * archiestengol (~chatzilla@c-50-183-112-236.hsd1.co.comcast.net) has joined #ceph
[16:29] * JCL (~JCL@ip24-253-45-236.lv.lv.cox.net) has joined #ceph
[16:31] * saltlake (~saltlake@12.250.199.170) has joined #ceph
[16:31] * archiestengol (~chatzilla@c-50-183-112-236.hsd1.co.comcast.net) Quit (Read error: Connection reset by peer)
[16:32] * amote (~amote@121.244.87.116) Quit (Quit: Leaving)
[16:33] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[16:33] * archiestengol (~chatzilla@c-50-183-112-236.hsd1.co.comcast.net) has joined #ceph
[16:33] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[16:36] * archiestengol (~chatzilla@c-50-183-112-236.hsd1.co.comcast.net) Quit (Read error: Connection reset by peer)
[16:36] * linjan (~linjan@195.110.41.9) Quit (Ping timeout: 480 seconds)
[16:37] * archiestengol (~chatzilla@c-50-183-112-236.hsd1.co.comcast.net) has joined #ceph
[16:40] * Lyncos (~lyncos@208.71.184.41) has left #ceph
[16:40] <devicenull> is there any easy fix for 'stuck unclean' pgs? as far as I can tell, they dont have anything blocking them
[16:41] <devicenull> pg query output: https://gist.githubusercontent.com/devicenull/ac7e50b876e4c1dfc05c/raw/dc552102058e99108bfe2b2be05ca22f076cc021/gistfile1.txt
[16:42] <sh> @devicenull: I had several stuck unclean pgs a few weeks ago which appeared to be ok. A restart of the osd daemons helped
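
A sketch of the usual triage for stuck-unclean pgs along the lines sh describes (the osd id and the sysvinit-style restart command are illustrative):

    ceph pg dump_stuck unclean         # list stuck pgs and the OSDs they map to
    ceph pg <pgid> query               # peering detail for a single pg
    /etc/init.d/ceph restart osd.56    # restart a suspect OSD daemon
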
[16:42] * archiestengol (~chatzilla@c-50-183-112-236.hsd1.co.comcast.net) Quit (Read error: Connection reset by peer)
[16:43] * dmsimard_away is now known as dmsimard
[16:43] <devicenull> good idea
[16:44] * archiestengol (~chatzilla@c-50-183-112-236.hsd1.co.comcast.net) has joined #ceph
[16:44] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[16:45] * _prime_ (~oftc-webi@199.168.44.192) Quit (Quit: Page closed)
[16:45] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) has joined #ceph
[16:45] * OutOfNoWhere (~rpb@76.8.45.216) Quit (Ping timeout: 480 seconds)
[16:46] * archiestengol (~chatzilla@c-50-183-112-236.hsd1.co.comcast.net) Quit (Read error: Connection reset by peer)
[16:47] * archiestengol (~chatzilla@c-50-183-112-236.hsd1.co.comcast.net) has joined #ceph
[16:48] <devicenull> sadly, didn't help
[16:48] <Redcavalier> I got a quick question regarding ceph network usage. Basically, on our setup, we have the ceph public network and cluster network on the same link, on two different vlans.
[16:48] <devicenull> it's actually 3 pgs, and they're all on entirely different OSDs
[16:49] * togdon (~togdon@74.121.28.6) has joined #ceph
[16:49] <Redcavalier> What I see right now is that if a host goes down, the cluster network will take as much bandwidth as possible on the link, thus reducing the bandwidth available for the ceph clients
[16:50] <Redcavalier> We're trying to rate-limit that link through traffic shaping by using tc
[16:51] <Redcavalier> However, I then get the following errors in ceph -s : "health HEALTH_WARN 61 requests are blocked > 32 sec"
[16:51] <bilco105_> Redcavalier: You can adjust the number of recovery and backfill threads
[16:51] <bilco105_> Redcavalier: I wouldn't mess around introducing rate limiting - will lead to problems like you're seeing
[16:51] <Redcavalier> ah, that may be a better option than throttling then
[16:51] <Be-El> devicenull: which osd did you restart? the query output indicates that osd.56 might be the problematic one
[16:51] <devicenull> all of them
[16:52] <Redcavalier> yea, I'll probably undo rate-limiting and look into backfilling thread limits then
[16:52] <devicenull> what makes you say 56 is the bad one?
[16:52] <bilco105_> Yeah, that's what we do.. as we don't want losing an OSD to impact performance
[16:53] <Redcavalier> I just wonder how thread limits translate into actual bandwidth usage? Do you have some ideas on that?
[16:53] <Be-El> devicenull: it's acting, but not up
[16:53] <Redcavalier> Else I'll just go with trial and error
[16:54] <devicenull> oh, good catch!
[16:54] <bilco105_> Redcavalier: Nope, trial and error by injecting the recovery/backfill max_threads on the fly into the running OSDs
[16:54] <Redcavalier> ok, thanks
[16:55] * archiestengol (~chatzilla@c-50-183-112-236.hsd1.co.comcast.net) Quit (Read error: Connection reset by peer)
[16:55] <bilco105_> Redcavalier: ceph tell osd.\* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1'
[16:55] <bilco105_> and work up
[16:55] <absynth> you should never, never, never ratelimit a ceph network link
[16:55] <absynth> all kinds of really bad things happen
[16:55] <bilco105_> yah
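
To make bilco105_'s runtime injectargs stick across OSD restarts, the same throttles can be carried in ceph.conf (a sketch; tune the values upward as suggested):

    [osd]
    osd max backfills = 1
    osd recovery max active = 1
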
[16:55] <devicenull> Be-El: hm, ceph osd dump shows it up, but I see it not up in query still
[16:56] <absynth> devicenull: check the osd logs, maybe it's flapping
[16:56] <absynth> (rapidly changing between down and up)
[16:56] * thb (~me@2a02:2028:1c3:4041:7c94:2ec4:4c5f:8bb1) has joined #ceph
[16:56] <devicenull> doesn
[16:56] <devicenull> doesn't seem to be
[16:58] <absynth> does the log for that osd say anything meaningful at all?
[16:58] <devicenull> no, it's just printing stuff ike this
[16:58] <devicenull> 2015-02-18 10:56:09.005866 7f8ac3ebd700 0 log_channel(default) log [INF] : 14.164 scrub ok
[16:59] <absynth> well, then the OSD is up
[16:59] <absynth> can the other OSDs and the mons reach it via the ceph network?
[16:59] <absynth> i.e. is it pingable etc?
[17:00] <devicenull> yes
[17:00] <devicenull> and lsof shows a bunch of open connections on the private network
[17:01] <absynth> interesting, why doesn't it come up then?
[17:01] <devicenull> that's the question :)
[17:01] <absynth> what happens if you do something like ceph osd up osd.56 on one of the other OSDs?
[17:02] <devicenull> ceph osd up? never seen that command before
[17:02] <absynth> you can tell ceph to manually mark OSDs up/down/in/out
[17:03] <absynth> sometimes helps to bring them back in line
[17:03] * pdrakeweb (~pdrakeweb@cpe-65-185-74-239.neo.res.rr.com) Quit (Quit: Leaving...)
[17:03] <absynth> ah, up doesn't seem to exist
[17:03] <absynth> meh
[17:03] <devicenull> yea. ceph doesnt recognize ceph osd up though
[17:04] <absynth> what happens if you manually mark it down?
[17:04] <absynth> nothing should happen at all i guess
[17:05] <devicenull> it gets angry
[17:05] <devicenull> 2015-02-18 11:04:40.938098 7f973ffe4700 0 log_channel(default) log [WRN] : map e179252 wrongly marked me down
[17:05] <devicenull> but no change
[17:05] <absynth> still as "down" in your osd tree?
[17:06] <devicenull> no, the weird thing is the osd tree shows it as up
[17:06] <devicenull> even though ceph pg query doesn't show it as up
[17:07] <devicenull> I suppose I could just mark that one as out, and see what happens
[17:08] * CephTestC (~CephTestC@199.91.185.156) has joined #ceph
[17:09] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:11] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:13] * puffy (~puffy@50.185.218.255) has joined #ceph
[17:13] * puffy (~puffy@50.185.218.255) Quit ()
[17:13] <devicenull> hm weird, it doesnt get auto marked as out once it goes down
[17:14] * oro (~oro@2001:620:20:16:9858:9d3:b3e9:fc05) Quit (Ping timeout: 480 seconds)
[17:15] * kapil (~ksharma@2620:113:80c0:5::2222) has joined #ceph
[17:15] * oro_ (~oro@2001:620:20:16:9858:9d3:b3e9:fc05) Quit (Ping timeout: 480 seconds)
[17:18] * alram_ (~alram@LAubervilliers-656-1-17-4.w217-128.abo.wanadoo.fr) Quit (Read error: No route to host)
[17:18] * puffy (~puffy@50.185.218.255) has joined #ceph
[17:18] * puffy (~puffy@50.185.218.255) Quit ()
[17:18] * alram (~alram@LAubervilliers-656-1-17-4.w217-128.abo.wanadoo.fr) has joined #ceph
[17:18] * pdrakeweb (~pdrakeweb@cpe-65-185-74-239.neo.res.rr.com) has joined #ceph
[17:19] * thomnico (~thomnico@66.211.241.83.in-addr.dgcsystems.net) has joined #ceph
[17:22] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[17:23] * halbritt (~halbritt@65.50.222.90) has joined #ceph
[17:23] * avozza_ (~avozza@a83-160-116-36.adsl.xs4all.nl) Quit (Remote host closed the connection)
[17:23] <halbritt> howdy folks
[17:24] * avozza (~avozza@a83-160-116-36.adsl.xs4all.nl) has joined #ceph
[17:24] <halbritt> I'm at a loss here.
[17:25] <halbritt> have a 2-node sandbox environment setup in vmware. Each node has 3 OSDs that are virtual disks of 10GB size. Bring up the pool, but can't get it to go active/clean.
[17:25] <devicenull> how many replicas do you have?
[17:25] <halbritt> 2
[17:25] <halbritt> HEALTH_WARN 128 pgs degraded; 128 pgs stuck degraded; 128 pgs stuck unclean; 128 pgs stuck undersized; 128 pgs undersized
[17:25] * cmorandin (~cmorandin@194.206.51.157) has joined #ceph
[17:26] <devicenull> I remember seeing something about this before...
[17:26] <devicenull> it had to do with tiny disks
[17:26] <halbritt> f[ceph@ceph-admin my-cluster]$ ceph osd pool get rbd size
[17:26] <halbritt> size: 2
[17:26] <halbritt> Was wondering if that's the case.
[17:26] * dgurtner (~dgurtner@178.197.231.49) Quit (Remote host closed the connection)
[17:26] <Be-El> halbritt: with 10GB OSD disk, where do you store the journal?
[17:26] * dgurtner (~dgurtner@178.197.231.49) has joined #ceph
[17:27] <halbritt> part2Number Start End Size File system Name Flags 2 1049kB 5369MB 5368MB ceph journal 1 5370MB 10.7GB 5368MB xfs ceph data
[17:27] <halbritt> eh
[17:27] <Be-El> halbritt: and did you set min_size also to 2?
[17:27] <halbritt> that didn't format.
[17:27] <halbritt> min_size is 1
[17:27] * guerby (~guerby@ip165-ipv6.tetaneutral.net) Quit (Remote host closed the connection)
[17:27] <halbritt> [ceph@ceph-admin my-cluster]$ ceph osd pool get rbd min_size
[17:27] <halbritt> min_size: 1
[17:28] <halbritt> anyway, part table looks like this:
[17:28] <halbritt> 2 1049kB 5369MB 5368MB ceph journal
[17:28] <halbritt> 1 5370MB 10.7GB 5368MB xfs ceph data
[17:29] * guerby (~guerby@ip165-ipv6.tetaneutral.net) has joined #ceph
[17:30] <Be-El> i also remember seeing a thread on the mailing list with a similar problem. the solution was to use large disks
[17:30] <burley_> halbritt: the last time I saw something similar posted was when the disks were so small that they got assigned a weight of 0
[17:30] <burley_> in the crushmap
[17:30] <Be-El> burley_: ah...that's it
[17:31] <burley_> the solution is to modify the crushmap to give them all a size (say 1)
[17:31] <burley_> if that is the case, anyways
[17:31] <burley_> "ceph osd tree" should output the weight
[17:32] <halbritt> okay
[17:32] * avozza (~avozza@a83-160-116-36.adsl.xs4all.nl) Quit (Ping timeout: 480 seconds)
[17:32] <halbritt> yup
[17:32] <halbritt> weight of 0
[17:32] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:32] <halbritt> new to ceph, so modifying the crushmap is beyond my expertise at the moment.
[17:32] <burley_> http://ceph.com/docs/master/rados/operations/crush-map/#editing-a-crush-map
[17:32] <halbritt> I'm sure I can figure it out quickly.
[17:32] <halbritt> yeah, I grasp the concept.
[17:33] <halbritt> thanks
[17:33] <burley_> there are ways to do at the cli, but that's what I've done -- just get a copy, decompile it, change the weights, recompile, and then load it in
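
The roundtrip burley_ describes, spelled out as a sketch (file names are arbitrary):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt: give each small OSD a non-zero weight, e.g. "item osd.0 weight 1.000" in its host bucket
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new
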
[17:33] <devicenull> alright this is weird.. I killed osd 56 (ceph osd out, eventually killed the service entirely)
[17:34] <devicenull> now PGs that were hosted there aren't getting rebuilt... they're undersized now
[17:34] <burley_> and I recall that there was a ticket submitted as well at that time for the issue to get fixed, so in theory it should get fixed in the future to not bite others
[17:35] * vbellur (~vijay@122.172.198.91) has joined #ceph
[17:35] <burley_> devicenull: what's your "mon osd down out interval" set to?
[17:35] <burley_> and have you waited that many seconds since it was down
[17:35] * alram (~alram@LAubervilliers-656-1-17-4.w217-128.abo.wanadoo.fr) Quit (Quit: leaving)
[17:35] <debian112> anyone use debian 7 for ceph in production, or should I stick with CentOS 7?
[17:36] <Be-El> halbritt: ceph osd crush reweight <osd.XYZ> <weight>
[17:36] <halbritt> [ceph@ceph-admin my-cluster]$ ceph health detail
[17:36] <devicenull> burley_: huh? ceph is aware of the osd being down and out already
[17:36] <halbritt> HEALTH_OK
[17:36] * sputnik13 (~sputnik13@74.202.214.170) has joined #ceph
[17:36] <devicenull> so that timeout shouldnt matter
[17:36] <devicenull> 'osd.56 down out'
[17:36] <halbritt> ceph osd crush reweight?
[17:36] <halbritt> will that automatically modify the crushmap in flight?
[17:37] <Be-El> halbritt: that's the command to change the weight of a osd in the crush map
[17:37] <halbritt> I just decompiled, modified, and recompiled.
[17:37] * nardial (~ls@ipservice-092-209-178-132.092.209.pools.vodafone-ip.de) Quit (Quit: Leaving)
[17:37] <saltlake> experts, I have a deployed ceph cluster. I want to add 2 more nodes with storage on it ..
[17:38] <halbritt> I'll go idle now. Thanks so much for your help.
[17:39] <saltlake> I was wondering if there are anny gotchas or any specific way to add the monitors.
[17:40] <saltlake> I was going to follow the process of just "ceph deploy <newmon1> <newmon2>" follweed by osd addition
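
A sketch of the ceph-deploy sequence saltlake outlines (hostnames and device names are placeholders; keeping the monitor count odd is general guidance, not something stated in this discussion):

    ceph-deploy install newnode1 newnode2
    ceph-deploy mon add newnode1          # add monitors one at a time and wait for quorum
    ceph-deploy mon add newnode2
    ceph-deploy osd prepare newnode1:sdb newnode2:sdb
    ceph-deploy osd activate newnode1:sdb1 newnode2:sdb1
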
[17:41] * thomnico (~thomnico@66.211.241.83.in-addr.dgcsystems.net) Quit (Quit: Ex-Chat)
[17:44] * dyasny (~dyasny@198.251.59.151) Quit (Ping timeout: 480 seconds)
[17:46] * dgurtner (~dgurtner@178.197.231.49) Quit (Ping timeout: 480 seconds)
[17:47] * roehrich (~roehrich@146.174.238.100) has joined #ceph
[17:51] <CephTestC> saltlake: I just purchased a book called "Learning Ceph" from Packt Publishing. Do you have this?
[17:52] <saltlake> Cephtestc: Nope !! DO you like it ?
[17:52] <CephTestC> This is the most helpful book I've ever purchased.
[17:52] <saltlake> CephTestC: gee thanks.. will get it too..
[17:53] <CephTestC> I'm wondering if the author is in this channel. I would like to send a big thank you!
[17:53] <saltlake> Karan Singh!! Wow!!
[17:53] <CephTestC> I just bought it this morning and it's awesome. I got my Ceph tiering setup in less than an hour.
[17:53] * dyasny (~dyasny@173.231.115.58) has joined #ceph
[17:54] <saltlake> CephTestC: I will order it right away (Me hoping You are NOT karan singh .. hahahaha)
[17:55] <CephTestC> Lol... Nope. I'm just happy it exists... and totally recommend it!
[17:55] <CephTestC> Let me know what you think after reading it!
[17:55] <saltlake> cephtestc: if u don't mind can u check if it has a section on "How to add a new node to an existing ceph cluster" with no disruption to anything ..
[17:55] <CephTestC> sure
[17:56] <saltlake> I really should write a book on an opensource technology and get rich!!
[17:57] * alram (~alram@ppp-seco11pa2-46-193-132-162.wb.wifirst.net) has joined #ceph
[17:58] <ircolle> saltlake - yes, writing a book is definitely a quick way to get rich… err, maybe not :-)
[17:58] <saltlake> ircolle: Sorry I was trying to get some humor for myself for today.. :-)
[18:00] * kefu (~kefu@114.92.100.153) has joined #ceph
[18:02] <devicenull> hmmm... now osd.55 has started doing the not up but still up thing
[18:02] * sudocat (~davidi@192.185.1.20) has joined #ceph
[18:04] * bandrus (~brian@184.sub-70-211-83.myvzw.com) has joined #ceph
[18:05] * rturk|afk is now known as rturk
[18:05] * rmoe (~quassel@173-228-89-134.dsl.static.fusionbroadband.com) Quit (Ping timeout: 480 seconds)
[18:06] * dneary (~dneary@96.237.180.105) Quit (Ping timeout: 480 seconds)
[18:06] * oro_ (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[18:06] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[18:07] * xarses (~andreww@12.164.168.117) has joined #ceph
[18:08] <nils_> is there a way to mount a striped rbd image?
[18:12] * davidzlap (~Adium@2605:e000:1313:8003:95e6:e660:9c18:47b9) has joined #ceph
[18:13] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[18:14] * rljohnsn (~rljohnsn@c-73-15-126-4.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[18:19] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) Quit (Quit: segutier)
[18:19] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) has joined #ceph
[18:19] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:21] * shaunm (~shaunm@nat-pool-brq-u.redhat.com) Quit (Ping timeout: 480 seconds)
[18:25] * analbeard (~shw@host86-140-202-228.range86-140.btcentralplus.com) has joined #ceph
[18:25] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[18:28] <devicenull> any ideas when the next giant version is coming?
[18:28] <devicenull> I've found giant loves to wedge itself into weird unrecoverable states
[18:28] <devicenull> like... health HEALTH_WARN 6 pgs degraded; 9 pgs stuck unclean; 6 pgs undersized
[18:28] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[18:29] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[18:30] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[18:33] * dc_mattj (~matt@office-fw0.sal01.datacentred.net) Quit (Remote host closed the connection)
[18:35] * mykola (~Mikolaj@91.225.201.255) has joined #ceph
[18:42] * analbeard (~shw@host86-140-202-228.range86-140.btcentralplus.com) Quit (Quit: Leaving.)
[18:42] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[18:42] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[18:46] * davidzlap1 (~Adium@cpe-23-242-189-171.socal.res.rr.com) has joined #ceph
[18:46] * davidzlap (~Adium@2605:e000:1313:8003:95e6:e660:9c18:47b9) Quit (Ping timeout: 480 seconds)
[18:47] * puffy (~puffy@161.170.193.99) has joined #ceph
[18:48] * Kupo1 (~tyler.wil@23.111.254.159) has joined #ceph
[18:48] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[18:49] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[18:51] * rturk is now known as rturk|afk
[18:54] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[18:54] * rturk|afk is now known as rturk
[18:55] * hellertime (~Adium@pool-173-48-56-84.bstnma.fios.verizon.net) has joined #ceph
[18:55] * hellertime (~Adium@pool-173-48-56-84.bstnma.fios.verizon.net) Quit ()
[18:59] * fghaas (~florian@185.15.236.4) has joined #ceph
[19:00] * davidzlap (~Adium@cpe-23-242-189-171.socal.res.rr.com) has joined #ceph
[19:00] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[19:04] * jrankin (~jrankin@nat-pool-rdu-t.redhat.com) has joined #ceph
[19:05] * rljohnsn (~rljohnsn@ns25.8x8.com) has joined #ceph
[19:06] * rljohnsn1 (~rljohnsn@ns25.8x8.com) has joined #ceph
[19:06] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[19:06] * davidzlap1 (~Adium@cpe-23-242-189-171.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[19:07] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) has joined #ceph
[19:08] * cookednoodles (~eoin@89-93-153-201.hfc.dyn.abo.bbox.fr) has joined #ceph
[19:10] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[19:13] * rljohnsn (~rljohnsn@ns25.8x8.com) Quit (Ping timeout: 480 seconds)
[19:13] * fghaas (~florian@185.15.236.4) Quit (Quit: Leaving.)
[19:14] * fghaas (~florian@185.15.236.4) has joined #ceph
[19:15] * linjan (~linjan@213.8.240.146) has joined #ceph
[19:16] * snakamoto (~snakamoto@157.254.210.31) has joined #ceph
[19:16] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[19:16] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[19:20] * avozza (~avozza@83.162.204.36) has joined #ceph
[19:25] * fghaas (~florian@185.15.236.4) Quit (Quit: Leaving.)
[19:26] * rturk is now known as rturk|afk
[19:27] * kefu (~kefu@114.92.100.153) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[19:27] * jamespd_ (~mucky@mucky.socket7.org) Quit (Ping timeout: 480 seconds)
[19:28] * jamespd (~mucky@mucky.socket7.org) has joined #ceph
[19:31] * bro_ (~flybyhigh@panik.darksystem.net) has joined #ceph
[19:31] * ircolle is now known as ircolle-afk
[19:32] <devicenull> managed to get it down to HEALTH_WARN 1 pgs degraded; 1 pgs stuck unclean; 1 pgs undersized
[19:33] <devicenull> but I have no idea why I have something stuck undersized, that doesn't make much sense
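
For context: 'undersized' means the PG's acting set currently has fewer OSDs than the pool's replica count. A minimal sketch of confirming which replica is missing, with placeholder pgid and pool name:

    ceph health detail                         # note the stuck pgid, e.g. 3.1a
    ceph pg 3.1a query | grep -A 5 '"acting"'  # 3.1a is a placeholder pgid
    ceph osd pool get <poolname> size          # <poolname> is a placeholder
    ceph osd tree                              # can CRUSH still pick enough hosts?
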
[19:33] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[19:34] * archiestengol (~chatzilla@69.241.53.218) has joined #ceph
[19:36] <CephTestC> <devicenull>: Would you suggest installing Firefly instead of Giant?
[19:36] <devicenull> yea, we had no problems until we upgraded to giant
[19:37] <CephTestC> Ok Thanks!
[19:40] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[19:41] * bearkitten (~bearkitte@cpe-66-27-98-26.san.res.rr.com) has joined #ceph
[19:41] * jeffmcdonald-msi1 (~Thunderbi@claudia.msi.umn.edu) has left #ceph
[19:42] * jeffmcdonald-msi1 (~Thunderbi@claudia.msi.umn.edu) has joined #ceph
[19:43] * adrian15b (~kvirc@149.Red-83-47-234.dynamicIP.rima-tde.net) has left #ceph
[19:43] * jeffmcdonald-msi1 (~Thunderbi@claudia.msi.umn.edu) has left #ceph
[19:44] * jeff1 (~Thunderbi@claudia.msi.umn.edu) has joined #ceph
[19:44] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) Quit (Remote host closed the connection)
[19:44] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) has joined #ceph
[19:44] * rturk|afk is now known as rturk
[19:45] * sleinen1 (~Adium@2001:620:0:82::100) Quit (Ping timeout: 480 seconds)
[19:47] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[19:48] * avozza (~avozza@83.162.204.36) Quit (Remote host closed the connection)
[19:49] * avozza (~avozza@83.162.204.36) has joined #ceph
[19:49] <snakamoto> Hi, is there anyone that can help me with a couple of Rados GW questions?
[19:50] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[19:54] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) Quit (Quit: No Ping reply in 180 seconds.)
[19:54] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) has joined #ceph
[19:57] * avozza (~avozza@83.162.204.36) Quit (Ping timeout: 480 seconds)
[19:58] * rturk is now known as rturk|afk
[20:01] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[20:01] * alram (~alram@ppp-seco11pa2-46-193-132-162.wb.wifirst.net) Quit (Ping timeout: 480 seconds)
[20:02] * sleinen1 (~Adium@2001:620:0:82::100) has joined #ceph
[20:06] * Sysadmin88 (~IceChat77@94.12.240.104) Quit (Quit: When the chips are down, well, the buffalo is empty)
[20:08] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[20:09] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[20:09] * bearkitten (~bearkitte@cpe-66-27-98-26.san.res.rr.com) Quit (Quit: WeeChat 1.1.1)
[20:10] * rljohnsn1 (~rljohnsn@ns25.8x8.com) Quit (Read error: Connection reset by peer)
[20:12] * cookednoodles (~eoin@89-93-153-201.hfc.dyn.abo.bbox.fr) Quit (Quit: Ex-Chat)
[20:14] * bearkitten (~bearkitte@cpe-66-27-98-26.san.res.rr.com) has joined #ceph
[20:15] * bearkitten (~bearkitte@cpe-66-27-98-26.san.res.rr.com) Quit ()
[20:15] * Kioob (~Kioob@2a01:e34:ec0a:c0f0:7e7a:91ff:fe3c:6865) has joined #ceph
[20:17] * bearkitten (~bearkitte@cpe-66-27-98-26.san.res.rr.com) has joined #ceph
[20:17] * bearkitten (~bearkitte@cpe-66-27-98-26.san.res.rr.com) Quit ()
[20:18] * LeaChim (~LeaChim@host86-147-114-247.range86-147.btcentralplus.com) has joined #ceph
[20:19] * Muhlemmer (~kvirc@cable-90-50.zeelandnet.nl) has joined #ceph
[20:19] * xarses (~andreww@12.164.168.117) has joined #ceph
[20:20] * puffy (~puffy@161.170.193.99) Quit (Quit: Leaving.)
[20:23] * fghaas (~florian@185.15.236.4) has joined #ceph
[20:24] * fretb (~fretb@pie.frederik.pw) Quit (Ping timeout: 480 seconds)
[20:24] * Sysadmin88 (~IceChat77@94.12.240.104) has joined #ceph
[20:25] <championofcyrodi> does rados/ceph use any kind of caching 'under the hood'? e.g. I have a VM using a rados image as a block device. reads are sometimes over 100MB/sec even though the network is bound to 1.0Gbps
[20:26] <championofcyrodi> like 150MB/sec... even though 1.0Gbps ~ 120MB/sec.
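
Reads above line rate usually come from caching rather than the network: librbd's client-side cache (if enabled) and the guest's own page cache both serve re-reads locally. A hedged sketch, assuming a QEMU/librbd setup, of the relevant ceph.conf options and of measuring the uncached path from inside the VM (device name is a placeholder):

    # hypervisor ceph.conf, [client] section (librbd cache settings):
    #   rbd cache = true
    #   rbd cache size = 33554432                  # 32 MB default
    #   rbd cache writethrough until flush = true
    # inside the guest, drop the page cache and read with O_DIRECT:
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/dev/vdb of=/dev/null bs=4M count=256 iflag=direct
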
[20:27] <jeff1> Hi, we're having some issues with the radosgw marking osds as down after large multi-part file transfers. I don't see very much useful information in the log files. The system was running splendidly until we upgraded to the giant release: 0.87. The symptoms are that an s3cmd transfer of a file (bigger than 3 GB) starts a multi-part transfer and kicks over some number of osds--on the final part of the multi-part transfer. The s3cmd version is 1.5.0,
[20:27] <jeff1> but we've reproduced the issue with 1.5.2 and several different python versions. The osds will remain marked as down, and any attempt to restart them without first stopping the radosgw fails to bring up the failed osd. Any advice on what could be wrong would be very appreciated.
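
A hedged sketch of one way to narrow that down: raise radosgw logging, reproduce with a forced multipart upload, and watch which OSDs get marked down (bucket, file and config section names are placeholders):

    # gateway host ceph.conf:
    #   [client.radosgw.gateway]
    #   debug rgw = 20
    #   debug ms = 1
    s3cmd put --multipart-chunk-size-mb=15 bigfile s3://testbucket/bigfile
    ceph -w                      # watch for osd boot/failure messages live
    ceph osd dump | grep down    # which OSDs ended up marked down
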
[20:27] * linjan (~linjan@213.8.240.146) Quit (Ping timeout: 480 seconds)
[20:29] * _prime_ (~oftc-webi@199.168.44.192) has joined #ceph
[20:29] * dyasny (~dyasny@173.231.115.58) Quit (Ping timeout: 480 seconds)
[20:29] * nils_ (~nils@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[20:30] * dyasny (~dyasny@173.231.115.58) has joined #ceph
[20:33] * rljohnsn (~rljohnsn@ns25.8x8.com) has joined #ceph
[20:33] * bearkitten (~bearkitte@cpe-66-27-98-26.san.res.rr.com) has joined #ceph
[20:34] * sleinen1 (~Adium@2001:620:0:82::100) Quit (Read error: Connection reset by peer)
[20:37] * jrankin (~jrankin@nat-pool-rdu-t.redhat.com) Quit (Ping timeout: 480 seconds)
[20:37] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[20:39] * puffy (~puffy@161.170.193.99) has joined #ceph
[20:39] * puffy (~puffy@161.170.193.99) Quit ()
[20:49] <devicenull> are your osds just locking up trying to write the file?
[20:49] <devicenull> does strace show them doing anything (or even top)
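
A minimal sketch of that kind of spot check on a wedged OSD (pid and OSD id are placeholders):

    top -H -p <pid>                                   # are its threads on-CPU or blocked?
    strace -f -p <pid> -e trace=write,fsync,fdatasync # is it making any syscalls at all?
    ceph daemon osd.<id> dump_ops_in_flight           # what the daemon thinks it is doing
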
[20:51] <saturnine> With KVM & rbd_cache enabled, should we be using cache=none or cache=writeback?
[20:51] * avozza (~avozza@83.162.204.36) has joined #ceph
[20:51] <saturnine> Apparently none is safe for live migration, but there's a risk of data loss if not using writeback?
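
For reference, with QEMU's rbd driver the drive cache mode controls librbd's cache: cache=none generally disables it, cache=writeback enables it, and 'rbd cache writethrough until flush' keeps writes safe until the guest issues its first flush. A hedged sketch of the writeback variant on the QEMU command line (pool, image and user names are placeholders):

    qemu-system-x86_64 ... \
      -drive file=rbd:libvirt-pool/my-vm-disk:id=libvirt,format=raw,cache=writeback,if=virtio
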
[20:51] * fretb (frederik@november.openminds.be) has joined #ceph
[20:56] * archiestengol (~chatzilla@69.241.53.218) Quit (Ping timeout: 480 seconds)
[20:56] * togdon (~togdon@74.121.28.6) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[20:57] <lurbs> saturnine: http://marc.info/?l=ceph-devel&m=134110569328691&w=2
[20:57] * archiestengol (~chatzilla@69.241.53.218) has joined #ceph
[20:58] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) has joined #ceph
[20:59] * avozza (~avozza@83.162.204.36) Quit (Remote host closed the connection)
[21:00] * avozza (~avozza@83.162.204.36) has joined #ceph
[21:02] <saturnine> lurbs: I saw that, but I also saw this: http://markmail.org/message/ob5pizl6vbt4l44x#query:+page:1+mid:lm3heehpjnfzgihw+state:results
[21:02] <saturnine> It looks like there was an issue with the Qemu defaults that made it unsafe, but I see a patch was committed a few months ago.
[21:03] <halbritt> anyone have any experience using very low-latency SSDs as a journal for SSD OSDs?
[21:05] * archiestengol (~chatzilla@69.241.53.218) Quit (Remote host closed the connection)
[21:05] <jeff1> devicenull: the osds are locking up only on writes so far. I have 170 of them and no predictability as to which one the file will land on, so I didn't try to do an strace. Problems don't seem to occur on files smaller than 1 GB. The osds, when I restart them, don't seem to show any errors; it appears that something is cached on the radosgw and it continually tries to write to that osd, which fails, and the osd is marked as down.
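
A hedged sketch of checking whether those OSDs are actually dead or merely marked down, and what they logged when the write arrived (OSD id is a placeholder; service commands vary by distro):

    ceph osd tree | grep down                       # marked down in the osdmap
    ps aux | grep 'ceph-osd -i <id>'                # is the process still alive?
    tail -n 200 /var/log/ceph/ceph-osd.<id>.log     # errors around the failure time
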
[21:06] * nitti (~nitti@162.222.47.218) Quit (Ping timeout: 480 seconds)
[21:06] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[21:08] * avozza (~avozza@83.162.204.36) Quit (Ping timeout: 480 seconds)
[21:09] * Muhlemmer (~kvirc@cable-90-50.zeelandnet.nl) Quit (Quit: KVIrc 4.3.1 Aria http://www.kvirc.net/)
[21:13] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[21:15] * togdon (~togdon@74.121.28.6) has joined #ceph
[21:16] * lx0 is now known as lxo
[21:24] * Muhlemmer (~kvirc@cable-90-50.zeelandnet.nl) has joined #ceph
[21:26] * Muhlemmer (~kvirc@cable-90-50.zeelandnet.nl) Quit ()
[21:27] * vbellur (~vijay@122.172.198.91) Quit (Ping timeout: 480 seconds)
[21:27] * Muhlemmer (~kvirc@cable-90-50.zeelandnet.nl) has joined #ceph
[21:30] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (Remote host closed the connection)
[21:30] * ganders (~root@200.32.121.70) Quit (Quit: WeeChat 0.4.2)
[21:31] * mjeanson (~mjeanson@bell.multivax.ca) has joined #ceph
[21:34] * dyasny (~dyasny@173.231.115.58) Quit (Ping timeout: 480 seconds)
[21:36] * linjan (~linjan@80.179.241.26) has joined #ceph
[21:37] * togdon (~togdon@74.121.28.6) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[21:38] * fretb (frederik@november.openminds.be) Quit (Quit: Lost terminal)
[21:38] * togdon (~togdon@74.121.28.6) has joined #ceph
[21:43] * nitti (~nitti@162.222.47.218) has joined #ceph
[21:46] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[21:46] * fretb (frederik@november.openminds.be) has joined #ceph
[21:46] * fretb (frederik@november.openminds.be) Quit ()
[21:47] * dyasny (~dyasny@173.231.115.58) has joined #ceph
[21:48] * fretb (frederik@november.openminds.be) has joined #ceph
[21:49] * fretb_ (frederik@november.openminds.be) has joined #ceph
[21:49] * fretb (frederik@november.openminds.be) Quit ()
[21:49] * fretb_ (frederik@november.openminds.be) Quit ()
[21:51] * fretb (frederik@november.openminds.be) has joined #ceph
[21:51] * fretb (frederik@november.openminds.be) Quit ()
[21:52] * fretb (frederik@november.openminds.be) has joined #ceph
[21:52] * rotbeard (~redbeard@2a02:908:df10:d300:6267:20ff:feb7:c20) Quit (Quit: Leaving)
[21:53] * fghaas (~florian@185.15.236.4) has left #ceph
[21:54] * linjan (~linjan@80.179.241.26) Quit (Ping timeout: 480 seconds)
[21:55] * linjan (~linjan@213.8.240.146) has joined #ceph
[21:55] * roehrich (~roehrich@146.174.238.100) Quit (Quit: Leaving)
[21:58] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[22:00] * haomaiwa_ (~haomaiwan@115.218.153.142) has joined #ceph
[22:01] * oro_ (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[22:03] * _prime_ (~oftc-webi@199.168.44.192) Quit (Quit: Page closed)
[22:03] * alram (~alram@ppp-seco11pa2-46-193-132-162.wb.wifirst.net) has joined #ceph
[22:04] * sputnik13 (~sputnik13@74.202.214.170) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[22:06] * alram (~alram@ppp-seco11pa2-46-193-132-162.wb.wifirst.net) Quit ()
[22:07] * haomaiwang (~haomaiwan@115.218.155.68) Quit (Ping timeout: 480 seconds)
[22:07] * linjan (~linjan@213.8.240.146) Quit (Ping timeout: 480 seconds)
[22:09] * sputnik13 (~sputnik13@74.202.214.170) has joined #ceph
[22:10] * puffy (~puffy@161.170.193.99) has joined #ceph
[22:14] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[22:16] * rotbeard (~redbeard@2a02:908:df10:d300:6267:20ff:feb7:c20) has joined #ceph
[22:17] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[22:17] * georgem (~Adium@fwnat.oicr.on.ca) Quit ()
[22:19] * Rickus (~Rickus@office.protected.ca) Quit (Read error: Connection reset by peer)
[22:20] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[22:23] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[22:31] * rljohnsn1 (~rljohnsn@ns25.8x8.com) has joined #ceph
[22:32] * _prime_ (~oftc-webi@199.168.44.192) has joined #ceph
[22:32] * dupont-y (~dupont-y@2a01:e34:ec92:8070:3cda:ef32:6b54:b12d) has joined #ceph
[22:34] * mykola (~Mikolaj@91.225.201.255) Quit (Quit: away)
[22:35] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[22:38] * rljohnsn (~rljohnsn@ns25.8x8.com) Quit (Ping timeout: 480 seconds)
[22:42] * Redcavalier (~Redcavali@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Read error: Connection reset by peer)
[22:43] * rljohnsn1 (~rljohnsn@ns25.8x8.com) Quit (Quit: Leaving.)
[22:44] * georgem (~Adium@184.151.178.119) has joined #ceph
[22:48] * georgem (~Adium@184.151.178.119) Quit ()
[22:48] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[22:52] * cholcombe973 (~chris@73.25.105.99) has joined #ceph
[22:52] * shaunm (~shaunm@62.209.224.147) has joined #ceph
[22:58] * togdon (~togdon@74.121.28.6) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[23:03] * sputnik13 (~sputnik13@74.202.214.170) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[23:03] * lcurtis (~lcurtis@47.19.105.250) Quit (Read error: Connection reset by peer)
[23:04] * sputnik13 (~sputnik13@74.202.214.170) has joined #ceph
[23:05] * togdon (~togdon@74.121.28.6) has joined #ceph
[23:06] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[23:08] * Nacer (~Nacer@2001:41d0:fe82:7200:19a:7310:6598:9ee) has joined #ceph
[23:10] * sjm1 (~sjm@pool-98-109-11-113.nwrknj.fios.verizon.net) has joined #ceph
[23:10] * sjm1 (~sjm@pool-98-109-11-113.nwrknj.fios.verizon.net) Quit ()
[23:14] * ircolle-afk is now known as ircolle
[23:15] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[23:15] * rljohnsn (~rljohnsn@ns25.8x8.com) has joined #ceph
[23:16] * oro_ (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[23:22] * rljohnsn (~rljohnsn@ns25.8x8.com) Quit (Quit: Leaving.)
[23:23] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[23:24] * saltlake (~saltlake@12.250.199.170) Quit (Ping timeout: 480 seconds)
[23:27] * angdraug (~angdraug@c-50-174-102-105.hsd1.ca.comcast.net) has joined #ceph
[23:28] * dyasny (~dyasny@173.231.115.58) Quit (Ping timeout: 480 seconds)
[23:30] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:31] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[23:38] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[23:40] * dyasny (~dyasny@198.251.59.151) has joined #ceph
[23:41] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) has joined #ceph
[23:42] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) has joined #ceph
[23:43] * lcurtis (~lcurtis@47.19.105.250) Quit (Ping timeout: 480 seconds)
[23:46] * rljohnsn (~rljohnsn@ns25.8x8.com) has joined #ceph
[23:46] * rotbeard (~redbeard@2a02:908:df10:d300:6267:20ff:feb7:c20) Quit (Quit: Leaving)
[23:50] * vata (~vata@208.88.110.46) Quit (Quit: Leaving.)
[23:59] * sjm (~sjm@pool-98-109-11-113.nwrknj.fios.verizon.net) has left #ceph
[23:59] * joef (~Adium@2620:79:0:2420::10) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.