#ceph IRC Log

IRC Log for 2015-04-14

Timestamps are in GMT/BST.

[0:06] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[0:06] * xcezzz (~Adium@pool-100-3-14-19.tampfl.fios.verizon.net) has joined #ceph
[0:06] * xcezzz1 (~Adium@pool-100-3-14-19.tampfl.fios.verizon.net) Quit (Read error: Connection reset by peer)
[0:09] * fghaas (~florian@185.15.236.4) Quit (Quit: Leaving.)
[0:10] * fireD (~fireD@93-139-197-152.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[0:16] * oro_ (~oro@sccc-66-78-236-243.smartcity.com) has joined #ceph
[0:16] * oro (~oro@sccc-66-78-236-243.smartcity.com) Quit (Ping timeout: 480 seconds)
[0:16] * ChrisNBlum (~ChrisNBlu@178.255.153.117) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[0:20] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[0:22] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[0:23] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[0:24] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[0:27] * jiyer (~chatzilla@63.229.31.161) Quit (Quit: ChatZilla 0.9.91.1 [Firefox 37.0.1/20150403142420])
[0:29] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[0:30] * MVenesio (~MVenesio@186.136.59.165) Quit (Quit: Leaving...)
[0:31] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Ping timeout: 480 seconds)
[0:32] * Steki (~steki@cable-89-216-232-72.dynamic.sbb.rs) Quit (Remote host closed the connection)
[0:32] * fsimonce (~simon@host178-188-dynamic.26-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[0:33] * OutOfNoWhere (~rpb@199.68.195.102) has joined #ceph
[0:43] * ivs (~ivs@ip-95-221-196-162.bb.netbynet.ru) Quit ()
[0:51] * PerlStalker (~PerlStalk@162.220.127.20) Quit (Quit: ...)
[0:59] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[1:13] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) Quit (Quit: p66kumar)
[1:13] * shyu (~Shanzhi@119.254.196.66) has joined #ceph
[1:17] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[1:28] * wschulze (~wschulze@38.96.12.2) Quit (Quit: Leaving.)
[1:28] * alram (~alram@38.96.12.2) has joined #ceph
[1:29] * oms101 (~oms101@2003:57:ea00:ad00:eef4:bbff:fe0f:7062) Quit (Ping timeout: 480 seconds)
[1:29] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[1:31] * alram (~alram@38.96.12.2) Quit ()
[1:33] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[1:37] * oms101 (~oms101@p20030057EA00CA00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[1:37] * wschulze (~wschulze@38.96.12.2) has joined #ceph
[1:38] * wschulze (~wschulze@38.96.12.2) Quit ()
[1:39] * alram (~alram@38.96.12.2) has joined #ceph
[1:46] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:46] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[1:48] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit ()
[1:56] * dmick (~dmick@2607:f298:a:607:c91b:63e9:9528:c716) has left #ceph
[2:03] * alram (~alram@38.96.12.2) Quit (Quit: leaving)
[2:03] * alram (~alram@38.96.12.2) has joined #ceph
[2:03] * evilrob00 (~evilrob00@cpe-72-179-3-209.austin.res.rr.com) Quit (Quit: Leaving...)
[2:23] * bandrus (~brian@128.sub-70-211-79.myvzw.com) Quit (Quit: Leaving.)
[2:24] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[2:24] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has left #ceph
[2:27] * sherlocked (~watson@14.139.82.6) has joined #ceph
[2:27] * sherlocked (~watson@14.139.82.6) Quit ()
[2:28] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[2:28] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Remote host closed the connection)
[2:29] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[2:29] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Remote host closed the connection)
[2:30] * calvinx (~calvin@103.7.202.198) has joined #ceph
[2:36] * puffy (~puffy@216.207.42.144) Quit (Quit: Leaving.)
[2:36] * reed (~reed@2602:244:b653:6830:71f3:5114:3563:946d) Quit (Quit: Ex-Chat)
[2:40] * nigwil (~Oz@li747-216.members.linode.com) Quit (Quit: leaving)
[2:40] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) has joined #ceph
[2:41] * evilrob00 (~evilrob00@adsl-172-2-49-87.dsl.aus2tx.sbcglobal.net) has joined #ceph
[2:41] * calvinx (~calvin@103.7.202.198) Quit (Quit: calvinx)
[2:43] * nigwil (~Oz@li747-216.members.linode.com) has joined #ceph
[2:44] * daniel2_ (~daniel2_@12.164.168.117) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[2:48] * oro_ (~oro@sccc-66-78-236-243.smartcity.com) Quit (Ping timeout: 480 seconds)
[2:51] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) Quit (Quit: segutier)
[2:52] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) has joined #ceph
[2:56] * zack_dolby (~textual@nfmv001080013.uqw.ppp.infoweb.ne.jp) has joined #ceph
[2:58] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) Quit (Quit: p66kumar)
[3:00] * yanzheng (~zhyan@182.139.204.64) has joined #ceph
[3:01] * wschulze (~wschulze@38.96.12.2) has joined #ceph
[3:02] * daniel2_ (~daniel2_@12.164.168.117) has joined #ceph
[3:02] * segutier_ (~segutier@128.90.95.72) has joined #ceph
[3:06] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:06] * segutier_ is now known as segutier
[3:08] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[3:10] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[3:11] * georgem (~Adium@69-196-174-91.dsl.teksavvy.com) has joined #ceph
[3:14] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) has joined #ceph
[3:16] * georgem (~Adium@69-196-174-91.dsl.teksavvy.com) Quit (Quit: Leaving.)
[3:16] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[3:18] * daniel2_ (~daniel2_@12.164.168.117) Quit (Quit: Textual IRC Client: www.textualapp.com)
[3:22] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[3:25] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[3:35] * root2 (~root@pD9E9DA3E.dip0.t-ipconnect.de) has joined #ceph
[3:38] * badone_ is now known as badone
[3:41] * root (~root@p5DDE649B.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[3:45] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[3:50] * kefu (~kefu@114.92.111.70) has joined #ceph
[3:56] * debian112 (~bcolbert@24.126.201.64) Quit (Quit: Leaving.)
[3:58] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[4:03] * zhaochao (~zhaochao@111.161.77.236) has joined #ceph
[4:09] * sbfox (~Adium@S0106c46e1fb849db.vf.shawcable.net) has joined #ceph
[4:18] * sjm (~sjm@pool-173-70-76-86.nwrknj.fios.verizon.net) has left #ceph
[4:20] * jksM (~jks@178.155.151.121) has joined #ceph
[4:21] * jks (~jks@178.155.151.121) Quit (Read error: Connection reset by peer)
[4:23] * glzhao_ (~glzhao@220.181.124.79) has joined #ceph
[4:25] * anon (~oftc-webi@cpe-72-183-101-74.austin.res.rr.com) has joined #ceph
[4:25] * anon (~oftc-webi@cpe-72-183-101-74.austin.res.rr.com) Quit ()
[4:26] * anon9110w (~oftc-webi@cpe-72-183-101-74.austin.res.rr.com) has joined #ceph
[4:29] * bkopilov (~bkopilov@bzq-109-66-134-152.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[4:30] * glzhao (~glzhao@203.90.249.185) Quit (Ping timeout: 480 seconds)
[4:30] * zack_dol_ (~textual@nfmv001072045.uqw.ppp.infoweb.ne.jp) has joined #ceph
[4:32] * Kyso (~murmur@tor.de.smashthestack.org) has joined #ceph
[4:34] * zack_dolby (~textual@nfmv001080013.uqw.ppp.infoweb.ne.jp) Quit (Ping timeout: 480 seconds)
[4:40] * oro_ (~oro@8.25.222.10) has joined #ceph
[4:44] * sbfox (~Adium@S0106c46e1fb849db.vf.shawcable.net) Quit (Quit: Leaving.)
[4:54] * glzhao__ (~glzhao@203.90.249.185) has joined #ceph
[4:59] * MACscr (~Adium@2601:d:c800:de3:4ca1:b9d2:c410:6b79) Quit (Quit: Leaving.)
[4:59] * segutier (~segutier@128.90.95.72) Quit (Quit: segutier)
[5:00] * tobiash (~quassel@mail.bmw-carit.de) Quit (Remote host closed the connection)
[5:01] * glzhao_ (~glzhao@220.181.124.79) Quit (Ping timeout: 480 seconds)
[5:01] * rrerolle (~smuxi@vps.neonex.fr) Quit (Remote host closed the connection)
[5:01] * tobiash (~quassel@mail.bmw-carit.de) has joined #ceph
[5:01] * S`Husky (~sam@huskeh.net) Quit (Remote host closed the connection)
[5:01] * nyov (~nyov@178.33.33.184) Quit (Remote host closed the connection)
[5:02] * S`Husky (~sam@huskeh.net) has joined #ceph
[5:02] * nyov (~nyov@178.33.33.184) has joined #ceph
[5:02] * Kyso (~murmur@5NZAABLK9.tor-irc.dnsbl.oftc.net) Quit ()
[5:02] * s3an2 (~sean@korn.s3an.me.uk) Quit (Remote host closed the connection)
[5:02] * s3an2 (~sean@korn.s3an.me.uk) has joined #ceph
[5:03] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (Remote host closed the connection)
[5:03] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[5:03] * rrerolle (~smuxi@vps.neonex.fr) has joined #ceph
[5:09] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) has joined #ceph
[5:13] * Mika_c (~quassel@125.227.22.217) has joined #ceph
[5:20] * sbfox (~Adium@S0106c46e1fb849db.vf.shawcable.net) has joined #ceph
[5:23] * visbits (~textual@cpe-174-101-246-167.cinci.res.rr.com) Quit (Read error: No route to host)
[5:23] * visbits (~textual@cpe-174-101-246-167.cinci.res.rr.com) has joined #ceph
[5:24] * visbits (~textual@cpe-174-101-246-167.cinci.res.rr.com) Quit ()
[5:25] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) Quit (Quit: p66kumar)
[5:29] * Vacuum (~vovo@88.130.221.51) has joined #ceph
[5:31] * evilrob00 (~evilrob00@adsl-172-2-49-87.dsl.aus2tx.sbcglobal.net) Quit (Remote host closed the connection)
[5:32] * fam is now known as fam_away
[5:33] * evilrob00 (~evilrob00@adsl-172-2-49-87.dsl.aus2tx.sbcglobal.net) has joined #ceph
[5:36] * Vacuum_ (~vovo@i59F7ACBB.versanet.de) Quit (Ping timeout: 480 seconds)
[5:37] * oro_ (~oro@8.25.222.10) Quit (Ping timeout: 480 seconds)
[5:38] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) has joined #ceph
[5:42] * fam_away is now known as fam
[5:46] * MrHeavy_ (~mrheavy@pool-108-54-190-117.nycmny.fios.verizon.net) Quit (Read error: Connection reset by peer)
[5:55] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[5:55] * fmanana (~fdmanana@bl13-131-19.dsl.telepac.pt) has joined #ceph
[5:56] * evilrob00 (~evilrob00@adsl-172-2-49-87.dsl.aus2tx.sbcglobal.net) Quit (Remote host closed the connection)
[6:00] * vbellur (~vijay@122.166.171.197) Quit (Ping timeout: 480 seconds)
[6:00] * qwebirc54434 (~oftc-webi@ec2-54-149-249-115.us-west-2.compute.amazonaws.com) has joined #ceph
[6:01] * vbellur (~vijay@122.166.171.197) has joined #ceph
[6:02] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) has joined #ceph
[6:02] * fdmanana (~fdmanana@bl13-133-198.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[6:02] * qwebirc30342 (~oftc-webi@116.255.132.3) has joined #ceph
[6:04] * shylesh (~shylesh@121.244.87.124) has joined #ceph
[6:05] * sbfox (~Adium@S0106c46e1fb849db.vf.shawcable.net) Quit (Quit: Leaving.)
[6:13] * sbfox (~Adium@S0106c46e1fb849db.vf.shawcable.net) has joined #ceph
[6:13] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) Quit (Quit: segutier)
[6:15] * kefu (~kefu@114.92.111.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:16] * vbellur (~vijay@122.166.171.197) Quit (Ping timeout: 480 seconds)
[6:16] * kefu (~kefu@114.92.111.70) has joined #ceph
[6:20] * evilrob00 (~evilrob00@cpe-72-179-3-209.austin.res.rr.com) has joined #ceph
[6:26] * kefu (~kefu@114.92.111.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:36] <skullone> is ceph.com having issues?
[6:36] <destrudo_> yup
[6:36] <skullone> ;(
[6:36] * destrudo_ is now known as destrudo
[6:36] <destrudo> Intricate, delicate
[6:38] * vpol (~vpol@000131a0.user.oftc.net) has joined #ceph
[6:39] * joshd (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) Quit (Quit: Leaving.)
[6:43] * vbellur (~vijay@121.244.87.117) has joined #ceph
[6:43] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[7:02] * madkiss (~madkiss@2001:6f8:12c3:f00f:e8e4:145f:403a:5f58) Quit (Ping timeout: 480 seconds)
[7:02] * AluAlu (~pepzi@146.185.143.144) has joined #ceph
[7:06] * hasues (~hazuez@66.87.152.244) has joined #ceph
[7:06] * hasues (~hazuez@66.87.152.244) has left #ceph
[7:11] * overclk (~overclk@121.244.87.117) has joined #ceph
[7:14] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) Quit (Quit: p66kumar)
[7:16] * hasues1 (~hazuez@66.87.153.48) has joined #ceph
[7:16] * hasues1 (~hazuez@66.87.153.48) has left #ceph
[7:19] * fghaas (~florian@185.15.236.4) has joined #ceph
[7:20] * hasues (~hazuez@66.87.153.239) has joined #ceph
[7:20] * hasues (~hazuez@66.87.153.239) Quit ()
[7:21] * fghaas1 (~florian@212095007017.public.telering.at) has joined #ceph
[7:23] * oro_ (~oro@207-47-24-10.static-ip.telepacific.net) has joined #ceph
[7:24] * rdas (~rdas@110.227.44.189) has joined #ceph
[7:24] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[7:25] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[7:28] * fghaas (~florian@185.15.236.4) Quit (Ping timeout: 480 seconds)
[7:32] * AluAlu (~pepzi@2WVAABJRV.tor-irc.dnsbl.oftc.net) Quit ()
[7:33] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) has joined #ceph
[7:34] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[7:38] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[7:39] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[7:39] * OutOfNoWhere (~rpb@199.68.195.102) Quit (Ping timeout: 480 seconds)
[7:45] * joef (~Adium@c-24-130-254-66.hsd1.ca.comcast.net) has joined #ceph
[7:45] * fghaas1 (~florian@212095007017.public.telering.at) Quit (Ping timeout: 480 seconds)
[7:48] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[7:51] * kefu (~kefu@114.92.111.70) has joined #ceph
[7:51] * joef (~Adium@c-24-130-254-66.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[7:52] * puffy (~puffy@50.185.218.255) has joined #ceph
[7:53] * alram (~alram@38.96.12.2) Quit (Quit: Lost terminal)
[7:57] * elder (~elder@50.250.13.174) Quit (Ping timeout: 480 seconds)
[8:00] * hasues (~hazuez@66.87.152.27) has joined #ceph
[8:00] * hasues (~hazuez@66.87.152.27) has left #ceph
[8:01] * fghaas (~florian@185.15.236.4) has joined #ceph
[8:01] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[8:01] <qwebirc30342> ceph version 0.80.0
[8:01] * fghaas (~florian@185.15.236.4) Quit ()
[8:02] <qwebirc30342> how to resolve FAILED assert(soid < scrubber.start || soid >= scrubber.end)
[8:02] * phoenix42 (~phoenix42@122.252.249.67) has joined #ceph
[8:02] * Grimhound (~clusterfu@strasbourg-tornode.eddai.su) has joined #ceph
[8:10] * phoenix42_ (~phoenix42@122.252.249.67) has joined #ceph
[8:11] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[8:12] * daniel2_ (~daniel2@12.0.207.18) has joined #ceph
[8:13] * daniel2_ (~daniel2@12.0.207.18) Quit ()
[8:13] * daniel2_ (~daniel2@12.0.207.18) has joined #ceph
[8:13] * phoenix42 (~phoenix42@122.252.249.67) Quit (Ping timeout: 480 seconds)
[8:16] * stein (~stein@185.56.185.82) Quit (Ping timeout: 480 seconds)
[8:22] * zack_dolby (~textual@nfmv001073072.uqw.ppp.infoweb.ne.jp) has joined #ceph
[8:23] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[8:25] * fghaas (~florian@185.15.236.4) has joined #ceph
[8:25] * karnan (~karnan@121.244.87.117) has joined #ceph
[8:27] * phoenix42 (~phoenix42@122.252.249.67) has joined #ceph
[8:27] * zack_dol_ (~textual@nfmv001072045.uqw.ppp.infoweb.ne.jp) Quit (Ping timeout: 480 seconds)
[8:31] * phoenix42_ (~phoenix42@122.252.249.67) Quit (Ping timeout: 480 seconds)
[8:32] * Grimhound (~clusterfu@5NZAABLSP.tor-irc.dnsbl.oftc.net) Quit ()
[8:35] * Sysadmin88 (~IceChat77@054527d3.skybroadband.com) Quit (Quit: He who laughs last, thinks slowest)
[8:36] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[8:38] * phoenix42_ (~phoenix42@122.252.249.67) has joined #ceph
[8:38] * phoenix42 (~phoenix42@122.252.249.67) Quit (Read error: Connection reset by peer)
[8:43] * fghaas (~florian@185.15.236.4) Quit (Quit: Leaving.)
[8:45] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[8:50] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[8:55] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[8:55] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[8:56] * fghaas (~florian@185.15.236.4) has joined #ceph
[9:00] * fghaas1 (~florian@213162068109.public.t-mobile.at) has joined #ceph
[9:02] * brianjjo (~Curt`@2WVAABJWJ.tor-irc.dnsbl.oftc.net) has joined #ceph
[9:05] * phoenix42 (~phoenix42@122.252.249.67) has joined #ceph
[9:05] * Mika_c (~quassel@125.227.22.217) Quit (Ping timeout: 480 seconds)
[9:06] * fghaas (~florian@185.15.236.4) Quit (Ping timeout: 480 seconds)
[9:09] * phoenix42_ (~phoenix42@122.252.249.67) Quit (Ping timeout: 480 seconds)
[9:11] * smithfarm (~ncutler@nat1.scz.suse.com) has joined #ceph
[9:13] * analbeard (~shw@support.memset.com) has joined #ceph
[9:15] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[9:15] * b0e (~aledermue@213.95.25.82) has joined #ceph
[9:16] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[9:19] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) Quit (Quit: p66kumar)
[9:20] * sbfox (~Adium@S0106c46e1fb849db.vf.shawcable.net) Quit (Quit: Leaving.)
[9:21] * daniel2_ (~daniel2@12.0.207.18) Quit (Remote host closed the connection)
[9:22] * thomnico (~thomnico@2a01:e35:8b41:120:c43a:4e84:65cd:44cf) has joined #ceph
[9:26] * qstion (~qstion@37.157.144.44) has joined #ceph
[9:27] <qstion> hello, i'm using ceph version 0.93
[9:27] <qstion> i have one pool (24 OSD, 1024PG)
[9:27] <qstion> Pool size: 3
[9:27] <qstion> Pool contains: 4 images x 2TB
[9:28] * vbellur (~vijay@121.244.87.117) has joined #ceph
[9:28] <qstion> Images are connected to 4 servers.
[9:29] <qstion> Sum of disk space used across all 4 servers is: 4747 GB
[9:30] <qstion> Total usage on ceph's side should be 3 (pool size) x 4747 GB = ~14 TB
[9:30] <qstion> But "ceph osd df" show me that total space used is 18 TB
[9:30] * hfu (~hfu@202.100.81.240) has joined #ceph
[9:31] <qstion> so that's 4 TB of extra space - what is it being used for?
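
A rough sanity check of the numbers above (a sketch only; the pool name "data" and the figures are the ones qstion quotes, and the gap between expected and reported raw usage is commonly filesystem overhead on the OSDs plus space freed inside the RBD images that was never discarded down to RADOS):

    # expected raw usage = space used inside the images x replication factor
    USED_GB=4747
    SIZE=3
    echo "expected: $((USED_GB * SIZE)) GB"   # 14241 GB, i.e. ~14 TB
    ceph df                                   # per-pool usage as Ceph accounts it
    ceph osd df                               # raw usage per OSD and in total (the ~18 TB figure)
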
[9:32] * brianjjo (~Curt`@2WVAABJWJ.tor-irc.dnsbl.oftc.net) Quit ()
[9:32] * hassifa (~Lattyware@31.31.74.64) has joined #ceph
[9:35] * elder (~elder@50.250.13.174) has joined #ceph
[9:37] * fsimonce (~simon@host178-188-dynamic.26-79-r.retail.telecomitalia.it) has joined #ceph
[9:41] * erice (~eric@c-50-134-164-169.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[9:44] * dgurtner (~dgurtner@178.197.231.228) has joined #ceph
[9:44] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) has joined #ceph
[9:45] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[9:45] * qwebirc30342 (~oftc-webi@116.255.132.3) Quit (Remote host closed the connection)
[9:46] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[9:47] * rendar (~I@host122-176-dynamic.3-87-r.retail.telecomitalia.it) has joined #ceph
[9:48] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[9:49] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[9:53] * oro_ (~oro@207-47-24-10.static-ip.telepacific.net) Quit (Ping timeout: 480 seconds)
[9:53] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[9:55] * fireD (~fireD@93-139-197-152.adsl.net.t-com.hr) has joined #ceph
[9:57] * Concubidated (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[9:59] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[10:01] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[10:01] * b0e (~aledermue@213.95.25.82) has joined #ceph
[10:01] * thomnico (~thomnico@2a01:e35:8b41:120:c43a:4e84:65cd:44cf) Quit (Quit: Ex-Chat)
[10:01] * thomnico (~thomnico@2a01:e35:8b41:120:c43a:4e84:65cd:44cf) has joined #ceph
[10:02] * hassifa (~Lattyware@5NZAABLWL.tor-irc.dnsbl.oftc.net) Quit ()
[10:03] * oro_ (~oro@207-47-24-10.static-ip.telepacific.net) has joined #ceph
[10:03] * lucas1 (~Thunderbi@218.76.52.64) Quit (Read error: Connection reset by peer)
[10:04] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[10:08] * bitserker (~toni@213.229.187.110) has joined #ceph
[10:08] <guerby> loicd, 0.94 release notes are not listed here: http://ceph.com/docs/hammer/release-notes/
[10:09] <guerby> but they're ok here: http://ceph.com/docs/master/release-notes/
[10:13] * hfu (~hfu@202.100.81.240) Quit (Remote host closed the connection)
[10:13] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[10:13] * phoenix42 (~phoenix42@122.252.249.67) Quit (Ping timeout: 480 seconds)
[10:14] * fghaas (~florian@212095007053.public.telering.at) has joined #ceph
[10:15] * fghaas1 (~florian@213162068109.public.t-mobile.at) Quit (Ping timeout: 480 seconds)
[10:16] * wschulze (~wschulze@38.96.12.2) Quit (Quit: Leaving.)
[10:16] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[10:17] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[10:18] * oro_ (~oro@207-47-24-10.static-ip.telepacific.net) Quit (Ping timeout: 480 seconds)
[10:18] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[10:20] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) has joined #ceph
[10:22] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[10:25] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[10:26] <Be-El> hi
[10:30] * nils_ (~nils@doomstreet.collins.kg) has joined #ceph
[10:30] * thomnico (~thomnico@2a01:e35:8b41:120:c43a:4e84:65cd:44cf) Quit (Quit: Ex-Chat)
[10:30] * thomnico (~thomnico@2a01:e35:8b41:120:c43a:4e84:65cd:44cf) has joined #ceph
[10:31] * phoenix42 (~phoenix42@122.252.249.67) has joined #ceph
[10:31] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[10:32] * fghaas (~florian@212095007053.public.telering.at) Quit (Ping timeout: 480 seconds)
[10:35] * lalatenduM (~lalatendu@121.244.87.124) has joined #ceph
[10:36] * MACscr (~Adium@2601:d:c800:de3:812a:c8b1:782d:d35e) has joined #ceph
[10:38] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[10:39] * lalatenduM (~lalatendu@121.244.87.124) Quit ()
[10:41] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[10:41] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) has joined #ceph
[10:42] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[10:43] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:47] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[10:47] * hfu (~hfu@202.100.81.240) has joined #ceph
[10:49] * kawa2014 (~kawa@212.110.41.244) has joined #ceph
[10:50] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[10:55] <loicd> guerby: yes, the release notes are in master always, it is intended but somewhat confusing
[10:56] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[11:00] * aszeszo (~aszeszo@geq177.internetdsl.tpnet.pl) has joined #ceph
[11:02] <b0e> Hi
[11:02] * thomnico (~thomnico@2a01:e35:8b41:120:c43a:4e84:65cd:44cf) Quit (Quit: Ex-Chat)
[11:02] <b0e> we are doing an upgrade from firefly to hammer. what is pg status undersized?
[11:02] * thomnico (~thomnico@2a01:e35:8b41:120:c43a:4e84:65cd:44cf) has joined #ceph
[11:02] * EdGruberman (~xENO_@98EAAA8O5.tor-irc.dnsbl.oftc.net) has joined #ceph
[11:03] * zack_dolby (~textual@nfmv001073072.uqw.ppp.infoweb.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[11:05] <b0e> sorry, i found the definition in the documentation, now. :D
[11:08] * Kioob`Taff (~plug-oliv@2a01:e35:2e8a:1e0::42:10) has joined #ceph
[11:09] <MaZ-> ok so lets assume in testing i have a raid1+0 (hardware) across 12 disks and had that as 1 OSD, with a size of 1 (0 replicas) - that gives me 44TB/2 of usable storage. That osd is currently at ~60% utilisation. I then configure an identical machine with 12 separate OSDs, no hardware raid, and with a size of 2 for the data pool (1 replica). After replicating the data across from the first
[11:09] <MaZ-> system, I'd expect this to use a similar-ish amount of raw storage - there shouldn't be a significant amount (i.e. terabytes) of overhead from storing across multiple OSDs in this situation, right?
[11:13] * branto (~borix@ip-213-220-214-203.net.upcbroadband.cz) has joined #ceph
[11:14] * hfu (~hfu@202.100.81.240) Quit (Remote host closed the connection)
[11:17] * stalob (~oftc-webi@br167-098.ifremer.fr) has joined #ceph
[11:18] <stalob> hello?
[11:25] <Kvisle> any el7-users who are running ceph 0.87 and have a solution on what repositories to use? the documented way doesn't work well, Package python-ceph is obsoleted by python-rados, trying to install 1:python-rados-0.80.7-2.el7.x86_64 instead
[11:26] <Kvisle> and then it fails
[11:28] <Kvisle> figured it out
[11:28] <Kvisle> exclude=python-rados,python-rbd
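
For context, a minimal sketch of where Kvisle's exclude line could live, assuming the conflicting 0.80.7 python-rados/python-rbd packages come from EPEL; the file path and stanza contents are illustrative, not taken from the conversation:

    # /etc/yum.repos.d/epel.repo (illustrative; keep the existing name/baseurl/gpgkey lines)
    [epel]
    enabled=1
    # stop yum from replacing python-ceph with EPEL's older python-rados/python-rbd
    exclude=python-rados,python-rbd
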
[11:28] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[11:29] <stalob> i'm fairly new with ceph , i'm trying to make it work following the documentation...but i encountered a problem
[11:29] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[11:30] <stalob> when trying to run a ceph command on mchaine i got this error "-1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication"
[11:31] <stalob> probably a missing file as you said.. yep, but ceph is installed on another machine and guess what, the keyring file is at the same location there and it works
[11:32] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[11:32] * EdGruberman (~xENO_@98EAAA8O5.tor-irc.dnsbl.oftc.net) Quit ()
[11:33] <stalob> *command on a machine (sorry for the typo mistake )
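
The "missing keyring" error stalob quotes usually means cephx is enabled but the client host has no keyring under /etc/ceph; a minimal sketch of one common fix, assuming the admin keyring can simply be copied from a node where the CLI already works (hostname and keyring name are illustrative):

    ls /etc/ceph/                      # ceph.conf present, but no *.keyring?
    scp working-node:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
    ceph -s                            # should now authenticate via cephx
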
[11:35] * phoenix42 (~phoenix42@122.252.249.67) Quit (Ping timeout: 480 seconds)
[11:35] <cetex> throughput has dropped from 350MB/sec on average to 140MB/sec on average when i've filled the cluster with rados bench to ~60% :>
[11:36] * MACscr (~Adium@2601:d:c800:de3:812a:c8b1:782d:d35e) Quit (Quit: Leaving.)
[11:39] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[11:43] * ChrisNBlum (~ChrisNBlu@178.255.153.117) has joined #ceph
[11:44] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[11:44] * kawa2014 (~kawa@212.110.41.244) Quit (Ping timeout: 480 seconds)
[11:44] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[11:46] <darkfaded> cetex: does it go back up after a xfs defrag? :)
[11:46] <darkfaded> (they're pretty quick or i'd not even suggest)
[11:51] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) has joined #ceph
[12:02] * Linkshot (~sese_@tor-exit1.arbitrary.ch) has joined #ceph
[12:06] <cetex> hm hm.
[12:06] <cetex> no idea.
[12:06] <cetex> will check
[12:08] <cetex> root@s1:~# xfs_db -c frag -r /dev/sda2
[12:08] <cetex> actual 943810, ideal 873930, fragmentation factor 7.40%
[12:09] <cetex> formatted the filesystem 18hours ago
[12:09] <cetex> :>
[12:09] <cetex> and have only started "rados bench -p data 90000 write --no-cleanup" on ceph.
[12:10] <cetex> journal is on raw partition on beginning of drive so that shouldn't cause fragmentation.
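
darkfaded's defrag suggestion can be acted on with xfs_fsr, the online XFS defragmenter; a minimal sketch, with the device and OSD mount point illustrative rather than cetex's actual layout:

    xfs_db -c frag -r /dev/sda2                   # fragmentation factor, as run above
    xfs_fsr -v -t 600 /var/lib/ceph/osd/ceph-0    # defragment that mount for up to 10 minutes
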
[12:21] * thomnico (~thomnico@2a01:e35:8b41:120:c43a:4e84:65cd:44cf) Quit (Remote host closed the connection)
[12:23] * thomnico (~thomnico@2a01:e35:8b41:120:a549:a73c:cbb5:eb2d) has joined #ceph
[12:25] * lx0 is now known as lxo
[12:25] * thomnico (~thomnico@2a01:e35:8b41:120:a549:a73c:cbb5:eb2d) Quit ()
[12:26] * thomnico (~thomnico@2a01:e35:8b41:120:a549:a73c:cbb5:eb2d) has joined #ceph
[12:26] <lxo> I'm experiencing frequent deadlocks during recovery: multiple pairs of osds block on sendmsg to each other. I've increased /proc/sys/net/ipv4/tcp_limit_output_bytes from 128K to 16M to see if it gets any better
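
A sketch of the tunable lxo mentions; 16777216 is the 16M value he describes (128K being the default he started from), and the sysctl.d file name is illustrative:

    sysctl -w net.ipv4.tcp_limit_output_bytes=16777216
    echo "net.ipv4.tcp_limit_output_bytes = 16777216" > /etc/sysctl.d/99-tcp-output.conf   # persist across reboots
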
[12:26] <sugoruyo> hey folks, I was wondering: what do people use to monitor Ceph? what's a good system to use to measure things about Ceph and also what do people measure about Ceph?
[12:28] <Kioob`Taff> I use Zabbix, with trappers for each OSD & MON. But... it's not really easy to deploy.
[12:30] * shyu (~Shanzhi@119.254.196.66) Quit (Remote host closed the connection)
[12:32] * Linkshot (~sese_@98EAAA8QZ.tor-irc.dnsbl.oftc.net) Quit ()
[12:32] * Phase (~Spessu@tor-exit3-readme.dfri.se) has joined #ceph
[12:33] * Phase is now known as Guest2020
[12:33] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[12:39] <qstion> why is trim/unmap not working for rbd devices?
[12:40] <qstion> i have rbd image mapped with rbdmap and exported as iscsi device
[12:40] <qstion> FITRIM ioctl failed: Operation not supported
[12:43] * fghaas (~florian@212095007053.public.telering.at) has joined #ceph
[12:46] <sugoruyo> Kioob`Taff: we use Nagios for just about everything but it's being migrated to Icinga soon
[12:46] <sugoruyo> I'm looking for things that graph numbers over time
[12:46] <sugoruyo> like ganglia, graphite etc
[12:50] * kklimonda (~kklimonda@2001:41d0:a:f2b5::1) Quit (Quit: WeeChat 0.4.2)
[12:50] <qstion> sugoruyo: have you tried riemann, influxdb and grafana?
[12:50] * Hell_Fire (~hellfire@123-243-155-184.static.tpgi.com.au) has joined #ceph
[12:53] <joelm> I'd hold off influxdb (personally) until 0.9 hits
[12:54] <qstion> i have already deployed it to dev environment. performance is not very good, but technology is promising :)
[12:54] <joelm> oh, me too, but for production, watch out.. a few people I know had issues when their datasets grew to a certain size
[12:55] <joelm> 0.9 seems to be the one to go for when it's stable :)
[12:55] <joelm> Having a specific timeseries db though is a real win
[12:55] * OnTheRock (~overonthe@199.68.193.54) has joined #ceph
[12:57] <qstion> if they push really hard on performance and resource usage, because for now it's slower than a slightly tuned mysql and resource usage is a bit high
[12:58] <qstion> but yes, having timeseries for monitoring data is a big win.
[12:58] <joelm> I like the continuous queries and aggregate functions, they're handy.. plus being able to set retentions and scope older data to a lower sampling frequency is really cool
[12:58] * branto (~borix@ip-213-220-214-203.net.upcbroadband.cz) has left #ceph
[12:58] <joelm> keeping that all in the same context too, so can dial in on the same graph
[12:59] <sugoruyo> qstion: none of them, I've heard of riemann but I don't think we're likely to run it here, influxdb and grafana are things I'm about to evaluate, another service run here is already trying influxdb+grafana monitoring
[12:59] <qstion> yep
[12:59] <joelm> I keep meaning to test riemann some more, but need to invest time learning clojure a bit more
[12:59] <sugoruyo> keep in mind I'm looking for a generic solution that could possibly replace our ganglia setup and we're a scientific computing facility so changes can be slow to take place
[13:00] <joelm> don't forget OpenTSDB too
[13:01] <sugoruyo> what about plain-old graphite
[13:01] <sugoruyo> we've currently got Ganglia going for everything (batch farms, other storage services etc)
[13:02] * Guest2020 (~Spessu@5NZAABL4O.tor-irc.dnsbl.oftc.net) Quit ()
[13:02] * Scrin (~Gibri@torsrvn.snydernet.net) has joined #ceph
[13:02] <joelm> Yea, graphite too, just more involved maintaining it I suppose. All depends on your needs I guess
[13:03] <joelm> InfluxDB talks carbon protocol
[13:03] <Be-El> basic ganglia monitoring is quite easy to setup. i'm using some public puppet packages for it
[13:04] <sugoruyo> joelm: basically we need something that can do graphs of our time series data of things ranging from IPMI voltages to Ceph PG states
[13:04] <Be-El> if you need more specific monitoring (read: ceph), you may need to deploy some extra scripts
[13:04] * analbeard (~shw@support.memset.com) Quit (Ping timeout: 480 seconds)
[13:04] * erice (~eric@c-50-134-164-169.hsd1.co.comcast.net) has joined #ceph
[13:04] <sugoruyo> with the ability to make custom graphs and pages in a way that's easier and more maintainable than writing a bunch of Perl
[13:05] <sugoruyo> and also that can handle a lot of data (we have about 200 racks of machines) putting stuff into Ganglia currently and we need a hierarchical approach to it otherwise it can't cope
[13:06] * analbeard (~shw@support.memset.com) has joined #ceph
[13:07] <joelm> well, sure, influx/opentsdb/graphite will do all that - if you use with Kibana you can get custom dashboards etc
[13:07] * branto (~branto@nat-pool-brq-t.redhat.com) has joined #ceph
[13:07] <joelm> how you scale out the backend is another thing to think about I guess
[13:07] <joelm> depending on what your retention needs are, capacity planning, needs for availability, backup strategies etc :)
[13:08] <joelm> and the frequency of checks.. tick data, 10s, 60s etc. how much info granularity do you need
[13:09] <joelm> does it matter that IPMI voltages are checked every 10s as opposed to every 5 mins for example :)
[13:09] <joelm> anyway, probably wrong chan to discuss monitoring :D
[13:10] <sugoruyo> joelm: no that's another thing we need to look at, Ceph IOPS will obviously need to be checked more often than IPMI but Ganglia is fussy about setting that up
[13:10] <sugoruyo> my focus is Ceph monitoring
[13:10] <joelm> yopu probably want to look at diamond then
[13:10] <sugoruyo> but it should be a generic thing, not Calamari for instance
[13:11] <joelm> https://github.com/BrightcoveOS/Diamond
[13:11] <joelm> that's got ceph checks
[13:11] <joelm> there are others, but I've used that
[13:11] <joelm> it's pretty damn good imho
[13:11] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[13:11] <joelm> https://github.com/BrightcoveOS/Diamond/wiki/collectors-CephCollector
[13:12] <joelm> https://github.com/BrightcoveOS/Diamond/wiki/collectors-CephStatsCollector
[13:12] <joelm> or collectd etc ;)
[13:12] * kefu (~kefu@114.92.111.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[13:14] <jcsp1> the latest greatest ceph collector is still up a in a branch here: https://github.com/ceph/diamond/tree/calamari
[13:14] <jcsp1> but if the one that's upstream in diamond is good enough for your needs then that might be more convenient for packages etc
[13:16] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[13:19] * fmanana (~fdmanana@bl13-131-19.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[13:19] <sugoruyo> joelm, jcsp1 : ok thx for your suggestions I'll look into them
[13:20] <joelm> n/p
[13:20] * sugoruyo (~georgev@paarthurnax.esc.rl.ac.uk) Quit (Remote host closed the connection)
[13:20] * sugoruyo (~georgev@paarthurnax.esc.rl.ac.uk) has joined #ceph
[13:22] * zhaochao (~zhaochao@111.161.77.236) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 31.6.0/20150331233809])
[13:23] * rdas (~rdas@110.227.44.189) Quit (Quit: Leaving)
[13:23] * shang_ (~ShangWu@175.41.48.77) Quit (Remote host closed the connection)
[13:24] * Hell_Fire (~hellfire@123-243-155-184.static.tpgi.com.au) Quit (Quit: Konversation terminated!)
[13:25] * fmanana (~fdmanana@bl13-131-19.dsl.telepac.pt) has joined #ceph
[13:27] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[13:29] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[13:32] * thomnico (~thomnico@2a01:e35:8b41:120:a549:a73c:cbb5:eb2d) Quit (Quit: Ex-Chat)
[13:32] * thomnico (~thomnico@2a01:e35:8b41:120:a549:a73c:cbb5:eb2d) has joined #ceph
[13:32] <qstion> sugoruyo: you should post how it goes in some blog. It would be interesting if you get influxdb and riemann to work for 200 racks ;)
[13:32] * Scrin (~Gibri@5NZAABL5X.tor-irc.dnsbl.oftc.net) Quit ()
[13:32] * verbalins (~JamesHarr@2WVAABJ62.tor-irc.dnsbl.oftc.net) has joined #ceph
[13:34] <sugoruyo> qstion: I'm not sure I'll even be trying that route, monitoring isn't my thing, I work on the Ceph project, it's just become clear that the current Ganglia set up is not functional enough for our needs with Ceph so we're evaluating alternatives. If we do end up switching it'll take a long time.
[13:34] <sugoruyo> and not all 200 racks are Ceph of course
[13:34] <sugoruyo> I'll be recording my experiences though, I might put something up on the net somewhere once I have a writeup of our Ceph solution
[13:38] * fmanana (~fdmanana@bl13-131-19.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[13:39] * shylesh (~shylesh@121.244.87.124) Quit (Remote host closed the connection)
[13:40] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Ping timeout: 480 seconds)
[13:41] * karnan (~karnan@121.244.87.117) Quit (Ping timeout: 480 seconds)
[13:45] * Anticimex (anticimex@185.19.66.194) Quit (Ping timeout: 480 seconds)
[13:45] * fghaas (~florian@212095007053.public.telering.at) Quit (Ping timeout: 480 seconds)
[13:46] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[13:48] * morse_ (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[13:53] * kefu (~kefu@114.92.111.70) has joined #ceph
[13:58] * georgem (~Adium@184.151.178.70) has joined #ceph
[13:58] * georgem (~Adium@184.151.178.70) Quit (Read error: Connection reset by peer)
[13:59] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[14:00] * bitserker (~toni@213.229.187.110) Quit (Ping timeout: 480 seconds)
[14:02] * verbalins (~JamesHarr@2WVAABJ62.tor-irc.dnsbl.oftc.net) Quit ()
[14:03] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[14:05] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[14:07] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[14:07] * ganders (~root@190.2.42.21) has joined #ceph
[14:10] * sjm (~sjm@pool-173-70-76-86.nwrknj.fios.verizon.net) has joined #ceph
[14:11] * vbellur (~vijay@122.167.65.220) has joined #ceph
[14:22] * sankarshan (~sankarsha@121.244.87.117) Quit (Quit: Are you sure you want to quit this channel (Cancel/Ok) ?)
[14:24] * kanagaraj (~kanagaraj@27.7.33.227) has joined #ceph
[14:25] * fmanana (~fdmanana@bl13-131-19.dsl.telepac.pt) has joined #ceph
[14:30] * analbeard (~shw@support.memset.com) Quit (Remote host closed the connection)
[14:31] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[14:32] * Xylios (~Solvius@tor-proxy-readme.cloudexit.eu) has joined #ceph
[14:34] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[14:34] * aszeszo (~aszeszo@geq177.internetdsl.tpnet.pl) Quit (Read error: No route to host)
[14:34] * aszeszo (~aszeszo@geq177.internetdsl.tpnet.pl) has joined #ceph
[14:38] * shaunm (~shaunm@74.215.76.114) has joined #ceph
[14:40] * analbeard (~shw@support.memset.com) has joined #ceph
[14:41] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[14:43] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[14:45] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[14:50] * dyasny (~dyasny@173.231.115.58) has joined #ceph
[14:51] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 482 seconds)
[14:52] * capri (~capri@212.218.127.222) has joined #ceph
[14:55] * capri_on (~capri@212.218.127.222) Quit (Ping timeout: 480 seconds)
[14:55] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[14:57] * bitserker (~toni@213.229.187.110) has joined #ceph
[15:00] <frickler> if I install only ceph-common from debian-hammer/precise, executing the ceph cli fails because ceph_argparse.py is missing
[15:01] * bene (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[15:02] * Xylios (~Solvius@2WVAABJ9J.tor-irc.dnsbl.oftc.net) Quit ()
[15:02] * rikai2 (~nartholli@98EAAA8V8.tor-irc.dnsbl.oftc.net) has joined #ceph
[15:03] * rwheeler (~rwheeler@pool-173-48-214-9.bstnma.fios.verizon.net) has joined #ceph
[15:04] <alfredodeza> frickler: does it fail for debian-hammer/precise as well?
[15:04] <alfredodeza> or is it just with hammer?
[15:05] <frickler> you mean debian-giant? I know it worked with debian-firefly
[15:05] <alfredodeza> sorry heh yes
[15:06] <alfredodeza> can you try with giant?
[15:08] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[15:09] * yanzheng (~zhyan@182.139.204.64) Quit (Quit: This computer has gone to sleep)
[15:10] <frickler> yes, giant is working
[15:10] <frickler> there ceph_argparse.py is in python-ceph
[15:10] <frickler> for hammer it seems to have moved to ceph
[15:10] * yanzheng (~zhyan@182.139.204.64) has joined #ceph
[15:10] * Hemanth (~Hemanth@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:12] * kefu (~kefu@114.92.111.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[15:14] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) Quit (Quit: leaving)
[15:14] <alfredodeza> frickler: do you have the output of the error (nicer if it shows all the way back from installing ceph-common)
[15:14] * kefu (~kefu@114.92.111.70) has joined #ceph
[15:14] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) has joined #ceph
[15:15] <qstion> is there any way to trim/unmap (discard) rbd image on 3.10 and 2.6.32 kernels?
[15:16] <frickler> alfredodeza: I'll put it together for you
[15:16] <alfredodeza> thanks
[15:17] <alfredodeza> frickler: you should've not been allowed to use the ceph CLI (I think). The Debian control files for the Ceph package show this:
[15:17] <alfredodeza> Replaces: ceph-common (<< 0.78-500), python-ceph (<< 0.92-1223)
[15:17] <alfredodeza> and also this
[15:17] <alfredodeza> Breaks: python-ceph (<< 0.92-1223)
[15:18] <alfredodeza> so what you really want is to install ceph, not ceph-common
[15:18] <alfredodeza> but then again, you should not find out by trying to use the Ceph CLI and have it break there
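
A few standard dpkg/apt checks that reproduce what frickler and alfredodeza are describing; a sketch to run on the affected client, since package contents differ between firefly, giant and hammer:

    dpkg -L ceph-common | grep argparse || echo "ceph_argparse.py not shipped by ceph-common"
    dpkg -S ceph_argparse.py                              # which installed package owns it, if any
    apt-cache show ceph | grep -E '^(Replaces|Breaks)'    # the control fields quoted above
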
[15:20] <alfredodeza> irc translations?
[15:20] <alfredodeza> welp wrong window
[15:21] * haomaiwang (~haomaiwan@61.185.255.226) has joined #ceph
[15:25] * linuxkidd (~linuxkidd@vpngac.ccur.com) has joined #ceph
[15:26] * tupper_ (~tcole@rtp-isp-nat-pool1-1.cisco.com) has joined #ceph
[15:26] * MVenesio (~MVenesio@186.136.59.165) has joined #ceph
[15:30] * DV__ (~veillard@2001:41d0:1:d478::1) has joined #ceph
[15:30] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[15:30] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has left #ceph
[15:30] <frickler> alfredodeza: http://paste.openstack.org/show/203865/
[15:31] <frickler> not a good solution to have to install all the python things for a client node
[15:32] <frickler> but if that is what is intended, /usr/bin/ceph should move from ceph-common to ceph, too
[15:32] * rikai2 (~nartholli@98EAAA8V8.tor-irc.dnsbl.oftc.net) Quit ()
[15:32] * Bj_o_rn (~Doodlepie@ns323918.ip-94-23-30.eu) has joined #ceph
[15:32] <MVenesio> Hi guys, someone knows the best way to use Ceph and NFS ?
[15:33] <alfredodeza> frickler: this is a bug for sure
[15:33] <stalob> i'm still trying to find the answer
[15:33] <qstion> MVenesio: was there. don't do it.
[15:33] * fxmulder_ (~fxmulder@cpe-24-55-6-128.austin.res.rr.com) has joined #ceph
[15:33] <qstion> performance is tragic
[15:33] <MVenesio> mm ok
[15:34] <stalob> so on the #ceph channel you recommend to not use #ceph? ^'
[15:34] <stalob> *ceph
[15:34] <qstion> not use ceph with nfs ;)
[15:34] <stalob> ok
[15:34] <MVenesio> qstion: so any plans to make it work in the future ?
[15:34] <stalob> actually i'm still trying to use ceph at all
[15:35] <ktdreyer> hi frickler , alfredodeza pointed me at your paste there
[15:35] <ktdreyer> thanks for providing that
[15:35] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[15:36] * karnan (~karnan@106.51.234.17) has joined #ceph
[15:36] <ktdreyer> frickler: what version of the "ceph" deb do you have installed?
[15:37] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[15:37] * fxmulder (~fxmulder@cpe-24-55-6-128.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[15:37] <ktdreyer> frickler: dpkg -l ceph
[15:38] * harold (~hamiller@71-94-227-43.dhcp.mdfd.or.charter.com) has joined #ceph
[15:40] <qstion> MVenesio stalob: http://goo.gl/S2v0uP
[15:40] <qstion> a bit of benchmarks I did 2 months ago
[15:40] <georgem> I'm trying to create larger journals, but the newly created OSD gets a new ID, any idea how I can keep the old number? I have the journal on the OSD drive as a raw partition
[15:41] <joelm> qstion: did you tyr Ganesha?
[15:41] <frickler> ktdreyer: http://tracker.ceph.com/issues/11388, version is 0.94.1-1precise
[15:42] <frickler> ktdreyer: if I install ceph, things work fine, but the idea is to only install ceph-common for client nodes
[15:42] * harold (~hamiller@71-94-227-43.dhcp.mdfd.or.charter.com) Quit ()
[15:42] <qstion> georgem: i don't know if it's recommended to do it, but you can change the osd ID here: /var/lib/ceph/osd/ceph-*/fsid
[15:42] <joelm> qstion: I'm about to setup Ganesha talking to Ceph.. will see how that goes
[15:44] <qstion> joelm: you can try that, but nfs is nfs and ganesha is no cure. nfs locking is introducing a lot of latency and you get very bad performance.
[15:44] <ktdreyer> frickler: thanks for filing that, I've added myself as a watcher
[15:45] <joelm> qstion: I'm not bothered about locking, this is traditional NFS usecase, just plain old NFS (which we use well, heavily.)
[15:46] <joelm> I'd be interested to see how it works in relation to a zfs-exported one
[15:46] <ktdreyer> frickler: I'll continue the discussion in that ticket, if you don't mind
[15:47] <qstion> joelm: http://goo.gl/S2v0uP
[15:48] <joelm> that's apple and oranges comparison
[15:48] <joelm> RBD is already network based
[15:48] <joelm> also, ganesha leverages libcephfs directly afair
[15:48] <frickler> ktdreyer: sure
[15:49] <joelm> I'm not expecting any kind of shiny performance, but interested to test
[15:49] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit (Quit: Leaving.)
[15:50] * oro_ (~oro@207-47-24-10.static-ip.telepacific.net) has joined #ceph
[15:50] <joelm> rbd over nfs too
[15:50] <joelm> that's erm...
[15:50] * yanzheng (~zhyan@182.139.204.64) Quit (Quit: This computer has gone to sleep)
[15:50] <joelm> not nfs over rbd?
[15:50] * DV__ (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[15:50] <joelm> if it *is* rbd over nfs then, whaaaa? :)
[15:50] <qstion> as I said, performance will be horrible compared to other solutions, but if single distributed filesystem is needed then go for it
[15:50] <joelm> why would you even do that :)
[15:51] <joelm> yea, interested to see cephfs vs ganesha nfs basically
[15:51] <joelm> not rbd mapped vols
[15:51] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) has joined #ceph
[15:51] <joelm> even nfs exported rbd vols?!?
[15:51] * yanzheng (~zhyan@182.139.204.64) has joined #ceph
[15:51] <qstion> export rbd as /dev/rbd* (rbdmap) and pass that as nfs share
[15:51] <joelm> sure that would be nfs over rbd in my opinion, but sure :)
[15:51] <qstion> so yes, it's the other way around ;)
[15:51] <joelm> :)
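
For reference, a minimal sketch of the "map an RBD image and re-export it over NFS" setup qstion describes; the pool, image, mount point and export network are all illustrative, and as discussed above the performance of this arrangement tends to be poor:

    # /etc/ceph/rbdmap: one image per line, mapped at boot by the rbdmap service
    echo "rbd/nfsimage id=admin,keyring=/etc/ceph/ceph.client.admin.keyring" >> /etc/ceph/rbdmap
    service rbdmap start                        # maps the image as /dev/rbd/rbd/nfsimage
    mkfs.xfs /dev/rbd/rbd/nfsimage              # first use only
    mkdir -p /srv/nfsimage && mount /dev/rbd/rbd/nfsimage /srv/nfsimage
    echo "/srv/nfsimage 10.0.0.0/24(rw,no_root_squash)" >> /etc/exports
    exportfs -ra                                # publish the NFS export
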
[15:52] * RayTracer (~RayTracer@153.19.7.39) has joined #ceph
[15:53] * capri_on (~capri@212.218.127.222) has joined #ceph
[15:55] * capri (~capri@212.218.127.222) Quit (Ping timeout: 480 seconds)
[15:56] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[15:57] * yanzheng (~zhyan@182.139.204.64) Quit (Quit: This computer has gone to sleep)
[15:59] <georgem> qstion: I mean I have OSD.1 that was using /dev/sdb2 as data partition and /dev/sdb1 as journal; the size of the journal was only 100 MB, so I stopped the OSD.1, zapped /dev/sdb and then "ceph-disk prepare /dev/sdb" which created the two partitions with the journal now the right size (10 GB) but it now uses osd.36 instead of osd.1
[16:00] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[16:00] <qstion> georgem: i don't know how to solve this one. maybe crushmap + directory renaming?
[16:02] * Bj_o_rn (~Doodlepie@5NZAABMDK.tor-irc.dnsbl.oftc.net) Quit ()
[16:02] * SweetGirl (~AluAlu@luna115.startdedicated.net) has joined #ceph
[16:05] <Be-El> georgem: you forgot to remove the osd from ceph before recreating it
[16:06] * Rickus_ (~Rickus@office.protected.ca) has joined #ceph
[16:06] * fxmulder (~fxmulder@cpe-24-55-6-128.austin.res.rr.com) has joined #ceph
[16:07] * ChrisNBl_ (~ChrisNBlu@178.255.153.117) has joined #ceph
[16:07] * georgem1 (~Adium@fwnat.oicr.on.ca) has joined #ceph
[16:08] * puffy1 (~puffy@50.185.218.255) has joined #ceph
[16:08] <joelm> ganesha actually really easy to setup
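
A minimal NFS-Ganesha export stanza for the Ceph FSAL, roughly the shape of what joelm is setting up; this is a sketch based on the standard ganesha.conf layout, not joelm's actual configuration, and the export id, paths and service name are illustrative:

    cat > /etc/ganesha/ganesha.conf <<'EOF'
    EXPORT {
        Export_Id = 1;
        Path = "/";                 # CephFS path to export
        Pseudo = "/cephfs";         # where clients see it in the pseudo-FS
        Access_Type = RW;
        Squash = No_Root_Squash;
        FSAL {
            Name = CEPH;            # use libcephfs directly, no kernel mount
        }
    }
    EOF
    service nfs-ganesha restart     # service name varies by distro
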
[16:08] * kefu_ (~kefu@114.92.111.70) has joined #ceph
[16:09] * puffy (~puffy@50.185.218.255) Quit (Read error: Connection reset by peer)
[16:09] * ChrisNBlum (~ChrisNBlu@178.255.153.117) Quit (Read error: Connection reset by peer)
[16:09] * alfredodeza_ (~alfredode@198.206.133.89) has joined #ceph
[16:09] * alfredodeza (~alfredode@198.206.133.89) Quit (Ping timeout: 480 seconds)
[16:09] * alfredodeza_ is now known as alfredodeza
[16:11] * Kioob`Taff (~plug-oliv@2a01:e35:2e8a:1e0::42:10) Quit (Quit: Leaving.)
[16:11] * Kioob`Taff (~plug-oliv@2a01:e35:2e8a:1e0::42:10) has joined #ceph
[16:12] * fxmulder_ (~fxmulder@cpe-24-55-6-128.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[16:13] * kefu (~kefu@114.92.111.70) Quit (Ping timeout: 480 seconds)
[16:13] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Ping timeout: 480 seconds)
[16:13] * Rickus (~Rickus@office.protected.ca) Quit (Ping timeout: 480 seconds)
[16:14] * gregmark (~Adium@68.87.42.115) has joined #ceph
[16:14] * phoenix42 (~phoenix42@122.252.249.67) has joined #ceph
[16:14] * Anticimex (anticimex@185.19.66.194) has joined #ceph
[16:17] * bitserker (~toni@213.229.187.110) Quit (Ping timeout: 480 seconds)
[16:18] * bitserker (~toni@213.229.187.110) has joined #ceph
[16:18] * fireD_ (~fireD@93-139-207-158.adsl.net.t-com.hr) has joined #ceph
[16:24] * PerlStalker (~PerlStalk@162.220.127.20) has joined #ceph
[16:25] * fireD (~fireD@93-139-197-152.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[16:30] * diegows (~diegows@190.190.5.238) has joined #ceph
[16:31] * lkoranda (~lkoranda@nat-pool-brq-t.redhat.com) Quit (Quit: Splunk> Be an IT superhero. Go home early.)
[16:31] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[16:32] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[16:32] * SweetGirl (~AluAlu@5NZAABMEX.tor-irc.dnsbl.oftc.net) Quit ()
[16:32] * Zyn (~Borf@176.10.99.209) has joined #ceph
[16:32] * lkoranda (~lkoranda@213.175.37.10) has joined #ceph
[16:35] * hellertime (~Adium@72.246.185.14) has joined #ceph
[16:36] * lkoranda (~lkoranda@213.175.37.10) Quit ()
[16:37] * hellertime (~Adium@72.246.185.14) Quit ()
[16:38] * bkopilov (~bkopilov@bzq-109-66-134-152.red.bezeqint.net) has joined #ceph
[16:38] * lkoranda (~lkoranda@nat-pool-brq-t.redhat.com) has joined #ceph
[16:41] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[16:42] * hellertime (~Adium@72.246.185.14) has joined #ceph
[16:43] * hellertime (~Adium@72.246.185.14) Quit ()
[16:43] * kawa2014 (~kawa@212.110.41.244) has joined #ceph
[16:49] * aszeszo (~aszeszo@geq177.internetdsl.tpnet.pl) Quit (Quit: Leaving.)
[16:52] <georgem1> qstion: thanks, I'll take a look at osdmap
[16:52] * oro_ (~oro@207-47-24-10.static-ip.telepacific.net) Quit (Ping timeout: 480 seconds)
[16:55] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[16:55] * aszeszo (~aszeszo@public-gprs515189.centertel.pl) has joined #ceph
[16:56] * getup (~getup@gw.office.cyso.net) has joined #ceph
[16:56] <getup> hi, when i run a regionmap update i get cannot update region map, master_region conflict, what am i missing here?
[16:57] * lkoranda (~lkoranda@nat-pool-brq-t.redhat.com) Quit (Quit: Splunk> Be an IT superhero. Go home early.)
[16:58] * lkoranda (~lkoranda@213.175.37.10) has joined #ceph
[17:02] * lkoranda (~lkoranda@213.175.37.10) Quit ()
[17:02] * karnan (~karnan@106.51.234.17) Quit (Ping timeout: 481 seconds)
[17:02] * bene2 (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[17:02] * Zyn (~Borf@2WVAABKG9.tor-irc.dnsbl.oftc.net) Quit ()
[17:02] * lkoranda (~lkoranda@213.175.37.10) has joined #ceph
[17:02] * Bj_o_rn (~Wijk@sipb-tor.mit.edu) has joined #ceph
[17:02] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:03] * fghaas (~florian@185.15.236.4) has joined #ceph
[17:03] * anon9110w (~oftc-webi@cpe-72-183-101-74.austin.res.rr.com) Quit (Remote host closed the connection)
[17:05] * sbfox (~Adium@72.2.49.50) has joined #ceph
[17:06] * lkoranda (~lkoranda@213.175.37.10) Quit ()
[17:07] * lkoranda (~lkoranda@nat-pool-brq-t.redhat.com) has joined #ceph
[17:08] * bene (~ben@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[17:08] * elder (~elder@50.250.13.174) Quit (Ping timeout: 480 seconds)
[17:09] * smithfarm (~ncutler@nat1.scz.suse.com) Quit (Remote host closed the connection)
[17:09] * joef (~Adium@2601:9:280:f2e:4f2:daeb:7d02:5029) has joined #ceph
[17:09] * bitserker1 (~toni@213.229.187.110) has joined #ceph
[17:10] <pmatulis> everything that gets put onto a ceph cluster gets spread across many PGs right?
[17:10] <pmatulis> so why when i read up on RBD it is specifically stated "striped over multiple PGs for performance" as if this is a special RBD feature?
[17:10] * karnan (~karnan@106.51.133.93) has joined #ceph
[17:10] * bitserker (~toni@213.229.187.110) Quit (Read error: Connection reset by peer)
[17:11] * sep (~sep@40.211.jostedal.no) Quit (Ping timeout: 480 seconds)
[17:12] <sugoruyo> pmatulis: things get split into chunks based on a maximum object size and get spread across PGs for a uniform distribution
[17:12] * lkoranda (~lkoranda@nat-pool-brq-t.redhat.com) Quit ()
[17:12] <sugoruyo> what you're talking about is striping which can use multiple objects at once for IO
[17:12] * lkoranda (~lkoranda@213.175.37.10) has joined #ceph
[17:12] <pmatulis> sugoruyo: oh right. i wonder how that is achieved then
[17:12] * joef (~Adium@2601:9:280:f2e:4f2:daeb:7d02:5029) has left #ceph
[17:13] <sugoruyo> so you can tell it to write to N objects at the same time
[17:13] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) Quit (Quit: Leaving)
[17:14] * kanagaraj (~kanagaraj@27.7.33.227) Quit (Ping timeout: 480 seconds)
[17:14] <sugoruyo> pmatulis: http://ceph.com/docs/master/architecture/?highlight=striping#data-striping
[17:15] <sugoruyo> best way I've seen it explained is the diagrams on that page
[17:15] <pmatulis> sugoruyo: nice. thanks a lot
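
The striping knobs sugoruyo refers to are exposed when creating a format-2 RBD image; a minimal sketch (pool/image names and numbers are illustrative; by default the stripe unit equals the object size and the stripe count is 1, i.e. the plain one-object-at-a-time layout):

    # --stripe-unit: bytes written to one object before moving to the next
    # --stripe-count: number of objects a stripe is spread across
    rbd create mypool/striped-img --size 10240 --image-format 2 \
        --stripe-unit 65536 --stripe-count 16
    rbd info mypool/striped-img       # shows the resulting stripe unit/count
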
[17:21] * lkoranda (~lkoranda@213.175.37.10) Quit (Quit: Splunk> Be an IT superhero. Go home early.)
[17:23] * lkoranda (~lkoranda@213.175.37.10) has joined #ceph
[17:23] * kanagaraj (~kanagaraj@27.7.33.227) has joined #ceph
[17:26] * elder (~elder@50.153.131.133) has joined #ceph
[17:29] * lkoranda (~lkoranda@213.175.37.10) Quit (Remote host closed the connection)
[17:31] * lkoranda (~lkoranda@nat-pool-brq-t.redhat.com) has joined #ceph
[17:32] * joshd (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) has joined #ceph
[17:32] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[17:32] * Bj_o_rn (~Wijk@2WVAABKIZ.tor-irc.dnsbl.oftc.net) Quit ()
[17:32] * biGGer (~toast@exit1.ipredator.se) has joined #ceph
[17:32] * kanagaraj (~kanagaraj@27.7.33.227) Quit (Quit: Leaving)
[17:32] * kanagaraj (~kanagaraj@27.7.33.227) has joined #ceph
[17:38] * sep (~sep@2a04:2740:1:0:52e5:49ff:feeb:32) has joined #ceph
[17:38] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[17:42] * lkoranda (~lkoranda@nat-pool-brq-t.redhat.com) Quit (Quit: Splunk> Be an IT superhero. Go home early.)
[17:43] * lkoranda (~lkoranda@nat-pool-brq-t.redhat.com) has joined #ceph
[17:43] * fghaas (~florian@185.15.236.4) Quit (Ping timeout: 480 seconds)
[17:44] * mjeanson_ (~mjeanson@bell.multivax.ca) has joined #ceph
[17:44] * fghaas (~florian@185.15.236.4) has joined #ceph
[17:44] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (Ping timeout: 480 seconds)
[17:44] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[17:44] * aszeszo (~aszeszo@public-gprs515189.centertel.pl) Quit (Read error: Connection reset by peer)
[17:48] * aszeszo (~aszeszo@adsa112.neoplus.adsl.tpnet.pl) has joined #ceph
[17:49] * bandrus (~brian@128.sub-70-211-79.myvzw.com) has joined #ceph
[17:51] * fghaas (~florian@185.15.236.4) Quit (Quit: Leaving.)
[17:51] * p66kumar (~p66kumar@74.119.205.248) has joined #ceph
[17:53] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[17:56] * bene2 (~ben@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[17:57] * bitserker (~toni@213.229.187.110) has joined #ceph
[17:57] * bitserker1 (~toni@213.229.187.110) Quit (Read error: Connection reset by peer)
[17:58] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[17:59] * nils_ (~nils@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[18:00] * RayTracer (~RayTracer@153.19.7.39) Quit (Remote host closed the connection)
[18:01] * hellertime (~Adium@72.246.185.14) has joined #ceph
[18:02] * hellertime (~Adium@72.246.185.14) Quit ()
[18:02] * biGGer (~toast@98EAAA82E.tor-irc.dnsbl.oftc.net) Quit ()
[18:02] * yuastnav (~luckz@98EAAA83F.tor-irc.dnsbl.oftc.net) has joined #ceph
[18:05] * ifur (~osm@0001f63e.user.oftc.net) Quit (Quit: leaving)
[18:06] * ifur (~osm@0001f63e.user.oftc.net) has joined #ceph
[18:07] * bene2 (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[18:12] * Concubidated (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[18:12] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[18:15] * bene2 (~ben@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[18:16] * branto (~branto@nat-pool-brq-t.redhat.com) has left #ceph
[18:17] * getup (~getup@gw.office.cyso.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz???)
[18:21] * RayTracer (~RayTracer@public-gprs517526.centertel.pl) has joined #ceph
[18:22] * mgolub (~Mikolaj@91.225.202.153) has joined #ceph
[18:25] <flaf> georgem1: you should stop the osd daemon, flush the journal, resize the journal and restart the osd. No need to remove the osd etc.
[18:27] <georgem1> flaf: how do I resize the journal? it's the second partition on the OSD drive..
[18:29] * hellothere (~oftc-webi@12.234.128.141) has joined #ceph
[18:29] <flaf> Ah, that's a different problem. I didn't think that part was an issue for you.
[18:29] * wschulze (~wschulze@38.96.12.2) has joined #ceph
[18:30] <georgem1> flaf: I actually found a way that works, but I cannot remember how I obtained the osd-uuid of the old OSD
[18:31] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:31] <flaf> Sorry, I don't know. If you just remove an OSD and create a new one, you don't need to care about that.
[18:32] * jaycedars (~jaycedars@104-189-175-58.lightspeed.austtx.sbcglobal.net) has joined #ceph
[18:32] * yuastnav (~luckz@98EAAA83F.tor-irc.dnsbl.oftc.net) Quit ()
[18:32] * mps (~zc00gii@drew010-relay01.drew-phillips.com) has joined #ceph
[18:32] <georgem1> flaf: what worked for me with the first disk was: stop osd id=1, ceph-disk zap /dev/sdb, ceph-disk prepare --osd-uuid UUID /dev/sdb, ceph auth del osd.1, ceph-disk activate --activate-key /var/lib/ceph/bootstrap-osd/ceph.keyring /dev/sdb1
[18:33] * ChrisNBl_ (~ChrisNBlu@178.255.153.117) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[18:33] <georgem1> flaf: I think I need to keep the old osd-uuid because I want to have the same OSD ID (e.g. 1) and not create a new OSD
[18:33] * kawa2014 (~kawa@212.110.41.244) Quit (Quit: Leaving)
[18:34] <flaf> Are you sure? You resized the journal partition and the working dir of an OSD without stopping and removing that OSD?
[18:35] * hellertime (~Adium@72.246.185.14) has joined #ceph
[18:36] * hellertime (~Adium@72.246.185.14) Quit ()
[18:36] <flaf> georgem1: I don't use ceph-disk, but the "zap" subcommand removes all the data on the OSD.
[18:36] * hellothere (~oftc-webi@12.234.128.141) Quit (Quit: Page closed)
[18:36] * hellertime (~Adium@72.246.185.14) has joined #ceph
[18:37] * davidz (~davidz@cpe-23-242-189-171.socal.res.rr.com) Quit (Quit: Leaving.)
[18:37] <georgem1> flaf: the first step was: stop osd id=1
[18:37] <flaf> So, to my mind, the simplest solution is to remove the OSD from the cluster and recreate a completely new OSD.
[18:39] * hellertime (~Adium@72.246.185.14) Quit ()
[18:39] <georgem1> flaf: yes, that's one good way but then I have a lot of directories sitting around and live OSDs that start counting from a high number... I'm trying to keep everything as is
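A rough sketch of the flush-and-resize sequence flaf describes, assuming a colocated journal for osd.1 under the default data path; the device names, init commands and the repartitioning step are placeholders (growing the second partition on the same data disk is the awkward part):

    stop ceph-osd id=1                    # or however your init system stops osd.1
    ceph-osd -i 1 --flush-journal         # drain the journal safely to the data store
    # ... repartition / recreate the journal device here ...
    ceph-osd -i 1 --mkjournal             # write a fresh journal on the new partition
    start ceph-osd id=1

    # the osd uuid georgem1 mentions can be read from the OSD's data dir
    # before the disk is zapped (default data path assumed):
    cat /var/lib/ceph/osd/ceph-1/fsid
    # or from the uuid column of the cluster's OSD map:
    ceph osd dump | grep '^osd.1 '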
[18:40] * fghaas (~florian@185.15.236.4) has joined #ceph
[18:41] * reed (~reed@d649-3653-4115-3f17-0386-356b-4420-2062.6rd.ip6.sonic.net) has joined #ceph
[18:42] <flaf> @all question about radosgw and Hammer: with Firefly, to install a radosgw (on Trusty) you had to install specific "apache2" and "libapache2-mod-fastcgi" packages from a specific ceph repo. Now, with Hammer, I no longer see this step in the online docs. Has it become unnecessary with Hammer? Can I use the packages from the distro directly?
[18:42] <cephalobot> flaf: Error: "all" is not a valid command.
[18:43] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[18:44] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Quit: Exeunt dneary)
[18:45] * RayTracer (~RayTracer@public-gprs517526.centertel.pl) Quit (Remote host closed the connection)
[18:45] * jaycedars (~jaycedars@104-189-175-58.lightspeed.austtx.sbcglobal.net) Quit (Quit: Textual IRC Client: www.textualapp.com)
[18:46] * reed (~reed@d649-3653-4115-3f17-0386-356b-4420-2062.6rd.ip6.sonic.net) Quit (Quit: Ex-Chat)
[18:47] <B_Rake> flaf: IIRC there was a previous discussion about moving to civetweb instead of the apache + fcgi approach
[18:50] <devicenull> so, based on the default osd capabilities ' mon 'allow profile osd' osd 'allow *' '.. any OSD can write to any object?
[18:50] <devicenull> just making sure I understand this properly
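For reference, the default capabilities devicenull is quoting can be inspected and created with ceph's auth commands; osd.0 below is just an example id. With 'osd allow *' a daemon's key is indeed authorized for read/write on any object in any pool:

    # show the existing key and caps for one OSD
    ceph auth get osd.0

    # a new OSD key is typically created with the same capabilities
    ceph auth get-or-create osd.0 \
        mon 'allow profile osd' \
        osd 'allow *'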
[18:51] * reed (~reed@198.23.103.89-static.reverse.softlayer.com) has joined #ceph
[18:52] <B_Rake> flaf: this is the thread I was recalling https://www.mail-archive.com/ceph-users@lists.ceph.com/msg17306.html
[18:58] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[19:02] * mps (~zc00gii@2WVAABKND.tor-irc.dnsbl.oftc.net) Quit ()
[19:02] * kalleeen (~hifi@98EAAA85O.tor-irc.dnsbl.oftc.net) has joined #ceph
[19:02] * joshd (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) Quit (Quit: Leaving.)
[19:03] * kefu_ (~kefu@114.92.111.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[19:04] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) has joined #ceph
[19:07] * xarses (~andreww@12.164.168.117) has joined #ceph
[19:10] * kanagaraj (~kanagaraj@27.7.33.227) Quit (Ping timeout: 480 seconds)
[19:13] * segutier_ (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) has joined #ceph
[19:15] <skullone> does ceph have any mechanisms to make a "write once, read many" type store?
[19:15] * Concubidated (~Adium@206.169.83.146) has joined #ceph
[19:16] <skullone> ive thought about implementing this on an HTTP reverse proxy, by filtering out destructive HTTP verbs
[19:16] <skullone> not sure if anyone else has come across this, though
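A sketch of the reverse-proxy idea skullone describes: an nginx vhost in front of radosgw that rejects destructive verbs so clients behind it can only read. The upstream address, port and file path are assumptions, not from the log:

    cat > /etc/nginx/conf.d/rgw-read-only.conf <<'EOF'
    server {
        listen 80;

        location / {
            # allow only GET (nginx treats HEAD as allowed when GET is);
            # PUT/POST/DELETE and friends get 403
            limit_except GET {
                deny all;
            }
            proxy_pass http://127.0.0.1:7480;   # radosgw backend (assumed)
        }
    }
    EOF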
[19:18] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[19:18] * segutier_ is now known as segutier
[19:19] * cookednoodles (~eoin@89-93-153-201.hfc.dyn.abo.bbox.fr) has joined #ceph
[19:19] * kanagaraj (~kanagaraj@115.242.181.183) has joined #ceph
[19:19] * kanagaraj (~kanagaraj@115.242.181.183) Quit ()
[19:20] * bene (~ben@c-75-68-96-186.hsd1.nh.comcast.net) has joined #ceph
[19:22] <sugoruyo> skullone: I need to go now so I'll just throw this out there: have you seen the cache tiering?
[19:23] <sugoruyo> or were you looking for something that prohibits writes on top of already existing things?
[19:23] <sugoruyo> if the latter I'm not aware of a way to do it
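For reference, the cache tiering sugoruyo mentions is wired up roughly like this; the pool names "cold-pool" and "hot-cache" are examples, and the cache pool would normally also get target_max_bytes/objects limits:

    ceph osd tier add cold-pool hot-cache
    ceph osd tier cache-mode hot-cache writeback
    ceph osd tier set-overlay cold-pool hot-cache
    ceph osd pool set hot-cache hit_set_type bloom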
[19:24] * sugoruyo (~georgev@paarthurnax.esc.rl.ac.uk) Quit (Remote host closed the connection)
[19:24] * segutier_ (~segutier@172.56.16.168) has joined #ceph
[19:26] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[19:26] * segutier_ is now known as segutier
[19:27] * fghaas (~florian@185.15.236.4) Quit (Quit: Leaving.)
[19:32] * kalleeen (~hifi@98EAAA85O.tor-irc.dnsbl.oftc.net) Quit ()
[19:35] <flaf> B_Rake: but in the online doc, it's the "fastcgi+apache" way that is documented.
[19:36] <flaf> B_Rake: can I use civetweb in production?
[19:40] <B_Rake> I'm not certain how ready it is; we are still testing. However, some of the other people on that thread indicated that they prefer civetweb. The documentation was still quite lacking
[19:40] <B_Rake> https://www.mail-archive.com/ceph-users@lists.ceph.com/msg17300.html
[19:41] <flaf> Ok, B_Rake, thanks for your help. ;)
[19:41] * lalatenduM (~lalatendu@122.172.75.155) has joined #ceph
[19:42] <B_Rake> np
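A minimal sketch of the civetweb alternative being discussed, i.e. letting radosgw serve HTTP itself instead of sitting behind apache+fastcgi; the instance name, host and port are assumptions, not something confirmed in the thread:

    cat >> /etc/ceph/ceph.conf <<'EOF'
    [client.radosgw.gateway]
        host = gw-host
        keyring = /etc/ceph/ceph.client.radosgw.keyring
        rgw frontends = "civetweb port=7480"
    EOF
    # then restart the radosgw service for the instance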
[19:46] * dgurtner (~dgurtner@178.197.231.228) Quit (Ping timeout: 480 seconds)
[19:48] * diegows (~diegows@181.164.171.210) has joined #ceph
[19:51] <flaf> Another question, about backups: is there a way to back up ceph pools? For rbd it's possible to back up snaps etc., but what about a pool of "raw" objects?
[19:52] <flaf> How do people back up pools of "raw" objects? (if they back them up at all ;))
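There is no built-in pool-level backup for raw RADOS objects; a naive (and not crash-consistent) sketch is just to walk the pool with the rados tool. The pool name "mypool" and the target directory are examples:

    mkdir -p /backup/mypool
    rados -p mypool ls | while read -r obj; do
        # object names can contain characters that are awkward as filenames;
        # this sketch ignores that problem
        rados -p mypool get "$obj" "/backup/mypool/$obj"
    done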
[19:53] * elder (~elder@50.153.131.133) Quit (Ping timeout: 480 seconds)
[19:56] * getup (~getup@business-dsl-80-101.xs4all.nl) has joined #ceph
[19:58] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[19:59] * segutier (~segutier@172.56.16.168) Quit (Quit: segutier)
[20:00] * PerlStalker (~PerlStalk@162.220.127.20) Quit (Quit: ...)
[20:02] * karnan (~karnan@106.51.133.93) Quit (Remote host closed the connection)
[20:02] * TGF (~PappI@tor-exit.fassiburg.de) has joined #ceph
[20:03] * segutier (~segutier@172.56.16.168) has joined #ceph
[20:03] * elder (~elder@50.153.131.133) has joined #ceph
[20:04] * bene (~ben@c-75-68-96-186.hsd1.nh.comcast.net) Quit (Quit: Konversation terminated!)
[20:09] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[20:11] * scuttlemonkey is now known as scuttle|afk
[20:12] * ircolle (~ircolle@66-194-8-225.static.twtelecom.net) has joined #ceph
[20:14] * championofcyrodi (~championo@50-205-35-98-static.hfc.comcastbusiness.net) Quit (Quit: Leaving.)
[20:16] * segutier (~segutier@172.56.16.168) Quit (Ping timeout: 480 seconds)
[20:17] <cetex> so.. does anyone have any idea what filesystem fragments the least? :)
[20:18] * alram (~alram@38.96.12.2) has joined #ceph
[20:18] <cetex> or what filesystem performs best with 800k 1-4MB files?
[20:18] <rkeene> NILFS ?
[20:18] <cetex> :D
[20:18] <rkeene> Now that's an entirely different question..
[20:19] <cetex> Both xfs and ext4 seem to start seeking a lot when the drive gets closer to full.
[20:21] <cetex> i'm not entirely sure since i don't know how to check that.. but iops and write performance suffer a lot.
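Two quick ways to check whether the slowdown really is fragmentation; the device and file paths are examples (xfsprogs / e2fsprogs assumed installed):

    xfs_db -r -c frag /dev/sdb1          # whole-filesystem fragmentation factor (XFS)
    filefrag -v /path/to/some/object     # extent layout of one file (works on XFS and ext4)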
[20:21] * i_m (~ivan.miro@deibp9eh1--blueice1n2.emea.ibm.com) has joined #ceph
[20:22] * cookednoodles (~eoin@89-93-153-201.hfc.dyn.abo.bbox.fr) Quit (Quit: Ex-Chat)
[20:25] * scuttle|afk is now known as scuttlemonkey
[20:29] * fghaas (~florian@185.15.236.4) has joined #ceph
[20:29] * georgem1 (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[20:30] * rldleblanc (~rdleblanc@69-195-66-44.unifiedlayer.com) has joined #ceph
[20:31] * hellertime (~Adium@72.246.185.14) has joined #ceph
[20:31] * jo00nas (~jonas@188-183-5-254-static.dk.customer.tdc.net) has joined #ceph
[20:31] * jo00nas (~jonas@188-183-5-254-static.dk.customer.tdc.net) Quit ()
[20:31] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) has joined #ceph
[20:32] * oro_ (~oro@sccc-66-78-236-243.smartcity.com) has joined #ceph
[20:32] * TGF (~PappI@5NZAABMVS.tor-irc.dnsbl.oftc.net) Quit ()
[20:32] * Da_Pineapple (~JWilbur@bolobolo1.torservers.net) has joined #ceph
[20:33] * i_m (~ivan.miro@deibp9eh1--blueice1n2.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[20:33] * Anticimex (anticimex@185.19.66.194) Quit (Quit: leaving)
[20:33] * Anticimex (anticimex@185.19.66.194) has joined #ceph
[20:35] * phoenix42 (~phoenix42@122.252.249.67) Quit (Remote host closed the connection)
[20:36] * getup (~getup@business-dsl-80-101.xs4all.nl) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz???)
[20:36] * wschulze (~wschulze@38.96.12.2) Quit (Quit: Leaving.)
[20:38] * reed_ (~reed@198.23.103.66-static.reverse.softlayer.com) has joined #ceph
[20:38] * hellertime (~Adium@72.246.185.14) Quit (Quit: Leaving.)
[20:39] * hellertime (~Adium@72.246.185.14) has joined #ceph
[20:40] * reed__ (~reed@198.23.103.98-static.reverse.softlayer.com) has joined #ceph
[20:41] * thomnico (~thomnico@2a01:e35:8b41:120:a549:a73c:cbb5:eb2d) Quit (Ping timeout: 480 seconds)
[20:42] * reed__ (~reed@198.23.103.98-static.reverse.softlayer.com) Quit ()
[20:42] * hellertime (~Adium@72.246.185.14) Quit ()
[20:45] * reed (~reed@198.23.103.89-static.reverse.softlayer.com) Quit (Ping timeout: 480 seconds)
[20:47] * reed_ (~reed@198.23.103.66-static.reverse.softlayer.com) Quit (Ping timeout: 480 seconds)
[20:48] * reed (~reed@198.23.103.98-static.reverse.softlayer.com) has joined #ceph
[20:48] * alram (~alram@38.96.12.2) Quit (Ping timeout: 480 seconds)
[20:48] * shivark (~oftc-webi@32.97.110.57) has joined #ceph
[20:49] <shivark> anybody hitting performance issues with firefly 0.80.9 ?
[20:51] * aszeszo (~aszeszo@adsa112.neoplus.adsl.tpnet.pl) Quit (Quit: Leaving.)
[20:52] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[20:55] * bitserker (~toni@213.229.187.110) Quit (Ping timeout: 480 seconds)
[20:57] * lalatenduM (~lalatendu@122.172.75.155) Quit (Quit: Leaving)
[21:02] * Da_Pineapple (~JWilbur@5NZAABMXU.tor-irc.dnsbl.oftc.net) Quit ()
[21:02] * _303 (~PierreW@sipb-tor.mit.edu) has joined #ceph
[21:06] * rotbeard (~redbeard@2a02:908:df10:d300:6267:20ff:feb7:c20) has joined #ceph
[21:12] * alfredodeza (~alfredode@198.206.133.89) Quit (Remote host closed the connection)
[21:14] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[21:16] * diegows (~diegows@181.164.171.210) Quit (Quit: Leaving)
[21:18] * oro_ (~oro@sccc-66-78-236-243.smartcity.com) Quit (Ping timeout: 480 seconds)
[21:21] * championofcyrodi (~championo@50-205-35-98-static.hfc.comcastbusiness.net) has joined #ceph
[21:21] <_robbat2|irssi> cetex: your question really depends on your write pattern to those files
[21:22] <_robbat2|irssi> are they effectively immutable after writing?
[21:22] <_robbat2|irssi> or do they get appended
[21:22] <_robbat2|irssi> if appended, or written in multiple passes, the fragmentation is going to suck pretty much everywhere
[21:23] <_robbat2|irssi> *each file in multiple passes
[21:23] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[21:25] <cetex> immutable
[21:25] <cetex> deleted after one week
[21:25] <cetex> then new created
[21:25] <cetex> :)
[21:25] <cetex> we're going to use it like s3
[21:25] <cetex> so write once, read many, data is valid for one week, then deleted.
[21:26] <cetex> we write 24/7 and will run delete batch jobs every minute / hour or something similar, depending on performance.
[21:27] <cetex> so we'll continuously purge data while also writing new. we'll have drives filled to around 70-80% continuously.
[21:27] * championofcyrodi (~championo@50-205-35-98-static.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[21:30] * hellertime (~Adium@72.246.185.14) has joined #ceph
[21:30] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[21:30] <cetex> at least that's the current goal if we can make it work.
[21:31] <cetex> drives should be simple, cheap 3.5" 7.2k rpm 4tb.
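For a write-once, delete-after-a-week workload like this, preallocating each file (or setting an XFS extent size hint on the directory) can help the allocator keep files contiguous even on a fairly full disk; the paths and sizes below are examples only:

    fallocate -l 4M /data/objects/obj-00001    # reserve the full file size up front
    xfs_io -c 'extsize 4m' /data/objects       # XFS-only: extent size hint for files created in the dir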
[21:31] * hellertime (~Adium@72.246.185.14) Quit ()
[21:32] * _303 (~PierreW@5NZAABMZU.tor-irc.dnsbl.oftc.net) Quit ()
[21:32] * totalwormage (~zc00gii@176.10.99.208) has joined #ceph
[21:36] * championofcyrodi (~championo@50-205-35-98-static.hfc.comcastbusiness.net) has joined #ceph
[21:38] * RayTracer (~RayTracer@host-81-190-2-156.gdynia.mm.pl) has joined #ceph
[21:38] <c3s4r> '0
[21:39] * TMM (~hp@46.243.30.149) has joined #ceph
[21:40] * LeaChim (~LeaChim@host86-143-18-67.range86-143.btcentralplus.com) has joined #ceph
[21:45] * rendar (~I@host122-176-dynamic.3-87-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[21:46] * championofcyrodi (~championo@50-205-35-98-static.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[21:47] <bstillwell> What would cause placement groups to be in 'active+remapped' state instead of 'active+remapped+wait_backfill' ?
[21:47] * rendar (~I@host122-176-dynamic.3-87-r.retail.telecomitalia.it) has joined #ceph
[21:48] <bstillwell> I have 18 of them in active+remapped state right now, but I'm pretty sure after one of the existing backfills completes they will move to the wait_backfill state...
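A couple of ways to see what those PGs are waiting on; the PG id 3.1f is just a placeholder:

    ceph health detail | grep remapped
    ceph pg dump_stuck unclean
    ceph pg 3.1f query | less    # inspect "recovery_state" for the reason it hasn't started backfill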
[21:59] * TMM (~hp@46.243.30.149) Quit (Remote host closed the connection)
[22:00] * TMM (~hp@46.243.30.149) has joined #ceph
[22:01] * joshd (~jdurgin@206.169.83.146) has joined #ceph
[22:01] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[22:02] * totalwormage (~zc00gii@2WVAABKW9.tor-irc.dnsbl.oftc.net) Quit ()
[22:03] * mgolub (~Mikolaj@91.225.202.153) Quit (Quit: away)
[22:03] * championofcyrodi (~championo@50-205-35-98-static.hfc.comcastbusiness.net) has joined #ceph
[22:04] * ChrisNBlum (~ChrisNBlu@178.255.153.117) has joined #ceph
[22:07] * Quackie (~DougalJac@98EAAA9CK.tor-irc.dnsbl.oftc.net) has joined #ceph
[22:08] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[22:10] * phoenix42 (~phoenix42@122.252.249.67) has joined #ceph
[22:14] * vpol (~vpol@000131a0.user.oftc.net) Quit (Quit: vpol)
[22:15] * B_Rake_ (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[22:15] * evilrob00 (~evilrob00@cpe-72-179-3-209.austin.res.rr.com) Quit (Quit: Leaving...)
[22:16] * evilrob00 (~evilrob00@cpe-72-179-3-209.austin.res.rr.com) has joined #ceph
[22:18] * championofcyrodi (~championo@50-205-35-98-static.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[22:19] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[22:19] * oro_ (~oro@sccc-66-78-236-243.smartcity.com) has joined #ceph
[22:20] * ganders (~root@190.2.42.21) Quit (Quit: WeeChat 0.4.2)
[22:20] * B_Rake_ (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Remote host closed the connection)
[22:21] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Ping timeout: 480 seconds)
[22:21] * phoenix42 (~phoenix42@122.252.249.67) Quit (Ping timeout: 480 seconds)
[22:22] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[22:23] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[22:25] * dmick (~dmick@206.169.83.146) has joined #ceph
[22:28] * davidzlap (~Adium@206.169.83.146) has joined #ceph
[22:36] * BManojlovic (~steki@cable-89-216-225-32.dynamic.sbb.rs) has joined #ceph
[22:36] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[22:36] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) has joined #ceph
[22:37] * Quackie (~DougalJac@98EAAA9CK.tor-irc.dnsbl.oftc.net) Quit ()
[22:37] * dyasny (~dyasny@173.231.115.58) Quit (Ping timeout: 480 seconds)
[22:37] * BManojlovic (~steki@cable-89-216-225-32.dynamic.sbb.rs) Quit ()
[22:38] * Sysadmin88 (~IceChat77@054527d3.skybroadband.com) has joined #ceph
[22:38] * BManojlovic (~steki@cable-89-216-225-32.dynamic.sbb.rs) has joined #ceph
[22:39] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) has joined #ceph
[22:41] * BManojlovic (~steki@cable-89-216-225-32.dynamic.sbb.rs) Quit (Remote host closed the connection)
[22:42] * BManojlovic (~steki@cable-89-216-225-32.dynamic.sbb.rs) has joined #ceph
[22:45] * rotbeard (~redbeard@2a02:908:df10:d300:6267:20ff:feb7:c20) Quit (Quit: Leaving)
[22:46] * sjm (~sjm@pool-173-70-76-86.nwrknj.fios.verizon.net) has left #ceph
[22:47] * sjmtest (uid32746@id-32746.uxbridge.irccloud.com) has joined #ceph
[22:48] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[22:49] * BManojlovic (~steki@cable-89-216-225-32.dynamic.sbb.rs) Quit (Remote host closed the connection)
[22:49] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[22:51] * sbfox (~Adium@72.2.49.50) Quit (Quit: Leaving.)
[22:54] * ChrisNBlum (~ChrisNBlu@178.255.153.117) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[22:54] * BManojlovic (~steki@cable-89-216-225-32.dynamic.sbb.rs) has joined #ceph
[22:59] <skullone> i love openstack
[22:59] <skullone> i have an openstack POC, got DNS integrated with Active Directory, and am building my ceph POC now :o
[23:02] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) Quit (Ping timeout: 480 seconds)
[23:06] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Remote host closed the connection)
[23:07] * wschulze (~wschulze@38.96.12.2) has joined #ceph
[23:08] * elder (~elder@50.153.131.133) Quit (Ping timeout: 480 seconds)
[23:10] * ChrisNBlum (~ChrisNBlu@178.255.153.117) has joined #ceph
[23:11] <Knorrie> yay :)
[23:12] * championofcyrodi (~championo@50-205-35-98-static.hfc.comcastbusiness.net) has joined #ceph
[23:13] * puffy1 (~puffy@50.185.218.255) Quit (Quit: Leaving.)
[23:14] * linuxkidd (~linuxkidd@vpngac.ccur.com) Quit (Quit: Leaving)
[23:14] * puffy (~puffy@50.185.218.255) has joined #ceph
[23:19] * ChrisNBlum (~ChrisNBlu@178.255.153.117) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[23:19] * elder (~elder@50.250.13.174) has joined #ceph
[23:22] <skullone> ceph.com is so slow though, can barely pull down packages ;(
[23:22] * oro_ (~oro@sccc-66-78-236-243.smartcity.com) Quit (Ping timeout: 480 seconds)
[23:26] * daniel2_ (~daniel@12.164.168.117) has joined #ceph
[23:29] <skullone> are there any ceph.com mirrors i could try?
[23:30] <fghaas> eu.ceph.com
[23:31] <skullone> ceph.com needs a new host ;(
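Switching to the EU mirror fghaas mentions is just a matter of swapping the hostname in the package source; the "hammer" release and "trusty" codename below are assumptions about skullone's setup, not from the log:

    echo 'deb http://eu.ceph.com/debian-hammer/ trusty main' \
        > /etc/apt/sources.list.d/ceph.list
    apt-get update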
[23:33] <fghaas> scuttlemonkey: see skullone's comment; I do realize you've been fighting this battle for some time :)
[23:33] <skullone> id gladly be a mirror :)
[23:33] <skullone> we have a few (dozen) 1Gbs peers
[23:34] <scuttlemonkey> yes, ceph.com is going to get a new host
[23:34] <scuttlemonkey> the problem is our startup-y infrastructure is all tangly
[23:34] <scuttlemonkey> so we need to split the gitbuilders (which build the docs in addition to other things) out
[23:34] <skullone> hehe, yah, i know how that goes
[23:34] <bstillwell> Isn't ceph.com hosted by dreamhost?
[23:34] <scuttlemonkey> it's coming...just slowly
[23:35] <scuttlemonkey> bstillwell: yep
[23:35] <bstillwell> Seems like they would have the capacity to host ceph.com...
[23:35] <skullone> we actually left dreamhost for some marketing sites, and moved them to rackspace, due to some issues
[23:36] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[23:36] <scuttlemonkey> well, the issue is less a dreamhost one
[23:36] <scuttlemonkey> and more a you-really-shouldn't-cram-that-much-stuff-onto-a-single-dedicated-host problem
[23:37] * utugi______ (~clusterfu@37.187.129.166) has joined #ceph
[23:37] <scuttlemonkey> ok, time to go put some food in my face before I fall over
[23:38] <bstillwell> DreamCompute seems like a natural fit. :)
[23:40] <skullone> does RH meddle in Ceph's business after the acquisition?
[23:46] * MACscr (~Adium@2601:d:c800:de3:4195:a4b1:5af2:f82c) has joined #ceph
[23:49] * MACscr (~Adium@2601:d:c800:de3:4195:a4b1:5af2:f82c) Quit ()
[23:51] * RayTracer (~RayTracer@host-81-190-2-156.gdynia.mm.pl) Quit (Remote host closed the connection)
[23:52] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.