#ceph IRC Log

IRC Log for 2015-04-13

Timestamps are in GMT/BST.

[0:06] * xcezzz1 (~Adium@pool-100-3-14-19.tampfl.fios.verizon.net) has joined #ceph
[0:06] * xcezzz (~Adium@pool-100-3-14-19.tampfl.fios.verizon.net) Quit (Read error: Connection reset by peer)
[0:08] * Diablodoct0r (~Aramande_@98EAAA6WY.tor-irc.dnsbl.oftc.net) Quit ()
[0:08] * Helleshin (~tokie@176.10.99.200) has joined #ceph
[0:09] * rwheeler (~rwheeler@pool-173-48-214-9.bstnma.fios.verizon.net) Quit (Quit: Leaving)
[0:10] * oro (~oro@p1.almaden.ibm.com) has joined #ceph
[0:17] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[0:21] <flaf> Hi, are there specific changes between radosgw in Firefly and radosgw in Hammer? I have installed a cluster with my puppetmaster classes, but with the Hammer version instead of Firefly, and when I test radosgw with s3cmd I get "ERROR: S3 error: 405 (MethodNotAllowed)" when I try to create a bucket.
[0:23] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) has joined #ceph
[0:28] <flaf> However my wildcard DNS is OK: *.$fqdn_of_my_radosgw resolves to the IP address of the radosgw.
[0:30] * rendar (~I@host185-39-dynamic.60-82-r.retail.telecomitalia.it) Quit ()
[0:36] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) has joined #ceph
[0:36] * bkopilov (~bkopilov@bzq-109-66-134-152.red.bezeqint.net) Quit (Read error: Connection reset by peer)
[0:36] * bkopilov (~bkopilov@bzq-109-66-134-152.red.bezeqint.net) has joined #ceph
[0:37] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) Quit (Remote host closed the connection)
[0:38] * Helleshin (~tokie@98EAAA6X0.tor-irc.dnsbl.oftc.net) Quit ()
[0:42] * Scrin (~ZombieTre@nchinda2.mit.edu) has joined #ceph
[0:45] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[0:47] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[0:48] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[1:12] * Scrin (~ZombieTre@98EAAA6Y1.tor-irc.dnsbl.oftc.net) Quit ()
[1:13] * Eric (~Kwen@166.70.207.2) has joined #ceph
[1:30] * oms101 (~oms101@p20030057EA2DD700EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:37] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) Quit (Quit: p66kumar)
[1:40] * oms101 (~oms101@2003:57:ea00:ad00:eef4:bbff:fe0f:7062) has joined #ceph
[1:41] * zack_dol_ (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:42] * Eric (~Kwen@98EAAA6ZT.tor-irc.dnsbl.oftc.net) Quit ()
[1:43] * Vidi (~redbeast1@thoreau.gtor.org) has joined #ceph
[1:44] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:54] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) has joined #ceph
[1:55] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[2:12] * Vidi (~redbeast1@2WVAABHLR.tor-irc.dnsbl.oftc.net) Quit ()
[2:13] * Linkshot (~TomyLobo@tor.thd.ninja) has joined #ceph
[2:14] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[2:18] * georgem (~Adium@206-248-156-43.dsl.teksavvy.com) has joined #ceph
[2:25] * georgem (~Adium@206-248-156-43.dsl.teksavvy.com) Quit (Quit: Leaving.)
[2:28] <flaf> Something has probably changed.
[2:28] <flaf> If I install my radosgw with Firefly (and my puppet classes), I have no problem creating buckets with s3cmd (and putting objects, etc.).
[2:29] <flaf> Then if I upgrade the radosgw to Hammer without changing the conf, it's impossible to create a bucket with s3cmd.
[2:31] <flaf> (just sed -i 's/firefly/hammer/g' /etc/apt/sources.list.d/ceph.list && apt-get update && apt-get dist-upgrade -y && service apache2 stop && stop radosgw-all && start radosgw-all && service apache2 start)
[2:32] * hellertime (~Adium@pool-173-48-154-80.bstnma.fios.verizon.net) has joined #ceph
[2:32] * hellertime (~Adium@pool-173-48-154-80.bstnma.fios.verizon.net) Quit ()
[2:34] <flaf> This is probably something stupid but I don't see it, and the logs don't seem very helpful to me. ;)
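For context, a hedged first-pass check for the kind of 405 flaf describes, assuming s3cmd and a radosgw at a placeholder name rgw.example.com (not flaf's real host):

    # verify the wildcard DNS and rerun the failing call with full request logging;
    # rgw.example.com and testbucket are placeholders
    dig +short testbucket.rgw.example.com     # should return the radosgw's IP (wildcard DNS)
    s3cmd --debug mb s3://testbucket          # dumps the exact request/response behind the 405
    grep -E 'host_base|host_bucket' ~/.s3cfg  # both should point at the radosgw's FQDN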
[2:38] * zack_dolby (~textual@nfmv001080190.uqw.ppp.infoweb.ne.jp) has joined #ceph
[2:41] * georgem (~Adium@206-248-156-43.dsl.teksavvy.com) has joined #ceph
[2:41] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[2:42] * Linkshot (~TomyLobo@5NZAABJLO.tor-irc.dnsbl.oftc.net) Quit ()
[2:43] * superdug (~Hideous@wannabe.torservers.net) has joined #ceph
[3:02] * georgem (~Adium@206-248-156-43.dsl.teksavvy.com) has left #ceph
[3:10] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[3:12] * shyu (~Shanzhi@119.254.196.66) has joined #ceph
[3:12] * superdug (~Hideous@5NZAABJMK.tor-irc.dnsbl.oftc.net) Quit ()
[3:14] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[3:18] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) has joined #ceph
[3:26] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) Quit (Ping timeout: 480 seconds)
[3:27] * DV__ (~veillard@2001:41d0:1:d478::1) has joined #ceph
[3:28] * davidz1 (~davidz@cpe-23-242-189-171.socal.res.rr.com) Quit (Quit: Leaving.)
[3:33] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[3:34] * Mika_c (~quassel@125.227.22.217) has joined #ceph
[3:35] * root (~root@p5DDE649B.dip0.t-ipconnect.de) has joined #ceph
[3:38] * sankarshan (~sankarsha@121.244.87.117) has joined #ceph
[3:42] * root4 (~root@p5DDE7D10.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[3:43] * CydeWeys (~visored@tor-exit0.conformal.com) has joined #ceph
[3:51] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[3:58] * vbellur (~vijay@122.167.74.40) Quit (Ping timeout: 480 seconds)
[4:04] * kefu (~kefu@114.92.99.163) has joined #ceph
[4:06] * shyu (~Shanzhi@119.254.196.66) Quit (Ping timeout: 480 seconds)
[4:08] * vbellur (~vijay@122.172.195.33) has joined #ceph
[4:12] * CydeWeys (~visored@2WVAABHP7.tor-irc.dnsbl.oftc.net) Quit ()
[4:13] * blank (~Grimhound@176.10.99.201) has joined #ceph
[4:14] * shyu (~Shanzhi@119.254.196.66) has joined #ceph
[4:15] * joshd (~joshd@68-119-140-18.dhcp.ahvl.nc.charter.com) has joined #ceph
[4:16] * glzhao (~glzhao@203.90.249.185) has joined #ceph
[4:22] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[4:30] * elder (~elder@50.153.129.31) Quit (Quit: Leaving)
[4:40] * vbellur (~vijay@122.172.195.33) Quit (Ping timeout: 480 seconds)
[4:40] * kefu_ (~kefu@114.92.111.70) has joined #ceph
[4:42] * blank (~Grimhound@98EAAA625.tor-irc.dnsbl.oftc.net) Quit ()
[4:43] * Bonzaii (~Curt`@fenix.nullbyte.me) has joined #ceph
[4:43] * zhaochao (~zhaochao@111.161.77.236) has joined #ceph
[4:45] * shang (~ShangWu@175.41.48.77) has joined #ceph
[4:46] * kefu (~kefu@114.92.99.163) Quit (Ping timeout: 480 seconds)
[4:53] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) Quit (Quit: p66kumar)
[4:53] * vbellur (~vijay@122.167.120.230) has joined #ceph
[4:56] * DV__ (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[4:59] * Steki (~steki@cable-89-216-232-72.dynamic.sbb.rs) has joined #ceph
[5:01] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[5:01] * shang (~ShangWu@175.41.48.77) has joined #ceph
[5:05] * BManojlovic (~steki@cable-89-216-231-136.dynamic.sbb.rs) Quit (Ping timeout: 480 seconds)
[5:09] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[5:12] * subscope (~subscope@92-249-244-248.pool.digikabel.hu) has joined #ceph
[5:12] * Bonzaii (~Curt`@5NZAABJQG.tor-irc.dnsbl.oftc.net) Quit ()
[5:15] * OutOfNoWhere (~rpb@199.68.195.102) Quit (Ping timeout: 480 seconds)
[5:17] * Frymaster (~Kwen@1.tor.exit.babylon.network) has joined #ceph
[5:20] * mynam (~tim@c-73-171-239-126.hsd1.fl.comcast.net) has left #ceph
[5:26] * kefu_ (~kefu@114.92.111.70) Quit (Max SendQ exceeded)
[5:27] * kefu (~kefu@114.92.111.70) has joined #ceph
[5:27] * fam is now known as fam_away
[5:28] * fam_away is now known as fam
[5:29] * kefu (~kefu@114.92.111.70) Quit (Max SendQ exceeded)
[5:30] * kefu (~kefu@114.92.111.70) has joined #ceph
[5:30] * Vacuum_ (~vovo@i59F7ACBB.versanet.de) has joined #ceph
[5:35] * mlausch (~mlausch@2001:8d8:1fe:7:7837:8c92:e048:9d94) Quit (Ping timeout: 480 seconds)
[5:37] * Vacuum (~vovo@i59F799FD.versanet.de) Quit (Ping timeout: 480 seconds)
[5:43] * mlausch (~mlausch@2001:8d8:1fe:7:85ec:3ef6:c7dd:368f) has joined #ceph
[5:47] * Frymaster (~Kwen@425AAAKMU.tor-irc.dnsbl.oftc.net) Quit ()
[5:47] * djidis__ (~skrblr@marcuse-2.nos-oignons.net) has joined #ceph
[5:48] * shylesh (~shylesh@121.244.87.124) has joined #ceph
[5:51] * vbellur (~vijay@122.167.120.230) Quit (Ping timeout: 480 seconds)
[5:51] * kefu (~kefu@114.92.111.70) Quit (Read error: Connection reset by peer)
[5:53] * karnan (~karnan@106.51.242.69) has joined #ceph
[5:56] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[5:59] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[6:03] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:10] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[6:10] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) has joined #ceph
[6:17] * djidis__ (~skrblr@425AAAKNC.tor-irc.dnsbl.oftc.net) Quit ()
[6:17] * tritonx (~CydeWeys@torsrvr.snydernet.net) has joined #ceph
[6:18] * karnan (~karnan@106.51.242.69) Quit (Ping timeout: 480 seconds)
[6:20] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[6:23] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) Quit (Quit: p66kumar)
[6:27] * karnan (~karnan@106.51.233.71) has joined #ceph
[6:27] * WinnieThePedo (6cb8af48@107.161.19.53) has joined #ceph
[6:28] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[6:42] * shyu (~Shanzhi@119.254.196.66) Quit (Remote host closed the connection)
[6:43] * subscope (~subscope@92-249-244-248.pool.digikabel.hu) Quit (Quit: Textual IRC Client: www.textualapp.com)
[6:47] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[6:47] * derjohn_mob (~aj@ip-95-223-126-17.hsi16.unitymediagroup.de) Quit (Ping timeout: 480 seconds)
[6:47] * tritonx (~CydeWeys@5NZAABJSV.tor-irc.dnsbl.oftc.net) Quit ()
[6:47] * w2k (~vend3r@herngaard.torservers.net) has joined #ceph
[6:52] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[6:53] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[7:01] <Vivek> Is there any document that shows Vcenter integration with Ceph ?
[7:01] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) Quit (Quit: Leaving.)
[7:01] * vbellur (~vijay@121.244.87.117) has joined #ceph
[7:03] * karnan (~karnan@106.51.233.71) Quit (Ping timeout: 480 seconds)
[7:08] * shyu (~Shanzhi@119.254.196.66) has joined #ceph
[7:12] * mgolub (~Mikolaj@91.225.202.153) has joined #ceph
[7:15] * wschulze (~wschulze@cpe-74-73-11-233.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:16] * wschulze (~wschulze@cpe-74-73-11-233.nyc.res.rr.com) has joined #ceph
[7:17] * w2k (~vend3r@2WVAABHXM.tor-irc.dnsbl.oftc.net) Quit ()
[7:17] * Phase (~Thayli@171.ip-5-135-148.eu) has joined #ceph
[7:18] * Phase is now known as Guest1892
[7:18] * WinnieThePedo (6cb8af48@107.161.19.53) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[7:26] * karnan (~karnan@121.244.87.117) has joined #ceph
[7:28] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[7:30] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[7:34] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[7:39] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[7:41] * derjohn_mob (~aj@tmo-100-34.customers.d1-online.com) has joined #ceph
[7:43] * wschulze (~wschulze@cpe-74-73-11-233.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:47] * Guest1892 (~Thayli@98EAAA67O.tor-irc.dnsbl.oftc.net) Quit ()
[7:57] * vbellur (~vijay@121.244.87.124) has joined #ceph
[8:00] * overclk (~overclk@121.244.87.117) has joined #ceph
[8:01] * rdas (~rdas@121.244.87.116) has joined #ceph
[8:11] * oro (~oro@p1.almaden.ibm.com) Quit (Ping timeout: 480 seconds)
[8:12] * glzhao (~glzhao@203.90.249.185) Quit (Ping timeout: 480 seconds)
[8:16] * haomaiwang (~haomaiwan@118.244.255.9) has joined #ceph
[8:16] * haomaiwang (~haomaiwan@118.244.255.9) Quit (autokilled: This host may be infected. Mail support@oftc.net with questions. BOPM (2015-04-13 06:16:34))
[8:17] * Peaced (~ahmeni@tor-exit.server6.tvdw.eu) has joined #ceph
[8:17] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[8:17] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[8:19] <Be-El> hi
[8:20] * cok (~chk@2a02:2350:18:1010:28e9:d95c:4781:4d08) has joined #ceph
[8:26] * haomaiwang (~haomaiwan@114.111.166.249) has joined #ceph
[8:29] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[8:29] * derjohn_mobi (~aj@tmo-109-26.customers.d1-online.com) has joined #ceph
[8:31] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[8:35] * b0e (~aledermue@213.95.25.82) has joined #ceph
[8:36] * derjohn_mob (~aj@tmo-100-34.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[8:37] * Kioob (~Kioob@200.254.0.109.rev.sfr.net) has joined #ceph
[8:39] * derjohn_mobi (~aj@tmo-109-26.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[8:47] * Peaced (~ahmeni@5NZAABJW8.tor-irc.dnsbl.oftc.net) Quit ()
[8:47] * cooey (~Pommesgab@95.130.11.147) has joined #ceph
[8:50] * fghaas (~florian@91-119-140-224.dynamic.xdsl-line.inode.at) has joined #ceph
[8:54] * cok (~chk@2a02:2350:18:1010:28e9:d95c:4781:4d08) Quit (Quit: Leaving.)
[8:57] * lkoranda (~lkoranda@213.175.37.10) Quit (Quit: Splunk> Be an IT superhero. Go home early.)
[8:58] * lkoranda (~lkoranda@213.175.37.10) has joined #ceph
[9:00] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[9:05] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[9:10] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[9:10] * Concubidated (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[9:12] * oro (~oro@207-47-24-10.static-ip.telepacific.net) has joined #ceph
[9:13] * oro (~oro@207-47-24-10.static-ip.telepacific.net) Quit ()
[9:17] * cooey (~Pommesgab@2WVAABH25.tor-irc.dnsbl.oftc.net) Quit ()
[9:17] * homosaur (~anadrom@chomsky.torservers.net) has joined #ceph
[9:23] <anorak> hi
[9:24] * thomnico (~thomnico@2a01:e35:8b41:120:c43a:4e84:65cd:44cf) has joined #ceph
[9:29] * dgurtner (~dgurtner@178.197.231.228) has joined #ceph
[9:31] * dgurtner (~dgurtner@178.197.231.228) Quit ()
[9:32] * dgurtner (~dgurtner@178.197.231.228) has joined #ceph
[9:32] * fsimonce (~simon@host178-188-dynamic.26-79-r.retail.telecomitalia.it) has joined #ceph
[9:33] * bobrik (~bobrik@83.243.64.45) has joined #ceph
[9:43] * rendar (~I@95.234.176.127) has joined #ceph
[9:44] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) has joined #ceph
[9:44] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) has joined #ceph
[9:47] * homosaur (~anadrom@2WVAABH4X.tor-irc.dnsbl.oftc.net) Quit ()
[9:47] * Grimmer (~hyst@strasbourg-tornode.eddai.su) has joined #ceph
[9:51] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[9:52] * kapil (~ksharma@2620:113:80c0:5::2222) Quit (Quit: Leaving)
[9:53] * mivaho (~quassel@2001:983:eeb4:1:c0de:69ff:fe2f:5599) has joined #ceph
[9:58] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[9:58] * shang (~ShangWu@175.41.48.77) has joined #ceph
[10:00] * jordanP (~jordan@213.215.2.194) has joined #ceph
[10:03] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[10:12] * vbellur (~vijay@121.244.87.117) has joined #ceph
[10:16] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[10:17] * Grimmer (~hyst@5NZAABJ11.tor-irc.dnsbl.oftc.net) Quit ()
[10:17] * N3X15 (~jacoo@edwardsnowden1.torservers.net) has joined #ceph
[10:25] * lkoranda (~lkoranda@213.175.37.10) Quit (Quit: Splunk> Be an IT superhero. Go home early.)
[10:28] * joshd (~joshd@68-119-140-18.dhcp.ahvl.nc.charter.com) Quit (Ping timeout: 480 seconds)
[10:30] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[10:33] * getup (~getup@gw.office.cyso.net) has joined #ceph
[10:34] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:42] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[10:43] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[10:46] * haigang (~haigang@180.166.129.186) has joined #ceph
[10:47] * N3X15 (~jacoo@98EAAA7D9.tor-irc.dnsbl.oftc.net) Quit ()
[10:47] * PappI (~Mattress@jaures.gtor.org) has joined #ceph
[10:48] * zack_dolby (~textual@nfmv001080190.uqw.ppp.infoweb.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[10:51] * ksingh (~Adium@2001:708:10:10:75aa:11f3:9f07:50cb) has joined #ceph
[10:51] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit (Quit: Leaving.)
[10:51] <ksingh> Hello Cephers
[10:52] * glzhao (~glzhao@203.90.249.185) has joined #ceph
[10:59] * nils_ (~nils@doomstreet.collins.kg) has joined #ceph
[11:02] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[11:02] * bash (~oftc-webi@teeri.csc.fi) has joined #ceph
[11:02] <bash> Hi there
[11:03] <bash> The formula mentioned in the Ceph documentation, Total PGs = (OSDs * 100) / pool size: does this give the PG number for the ENTIRE CLUSTER or for ONE POOL?
[11:11] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) has joined #ceph
[11:13] * haigang (~haigang@180.166.129.186) Quit (Quit: This computer has gone to sleep)
[11:14] <bash> The formula mentioned in the Ceph documentation, Total PGs = (OSDs * 100) / pool size: does this give the PG number for the ENTIRE CLUSTER or for ONE POOL?
[11:15] * haigang (~haigang@180.166.129.186) has joined #ceph
[11:16] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[11:17] * PappI (~Mattress@98EAAA7FH.tor-irc.dnsbl.oftc.net) Quit ()
[11:17] * rhonabwy (~Tarazed@tor.nullbyte.me) has joined #ceph
[11:27] * haigang (~haigang@180.166.129.186) Quit (Quit: This computer has gone to sleep)
[11:27] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[11:27] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[11:27] * smithfarm (~ncutler@nat1.scz.suse.com) has joined #ceph
[11:28] <smithfarm> loicd: joao said you were looking for me?
[11:31] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[11:39] * kefu (~kefu@114.92.111.70) has joined #ceph
[11:43] * vbellur (~vijay@121.244.87.124) has joined #ceph
[11:45] <loicd> smithfarm: just wondering if you had another nick on IRC but here you are, thanks for https://github.com/ceph/ceph/pull/4334 ;-)
[11:46] <loicd> smithfarm: feel free to ping me on #ceph-devel if you have dev related questions , jluis tells me we're in the same timezone (I'm in France)
[11:47] * rhonabwy (~Tarazed@2WVAABIA3.tor-irc.dnsbl.oftc.net) Quit ()
[11:49] <smithfarm> loicd: thanks
[11:52] * bitserker1 (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[11:55] * haigang (~haigang@180.166.129.186) has joined #ceph
[11:58] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Ping timeout: 480 seconds)
[11:59] * DV__ (~veillard@2001:41d0:1:d478::1) has joined #ceph
[12:00] * haigang (~haigang@180.166.129.186) Quit (Quit: ??????)
[12:00] * _nick (~nick@zarquon.dischord.org) Quit (Quit: ZNC - http://znc.in)
[12:01] * _nick (~nick@zarquon.dischord.org) has joined #ceph
[12:05] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[12:14] <getup> we're currently running radosgw-agent and when doing a full sync it doesn't pick up objects that have a folder as prefix, e.g. files/file.txt isn't synchronized whereas file.txt that sits directly in the bucket is. Apparently it results in a 404 on the other end during the PUT request. What could I be missing here?
[12:14] * wschulze (~wschulze@cpe-74-73-11-233.nyc.res.rr.com) has joined #ceph
[12:14] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Read error: Connection reset by peer)
[12:17] * Sliker (~biGGer@ds1789779.dedicated.solnet.ch) has joined #ceph
[12:24] * Kioob (~Kioob@200.254.0.109.rev.sfr.net) Quit (Quit: Leaving.)
[12:24] * bgleb (~bgleb@2a02:6b8:0:2309:a426:314d:abd5:f11b) has joined #ceph
[12:28] * shyu (~Shanzhi@119.254.196.66) Quit (Remote host closed the connection)
[12:30] * wschulze (~wschulze@cpe-74-73-11-233.nyc.res.rr.com) Quit (Quit: Leaving.)
[12:34] * cok (~chk@2a02:2350:18:1010:c190:9a9:17f4:462e) has joined #ceph
[12:40] * shylesh (~shylesh@121.244.87.124) Quit (Remote host closed the connection)
[12:40] * Mika_c (~quassel@125.227.22.217) Quit (Remote host closed the connection)
[12:47] * Sliker (~biGGer@1GLAAA80G.tor-irc.dnsbl.oftc.net) Quit ()
[12:47] * Diablothein (~Kidlvr@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[12:48] * dgurtner (~dgurtner@178.197.231.228) Quit (Ping timeout: 480 seconds)
[12:49] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[12:51] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[12:53] * fghaas (~florian@91-119-140-224.dynamic.xdsl-line.inode.at) has left #ceph
[12:59] * kefu (~kefu@114.92.111.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[13:00] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[13:01] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[13:03] * kefu (~kefu@114.92.111.70) has joined #ceph
[13:05] * bash (~oftc-webi@teeri.csc.fi) Quit (Quit: Page closed)
[13:07] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[13:07] * kefu is now known as kefu|afk
[13:09] * madkiss (~madkiss@2001:6f8:12c3:f00f:40ec:9cb2:84eb:cf3e) Quit (Quit: Leaving.)
[13:13] * kefu|afk (~kefu@114.92.111.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[13:14] * dgurtner (~dgurtner@178.197.231.228) has joined #ceph
[13:14] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[13:15] * Kioob (~Kioob@200.254.0.109.rev.sfr.net) has joined #ceph
[13:17] * Diablothein (~Kidlvr@5NZAABJ94.tor-irc.dnsbl.oftc.net) Quit ()
[13:22] * bgleb (~bgleb@2a02:6b8:0:2309:a426:314d:abd5:f11b) Quit (Read error: Connection timed out)
[13:22] * bgleb (~bgleb@77.88.2.37-spb.dhcp.yndx.net) has joined #ceph
[13:23] * branto (~branto@213.175.37.10) has joined #ceph
[13:27] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[13:30] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[13:33] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[13:35] * bitserker1 (~toni@63.pool85-52-240.static.orange.es) Quit (Ping timeout: 480 seconds)
[13:40] * getup (~getup@gw.office.cyso.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[13:43] * marrusl (~mark@cpe-24-90-46-248.nyc.res.rr.com) has joined #ceph
[13:47] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) has joined #ceph
[13:48] * hellertime (~Adium@23.79.238.10) has joined #ceph
[13:51] * zhaochao (~zhaochao@111.161.77.236) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 31.6.0/20150331233809])
[13:57] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) Quit (Quit: Leaving.)
[13:58] * i_m (~ivan.miro@deibp9eh1--blueice1n2.emea.ibm.com) has joined #ceph
[13:59] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (Quit: o//)
[14:01] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit ()
[14:01] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[14:02] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[14:04] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[14:06] * bene (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[14:06] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[14:12] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Ping timeout: 480 seconds)
[14:14] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) Quit (Quit: Leaving.)
[14:17] * tZ (~Mattress@bolobolo1.torservers.net) has joined #ceph
[14:18] * tupper_ (~tcole@108-83-203-37.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[14:24] * branto (~branto@213.175.37.10) Quit (Remote host closed the connection)
[14:26] * branto (~branto@nat-pool-brq-t.redhat.com) has joined #ceph
[14:27] * ganders (~root@190.2.42.21) has joined #ceph
[14:28] * smithfarm (~ncutler@nat1.scz.suse.com) has left #ceph
[14:34] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) has joined #ceph
[14:41] <flaf> Ah bash is gone. Too bad. :)
[14:43] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[14:45] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) Quit (Remote host closed the connection)
[14:46] * bene2 (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[14:46] * tupper_ (~tcole@108-83-203-37.lightspeed.rlghnc.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[14:47] * tZ (~Mattress@2WVAABII7.tor-irc.dnsbl.oftc.net) Quit ()
[14:47] * Bored (~Chaos_Lla@dreamatorium.badexample.net) has joined #ceph
[14:52] * elder (~elder@50.250.13.174) has joined #ceph
[14:53] * bene (~ben@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[14:53] * dyasny (~dyasny@173.231.115.58) has joined #ceph
[14:54] * sjm (~sjm@ca6.vpnunlimitedapp.com) has joined #ceph
[14:54] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[14:56] * tupper_ (~tcole@rtp-isp-nat1.cisco.com) has joined #ceph
[14:57] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:00] * getup (~getup@gw.office.cyso.net) has joined #ceph
[15:04] * bash (~oftc-webi@teeri.csc.fi) has joined #ceph
[15:04] <bash> The formula mentioned in the Ceph documentation, Total PGs = (OSDs * 100) / pool size: does this give the PG number for the ENTIRE CLUSTER or for ONE POOL?
[15:05] * dyasny (~dyasny@173.231.115.58) Quit (Quit: Ex-Chat)
[15:05] * dyasny (~dyasny@173.231.115.58) has joined #ceph
[15:05] <bash> flaf: could you help ?
[15:05] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[15:06] <joelm> brutuscat: entire cluster afaik, use the pg calc
[15:06] <joelm> bash: ^ even
[15:06] <joelm> http://ceph.com/pgcalc/
[15:06] <joelm> if the website was working, that is (seems to have been really slow recently!)
[15:06] * cok (~chk@2a02:2350:18:1010:c190:9a9:17f4:462e) Quit (Quit: Leaving.)
[15:07] <bash> joelm: I am using pgcalc but want to understand the concept. I am confused: should I consider that formula to be per pool or for the entire cluster?
[15:07] * bgleb (~bgleb@77.88.2.37-spb.dhcp.yndx.net) Quit (Remote host closed the connection)
[15:08] <joelm> you need to factor in the number of pools, sure
[15:09] <bash> joelm: do you mean that whatever PG value comes out of this formula, I should divide it by NUM_OF_POOLS?
[15:10] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) has joined #ceph
[15:11] <joelm> No, just use the formula :)
[15:11] <joelm> that's why the number of pools is listed in this
[15:12] <joelm> as you want the total number of pools to work out how many PGs you need, depending on the replica count and target PGs per OSD
[15:13] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Ping timeout: 480 seconds)
[15:15] <flaf> bash, yes it's for the entire cluster, but it's an approximation because "pool size" is not necessarily constant.
[15:16] <flaf> In fact, the number of PGs should be weighted by the amount of data on each pool.
[15:17] * Bored (~Chaos_Lla@425AAAKU6.tor-irc.dnsbl.oftc.net) Quit ()
[15:17] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[15:17] <flaf> The short answer is indeed: "see http://ceph.com/pgcalc/".
[15:18] * getup (~getup@gw.office.cyso.net) Quit (Ping timeout: 480 seconds)
[15:18] <flaf> The link gives some explanations.
[15:19] <bash> Thanks flaf and joelm thanks for your help . so i should blindly use PGCALC and hope it should give me correct PG values for my production ceph cluster.
[15:19] <flaf> bash: and the equation is roughly "#pg for a specific pool = 100 x (#OSDs for this pool) x (%DATA for this pool) / (pool size)"
[15:19] <flaf> But the equation has exceptions (explained in the link).
[15:22] <flaf> The principle is indeed to have ~100 PGs per OSD, but the number of PGs for a given pool is weighted by the %DATA/(pool size) coefficient.
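As a worked example of the per-pool equation flaf just gave (the figures are invented, not from this cluster): 240 OSDs, a pool expected to hold 40% of the data, and a pool size of 3 gives:

    # (100 * #OSDs * %data) / pool_size, using bash integer arithmetic
    echo $(( 100 * 240 * 40 / 100 / 3 ))   # -> 3200
    # pgcalc would then round this to a power of two, i.e. pg_num = 4096 for that pool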
[15:23] * sugoruyo (~georgev@paarthurnax.esc.rl.ac.uk) has joined #ceph
[15:24] * bobrik (~bobrik@83.243.64.45) Quit (Quit: (null))
[15:24] <bash> flaf: thanks you are a real Ceph hero
[15:25] <flaf> bash: Oh no... I'm not at all, believe me. :)
[15:25] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[15:26] * bash (~oftc-webi@teeri.csc.fi) Quit (Quit: Page closed)
[15:26] * bash (~oftc-webi@teeri.csc.fi) has joined #ceph
[15:26] <flaf> But in practice, it's very simple: you use http://ceph.com/pgcalc/ ;)
[15:33] * b0e (~aledermue@213.95.25.82) Quit (Ping timeout: 480 seconds)
[15:34] * b0e (~aledermue@213.95.25.82) has joined #ceph
[15:36] <flaf> Ah bash: last info: the equation is valid for "big" clusters (#OSDs > 50). For a small cluster (#OSDs <= 50), follow this link http://ceph.com/docs/master/rados/operations/placement-groups/#a-preselection-of-pg-num
[15:37] * harold (~hamiller@71-94-227-66.dhcp.mdfd.or.charter.com) has joined #ceph
[15:37] <bash> flaf: thanks again , i have read that link , and my cluster has 240 OSD
[15:39] <flaf> Ah ok.
[15:39] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[15:40] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[15:44] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[15:44] <cetex> \o/
[15:45] <cetex> journal on (raw) first partition on disk, changed filesystem to xfs, getting 1.5x throughput :)
[15:45] <cetex> up to 600MB/sec to 18drives :)
[15:46] <flaf> cetex: cool, how was the config before?
[15:46] <cetex> network is the limit though.. seeing plenty of packetloss on our 2gbit links..
[15:46] <cetex> ext3, journal and data on same partition
[15:46] <flaf> Ah ok.
[15:46] * harmw (~harmw@chat.manbearpig.nl) Quit (Ping timeout: 480 seconds)
[15:46] <flaf> And do you have try btrfs? ;)
[15:46] <flaf> *tried
[15:47] <cetex> tried btrfs on a few drives and saw some improvements, mostly the write throughput was more stable.
[15:47] <cetex> also, using leveldb.
[15:47] <cetex> currently.
[15:47] * dux0r (~drdanick@tor-daiquiri.piraten-nds.de) has joined #ceph
[15:47] <T1w> what about ext4 instead of xfs?
[15:48] <cetex> ah, sorry. ext4 was what i had before
[15:48] <cetex> :)
[15:48] <T1w> ah
[15:48] <cetex> the write throughput/s is still jumping around all over the place, from 50MB/s sometimes to 650MB/s..
[15:49] <T1w> it could be nice to know if it was the switch to raw for the journal or ext4->xfs that did it
[15:49] * bene2 (~ben@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[15:49] <flaf> cetex: how do you bench your cluster?
[15:49] * maxxware (~maxx@149.210.133.105) has joined #ceph
[15:49] <cetex> yeah. i'm most likely going to test.
[15:49] <cetex> one "rados bench -p data 90000 write" per host, 9 hosts, 2drives per host.
[15:50] <flaf> ok thx.
[15:50] <T1w> btw, what's the sweetspot for number of drives in a osd compared to more osds?
[15:51] <cetex> one drive per osd? :)
[15:51] <cetex> i don't know. we're running one per osd at least.
[15:51] <cetex> if one drive breaks we'll only need to resynch 4TB of data, and 2disks per host means at most 8TB of data.
[15:51] <T1w> I was thinking hw-wise.. 1 machine, 1 osd, 1 drive or 1 machine, 1 osd 2 drives or 1 machine, 2 osds, 2 drives
[15:52] <cetex> i guess 1 machine x osd, x drives
[15:52] * bene (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[15:52] <T1w> mmm
[15:52] <T1w> up to a saturation point of some sorts
[15:52] <cetex> yeah. at least 1GB of ram per osd.
[15:53] <T1w> what about # cores for a given osd?
[15:53] <cetex> then it's about network throughput.
[15:53] <T1w> oh.. stupid me..
[15:53] * m0zes was under the impression it was 1GB of ram / TB of osd space on the machine.
[15:53] <cetex> not sure, but for me it seems like they're using 30% of one core.
[15:53] <T1w> an osd does nothing CPU intensive anyway
[15:53] <cetex> hm, no.. 30-70% :>
[15:53] <m0zes> erasure-coded pools use more.
[15:54] <cetex> but that's on a xeon cpu, it will differ on others.
[15:54] <T1w> of course
[15:54] <T1w> oh well.. I'm off
[15:54] <cetex> me to
[15:54] <cetex> home!
[15:54] <T1w> cya sometime
[15:55] * bash (~oftc-webi@teeri.csc.fi) Quit (Quit: Page closed)
[15:55] * bash (~oftc-webi@teeri.csc.fi) has joined #ceph
[15:56] <bash> cetex: how did you configure your journal to be placed on the xfs filesystem? Could you please share the command to do it? I am a newbie in this space.
[15:57] <flaf> cetex: which type of disk do you use? HDD, SSD?
[15:58] <flaf> SAS Disk? etc.
[15:58] * kefu (~kefu@114.92.111.70) has joined #ceph
[15:58] * yghannam (~yghannam@0001f8aa.user.oftc.net) has joined #ceph
[15:59] <flaf> bash: no, cetex places the journal in a *raw* partition (on the same disk as the OSD).
[16:00] * b0e (~aledermue@213.95.25.82) Quit (Ping timeout: 480 seconds)
[16:01] * harmw (~harmw@chat.manbearpig.nl) has joined #ceph
[16:01] * harold (~hamiller@71-94-227-66.dhcp.mdfd.or.charter.com) Quit (Read error: Connection reset by peer)
[16:02] * vata (~vata@208.88.110.46) has joined #ceph
[16:02] * wushudoin (~wushudoin@209.132.181.86) has joined #ceph
[16:02] * pcsquared (sid11336@id-11336.ealing.irccloud.com) has joined #ceph
[16:02] * harold (~hamiller@71-94-227-66.dhcp.mdfd.or.charter.com) has joined #ceph
[16:02] <pcsquared> hey, let's say i have a file on a ceph-backed instance that i want to make sure is *gone* (not lazily deleted)
[16:03] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[16:03] <pcsquared> is there any mechanism for doing that?
[16:06] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[16:07] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[16:07] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) has joined #ceph
[16:10] <bash> flaf: cetex: just wrote above "journal on (raw) first partition on disk, changed filesystem to xfs, getting 1.5x throughput" which means he was using RAW before and after that he changed it to XFS filesystem and saw performance improvement. Please correct me if my understanding is incorrect.
[16:11] * harold (~hamiller@71-94-227-66.dhcp.mdfd.or.charter.com) Quit (Ping timeout: 480 seconds)
[16:11] <bash> flaf: by the way, how do I check where my OSD journal partition is located? I have a running cluster.
[16:12] * harold (~hamiller@71-94-227-43.dhcp.mdfd.or.charter.com) has joined #ceph
[16:12] * tupper_ (~tcole@rtp-isp-nat1.cisco.com) Quit (Ping timeout: 480 seconds)
[16:13] <flaf> bash: before: "ext4 in a single partition for the disk, and the journal was in this ext4 partition as a regular file". After: "disk with part1 -> raw partition for the journal and part2 -> xfs for the working dir of the osd".
[16:14] <cetex> sorry, on my way home. disks are cheap 3.5" 7.2k 4tb sata drives
[16:14] <flaf> cetex: is 600 MB/s an average or is it the max? 600 MB/s seems very good to me for just 18 drives.
[16:14] <flaf> cetex: no pb.
[16:15] <flaf> cetex: and the pool size? 2, 3?
[16:16] <bash> cetex: thanks you are back , could you please elaborate how was your journal setup before and after ?
[16:16] <bash> flaf: Thanks buddy , i am trying to understand this
[16:16] <flaf> cetex: Do you have a RAID controller? I'm curious about your hardware config.
[16:16] <cetex> yeah. 600 is the max. need more data to see the average, but it's dropping way too much during low throughput, seeing 100MB/s sometimes currently. we need 300MB/s minimum if it's going to work for us.
[16:17] * dux0r (~drdanick@2WVAABINS.tor-irc.dnsbl.oftc.net) Quit ()
[16:17] * Xeon06 (~HoboPickl@98EAAA7RN.tor-irc.dnsbl.oftc.net) has joined #ceph
[16:17] * b0e (~aledermue@213.95.25.82) has joined #ceph
[16:17] <cetex> min_size is 2, size is 3
[16:17] <cetex> 600pgs, but dont think that matters
[16:18] <cetex> no raid, one drive one osd
[16:18] <flaf> ok, thx cetex :)
[16:18] <flaf> bash: it's simple.
[16:19] <flaf> bash: the more simple is to have a disk with only one partition.
[16:19] <flaf> and the working directory of the osd and the journal are in this partition.
[16:19] <cetex> though, this should kinda work if we scale it up to one PB (224 drives), but need to verify a few times before im allowed to order :)
[16:20] <flaf> so, in this case, /var/lib/ceph/osd/ceph-$id/journal is a regular file (of 10GB or 20GB).
[16:21] <flaf> then the best is to have one disk for the entire OSD working dir and put the journal in a raw partition of an SSD disk.
[16:21] <flaf> so, in this case, /var/lib/ceph/osd/ceph-$id/journal is a symlink to /dev/my-disk-ssd/part1 (it's an example)
[16:22] * harold (~hamiller@71-94-227-43.dhcp.mdfd.or.charter.com) Quit (Quit: Leaving)
[16:23] <bash> flaf: A BIG THANKS to you for your explanation , I love you
[16:23] <bash> :-)
[16:24] <flaf> And if you have no SSD, the middle solution is to have an OSD disk with 2 partitions: part1 a raw partition for the journal and part2 an xfs partition for the working dir of the OSD.
[16:24] <flaf> so, in this case, /var/lib/ceph/osd/ceph-$id/journal is a symlink too, but a symlink to the raw partition of the same OSD disk.
[16:25] <flaf> You should put the raw journal partition in the first position because the first partition is generally ~20% faster.
[16:25] <flaf> :)
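A rough sketch of how that symlinked raw-partition journal is usually set up on an existing OSD; osd.0 and /dev/sdb1 are placeholders, and the init commands assume sysvinit (on Upstart it would be stop/start ceph-osd id=0):

    # stop the OSD and flush its current file-based journal into the object store
    service ceph stop osd.0
    ceph-osd -i 0 --flush-journal
    # repoint the journal at the raw partition and recreate it there
    rm /var/lib/ceph/osd/ceph-0/journal
    ln -s /dev/sdb1 /var/lib/ceph/osd/ceph-0/journal
    ceph-osd -i 0 --mkjournal
    service ceph start osd.0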
[16:26] <bash> flaf: you are the right person to ask and you have deep knowledge. I don't have an SSD and currently my journal is on the same disk as the data [/var/lib/ceph/osd/ceph-$id/journal is a regular file]. So to get more performance should I move this journal to a raw partition of the same disk? Is it recommended for production?
[16:27] <bash> i deployed my OSDs using ceph-deploy osd create, so by default it has created journals on the same disk as a FILE (the same as you explained)
[16:27] * ivs (~ivs@ip-95-221-196-162.bb.netbynet.ru) Quit (Remote host closed the connection)
[16:27] * ivs (~ivs@ip-95-221-196-162.bb.netbynet.ru) has joined #ceph
[16:28] <bash> so should i move them to RAW partition like cetex: has done
[16:28] <flaf> bash: yes I think so. The improvement will not be massive at all, no miracle (in French: "ce sera pas la fête à dudul", roughly "it won't be anything spectacular", forget it ;)) but it's better.
[16:29] <m0zes> the beginning of the disk is *slightly* faster, and all in one place, so I would.
[16:29] <m0zes> especially with write-only (or mostly write-only) journal files.
[16:29] <bash> i hope doing this is safe
[16:30] * shang (~ShangWu@223-136-43-219.EMOME-IP.hinet.net) has joined #ceph
[16:31] <cetex> the idea behind putting the journal on its own raw partition is that it should be located at the beginning of the disk, where the disk (and its seeks) is fastest
[16:32] <flaf> Ah, personally, I have made the same "mistake" as you, and I should remove each osd and recreate it anew (I have just 18 OSDs).
[16:32] <bash> cetex: thanks. How did you create your OSDs: using ceph-deploy or some other way?
[16:32] <cetex> manually
[16:33] <cetex> running them in a docker container, the same hosts also runs mesos and such.
[16:34] <bash> cetex: so step 1: manually create two partitions on the disk using fdisk, step 2: create the osd, step 3: point the journal to partition 1? Are these steps correct?
[16:34] <cetex> if the disk doesn't have any ceph data we run:
[16:34] <cetex> ceph-disk prepare --data-dir /data/ceph --osd-uuid=$UUID
[16:34] <cetex> ceph-osd -d -i $OSDID --osd-data=/data/ceph/ --osd-journal=/data/ceph/journal --mkfs
[16:34] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[16:34] <flaf> cetex: just another question: what about your switches between cluster nodes? 1Gb, 10Gb?
[16:34] <cetex> and if it has data we run:
[16:34] <cetex> ceph osd crush create-or-move osd.$OSDID 1.0 root=default datacenter=age row=1 rack=$RACK host=$HOSTNAME
[16:34] * shang_ (~ShangWu@175.41.48.77) has joined #ceph
[16:34] <cetex> ceph-osd -d -i $OSDID --osd-data=/data/ceph/ --osd-journal=/data/ceph/journal'
[16:35] <cetex> currently 2x1gbit, we use lacp so we get roughly 1.8Gbit.
[16:35] <flaf> ok, thx.
[16:35] <cetex> going to be 2x10gbit soonish though :>
[16:36] <bash> cetex: what's your recommendation for journal partition size for a 4TB disk?
[16:36] <cetex> but that won't do much for ceph. we barely push it to max out the interfaces during spikes now.
[16:36] <cetex> i have no idea about size of that, i'm still trying to figure it out
[16:36] <bash> flaf : you ?
[16:36] * shang (~ShangWu@223-136-43-219.EMOME-IP.hinet.net) Quit (Read error: Connection reset by peer)
[16:36] <cetex> if you're doing bursty writes (write once in a while but write a lot) you should most likely have a larger partition.
[16:36] <flaf> I think with 20GB for the journal, it will be Ok.
[16:37] <bash> thanks
[16:37] <cetex> but if you, like us, have continuous writes 24/7, i guess a smaller journal (so it can fit 1 second of data or something) may provide more even throughput
[16:37] <bash> cetex: did you first create the partitions using fdisk / parted or some other linux utility?
[16:37] <cetex> i currently have 1650MB partition for the journal.
[16:37] <cetex> parted
[16:37] <cetex> parted -s /dev/ldisksda -- mklabel gpt mkpart ext4 2048s 1650 mkpart xfs 1650 100%
[16:38] <cetex> "ext4" and "xfs" is just a name as far as i understand it, so i just set it to something random.
[16:38] <cetex> "ext4" being the random choice.
[16:39] <m0zes> I am willing to bet parted did a mkfs.ext4 on it. then ceph overwrote it as the journal.
[16:39] <bash> cetex: Thanks for sharing the command , you know what i want :)
[16:39] <cetex> no, i did "dd if=/dev/zero of=/dev/ldisksda1" afterwards, ceph wouldn't start otherwise.
[16:40] <bash> cetex: did you mean create journal and then just DD it to make sure everything is clean ?
[16:40] <bash> and raw for the journal
[16:41] <cetex> no, dd first, then ceph-osd mkfs stuff
[16:41] <bash> yeah i mean create partition using parted , then DD then ceph-osd create
[16:41] <cetex> gonna be back in a few hours, building new floor in the kitchen.. :>
[16:42] <cetex> yeah
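Pulling cetex's pasted steps together into one sequence (the device name, the 1650MB journal size, the mount point and $OSDID/$UUID are taken from or modelled on the lines above; the mkfs.xfs/mount step wasn't pasted and is an assumption):

    # 1. GPT label, small raw journal partition first, data partition after it
    parted -s /dev/sdX -- mklabel gpt mkpart journal 2048s 1650 mkpart data 1650 100%
    # 2. zero the (small) journal partition, otherwise ceph-osd refuses to start on it
    dd if=/dev/zero of=/dev/sdX1 bs=1M
    # 3. xfs on the data partition, mounted as the OSD working dir (assumed step)
    mkfs.xfs /dev/sdX2
    mount /dev/sdX2 /data/ceph
    # 4. prepare the OSD, with the journal symlinked to the raw partition
    ceph-disk prepare --data-dir /data/ceph --osd-uuid=$UUID
    ln -s /dev/sdX1 /data/ceph/journal
    ceph-osd -d -i $OSDID --osd-data=/data/ceph/ --osd-journal=/data/ceph/journal --mkfs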
[16:42] <bash> cetex: if i lived next door, i would have helped you build your kitchen
[16:42] <bash> thanks for your ceph help
[16:43] <bash> flaf: Thanks to you as well :)
[16:43] <flaf> :)
[16:44] <cetex> :p
[16:44] <cetex> i have no idea if this is the best way to do it, i'm still doing experiments. :)
[16:47] * Xeon06 (~HoboPickl@98EAAA7RN.tor-irc.dnsbl.oftc.net) Quit ()
[16:51] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) has joined #ceph
[16:52] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[16:55] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) Quit ()
[16:56] * bash (~oftc-webi@teeri.csc.fi) Quit (Remote host closed the connection)
[16:58] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has left #ceph
[16:58] * scuttle|afk is now known as scuttlemonkey
[17:00] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has joined #ceph
[17:00] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has left #ceph
[17:00] * lkoranda (~lkoranda@nat-pool-brq-t.redhat.com) has joined #ceph
[17:03] * ksingh (~Adium@2001:708:10:10:75aa:11f3:9f07:50cb) Quit (Ping timeout: 480 seconds)
[17:03] * fghaas (~florian@91-119-140-224.dynamic.xdsl-line.inode.at) has joined #ceph
[17:03] * fghaas (~florian@91-119-140-224.dynamic.xdsl-line.inode.at) Quit ()
[17:03] * fghaas (~florian@91-119-140-224.dynamic.xdsl-line.inode.at) has joined #ceph
[17:04] * fghaas (~florian@91-119-140-224.dynamic.xdsl-line.inode.at) Quit ()
[17:04] * fghaas (~florian@91-119-140-224.dynamic.xdsl-line.inode.at) has joined #ceph
[17:06] * ircolle (~Adium@2601:1:a580:1735:507e:4aaf:e5ad:c905) has joined #ceph
[17:07] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:07] * bene (~ben@nat-pool-bos-t.redhat.com) Quit (Read error: Connection reset by peer)
[17:07] * bene (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[17:07] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Quit: Leaving.)
[17:08] * PerlStalker (~PerlStalk@162.220.127.20) has joined #ceph
[17:08] * jwilkins (~jwilkins@2601:9:4580:f4c:ea2a:eaff:fe08:3f1d) has joined #ceph
[17:10] * b0e (~aledermue@213.95.25.82) Quit (Ping timeout: 480 seconds)
[17:11] * lkoranda (~lkoranda@nat-pool-brq-t.redhat.com) Quit (Quit: Splunk> Be an IT superhero. Go home early.)
[17:11] * linuxkidd (~linuxkidd@vpngac.ccur.com) has joined #ceph
[17:14] * fghaas (~florian@91-119-140-224.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[17:17] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[17:20] * lkoranda (~lkoranda@nat-pool-brq-t.redhat.com) has joined #ceph
[17:22] <Vivek> loicd: Does ceph officially support ceph ?
[17:22] * elder (~elder@50.250.13.174) Quit (Ping timeout: 480 seconds)
[17:22] <loicd> Vivek: I think so ;-)
[17:22] <Vivek> loicd: Does ceph officially support vcenter ?
[17:22] <Vivek> sorry about the typo earlier ?
[17:22] <loicd> Vivek: not to my knowledge
[17:23] * burley (~khemicals@cpe-98-28-239-78.cinci.res.rr.com) Quit (Quit: burley)
[17:23] <loicd> Vivek: that was a nice typo ;-)
[17:24] <joelm> I've heard of 'eat your own dogfood' before, but not 'eat your own storage backend' :)
[17:25] <Vivek> Ok.
[17:25] <Vivek> I have a requirement from my employers to integrate ceph with vcenter.
[17:25] <Vivek> Any VMs launched in vCenter should get their storage from Ceph.
[17:26] <Vivek> I could not find any use cases for the same.
[17:26] <Vivek> loicd: thanks for that info.
[17:28] * carmstrong (sid22558@id-22558.uxbridge.irccloud.com) Quit ()
[17:28] * madkiss (~madkiss@2001:6f8:12c3:f00f:e8e4:145f:403a:5f58) has joined #ceph
[17:28] <Vivek> loicd: Also can I use Openstack Object Storage with Swift ?
[17:29] <Vivek> s/Swift/Ceph
[17:29] <Vivek> If so can you point me to some document which I can use.
[17:31] <gleam> radosgw has an implementation of some of the swift api.. i believe i've also seen a swift fork (or something) using rbd as backend storage
[17:31] <m0zes> http://docs.openstack.org/openstack-ops/content/storage_decision.html
[17:31] * elder (~elder@50.153.131.154) has joined #ceph
[17:31] * tupper_ (~tcole@rtp-isp-nat1.cisco.com) has joined #ceph
[17:32] <m0zes> if the ceph website was up I'd point to the radosgw docs.
[17:32] <m0zes> I guess this works: https://github.com/ceph/ceph/tree/master/doc/radosgw
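For the Swift side of Vivek's question, the usual pattern is a radosgw subuser with a swift key; the uid, display name and endpoint below are placeholders, a sketch rather than a full walkthrough:

    # create an S3 user, then a Swift subuser and secret on the radosgw
    radosgw-admin user create --uid=demo --display-name="Demo User"
    radosgw-admin subuser create --uid=demo --subuser=demo:swift --access=full
    radosgw-admin key create --subuser=demo:swift --key-type=swift --gen-secret
    # sanity check with the swift CLI against the gateway (v1 auth)
    swift -A http://rgw.example.com/auth/1.0 -U demo:swift -K <swift_secret_key> stat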
[17:33] <xcezzz1> anyone want a good chuckle? seems when looking for cheap drives for our cluster… we got WD 2TB enterprises… refurbished… AND they are all 5400 RPM… so out of 32 we bought… 8 have already taken a dump… if we had done our normal RAID we would have been screwed… thx god ceph is so awesome
[17:33] <m0zes> *shudder*
[17:34] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) has joined #ceph
[17:35] <xcezzz1> ya… ikr… granted this is our first cluster that we are testing in a production-like environment.. i can't believe it is running as well as it is…
[17:36] <xcezzz1> especially since a quarter of the drives we originally put in are dead now
[17:39] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) has joined #ceph
[17:40] <joelm> ceph.com being funny again?
[17:40] <joelm> is it a DoS or something more subtle?
[17:43] <frickler> joelm: no idea, also got no feedback as to what has caused the outage last Friday
[17:44] * DV__ (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[17:44] <joelm> yea, it's not ideal, gives quite a negative impression
[17:44] * joelm finds eu.ceph.com working
[17:45] <joelm> at least for packages
[17:45] <joelm> will update ours to that, apt-get update breaks otherwise
[17:47] * Rens2Sea (~zviratko@425AAAKYA.tor-irc.dnsbl.oftc.net) has joined #ceph
[17:47] * reed (~reed@198.23.103.89-static.reverse.softlayer.com) has joined #ceph
[17:48] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[17:50] <Vivek> m0zes: Thanks.
[17:50] <ron-slc> ceph.com down again????? I think there needs to be an analysis of changing hosts, or providers... This is very frequent.
[17:50] * fghaas (~florian@213162068029.public.t-mobile.at) has joined #ceph
[17:52] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[17:54] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) has joined #ceph
[17:54] * fghaas (~florian@213162068029.public.t-mobile.at) Quit ()
[17:54] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) has left #ceph
[17:55] <joelm> at this rate will be prefixing it all with .nyud.net :D
[17:55] * daniel2_ (~daniel2_@12.164.168.117) has joined #ceph
[17:56] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) has joined #ceph
[17:56] * kefu (~kefu@114.92.111.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[17:57] <gregsfortytwo> it's under discussion
[17:57] <gregsfortytwo> hosting, I mean
[17:58] * puffy (~puffy@50.185.218.255) Quit (Quit: Leaving.)
[17:58] * bandrus (~brian@128.sub-70-211-79.myvzw.com) has joined #ceph
[17:59] * omar_m (~omar_m@209.163.140.194) has joined #ceph
[17:59] * puffy (~puffy@50.185.218.255) has joined #ceph
[18:00] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) Quit (Quit: p66kumar)
[18:01] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) Quit ()
[18:02] * Sysadmin88 (~IceChat77@054527d3.skybroadband.com) has joined #ceph
[18:03] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[18:05] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) has joined #ceph
[18:06] * branto (~branto@nat-pool-brq-t.redhat.com) Quit (Quit: Leaving.)
[18:06] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[18:07] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit (Quit: Leaving.)
[18:08] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Quit: Leaving...)
[18:09] * kanagaraj (~kanagaraj@27.7.32.214) has joined #ceph
[18:09] * ChrisNBlum (~ChrisNBlu@178.255.153.117) has joined #ceph
[18:15] <MaZ-> hmm... radosgw-agent seems to somehow be... creating more data in the secondary zone than exists in the primary zone
[18:16] * puffy (~puffy@50.185.218.255) Quit (Quit: Leaving.)
[18:16] * reed (~reed@198.23.103.89-static.reverse.softlayer.com) Quit (Ping timeout: 480 seconds)
[18:17] * Rens2Sea (~zviratko@425AAAKYA.tor-irc.dnsbl.oftc.net) Quit ()
[18:17] * mps (~offer@192.3.24.178) has joined #ceph
[18:17] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[18:21] * bene (~ben@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[18:24] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[18:24] * jordanP (~jordan@213.215.2.194) Quit (Quit: Leaving)
[18:24] * Kioob (~Kioob@200.254.0.109.rev.sfr.net) Quit (Quit: Leaving.)
[18:26] * joef (~Adium@2601:9:280:f2e:8d6e:903c:a73c:24b1) has joined #ceph
[18:27] * adeel (~adeel@fw1.ridgeway.scc-zip.net) has joined #ceph
[18:27] * adeel (~adeel@fw1.ridgeway.scc-zip.net) Quit ()
[18:29] * reed (~reed@2602:244:b653:6830:71f3:5114:3563:946d) has joined #ceph
[18:32] * jharley (~jharley@66.207.210.170) has joined #ceph
[18:34] * fghaas (~florian@194.112.182.213) has joined #ceph
[18:34] <Vivek> fghaas: hI
[18:35] <Vivek> fghaas: Got a few mins to address some my queries ?
[18:38] * davidz (~davidz@cpe-23-242-189-171.socal.res.rr.com) has joined #ceph
[18:38] * joef (~Adium@2601:9:280:f2e:8d6e:903c:a73c:24b1) has left #ceph
[18:41] * subscope (~subscope@92-249-244-248.pool.digikabel.hu) has joined #ceph
[18:45] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[18:47] * mps (~offer@2WVAABIU9.tor-irc.dnsbl.oftc.net) Quit ()
[18:47] * skney (~zviratko@tor-node.rutgers.edu) has joined #ceph
[18:47] <seapasulli> is the ceph site down?
[18:47] <joelm> heh, yea, probably
[18:48] <joelm> if you need packages - try eu.ceph.com
[18:48] * visbits (~textual@cpe-174-101-246-167.cinci.res.rr.com) has joined #ceph
[18:48] <seapasulli> http://www.downforeveryoneorjustme.com/ceph.com (yup)
[18:48] <visbits> guess someone upgraded ceph.com to hammer
[18:48] <visbits> rofl
[18:48] <seapasulli> ah fancy. Thanks. Don't need packages just info on the swift container bulk operations info
[18:48] <seapasulli> haha
[18:48] <joelm> visbits: or maybe that's what broke it ;)
[18:49] <visbits> that is what broke it
[18:50] * achieva (ZISN2.9G@foresee.postech.ac.kr) Quit (Ping timeout: 480 seconds)
[18:51] * vbellur (~vijay@122.166.171.197) has joined #ceph
[18:52] <seapasulli> back up it looks like!
[18:52] <seapasulli> nope nm nm
[18:53] <seapasulli> cached
[18:53] <seapasulli> damn
[18:53] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[18:53] <visbits> ceph ftl
[18:55] <sage> ceph.com is falling over under the load. working on it.
[18:55] <visbits> the load? its an html site
[18:55] <visbits> lol
[18:55] <visbits> and setup cloudflare on that shit
[18:59] * dgurtner (~dgurtner@178.197.231.228) Quit (Ping timeout: 480 seconds)
[19:00] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[19:00] <sage> wordpress
[19:00] * sbfox (~Adium@72.2.49.50) has joined #ceph
[19:01] <visbits> yeah total cache + cloudflare and forget about it
[19:02] <visbits> sage is it a synflood? we've had a ton of that lately
[19:03] * bene (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[19:03] * B_Rake_ (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[19:04] * adeel (~adeel@fw1.ridgeway.scc-zip.net) has joined #ceph
[19:04] * haomaiwa_ (~haomaiwan@60.10.97.115) has joined #ceph
[19:04] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[19:06] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Ping timeout: 480 seconds)
[19:06] * haomaiwang (~haomaiwan@114.111.166.249) Quit (Ping timeout: 480 seconds)
[19:08] * subscope (~subscope@92-249-244-248.pool.digikabel.hu) Quit (Quit: Textual IRC Client: www.textualapp.com)
[19:11] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[19:12] * sbfox (~Adium@72.2.49.50) Quit (Quit: Leaving.)
[19:15] * fghaas (~florian@194.112.182.213) Quit (Ping timeout: 480 seconds)
[19:17] * skney (~zviratko@2WVAABIW5.tor-irc.dnsbl.oftc.net) Quit ()
[19:17] * fghaas (~florian@194.112.182.213) has joined #ceph
[19:17] * sbfox (~Adium@72.2.49.50) has joined #ceph
[19:17] * w0lfeh (~ItsCrimin@199.188.100.154) has joined #ceph
[19:23] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[19:29] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) has joined #ceph
[19:30] * sjm (~sjm@ca6.vpnunlimitedapp.com) Quit (Ping timeout: 480 seconds)
[19:34] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Ping timeout: 480 seconds)
[19:35] * alram_ (~alram@cpe-172-250-2-46.socal.res.rr.com) has joined #ceph
[19:37] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[19:38] * dmick (~dmick@2607:f298:a:607:c91b:63e9:9528:c716) has joined #ceph
[19:39] * omar_m (~omar_m@209.163.140.194) Quit (Remote host closed the connection)
[19:41] * shaunm (~shaunm@74.215.76.114) Quit (Ping timeout: 480 seconds)
[19:42] * puffy (~puffy@216.207.42.129) has joined #ceph
[19:44] * brad[] (~Brad@184-94-17-26.dedicated.allstream.net) has joined #ceph
[19:44] * sjm (~sjm@pool-173-70-76-86.nwrknj.fios.verizon.net) has joined #ceph
[19:44] * ircolle is now known as ircolle-running
[19:44] <brad[]> anyone have an alternate URL to the ceph quick start guide? :-)
[19:45] * B_Rake_ (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Remote host closed the connection)
[19:45] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[19:47] * w0lfeh (~ItsCrimin@5NZAABKWL.tor-irc.dnsbl.oftc.net) Quit ()
[19:47] * kalmisto (~cyphase@tor-exit1.arbitrary.ch) has joined #ceph
[19:51] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has joined #ceph
[19:51] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has left #ceph
[19:58] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[20:00] <espeer> hello, does anybody know what it means for OSDs to be blocking other OSDs? (I'm referring to the output of ceph osd blocked-by)
[20:01] <espeer> I end up with several OSDs getting stuck in that list (and nothing but restarting those OSDs seems to recover it) whenever I reboot too many cluster nodes all at once
[20:01] * thomnico (~thomnico@2a01:e35:8b41:120:c43a:4e84:65cd:44cf) Quit (Ping timeout: 480 seconds)
[20:01] <espeer> if I reboot my cluster, it doesn't come back without manual intervention, bouncing the OSDs I find in that list
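For reference, the generic commands for digging into that state; osd.12 is just an illustrative id, and the restart syntax assumes the sysvinit/Upstart scripts of this era:

    # which OSDs are blocking peering, and the cluster's own view of why
    ceph osd blocked-by
    ceph health detail
    ceph pg dump_stuck inactive
    # bouncing a blocking OSD, e.g. osd.12, is the workaround espeer describes
    service ceph restart osd.12        # or on Upstart: restart ceph-osd id=12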
[20:02] <georgem> brad[]: http://webcache.googleusercontent.com/search?q=cache:GhC-rhCzh7IJ:ceph.com/docs/master/start/quick-ceph-deploy/&hl=en&gl=ca&strip=1
[20:02] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[20:05] * fireD (~fireD@93-139-197-152.adsl.net.t-com.hr) has joined #ceph
[20:07] <brad[]> I was actually very briefly able to load them from the actual site on a fluke
[20:08] <brad[]> (I'm assuming the site being down is a known issue)
[20:11] * i_m (~ivan.miro@deibp9eh1--blueice1n2.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[20:12] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[20:17] * kalmisto (~cyphase@98EAAA7YM.tor-irc.dnsbl.oftc.net) Quit ()
[20:17] * cooey (~Dragonsha@tor-proxy-readme.cloudexit.eu) has joined #ceph
[20:23] * fghaas (~florian@194.112.182.213) Quit (Quit: Leaving.)
[20:23] * dalegaard-39554 (~dalegaard@vps.devrandom.dk) Quit (Quit: Changing server)
[20:24] * dalegaard-39554 (~dalegaard@vps.devrandom.dk) has joined #ceph
[20:25] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[20:25] * dalegaard-39554 (~dalegaard@vps.devrandom.dk) Quit ()
[20:25] * dalegaard-39554 (~dalegaard@vps.devrandom.dk) has joined #ceph
[20:25] * dalegaard-39554 (~dalegaard@vps.devrandom.dk) Quit ()
[20:26] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Quit: Leaving.)
[20:26] <joelm> brad[]: yea, they know :)
[20:26] * dalegaard-39554 (~dalegaard@vps.devrandom.dk) has joined #ceph
[20:27] * puffy1 (~puffy@216.207.42.144) has joined #ceph
[20:27] * fghaas (~florian@194.112.182.213) has joined #ceph
[20:28] <brad[]> joelm: Looks to be back. Back? :-)
[20:31] * puffy (~puffy@216.207.42.129) Quit (Ping timeout: 480 seconds)
[20:33] * nils_ (~nils@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[20:33] <ivs> hi folks. How do I downgrade the filestore from version 4 to 3? I want to roll back from hammer to giant.
[20:33] * fghaas1 (~florian@212095007108.public.telering.at) has joined #ceph
[20:35] * fghaas1 (~florian@212095007108.public.telering.at) Quit ()
[20:36] * omar_m (~omar_m@209.163.140.194) has joined #ceph
[20:37] * kanagaraj (~kanagaraj@27.7.32.214) Quit (Quit: Leaving)
[20:40] * fghaas (~florian@194.112.182.213) Quit (Ping timeout: 480 seconds)
[20:42] * asalor (~asalor@0001ef37.user.oftc.net) Quit (Ping timeout: 480 seconds)
[20:43] * elder (~elder@50.153.131.154) Quit (Ping timeout: 480 seconds)
[20:46] * lalatenduM (~lalatendu@122.167.132.226) has joined #ceph
[20:47] * cooey (~Dragonsha@2WVAABI08.tor-irc.dnsbl.oftc.net) Quit ()
[20:52] * elder (~elder@50.250.13.174) has joined #ceph
[20:52] * shaunm (~shaunm@cpe-65-185-127-82.cinci.res.rr.com) has joined #ceph
[20:54] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[20:54] * karnan (~karnan@106.51.232.37) has joined #ceph
[20:55] * karnan (~karnan@106.51.232.37) Quit ()
[20:55] * hellertime (~Adium@23.79.238.10) Quit (Quit: Leaving.)
[20:56] * B_Rake_ (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[21:00] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Ping timeout: 480 seconds)
[21:02] * B_Rake_ (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Remote host closed the connection)
[21:04] * fghaas (~florian@185.15.236.4) has joined #ceph
[21:06] * fattaneh (~fattaneh@31.59.48.27) has joined #ceph
[21:07] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[21:18] * bene (~ben@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[21:19] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[21:19] <devicenull> pg 0.c is stuck unclean since forever, current state active+remapped, last acting [83,59] <-- any suggestions?
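A hedged sketch of how one might inspect a PG stuck like this before attempting any repair; these are standard ceph CLI calls, with the PG id taken from devicenull's output:

    # list all PGs stuck in an unclean state
    ceph pg dump_stuck unclean

    # dump the full peering/recovery state of the PG in question
    ceph pg 0.c query

    # health detail names the stuck PGs and the OSDs in their acting sets
    ceph health detail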
[21:22] * fghaas (~florian@185.15.236.4) Quit (Quit: Leaving.)
[21:26] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[21:26] * lalatenduM (~lalatendu@122.167.132.226) Quit (Quit: Leaving)
[21:26] * daniel2_ (~daniel2_@12.164.168.117) Quit (Remote host closed the connection)
[21:27] * daniel2_ (~daniel2_@12.164.168.117) has joined #ceph
[21:30] * fattaneh (~fattaneh@31.59.48.27) has left #ceph
[21:32] * fghaas (~florian@185.15.236.4) has joined #ceph
[21:34] * georgem1 (~Adium@fwnat.oicr.on.ca) has joined #ceph
[21:34] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Read error: Connection reset by peer)
[21:42] * bene (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[21:46] * wschulze (~wschulze@38.96.12.2) has joined #ceph
[21:47] * wschulze (~wschulze@38.96.12.2) Quit ()
[21:47] * ain (~Popz@2WVAABI5G.tor-irc.dnsbl.oftc.net) has joined #ceph
[21:48] * wschulze (~wschulze@38.96.12.2) has joined #ceph
[21:50] * rendar (~I@95.234.176.127) Quit (Ping timeout: 480 seconds)
[21:51] <xcezzz1> devicenull: did you try a ceph pg repair?
[21:51] <devicenull> many times
[21:51] * puffy1 (~puffy@216.207.42.144) Quit (Quit: Leaving.)
[21:51] * ircolle-running is now known as ircolle
[21:52] <xcezzz1> are you size=3 min_size=2?
[21:52] <devicenull> no, size=2
[21:52] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) has joined #ceph
[21:52] * rendar (~I@95.234.176.127) has joined #ceph
[21:52] * Andrew (~oftc-webi@32.97.110.54) has joined #ceph
[21:53] <Andrew> hello
[21:53] * puffy (~puffy@216.207.42.144) has joined #ceph
[21:53] * jnc0289 (~oftc-webi@32.97.110.56) has joined #ceph
[21:54] <xcezzz1> any errors coming off osd.83 & osd.59 in logs?
[21:54] <espeer> if I reboot my cluster, it doesn't come back without manual intervention, bouncing the OSDs I find in that list
[21:55] <jnc0289> Hi guys, can anyone point me to a guide or forum which explains how to compile the ceph code in debug mode? I'm attempting to walk through some of the execution and I'm not sure where to begin.
[21:55] <espeer> (sorry old message, wrong window)
[21:55] <gregsfortytwo> jnc0289: probably easiest to just install the debug symbol packages
[21:56] <devicenull> nope, the osd's seem pretty happy
[21:56] * Andrew (~oftc-webi@32.97.110.54) Quit ()
[21:56] <gregsfortytwo> but if you want to build it, the README and related files in the source directory should be pretty clear
[21:56] * underscore3000 (~oftc-webi@32.97.110.54) has joined #ceph
[21:56] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) Quit (Quit: Ex-Chat)
[21:56] * underscore3000 (~oftc-webi@32.97.110.54) Quit ()
[21:57] * anon10101 (~oftc-webi@32.97.110.54) has joined #ceph
[21:57] <xcezzz1> devicenull: is every host/osd weighted the same?
[21:58] <devicenull> yep
[21:59] <jnc0289> gregsfortytwo: So when you say install debug symbol packages... do I need to do this on an active cluster?
[21:59] <gregsfortytwo> well, if you want to trace execution you'll need a running system....
[21:59] * jharley (~jharley@66.207.210.170) Quit (Ping timeout: 480 seconds)
[21:59] <gregsfortytwo> but I just meant there are packages that include debug symbols so you can use those for gdb instead of rebuilding or whatever
[22:00] <xcezzz1> devicenull: I had a weird problem like that and lowering the min_size fixed it... but I guess you are not in that boat. Have you tried a scrub/deep-scrub on the pg?
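For reference, a minimal sketch of the scrub/repair sequence being suggested here, using devicenull's PG id as the example:

    # light scrub: compares object metadata across replicas
    ceph pg scrub 0.c

    # deep scrub: also reads and checksums the object data
    ceph pg deep-scrub 0.c

    # repair: asks the primary to fix any inconsistencies the scrub found
    ceph pg repair 0.c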
[22:00] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[22:02] * mgolub (~Mikolaj@91.225.202.153) Quit (Ping timeout: 480 seconds)
[22:02] <devicenull> yea, I've tried basically every command I can think of on them
[22:02] <xcezzz1> heh.. what version?
[22:02] <jnc0289> ok, it appears that the debugging packages are the -dbg packages from the ceph repo... am I right?
[22:02] <xcezzz1> yes jnc
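A rough sketch of the route gregsfortytwo suggests, assuming a Debian/Ubuntu node running the packaged daemons; the ceph-dbg package ships the debug symbols, so gdb can resolve stack traces without rebuilding from source:

    # install debug symbols matching the installed ceph version
    sudo apt-get install ceph-dbg

    # attach to a running OSD and print a backtrace of all threads
    # (pgrep -n picks the newest ceph-osd if several run on the host)
    sudo gdb -batch -ex 'thread apply all bt' -p $(pgrep -n ceph-osd)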
[22:02] * jharley (~jharley@66.207.210.170) has joined #ceph
[22:02] <devicenull> 0.87.1
[22:04] <xcezzz1> this references an older version... but it is possibly something to try: http://tracker.ceph.com/issues/3747 he mentions that marking the primary osd out and back in kick-started recovery of the stuck active+remapped pgs
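A sketch of the out/in kick described in that tracker issue, assuming osd.83 is the primary for the stuck PG (the first entry in its acting set) and that the cluster can tolerate a brief rebalance:

    # mark the primary out so the PG re-peers on other OSDs
    ceph osd out 83

    # watch the PG states until peering settles
    ceph -w

    # bring the OSD back in; ideally the PG goes active+clean afterwards
    ceph osd in 83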
[22:06] * brad[] (~Brad@184-94-17-26.dedicated.allstream.net) Quit (Remote host closed the connection)
[22:07] <devicenull> yea, no real change
[22:08] * joshd (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) has joined #ceph
[22:17] * ain (~Popz@2WVAABI5G.tor-irc.dnsbl.oftc.net) Quit ()
[22:17] * AluAlu (~Maza@5NZAABK61.tor-irc.dnsbl.oftc.net) has joined #ceph
[22:23] * jks (~jks@178.155.151.121) Quit (Read error: No route to host)
[22:23] * jks (~jks@178.155.151.121) has joined #ceph
[22:24] * alram_ (~alram@cpe-172-250-2-46.socal.res.rr.com) Quit (Quit: leaving)
[22:24] * jharley (~jharley@66.207.210.170) Quit (Quit: jharley)
[22:28] * sbfox (~Adium@72.2.49.50) Quit (Ping timeout: 480 seconds)
[22:30] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[22:30] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[22:30] * sbfox (~Adium@72.2.49.50) has joined #ceph
[22:35] * asalor (~asalor@2a00:1028:96c1:4f6a:204:e2ff:fea1:64e6) has joined #ceph
[22:38] * ganders (~root@190.2.42.21) Quit (Quit: WeeChat 0.4.2)
[22:40] * bandrus1 (~brian@128.sub-70-211-79.myvzw.com) has joined #ceph
[22:41] * diegows (~diegows@190.190.5.238) has joined #ceph
[22:43] <jnc0289> Ok, so I've installed the ceph-dbg package... How do I get the debug output from the package?
[22:45] * bandrus (~brian@128.sub-70-211-79.myvzw.com) Quit (Ping timeout: 480 seconds)
[22:47] * AluAlu (~Maza@5NZAABK61.tor-irc.dnsbl.oftc.net) Quit ()
[22:51] * sbfox (~Adium@72.2.49.50) Quit (Ping timeout: 480 seconds)
[22:52] * shaunm (~shaunm@cpe-65-185-127-82.cinci.res.rr.com) Quit (Ping timeout: 480 seconds)
[22:54] <xcezzz1> jnc0289: umm, it's assumed you know how to do that yourself... that's the whole point of the debug symbols: to trace it using gdb or tools YOU already know how to use...
[22:54] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Quit: Exeunt dneary)
[22:55] <xcezzz1> if all you wanted was more debug/verbose messages from ceph you could have done that without dbg symbols... what are you actually trying to do?
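For completeness, the more-verbose-logging route mentioned above needs no -dbg packages at all; a sketch using injectargs, with the subsystems and levels chosen here purely as examples:

    # raise debug logging at runtime on a specific OSD
    ceph tell osd.0 injectargs '--debug-osd 20 --debug-ms 1'

    # or set it persistently in ceph.conf under [osd] and restart the daemon:
    #   debug osd = 20
    #   debug ms = 1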
[22:56] * fghaas (~florian@185.15.236.4) Quit (Quit: Leaving.)
[22:57] * bandrus1 (~brian@128.sub-70-211-79.myvzw.com) Quit (Quit: Leaving.)
[22:57] * bandrus (~brian@128.sub-70-211-79.myvzw.com) has joined #ceph
[22:58] * fghaas (~florian@185.15.236.4) has joined #ceph
[22:58] <jnc0289> I was just curious how to step through the code, which I have found virtually no documentation on. I'm using the developer guide: https://github.com/ceph/ceph/blob/master/doc/dev/quick_guide.rst... but it isn't giving me enough behind-the-scenes detail
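A hedged sketch of the workflow that developer guide describes for a hammer-era (autotools-based) tree; check the guide and README for the exact dependency and flag list: build the source, start a throwaway local cluster with vstart.sh, then attach a debugger to one of the locally started daemons:

    # build the tree (see the README for prerequisites)
    ./autogen.sh && ./configure && make -j4

    # start a disposable local cluster: -d debug logging, -n new cluster, -x cephx
    cd src && ./vstart.sh -d -n -x

    # attach gdb to one of the freshly started OSDs and step through it
    gdb -p $(pgrep -n ceph-osd)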
[23:00] * dyasny (~dyasny@173.231.115.58) Quit (Ping timeout: 480 seconds)
[23:01] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[23:01] * alexxy[home] (~alexxy@2001:470:1f14:106::2) Quit (Read error: Connection reset by peer)
[23:05] * fghaas (~florian@185.15.236.4) Quit (Quit: Leaving.)
[23:05] * fghaas (~florian@185.15.236.4) has joined #ceph
[23:08] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:11] * georgem1 (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[23:12] * jwilkins (~jwilkins@2601:9:4580:f4c:ea2a:eaff:fe08:3f1d) Quit (Remote host closed the connection)
[23:13] * fghaas (~florian@185.15.236.4) Quit (Quit: Leaving.)
[23:14] * fghaas (~florian@185.15.236.4) has joined #ceph
[23:16] * omar_m (~omar_m@209.163.140.194) Quit ()
[23:17] * jnc0289 (~oftc-webi@32.97.110.56) Quit (Quit: Page closed)
[23:18] * anon10101 (~oftc-webi@32.97.110.54) Quit (Remote host closed the connection)
[23:23] * haomaiwa_ (~haomaiwan@60.10.97.115) Quit (Remote host closed the connection)
[23:23] * linuxkidd (~linuxkidd@vpngac.ccur.com) Quit (Quit: Leaving)
[23:23] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) has joined #ceph
[23:26] * xarses (~andreww@12.164.168.117) has joined #ceph
[23:26] * Da_Pineapple (~Silentkil@destiny.enn.lu) has joined #ceph
[23:26] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[23:28] * tupper_ (~tcole@rtp-isp-nat1.cisco.com) Quit (Ping timeout: 480 seconds)
[23:34] * ircolle is now known as ircolle-afk
[23:37] <xcezzz1> well, unless you have a cluster set up there's nothing to step through...
[23:37] <xcezzz1> err, well, I guess their dev tooling does set up a fake cluster for you
[23:38] * jwilkins (~jwilkins@2601:9:4580:f4c:ea2a:eaff:fe08:3f1d) has joined #ceph
[23:39] * oro (~oro@sccc-66-78-236-243.smartcity.com) has joined #ceph
[23:48] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) Quit (Ping timeout: 480 seconds)
[23:49] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[23:49] * vata (~vata@208.88.110.46) Quit (Quit: Leaving.)
[23:50] * joshd1 (~joshd@68-119-140-18.dhcp.ahvl.nc.charter.com) has joined #ceph
[23:50] * joshd1 (~joshd@68-119-140-18.dhcp.ahvl.nc.charter.com) Quit ()
[23:52] * MVenesio (~MVenesio@186.136.59.165) has joined #ceph
[23:52] * rendar (~I@95.234.176.127) Quit ()
[23:53] <MVenesio> Hi guys, I'm integrating ceph with openstack nova-compute, and the issue is that libvirt is not picking up the cephx configuration
[23:54] <MVenesio> the secret is correctly set up on the compute nodes, and ceph.conf has also been checked
[23:55] <MVenesio> do you know if there is a known issue about this?
[23:55] <lurbs> Are you following: http://ceph.com/docs/master/rbd/rbd-openstack/
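In case it helps, a hedged sketch of the libvirt secret setup from that page, assuming the nova/cinder cephx user is client.cinder and using a placeholder UUID (generate your own with uuidgen and reuse it as rbd_secret_uuid in nova.conf/cinder.conf):

    # secret.xml: libvirt secret definition for the cephx key
    cat > secret.xml <<'EOF'
    <secret ephemeral='no' private='no'>
      <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
      <usage type='ceph'>
        <name>client.cinder secret</name>
      </usage>
    </secret>
    EOF

    # register the secret with libvirt and load the cephx key into it
    sudo virsh secret-define --file secret.xml
    sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 \
         --base64 $(ceph auth get-key client.cinder)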
[23:56] * Da_Pineapple (~Silentkil@3OZAAA4CR.tor-irc.dnsbl.oftc.net) Quit ()
[23:57] * bene (~ben@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[23:57] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.