#ceph IRC Log

IRC Log for 2016-03-04

Timestamps are in GMT/BST.

[0:00] * victordenisov (~vdenisov@64.124.158.100) has joined #ceph
[0:00] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[0:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[0:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[0:02] * rendar (~I@host190-104-dynamic.61-82-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[0:03] * sese_ (~Bwana@4MJAAC8SE.tor-irc.dnsbl.oftc.net) Quit ()
[0:03] * ira_ (~ira@24.34.255.34) Quit (Quit: Leaving)
[0:03] * anadrom (~Miho@91.109.29.120) has joined #ceph
[0:08] * wCPO (~Kristian@188.228.31.139) Quit (Ping timeout: 480 seconds)
[0:13] * Kingrat (~shiny@2605:a000:161a:c0f6:650d:2f43:49:6b9) Quit (Ping timeout: 480 seconds)
[0:19] * haplo37 (~haplo37@199.91.185.156) Quit (Remote host closed the connection)
[0:19] * wCPO (~Kristian@188.228.31.139) has joined #ceph
[0:19] * wCPO (~Kristian@188.228.31.139) Quit (Max SendQ exceeded)
[0:19] * wCPO (~Kristian@188.228.31.139) has joined #ceph
[0:21] * vicente (~~vicente@1-161-187-73.dynamic.hinet.net) has joined #ceph
[0:21] * vicente (~~vicente@1-161-187-73.dynamic.hinet.net) Quit ()
[0:22] * Kingrat (~shiny@2605:a000:161a:c0f6:61f1:c3dc:f0fb:7690) has joined #ceph
[0:22] * davidzlap1 (~Adium@2605:e000:1313:8003:4181:8797:2f81:ef2a) Quit (Read error: No route to host)
[0:23] * rahulgoyal (~rahulgoya@117.198.213.135) has joined #ceph
[0:24] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[0:26] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) has joined #ceph
[0:33] * anadrom (~Miho@4MJAAC8TF.tor-irc.dnsbl.oftc.net) Quit ()
[0:33] * masteroman (~ivan@93-139-228-110.adsl.net.t-com.hr) Quit (Quit: WeeChat 1.4)
[0:34] <motk> leseb: don't mind me; just whining
[0:35] <motk> I'm snerdish btw
[0:35] <motk> still think the osds need a fqdn set too
[0:35] <motk> ceph osd tree only shows me two hosts instead of four for example
[0:36] * BrianA (~BrianA@fw-rw.shutterfly.com) has left #ceph
[0:36] * rahulgoyal (~rahulgoya@117.198.213.135) Quit (Ping timeout: 480 seconds)
[0:37] * krypto (~krypto@65.115.222.52) Quit (Ping timeout: 480 seconds)
[0:37] * krypto (~krypto@G68-121-13-234.sbcis.sbc.com) has joined #ceph
[0:38] * mrapple (~lmg@178-175-128-50.ip.as43289.net) has joined #ceph
[0:39] * kefu (~kefu@58.20.51.71) has joined #ceph
[0:40] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[0:40] * kefu (~kefu@58.20.51.71) Quit ()
[0:41] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[0:41] * BrianA (~BrianA@fw-rw.shutterfly.com) has joined #ceph
[0:44] * ircolle (~Adium@c-71-229-136-109.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[0:45] * cathode (~cathode@50-198-166-81-static.hfc.comcastbusiness.net) Quit (Quit: Leaving)
[0:51] * quinoa (~quinoa@24-148-81-106.c3-0.grn-ubr2.chi-grn.il.cable.rcn.com) has joined #ceph
[0:54] * davidzlap1 (~Adium@2605:e000:1313:8003:4181:8797:2f81:ef2a) has joined #ceph
[0:55] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[0:57] * yanzheng (~zhyan@182.139.204.152) has joined #ceph
[1:08] * mrapple (~lmg@4MJAAC8UY.tor-irc.dnsbl.oftc.net) Quit ()
[1:08] * capitalthree (~mason@185.36.100.145) has joined #ceph
[1:08] * BrianA (~BrianA@fw-rw.shutterfly.com) Quit (Ping timeout: 480 seconds)
[1:09] * togdon (~togdon@74.121.28.6) Quit (Quit: Bye-Bye.)
[1:13] * BrianA (~BrianA@fw-rw.shutterfly.com) has joined #ceph
[1:14] * BrianA (~BrianA@fw-rw.shutterfly.com) Quit (Read error: Connection reset by peer)
[1:14] <motk> ok
[1:14] <motk> should ceph osds get confused if two nodes have identical short names but different fqdns?
[1:15] <lurbs> That seems like a recipe for trouble.
[1:18] <motk> not my choice
[1:18] <motk> it sure sounds like a common enough scenario
[1:18] <motk> I can't change the shortnames
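By default each ceph-osd registers itself under a host bucket named after the node's short hostname, so two machines that share a short name collapse into one host in "ceph osd tree". A minimal sketch of a workaround, assuming the "osd crush location" option of this era (check your release's docs) and hypothetical host and OSD names:

    # ceph.conf on each OSD node
    [osd.12]
    osd crush location = host=nodeA-site1 root=default

    # or move an OSD's bucket once by hand:
    ceph osd crush create-or-move osd.12 1.0 host=nodeA-site1 root=default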
[1:21] * Kioob1 (~Kioob@2a01:e34:ec0a:c0f0:5216:9d2b:4d14:2b8e) has joined #ceph
[1:22] * bliu (~liub@203.192.156.9) has joined #ceph
[1:22] <Kioob1> Hi
[1:22] * Kioob1 is now known as Kioob`
[1:23] <Kioob`> is there a reason why a "ceph pg repair XXX" seems to be ignored ?
[1:23] <Kioob`> (no PG switch in "repair" state)
[1:24] <Kioob`> I have "91 active+clean+inconsistent" on an EC pool
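Repairs are scheduled through the same machinery as scrubs (bounded by osd_max_scrubs), so a "ceph pg repair" can sit queued for a while before the PG shows a repair/scrubbing state; that is an assumption about the likely cause here, not something confirmed in the thread. A minimal way to find and queue the inconsistent PGs:

    ceph health detail | grep inconsistent   # lists the inconsistent PG ids
    ceph pg repair <pgid>                    # queue a repair for one PG
    ceph -w                                  # watch for it to enter the scrubbing/repair state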
[1:25] * victordenisov (~vdenisov@64.124.158.100) Quit (Quit: victordenisov)
[1:27] * VictorDenisov (~vdenisov@64.124.158.100) has joined #ceph
[1:30] * VictorDenisov (~vdenisov@64.124.158.100) Quit ()
[1:31] * VictorDenisov (~vdenisov@64.124.158.100) has joined #ceph
[1:32] * rotbeard (~redbeard@ppp-115-87-78-25.revip4.asianet.co.th) has joined #ceph
[1:32] <motk> Kioob`: wait a while?
[1:32] * krypto (~krypto@G68-121-13-234.sbcis.sbc.com) Quit (Read error: Connection reset by peer)
[1:37] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[1:38] * capitalthree (~mason@84ZAAC5AW.tor-irc.dnsbl.oftc.net) Quit ()
[1:38] * Blueraven (~hifi@lumumba.torservers.net) has joined #ceph
[1:40] * angdraug (~angdraug@64.124.158.100) Quit (Quit: Leaving)
[1:45] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) Quit (Quit: leaving)
[1:48] * sbfox (~Adium@vancouver.xmatters.com) Quit (Quit: Leaving.)
[1:49] * VictorDenisov (~vdenisov@64.124.158.100) Quit (Quit: Lost terminal)
[1:49] * naoto (~naotok@103.23.4.77) Quit (Quit: Leaving...)
[1:51] * VictorDenisov (~chatzilla@64.124.158.100) has joined #ceph
[1:51] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[1:57] * dvanders_ (~dvanders@pb-d-128-141-3-210.cern.ch) has joined #ceph
[1:57] * dvanders (~dvanders@2001:1458:202:225::101:124a) Quit (Read error: Connection reset by peer)
[1:58] * quinoa (~quinoa@24-148-81-106.c3-0.grn-ubr2.chi-grn.il.cable.rcn.com) Quit (Remote host closed the connection)
[2:02] * oms101 (~oms101@p20030057EA051100C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[2:08] * Blueraven (~hifi@7V7AAC25H.tor-irc.dnsbl.oftc.net) Quit ()
[2:08] * xarses (~xarses@64.124.158.100) Quit (Ping timeout: 480 seconds)
[2:09] * tim_s007 (~tim_s007@2001:67c:12a0::bc1c:f72e) Quit (Ping timeout: 480 seconds)
[2:11] * oms101 (~oms101@p20030057EA069300C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[2:18] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[2:19] <Kioob`> motk: in fact I've already been waiting for some hours :)
[2:21] <motk> checked your crush map?
[2:23] * tim_s007 (~tim_s007@2001:67c:12a0::bc1c:f72e) has joined #ceph
[2:24] * davidzlap (~Adium@2605:e000:1313:8003:4181:8797:2f81:ef2a) has joined #ceph
[2:25] * davidzlap1 (~Adium@2605:e000:1313:8003:4181:8797:2f81:ef2a) Quit (Read error: Connection reset by peer)
[2:26] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[2:29] * naoto (~naotok@27.131.11.254) has joined #ceph
[2:32] * wCPO (~Kristian@188.228.31.139) Quit (Ping timeout: 480 seconds)
[2:35] * jtriley (~jtriley@c-73-249-255-187.hsd1.ma.comcast.net) has joined #ceph
[2:37] * jclm1 (~jclm@ip68-224-244-110.lv.lv.cox.net) has joined #ceph
[2:37] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[2:38] * Yopi (~Kurimus@torsrva.snydernet.net) has joined #ceph
[2:43] * jclm (~jclm@ip68-224-244-110.lv.lv.cox.net) Quit (Ping timeout: 480 seconds)
[2:52] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[2:54] * NetMon (~Administr@cpe-76-184-150-12.tx.res.rr.com) Quit (Quit: Going offline, see ya! (www.adiirc.com))
[2:55] <Kioob`> motk: the crush map, for "inconsistent" errors ?
[2:56] * doppelgrau (~doppelgra@p54894244.dip0.t-ipconnect.de) has left #ceph
[2:58] * Mika_c (~quassel@122.146.93.152) has joined #ceph
[2:58] * bjornar_ (~bjornar@ti0099a430-0908.bb.online.no) Quit (Ping timeout: 480 seconds)
[3:08] * Yopi (~Kurimus@4MJAAC8Y1.tor-irc.dnsbl.oftc.net) Quit ()
[3:08] * Jyron (~visored@digi00810.digicube.fr) has joined #ceph
[3:09] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[3:09] * scuttlemonkey is now known as scuttle|afk
[3:14] * scuttle|afk is now known as scuttlemonkey
[3:15] * zhaochao (~zhaochao@124.202.191.135) has joined #ceph
[3:15] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[3:29] * jowilkin (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) Quit (Ping timeout: 480 seconds)
[3:34] * jowilkin (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) has joined #ceph
[3:38] * Jyron (~visored@84ZAAC5E3.tor-irc.dnsbl.oftc.net) Quit ()
[3:39] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[3:40] * LeaChim (~LeaChim@host86-171-90-242.range86-171.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[3:46] * houming (~houming-w@103.10.86.234) has joined #ceph
[3:47] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Ping timeout: 480 seconds)
[3:52] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) has joined #ceph
[3:52] * houming (~houming-w@103.10.86.234) Quit (Quit: Have a good day:))
[3:53] * aj__ (~aj@x4db1ac40.dyn.telefonica.de) has joined #ceph
[3:53] <motk> yeah --test
[3:56] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[4:00] * derjohn_mobi (~aj@x590e2f10.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[4:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[4:01] * quinoa (~quinoa@24-148-81-106.c3-0.grn-ubr2.chi-grn.il.cable.rcn.com) has joined #ceph
[4:08] * delcake (~skney@chomsky.torservers.net) has joined #ceph
[4:09] * VictorDenisov (~chatzilla@64.124.158.100) Quit (Quit: ChatZilla 0.9.92 [Firefox 41.0.1/20151001175956])
[4:12] * davidzlap1 (~Adium@2605:e000:1313:8003:4181:8797:2f81:ef2a) has joined #ceph
[4:12] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[4:13] * quinoa (~quinoa@24-148-81-106.c3-0.grn-ubr2.chi-grn.il.cable.rcn.com) Quit (Remote host closed the connection)
[4:13] * davidzlap (~Adium@2605:e000:1313:8003:4181:8797:2f81:ef2a) Quit (Read error: Connection reset by peer)
[4:19] * dneary (~dneary@pool-96-237-170-97.bstnma.fios.verizon.net) has joined #ceph
[4:20] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Ping timeout: 480 seconds)
[4:21] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[4:23] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[4:26] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) has joined #ceph
[4:27] * ronrib (~boswortr@45.32.242.135) has joined #ceph
[4:31] * davidzlap (~Adium@2605:e000:1313:8003:4181:8797:2f81:ef2a) has joined #ceph
[4:31] * davidzlap1 (~Adium@2605:e000:1313:8003:4181:8797:2f81:ef2a) Quit (Read error: Connection reset by peer)
[4:38] * delcake (~skney@84ZAAC5GO.tor-irc.dnsbl.oftc.net) Quit ()
[4:38] * AG_Scott (~Miho@h-133-122.a2.corp.bahnhof.no) has joined #ceph
[4:39] * dmick1 (~dmick@206.169.83.146) has joined #ceph
[4:42] * shohn1 (~shohn@dslb-146-060-207-108.146.060.pools.vodafone-ip.de) has joined #ceph
[4:44] * shohn (~shohn@dslb-188-102-036-162.188.102.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[4:48] * krypto (~krypto@65.115.222.52) has joined #ceph
[4:49] * dmick1 is now known as dmick
[4:51] * RameshN (~rnachimu@121.244.87.117) has joined #ceph
[4:53] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[5:04] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) has joined #ceph
[5:06] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[5:07] * RameshN (~rnachimu@121.244.87.117) Quit (Ping timeout: 480 seconds)
[5:08] * AG_Scott (~Miho@4MJAAC82Y.tor-irc.dnsbl.oftc.net) Quit ()
[5:13] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[5:16] * RameshN (~rnachimu@121.244.87.117) has joined #ceph
[5:28] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[5:34] * swami1 (~swami@49.44.57.245) has joined #ceph
[5:35] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[5:38] * Miho (~Qiasfah@176.10.99.202) has joined #ceph
[5:40] * Vacuum__ (~Vacuum@i59F79344.versanet.de) has joined #ceph
[5:42] * swami1 (~swami@49.44.57.245) Quit (Ping timeout: 480 seconds)
[5:44] * davidzlap1 (~Adium@2605:e000:1313:8003:4181:8797:2f81:ef2a) has joined #ceph
[5:44] * davidzlap (~Adium@2605:e000:1313:8003:4181:8797:2f81:ef2a) Quit (Read error: Connection reset by peer)
[5:45] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[5:47] * Vacuum_ (~Vacuum@88.130.207.211) Quit (Ping timeout: 480 seconds)
[5:53] * swami1 (~swami@49.32.0.104) has joined #ceph
[5:55] * rotbeard (~redbeard@ppp-115-87-78-25.revip4.asianet.co.th) Quit (Ping timeout: 480 seconds)
[5:56] * jtriley (~jtriley@c-73-249-255-187.hsd1.ma.comcast.net) Quit (Ping timeout: 480 seconds)
[6:02] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[6:08] * Miho (~Qiasfah@84ZAAC5IP.tor-irc.dnsbl.oftc.net) Quit ()
[6:18] * m8x (~user@182.150.27.112) Quit (Remote host closed the connection)
[6:19] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[6:28] * dneary (~dneary@pool-96-237-170-97.bstnma.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[6:30] * RameshN (~rnachimu@121.244.87.117) Quit (Ping timeout: 480 seconds)
[6:34] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Quit: Leaving)
[6:38] * kalmisto (~Harryhy@tor-amici-exit.tritn.com) has joined #ceph
[6:38] * rdas (~rdas@121.244.87.116) has joined #ceph
[6:39] * RameshN (~rnachimu@121.244.87.117) has joined #ceph
[6:39] * m8x (~user@182.150.27.112) has joined #ceph
[6:50] * davidzlap1 (~Adium@2605:e000:1313:8003:4181:8797:2f81:ef2a) Quit (Quit: Leaving.)
[6:55] * m8x (~user@182.150.27.112) Quit (Remote host closed the connection)
[6:59] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[7:08] * kalmisto (~Harryhy@84ZAAC5KF.tor-irc.dnsbl.oftc.net) Quit ()
[7:08] * GuntherDW1 (~lmg@84ZAAC5K8.tor-irc.dnsbl.oftc.net) has joined #ceph
[7:12] * cooldharma06 (~chatzilla@14.139.180.40) has joined #ceph
[7:15] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[7:20] * krypto (~krypto@65.115.222.52) Quit (Ping timeout: 480 seconds)
[7:30] * jclm (~jclm@ip68-224-244-110.lv.lv.cox.net) has joined #ceph
[7:35] * jclm (~jclm@ip68-224-244-110.lv.lv.cox.net) Quit ()
[7:37] * jclm1 (~jclm@ip68-224-244-110.lv.lv.cox.net) Quit (Ping timeout: 480 seconds)
[7:38] * GuntherDW1 (~lmg@84ZAAC5K8.tor-irc.dnsbl.oftc.net) Quit ()
[7:41] * rotbeard (~redbeard@ppp-115-87-78-25.revip4.asianet.co.th) has joined #ceph
[7:46] * rahulgoyal (~rahulgoya@59.91.209.207) has joined #ceph
[7:53] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[8:00] <Be-El> hi
[8:01] * rahulgoyal (~rahulgoya@59.91.209.207) Quit (Quit: Leaving)
[8:04] * Kioob` (~Kioob@2a01:e34:ec0a:c0f0:5216:9d2b:4d14:2b8e) Quit (Quit: Leaving.)
[8:08] * rhonabwy (~Arfed@192.42.115.101) has joined #ceph
[8:09] * enax (~enax@94-21-125-222.pool.digikabel.hu) has joined #ceph
[8:09] * enax (~enax@94-21-125-222.pool.digikabel.hu) has left #ceph
[8:09] * RogierDikkes (~Adium@a83-162-177-114.adsl.xs4all.nl) has joined #ceph
[8:13] * abhishekvrshny (~abhishekv@180.179.116.54) has joined #ceph
[8:13] * RogierDikkes (~Adium@a83-162-177-114.adsl.xs4all.nl) Quit ()
[8:24] * linjan_ (~linjan@176.195.151.213) Quit (Ping timeout: 480 seconds)
[8:26] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:26] * Geph (~Geoffrey@41.77.153.99) has joined #ceph
[8:28] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) has joined #ceph
[8:30] * rendar (~I@host128-182-dynamic.12-79-r.retail.telecomitalia.it) has joined #ceph
[8:34] * aj__ (~aj@x4db1ac40.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[8:34] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[8:36] * swami2 (~swami@49.44.57.245) has joined #ceph
[8:38] * rhonabwy (~Arfed@4MJAAC888.tor-irc.dnsbl.oftc.net) Quit ()
[8:38] * xENO_ (~sese_@94.242.228.108) has joined #ceph
[8:42] * swami1 (~swami@49.32.0.104) Quit (Ping timeout: 480 seconds)
[8:44] * adun153 (~ljtirazon@112.198.90.251) has joined #ceph
[8:45] * madkiss2 (~madkiss@2001:6f8:12c3:f00f:5077:8912:9880:1892) has joined #ceph
[8:54] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[8:57] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Quit: Konversation terminated!)
[8:58] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[9:00] * m8x (~user@182.150.27.112) has joined #ceph
[9:01] * nardial (~ls@ipservice-092-217-059-182.092.217.pools.vodafone-ip.de) has joined #ceph
[9:04] * bniver (~bniver@pool-173-48-58-27.bstnma.fios.verizon.net) Quit (Remote host closed the connection)
[9:05] * heddima (~Hamid@stc.hosting.alterway.fr) has joined #ceph
[9:08] * xENO_ (~sese_@7V7AAC3DO.tor-irc.dnsbl.oftc.net) Quit ()
[9:08] * Atomizer (~tZ@torproxy02.31173.se) has joined #ceph
[9:11] <heddima> Hello guys !
[9:12] * aj__ (~aj@2001:6f8:1337:0:21f2:18ec:57f2:be73) has joined #ceph
[9:12] <heddima> I would be very grateful if someone could tell me how many monitors I should deploy to manage 52 OSDs (OSD = disk of 4TB)
[9:12] <IcePic> 3 or 5
[9:12] * Drankis (~martin@89.111.13.198) has joined #ceph
[9:12] <IcePic> an odd number larger than 1
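The odd-number advice comes from quorum arithmetic: the monitors need a strict majority, floor(N/2)+1, to form a quorum, and the count is independent of how many OSDs there are. A quick worked table:

    mons  quorum  failures tolerated
      3      2          1
      5      3          2
      7      4          3

An even count adds load without adding failure tolerance (4 mons still only tolerate 1 down), which is why 3 or 5 is the usual answer for a cluster of this size.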
[9:14] * analbeard (~shw@support.memset.com) has joined #ceph
[9:15] <rotbeard> is there btw any limit for MONs? for a new cluster I'm thinking about 11+ MON nodes
[9:16] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Quit: Konversation terminated!)
[9:16] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[9:17] * pam (~pam@193.106.183.1) has joined #ceph
[9:21] * dgurtner (~dgurtner@178.197.231.34) has joined #ceph
[9:25] <mistur> hello
[9:25] <mistur> can anyone help me with this? : http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-March/008027.html
[9:25] * karnan (~karnan@121.244.87.117) has joined #ceph
[9:26] <IcePic> rotbeard: I have no idea, but the general rule of "diminishing returns" starts to apply after 5 I guess.
[9:26] <IcePic> the amount of awesomeness when going from 9 MONs to 11 will be rather low.
[9:30] <Be-El> mistur: is the symlink to the journal correct? I've seen this message before when the kernel decided to renumber the disks and the symlinks became incorrect (e.g. not based on partition uuids)
[9:31] <Be-El> rotbeard: keep in mind that every mon-related operation, like changes in PGs, has to be acknowledged by 11 mons, thus introducing more latency
[9:32] <lincolnb> /dev/disk/by-partuuid is definitely your friend.
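A minimal sketch of the check being suggested, with the osd id and uuid as placeholders: the journal symlink in the OSD's data directory should go through the stable /dev/disk/by-partuuid name rather than a raw /dev/sdX name that can change across reboots.

    ls -l /var/lib/ceph/osd/ceph-3/journal
    # fragile: journal -> /dev/sdl1                     (device letters can move)
    # stable:  journal -> /dev/disk/by-partuuid/<uuid>

    # if it points at a raw device, repoint it while the osd is stopped:
    ln -sf /dev/disk/by-partuuid/<journal-partuuid> /var/lib/ceph/osd/ceph-3/journal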
[9:32] * pabluk__ is now known as pabluk_
[9:32] * hyperbaba (~hyperbaba@private.neobee.net) has joined #ceph
[9:32] <mistur> lrwxrwxrwx 1 ceph ceph 9 févr. 1 15:53 journal -> /dev/sdl1
[9:33] <mistur> Be-El: seem to be good
[9:34] <Be-El> mistur: does ceph-disk on that host list the journal partitions and their association to the osd partitions correctly?
[9:35] <rotbeard> IcePic, Be-El thanks for pointing that out. I'm just thinking about the amount of redundancy for a cluster with about 4k OSDs in 300 OSD nodes. 5 MONs sounded a bit too few so far :p
[9:35] <mistur> Be-El: http://pastebin.com/m0r7cWZF
[9:37] <Be-El> mistur: there's no /dev/sda (os disk maybe?) and 10 osd partitions, but only 9 journals.
[9:38] <Be-El> did you miss the last line for /dev/sdm5?
[9:38] * Atomizer (~tZ@84ZAAC5O0.tor-irc.dnsbl.oftc.net) Quit ()
[9:38] * basicxman (~LorenXo@192.42.115.101) has joined #ceph
[9:39] <mistur> Be-El: oops sorry : http://pastebin.com/zdUtBw4z
[9:40] * RogierDikkes (~Adium@145.100.62.36) has joined #ceph
[9:40] * dugravot61 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) has joined #ceph
[9:44] <Be-El> mistur: the list seems to be ok.
[9:44] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[9:44] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[9:46] <mistur> Be-El: yup
[9:47] <mistur> Be-El: I can't explain why the fsid on the journal partition reads as 0000...
[9:49] <Be-El> mistur: did you restart the osd after installation?
[9:49] <mistur> yup
[9:49] <mistur> and I rebooted the server yesterday
[9:50] <mistur> to see what happens
[9:51] <Mosibi> rotbeard: the OSDs communicate with every monitor, so having that many MONs would generate a lot of traffic on your network :)
[9:51] <Be-El> mistur: are all osd on that host affected?
[9:53] <mistur> yes 10 down
[9:53] <mistur> Be-El: and on another server 8 out of 10 are down
[9:53] <rotbeard> Mosibi, lots of traffic in terms of latency or bandwidth? since we have 2x25G per OSD node and aren't using SSDs as OSDs, higher bandwidth consumption would be ok
[9:56] <Be-El> mistur: i don't know the ansible playbook for ceph, but I had similar problems before with drive ids being changed after reboot
[9:57] * jowilkin (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) Quit (Ping timeout: 480 seconds)
[9:57] * heddima (~Hamid@stc.hosting.alterway.fr) Quit (Ping timeout: 480 seconds)
[9:58] <mistur> Be-El: this happened before the reboot
[9:58] <mistur> I had the same behavior on my other cluster on 2 osds
[9:58] <mistur> I can easily reinstall it from scratch
[9:58] <Mosibi> rotbeard: number of connections. 'my' ceph cluster contains 1900 OSDs and we have 5 MONs. We see a lot of heartbeat traffic between OSDs and from OSDs to MONs.
[9:58] <mistur> but I'd like to know what happened
[9:59] <Mosibi> rotbeard: not a problem now, but when you mentioned 11+ MONs, that triggered me :)
[10:00] <rotbeard> Mosibi, wow, ok. did you have trouble with running just 5 MONs for nearly 2k OSDs?
[10:00] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[10:00] <Be-El> mistur: the partition uuids for the journal partitions are not correct. maybe the udev scripts involved in starting the osds mixed up some partitions
[10:00] <Mosibi> rotbeard: not at all, it's running smoothly and we aren't planning to add more MONs
[10:01] <mistur> Be-El: maybe
[10:01] <Be-El> mistur: but I don't run an infernalis cluster yet, so I can only have a look at the hammer scripts
[10:01] <mistur> ok
[10:01] <rotbeard> Mosibi, that will help me a lot thanks
[10:01] <Mosibi> rotbeard: yw
[10:01] <mistur> Be-El: I think I gonna reinstall the cluster from scratch
[10:01] <Be-El> mistur: you can try to fix the part uuid (see /usr/sbin/ceph-disk for the correct one) and restart the node
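One way to read "fix the part uuid", under the assumption that the hand-made journal partitions are missing the GPT typecode that ceph-disk's udev rules key on: set the partition type GUID back to the ceph journal typecode. The GUID below is the journal typecode as defined in ceph-disk, but verify it against /usr/sbin/ceph-disk on your release; disk and partition numbers are placeholders.

    sgdisk --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdl   # partition 1 of a journal disk
    partprobe /dev/sdl                                                  # re-read the partition table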
[10:01] <mistur> I'll update the ceph-ansible playbook first, it seems to have had lots of updates in the last 2 weeks
[10:03] <rotbeard> Mosibi, may I ask you for some hints for running about 2k OSDs? I guess our hardware setup will be fine (2x 6C Xeons, 128G RAM, 14x 4T WD disks + 4 intel dc s3710 journals per node) but I don't know whether I have to modify things in the crush map e.g. to achieve better or smoother operation
[10:05] <Be-El> mistur: one last thing
[10:05] <Mosibi> rotbeard: our setup, (110x) Dell R730XD with 128 GB and (about 70x) Dell R730XD with 256GB and 3 SSDs (for journal)
[10:06] <Mosibi> rotbeard: we run those machines as compute nodes for our openstack cloud and(!) as ceph osds
[10:06] <Be-El> mistur: can you run 'ceph-osd -i 0 --get-journal-uuid --osd-journal <one journal partition>' on the affected host? it should print the uuid of the osd associated with the journal
[10:06] * jowilkin (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) has joined #ceph
[10:07] * ade (~abradshaw@194.32.188.1) has joined #ceph
[10:07] <Mosibi> rotbeard: the only thing i have done with the crushmap is define our failure domains
[10:07] <rotbeard> Mosibi, interesting. I also had the idea to use bigger nodes for both OSDs and compute, but being afraid of that, I decided to go for separate nodes :p
[10:07] <Be-El> mistur: there should be a symlink in /dev/disk/by-partuuid/ pointing to the osd partition
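A small sketch of running that check across all journal partitions at once; the device names are placeholders for the host in question, and an all-zero uuid indicates a journal header that was never initialized for an osd (as Be-El notes below).

    # hypothetical journal partitions on two SSDs
    for p in /dev/sdl[1-5] /dev/sdm[1-5]; do
        echo -n "$p -> "
        ceph-osd -i 0 --get-journal-uuid --osd-journal "$p"
    done
    # a healthy journal prints an osd fsid that also shows up as a
    # /dev/disk/by-partuuid/<uuid> symlink pointing at that osd's data partition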
[10:07] <Mosibi> rotbeard: so no tuning
[10:07] <rotbeard> Mosibi, cool thanks.
[10:07] <Mosibi> rotbeard: if i could choose, i would separate those functions...
[10:07] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[10:07] <Mosibi> rotbeard: but i am not the one that buys the HW ;)
[10:08] * basicxman (~LorenXo@84ZAAC5PS.tor-irc.dnsbl.oftc.net) Quit ()
[10:09] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[10:09] * shohn (~shohn@dslb-146-060-207-108.146.060.pools.vodafone-ip.de) has joined #ceph
[10:09] * heddima (~Hamid@stc.hosting.alterway.fr) has joined #ceph
[10:12] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:2455:8e0d:541:904b) has joined #ceph
[10:12] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[10:13] * shohn1 (~shohn@dslb-146-060-207-108.146.060.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[10:15] * linjan_ (~linjan@86.62.112.22) has joined #ceph
[10:18] <mistur> Be-El: root@iccluster014:/var/log/ceph# ceph-osd -i 0 --get-journal-uuid --osd-journal /dev/sdl1
[10:18] <mistur> 00000000-0000-0000-0000-000000000000
[10:18] <mistur> root@iccluster014:/var/log/ceph# ls /dev/disk/by-partuuid/ -l | grep sdl1
[10:18] <mistur> lrwxrwxrwx 1 root root 10 mars 3 14:08 032e6322-7e43-413e-9d3f-9ef6ca913b23 -> ../../sdl1
[10:19] * heddima (~Hamid@stc.hosting.alterway.fr) Quit (Ping timeout: 480 seconds)
[10:22] <rotbeard> Mosibi, i see ;-)
[10:24] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[10:24] <Be-El> mistur: ok, that partition has probably never been initialized as journal for an osd (thus all zeroes)
[10:26] <mistur> Be-El: the point is that at the beginning, it was ok
[10:27] <mistur> Be-El: after the first play of ansible I had 100 OSDs up
[10:27] <Be-El> mistur: maybe some osds got mixed up and used the same journal. If you do not put anything on the osds it might appear to be running fine
[10:27] <mistur> make sense
[10:28] <Be-El> mistur: you can check the osd partition uuid for the other journals, too
[10:28] <mistur> same
[10:28] <Be-El> mistur: ceph-disk actually uses this command to associate journals with osds during activation (at least in hammer)
[10:28] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[10:29] <mistur> Be-El: I see what might have happened, I just created a new pool yesterday and checked the tree just after
[10:29] <mistur> and in the log the error appeared at the same time
[10:29] <mistur> so maybe the initialisation went wrong
[10:29] <Be-El> to make a long story short: there was probably a problem during cluster setup, and you might want to start from scratch again
[10:29] * TMM (~hp@185.5.122.2) has joined #ceph
[10:29] * RogierDikkes (~Adium@145.100.62.36) Quit (Ping timeout: 480 seconds)
[10:29] <mistur> yup
[10:29] <mistur> since it's for test only
[10:29] <mistur> I can reinstall it on demand
[10:30] <Be-El> mistur: according to your mail there's a second cluster that is already in use. you might want to check the journals on that cluster, too
[10:30] <mistur> Be-El: on the other one we already have 24TB of data in
[10:30] <mistur> and everything looks good
[10:30] <mistur> all osds are in use with data
[10:31] <Be-El> the ceph-osd --get-journal-uuid check should also work with a running osd
[10:31] <mistur> ok
[10:32] * heddima (~Hamid@stc.hosting.alterway.fr) has joined #ceph
[10:32] <mistur> another point related to the ceph-ansible playbook is that I give partitions directly to ceph-ansible
[10:32] <mistur> created previously by a script
[10:32] * thomnico (~thomnico@2a01:e35:8b41:120:18d:7181:cb83:dec7) has joined #ceph
[10:32] <mistur> instead of giving a device and letting ceph-ansible create the partition
[10:33] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[10:33] <mistur> so the problem might appear at that point
[10:34] <Be-El> it depends on how the ansible playbook passes this information to the underlying ceph commands
[10:34] <Be-El> i had a hard time trying to use lvm volumes as osds
[10:35] <Be-El> i failed due to ceph-disk not recognizing it as a volume and trying to create a partition table on the volume
[10:35] * LeaChim (~LeaChim@host86-171-90-242.range86-171.btcentralplus.com) has joined #ceph
[10:35] <Be-El> but that might have been fixed in the last month...it was firefly or hammer the last time I tried
[10:36] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) has joined #ceph
[10:37] <mistur> Be-El: leseb told me that giving a partition instead of a device is not recommended
[10:38] * dugravot61 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[10:38] * Szernex (~Tonux@exit-01c.noisetor.net) has joined #ceph
[10:38] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[10:41] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[10:47] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[10:50] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[10:50] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[10:51] * i_m (~ivan.miro@deibp9eh1--blueice3n4.emea.ibm.com) has joined #ceph
[10:53] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[10:53] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[10:55] * Mika_c (~quassel@122.146.93.152) Quit (Remote host closed the connection)
[10:55] * adun153 (~ljtirazon@112.198.90.251) Quit (Read error: Connection reset by peer)
[10:56] <Be-El> (and now for something completely different...) are there GUIs or webinterfaces for managing radosgw users?
[10:56] <analbeard> morning guys, quick question regarding relative/absolute sizing of cache tiers
[10:58] <mistur> Be-El: it's possible with inkscope
[10:58] <analbeard> obviously it's dependent on your use case, but would i be correct in assuming that you want to set one type of sizing up and ignore the other? i.e. if I wanted to work with relative sizing, should i set the absolute sizing options very high so they're effectively irrelevant?
[10:58] <mistur> Be-El: https://github.com/inkscope/inkscope
[10:59] <Be-El> mistur: thx, I'll have a look at it
[11:00] <Be-El> analbeard: afaik there is no relative sizing
[11:00] * Foloex (~foloex@81-67-102-161.rev.numericable.fr) has joined #ceph
[11:00] <Foloex> hello world
[11:00] <analbeard> Be-El - what about cache_target_dirty_ratio, cache_target_dirty_high_ratio, cache_target_full_ratio?
[11:00] <Be-El> analbeard: you need to define the maximum size / maximum number of objects. there are other relative settings like flush/evict threshold
[11:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[11:01] <Be-El> analbeard: they refer to the absolute values
[11:01] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Ping timeout: 480 seconds)
[11:01] <Be-El> analbeard: there's been a mail to the mailing list today explaining how the values interact
[11:01] <analbeard> ah fantastic, i must've missed that. thanks!
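As Be-El says, the cache tier ratios are taken against the absolute caps, so flushing and eviction only trigger once target_max_bytes and/or target_max_objects are set. A minimal sketch, with the pool name and numbers as placeholders:

    ceph osd pool set hotpool target_max_bytes 500000000000    # absolute cap in bytes
    ceph osd pool set hotpool target_max_objects 1000000       # absolute cap in objects
    ceph osd pool set hotpool cache_target_dirty_ratio 0.4     # start flushing dirty objects at 40% of the cap
    ceph osd pool set hotpool cache_target_full_ratio 0.8      # start evicting clean objects at 80% of the cap

Whichever cap is reached first drives the ratios, so setting the absolute options "very high" effectively disables flushing and eviction rather than making the sizing relative.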
[11:01] <Foloex> I'm trying to bootstrap a ceph cluster using the ceph/daemon docker container, and I'm stuck at setting up the OSDs. Is there a simple step-by-step tutorial somewhere ?
[11:02] <analbeard> Be-El -just reading now, looks perfect. thanks!
[11:04] <boolman> Foloex: http://docs.ceph.com/docs/hammer/rados/operations/add-or-rm-osds/
[11:05] <Foloex> boolman: doesn't the docker container do most of that stuff ?
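A rough sketch of the ceph/daemon bootstrap as its README described it around this time; the IP, network and device are placeholders, and the exact scenario names and environment variables vary between image versions, so treat this as an outline rather than the container's documented interface.

    docker run -d --net=host --name mon \
      -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph \
      -e MON_IP=192.168.0.10 -e CEPH_PUBLIC_NETWORK=192.168.0.0/24 \
      ceph/daemon mon

    docker run -d --net=host --privileged --name osd-sdb \
      -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph -v /dev:/dev \
      -e OSD_DEVICE=/dev/sdb \
      ceph/daemon osd

The container prepares and activates the OSD itself; the manual add-or-rm-osds procedure boolman linked is still useful for understanding what those steps do.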
[11:08] * Szernex (~Tonux@4MJAAC9EK.tor-irc.dnsbl.oftc.net) Quit ()
[11:08] * tZ (~Zombiekil@edwardsnowden1.torservers.net) has joined #ceph
[11:09] <Foloex> I find the ceph documentation overwhelming
[11:09] <Foloex> which is very strange to say
[11:10] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) has joined #ceph
[11:12] <IcePic> I guess that is because ceph is a beast with many movable parts.
[11:13] <Foloex> true, I don't think I ever set something this complex up
[11:14] <analbeard> i've found the best way to learn is just to get stuck in and break things
[11:14] <analbeard> our best learning experiences have come from where we've misconfigured something and then needed to resolve it
[11:14] <Foloex> so far so good then
[11:14] <Foloex> :)
[11:16] <Foloex> is there some kind of cheat sheet about bootstrapping a ceph cluster
[11:16] <Foloex> like the main steps
[11:17] <IcePic> "good judgement comes from experience. Experience comes from bad judgement"
[11:17] <IcePic> set up a cluster or 3 and use them for lab stuff for a while
[11:18] <IcePic> and there are VMs I think you can DL which run the smallest possible clusters on one or a few vms
[11:18] <Foloex> IcePic: for a while ? what happened ?
[11:19] <IcePic> I did not mean "I set up ..", I meant "you should set up.."
[11:19] <Foloex> ah ok
[11:19] <Foloex> that's what I'm doing actually
[11:19] <IcePic> make a mental note to remind you that the first X clusters you rig should not run someone's heart-lung machine on the first attempt. =)
[11:20] <Foloex> it's for private use
[11:20] <Foloex> I mean personal use
[11:20] <Foloex> so no worries, I won't care about the data it holds
[11:20] <Foloex> if I can get it running ...
[11:25] * cholcombe (~chris@2001:67c:1562:8007::aac:40f1) has joined #ceph
[11:32] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[11:34] <Foloex> I see there is a pool called rbd already present, is it mandatory to create another one ?
[11:37] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[11:37] * pam (~pam@193.106.183.1) Quit (Ping timeout: 480 seconds)
[11:38] * tZ (~Zombiekil@84ZAAC5SJ.tor-irc.dnsbl.oftc.net) Quit ()
[11:38] * blip2 (~Tenk@178-175-128-50.ip.as43289.net) has joined #ceph
[11:38] * ira (~ira@24.34.255.34) has joined #ceph
[11:42] <boolman> what is the upgrade procedure for a minor version, e.g. 9.2.0 to 9.2.1? is it always mon > osd > mds ?
[11:45] <Foloex> is there a page in the documentation about the meaning of the ceph status states ? for example active+undersized+degraded and active+remapped
[11:45] <Foloex> there is also "stuck unclean"
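The PG state names are listed in the docs Foloex finds later in the log (the placement-group page); for poking at a live cluster, these commands show which PGs are behind each warning:

    ceph status
    ceph health detail            # names the PGs behind each health warning
    ceph pg dump_stuck unclean    # stuck PGs with their acting OSD sets
    ceph pg <pgid> query          # full state detail for a single PG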
[11:45] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Ping timeout: 480 seconds)
[11:47] * mortn (~mortn@217-215-219-69-no229.tbcn.telia.com) has joined #ceph
[11:48] * jluis (~joao@8.184.114.89.rev.vodafone.pt) has joined #ceph
[11:48] * ChanServ sets mode +o jluis
[11:53] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[11:55] * joao (~joao@8.184.114.89.rev.vodafone.pt) Quit (Ping timeout: 480 seconds)
[12:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[12:07] <Foloex> ok, I found what was wrong in my cluster, both OSDs were on the same node
[12:08] * blip2 (~Tenk@7V7AAC3IB.tor-irc.dnsbl.oftc.net) Quit ()
[12:08] * adept256 (~Unforgive@192.87.28.82) has joined #ceph
[12:08] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[12:15] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Quit: Leaving)
[12:15] <IcePic> Foloex: such a guide would be neat, perhaps there is one, but I didn't find it yet. One thing I do know is that when I began with ceph, most of the states sounded really scary, and they weren't, so it can be hard in the beginning to know if "degraded" is a disaster or not
[12:16] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Ping timeout: 480 seconds)
[12:20] <Foloex> IcePic: I think I saw a page in the official documentation listing all the trouble codes and their meanings
[12:20] <Foloex> but I couldn't find it again
[12:21] * dgurtner (~dgurtner@178.197.231.34) Quit (Ping timeout: 480 seconds)
[12:22] * rakeshgm (~rakesh@121.244.87.117) Quit (Remote host closed the connection)
[12:22] * garphy is now known as garphy`aw
[12:23] * zhaochao (~zhaochao@124.202.191.135) Quit (Quit: ChatZilla 0.9.92 [Iceweasel 44.0.2/20160214092551])
[12:23] <Foloex> I'm very confused now, I have three OSDs, two are disk-based living on the same machine, another one is directory-based on another machine
[12:24] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[12:24] <Foloex> I killed them all and it says 1/2 in osds are down
[12:25] * jowilkin (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) Quit (Ping timeout: 480 seconds)
[12:26] <Foloex> it should be something like 3/3 in osds are down
[12:28] <Foloex> or at least 2/2 are down
[12:31] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[12:34] * jowilkin (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) has joined #ceph
[12:38] * adept256 (~Unforgive@76GAACZOT.tor-irc.dnsbl.oftc.net) Quit ()
[12:38] * Silentspy (~fauxhawk@dsl-olubrasgw1-54fb48-110.dhcp.inet.fi) has joined #ceph
[12:39] <IcePic> you can ask ceph what osds it knows about
[12:40] <Foloex> osd tree shows me 2 osd
[12:40] <IcePic> then one of your 3 never called in
[12:40] * pam (~pam@193.106.183.1) has joined #ceph
[12:40] <Foloex> that's strange osd.1 seems to be moving between the two nodes
[12:41] <Foloex> ok, I get it now, I have two osds with the same id -_-
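When two daemons claim the same id, only one registration exists in the cluster map; a common cleanup, sketched here with a placeholder id and init commands that depend on the distro, is to strip the stray registration and re-create the second OSD so it gets a fresh id:

    ceph osd out osd.1
    stop ceph-osd id=1            # or: systemctl stop ceph-osd@1, depending on the init system
    ceph osd crush remove osd.1
    ceph auth del osd.1
    ceph osd rm osd.1
    # then re-run osd creation on the second node; it will be assigned a new id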
[12:45] * wyang (~wyang@116.216.30.3) has joined #ceph
[12:50] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[12:50] * ade (~abradshaw@194.32.188.1) Quit (Quit: Too sexy for his shirt)
[12:50] * rotbeard (~redbeard@ppp-115-87-78-25.revip4.asianet.co.th) Quit (Quit: Leaving)
[12:51] * wyang (~wyang@116.216.30.3) Quit (Quit: This computer has gone to sleep)
[12:51] <Foloex> how do I tell ceph it can use the two osds on the same node for replication, in a raid-like fashion
[12:52] <boolman> Foloex: read up on crushmaps
[12:52] <Foloex> boolman: thanks
[12:53] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[12:53] * rotbeard (~redbeard@ppp-115-87-78-25.revip4.asianet.co.th) has joined #ceph
[12:54] * wyang (~wyang@114.111.166.41) has joined #ceph
[12:54] * shyu (~shyu@222.130.152.9) has joined #ceph
[12:54] <boolman> Foloex: eg this url is useful http://docs.ceph.com/docs/master/rados/operations/crush-map/
[12:55] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[12:55] <boolman> and you might want to try your crushmap before you apply it, see how to test your crushmap http://dachary.org/?p=3189
[12:55] <boolman> to test if your ruleset works with your replication factor or not, etc
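A minimal sketch of the edit-and-test loop boolman is describing, for the specific case of allowing both replicas on one host; the file names are arbitrary and the rule number may differ:

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # in crush.txt, change the replicated rule's
    #   step chooseleaf firstn 0 type host
    # to
    #   step chooseleaf firstn 0 type osd
    crushtool -c crush.txt -o crush.new
    crushtool -i crush.new --test --rule 0 --num-rep 2 --show-statistics   # check for bad mappings first
    ceph osd setcrushmap -i crush.new

For a cluster created from scratch, "osd crush chooseleaf type = 0" in ceph.conf has the same effect without hand-editing the map.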
[12:56] * wyang (~wyang@114.111.166.41) Quit ()
[12:56] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[12:58] <swami2> Hi--
[12:58] * RameshN (~rnachimu@121.244.87.117) Quit (Ping timeout: 480 seconds)
[12:58] * dgurtner (~dgurtner@178.197.231.40) has joined #ceph
[12:58] <swami2> confused with "status ceph-mon-all" and "status ceph-mon id=<id>"
[12:59] <swami2> ceph-mon-all is working...
[12:59] <swami2> and ceph-mon id=<id> says unknown instance
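On Ubuntu's upstart jobs the id has to be the monitor's name as it appears under /var/lib/ceph/mon/ (usually the short hostname), not "mon.<id>" and not the fsid; "unknown instance" is what upstart reports when no instance with that id is running, including when the id is misspelled. A sketch with a hypothetical hostname:

    ls /var/lib/ceph/mon/        # e.g. ceph-node1  ->  the id is "node1"
    status ceph-mon id=node1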
[13:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[13:02] <Foloex> IcePic: I finally found the page with the error code explained: http://docs.ceph.com/docs/master/dev/placement-group/
[13:08] * Silentspy (~fauxhawk@4MJAAC9II.tor-irc.dnsbl.oftc.net) Quit ()
[13:08] * n0x1d (~Diablothe@65.19.167.130) has joined #ceph
[13:08] * rakeshgm (~rakesh@121.244.87.117) Quit (Remote host closed the connection)
[13:09] * wyang (~wyang@114.111.166.41) has joined #ceph
[13:10] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[13:11] * nardial (~ls@ipservice-092-217-059-182.092.217.pools.vodafone-ip.de) Quit (Remote host closed the connection)
[13:17] * pam (~pam@193.106.183.1) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[13:18] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Ping timeout: 480 seconds)
[13:20] * wCPO (~Kristian@188.228.77.148) has joined #ceph
[13:23] * wyang (~wyang@114.111.166.41) Quit (Quit: This computer has gone to sleep)
[13:28] * wyang (~wyang@116.216.30.3) has joined #ceph
[13:32] <brians> someone has a big vendor worried
[13:32] <brians> http://www.cloudscaling.com/blog/cloud-computing/killing-the-storage-unicorn-purpose-built-scaleio-spanks-multi-purpose-ceph-on-performance/
[13:38] * n0x1d (~Diablothe@7V7AAC3KM.tor-irc.dnsbl.oftc.net) Quit ()
[13:38] * poller (~luigiman@chulak.enn.lu) has joined #ceph
[13:43] <darkfader> brians: some bored marketing department i guess
[13:44] <darkfader> $here they got dozens of highend hitachis and a nice bunch of ceph systems. neither needs to give way
[13:44] <darkfader> just gets more in total
[13:45] <IcePic> one of the local emc representatives here called me to ask what this ceph thing was, I guess because isilon sales were going down due to ceph
[13:45] <darkfader> lol
[13:46] * darkfader hates dealing with people like that who just work in the area because they get money
[13:46] <rotbeard> +1 darkfader
[13:46] <darkfader> so what did you tell him?
[13:47] <IcePic> it is somewhat similar to what happened for a while in the late 90s, everyone tried to sell you scsi drives expensively and suddenly there appeared ide-raid boxes which presented themselves as scsi on the backplane.
[13:48] <rotbeard> the emc sales guys who visited us last year laughed and talked about using ceph like using a toy car instead of going with porsches or ferraris :/ it is a pity that people act like that
[13:48] <IcePic> pricewise, you could get 10x more space (and lots more spindles) for the same price as scsi, and move to raid10 or something equally "wasteful", just because drives were super cheap.
[13:48] <IcePic> they also made each sub-drive hotswap with any-disk-of-same-or-larger-size so you could put any silly disk in as replacement.
[13:49] <IcePic> did a lot for non-HPC storage for a while. I don't think vendors were quite prepared for someone replacing a single disk with 10 inexpensive ones and getting more space/perf/resilience out of it.
[13:50] <IcePic> one of those setups had 2 drives fail, then two more drives died during rebuild from hot-spares in the same enclosure. Didn't lose data.
[13:50] <IcePic> the it-boss didn't mind the end result of that weekend being "tech guy worked some overtime, and 4 IDE disks"
[13:51] <IcePic> instead of "huge data loss, restore times from hell and database users upset" and so on
[13:52] <IcePic> so to me, this smells of the same. Something that is so "silly" it can run on an rpi, and still run on 100+ nodes with 1-12 disks each.
[13:52] <IcePic> that must scare some parts of the business.
[13:53] <IcePic> oh, and I told him isilons still had a place, if you primarily value serving nfs/smb, ie "need to jack huge amounts of data to an old traditional system"
[13:55] <darkfader> "if you "..." actually have users"
[13:55] <darkfader> i think i said it here before
[13:56] <darkfader> but if anyone ever makes a 3.5" sized e3 with 8gb, 10ge in front and 4g fc in back
[13:56] <darkfader> that would do very bad things to old storage
[13:57] <darkfader> (each shelf running ceph on the inside...)
[14:00] * Foloex (~foloex@81-67-102-161.rev.numericable.fr) Quit (Quit: thank you and goodbye)
[14:00] <Tetard> latency ...
[14:02] <darkfader> Tetard: yeah... that's the main difference between the ceph env here and the 'proper' storage, but then they figured out there's data that goes well with ceph and other that does not
[14:02] <darkfader> since then there's no more problem
[14:02] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[14:03] <darkfader> iirc the worst thing was some ldap server that just needs many many sync iops to two locations
[14:03] <darkfader> not fake, not partial flush, just deliver. but so they stayed on a different system, no one had to die
[14:03] <Tetard> or VM usage with, say, DB applications
[14:06] <IcePic> remote storage for VMs is a sham, in my book. If you rig NFS you notice really poor performance, until
[14:06] <IcePic> you figure out that all the other nfs boxes will lie to the VM env for perf.
[14:06] <darkfader> IcePic: netapp doesn't need to lie
[14:06] <darkfader> shitty linux boxes w/ async mounts are a different story
[14:07] <darkfader> i trust cephfs a lot more than anything like that
[14:07] <IcePic> I just think that a lot of the "shitty" things got to set the bar.
[14:07] <darkfader> :)
[14:08] <darkfader> yeah
[14:08] * GabrielDias (~GabrielDi@srx.h1host.ru) has joined #ceph
[14:08] * georgem (~Adium@24.114.64.7) has joined #ceph
[14:08] * poller (~luigiman@4MJAAC9KY.tor-irc.dnsbl.oftc.net) Quit ()
[14:08] <GabrielDias> Hi!
[14:08] <IcePic> most of the time I want stuff to end up on disk when I call sync(), not to get lied to by something that uses battery and caps to make good on someone else's promise that it was on disk when it wasn't
[14:08] * Aethis (~dontron@176.10.99.207) has joined #ceph
[14:08] <GabrielDias> I have a new issue.
[14:09] <GabrielDias> 3 OSD of 4 are in down status after reboot
[14:09] <GabrielDias> CentOS 7
[14:09] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[14:09] <GabrielDias> ceph version 9.2.1
[14:09] * pabluk_ is now known as pabluk__
[14:09] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[14:10] <GabrielDias> I thought, that it was problem with fstab
[14:11] <GabrielDias> Like this http://tracker.ceph.com/issues/5194
[14:11] * madkiss2 (~madkiss@2001:6f8:12c3:f00f:5077:8912:9880:1892) Quit (Quit: Leaving.)
[14:12] <GabrielDias> What is the problem ?
[14:12] <GabrielDias> 1 OSD is up, and 3 - down.
[14:14] * rwheeler (~rwheeler@pool-173-48-195-215.bstnma.fios.verizon.net) Quit (Quit: Leaving)
[14:15] * jowilkin (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) Quit (Ping timeout: 480 seconds)
[14:17] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Ping timeout: 480 seconds)
[14:20] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[14:23] * kashyap (~kashyap@121.244.87.116) has joined #ceph
[14:23] * karnan (~karnan@106.51.139.170) has joined #ceph
[14:23] * natarej (~natarej@101.188.147.129) has joined #ceph
[14:24] * karnan (~karnan@106.51.139.170) Quit ()
[14:24] <kashyap> Hi folks, I'm setting up Ceph (on a _single_ node) for the first time: Before using `ceph-deploy`, I ensured (a) Can SSH into the host without password; (b) hostname is resolvable just fine
[14:24] * karnan (~karnan@106.51.139.170) has joined #ceph
[14:24] * jowilkin (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) has joined #ceph
[14:24] <kashyap> Now, I'm doing this inside a VM.
[14:24] <kashyap> It doesn't have a 'domainname'
[14:25] <kashyap> So, is a domain name mandatory for Ceph? (My completely uneducated guess: No)
[14:26] * Miouge (~Miouge@94.136.92.20) Quit (Quit: Miouge)
[14:26] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[14:27] <GabrielDias> You can use canonical names
[14:27] <GabrielDias> Edit /etc/hosts
[14:27] <GabrielDias> Like 10.10.10.20 Hos1
[14:27] <GabrielDias> 10.10.10.30 Hos2
[14:27] <GabrielDias> and use ssh Hos1 or ssh Hos2
[14:28] * Aethis (~dontron@76GAACZQ9.tor-irc.dnsbl.oftc.net) Quit (Ping timeout: 480 seconds)
[14:29] * georgem (~Adium@24.114.64.7) Quit (Quit: Leaving.)
[14:30] * Miouge (~Miouge@94.136.92.20) Quit ()
[14:32] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[14:32] * quinoa (~quinoa@24-148-81-106.c3-0.grn-ubr2.chi-grn.il.cable.rcn.com) has joined #ceph
[14:33] <kashyap> GabrielDias: Yes, I already did that.
[14:33] <kashyap> Thanks for confirming
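For reference, a sketch of the usual ceph-deploy flow for a throwaway single-node cluster; the hostname and device are placeholders, and on a single node the defaults also need loosening or PGs never reach active+clean:

    ceph-deploy new ceph-vm
    # add to the generated ceph.conf for a one-node lab:
    #   osd pool default size = 2
    #   osd crush chooseleaf type = 0
    ceph-deploy install ceph-vm
    ceph-deploy mon create-initial
    ceph-deploy osd create ceph-vm:sdb    # prepare + activate one disk-backed OSD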
[14:36] * bniver (~bniver@nat-pool-bos-u.redhat.com) has joined #ceph
[14:37] * Racpatel (~Racpatel@2601:87:3:3601::8a6f) has joined #ceph
[14:39] * Racpatel (~Racpatel@2601:87:3:3601::8a6f) Quit (Remote host closed the connection)
[14:39] * Racpatel (~Racpatel@2601:87:3:3601::8a6f) has joined #ceph
[14:43] * wyang (~wyang@116.216.30.3) Quit (Quit: This computer has gone to sleep)
[14:48] * wyang (~wyang@116.216.30.3) has joined #ceph
[14:50] * rahulgoyal (~rahulgoya@117.198.212.121) has joined #ceph
[14:55] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[14:57] * gregmark (~Adium@68.87.42.115) has joined #ceph
[14:58] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Quit: Ex-Chat)
[15:00] * QuantumBeep (~Xylios@hessel3.torservers.net) has joined #ceph
[15:01] * stefan0 (~stefan0@amti.com.br) has joined #ceph
[15:02] <stefan0> Hi all! First time here! I'm designing a Ceph deployment, may I get some opinions here?
[15:03] * shohn1 (~shohn@dslb-146-060-207-108.146.060.pools.vodafone-ip.de) has joined #ceph
[15:04] * shaunm (~shaunm@74.83.215.100) has joined #ceph
[15:09] * shohn (~shohn@dslb-146-060-207-108.146.060.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[15:10] * wyang (~wyang@116.216.30.3) Quit (Quit: This computer has gone to sleep)
[15:11] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[15:13] * b0e (~aledermue@213.95.25.82) has joined #ceph
[15:13] * wCPO (~Kristian@188.228.77.148) Quit (Ping timeout: 480 seconds)
[15:13] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[15:14] * wyang (~wyang@114.111.166.41) has joined #ceph
[15:16] * linjan_ (~linjan@86.62.112.22) Quit (Ping timeout: 480 seconds)
[15:17] * mhack (~mhack@66-168-117-78.dhcp.oxfr.ma.charter.com) has joined #ceph
[15:17] * danieagle (~Daniel@177.68.229.84) has joined #ceph
[15:18] <bjozet> stefan0: ask and you shall see :)
[15:18] <stefan0> =], thanks..
[15:18] <bjozet> eg. don't ask to ask, ask!
[15:19] <stefan0> I'm wondering about buying a Ceph environment with 2 Dell R730s (per node: 2 x 1.6 TB Dell NVMe, 16 x 6 TB SATA, 10 Gb NICs and so on)
[15:20] <stefan0> That would be the initial investment (in approx. 6 months we'd buy the 3rd node)
[15:20] * Kurt (~Adium@2001:628:1:5:2590:feb0:fd55:d367) Quit (Ping timeout: 480 seconds)
[15:20] <T1w> go with 3 nodes from the start
[15:20] * wyang (~wyang@114.111.166.41) Quit (Quit: This computer has gone to sleep)
[15:20] <stefan0> Would that be 'alright', or should the initial deployment have at least 3 nodes
[15:21] <T1w> if your one node with the MON on it goes away you lose everything
[15:21] <stefan0> at the very beginning
[15:21] <T1w> .. and the number of MONs should always be an uneven number
[15:21] <stefan0> I have other hosts to run MON..
[15:22] <stefan0> in fact, I can have 3 MONs isolated from these other two nodes (which would be dedicated to the OSDs)
[15:22] <T1w> also, be careful with 16 OSDs per node - it's quite high and your journals could suffer due to too many iops
[15:22] <T1w> that many nodes would also result in a very uneven distribution of data once you add your 3rd node in some months
[15:22] * venkat (~chatzilla@61.1.228.5) has joined #ceph
[15:23] <T1w> sorry, that many OSDs even
[15:23] <T1w> hm.. afk
[15:23] <rotbeard> stefan0, for some testing it will be ok, but if you use that setup in production, keep in mind that it'll create a lot of stress on the remaining OSDs if 1 node dies, plus the lack of redundancy
[15:24] * wyang (~wyang@114.111.166.41) has joined #ceph
[15:25] <stefan0> would it be better, instead of 16 OSDs on 2 nodes, to run 3 nodes with, like, 12 OSDs each?
[15:25] <stefan0> (i guess i can predict the answer)
[15:25] <stefan0> my point is to use the NVMe for caching
[15:26] * heddima (~Hamid@stc.hosting.alterway.fr) Quit (Ping timeout: 480 seconds)
[15:26] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:26] <stefan0> Is it possible to run 2 nodes with NVMe (1.6 TB each), 12 OSDs on each host, and a 3rd host with no NVMe?
[15:26] <rotbeard> stefan0, so you are planning to run the OSDs without a SSD backed journal?
[15:27] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[15:27] <rotbeard> stefan0, in the case of OSD journals the size won't matter but the durability and overall performance will
[15:27] <stefan0> that would be my other question.. what about creating a partition on the NVMe (single card) to run the journal and using the rest of the space for caching?
[15:28] <stefan0> I can set up 2 NVMe + 12 OSDs and use one slice of the NVMe for the journal, and on the 3rd node run 2 SSDs for journaling..
[15:28] <rotbeard> I would expect that you'd slow everything down, i.e. not really get the benefit from your SSDs, when running some OSD journals on SSD and some not
[15:29] * dyasny (~dyasny@cable-192.222.131.135.electronicbox.net) Quit (Ping timeout: 480 seconds)
[15:29] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[15:30] * QuantumBeep (~Xylios@76GAACZSS.tor-irc.dnsbl.oftc.net) Quit ()
[15:30] * TheDoudou_a (~totalworm@185.63.252.44) has joined #ceph
[15:30] <stefan0> rotbeard, i see.. and what about running 2 hosts with journal + SSD cache on NVMe and the 3rd host with SSDs for the journal?
[15:30] * dyasny (~dyasny@cable-192.222.131.135.electronicbox.net) has joined #ceph
[15:30] * jtriley (~jtriley@140.247.242.54) has joined #ceph
[15:31] * i_m (~ivan.miro@deibp9eh1--blueice3n4.emea.ibm.com) Quit (Quit: Leaving.)
[15:31] <rotbeard> stefan0, I haven't worked with NVMe so far, but if they are suitable for SSD journals, of course
[15:33] <rotbeard> stefan0, in general you could run a ceph cluster without separating the journals, sometimes it depends on your use-case. for cold storage e.g. it could be ok to not use SSDs for journals
[15:33] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[15:33] <stefan0> so, may I set 3 hosts with SATA HDDs for OSDs (all hosts with SSD journals (2 of these using NVMe for journaling)) and create a caching pool using only the NVMes of two of these hosts?
[15:34] <rotbeard> stefan0, also the ratio between OSD:SSD is a thing you have to keep in mind. I really don't know the NVMe, but you need to make sure that one of them is suitable to run 8 journals on it
[15:35] <stefan0> rotbeard, why 8 journals?
[15:36] <stefan0> well, doesn't matter in fact..
[15:36] <stefan0> but would it be OK to design a hot-storage tier with 2 nodes pushing data to a cold-storage tier with 3 nodes? (kind of a dirty question)
[15:36] <rotbeard> stefan0, if you have 16 OSDs per node and 2 SSDs per node, it comes down to 1 SSD needing to hold 8 journals
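A rough way to sanity-check that OSD:journal ratio is sequential write bandwidth, since every write to a filestore OSD also passes through its journal. The numbers here are ballpark assumptions, not measurements:

    8 SATA OSDs x ~120 MB/s sustained writes  ≈  ~1 GB/s through one journal device

A single SATA/SAS SSD saturates well below that, while a decent NVMe card is roughly in that range, which is why the per-device journal count (and the drive's endurance rating) matters more than its capacity.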
[15:37] <stefan0> hmmm. gotcha, I was wondering about creating a RAID-1 between these SSDs for journaling..
[15:38] <rotbeard> stefan0, unfortunately I don't have that much experience (especially with caching pools and stuff) to point out the right things in that discussion.
[15:38] <stefan0> maybe it would be better to run 8 or 12 HDDs per host
[15:38] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Quit: Ex-Chat)
[15:38] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[15:38] <rotbeard> but in general, from my experience over the last 2 years: be careful running a cluster at that small a size. if you lose 1 node, things can go wrong very quickly
[15:39] <rotbeard> I also started with a 3 node cluster and one day the controller of 1 node died; everything became unresponsive very quickly.
[15:39] <rotbeard> stefan0, imho the less the better
[15:40] <stefan0> yes, thanks for sharing your experience.. I guess I'll craft a system with 3 initial nodes instead of only two
[15:40] <rotbeard> our current setup is to run 14 SATA OSDs on 4 intel DC S3710 SSDs + having 2x6 Core CPUs + 128G RAM in one node
[15:40] <stefan0> the major issue is the NVMe pricing (5k usd per card)
[15:40] * i_m (~ivan.miro@deibp9eh1--blueice3n4.emea.ibm.com) has joined #ceph
[15:40] <stefan0> so I could only order two..
[15:40] <rotbeard> stefan0, 3 would be better than 2 but for using it in production you really need to take care ;-)
[15:41] <rotbeard> stefan0, those intel dc s7xxx SSDs will do fine and I guess a lot of folks here are using them as journal SSDs
[15:42] <stefan0> and about hot-storage, do you think it is a good idea to use that too?
[15:42] * icey (~Chris@0001bbad.user.oftc.net) Quit (Remote host closed the connection)
[15:43] * vicente (~~vicente@1-161-187-73.dynamic.hinet.net) has joined #ceph
[15:43] * icey (~Chris@pool-74-109-7-163.phlapa.fios.verizon.net) has joined #ceph
[15:43] * pam (~pam@193.106.183.1) has joined #ceph
[15:46] * jowilkin (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) Quit (Ping timeout: 480 seconds)
[15:47] * dvanders (~dvanders@2001:1458:202:225::101:124a) has joined #ceph
[15:48] * dvanders_ (~dvanders@pb-d-128-141-3-210.cern.ch) Quit (Read error: Connection reset by peer)
[15:49] * heddima (~Hamid@stc.hosting.alterway.fr) has joined #ceph
[15:50] * pabluk__ is now known as pabluk_
[15:50] * dvanders (~dvanders@2001:1458:202:225::101:124a) Quit (Remote host closed the connection)
[15:53] <rotbeard> stefan0, you mean those intel SSDs as OSDs? of course, yes
[15:53] * venkat (~chatzilla@61.1.228.5) Quit (Ping timeout: 480 seconds)
[15:54] <stefan0> rotbeard, yes!
[15:54] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[15:54] * jowilkin (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) has joined #ceph
[15:55] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[15:55] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[15:58] * kawa2014 (~kawa@94.162.2.137) has joined #ceph
[16:00] * TheDoudou_a (~totalworm@84ZAAC50C.tor-irc.dnsbl.oftc.net) Quit ()
[16:00] * Spessu (~uhtr5r@anonymous6.sec.nl) has joined #ceph
[16:00] * swami2 (~swami@49.44.57.245) Quit (Quit: Leaving.)
[16:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[16:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[16:01] * kefu (~kefu@114.92.107.250) has joined #ceph
[16:02] * naoto (~naotok@27.131.11.254) Quit (Quit: Leaving...)
[16:02] * dvanders (~dvanders@dvanders-pro.cern.ch) has joined #ceph
[16:04] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:04] * rwheeler_ (~rwheeler@nat-pool-bos-u.redhat.com) has joined #ceph
[16:05] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) has joined #ceph
[16:06] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) Quit ()
[16:06] * wyang (~wyang@114.111.166.41) Quit (Quit: This computer has gone to sleep)
[16:07] * rahulgoyal (~rahulgoya@117.198.212.121) Quit (Ping timeout: 480 seconds)
[16:09] * Yovel12 (~Yovel12@117.211.90.154) has joined #ceph
[16:09] * Yovel12 (~Yovel12@117.211.90.154) Quit ()
[16:12] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Remote host closed the connection)
[16:14] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[16:17] * brians__ (~brian@80.111.114.175) has joined #ceph
[16:18] * brians_ (~brian@80.111.114.175) Quit (Ping timeout: 480 seconds)
[16:18] * mattronix_ is now known as mattronix
[16:21] * rahulgoyal (~rahulgoya@117.198.212.121) has joined #ceph
[16:25] * rahulgoyal (~rahulgoya@117.198.212.121) Quit (Read error: Connection reset by peer)
[16:27] * quinoa (~quinoa@24-148-81-106.c3-0.grn-ubr2.chi-grn.il.cable.rcn.com) Quit (Remote host closed the connection)
[16:28] <stefan0> using 4 nodes with 2x Intel S3610 per node for hot-storage and an Intel S3710 for the journal (10 Gb networking)
[16:28] <stefan0> can I expect random write latency under 2 ms? :o
[16:30] * wCPO (~Kristian@188.228.31.139) has joined #ceph
[16:30] * Spessu (~uhtr5r@84ZAAC51O.tor-irc.dnsbl.oftc.net) Quit ()
[16:30] * KeeperOfTheSoul (~adept256@aluminium.calmocelot.com) has joined #ceph
[16:30] * rahulgoyal (~rahulgoya@117.198.212.121) has joined #ceph
[16:32] * linjan_ (~linjan@176.195.151.213) has joined #ceph
[16:32] * dvanders (~dvanders@dvanders-pro.cern.ch) Quit (Read error: Connection reset by peer)
[16:32] * dvanders (~dvanders@dvanders-pro.cern.ch) has joined #ceph
[16:36] * wCPO (~Kristian@188.228.31.139) Quit (Remote host closed the connection)
[16:36] * sleinen (~Adium@2001:620:0:45:ae87:a3ff:fe13:e5b7) has joined #ceph
[16:37] * shaunm (~shaunm@74.83.215.100) Quit (Ping timeout: 480 seconds)
[16:37] * xarses (~xarses@64.124.158.100) has joined #ceph
[16:45] * olid198115 (~olid1982@aftr-185-17-204-207.dynamic.mnet-online.de) has joined #ceph
[16:46] * kmroz (~kmroz@00020103.user.oftc.net) Quit (Quit: WeeChat 1.4)
[16:47] <lincolnb> when I attempt to increase the pg count on a cache pool, I get the error "Error EPERM: splits in cache pools must be followed by scrubs and leave sufficient free space to avoid overfilling". will the scrub start automatically or do I need to identify the pgs and scrub them?
[16:48] <lincolnb> nvm, found a relevant ML post.
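For reference, a hedged sketch of the sequence involved (the pool name and pg counts are hypothetical); on hammer the split has to be followed by scrubs of the pool's PGs:
    ceph osd pool set hot-cache pg_num 4096
    ceph osd pool set hot-cache pgp_num 4096
    # then scrub the pool's PGs, e.g. per PG id from 'ceph pg ls-by-pool hot-cache'
    ceph pg scrub <pgid>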
[16:49] * MentalRay (~MentalRay@142.169.78.144) has joined #ceph
[16:52] * rahulgoyal_030 (~rahulgoya@117.198.212.16) has joined #ceph
[16:52] * kefu (~kefu@114.92.107.250) Quit (Max SendQ exceeded)
[16:52] * MannerMan (~oscar@user170.217-10-117.netatonce.net) Quit (Remote host closed the connection)
[16:53] * kefu (~kefu@114.92.107.250) has joined #ceph
[16:58] * togdon (~togdon@74.121.28.6) has joined #ceph
[16:58] * rahulgoyal (~rahulgoya@117.198.212.121) Quit (Ping timeout: 480 seconds)
[17:00] * KeeperOfTheSoul (~adept256@7V7AAC3P2.tor-irc.dnsbl.oftc.net) Quit ()
[17:00] * allenmelon (~dontron@4.tor.exit.babylon.network) has joined #ceph
[17:00] * olid198115 is now known as olid1982
[17:00] * venkat (~chatzilla@117.208.161.217) has joined #ceph
[17:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[17:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[17:04] * kashyap (~kashyap@121.244.87.116) has left #ceph
[17:04] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[17:05] * TMM (~hp@185.5.122.2) Quit (Quit: Ex-Chat)
[17:07] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) Quit (Quit: Leaving.)
[17:08] * wCPO (~Kristian@188.228.31.139) has joined #ceph
[17:10] * Drankis (~martin@89.111.13.198) Quit (Remote host closed the connection)
[17:12] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:14] * yanzheng (~zhyan@182.139.204.152) Quit (Quit: This computer has gone to sleep)
[17:17] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[17:18] * heddima (~Hamid@stc.hosting.alterway.fr) Quit (Remote host closed the connection)
[17:19] * yanzheng (~zhyan@182.139.204.152) has joined #ceph
[17:25] * pam (~pam@193.106.183.1) Quit (Quit: Textual IRC Client: www.textualapp.com)
[17:26] * squizzi (~squizzi@107.13.31.195) has joined #ceph
[17:29] * LDA (~lda@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[17:30] * allenmelon (~dontron@4MJAAC9UT.tor-irc.dnsbl.oftc.net) Quit ()
[17:31] * yanzheng (~zhyan@182.139.204.152) Quit (Quit: This computer has gone to sleep)
[17:32] * Geph (~Geoffrey@41.77.153.99) Quit (Ping timeout: 480 seconds)
[17:32] * jtriley_ (~jtriley@65.112.10.221) has joined #ceph
[17:33] * shaunm (~shaunm@cpe-74-132-70-216.kya.res.rr.com) has joined #ceph
[17:34] * curtis864 (~arsenaali@tor.thd.ninja) has joined #ceph
[17:34] * kawa2014 (~kawa@94.162.2.137) Quit (Ping timeout: 480 seconds)
[17:35] * aj__ (~aj@2001:6f8:1337:0:21f2:18ec:57f2:be73) Quit (Ping timeout: 480 seconds)
[17:38] * rotbeard (~redbeard@ppp-115-87-78-25.revip4.asianet.co.th) Quit (Ping timeout: 480 seconds)
[17:38] * jtriley (~jtriley@140.247.242.54) Quit (Ping timeout: 480 seconds)
[17:39] * davidzlap (~Adium@2605:e000:1313:8003:4181:8797:2f81:ef2a) has joined #ceph
[17:40] * venkat (~chatzilla@117.208.161.217) Quit (Quit: ChatZilla 0.9.92 [Firefox 41.0.2/20151016093648])
[17:42] * jclm (~jclm@ip68-224-244-110.lv.lv.cox.net) has joined #ceph
[17:42] * jclm (~jclm@ip68-224-244-110.lv.lv.cox.net) Quit ()
[17:43] * stefan0 (~stefan0@amti.com.br) Quit ()
[17:43] * kefu is now known as kefu|afk
[17:43] * kawa2014 (~kawa@tsn109-201-154-199.dyn.nltelcom.net) has joined #ceph
[17:44] * kawa2014 (~kawa@tsn109-201-154-199.dyn.nltelcom.net) Quit ()
[17:44] * kawa2014 (~kawa@tsn109-201-154-199.dyn.nltelcom.net) has joined #ceph
[17:45] * dvanders (~dvanders@dvanders-pro.cern.ch) Quit (Ping timeout: 480 seconds)
[17:49] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[17:54] * sleinen (~Adium@2001:620:0:45:ae87:a3ff:fe13:e5b7) Quit (Ping timeout: 480 seconds)
[17:57] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[17:58] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) Quit (Quit: Leaving.)
[18:00] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[18:00] * angdraug (~angdraug@c-69-181-140-42.hsd1.ca.comcast.net) has joined #ceph
[18:01] * squizzi (~squizzi@107.13.31.195) Quit (Ping timeout: 480 seconds)
[18:03] * davidzlap1 (~Adium@2605:e000:1313:8003:4181:8797:2f81:ef2a) has joined #ceph
[18:04] * davidzlap (~Adium@2605:e000:1313:8003:4181:8797:2f81:ef2a) Quit (Read error: Connection reset by peer)
[18:04] * rwheeler_ (~rwheeler@nat-pool-bos-u.redhat.com) Quit (Remote host closed the connection)
[18:04] * curtis864 (~arsenaali@84ZAAC55U.tor-irc.dnsbl.oftc.net) Quit ()
[18:04] * cheese^ (~Unforgive@207.244.70.35) has joined #ceph
[18:05] * MentalRay (~MentalRay@142.169.78.144) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[18:06] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[18:10] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:11] * kefu|afk (~kefu@114.92.107.250) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[18:12] * kawa2014 (~kawa@tsn109-201-154-199.dyn.nltelcom.net) Quit (Quit: Leaving)
[18:13] * bjornar_ (~bjornar@ti0099a430-0908.bb.online.no) has joined #ceph
[18:17] * swami1 (~swami@27.7.172.103) has joined #ceph
[18:21] * davidzlap (~Adium@2605:e000:1313:8003:4181:8797:2f81:ef2a) has joined #ceph
[18:21] * davidzlap1 (~Adium@2605:e000:1313:8003:4181:8797:2f81:ef2a) Quit (Read error: Connection reset by peer)
[18:22] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[18:22] * shohn1 (~shohn@dslb-146-060-207-108.146.060.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[18:27] <lincolnb> so I've been noticing a lot of disks getting completely hammered and blocked ops backing up w/ my EC+cache tier setup. when I check iostat, I see the machine doing a lot of r/s but very little rMB/s, which makes me think it's heavy random reads. strangely, only 1-2 OSDs per host seem to be affected, though.
[18:28] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[18:28] <lincolnb> wondering if i need to split my PGs? i've got about 450 OSDs (recently-ish expanded) but my cache tier has 2048 pgs and 3x replication
[18:29] <lincolnb> essentially i'm wondering if i'm seeing uneven utilization because i don't have enough PGs.
[18:29] <m0zes> are you seeing uneven usage in ceph osd df?
[18:30] <m0zes> in infernalis-land, that also shows the number of pgs per osd.
[18:31] <m0zes> have you split your cache-tier and ec pools to use different disks yet?
[18:31] <lincolnb> ah, I'm on hammer. but yeah, I have usage as low as 38% and as high as 72% (with an outlier at 83%)
[18:32] <lincolnb> no, haven't yet :/ definitely need to. the cache has a lot of room but yeah, on the to-do list.
[18:32] <m0zes> it could be, then, that you need more pgs in at least one of the tiers :)
[18:32] <m0zes> but if you're going to move them to separate disks I wouldn't increase the pgs yet.
[18:32] <lincolnb> hm, alright
[18:33] <m0zes> you could 'ceph osd reweight' the heavy disks down, temporarily.
[18:34] <lincolnb> yeah, i've done that on one or two disks already
[18:34] <m0zes> of course that will cause lots of migration.
[18:34] * swami1 (~swami@27.7.172.103) Quit (Quit: Leaving.)
[18:34] * cheese^ (~Unforgive@4MJAAC9XS.tor-irc.dnsbl.oftc.net) Quit ()
[18:34] * totalwormage (~Pulec@destiny.enn.lu) has joined #ceph
[18:34] <m0zes> 'ceph osd reweight-by-pg {{pools}} 110' should work automatically...
[18:34] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Ping timeout: 480 seconds)
[18:35] <m0zes> whoops swapped {{pools}} and 'percentage'
[18:35] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[18:35] <lincolnb> hm, nice. didnt know about that one
[18:36] <m0zes> it's a temporary thing, just like the reweight...
[18:36] <lincolnb> yeah
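A hedged sketch of the commands mentioned above (the OSD id, weight, threshold, and pool name are example values):
    # per-OSD utilization (and, on infernalis, PGs per OSD)
    ceph osd df
    # temporarily down-weight one hot disk
    ceph osd reweight osd.12 0.9
    # or reweight by PG count: overload percentage first, then the pool(s)
    ceph osd reweight-by-pg 110 cachepool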
[18:36] * vicente (~~vicente@1-161-187-73.dynamic.hinet.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[18:39] * swami1 (~swami@27.7.172.103) has joined #ceph
[18:39] <lincolnb> m0zes: what was your general recipe for separating the disks, if you have time to explain? did you just pick a few disks per host, move them to a new crush root, then ..?
[18:41] <m0zes> we have 4x 4TB disks and 12x 6TB disks per host. I added all the 4TB disks to a new crush root, added a new crush ruleset to point to the new crush root. then did a ceph osd pool set {pool} crush_ruleset {int}.
[18:41] <m0zes> same for the ec pool. there is nothing in our default crush root anymore.
[18:42] <m0zes> roots were setup like this: http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
[18:42] <lincolnb> so once you set the crush ruleset, the data migration should kick in?
[18:42] <m0zes> yep
[18:42] <lincolnb> yeah im looking at that post too :)
[18:42] <lincolnb> cool
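A minimal sketch of that recipe, following the linked blog post; the bucket, host, rule, and pool names here are hypothetical:
    # new CRUSH root for the cache-tier disks
    ceph osd crush add-bucket cache-root root
    # put the chosen hosts (or dedicated host entries) under it
    ceph osd crush move node1-cache root=cache-root
    # rule that selects from the new root
    ceph osd crush rule create-simple cache-rule cache-root host
    # point the cache pool at it (rule id from 'ceph osd crush rule dump')
    ceph osd pool set hot-cache crush_ruleset 1
Once the pool's crush_ruleset changes, the data migration lincolnb asks about kicks in on its own.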
[18:43] * wCPO (~Kristian@188.228.31.139) Quit (Read error: Connection reset by peer)
[18:43] * togdon (~togdon@74.121.28.6) Quit (Quit: Sleeping...)
[18:43] * DanFoster (~Daniel@office.34sp.com) Quit (Quit: Leaving)
[18:43] <lincolnb> seems like the ec pool could've sat in place in the default crush root though. what was the reason for moving it in your case?
[18:44] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[18:44] * togdon (~togdon@74.121.28.6) has joined #ceph
[18:45] <m0zes> I was testing other things in the default root. sometimes I'll take a portion of the disks and have them in both trees, and run some benchmarks on a testing pool in the default root.
[18:45] <lincolnb> gotcha
[18:47] * Darius_ (~Darius@82-135-143-87.static.zebra.lt) Quit (Quit: Leaving)
[18:53] * sleinen (~Adium@2001:620:1000:3:a65e:60ff:fedb:f305) has joined #ceph
[18:54] * dgurtner (~dgurtner@178.197.231.40) Quit (Ping timeout: 480 seconds)
[18:55] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[18:57] * bniver (~bniver@nat-pool-bos-u.redhat.com) Quit (Read error: Connection reset by peer)
[18:59] * karnan (~karnan@106.51.139.170) Quit (Quit: Leaving)
[19:04] * totalwormage (~Pulec@84ZAAC577.tor-irc.dnsbl.oftc.net) Quit ()
[19:04] * zapu (~Tumm@vps-1065056.srv.pa.infobox.ru) has joined #ceph
[19:08] * wCPO (~Kristian@188.228.31.139) has joined #ceph
[19:08] * bniver (~bniver@nat-pool-bos-u.redhat.com) has joined #ceph
[19:09] * i_m (~ivan.miro@deibp9eh1--blueice3n4.emea.ibm.com) Quit (Quit: Leaving.)
[19:09] * bniver (~bniver@nat-pool-bos-u.redhat.com) Quit ()
[19:09] * bniver (~bniver@nat-pool-bos-u.redhat.com) has joined #ceph
[19:10] * Walex (~Walex@72.249.182.114) has joined #ceph
[19:13] * linjan_ (~linjan@176.195.151.213) Quit (Ping timeout: 480 seconds)
[19:17] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) Quit (Quit: Leaving.)
[19:22] * dnovosel (~dnovosel@2600:3c03::f03c:91ff:fe96:5796) has joined #ceph
[19:25] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) Quit (Quit: Leaving)
[19:29] * shakamunyi (~shakamuny@c-67-180-191-38.hsd1.ca.comcast.net) has joined #ceph
[19:32] * pabluk_ is now known as pabluk__
[19:34] * zapu (~Tumm@4MJAAC90B.tor-irc.dnsbl.oftc.net) Quit ()
[19:34] * notarima (~Spikey@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[19:37] * dyasny (~dyasny@cable-192.222.131.135.electronicbox.net) Quit (Ping timeout: 480 seconds)
[19:38] * dyasny (~dyasny@cable-192.222.131.135.electronicbox.net) has joined #ceph
[19:39] * mykola (~Mikolaj@91.225.202.96) has joined #ceph
[19:42] * bniver (~bniver@nat-pool-bos-u.redhat.com) Quit (Remote host closed the connection)
[19:42] * davidzlap (~Adium@2605:e000:1313:8003:4181:8797:2f81:ef2a) Quit (Read error: Connection reset by peer)
[19:42] * davidzlap (~Adium@2605:e000:1313:8003:4181:8797:2f81:ef2a) has joined #ceph
[19:45] * thomnico (~thomnico@2a01:e35:8b41:120:18d:7181:cb83:dec7) Quit (Quit: Ex-Chat)
[19:50] * rwheeler (~rwheeler@pool-173-48-195-215.bstnma.fios.verizon.net) has joined #ceph
[19:51] * davidzlap1 (~Adium@2605:e000:1313:8003:4181:8797:2f81:ef2a) has joined #ceph
[19:52] * davidzlap (~Adium@2605:e000:1313:8003:4181:8797:2f81:ef2a) Quit (Read error: Connection reset by peer)
[19:56] * quinoa (~quinoa@24-148-81-106.c3-0.grn-ubr2.chi-grn.il.cable.rcn.com) has joined #ceph
[19:58] * sleinen1 (~Adium@2001:620:0:82::101) has joined #ceph
[20:00] * kbader (~kyle@64.169.30.57) has joined #ceph
[20:03] * Elie (~oftc-webi@ma838-1-88-123-212-9.fbx.proxad.net) has joined #ceph
[20:04] * Concubidated (~Adium@c-50-173-245-118.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[20:04] * notarima (~Spikey@84ZAAC6AN.tor-irc.dnsbl.oftc.net) Quit ()
[20:04] * Peaced (~Plesioth@62-210-37-82.rev.poneytelecom.eu) has joined #ceph
[20:05] * sleinen (~Adium@2001:620:1000:3:a65e:60ff:fedb:f305) Quit (Ping timeout: 480 seconds)
[20:08] * brians__ (~brian@80.111.114.175) Quit (Quit: Textual IRC Client: www.textualapp.com)
[20:09] * brians_ (~brian@80.111.114.175) has joined #ceph
[20:10] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) has joined #ceph
[20:10] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) Quit ()
[20:12] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[20:14] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[20:15] <atg> Is it possible to swap out OSD backends (filestore to bluestore, once it's stable) on a per-OSD basis? Essentially do a rolling change (bring down an OSD, wipe disk, bring back up as bluestore)?
[20:19] * thumpba (~thumbpa@38.67.18.130) has joined #ceph
[20:19] <thumpba> how can i remove a ceph node from a ceph cluster
[20:20] <m0zes> atg: pretty sure that should be doable.
[20:21] * BlaXpirit (~irc@blaxpirit.com) has joined #ceph
[20:21] <atg> It seemed potentially doable but I didn't find any documentation on it
[20:22] <m0zes> I'm willing to bet there will be release notes and/or documentation/blogs about doing it when stable.
[20:22] <atg> Fair point
[20:22] <atg> Thanks
[20:24] <joshd> yes, replacing by wiping and starting a fresh osd will work fine for any osd store
[20:27] <atg> So mixed-backend environments are fine?
[20:27] * swami1 (~swami@27.7.172.103) Quit (Quit: Leaving.)
[20:31] <olid1982> @thumpba: you simply remove all osd/mon/mds daemons from the node
[20:31] <olid1982> then you remove the node itself
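Spelled out, draining and removing one node's OSDs looks roughly like this (the IDs and mon name are examples; repeat per OSD on that host):
    ceph osd out osd.7
    # wait for rebalancing to finish, stop the daemon on the node, then:
    ceph osd crush remove osd.7
    ceph auth del osd.7
    ceph osd rm osd.7
    # and if the node also runs a monitor
    ceph mon remove node-x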
[20:31] <m0zes> I would expect so, since the osd should simply expose the rados api, not any of the actual underlying storage data structures
[20:34] * Peaced (~Plesioth@84ZAAC6B5.tor-irc.dnsbl.oftc.net) Quit ()
[20:35] <thumpba> so ceph-deploy purge node-x will take it out of the cluster?
[20:36] * hyperbaba (~hyperbaba@private.neobee.net) Quit (Ping timeout: 480 seconds)
[20:36] * rendar (~I@host128-182-dynamic.12-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[20:37] <joshd> yes, exactly, the local storage is the lowest abstraction layer, and does not affect anything other than that one osd
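In practice a per-OSD backend swap would look roughly like the following sketch (the device path and ID are hypothetical, and the option for selecting the new object store will depend on the release that ships BlueStore as stable):
    ceph osd out osd.7
    # stop the OSD daemon, then wipe and re-prepare the same disk
    ceph-disk zap /dev/sdb
    ceph-disk prepare /dev/sdb    # plus whatever option that release uses to pick the backend
    ceph-disk activate /dev/sdb1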
[20:38] * vhawk (~vhawk@c-98-246-44-234.hsd1.or.comcast.net) has joined #ceph
[20:38] <johnavp1989> So something I've been wondering as I've worked with Ceph... I've beaten it up pretty badly, restarted nodes, destroyed OSDs and journals, and I've been able to recover from all of it seemingly without any data loss
[20:38] * rendar (~I@host128-182-dynamic.12-79-r.retail.telecomitalia.it) has joined #ceph
[20:39] <johnavp1989> So my question is: how do I know when data loss has occurred? Would Ceph simply never return to a healthy state?
[20:40] <m0zes> usually one of the following: health err, stuck+peering pgs, inconsistent pgs.
[20:42] <m0zes> with deep scrubs it should detect objects that aren't "correct" (by comparing checksums of each replica in the backend), leading to the inconsistent status.
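A few of the commands that surface those states (the PG id is an example):
    ceph health detail
    ceph pg dump_stuck
    # force a deep scrub of a suspect PG and, if it comes back inconsistent, try a repair
    ceph pg deep-scrub 3.f1
    ceph pg repair 3.f1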
[20:42] * Concubidated (~Adium@c-71-197-117-125.hsd1.ca.comcast.net) has joined #ceph
[20:43] * dmick (~dmick@206.169.83.146) has left #ceph
[20:44] <BlaXpirit> Hello. I am looking at http://ceph.com/gsoc2016/#div_102 "Python 3 Support for Ceph". The part about Python is clear, but I'm not sure just how much of Ceph I would need to know, and what would the efforts be to meaningfully try out all those scripts. Also, why is "C/C++ coding" a requirement?
[20:45] <m0zes> speaking of inconsistent and "health err": in an infernalis (9.2.0) cluster, I've got 1 pg that is active+inconsistent. I've tried issuing a repair to the pg and it is still marked inconsistent. in hammer, the logs would indicate the object(s) that were wrong. infernalis doesn't seem to be logging to /var/log/ceph/ceph-osd-${x}.log anymore. what should I do?
[20:46] <gregsfortytwo> BlaXpirit: probably get more help in #ceph-devel for that
[20:51] <johnavp1989> m0zes: cool thank you
[20:51] <m0zes> the final piece of that question, this is an EC pool. if the object is incorrect, shouldn't the EC portion be able to repair the object/pg?
[20:52] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[20:53] <atg> Thanks m0zes and joshd
[20:54] * vata (~vata@cable-21.246.173-197.electronicbox.net) Quit (Quit: Leaving.)
[20:55] * bniver (~bniver@pool-173-48-58-27.bstnma.fios.verizon.net) has joined #ceph
[20:57] * vicente (~~vicente@1-161-187-73.dynamic.hinet.net) has joined #ceph
[21:00] * shaunm (~shaunm@cpe-74-132-70-216.kya.res.rr.com) Quit (Ping timeout: 480 seconds)
[21:00] * Elie (~oftc-webi@ma838-1-88-123-212-9.fbx.proxad.net) Quit (Quit: Page closed)
[21:03] * saru95 (67334bf1@107.161.19.53) has joined #ceph
[21:04] * Bored (~rogst@193.90.12.89) has joined #ceph
[21:05] * jowilkin (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) Quit (Ping timeout: 480 seconds)
[21:05] * vicente (~~vicente@1-161-187-73.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[21:09] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[21:13] * jowilkin (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) has joined #ceph
[21:27] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[21:33] * bara (~bara@ip4-83-240-10-82.cust.nbox.cz) has joined #ceph
[21:34] * rahulgoyal_030 (~rahulgoya@117.198.212.16) Quit (Read error: Connection reset by peer)
[21:34] * Bored (~rogst@7V7AAC3XS.tor-irc.dnsbl.oftc.net) Quit ()
[21:36] * kbader (~kyle@64.169.30.57) Quit (Ping timeout: 480 seconds)
[21:36] * bara (~bara@ip4-83-240-10-82.cust.nbox.cz) Quit ()
[21:40] * infernix (nix@spirit.infernix.net) Quit (Remote host closed the connection)
[21:41] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[21:41] * Concubidated (~Adium@c-71-197-117-125.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[21:44] * kbader (~kyle@64.169.30.57) has joined #ceph
[21:45] * xcezzz (~xcezzz@97-96-111-106.res.bhn.net) has left #ceph
[21:52] <saru95> @joshd python3 changes I make in src/ceph.in don't show up when it is run after compiling with this: http://pastebin.com/bpcByXf8
[21:52] <saru95> ?
[21:54] <saru95> what can I do in this situation?
[21:54] * thumpba (~thumbpa@38.67.18.130) Quit (Remote host closed the connection)
[22:03] * infernix (nix@spirit.infernix.net) has joined #ceph
[22:04] * HoboPickle1 (~Chrissi_@watchme.tor-exit.network) has joined #ceph
[22:08] <joshd> saru95: that line is just for recompiling the librados python bindings
[22:10] <joshd> saru95: you'd run 'make ceph' to convert ceph.in to ceph I believe
[22:10] <saru95> yes, the code that runs fine under python2 gives errors under python3. When rewritten slightly, the changes don't get detected. Does it have to do with the bindings?
[22:10] * xcezzz (~xcezzz@97-96-111-106.res.bhn.net) has joined #ceph
[22:10] <saru95> sorry, my bad. I figured it out.
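For anyone following along, the rebuild step joshd points at is roughly (a sketch, assuming the autotools build tree of that era):
    cd src
    # regenerate the 'ceph' CLI wrapper from ceph.in
    make ceph
The pastebin'd line above only rebuilds the librados python bindings, which is why edits to ceph.in weren't showing up.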
[22:11] * SDub (~SDub@stpaul-nat.cray.com) has joined #ceph
[22:12] <saru95> also, @joshd, ceph_daemon.py now gives no errors under python3.
[22:12] <joshd> saru95: great!
[22:15] <joshd> saru95: could you commit the changes to make ceph_daemon.py work and submit a pull request? (http://docs.ceph.com/docs/master/dev/#pull-requests)
[22:16] <saru95> Okay! On it now. Do I need an exception for python2 or do I make it pure python3?
[22:16] <joshd> saru95: it should work in both
[22:16] <joshd> saru95: thanks!
[22:17] <saru95> :)
[22:21] * enax (~enax@94-21-125-222.pool.digikabel.hu) has joined #ceph
[22:25] * Georgyo (~georgyo@shamm.as) has joined #ceph
[22:28] <saru95> @joshd https://github.com/ceph/ceph/commit/2999357c1780e26b9c062285fcdeac7d171170e2
[22:29] <saru95> It's this one: https://github.com/ceph/ceph/pull/7935
[22:30] <joshd> saru95: great! could you just add your signed-off-by (git commit --amend -s) and re-push?
[22:31] <saru95> alright !
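The requested fix-up, as a sketch (the branch name is hypothetical):
    git commit --amend -s            # adds the Signed-off-by trailer to the existing commit
    git push -f origin wip-python3   # force-push the amended commit to update the PR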
[22:33] * mykola (~Mikolaj@91.225.202.96) Quit (Remote host closed the connection)
[22:34] * HoboPickle1 (~Chrissi_@4MJAAC98I.tor-irc.dnsbl.oftc.net) Quit ()
[22:34] * Eman1 (~Misacorp@tor.piratenpartei-nrw.de) has joined #ceph
[22:38] * kbader (~kyle@64.169.30.57) Quit (Ping timeout: 480 seconds)
[22:39] * jowilkin (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) Quit (Ping timeout: 480 seconds)
[22:40] * aj__ (~aj@x4db1ac40.dyn.telefonica.de) has joined #ceph
[22:40] * Concubidated (~Adium@c-50-173-245-118.hsd1.ca.comcast.net) has joined #ceph
[22:43] * shaunm (~shaunm@cpe-65-185-127-82.cinci.res.rr.com) has joined #ceph
[22:47] * jowilkin (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) has joined #ceph
[22:48] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[22:54] * rahulgoyal (~rahulgoya@117.198.212.16) has joined #ceph
[23:02] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[23:03] * davidzlap (~Adium@2605:e000:1313:8003:3134:f504:304:8ad6) has joined #ceph
[23:03] * davidzlap1 (~Adium@2605:e000:1313:8003:4181:8797:2f81:ef2a) Quit (Read error: Connection reset by peer)
[23:03] * dmick (~dmick@206.169.83.146) has joined #ceph
[23:04] * Eman1 (~Misacorp@84ZAAC6IC.tor-irc.dnsbl.oftc.net) Quit ()
[23:04] * AG_Clinton (~Aethis@edwardsnowden2.torservers.net) has joined #ceph
[23:08] * LDA (~lda@host217-114-156-249.pppoe.mark-itt.net) Quit (Quit: LDA)
[23:10] * rahulgoyal (~rahulgoya@117.198.212.16) Quit (Ping timeout: 480 seconds)
[23:12] <saru95> @joshd, https://github.com/ceph/ceph/pull/7937
[23:13] * dmick (~dmick@206.169.83.146) has left #ceph
[23:15] * danieagle (~Daniel@177.68.229.84) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[23:22] * davidzlap1 (~Adium@2605:e000:1313:8003:3134:f504:304:8ad6) has joined #ceph
[23:22] * davidzlap (~Adium@2605:e000:1313:8003:3134:f504:304:8ad6) Quit (Read error: Connection reset by peer)
[23:24] * jtriley_ (~jtriley@65.112.10.221) Quit (Ping timeout: 480 seconds)
[23:26] * dnovosel (~dnovosel@2600:3c03::f03c:91ff:fe96:5796) Quit (Quit: Leaving)
[23:28] * shaunm (~shaunm@cpe-65-185-127-82.cinci.res.rr.com) Quit (Ping timeout: 480 seconds)
[23:33] * enax (~enax@94-21-125-222.pool.digikabel.hu) Quit (Ping timeout: 480 seconds)
[23:34] * AG_Clinton (~Aethis@4MJAADAA0.tor-irc.dnsbl.oftc.net) Quit ()
[23:34] * Wijk (~Nanobot@94.102.49.175) has joined #ceph
[23:38] * SDub (~SDub@stpaul-nat.cray.com) Quit (Quit: Leaving)
[23:49] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:49] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[23:55] * mx (~myeho@66.193.98.66) has joined #ceph
[23:55] * saru95 (67334bf1@107.161.19.53) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[23:57] * haplo37 (~haplo37@199.91.185.156) Quit (Remote host closed the connection)
[23:58] * Racpatel (~Racpatel@2601:87:3:3601::8a6f) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.