#ceph IRC Log

Index

IRC Log for 2014-02-10

Timestamps are in GMT/BST.

[0:08] * tziOm (~bjornar@ns3.uniweb.no) Quit (Ping timeout: 480 seconds)
[0:10] * tziOm (~bjornar@ns3.uniweb.no) has joined #ceph
[0:14] * ScOut3R (~scout3r@BC0652CA.dsl.pool.telekom.hu) Quit ()
[0:20] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[0:28] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[0:30] * sarob (~sarob@2601:9:7080:13a:5d0e:daa2:b8a1:7575) has joined #ceph
[0:33] <paradon> Anyone from inktank here?
[0:33] <paradon> 'ceph-deploy install' currently fails because ceph.com's HTTPS cert expired yesterday...
[0:38] * sarob (~sarob@2601:9:7080:13a:5d0e:daa2:b8a1:7575) Quit (Ping timeout: 480 seconds)
[0:48] * fdmanana (~fdmanana@bl14-136-199.dsl.telepac.pt) Quit (Quit: Leaving)
[1:03] * fdmanana (~fdmanana@bl14-136-199.dsl.telepac.pt) has joined #ceph
[1:09] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[1:16] * zack_dolby (~textual@p852cae.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:16] * fdmanana (~fdmanana@bl14-136-199.dsl.telepac.pt) Quit (Quit: Leaving)
[1:16] * LeaChim (~LeaChim@host86-166-182-74.range86-166.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[1:24] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[1:26] <pmatulis> paradon: open a bug i guess
[1:32] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[1:40] * sjustlaptop (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) has joined #ceph
[1:41] * dpippenger (~riven@cpe-76-166-208-250.socal.res.rr.com) Quit (Quit: Leaving.)
[1:45] * Karcaw (~evan@96-41-200-66.dhcp.elbg.wa.charter.com) has joined #ceph
[2:01] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (Read error: Connection reset by peer)
[2:01] * mjeanson_ (~mjeanson@bell.multivax.ca) has joined #ceph
[2:02] * houkouonchi-home (~linux@2001:470:c:c69::2) Quit (Ping timeout: 480 seconds)
[2:02] * houkouonchi-home (~linux@houkouonchi-1-pt.tunnel.tserv15.lax1.ipv6.he.net) has joined #ceph
[2:16] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:23] * zack_dolby (~textual@219.117.239.161.static.zoot.jp) has joined #ceph
[2:24] * sarob (~sarob@2601:9:7080:13a:a188:a5a:e61e:7634) has joined #ceph
[2:25] * davidzlap1 (~Adium@cpe-23-242-31-175.socal.res.rr.com) has joined #ceph
[2:26] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[2:30] * yuriw1 (~Adium@c-71-202-126-141.hsd1.ca.comcast.net) has joined #ceph
[2:31] * hflai (hflai@alumni.cs.nctu.edu.tw) has joined #ceph
[2:33] * sarob (~sarob@2601:9:7080:13a:a188:a5a:e61e:7634) Quit (Ping timeout: 480 seconds)
[2:34] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[2:37] * yuriw (~Adium@c-71-202-126-141.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[2:38] * jo0nas (~jonas@188-183-5-254-static.dk.customer.tdc.net) Quit (Quit: Leaving.)
[2:40] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[2:50] * lupu (~lupu@86.107.101.246) Quit (Ping timeout: 480 seconds)
[2:50] * jonas (~jonas@188-183-5-254-static.dk.customer.tdc.net) has joined #ceph
[2:50] * jonas is now known as jo0nas
[2:51] * yanzheng (~zhyan@134.134.137.71) has joined #ceph
[2:59] * shang (~ShangWu@175.41.48.77) has joined #ceph
[3:07] * markbby (~Adium@168.94.245.2) has joined #ceph
[3:14] * carif (~mcarifio@pool-173-76-155-34.bstnma.fios.verizon.net) has joined #ceph
[3:17] * diegows (~diegows@190.190.17.57) Quit (Ping timeout: 480 seconds)
[3:27] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[3:27] * shang (~ShangWu@175.41.48.77) has joined #ceph
[3:28] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[3:29] * yanzheng (~zhyan@134.134.137.71) Quit (Ping timeout: 480 seconds)
[3:29] * yanzheng (~zhyan@134.134.137.71) has joined #ceph
[3:36] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:49] * baffle (baffle@jump.stenstad.net) has joined #ceph
[3:49] * baffle_ (baffle@jump.stenstad.net) Quit (Read error: Connection reset by peer)
[4:16] * mjeanson_ (~mjeanson@bell.multivax.ca) Quit (Read error: Connection reset by peer)
[4:20] * mjeanson (~mjeanson@bell.multivax.ca) has joined #ceph
[4:21] * john_barbee (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 26.0/20131205075310])
[4:28] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[4:36] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:38] * carif (~mcarifio@pool-173-76-155-34.bstnma.fios.verizon.net) Quit (Read error: Operation timed out)
[4:38] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[4:46] <aarontc> so after cranking up the logging on my OSD, I restarted osd.5 and grepped the log for the 'incomplete' pg (2.28b): http://hastebin.com/koxevilono
[4:46] <aarontc> I don't know how to interpret that or if more lines around those are needed to troubleshoot
[4:47] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:50] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (Quit: No Ping reply in 180 seconds.)
[4:53] <aarontc> hmm, I did the same thing on osd.8 (since I think it was the primary), and got this: http://hastebin.com/wuqacowubo
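The filtering aarontc describes (raise OSD debug logging, then grep the log for one placement group) can be sketched roughly as below. The pg id (2.28b) and osd numbers come from the conversation above; the sample log lines are invented stand-ins so the grep can be shown without a live cluster, and are not real OSD output.

```shell
# Invented sample lines standing in for an OSD log; on a real node the
# log would live at /var/log/ceph/ceph-osd.8.log.
printf '%s\n' \
  'osd.8 pg[2.28b(unlocked)] enter Started/Stray' \
  'osd.8 pg[2.30a(unlocked)] enter Started/Primary' > /tmp/ceph-osd.8.log

# On a running cluster, debug levels can be raised at runtime with e.g.:
#   ceph tell osd.8 injectargs '--debug-osd 20 --debug-ms 1'

# Filter the log for a single placement group.
grep 'pg\[2\.28b' /tmp/ceph-osd.8.log
```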
[4:54] * houkouonchi-home (~linux@houkouonchi-1-pt.tunnel.tserv15.lax1.ipv6.he.net) Quit (Ping timeout: 480 seconds)
[4:54] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) Quit (Quit: sprachgenerator)
[4:54] * houkouonchi-home (~linux@houkouonchi-1-pt.tunnel.tserv15.lax1.ipv6.he.net) has joined #ceph
[4:55] * mjeanson (~mjeanson@bell.multivax.ca) has joined #ceph
[5:04] * Vacum (~vovo@88.130.203.241) has joined #ceph
[5:06] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) has joined #ceph
[5:07] * Vacum_ (~vovo@i59F79E41.versanet.de) Quit (Read error: Operation timed out)
[5:07] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) has joined #ceph
[5:10] * fatih (~fatih@c-50-174-71-251.hsd1.ca.comcast.net) has joined #ceph
[5:18] * dpippenger (~riven@cpe-76-166-208-250.socal.res.rr.com) has joined #ceph
[5:21] * markbby (~Adium@168.94.245.2) Quit (Remote host closed the connection)
[5:25] * KevinPerks1 (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[5:31] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[5:38] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[5:56] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[5:58] * lupu (~lupu@86.107.101.246) has joined #ceph
[6:00] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[6:07] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Ping timeout: 480 seconds)
[6:09] * fatih (~fatih@c-50-174-71-251.hsd1.ca.comcast.net) Quit (Quit: Leaving...)
[6:18] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[6:21] * senk (~Adium@ip-5-147-216-213.unitymediagroup.de) has joined #ceph
[6:23] * ShaunR (~ShaunR@staff.ndchost.com) Quit (Ping timeout: 480 seconds)
[6:32] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[6:38] * ScOut3R (~ScOut3R@BC0652CA.dsl.pool.telekom.hu) has joined #ceph
[6:40] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[6:41] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (Read error: Connection reset by peer)
[6:42] * sarob (~sarob@2601:9:7080:13a:e965:ddb2:b45d:487) has joined #ceph
[6:44] * houkouonchi-home (~linux@houkouonchi-1-pt.tunnel.tserv15.lax1.ipv6.he.net) Quit (Ping timeout: 480 seconds)
[6:44] * zack_dolby (~textual@219.117.239.161.static.zoot.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[6:44] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[6:44] * sjustlaptop (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[6:45] * houkouonchi-home (~linux@houkouonchi-1-pt.tunnel.tserv15.lax1.ipv6.he.net) has joined #ceph
[6:46] * dpippenger (~riven@cpe-76-166-208-250.socal.res.rr.com) Quit (Quit: Leaving.)
[6:47] * zack_dolby (~textual@219.117.239.161.static.zoot.jp) has joined #ceph
[6:48] * ScOut3R (~ScOut3R@BC0652CA.dsl.pool.telekom.hu) Quit (Read error: Operation timed out)
[6:50] * sarob (~sarob@2601:9:7080:13a:e965:ddb2:b45d:487) Quit (Ping timeout: 480 seconds)
[7:01] * zerick (~eocrospom@190.187.21.53) Quit (Read error: Connection reset by peer)
[7:03] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[7:10] * senk (~Adium@ip-5-147-216-213.unitymediagroup.de) Quit (Quit: Leaving.)
[7:10] * senk (~Adium@ip-5-147-216-213.unitymediagroup.de) has joined #ceph
[7:12] * garphy`aw is now known as garphy
[7:14] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[7:16] * srenatus (~stephan@g229133208.adsl.alicedsl.de) has joined #ceph
[7:17] * lupu (~lupu@86.107.101.246) Quit (Ping timeout: 480 seconds)
[7:18] * senk (~Adium@ip-5-147-216-213.unitymediagroup.de) Quit (Ping timeout: 480 seconds)
[7:25] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[7:31] * kiwnix (~kiwnix@00011f91.user.oftc.net) has joined #ceph
[7:32] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[7:33] * ircolle (~Adium@2601:1:8380:2d9:3d4a:bc41:63c8:17ee) has joined #ceph
[7:35] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) has joined #ceph
[7:36] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[7:36] * senk (~Adium@193.174.91.160) has joined #ceph
[7:38] * srenatus (~stephan@g229133208.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[7:39] * garphy is now known as garphy`aw
[7:42] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[7:43] * kiwnix (~kiwnix@00011f91.user.oftc.net) Quit (Quit: Leaving)
[7:44] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[7:49] * lupu (~lupu@86.107.101.246) has joined #ceph
[8:00] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[8:02] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit ()
[8:03] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[8:03] * Cube (~Cube@66-87-64-173.pools.spcsdns.net) has joined #ceph
[8:09] * garphy`aw is now known as garphy
[8:21] * garphy is now known as garphy`aw
[8:22] * sglwlb (~sglwlb@221.12.27.202) has joined #ceph
[8:23] * sglwlb (~sglwlb@221.12.27.202) Quit ()
[8:23] * sglwlb (~sglwlb@221.12.27.202) has joined #ceph
[8:27] * yanzheng (~zhyan@134.134.137.71) Quit (Remote host closed the connection)
[8:34] * rendar (~s@87.19.183.241) has joined #ceph
[8:36] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[8:40] * srenatus (~stephan@g229133208.adsl.alicedsl.de) has joined #ceph
[8:44] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:46] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[8:48] * srenatus (~stephan@g229133208.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[8:50] * Cube (~Cube@66-87-64-173.pools.spcsdns.net) Quit (Quit: Leaving.)
[8:54] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:58] * lupu (~lupu@86.107.101.246) Quit (Ping timeout: 480 seconds)
[8:59] * Sysadmin88 (~IceChat77@176.254.32.31) Quit (Quit: A fine is a tax for doing wrong. A tax is a fine for doing well)
[9:04] * senk (~Adium@193.174.91.160) Quit (Quit: Leaving.)
[9:05] * srenatus (~stephan@185.27.182.16) has joined #ceph
[9:05] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[9:06] * senk (~Adium@193.174.91.160) has joined #ceph
[9:07] * steki (~steki@91.195.39.5) has joined #ceph
[9:23] * fatih (~fatih@c-50-174-71-251.hsd1.ca.comcast.net) has joined #ceph
[9:39] * sarob (~sarob@2601:9:7080:13a:915a:3857:8d8f:7ef4) has joined #ceph
[9:48] * sarob (~sarob@2601:9:7080:13a:915a:3857:8d8f:7ef4) Quit (Ping timeout: 480 seconds)
[9:49] * gaveen (~gaveen@175.157.67.169) has joined #ceph
[9:53] * renzhi (~renzhi@ec2-54-249-150-103.ap-northeast-1.compute.amazonaws.com) has joined #ceph
[9:53] * TMM (~hp@c97185.upc-c.chello.nl) Quit (Quit: Ex-Chat)
[9:58] * alexm_ (~alexm@83.167.43.235) has joined #ceph
[10:02] * thb (~me@2a02:2028:1d9:81f0:6267:20ff:fec9:4e40) has joined #ceph
[10:06] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[10:10] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[10:12] * ScOut3R__ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[10:13] * thomnico (~thomnico@2a01:e35:8b41:120:d5ba:f134:a740:95b9) has joined #ceph
[10:15] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[10:15] * ChanServ sets mode +v andreask
[10:16] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[10:18] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[10:19] * lupu (~lupu@86.107.101.246) has joined #ceph
[10:19] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[10:25] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Remote host closed the connection)
[10:25] * ScOut3R__ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[10:28] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[10:28] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Read error: Operation timed out)
[10:28] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[10:33] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:33] * zapotah (~zapotah@dsl-hkibrasgw1-58c08e-250.dhcp.inet.fi) has joined #ceph
[10:33] <zapotah> anyone about to change the cert on the git repo?
[10:33] <zapotah> or on the site in general
[10:33] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Read error: Connection reset by peer)
[10:33] <zapotah> had some major malfunctions today because of this
[10:34] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[10:37] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[10:37] * hjjg (~hg@p3EE323E1.dip0.t-ipconnect.de) has joined #ceph
[10:37] * fatih (~fatih@c-50-174-71-251.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[10:38] <andreask> on the site in general?
[10:40] <zapotah> ceph.com cert expired on 9.2.2014
[10:40] <zapotah> came to work today and tried to deploy
[10:40] <zapotah> no-go
[10:40] * sarob (~sarob@2601:9:7080:13a:fc0f:4b48:f2d5:4f52) has joined #ceph
[10:47] <andreask> ouch .. I see what you mean
[10:48] * sarob (~sarob@2601:9:7080:13a:fc0f:4b48:f2d5:4f52) Quit (Ping timeout: 480 seconds)
[10:49] * LeaChim (~LeaChim@host86-166-182-74.range86-166.btcentralplus.com) has joined #ceph
[10:50] <andreask> what git repo are you referring to?
[10:50] * sarob (~sarob@2601:9:7080:13a:f0a6:651f:d05a:3b9b) has joined #ceph
[10:50] <zapotah> ceph.com/git
[10:50] <zapotah> cant download the pgp key
[10:51] <zapotah> i worked around it by modifying the url but thats a crude and unsafe hack
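The failure zapotah hit (an expired server certificate) can be checked with openssl's `-enddate` output. A minimal sketch, using a throwaway self-signed certificate in place of the real ceph.com one so it runs anywhere:

```shell
# Generate a short-lived self-signed cert as a stand-in, then print its
# expiry date -- the same field that flags an expired server cert.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/key.pem \
  -out /tmp/cert.pem -days 1 -subj '/CN=ceph.example.test' 2>/dev/null
openssl x509 -noout -enddate -in /tmp/cert.pem

# Against a live server, the equivalent check would be:
#   echo | openssl s_client -connect ceph.com:443 2>/dev/null \
#     | openssl x509 -noout -enddate
```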
[10:57] * mattt (~textual@92.52.76.140) has joined #ceph
[10:58] * sarob (~sarob@2601:9:7080:13a:f0a6:651f:d05a:3b9b) Quit (Ping timeout: 480 seconds)
[11:04] * houkouonchi-work (~linux@12.248.40.138) Quit (Ping timeout: 480 seconds)
[11:06] * capri_oner (~capri@212.218.127.222) has joined #ceph
[11:06] * schmee (~quassel@phobos.isoho.st) Quit (Quit: No Ping reply in 180 seconds.)
[11:09] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[11:13] * capri_on (~capri@212.218.127.222) Quit (Ping timeout: 480 seconds)
[11:16] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[11:17] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: No route to host)
[11:19] * sileht (~sileht@gizmo.sileht.net) Quit (Quit: WeeChat 0.4.2)
[11:19] * sileht (~sileht@gizmo.sileht.net) has joined #ceph
[11:21] * allsystemsarego (~allsystem@188.25.135.30) has joined #ceph
[11:27] <andreask> zapotah: the cert is now updated
[11:30] <zapotah> nice
[11:30] <zapotah> seems to work
[11:30] <zapotah> ill revert the hacks
[11:31] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[11:36] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[11:38] * LiRul (~lirul@91.82.105.2) has joined #ceph
[11:38] <LiRul> hi
[11:39] <LiRul> if i have wd re4 sata hdds under osd daemons, what is the preferred 'osd op threads' settings?
[11:39] <LiRul> i'm using via radosgw with 20 parallel upload/read
[11:42] * zviratko (~zviratko@241-73-239-109.cust.centrio.cz) has joined #ceph
[11:44] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[11:45] * NoX-1984 (~NoX-1984@master-gw.netlantic.net) has joined #ceph
[11:47] * garphy`aw is now known as garphy
[11:48] <andreask> LiRul: I'd say you need to benchmark it with your expected work load, but 4 or 8 osd op threads should be good numbers to start benchmarking
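The option andreask mentions is set per-OSD in ceph.conf. A minimal sketch with one of the suggested starting values (4 and 8 are the numbers from the advice above, not tuned results):

```ini
[osd]
    ; starting point for benchmarking, per the advice above;
    ; try 8 and re-benchmark before settling on a value
    osd op threads = 4
```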
[11:49] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Quit: Konversation terminated!)
[11:52] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[11:52] <LiRul> andreask: thank you
[11:53] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[11:59] * fdmanana (~fdmanana@bl5-0-172.dsl.telepac.pt) has joined #ceph
[11:59] * lupu (~lupu@86.107.101.246) Quit (Ping timeout: 480 seconds)
[12:06] * zack_dolby (~textual@219.117.239.161.static.zoot.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[12:26] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[12:28] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Quit: Konversation terminated!)
[12:29] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[12:44] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[12:48] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) has joined #ceph
[12:49] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) Quit ()
[12:49] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) has joined #ceph
[12:52] * arbrandes (~arbrandes@177.9.201.101) has joined #ceph
[12:52] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[12:55] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[13:01] * sekon (~harish@li291-152.members.linode.com) Quit (Remote host closed the connection)
[13:02] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * allsystemsarego (~allsystem@188.25.135.30) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * mattt (~textual@92.52.76.140) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * alexm_ (~alexm@83.167.43.235) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * srenatus (~stephan@185.27.182.16) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * rendar (~s@87.19.183.241) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * sglwlb (~sglwlb@221.12.27.202) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * Vacum (~vovo@88.130.203.241) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * hflai (hflai@alumni.cs.nctu.edu.tw) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * davidzlap1 (~Adium@cpe-23-242-31-175.socal.res.rr.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * tziOm (~bjornar@ns3.uniweb.no) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * BillK (~BillK-OFT@106-68-142-217.dyn.iinet.net.au) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * glambert_ (~glambert@ptr-22.204.219.82.rev.exa.net.uk) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * nwf_ (~nwf@67.62.51.95) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * dlan (~dennis@116.228.88.131) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * Jahkeup (Jahkeup@gemini.ca.us.panicbnc.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * Sargun (~sargun@208-106-98-2.static.sonic.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * Guest650 (~coyo@thinks.outside.theb0x.org) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * LiRul (~lirul@91.82.105.2) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * baffle (baffle@jump.stenstad.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * yuriw1 (~Adium@c-71-202-126-141.hsd1.ca.comcast.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * portante (~portante@nat-pool-bos-t.redhat.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * wattsmarcus5 (~mdw@aa2.linuxbox.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * kuu (~kuu@virtual362.tentacle.fi) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * garphy (~garphy@frank.zone84.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * kitz (~kitz@admin161-194.hampshire.edu) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * blahnana (~bman@us1.blahnana.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * Qu310 (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * leochill (~leochill@nyc-333.nycbit.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * TheBittern (~thebitter@195.10.250.233) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * dereky (~derek@pool-71-114-104-38.washdc.fios.verizon.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * flaxy (~afx@78.130.174.164) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * gregsfortytwo (~Adium@2607:f298:a:607:4a:149d:8f20:b9e) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * sage (~quassel@2607:f298:a:607:6d19:193:ed4a:57ef) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * markl (~mark@knm.org) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * amospalla (~amospalla@0001a39c.user.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * lightspeed (~lightspee@81.187.0.153) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * simulx (~simulx@vpn.expressionanalysis.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * shimo (~A13032@122x212x216x66.ap122.ftth.ucom.ne.jp) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * acaos (~zac@209.99.103.42) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * renzhi (~renzhi@ec2-54-249-150-103.ap-northeast-1.compute.amazonaws.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * steki (~steki@91.195.39.5) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * jo0nas (~jonas@188-183-5-254-static.dk.customer.tdc.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * Karcaw (~evan@96-41-200-66.dhcp.elbg.wa.charter.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * LPG_ (~LPG@c-76-104-197-224.hsd1.wa.comcast.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * mmgaggle (~kyle@cerebrum.dreamservers.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * brambles (lechuck@s0.barwen.ch) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * geekmush (~Adium@cpe-66-68-198-33.rgv.res.rr.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * mattch (~mattch@pcw3047.see.ed.ac.uk) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * svg (~svg@hydargos.ginsys.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * saturnine (~saturnine@ashvm.saturne.in) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * hughsaunders (~hughsaund@wherenow.org) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * phantomcircuit (~phantomci@covertinferno.org) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * Zethrok (~martin@95.154.26.34) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * WintermeW_ (~WintermeW@212-83-158-61.rev.poneytelecom.eu) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * Guest823 (~jeremy@ip23.67-202-99.static.steadfastdns.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * wrale (~wrale@wrk-28-217.cs.wright.edu) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * Psi-Jack (~Psi-Jack@psi-jack.user.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * cclien (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * paradon (~thomas@60.234.66.253) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * rBEL (robbe@november.openminds.be) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * `10_ (~10@juke.fm) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * eightyeight (~atoponce@atoponce.user.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * Esmil (esmil@horus.0x90.dk) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * singler (~singler@zeta.kirneh.eu) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * sbadia (~sbadia@yasaw.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * kraken (~kraken@gw.sepia.ceph.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * rturk-away (~rturk@ds2390.dreamservers.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * cephalobot (~ceph@ds2390.dreamservers.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * Daviey (~DavieyOFT@bootie.daviey.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * thomnico (~thomnico@2a01:e35:8b41:120:d5ba:f134:a740:95b9) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * thb (~me@0001bd58.user.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * Kioob (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * godog (~filo@0001309c.user.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * guerby (~guerby@ip165-ipv6.tetaneutral.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * yeled (~yeled@spodder.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * dalgaaf (uid15138@id-15138.ealing.irccloud.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * fdmanana (~fdmanana@bl5-0-172.dsl.telepac.pt) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * NoX-1984 (~NoX-1984@master-gw.netlantic.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * zviratko (~zviratko@241-73-239-109.cust.centrio.cz) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * zapotah (~zapotah@dsl-hkibrasgw1-58c08e-250.dhcp.inet.fi) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * mjevans- (~mje@209.141.34.79) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * via (~via@smtp2.matthewvia.info) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * dosaboy_ (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * psieklFH_ (psiekl@wombat.eu.org) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * ingard (~cake@tu.rd.vc) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * Machske (~Bram@d5152D87C.static.telenet.be) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * morse (~morse@supercomputing.univpm.it) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * DLange (~DLange@dlange.user.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * nyerup (irc@jespernyerup.dk) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * grifferz_ (~andy@bitfolk.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * mdxi (~mdxi@50-199-109-154-static.hfc.comcastbusiness.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * athrift (~nz_monkey@203.86.205.13) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * toabctl (~toabctl@toabctl.de) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * stj (~s@tully.csail.mit.edu) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * mmmucky (~mucky@mucky.socket7.org) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * jerker_ (jerker@82ee1319.test.dnsbl.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * lurbs (user@uber.geek.nz) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * darkfader (~floh@88.79.251.60) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * clayg (~clayg@ec2-54-235-44-253.compute-1.amazonaws.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * svenneK (~sk@svenne.krap.dk) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * tomaw (tom@tomaw.noc.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * leseb (~leseb@88-190-214-97.rev.dedibox.fr) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * jnq (~jon@0001b7cc.user.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * capri_oner (~capri@212.218.127.222) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * Kdecherf (~kdecherf@shaolan.kdecherf.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * nhm_ (~nhm@65-128-180-101.mpls.qwest.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * ctd_ (~root@00011932.user.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * twx (~twx@rosamoln.org) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * ofu_ (ofu@dedi3.fuckner.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * wonko_be_ (bernard@november.openminds.be) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * neurodrone (~neurodron@static-108-30-171-7.nycmny.fios.verizon.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * warrenSusui (~Warren@2607:f298:a:607:61e0:1d53:455e:4435) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * wusui (~Warren@2607:f298:a:607:61e0:1d53:455e:4435) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * stewiem20001 (~stewiem20@195.10.250.233) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * joao (~joao@a79-168-11-205.cpe.netcabo.pt) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * vhasi (vhasi@vha.si) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * wido_ (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * Elbandi_ (~ea333@elbandi.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * sig_wall (~adjkru@185.14.185.91) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * mancdaz (~mancdaz@2a00:1a48:7807:102:94f4:6b56:ff08:886c) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * Meyer^ (meyer@c64.org) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * NafNaf (~NafNaf@5.148.165.184) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * mnash (~chatzilla@vpn.expressionanalysis.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * plantain (~plantain@106.187.96.118) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * todin (tuxadero@kudu.in-berlin.de) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * zackc (~zackc@0001ba60.user.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * tnt (~tnt@ec2-54-200-98-43.us-west-2.compute.amazonaws.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * pmatulis (~peter@64.34.151.178) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * cfreak201 (~cfreak200@p4FF3FAF6.dip0.t-ipconnect.de) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * jeremydei (~jdeininge@64.125.69.200) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * hijacker (~hijacker@bgva.sonic.taxback.ess.ie) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * [cave] (~quassel@boxacle.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * al (d@niel.cx) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * seif_ (sid11725@id-11725.ealing.irccloud.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * philips (~philips@ec2-23-22-175-220.compute-1.amazonaws.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * HauM1 (~HauM1@login.univie.ac.at) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * musca (musca@tyrael.eu) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * asmaps (~quassel@2a03:4000:2:3c5::80) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * Svedrin (svedrin@ketos.funzt-halt.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * arbrandes (~arbrandes@177.9.201.101) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * LeaChim (~LeaChim@host86-166-182-74.range86-166.btcentralplus.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * jbd_ (~jbd_@2001:41d0:52:a00::77) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * senk (~Adium@193.174.91.160) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * shang (~ShangWu@175.41.48.77) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * lxo (~aoliva@lxo.user.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * ponyofdeath (~vladi@cpe-76-167-201-214.san.res.rr.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * haomaiwang (~haomaiwan@218.71.76.134) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * alexxy (~alexxy@79.173.81.171) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * meis3 (~meise@oglarun.3st.be) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * dis (~dis@109.110.66.216) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * ferai (~quassel@corkblock.jefferai.org) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * jamespage (~jamespage@culvain.gromper.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * ccourtaut (~ccourtaut@2001:41d0:1:eed3::1) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * raso (~raso@deb-multimedia.org) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * rektide (~rektide@eldergods.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * aarontc (~aarontc@aarontc-1-pt.tunnel.tserv14.sea1.ipv6.he.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * erwyn (~erwyn@markelous.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * tom2 (~jens@s11.jayr.de) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * ChrisWork (~chris@skeeter-mxisp.openmarket.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * wogri (~wolf@nix.wogri.at) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * LCF (ball8@193.231.broadband16.iol.cz) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * Ormod (~valtha@ohmu.fi) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * toutour (~toutour@causses.idest.org) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * liiwi (liiwi@idle.fi) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * Anticimex (anticimex@95.80.32.80) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * loicd (~loicd@bouncer.dachary.org) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * clag (~clag@cl.noc.accelance.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * sileht (~sileht@gizmo.sileht.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * hjjg (~hg@p3EE323E1.dip0.t-ipconnect.de) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * ircolle (~Adium@2601:1:8380:2d9:3d4a:bc41:63c8:17ee) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * Drumplayr (~thomas@107-192-218-58.lightspeed.austtx.sbcglobal.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * eternaleye (~eternaley@c-24-17-202-252.hsd1.wa.comcast.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * c74d (~c74d@2002:4404:712c:0:bc84:f38c:2e99:3ed0) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * dec (~dec@ec2-54-252-14-44.ap-southeast-2.compute.amazonaws.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * asadpanda (~asadpanda@2001:470:c09d:0:20c:29ff:fe4e:a66) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * Azrael (~azrael@terra.negativeblue.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * peetaur2 (~peter@dhcp-108-168-3-60.cable.user.start.ca) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * aarcane_ (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * Nats_ (~Nats@telstr575.lnk.telstra.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * mkoderer_ (uid11949@id-11949.ealing.irccloud.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * joshd (~joshd@2607:f298:a:607:a4ee:601f:4e1b:b817) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * chutz (~chutz@rygel.linuxfreak.ca) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * fretb (~fretb@frederik.pw) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * ivan` (~ivan`@000130ca.user.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * kwmiebach (sid16855@2604:8300:100:200b:6667:3:0:41d7) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * guppy (~quassel@guppy.xxx) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * mdjp (~mdjp@213.229.87.114) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * hjorth (~hjorth@sig9.kill.dk) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * janos (~janos@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * NaioN (stefan@andor.naion.nl) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * bkero (~bkero@216.151.13.66) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * alexbligh (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * saaby (~as@mail.saaby.com) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * bauruine (~bauruine@2a01:4f8:150:6381::545) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * masterpe (~masterpe@2a01:670:400::43) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * fred` (fred@earthli.ng) Quit (graviton.oftc.net resistance.oftc.net)
[13:02] * brother (foobaz@vps1.hacking.dk) Quit (graviton.oftc.net resistance.oftc.net)
[13:05] * arbrandes (~arbrandes@177.9.201.101) has joined #ceph
[13:05] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[13:05] * fdmanana (~fdmanana@bl5-0-172.dsl.telepac.pt) has joined #ceph
[13:05] * NoX-1984 (~NoX-1984@master-gw.netlantic.net) has joined #ceph
[13:05] * zviratko (~zviratko@241-73-239-109.cust.centrio.cz) has joined #ceph
[13:05] * sileht (~sileht@gizmo.sileht.net) has joined #ceph
[13:05] * capri_oner (~capri@212.218.127.222) has joined #ceph
[13:05] * LeaChim (~LeaChim@host86-166-182-74.range86-166.btcentralplus.com) has joined #ceph
[13:05] * hjjg (~hg@p3EE323E1.dip0.t-ipconnect.de) has joined #ceph
[13:05] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[13:05] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[13:05] * zapotah (~zapotah@dsl-hkibrasgw1-58c08e-250.dhcp.inet.fi) has joined #ceph
[13:05] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[13:05] * thomnico (~thomnico@2a01:e35:8b41:120:d5ba:f134:a740:95b9) has joined #ceph
[13:05] * thb (~me@0001bd58.user.oftc.net) has joined #ceph
[13:05] * senk (~Adium@193.174.91.160) has joined #ceph
[13:05] * ircolle (~Adium@2601:1:8380:2d9:3d4a:bc41:63c8:17ee) has joined #ceph
[13:05] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[13:05] * shang (~ShangWu@175.41.48.77) has joined #ceph
[13:05] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[13:05] * Drumplayr (~thomas@107-192-218-58.lightspeed.austtx.sbcglobal.net) has joined #ceph
[13:05] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[13:05] * eternaleye (~eternaley@c-24-17-202-252.hsd1.wa.comcast.net) has joined #ceph
[13:05] * Kdecherf (~kdecherf@shaolan.kdecherf.com) has joined #ceph
[13:05] * c74d (~c74d@2002:4404:712c:0:bc84:f38c:2e99:3ed0) has joined #ceph
[13:05] * asadpanda (~asadpanda@2001:470:c09d:0:20c:29ff:fe4e:a66) has joined #ceph
[13:05] * dec (~dec@ec2-54-252-14-44.ap-southeast-2.compute.amazonaws.com) has joined #ceph
[13:05] * mjevans- (~mje@209.141.34.79) has joined #ceph
[13:05] * via (~via@smtp2.matthewvia.info) has joined #ceph
[13:05] * nhm_ (~nhm@65-128-180-101.mpls.qwest.net) has joined #ceph
[13:05] * dosaboy_ (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) has joined #ceph
[13:05] * ctd_ (~root@00011932.user.oftc.net) has joined #ceph
[13:05] * twx (~twx@rosamoln.org) has joined #ceph
[13:05] * ofu_ (ofu@dedi3.fuckner.net) has joined #ceph
[13:05] * wonko_be_ (bernard@november.openminds.be) has joined #ceph
[13:05] * Azrael (~azrael@terra.negativeblue.com) has joined #ceph
[13:05] * peetaur2 (~peter@dhcp-108-168-3-60.cable.user.start.ca) has joined #ceph
[13:05] * psieklFH_ (psiekl@wombat.eu.org) has joined #ceph
[13:05] * ponyofdeath (~vladi@cpe-76-167-201-214.san.res.rr.com) has joined #ceph
[13:05] * aarcane_ (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[13:05] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[13:05] * Nats_ (~Nats@telstr575.lnk.telstra.net) has joined #ceph
[13:05] * Kioob (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) has joined #ceph
[13:05] * neurodrone (~neurodron@static-108-30-171-7.nycmny.fios.verizon.net) has joined #ceph
[13:05] * kwmiebach (sid16855@2604:8300:100:200b:6667:3:0:41d7) has joined #ceph
[13:05] * mkoderer_ (uid11949@id-11949.ealing.irccloud.com) has joined #ceph
[13:05] * haomaiwang (~haomaiwan@218.71.76.134) has joined #ceph
[13:05] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[13:05] * warrenSusui (~Warren@2607:f298:a:607:61e0:1d53:455e:4435) has joined #ceph
[13:05] * wusui (~Warren@2607:f298:a:607:61e0:1d53:455e:4435) has joined #ceph
[13:05] * alexxy (~alexxy@79.173.81.171) has joined #ceph
[13:05] * ingard (~cake@tu.rd.vc) has joined #ceph
[13:05] * stewiem20001 (~stewiem20@195.10.250.233) has joined #ceph
[13:05] * meis3 (~meise@oglarun.3st.be) has joined #ceph
[13:05] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[13:05] * joshd (~joshd@2607:f298:a:607:a4ee:601f:4e1b:b817) has joined #ceph
[13:05] * dis (~dis@109.110.66.216) has joined #ceph
[13:05] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[13:05] * ferai (~quassel@corkblock.jefferai.org) has joined #ceph
[13:05] * Machske (~Bram@d5152D87C.static.telenet.be) has joined #ceph
[13:05] * clag (~clag@cl.noc.accelance.net) has joined #ceph
[13:05] * jamespage (~jamespage@culvain.gromper.net) has joined #ceph
[13:05] * ccourtaut (~ccourtaut@2001:41d0:1:eed3::1) has joined #ceph
[13:05] * joao (~joao@a79-168-11-205.cpe.netcabo.pt) has joined #ceph
[13:05] * raso (~raso@deb-multimedia.org) has joined #ceph
[13:05] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[13:05] * rektide (~rektide@eldergods.com) has joined #ceph
[13:05] * fretb (~fretb@frederik.pw) has joined #ceph
[13:05] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[13:05] * ivan` (~ivan`@000130ca.user.oftc.net) has joined #ceph
[13:05] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) has joined #ceph
[13:05] * godog (~filo@0001309c.user.oftc.net) has joined #ceph
[13:05] * DLange (~DLange@dlange.user.oftc.net) has joined #ceph
[13:05] * guerby (~guerby@ip165-ipv6.tetaneutral.net) has joined #ceph
[13:05] * yeled (~yeled@spodder.com) has joined #ceph
[13:05] * vhasi (vhasi@vha.si) has joined #ceph
[13:05] * guppy (~quassel@guppy.xxx) has joined #ceph
[13:05] * nyerup (irc@jespernyerup.dk) has joined #ceph
[13:05] * wido_ (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) has joined #ceph
[13:05] * Elbandi_ (~ea333@elbandi.net) has joined #ceph
[13:05] * grifferz_ (~andy@bitfolk.com) has joined #ceph
[13:05] * sig_wall (~adjkru@185.14.185.91) has joined #ceph
[13:05] * mancdaz (~mancdaz@2a00:1a48:7807:102:94f4:6b56:ff08:886c) has joined #ceph
[13:05] * dalgaaf (uid15138@id-15138.ealing.irccloud.com) has joined #ceph
[13:05] * mdxi (~mdxi@50-199-109-154-static.hfc.comcastbusiness.net) has joined #ceph
[13:05] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) has joined #ceph
[13:05] * Meyer^ (meyer@c64.org) has joined #ceph
[13:05] * athrift (~nz_monkey@203.86.205.13) has joined #ceph
[13:05] * NafNaf (~NafNaf@5.148.165.184) has joined #ceph
[13:05] * toabctl (~toabctl@toabctl.de) has joined #ceph
[13:05] * mnash (~chatzilla@vpn.expressionanalysis.com) has joined #ceph
[13:05] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[13:05] * plantain (~plantain@106.187.96.118) has joined #ceph
[13:05] * stj (~s@tully.csail.mit.edu) has joined #ceph
[13:05] * mmmucky (~mucky@mucky.socket7.org) has joined #ceph
[13:05] * jerker_ (jerker@82ee1319.test.dnsbl.oftc.net) has joined #ceph
[13:05] * leseb (~leseb@88-190-214-97.rev.dedibox.fr) has joined #ceph
[13:05] * lurbs (user@uber.geek.nz) has joined #ceph
[13:05] * darkfader (~floh@88.79.251.60) has joined #ceph
[13:05] * clayg (~clayg@ec2-54-235-44-253.compute-1.amazonaws.com) has joined #ceph
[13:05] * svenneK (~sk@svenne.krap.dk) has joined #ceph
[13:05] * tomaw (tom@tomaw.noc.oftc.net) has joined #ceph
[13:05] * jnq (~jon@0001b7cc.user.oftc.net) has joined #ceph
[13:05] * seif_ (sid11725@id-11725.ealing.irccloud.com) has joined #ceph
[13:05] * musca (musca@tyrael.eu) has joined #ceph
[13:05] * todin (tuxadero@kudu.in-berlin.de) has joined #ceph
[13:05] * philips (~philips@ec2-23-22-175-220.compute-1.amazonaws.com) has joined #ceph
[13:05] * HauM1 (~HauM1@login.univie.ac.at) has joined #ceph
[13:05] * zackc (~zackc@0001ba60.user.oftc.net) has joined #ceph
[13:05] * tnt (~tnt@ec2-54-200-98-43.us-west-2.compute.amazonaws.com) has joined #ceph
[13:05] * pmatulis (~peter@64.34.151.178) has joined #ceph
[13:05] * cfreak201 (~cfreak200@p4FF3FAF6.dip0.t-ipconnect.de) has joined #ceph
[13:05] * [cave] (~quassel@boxacle.net) has joined #ceph
[13:05] * asmaps (~quassel@2a03:4000:2:3c5::80) has joined #ceph
[13:05] * al (d@niel.cx) has joined #ceph
[13:05] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) has joined #ceph
[13:05] * Svedrin (svedrin@ketos.funzt-halt.net) has joined #ceph
[13:05] * jeremydei (~jdeininge@64.125.69.200) has joined #ceph
[13:05] * hijacker (~hijacker@bgva.sonic.taxback.ess.ie) has joined #ceph
[13:05] * aarontc (~aarontc@aarontc-1-pt.tunnel.tserv14.sea1.ipv6.he.net) has joined #ceph
[13:05] * erwyn (~erwyn@markelous.net) has joined #ceph
[13:05] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph
[13:05] * tom2 (~jens@s11.jayr.de) has joined #ceph
[13:05] * mdjp (~mdjp@213.229.87.114) has joined #ceph
[13:05] * ChrisWork (~chris@skeeter-mxisp.openmarket.com) has joined #ceph
[13:05] * Anticimex (anticimex@95.80.32.80) has joined #ceph
[13:05] * wogri (~wolf@nix.wogri.at) has joined #ceph
[13:05] * LCF (ball8@193.231.broadband16.iol.cz) has joined #ceph
[13:05] * Ormod (~valtha@ohmu.fi) has joined #ceph
[13:05] * liiwi (liiwi@idle.fi) has joined #ceph
[13:05] * toutour (~toutour@causses.idest.org) has joined #ceph
[13:05] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) has joined #ceph
[13:05] * loicd (~loicd@bouncer.dachary.org) has joined #ceph
[13:05] * fred` (fred@earthli.ng) has joined #ceph
[13:05] * NaioN (stefan@andor.naion.nl) has joined #ceph
[13:05] * bkero (~bkero@216.151.13.66) has joined #ceph
[13:05] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) has joined #ceph
[13:05] * saaby (~as@mail.saaby.com) has joined #ceph
[13:05] * janos (~janos@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[13:05] * brother (foobaz@vps1.hacking.dk) has joined #ceph
[13:05] * alexbligh (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) has joined #ceph
[13:05] * hjorth (~hjorth@sig9.kill.dk) has joined #ceph
[13:05] * masterpe (~masterpe@2a01:670:400::43) has joined #ceph
[13:05] * bauruine (~bauruine@2a01:4f8:150:6381::545) has joined #ceph
[13:09] * godog (~filo@0001309c.user.oftc.net) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * Kioob (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * thomnico (~thomnico@2a01:e35:8b41:120:d5ba:f134:a740:95b9) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * dalgaaf (uid15138@id-15138.ealing.irccloud.com) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * thb (~me@0001bd58.user.oftc.net) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * guerby (~guerby@ip165-ipv6.tetaneutral.net) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * yeled (~yeled@spodder.com) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * svenneK (~sk@svenne.krap.dk) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * clayg (~clayg@ec2-54-235-44-253.compute-1.amazonaws.com) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * darkfader (~floh@88.79.251.60) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * lurbs (user@uber.geek.nz) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * jerker_ (jerker@82ee1319.test.dnsbl.oftc.net) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * mmmucky (~mucky@mucky.socket7.org) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * stj (~s@tully.csail.mit.edu) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * athrift (~nz_monkey@203.86.205.13) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * mdxi (~mdxi@50-199-109-154-static.hfc.comcastbusiness.net) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * grifferz_ (~andy@bitfolk.com) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * morse (~morse@supercomputing.univpm.it) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * Machske (~Bram@d5152D87C.static.telenet.be) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * ingard (~cake@tu.rd.vc) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * psieklFH_ (psiekl@wombat.eu.org) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * dosaboy_ (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * via (~via@smtp2.matthewvia.info) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * mjevans- (~mje@209.141.34.79) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * zapotah (~zapotah@dsl-hkibrasgw1-58c08e-250.dhcp.inet.fi) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * zviratko (~zviratko@241-73-239-109.cust.centrio.cz) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * NoX-1984 (~NoX-1984@master-gw.netlantic.net) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * leseb (~leseb@88-190-214-97.rev.dedibox.fr) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * fdmanana (~fdmanana@bl5-0-172.dsl.telepac.pt) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * jnq (~jon@0001b7cc.user.oftc.net) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * toabctl (~toabctl@toabctl.de) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * nyerup (irc@jespernyerup.dk) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * tomaw (tom@tomaw.noc.oftc.net) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * DLange (~DLange@dlange.user.oftc.net) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * hijacker (~hijacker@bgva.sonic.taxback.ess.ie) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * jeremydei (~jdeininge@64.125.69.200) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * al (d@niel.cx) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * cfreak201 (~cfreak200@p4FF3FAF6.dip0.t-ipconnect.de) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * pmatulis (~peter@64.34.151.178) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * todin (tuxadero@kudu.in-berlin.de) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * plantain (~plantain@106.187.96.118) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * mnash (~chatzilla@vpn.expressionanalysis.com) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * Elbandi_ (~ea333@elbandi.net) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * vhasi (vhasi@vha.si) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * stewiem20001 (~stewiem20@195.10.250.233) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * wusui (~Warren@2607:f298:a:607:61e0:1d53:455e:4435) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * warrenSusui (~Warren@2607:f298:a:607:61e0:1d53:455e:4435) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * neurodrone (~neurodron@static-108-30-171-7.nycmny.fios.verizon.net) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * ofu_ (ofu@dedi3.fuckner.net) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * twx (~twx@rosamoln.org) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * ctd_ (~root@00011932.user.oftc.net) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * Kdecherf (~kdecherf@shaolan.kdecherf.com) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * capri_oner (~capri@212.218.127.222) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * sig_wall (~adjkru@185.14.185.91) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * NafNaf (~NafNaf@5.148.165.184) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * tnt (~tnt@ec2-54-200-98-43.us-west-2.compute.amazonaws.com) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * mancdaz (~mancdaz@2a00:1a48:7807:102:94f4:6b56:ff08:886c) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * wido_ (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * joao (~joao@a79-168-11-205.cpe.netcabo.pt) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * nhm_ (~nhm@65-128-180-101.mpls.qwest.net) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * Meyer^ (meyer@c64.org) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * seif_ (sid11725@id-11725.ealing.irccloud.com) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * zackc (~zackc@0001ba60.user.oftc.net) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * HauM1 (~HauM1@login.univie.ac.at) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * philips (~philips@ec2-23-22-175-220.compute-1.amazonaws.com) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * [cave] (~quassel@boxacle.net) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * Svedrin (svedrin@ketos.funzt-halt.net) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * wonko_be_ (bernard@november.openminds.be) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * asmaps (~quassel@2a03:4000:2:3c5::80) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * musca (musca@tyrael.eu) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * toutour (~toutour@causses.idest.org) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * Ormod (~valtha@ohmu.fi) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * LCF (ball8@193.231.broadband16.iol.cz) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * wogri (~wolf@nix.wogri.at) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * erwyn (~erwyn@markelous.net) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * raso (~raso@deb-multimedia.org) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * clag (~clag@cl.noc.accelance.net) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * ponyofdeath (~vladi@cpe-76-167-201-214.san.res.rr.com) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * shang (~ShangWu@175.41.48.77) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * senk (~Adium@193.174.91.160) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * jbd_ (~jbd_@2001:41d0:52:a00::77) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * LeaChim (~LeaChim@host86-166-182-74.range86-166.btcentralplus.com) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * arbrandes (~arbrandes@177.9.201.101) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * Anticimex (anticimex@95.80.32.80) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * ChrisWork (~chris@skeeter-mxisp.openmarket.com) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * loicd (~loicd@bouncer.dachary.org) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * tom2 (~jens@s11.jayr.de) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * aarontc (~aarontc@aarontc-1-pt.tunnel.tserv14.sea1.ipv6.he.net) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * rektide (~rektide@eldergods.com) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * ccourtaut (~ccourtaut@2001:41d0:1:eed3::1) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * dis (~dis@109.110.66.216) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * haomaiwang (~haomaiwan@218.71.76.134) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * lxo (~aoliva@lxo.user.oftc.net) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * liiwi (liiwi@idle.fi) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * ferai (~quassel@corkblock.jefferai.org) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * jamespage (~jamespage@culvain.gromper.net) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * meis3 (~meise@oglarun.3st.be) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * alexxy (~alexxy@79.173.81.171) Quit (resistance.oftc.net reticulum.oftc.net)
[13:09] * hjorth (~hjorth@sig9.kill.dk) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * janos (~janos@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * NaioN (stefan@andor.naion.nl) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * guppy (~quassel@guppy.xxx) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * fretb (~fretb@frederik.pw) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * kwmiebach (sid16855@2604:8300:100:200b:6667:3:0:41d7) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * Nats_ (~Nats@telstr575.lnk.telstra.net) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * peetaur2 (~peter@dhcp-108-168-3-60.cable.user.start.ca) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * Azrael (~azrael@terra.negativeblue.com) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * c74d (~c74d@2002:4404:712c:0:bc84:f38c:2e99:3ed0) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * eternaleye (~eternaley@c-24-17-202-252.hsd1.wa.comcast.net) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * Drumplayr (~thomas@107-192-218-58.lightspeed.austtx.sbcglobal.net) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * hjjg (~hg@p3EE323E1.dip0.t-ipconnect.de) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * bkero (~bkero@216.151.13.66) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * alexbligh (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * asadpanda (~asadpanda@2001:470:c09d:0:20c:29ff:fe4e:a66) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * saaby (~as@mail.saaby.com) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * mdjp (~mdjp@213.229.87.114) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * joshd (~joshd@2607:f298:a:607:a4ee:601f:4e1b:b817) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * dec (~dec@ec2-54-252-14-44.ap-southeast-2.compute.amazonaws.com) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * ircolle (~Adium@2601:1:8380:2d9:3d4a:bc41:63c8:17ee) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * aarcane_ (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * bauruine (~bauruine@2a01:4f8:150:6381::545) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * fred` (fred@earthli.ng) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * masterpe (~masterpe@2a01:670:400::43) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * chutz (~chutz@rygel.linuxfreak.ca) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * brother (foobaz@vps1.hacking.dk) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * ivan` (~ivan`@000130ca.user.oftc.net) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * sileht (~sileht@gizmo.sileht.net) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * mkoderer_ (uid11949@id-11949.ealing.irccloud.com) Quit (resistance.oftc.net weber.oftc.net)
[13:09] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) has joined #ceph
[13:09] * acaos (~zac@209.99.103.42) has joined #ceph
[13:09] * shimo (~A13032@122x212x216x66.ap122.ftth.ucom.ne.jp) has joined #ceph
[13:09] * simulx (~simulx@vpn.expressionanalysis.com) has joined #ceph
[13:09] * lightspeed (~lightspee@81.187.0.153) has joined #ceph
[13:09] * amospalla (~amospalla@0001a39c.user.oftc.net) has joined #ceph
[13:09] * markl (~mark@knm.org) has joined #ceph
[13:09] * sage (~quassel@2607:f298:a:607:6d19:193:ed4a:57ef) has joined #ceph
[13:09] * gregsfortytwo (~Adium@2607:f298:a:607:4a:149d:8f20:b9e) has joined #ceph
[13:09] * flaxy (~afx@78.130.174.164) has joined #ceph
[13:09] * dereky (~derek@pool-71-114-104-38.washdc.fios.verizon.net) has joined #ceph
[13:09] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[13:09] * leochill (~leochill@nyc-333.nycbit.com) has joined #ceph
[13:09] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) has joined #ceph
[13:09] * Qu310 (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[13:09] * blahnana (~bman@us1.blahnana.com) has joined #ceph
[13:09] * kitz (~kitz@admin161-194.hampshire.edu) has joined #ceph
[13:09] * garphy (~garphy@frank.zone84.net) has joined #ceph
[13:09] * kuu (~kuu@virtual362.tentacle.fi) has joined #ceph
[13:09] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[13:09] * wattsmarcus5 (~mdw@aa2.linuxbox.com) has joined #ceph
[13:09] * portante (~portante@nat-pool-bos-t.redhat.com) has joined #ceph
[13:09] * yuriw1 (~Adium@c-71-202-126-141.hsd1.ca.comcast.net) has joined #ceph
[13:09] * baffle (baffle@jump.stenstad.net) has joined #ceph
[13:09] * LiRul (~lirul@91.82.105.2) has joined #ceph
[13:09] * arbrandes (~arbrandes@177.9.201.101) has joined #ceph
[13:09] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[13:09] * fdmanana (~fdmanana@bl5-0-172.dsl.telepac.pt) has joined #ceph
[13:09] * NoX-1984 (~NoX-1984@master-gw.netlantic.net) has joined #ceph
[13:09] * zviratko (~zviratko@241-73-239-109.cust.centrio.cz) has joined #ceph
[13:09] * capri_oner (~capri@212.218.127.222) has joined #ceph
[13:09] * LeaChim (~LeaChim@host86-166-182-74.range86-166.btcentralplus.com) has joined #ceph
[13:09] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[13:09] * zapotah (~zapotah@dsl-hkibrasgw1-58c08e-250.dhcp.inet.fi) has joined #ceph
[13:09] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[13:09] * thomnico (~thomnico@2a01:e35:8b41:120:d5ba:f134:a740:95b9) has joined #ceph
[13:09] * thb (~me@0001bd58.user.oftc.net) has joined #ceph
[13:09] * senk (~Adium@193.174.91.160) has joined #ceph
[13:09] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[13:09] * shang (~ShangWu@175.41.48.77) has joined #ceph
[13:09] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[13:09] * Kdecherf (~kdecherf@shaolan.kdecherf.com) has joined #ceph
[13:09] * mjevans- (~mje@209.141.34.79) has joined #ceph
[13:09] * via (~via@smtp2.matthewvia.info) has joined #ceph
[13:09] * nhm_ (~nhm@65-128-180-101.mpls.qwest.net) has joined #ceph
[13:09] * dosaboy_ (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) has joined #ceph
[13:09] * ctd_ (~root@00011932.user.oftc.net) has joined #ceph
[13:09] * twx (~twx@rosamoln.org) has joined #ceph
[13:09] * ofu_ (ofu@dedi3.fuckner.net) has joined #ceph
[13:09] * wonko_be_ (bernard@november.openminds.be) has joined #ceph
[13:09] * psieklFH_ (psiekl@wombat.eu.org) has joined #ceph
[13:09] * ponyofdeath (~vladi@cpe-76-167-201-214.san.res.rr.com) has joined #ceph
[13:09] * Kioob (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) has joined #ceph
[13:09] * neurodrone (~neurodron@static-108-30-171-7.nycmny.fios.verizon.net) has joined #ceph
[13:09] * haomaiwang (~haomaiwan@218.71.76.134) has joined #ceph
[13:09] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[13:09] * warrenSusui (~Warren@2607:f298:a:607:61e0:1d53:455e:4435) has joined #ceph
[13:09] * wusui (~Warren@2607:f298:a:607:61e0:1d53:455e:4435) has joined #ceph
[13:09] * alexxy (~alexxy@79.173.81.171) has joined #ceph
[13:09] * ingard (~cake@tu.rd.vc) has joined #ceph
[13:09] * stewiem20001 (~stewiem20@195.10.250.233) has joined #ceph
[13:09] * meis3 (~meise@oglarun.3st.be) has joined #ceph
[13:09] * dis (~dis@109.110.66.216) has joined #ceph
[13:09] * ferai (~quassel@corkblock.jefferai.org) has joined #ceph
[13:09] * Machske (~Bram@d5152D87C.static.telenet.be) has joined #ceph
[13:09] * clag (~clag@cl.noc.accelance.net) has joined #ceph
[13:09] * jamespage (~jamespage@culvain.gromper.net) has joined #ceph
[13:09] * ccourtaut (~ccourtaut@2001:41d0:1:eed3::1) has joined #ceph
[13:09] * joao (~joao@a79-168-11-205.cpe.netcabo.pt) has joined #ceph
[13:09] * raso (~raso@deb-multimedia.org) has joined #ceph
[13:09] * rektide (~rektide@eldergods.com) has joined #ceph
[13:09] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[13:09] * godog (~filo@0001309c.user.oftc.net) has joined #ceph
[13:09] * DLange (~DLange@dlange.user.oftc.net) has joined #ceph
[13:09] * guerby (~guerby@ip165-ipv6.tetaneutral.net) has joined #ceph
[13:09] * yeled (~yeled@spodder.com) has joined #ceph
[13:09] * vhasi (vhasi@vha.si) has joined #ceph
[13:09] * nyerup (irc@jespernyerup.dk) has joined #ceph
[13:09] * wido_ (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) has joined #ceph
[13:10] * Elbandi_ (~ea333@elbandi.net) has joined #ceph
[13:10] * grifferz_ (~andy@bitfolk.com) has joined #ceph
[13:10] * sig_wall (~adjkru@185.14.185.91) has joined #ceph
[13:10] * mancdaz (~mancdaz@2a00:1a48:7807:102:94f4:6b56:ff08:886c) has joined #ceph
[13:10] * dalgaaf (uid15138@id-15138.ealing.irccloud.com) has joined #ceph
[13:10] * mdxi (~mdxi@50-199-109-154-static.hfc.comcastbusiness.net) has joined #ceph
[13:10] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) has joined #ceph
[13:10] * Meyer^ (meyer@c64.org) has joined #ceph
[13:10] * athrift (~nz_monkey@203.86.205.13) has joined #ceph
[13:10] * NafNaf (~NafNaf@5.148.165.184) has joined #ceph
[13:10] * toabctl (~toabctl@toabctl.de) has joined #ceph
[13:10] * mnash (~chatzilla@vpn.expressionanalysis.com) has joined #ceph
[13:10] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[13:10] * plantain (~plantain@106.187.96.118) has joined #ceph
[13:10] * stj (~s@tully.csail.mit.edu) has joined #ceph
[13:10] * mmmucky (~mucky@mucky.socket7.org) has joined #ceph
[13:10] * jerker_ (jerker@82ee1319.test.dnsbl.oftc.net) has joined #ceph
[13:10] * leseb (~leseb@88-190-214-97.rev.dedibox.fr) has joined #ceph
[13:10] * lurbs (user@uber.geek.nz) has joined #ceph
[13:10] * darkfader (~floh@88.79.251.60) has joined #ceph
[13:10] * clayg (~clayg@ec2-54-235-44-253.compute-1.amazonaws.com) has joined #ceph
[13:10] * svenneK (~sk@svenne.krap.dk) has joined #ceph
[13:10] * tomaw (tom@tomaw.noc.oftc.net) has joined #ceph
[13:10] * jnq (~jon@0001b7cc.user.oftc.net) has joined #ceph
[13:10] * seif_ (sid11725@id-11725.ealing.irccloud.com) has joined #ceph
[13:10] * musca (musca@tyrael.eu) has joined #ceph
[13:10] * todin (tuxadero@kudu.in-berlin.de) has joined #ceph
[13:10] * philips (~philips@ec2-23-22-175-220.compute-1.amazonaws.com) has joined #ceph
[13:10] * HauM1 (~HauM1@login.univie.ac.at) has joined #ceph
[13:10] * zackc (~zackc@0001ba60.user.oftc.net) has joined #ceph
[13:10] * tnt (~tnt@ec2-54-200-98-43.us-west-2.compute.amazonaws.com) has joined #ceph
[13:10] * pmatulis (~peter@64.34.151.178) has joined #ceph
[13:10] * cfreak201 (~cfreak200@p4FF3FAF6.dip0.t-ipconnect.de) has joined #ceph
[13:10] * [cave] (~quassel@boxacle.net) has joined #ceph
[13:10] * asmaps (~quassel@2a03:4000:2:3c5::80) has joined #ceph
[13:10] * al (d@niel.cx) has joined #ceph
[13:10] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) has joined #ceph
[13:10] * Svedrin (svedrin@ketos.funzt-halt.net) has joined #ceph
[13:10] * jeremydei (~jdeininge@64.125.69.200) has joined #ceph
[13:10] * hijacker (~hijacker@bgva.sonic.taxback.ess.ie) has joined #ceph
[13:10] * aarontc (~aarontc@aarontc-1-pt.tunnel.tserv14.sea1.ipv6.he.net) has joined #ceph
[13:10] * erwyn (~erwyn@markelous.net) has joined #ceph
[13:10] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph
[13:10] * tom2 (~jens@s11.jayr.de) has joined #ceph
[13:10] * loicd (~loicd@bouncer.dachary.org) has joined #ceph
[13:10] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) has joined #ceph
[13:10] * toutour (~toutour@causses.idest.org) has joined #ceph
[13:10] * liiwi (liiwi@idle.fi) has joined #ceph
[13:10] * Ormod (~valtha@ohmu.fi) has joined #ceph
[13:10] * LCF (ball8@193.231.broadband16.iol.cz) has joined #ceph
[13:10] * wogri (~wolf@nix.wogri.at) has joined #ceph
[13:10] * Anticimex (anticimex@95.80.32.80) has joined #ceph
[13:10] * ChrisWork (~chris@skeeter-mxisp.openmarket.com) has joined #ceph
[13:10] * sekon (~harish@li291-152.members.linode.com) has joined #ceph
[13:10] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[13:10] * renzhi (~renzhi@ec2-54-249-150-103.ap-northeast-1.compute.amazonaws.com) has joined #ceph
[13:10] * steki (~steki@91.195.39.5) has joined #ceph
[13:10] * jo0nas (~jonas@188-183-5-254-static.dk.customer.tdc.net) has joined #ceph
[13:10] * Karcaw (~evan@96-41-200-66.dhcp.elbg.wa.charter.com) has joined #ceph
[13:10] * KindOne (KindOne@0001a7db.user.oftc.net) has joined #ceph
[13:10] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[13:10] * LPG_ (~LPG@c-76-104-197-224.hsd1.wa.comcast.net) has joined #ceph
[13:10] * mmgaggle (~kyle@cerebrum.dreamservers.com) has joined #ceph
[13:10] * wrale (~wrale@wrk-28-217.cs.wright.edu) has joined #ceph
[13:10] * brambles (lechuck@s0.barwen.ch) has joined #ceph
[13:10] * eightyeight (~atoponce@atoponce.user.oftc.net) has joined #ceph
[13:10] * geekmush (~Adium@cpe-66-68-198-33.rgv.res.rr.com) has joined #ceph
[13:10] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) has joined #ceph
[13:10] * mattch (~mattch@pcw3047.see.ed.ac.uk) has joined #ceph
[13:10] * svg (~svg@hydargos.ginsys.net) has joined #ceph
[13:10] * saturnine (~saturnine@ashvm.saturne.in) has joined #ceph
[13:10] * hughsaunders (~hughsaund@wherenow.org) has joined #ceph
[13:10] * phantomcircuit (~phantomci@covertinferno.org) has joined #ceph
[13:10] * Zethrok (~martin@95.154.26.34) has joined #ceph
[13:10] * WintermeW_ (~WintermeW@212-83-158-61.rev.poneytelecom.eu) has joined #ceph
[13:10] * Guest823 (~jeremy@ip23.67-202-99.static.steadfastdns.net) has joined #ceph
[13:10] * Psi-Jack (~Psi-Jack@psi-jack.user.oftc.net) has joined #ceph
[13:10] * cclien (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) has joined #ceph
[13:10] * cephalobot (~ceph@ds2390.dreamservers.com) has joined #ceph
[13:10] * singler (~singler@zeta.kirneh.eu) has joined #ceph
[13:10] * `10_ (~10@juke.fm) has joined #ceph
[13:10] * rturk-away (~rturk@ds2390.dreamservers.com) has joined #ceph
[13:10] * Esmil (esmil@horus.0x90.dk) has joined #ceph
[13:10] * kraken (~kraken@gw.sepia.ceph.com) has joined #ceph
[13:10] * rBEL (robbe@november.openminds.be) has joined #ceph
[13:10] * paradon (~thomas@60.234.66.253) has joined #ceph
[13:10] * Daviey (~DavieyOFT@bootie.daviey.com) has joined #ceph
[13:10] * sbadia (~sbadia@yasaw.net) has joined #ceph
[13:10] * diegows (~diegows@190.190.17.57) has joined #ceph
[13:10] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) has joined #ceph
[13:10] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[13:10] * allsystemsarego (~allsystem@188.25.135.30) has joined #ceph
[13:10] * mattt (~textual@92.52.76.140) has joined #ceph
[13:10] * alexm_ (~alexm@83.167.43.235) has joined #ceph
[13:10] * srenatus (~stephan@185.27.182.16) has joined #ceph
[13:10] * rendar (~s@87.19.183.241) has joined #ceph
[13:10] * sglwlb (~sglwlb@221.12.27.202) has joined #ceph
[13:10] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) has joined #ceph
[13:10] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) has joined #ceph
[13:10] * Vacum (~vovo@88.130.203.241) has joined #ceph
[13:10] * hflai (hflai@alumni.cs.nctu.edu.tw) has joined #ceph
[13:10] * davidzlap1 (~Adium@cpe-23-242-31-175.socal.res.rr.com) has joined #ceph
[13:10] * tziOm (~bjornar@ns3.uniweb.no) has joined #ceph
[13:10] * BillK (~BillK-OFT@106-68-142-217.dyn.iinet.net.au) has joined #ceph
[13:10] * glambert_ (~glambert@ptr-22.204.219.82.rev.exa.net.uk) has joined #ceph
[13:10] * nwf_ (~nwf@67.62.51.95) has joined #ceph
[13:10] * dlan (~dennis@116.228.88.131) has joined #ceph
[13:10] * Jahkeup (Jahkeup@gemini.ca.us.panicbnc.net) has joined #ceph
[13:10] * Sargun (~sargun@208-106-98-2.static.sonic.net) has joined #ceph
[13:10] * Guest650 (~coyo@thinks.outside.theb0x.org) has joined #ceph
[13:10] * ChanServ sets mode +v scuttlemonkey
[13:12] * sileht (~sileht@gizmo.sileht.net) has joined #ceph
[13:12] * hjjg (~hg@p3EE323E1.dip0.t-ipconnect.de) has joined #ceph
[13:12] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[13:12] * ircolle (~Adium@2601:1:8380:2d9:3d4a:bc41:63c8:17ee) has joined #ceph
[13:12] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[13:12] * Drumplayr (~thomas@107-192-218-58.lightspeed.austtx.sbcglobal.net) has joined #ceph
[13:12] * eternaleye (~eternaley@c-24-17-202-252.hsd1.wa.comcast.net) has joined #ceph
[13:12] * c74d (~c74d@2002:4404:712c:0:bc84:f38c:2e99:3ed0) has joined #ceph
[13:12] * asadpanda (~asadpanda@2001:470:c09d:0:20c:29ff:fe4e:a66) has joined #ceph
[13:12] * dec (~dec@ec2-54-252-14-44.ap-southeast-2.compute.amazonaws.com) has joined #ceph
[13:12] * Azrael (~azrael@terra.negativeblue.com) has joined #ceph
[13:12] * peetaur2 (~peter@dhcp-108-168-3-60.cable.user.start.ca) has joined #ceph
[13:12] * aarcane_ (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[13:12] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[13:12] * Nats_ (~Nats@telstr575.lnk.telstra.net) has joined #ceph
[13:12] * kwmiebach (sid16855@2604:8300:100:200b:6667:3:0:41d7) has joined #ceph
[13:12] * mkoderer_ (uid11949@id-11949.ealing.irccloud.com) has joined #ceph
[13:12] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[13:12] * joshd (~joshd@2607:f298:a:607:a4ee:601f:4e1b:b817) has joined #ceph
[13:12] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[13:12] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[13:12] * fretb (~fretb@frederik.pw) has joined #ceph
[13:12] * ivan` (~ivan`@000130ca.user.oftc.net) has joined #ceph
[13:12] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) has joined #ceph
[13:12] * guppy (~quassel@guppy.xxx) has joined #ceph
[13:12] * mdjp (~mdjp@213.229.87.114) has joined #ceph
[13:12] * bauruine (~bauruine@2a01:4f8:150:6381::545) has joined #ceph
[13:12] * masterpe (~masterpe@2a01:670:400::43) has joined #ceph
[13:12] * hjorth (~hjorth@sig9.kill.dk) has joined #ceph
[13:12] * alexbligh (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) has joined #ceph
[13:12] * brother (foobaz@vps1.hacking.dk) has joined #ceph
[13:12] * janos (~janos@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[13:12] * saaby (~as@mail.saaby.com) has joined #ceph
[13:12] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) has joined #ceph
[13:12] * bkero (~bkero@216.151.13.66) has joined #ceph
[13:12] * NaioN (stefan@andor.naion.nl) has joined #ceph
[13:12] * fred` (fred@earthli.ng) has joined #ceph
[13:12] * ChanServ sets mode +o ircolle
[13:12] * ChanServ sets mode +v elder
[13:19] <glambert_> anyone else having difficulties with rbd-fuse?
[13:23] * ssss (~sglwlb@124.90.106.171) has joined #ceph
[13:24] * ssss is now known as gnlwlb
[13:24] * gnlwlb (~sglwlb@124.90.106.171) Quit ()
[13:24] * gnlwlb (~sglwlb@124.90.106.171) has joined #ceph
[13:33] * ircolle1 (~Adium@c-67-172-132-222.hsd1.co.comcast.net) has joined #ceph
[13:34] * ircolle (~Adium@2601:1:8380:2d9:3d4a:bc41:63c8:17ee) Quit (Read error: Connection reset by peer)
[13:37] * thomnico (~thomnico@2a01:e35:8b41:120:d5ba:f134:a740:95b9) Quit (Quit: Ex-Chat)
[13:40] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[13:43] * renzhi (~renzhi@ec2-54-249-150-103.ap-northeast-1.compute.amazonaws.com) Quit (Ping timeout: 480 seconds)
[13:46] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[13:47] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[13:49] * renzhi (~renzhi@122.226.73.152) has joined #ceph
[13:53] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) has joined #ceph
[13:55] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[13:59] * senk (~Adium@193.174.91.160) Quit (Quit: Leaving.)
[13:59] * senk (~Adium@193.174.91.160) has joined #ceph
[14:07] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[14:08] * senk (~Adium@193.174.91.160) Quit (Ping timeout: 480 seconds)
[14:08] * srenatus (~stephan@185.27.182.16) Quit (Ping timeout: 480 seconds)
[14:10] * jks (~jks@3e6b5724.rev.stofanet.dk) Quit (Read error: Connection reset by peer)
[14:12] * srenatus (~stephan@185.27.182.16) has joined #ceph
[14:16] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Remote host closed the connection)
[14:18] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[14:20] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[14:20] * ChanServ sets mode +v andreask
[14:28] * markbby (~Adium@168.94.245.3) has joined #ceph
[14:29] * jcsp (~Adium@0001bf3a.user.oftc.net) has joined #ceph
[14:31] * markbby (~Adium@168.94.245.3) Quit (Remote host closed the connection)
[14:32] * renzhi (~renzhi@122.226.73.152) Quit (Ping timeout: 480 seconds)
[14:38] * Narb (~Narb@c-98-207-60-126.hsd1.ca.comcast.net) has joined #ceph
[14:41] * schmee (~quassel@phobos.isoho.st) has joined #ceph
[14:46] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[14:48] * renzhi (~renzhi@122.226.73.152) has joined #ceph
[14:48] * sarob (~sarob@2601:9:7080:13a:5c4c:6c38:46e9:ef85) has joined #ceph
[14:50] * gaveen (~gaveen@175.157.67.169) Quit (Remote host closed the connection)
[14:56] * sarob (~sarob@2601:9:7080:13a:5c4c:6c38:46e9:ef85) Quit (Ping timeout: 480 seconds)
[14:56] * sroy (~sroy@2607:fad8:4:6:6e88:14ff:feff:5374) has joined #ceph
[14:58] * sarob (~sarob@2601:9:7080:13a:6d42:717e:7e34:38ff) has joined #ceph
[15:01] <Fetch> has anyone ever seen anything like an rbd plugin for Bacula?
[15:03] * agh (~oftc-webi@gw-to-666.outscale.net) has joined #ceph
[15:03] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[15:05] * thomnico (~thomnico@2a01:e35:8b41:120:d5ba:f134:a740:95b9) has joined #ceph
[15:06] * sarob (~sarob@2601:9:7080:13a:6d42:717e:7e34:38ff) Quit (Ping timeout: 480 seconds)
[15:11] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[15:12] <agh> Hello to all
[15:13] * alexxy[home] (~alexxy@79.173.81.171) has joined #ceph
[15:13] * alexxy (~alexxy@79.173.81.171) Quit (Read error: Connection reset by peer)
[15:13] <agh> What is the configuration value to tell Ceph to wait X seconds before recovering the cluster ?
[15:13] <agh> => My nodes are pretty slow to reboot, and recovery starts before a node's reboot is over.
[15:13] <agh> I want a delay of, let's say 10 minutes.
[15:13] <agh> I can't find in the doc which option to set
[15:13] <agh> any idea ?
[15:16] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[15:17] * haomaiwang (~haomaiwan@218.71.76.134) Quit (Remote host closed the connection)
[15:17] * dereky (~derek@pool-71-114-104-38.washdc.fios.verizon.net) Quit (Quit: dereky)
[15:17] * haomaiwang (~haomaiwan@101.78.195.61) has joined #ceph
[15:17] <beardo> agh, this might be what you want: http://ceph.com/docs/next/rados/troubleshooting/troubleshooting-osd/#stopping-w-out-rebalancing
[15:18] <agh> beardo: yes, but this is a manual task
[15:18] * dereky (~derek@pool-71-114-104-38.washdc.fios.verizon.net) has joined #ceph
[15:19] <agh> beardo: what i want is to be sure that if, for instance, I have a power cut on one rack, the recovery process will not start for 30 minutes
[15:20] * zere (~matt@asklater.com) has joined #ceph
[15:20] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[15:21] * zack_dolby (~textual@e0109-106-188-127-188.uqwimax.jp) has joined #ceph
[15:21] <beardo> agh, hmm, looks like you can adjust the heartbeat timeout intervals: http://ceph.com/docs/next/rados/configuration/mon-osd-interaction/
[15:21] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) has joined #ceph
[15:22] <beardo> but if there's a real problem, the node won't be marked out for that same interval
[15:22] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) Quit (Quit: Leaving.)
[15:22] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) has joined #ceph
[15:22] * owenmurr (~owen@109.175.201.0) has joined #ceph
[15:22] * zack_dolby (~textual@e0109-106-188-127-188.uqwimax.jp) Quit ()
[15:23] * owenmurr (~owen@109.175.201.0) Quit ()
[15:23] * owenmurr (~owen@109.175.201.0) has joined #ceph
[15:24] * owenmurr (~owen@109.175.201.0) Quit ()
[15:24] * owenmurr (~owen@109.175.201.0) has joined #ceph
[15:24] <Gugge-47527> agh: mon_osd_down_out_interval
[15:25] <agh> Gugge-47527: great. Thanks a lot
[15:25] * owenmurr (~owen@109.175.201.0) Quit ()
[15:25] * markbby (~Adium@168.94.245.1) has joined #ceph
[15:26] * mjeanson (~mjeanson@bell.multivax.ca) has joined #ceph
[15:26] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[15:28] <Gugge-47527> agh: but as beardo says, it will also wait longer to recover when a real crash happens
[15:29] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) Quit (Read error: Operation timed out)
[15:30] * dereky (~derek@pool-71-114-104-38.washdc.fios.verizon.net) Quit (Quit: dereky)
[15:31] <agh> Gugge-47527: yes but is it a problem ?
[15:31] <agh> Gugge-47527: the cluster will be in DEGRADED mode, no?
[15:34] * owenmurr (~owen@109.175.201.0) has joined #ceph
[15:35] * owenmurr (~owen@109.175.201.0) Quit ()
[15:36] <Gugge-47527> agh: it will take longer before it starts to recover
[15:36] <Gugge-47527> whether that is a problem for you i can't really tell you :)
[15:40] * owenmurr (~owen@109.175.201.0) has joined #ceph
[15:40] * owenmurr (~owen@109.175.201.0) Quit ()
[15:41] * owenmurr (~owen@109.175.201.0) has joined #ceph
[15:45] * owenmurr (~owen@109.175.201.0) Quit ()
[15:45] * owenmurr (~owen@109.175.201.0) has joined #ceph
[15:45] * owenmurr (~owen@109.175.201.0) Quit ()
[15:47] * owenmurr (~owen@109.175.201.0) has joined #ceph
[15:48] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[15:49] * owenmurr (~owen@109.175.201.0) Quit ()
[15:49] * owenmurr (~owen@109.175.201.0) has joined #ceph
[15:50] * thomnico (~thomnico@2a01:e35:8b41:120:d5ba:f134:a740:95b9) Quit (Quit: Ex-Chat)
[15:51] * senk (~Adium@212.201.122.52) has joined #ceph
[15:51] * owenmurr (~owen@109.175.201.0) Quit ()
[15:51] * owenmurr (~owen@109.175.201.0) has joined #ceph
[15:51] * sarob (~sarob@2601:9:7080:13a:d8bf:c125:540f:bcd2) has joined #ceph
[15:52] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:53] * lupu (~lupu@86.107.101.246) has joined #ceph
[15:53] <kitz> agh: If you're doing planned maintenance then you'll likely want to manually set noout before rebooting the node.
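(Editor's note: a sketch of the planned-maintenance flow kitz describes, using the cluster-wide "noout" flag. These commands run against a live cluster with admin credentials; they are shown for illustration only.)

```shell
ceph osd set noout      # down OSDs are not marked "out", so no rebalancing starts
# ... reboot / service the node ...
ceph osd unset noout    # restore normal down -> out behaviour
```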
[15:53] * owenmurr (~owen@109.175.201.0) Quit ()
[15:53] * jks (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[15:54] * owenmurr (~owen@109.175.201.0) has joined #ceph
[15:54] <agh> kitz: yes sure. But my goal is to deal with power cuts.
[15:54] <agh> kitz: i have 40 servers across 4 racks.
[15:55] <agh> kitz: each server has a single power cord; odd nodes are connected to feed A, and even nodes to feed B.
[15:55] <kitz> So, a power cut is a temporary loss of power?
[15:56] <agh> kitz: I want to be sure that if there is a power cut of 15 minutes for instance, that data will not be balanced everywhere
[15:56] <agh> kitz: yes
[15:56] <agh> in fact, we had this problem in prod :
[15:56] <kitz> yeah, then mon_osd_down_out_interval sounds like what you're looking for.
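(Editor's note: for reference, the option named above goes in ceph.conf; the value is in seconds. A sketch for a 30-minute grace period before a "down" OSD is marked "out":)

```
[mon]
mon osd down out interval = 1800    # seconds to wait before marking a down OSD out
```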
[15:56] <agh> Feed A down for one hour. So data is rebalanced across the cluster.
[15:56] * dereky (~derek@129-2-129-152.wireless.umd.edu) has joined #ceph
[15:57] <agh> Then, Feed A goes UP again, but... feed B goes down...
[15:57] <agh> before that recovery is done...
[15:57] <kitz> though, really you might be looking for UPSs.
[15:57] * owenmurr (~owen@109.175.201.0) Quit ()
[15:58] <agh> it was planned maintenance from the DC... But it was a little bit longer than expected :/
[15:58] <agh> Our idea is to say "Ceph can deal with a degraded cluster" (we have a replica size of 2). So if half of the cluster is down for an hour : no problem. Data should still be accessible.
[15:59] <agh> (by the way, sorry for my english)
[15:59] <kitz> You're still going to run into problems though. (your english is totally fine)
[15:59] <agh> kitz: what kind of problems ?
[15:59] * sarob (~sarob@2601:9:7080:13a:d8bf:c125:540f:bcd2) Quit (Ping timeout: 480 seconds)
[16:00] <kitz> If feed A is cut and you receive writes to the nodes on feed B, then when A comes back and feed B is cut ceph will know that A is out of date
[16:00] <kitz> (assuming that B is cut before it can update all of the nodes on A)
[16:01] <agh> yes
[16:01] <agh> the best way would be to have dual-power servers... plugged into feed A and feed B...
[16:09] * linuxkidd (~linuxkidd@2001:420:2100:2258:39d3:de25:be2d:1e03) has joined #ceph
[16:10] <Gugge-47527> and remember you need over half of the monitors to be running
[16:11] * thomnico (~thomnico@2a01:e35:8b41:120:d5ba:f134:a740:95b9) has joined #ceph
[16:15] <Fetch> is there a downside to using rbd to create and mount a filesystem in Ceph vice CephFS, if I'm not attempting multiple writers?
[16:15] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[16:17] <Gugge-47527> the downside is that you dont get a distributed fs :)
[16:17] <Gugge-47527> but if you only need a single host to access it, you are fine
[16:18] <Fetch> thanks. Setting up a bacula server with ceph as the storage, so supporting a cephfs mount requires something silly like fedora 20, when centos+rbd might suffice for me ;)
[16:19] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[16:20] * renzhi (~renzhi@122.226.73.152) Quit (Ping timeout: 480 seconds)
[16:21] * mauro (~mauro@dynamic-adsl-62-10-148-212.clienti.tiscali.it) has joined #ceph
[16:22] * gregmark (~Adium@68.87.42.115) has joined #ceph
[16:24] * agh (~oftc-webi@gw-to-666.outscale.net) Quit (Quit: Page closed)
[16:25] * JCL (~JCL@2601:9:3280:5a3:a8d5:5d85:6233:169b) has joined #ceph
[16:25] * JCL (~JCL@2601:9:3280:5a3:a8d5:5d85:6233:169b) Quit (Remote host closed the connection)
[16:26] * mauro (~mauro@dynamic-adsl-62-10-148-212.clienti.tiscali.it) Quit (Quit: Sto andando via)
[16:26] * JCL (~JCL@2601:9:3280:5a3:8480:23cf:94d3:1bda) has joined #ceph
[16:27] * garphy is now known as garphy`aw
[16:28] * joelio (~Joel@88.198.107.214) has joined #ceph
[16:28] <joelio> hey hey
[16:29] <joelio> anyone using cache pools in anger yet? Just loaded up our setup with an SSD-only pool, wondering if it's a good idea to try a cache pool now, whilst the SSDs are unladen
[16:32] * LiRul (~lirul@91.82.105.2) has left #ceph
[16:33] * senk (~Adium@212.201.122.52) Quit (Ping timeout: 480 seconds)
[16:38] * chris_lu_ (~ccc2@bolin.Lib.lehigh.EDU) Quit (Quit: Leaving)
[16:45] * fghaas (~florian@91-119-115-62.dynamic.xdsl-line.inode.at) has joined #ceph
[16:45] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:46] * ChanServ sets mode +v sage
[16:47] * gnlwlb (~sglwlb@124.90.106.171) Quit ()
[16:49] <singler> Fetch: you can use cephfs on fuse using centos
[16:50] * dereky (~derek@129-2-129-152.wireless.umd.edu) Quit (Ping timeout: 480 seconds)
[16:50] * hjjg (~hg@p3EE323E1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[16:52] * sarob (~sarob@2601:9:7080:13a:2859:ab5b:764f:6398) has joined #ceph
[16:56] * haomaiwa_ (~haomaiwan@218.71.76.134) has joined #ceph
[16:58] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[16:59] * senk (~Adium@ip-5-147-216-213.unitymediagroup.de) has joined #ceph
[17:00] * sarob (~sarob@2601:9:7080:13a:2859:ab5b:764f:6398) Quit (Ping timeout: 480 seconds)
[17:02] * sarob (~sarob@2601:9:7080:13a:6066:4901:fc5:f952) has joined #ceph
[17:02] * NoX-1984 (~NoX-1984@master-gw.netlantic.net) Quit ()
[17:03] * haomaiwang (~haomaiwan@101.78.195.61) Quit (Read error: Operation timed out)
[17:04] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) Quit (Read error: Operation timed out)
[17:05] * renzhi (~renzhi@122.226.73.152) has joined #ceph
[17:07] * diegows (~diegows@190.190.17.57) Quit (Read error: Operation timed out)
[17:10] * sarob (~sarob@2601:9:7080:13a:6066:4901:fc5:f952) Quit (Ping timeout: 480 seconds)
[17:15] * thomnico (~thomnico@2a01:e35:8b41:120:d5ba:f134:a740:95b9) Quit (Quit: Ex-Chat)
[17:18] * reed (~reed@net-188-153-202-54.cust.dsl.teletu.it) has joined #ceph
[17:19] * renzhi (~renzhi@122.226.73.152) Quit (Read error: Operation timed out)
[17:21] * xarses (~andreww@12.164.168.117) has joined #ceph
[17:22] * sprachgenerator (~sprachgen@130.202.135.192) has joined #ceph
[17:22] <joelio> hmm.. can't actually use this SSD pool. I feel as though I'm missing an anchor to the hosts, but not sure.. https://gist.github.com/joelio/cde87b24f3e1cd7b5631
[17:23] * senk (~Adium@ip-5-147-216-213.unitymediagroup.de) Quit (Quit: Leaving.)
[17:23] <joelio> Once the SSD pool is created - I then run - ceph osd pool set ssd crush_ruleset 3
[17:23] <joelio> can't rados bench it.. just sits there with 0 throughput
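(Editor's note: rados bench hanging at 0 throughput on a new pool is typically what a CRUSH rule that cannot map PGs to enough OSDs looks like. A sketch of what ruleset 3 might contain in the decompiled crush map, assuming an SSD-only root named "ssd" has been added to the hierarchy; the names here are hypothetical and the gist above was not available to verify against.)

```
rule ssd {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        step take ssd                        # descend from the SSD-only root
        step chooseleaf firstn 0 type host   # place each replica on a distinct host
        step emit
}
```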
[17:23] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[17:31] * reed (~reed@net-188-153-202-54.cust.dsl.teletu.it) Quit (Quit: Ex-Chat)
[17:31] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[17:32] * sameer (~sadhikar@fmdmzpr01-ext.fm.intel.com) has joined #ceph
[17:33] * sameer (~sadhikar@fmdmzpr01-ext.fm.intel.com) Quit ()
[17:36] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[17:37] * steki (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[17:38] <zere> ceph looks like it would be really good as a key-(large-)value store, and i wanted to try it out, but i don't have an environment on which i can get root/sudo - how difficult is that going to make deploying ceph?
[17:39] * gregsfortytwo (~Adium@2607:f298:a:607:4a:149d:8f20:b9e) Quit (Quit: Leaving.)
[17:39] * gregsfortytwo (~Adium@2607:f298:a:607:8df8:dbcc:3bf4:6ec) has joined #ceph
[17:39] * davidzlap1 (~Adium@cpe-23-242-31-175.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[17:40] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) has joined #ceph
[17:41] * renzhi (~renzhi@122.226.73.152) has joined #ceph
[17:42] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:43] <fghaas> zere: that will render it practically impossible, but why not do it in VMs?
[17:44] * bitblt (~don@128-107-239-235.cisco.com) has joined #ceph
[17:45] <zere> the machines i have access to are already VMs, and i fear that running another VM within them is just going to compound the i/o latency.
[17:46] <zere> perhaps i misunderstood something - what inside ceph, rather than the ceph-deploy scripts, requires privileged access?
[17:48] <pmatulis> zere: the daemons
[17:48] <joelio> zere: Ceph contains a k/v store already (leveldb) but that's internal and not exposed as a service. Not sure why you wouldn't run redis in VMs or whatever.. perhaps you could explain what you're trying to achieve?
[17:49] <joelio> and a key/large binary blob, sounds like S3 to me..
[17:50] <joelio> which is exposed as a service - http://ceph.com/docs/master/man/8/radosgw/
[17:53] * lupu (~lupu@86.107.101.246) Quit (Ping timeout: 480 seconds)
[17:54] <jcsp> zere: it depends what kind of storage you intend to use. Naturally using hard drives directly (for best performance with Ceph) will require root.
[17:55] <jcsp> you can run as an unprivileged user, but you would have to write some scripts of your own that didn't require root to start the ceph services as an unprivileged user, write a special config file that pointed to log files in a home directory, etc etc.
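A minimal sketch of the kind of "special config file" jcsp describes, assuming a hypothetical unprivileged user `alice` who keeps all cluster state under her home directory (the paths and single-SSD-free, file-backed OSD layout are illustrative, not a tested deployment):

```ini
; ceph.conf fragment for an all-in-$HOME, non-root test cluster (illustrative)
[global]
    run dir = /home/alice/ceph/run           ; pid/admin-socket files, normally /var/run/ceph
    log file = /home/alice/ceph/log/$name.log
    keyring = /home/alice/ceph/$name.keyring

[osd]
    osd data = /home/alice/ceph/osd.$id      ; plain directory instead of a dedicated disk
    osd journal = /home/alice/ceph/osd.$id/journal
```

The daemons would then be started by hand (or by user-level scripts) with `-c` pointing at this file; as jcsp notes, nothing about this resembles a production deployment, so benchmark numbers from it should be treated accordingly.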
[17:55] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[17:55] <zere> yup, S3 would be a good fit, but we're looking to self-host. we have a very large number of images (roughly 5 billion at the moment), ranging from a few 100 bytes to 50kB, and it's a read-heavy (but not exclusive) load.
[17:56] <jcsp> zere: with Ceph, you have a choice between using RadosGW, which exposes an S3-compatible interface, or using librados to access Ceph's object store natively.
[17:56] <jcsp> tbh I'm a bit surprised you have a 5 billion image repository but don't have root on any servers though!
[17:57] <zere> jcsp: yup. using librados directly matches the way we do things at present, with the front-end servers aware of the partitioning on the storage nodes.
[17:58] <zere> but the existing (in-house) software isn't able to handle dynamic reconfiguration, and that's really starting to hurt.
[17:58] <jcsp> if you were to use librados the clients would not have to be aware of the data placement: you would just be talking to ceph the cluster, and the rest would be under the hood
[17:59] * alram (~alram@38.122.20.226) has joined #ceph
[17:59] <jcsp> it would be interesting to compare riak and ceph for this kind of use case.
[17:59] * mattt (~textual@92.52.76.140) Quit (Read error: Connection reset by peer)
[18:01] <zere> riak was one of the suggested alternatives. personally, i think ceph (with partition-aware clients) is a better fit, and would perform better. also, rados is incredibly interesting from a data center redundancy point of view.
[18:02] <zere> i'd like to benchmark one against the other, but i've so far not managed to understand whether it's possible for me to deploy ceph in such a restricted environment.
[18:03] <jcsp> it's possible, but your benchmark results would be junk. A real ceph deployment would be using block devices (HDDs, SSDs) natively: you'll need to get that environment to make an authentic comparison.
[18:03] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:04] * senk (~Adium@ip-5-147-216-213.unitymediagroup.de) has joined #ceph
[18:04] <jcsp> it's common to do this kind of evaluation on small-scale systems, like a 3-node cluster
[18:05] * ShaunR (~ShaunR@staff.ndchost.com) has joined #ceph
[18:05] * davidzlap1 (~Adium@cpe-23-242-31-175.socal.res.rr.com) has joined #ceph
[18:05] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[18:06] * owenmurr (~owen@109.175.201.0) has joined #ceph
[18:06] <zere> fair enough. that makes a lot of sense. i'll have to see what i can do (or get EC2 nodes). thanks for your help!
[18:06] * thomnico (~thomnico@2a01:e35:8b41:120:d5ba:f134:a740:95b9) has joined #ceph
[18:08] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Quit: Leaving.)
[18:15] * owenmurr (~owen@109.175.201.0) Quit (Quit: leaving)
[18:15] * owenmurr (~owen@109.175.201.0) has joined #ceph
[18:16] * owenmurr (~owen@109.175.201.0) Quit ()
[18:16] * reed (~reed@net-188-153-202-54.cust.dsl.teletu.it) has joined #ceph
[18:17] * owenmurr (~owenmurr@109.175.201.0) has joined #ceph
[18:18] * owenmurr (~owenmurr@109.175.201.0) Quit ()
[18:20] * renzhi (~renzhi@122.226.73.152) Quit (Read error: Operation timed out)
[18:22] <joelio> zere: you would use s3 api's and change the endpoint to be your ceph cluster (in front of load balancers)
[18:22] <joelio> not use librados (that's for lower level tools)
[18:23] <joelio> This is a bit of prototype code I wrote (even supports mulit-part chunking) https://gist.github.com/joelio/f91f0d46aa9de53cd5e3
[18:23] * renzhi (~renzhi@122.226.73.152) has joined #ceph
[18:27] * rotbeard (~redbeard@2a02:908:df10:5c80:76f0:6dff:fe3b:994d) has joined #ceph
[18:30] * chris_lu (~ccc2@bolin.Lib.lehigh.EDU) has joined #ceph
[18:36] * alexm_ (~alexm@83.167.43.235) Quit (Remote host closed the connection)
[18:39] * thomnico (~thomnico@2a01:e35:8b41:120:d5ba:f134:a740:95b9) Quit (Quit: Ex-Chat)
[18:40] * xmltok (~xmltok@216.103.134.250) has joined #ceph
[18:42] * thomnico (~thomnico@2a01:e35:8b41:120:5995:1299:b167:bba8) has joined #ceph
[18:42] * thomnico (~thomnico@2a01:e35:8b41:120:5995:1299:b167:bba8) Quit ()
[18:44] * arbrandes (~arbrandes@177.9.201.101) Quit (Quit: Leaving)
[18:47] * diegows (~diegows@mail.bittanimation.com) has joined #ceph
[18:56] * srenatus (~stephan@185.27.182.16) Quit (Ping timeout: 480 seconds)
[18:57] * sprachgenerator (~sprachgen@130.202.135.192) Quit (Ping timeout: 480 seconds)
[18:58] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[18:58] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) Quit (Quit: jlogan)
[18:58] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph
[19:00] * houkouonchi-work (~linux@12.248.40.138) has joined #ceph
[19:02] * markbby (~Adium@168.94.245.1) Quit (Quit: Leaving.)
[19:02] * sprachgenerator (~sprachgen@vis-v410v141.mcs.anl-external.org) has joined #ceph
[19:03] * zack_dolby (~textual@p852cae.tokynt01.ap.so-net.ne.jp) has joined #ceph
[19:03] * markbby (~Adium@168.94.245.1) has joined #ceph
[19:04] * ScOut3R (~scout3r@54024282.dsl.pool.telekom.hu) has joined #ceph
[19:04] * danieagle (~Daniel@186.214.63.14) has joined #ceph
[19:09] * sjustwork (~sam@2607:f298:a:607:1826:d60:af0c:2a6) has joined #ceph
[19:10] <loicd> houkouonchi-home: I think my question to you has more to do with #ceph-devel that #ceph. Would you like to ask here anyway ?
[19:10] <loicd> s/that/than/
[19:10] <kraken> loicd meant to say: houkouonchi-home: I think my question to you has more to do with #ceph-devel than #ceph. Would you like to ask here anyway ?
[19:14] * tryggvil (~tryggvil@178.19.53.254) Quit (Quit: tryggvil)
[19:14] * WarrenUsui (~Warren@2607:f298:a:607:118b:373d:22bd:745) has joined #ceph
[19:14] <houkouonchi-work> houkouonchi-home: wrong tab complete?
[19:15] <loicd> houkouonchi-work: there are two of you ;-)
[19:16] <houkouonchi-work> houkouonchi-home: well I ask because I don't remember asking a question, heh
[19:16] <houkouonchi-work> and I am an inktank employee....
[19:16] <houkouonchi-work> so figured that was directed at someone else
[19:16] <houkouonchi-work> or should have been
[19:17] <loicd> I was about to ask a question. During the standup I mentioned that it would be nice to have more disk space on gitbuilder-precise-i386. Would you be the one to ask about that ?
[19:17] <loicd> And that question probably belongs to #ceph-devel but I don't see you there houkouonchi-work ;-)
[19:17] <houkouonchi-work> oh your question.. see I was confused.. probably should go to #inktank
[19:21] * warrenSusui (~Warren@2607:f298:a:607:61e0:1d53:455e:4435) Quit (Ping timeout: 480 seconds)
[19:21] * wusui (~Warren@2607:f298:a:607:61e0:1d53:455e:4435) Quit (Ping timeout: 480 seconds)
[19:21] * wusui (~Warren@2607:f298:a:607:118b:373d:22bd:745) has joined #ceph
[19:22] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[19:23] * lupu (~lupu@86.107.101.246) has joined #ceph
[19:24] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) has joined #ceph
[19:28] <wrale> I'm trying to set up 802.1Qaz with tagged VLANs on my Mellanox 10GbE 1024SXs to prioritize Ceph replication traffic (VLAN 30) vs. generic compute traffic (VLAN 40). It seems ETS (802.1Qaz) derives a "user priority" from the VLAN tag to select from the "traffic classes". I cannot seem to find documentation on setting a default "user priority", and thus "traffic class", on a per-VLAN basis on the switch. It seems that the standard way of setting VLAN priority is via the NIC configuration on the actual nodes. Can anyone tell me if setting a default "user priority" is beyond the realm of possibility and/or logic? :)
[19:29] <wrale> I opened a ticket with Mellanox. I am anxiously awaiting their input.
[19:33] * srenatus (~stephan@g229133208.adsl.alicedsl.de) has joined #ceph
[19:33] <fghaas> if I understand the mlnx architecture correctly, you actually shouldn't need to do that, because the NIC acts like a 64-port switch to your host, and traffic segregated into two VLANs shouldn't interfere with each other at all
[19:34] <fghaas> wrale: ^^
[19:34] * sarob (~sarob@2001:4998:effd:600:f94d:de27:4fed:9a41) has joined #ceph
[19:34] * cce (~cce@50.56.54.167) Quit (Quit: leaving)
[19:35] * cce (~cce@50.56.54.167) has joined #ceph
[19:35] <wrale> fghaas: thank you.. i think it may be different in my case, because VLAN 30 and VLAN 40 share a single 10GbE link (trunk mode)
[19:35] <wrale> (at the host)
[19:36] <fghaas> I know they do, but they still ought to be doing the separation correctly at the HBA level
[19:37] <fghaas> because you are getting two (or more) separate eth interfaces from your mlnx card, correct?
[19:39] <wrale> One Intel NIC will be in bridge mode and over that bridge will come two virtual interfaces, one for VLAN30 and one for VLAN40.. I'm new to this, so maybe that's dumb.
[19:40] <wrale> I guess I need to set user priority in the NIC configuration on each virtual interface ?
[19:40] * Koma (~Koma@0001c112.user.oftc.net) Quit (Quit: ups I did it again!)
[19:40] <fghaas> wait. so does your HBA support SR-IOV or no?
[19:41] * Koma (~Koma@0001c112.user.oftc.net) has joined #ceph
[19:41] <wrale> Yes, I believe it supports SR-IOV: http://ark.intel.com/products/41282/Intel-82599ES-10-Gigabit-Ethernet-Controller
[19:43] * c74d (~c74d@2002:4404:712c:0:bc84:f38c:2e99:3ed0) Quit (Remote host closed the connection)
[19:43] <fghaas> I've only worked with the connectX3 devices, so not sure if any of the following applies to you
[19:44] <fghaas> for those, you normally install OFED and run mlnxofedinstall --enable-sriov
[19:45] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[19:45] <fghaas> and then you modprobe mlx4_core with options to enable ib, ethernet, or both, and set your number of VFs
[19:46] <fghaas> and then you should see X incarnations of your device in lspci, where X == (VFs + 1)
[19:47] <fghaas> if you see only one entry in your lspci, then either you're not running on SR-IOV capable {hard,firm}ware, or you neglected to properly configure it
[19:48] <wrale> that sounds good.. i'm looking at the intel docs for the analogous functions
[19:49] <fghaas> all of which, btw, is rather firmly off topic for this channel. :) but I had the pleasure of working with that driver dev team once, so that's why I'm sharing what little info I picked up... all under a no warranties disclaimer of course
[19:50] * owenmurr (~owenmurr@109.175.201.0) has joined #ceph
[19:50] <wrale> right on.. i appreciate the help. i couldn't seem to find the proper IRC for this issue.. i posted in #ethernet to no avail.. ceph is primary reason i need QoS after all, it seems
[19:51] <fghaas> IRC will probably be a bad choice right now, as I believe the relevant dev team is in Israel where it's past office hours now
[19:51] <wrale> ah.. that makes sense
[19:51] <fghaas> but http://community.mellanox.com/docs/DOC-1317 might help
[19:52] <wrale> cool.. i'll take a look.. thanks again
[19:55] <fghaas> no worries wrale
[19:55] <wrale> i sometimes think my cluster architecture is a bit too ambitious :) .. back to work
[19:56] * dmick (~dmick@2607:f298:a:607:f422:8795:b48b:3a2b) has joined #ceph
[19:57] <fghaas> wrale: no, that approach (using SR-IOV for network traffic segregation) is actually quite sound. the hoops you were trying to jump through with local bridging... not so much :)
[19:59] <fghaas> also, take a look at http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/18017 for an interesting thread
[20:00] <wrale> fghaas: am i correct in assuming one difference between the two approaches is that SR-IOV offloads the "bridging" to the NIC, where bridging in the OS is CPU-intensive.. and as a result, better performance is possible with SR-IOV?
[20:01] <fghaas> kinda-sorta, but what you're actually interested in is segmentation, which is pretty much the opposite of bridging
[20:01] * sarob (~sarob@2001:4998:effd:600:f94d:de27:4fed:9a41) Quit (Remote host closed the connection)
[20:01] * markbby (~Adium@168.94.245.1) Quit (Quit: Leaving.)
[20:02] <wrale> fghaas: more reading on my part is clearly necessary..lol.. thanks
[20:03] <fghaas> that certainly never hurts :)
[20:03] <wrale> starting here: http://www.intel.com/content/dam/www/public/us/en/documents/solution-briefs/10-gbe-ethernet-flexible-port-partitioning-brief.pdf
[20:03] * markbby (~Adium@168.94.245.1) has joined #ceph
[20:04] * schmee (~quassel@phobos.isoho.st) Quit (Remote host closed the connection)
[20:04] * schmee (~quassel@phobos.isoho.st) has joined #ceph
[20:05] <wrale> I admit to being slightly confused by the mention of OFED.. I've used infiniband for MPI before in a previous life.. and FC for SAN for storage, but I understood Ceph to be purely ethernet-based.. I need to read that gmane post to understand where RDMA comes in, for sure.. (and how it replaces my initial VLANs approach)
[20:05] * Pedras (~Adium@216.207.42.132) has joined #ceph
[20:05] <fghaas> right, but iirc most of the mellanox driver magic (including the fancy but not IB related parts) ship as part of OFED
[20:06] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[20:06] <wrale> i definitely regret getting intel HBAs and mellanox switches... I should have done mellanox across the board.. *shakes fist at supermicro fat twin squared*..lol
[20:07] <janos> how is the fat twin otehrwise?
[20:07] <janos> looks like a neat form factor
[20:07] <fghaas> that ML post is something slightly different still though, which is to use accelio as a messaging transport for Ceph -- but I haven't been involved in that discussion much, gregsfortytwo and sage and have
[20:07] <fghaas> s/sage and/sage/
[20:07] <kraken> fghaas meant to say: that ML post is something slightly different still though, which is to use accelio as a messaging transport for Ceph -- but I haven't been involved in that discussion much, gregsfortytwo and sage have
[20:07] <wrale> janos: I really like it.. each 2U in mine has four nodes, each with 256GB RAM, two six-core xeons, two 3TB HDD and one 240TB SSD... pretty excited to do Ceph + Mesos
[20:08] <janos> awesome
[20:08] <fghaas> 240 TB SSD sounds like you work for the NSA on a black budget
[20:08] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Quit: Konversation terminated!)
[20:08] <janos> haha
[20:09] <wrale> Facilities folks lost their noodles over the power density, though..lol.. i don't think they've done HPC here before
[20:09] <fghaas> and wrale, Ceph at this point is not tied to Ethernet at all. it'll run via, say, IPoIB, but currently there's no deeper integration with RDMA capable transports
[20:10] <wrale> fghaas: that's cool. i think the RDMA approach, if possible, would be more performant, no? shorter stack?
[20:10] <wrale> *reading*
[20:10] <fghaas> so "IP based" would be a more applicable moniker
[20:10] <LCF> 39781 GB + 33130 GB ssd only
[20:10] <LCF> 1/4 from that magic 240tb :-p
[20:10] <wrale> 240GB* SSD.. lol
[20:11] <wrale> fghaas: and i see yet more learning is afoot for me.. :)
[20:11] <fghaas> wrale: that would be the expectation, yes, but consider how SDP, while having the ostensibly simpler stack, usually gets its butt kicked by TCP over IPoIB
[20:12] <fghaas> (and, yes people will jump in and say "but SDP sucks!", but that's my point -- it's hard to beat an army of developers that have been optimizing the TCP stack in Linux for 20 years)
[20:13] <wrale> i'm certainly behind the times.. i didn't realize we had come far enough to use IPoIB in production..
[20:13] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[20:13] <wrale> agreed about army of developers
[20:13] * sarob (~sarob@2001:4998:effd:600:3cf7:a1df:2b55:1a1c) has joined #ceph
[20:14] <wrale> i think i'm just confused, though.. hmmm..
[20:15] <fghaas> ... ok, I've veered even more OT than you have. I'll shut up now :)
[20:15] <wrale> :) lol
[20:15] <wrale> ceph, ceph, ceph.. there.. fixed it
[20:19] * dtalton2 (~don@wsip-70-166-101-169.ph.ph.cox.net) has joined #ceph
[20:19] * bitblt (~don@128-107-239-235.cisco.com) Quit (Read error: Connection reset by peer)
[20:21] * Cube (~Cube@12.248.40.138) has joined #ceph
[20:37] * rotbeard (~redbeard@2a02:908:df10:5c80:76f0:6dff:fe3b:994d) Quit (Read error: Permission denied)
[20:38] * rotbeard (~redbeard@2a02:908:df10:5c80:76f0:6dff:fe3b:994d) has joined #ceph
[20:39] * mozg (~andrei@host86-184-125-218.range86-184.btcentralplus.com) has joined #ceph
[20:39] * Boltsky (~textual@cpe-198-72-138-106.socal.res.rr.com) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[20:41] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[20:42] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Remote host closed the connection)
[20:42] * mdxi (~mdxi@50-199-109-154-static.hfc.comcastbusiness.net) Quit (Quit: leaving)
[20:43] * mauricev (~mauricev@usseinstein.aecom.yu.edu) has joined #ceph
[20:43] * mauricev (~mauricev@usseinstein.aecom.yu.edu) Quit ()
[20:44] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[20:45] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[20:47] * dtalton2 (~don@wsip-70-166-101-169.ph.ph.cox.net) Quit (Quit: Leaving)
[20:54] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[20:54] * mtanski (~mtanski@69.193.178.202) Quit ()
[20:55] * mdxi (~mdxi@50-199-109-154-static.hfc.comcastbusiness.net) has joined #ceph
[20:58] * Boltsky (~textual@office.deviantart.net) has joined #ceph
[21:07] * ScOut3R (~scout3r@54024282.dsl.pool.telekom.hu) Quit ()
[21:18] * sarob (~sarob@2001:4998:effd:600:3cf7:a1df:2b55:1a1c) Quit (Remote host closed the connection)
[21:19] * sarob (~sarob@2001:4998:effd:600:3cf7:a1df:2b55:1a1c) has joined #ceph
[21:22] * sarob_ (~sarob@2001:4998:effd:600:29db:d907:57f4:ac68) has joined #ceph
[21:27] * sarob (~sarob@2001:4998:effd:600:3cf7:a1df:2b55:1a1c) Quit (Ping timeout: 480 seconds)
[21:32] * lupu (~lupu@86.107.101.246) Quit (Ping timeout: 480 seconds)
[21:41] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) has joined #ceph
[21:43] * garphy`aw is now known as garphy
[21:46] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[21:48] * leochill (~leochill@nyc-333.nycbit.com) Quit (Quit: Leaving)
[21:48] * leochill (~leochill@nyc-333.nycbit.com) has joined #ceph
[21:50] * senk (~Adium@ip-5-147-216-213.unitymediagroup.de) Quit (Quit: Leaving.)
[21:53] * sarob (~sarob@2001:4998:effd:600:c120:cbb4:796a:83ca) has joined #ceph
[21:57] * c74d (~c74d@2002:4404:712c:0:19f0:6255:efbc:aa) has joined #ceph
[21:58] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) Quit (Quit: Computer has gone to sleep.)
[21:58] * owenmurr (~owenmurr@109.175.201.0) Quit (Quit: leaving)
[21:58] * sarob_ (~sarob@2001:4998:effd:600:29db:d907:57f4:ac68) Quit (Ping timeout: 480 seconds)
[21:59] <wrale> In electing a bandwidth cap for the replication network (VLAN), is there any standard way to do the calculation? If the journal is kept on a separate SSD, do I need to allow bandwidth significantly beyond that which theoretically possible for the OSD spindles? Say, for example, I have two OSD spindles each capable of ~130MBps sequential read/write, do I need much more than say 2Gbps for replication?
[22:01] * madkiss (~madkiss@80.110.11.182) has joined #ceph
[22:01] * sarob (~sarob@2001:4998:effd:600:c120:cbb4:796a:83ca) Quit (Ping timeout: 480 seconds)
[22:02] * sroy (~sroy@2607:fad8:4:6:6e88:14ff:feff:5374) Quit (Quit: Quitte)
[22:02] * LPG_ (~LPG@c-76-104-197-224.hsd1.wa.comcast.net) Quit (Remote host closed the connection)
[22:03] * LPG (~LPG@c-76-104-197-224.hsd1.wa.comcast.net) has joined #ceph
[22:05] <fghaas> wrale: you probably want to first fix your OSDs if you want to max out your replication network; using an enterprise SSD for OSD journals is generally the much preferred option
[22:06] <fghaas> (I sincerely hope I'm not bursting too many bubbles for you today ;) )
[22:07] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[22:07] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[22:07] <wrale> fghaas: what do you mean by fix? and no, all is good.. I'm persistent in finding a solution.. deadlines loom
[22:08] <wrale> i really appreciate the help.. there are few mentors in my place of employment .. :)
[22:08] <fghaas> well by fix I mean you are unlikely to be able to max out your replication network bandwidth if you're bottlenecking on your (spinner-backed) OSDs
[22:09] * sarob (~sarob@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) has joined #ceph
[22:10] <fghaas> if all you can write is something like 130MB/s, that's just over what Gigabit Ethernet can do, and if you've got say 4 OSDs you could do that with a much cheaper quad-port GbE NIC rather than the fancy 10GbE parts you're using now
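The back-of-the-envelope math behind fghaas's point, as a sketch. The ~130 MB/s per-spindle figure comes from wrale above; the replica count and the assumption that replication traffic mirrors client ingest one-for-one per extra replica are simplifications, and recovery/backfill traffic can exceed this steady-state figure:

```python
# Rough sizing of the replication network for one storage node (illustrative).

MB = 1e6  # decimal megabytes, as drive vendors quote them

spindles_per_node = 2
seq_write_per_spindle = 130 * MB     # wrale's ~130 MB/s figure per OSD disk

# Peak rate the node's OSDs can absorb from clients:
max_ingest = spindles_per_node * seq_write_per_spindle           # 260 MB/s

# With replica size=2, each primary write is copied once over the
# replication network, so replication traffic roughly mirrors the
# ingest rate; size=3 would double it, and so on.
replicas = 2
replication_rate = max_ingest * (replicas - 1)                   # 260 MB/s

def to_gbps(bytes_per_s):
    return bytes_per_s * 8 / 1e9

print(round(to_gbps(max_ingest), 2))        # 2.08 Gbit/s
print(round(to_gbps(replication_rate), 2))  # 2.08 Gbit/s
```

So a ~2 Gbit/s cap per node is in the right ballpark for the spindle-bound steady state, which is fghaas's point: a quad-port GbE NIC would already cover it.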
[22:10] <wrale> I have one Samsung SM843 "entry enterprise" ssd, on which the OS and journals are planned to go.. each of the two Seagate 3TB sata st3000dm001 will be dedicated to an OSD.. (all of this in a single physical node)...
[22:10] <wrale> (there are 67 nodes in the complex)
[22:10] * sarob (~sarob@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[22:11] * sarob (~sarob@2001:4998:effd:600:8027:32e:727f:d1d) has joined #ceph
[22:11] <fghaas> careful, if I recall correctly there is a *big* bw difference between the samsung 84x "pro" series and the non-pro ones
[22:11] <fghaas> plus from what we've found out Samsung 840 Pro are particularly tricky to tune for optimal Ceph OSD performance
[22:12] <wrale> being good with 2Gbps is cool by me, because the rest of the 10GbE will be used for things like Spark (hadoop in RAM type of thing)
[22:12] <wrale> Hmm..
[22:12] <wrale> The SSD won't house anything but the journal, but I guess that is already figured in your reply
[22:12] <fghaas> yes
[22:13] <wrale> And I want to confirm that you mean to say that the SM843 is slower than I probably think, correct?
[22:15] * srenatus (~stephan@g229133208.adsl.alicedsl.de) Quit (Read error: Operation timed out)
[22:17] <fghaas> not sure, I'm not exactly familiar with every SSD product under the sun :)
[22:17] <wrale> That's cool.. i think i just misunderstood
[22:18] <wrale> sorry :)
[22:18] <wrale> i suppose i need to try bonnie on this drive
[22:19] * c74d (~c74d@2002:4404:712c:0:19f0:6255:efbc:aa) Quit (Remote host closed the connection)
[22:19] <fghaas> I'm just saying that you (a) shouldn't be under the assumption that product X *can* actually do Y-10% of throughput where Y is the tech spec bandwidth (that applies to the device), and also (b) you shouldn't be expecting a Ceph OSD to be able to do Y-10% just because that's what its journal can do
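fghaas's point (b) as a toy model: a Ceph OSD can't outrun its slowest stage, and with colocated journals the SSD's bandwidth is shared. All figures here are illustrative assumptions (the SSD's ~400 MB/s sustained-write number is invented for the sketch, not an SM843 spec):

```python
# Toy model: steady-state write ceiling of one OSD whose journal lives
# on an SSD shared with other OSDs' journals.  Illustrative numbers only.

def effective_osd_write(disk_mb_s, journal_ssd_mb_s, osds_sharing_ssd):
    """Every write hits the (shared) journal SSD and then the backing
    spindle, so the ceiling is the slower of the two stages."""
    journal_share = journal_ssd_mb_s / osds_sharing_ssd
    return min(disk_mb_s, journal_share)

# wrale's layout: one SSD holding two journals, two ~130 MB/s spindles.
per_osd = effective_osd_write(disk_mb_s=130, journal_ssd_mb_s=400,
                              osds_sharing_ssd=2)
print(per_osd)  # 130 -> the spindles, not the journal, are the bottleneck
```

Flip the numbers (say, a slow SSD at 100 MB/s shared two ways) and the journal becomes the bottleneck at 50 MB/s per OSD, which is why the journal device choice matters so much.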
[22:19] * c74d (~c74d@2002:4404:712c:0:20b4:1c2f:be32:b15e) has joined #ceph
[22:21] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[22:22] <wrale> That makes perfect sense.. I do agree.. I was erring on the side of fast estimates of performance not because I really expect that to be the outcome, but because I wanted to set the bandwidth throttle to eclipse actual performance.. if that makes any sense
[22:23] <wrale> That's to say, I wanted to theoretically set aside enough bandwidth to allow full-speed replication, whatever the speed, given my setup...
[22:23] <wrale> Many variables make this quite difficult, indeed
[22:24] <fghaas> exactly. not least because, for example, an RBD write will be spread out differently depending on how many OSD you have at that point
[22:24] <wrale> i agree.. the main rift for me in that is that some of my users will be doing MPI jobs across the entire cluster via Mesos.. kinda like a DDoS on ceph, i would think
[22:25] <wrale> Most of their jobs will use SSD as workspace, without touching ceph until writing out say a 200GB file (one for the cluster as a whole)..
[22:26] <wrale> I read some stories on the mailing list about computation slowing significantly upon rebuild, without QoS or otherwise throttling
[22:27] <wrale> I'm hoping more for consistency than stellar performance.. HPC jobs just take longer, if disk IO bound, I suppose.
[22:27] <wrale> (longer is not a problem)
[22:28] * madkiss1 (~madkiss@089144232199.atnat0041.highway.a1.net) has joined #ceph
[22:29] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) has joined #ceph
[22:29] <wrale> Said another way, I wanted to construct non-blocking transport for replication, given one average-performance SSD housing two journals, and two HDDs each backing an object storage daemon
[22:29] * madkiss (~madkiss@80.110.11.182) Quit (Ping timeout: 480 seconds)
[22:29] <wrale> *two average performance HDD
[22:30] <wrale> I think I'll try 70% compute and 30% replication.. we'll see where that goes, i guess
[22:30] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[22:30] * ChanServ sets mode +v andreask
[22:30] <wrale> My worry was that would create a lapping situation, where write outstripped the ability to replicate, by far
[22:31] <wrale> That logic may be erroneous ..
[22:31] * Sysadmin88 (~IceChat77@176.254.32.31) has joined #ceph
[22:36] * sprachgenerator (~sprachgen@vis-v410v141.mcs.anl-external.org) Quit (Ping timeout: 480 seconds)
[22:43] * markl (~mark@knm.org) Quit (Quit: leaving)
[22:44] * markl (~mark@knm.org) has joined #ceph
[22:44] * jeremydei (~jdeininge@64.125.69.200) Quit (Ping timeout: 480 seconds)
[22:44] * ScOut3R (~scout3r@54024282.dsl.pool.telekom.hu) has joined #ceph
[22:45] * jeremydei (~jdeininge@64.125.69.200) has joined #ceph
[22:53] * toutour_ (~toutour@causses.idest.org) has joined #ceph
[22:55] * toutour (~toutour@causses.idest.org) Quit (Read error: Connection reset by peer)
[22:57] * rotbeard (~redbeard@2a02:908:df10:5c80:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[23:05] * jeremydei (~jdeininge@64.125.69.200) Quit (Read error: Operation timed out)
[23:05] * jeremydei (~jdeininge@64.125.69.200) has joined #ceph
[23:06] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[23:09] * linuxkidd (~linuxkidd@2001:420:2100:2258:39d3:de25:be2d:1e03) Quit (Quit: Leaving)
[23:11] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[23:14] * pingu (~christian@nat-gw1.syd4.anchor.net.au) has joined #ceph
[23:15] * JeffK (~JeffK@38.99.52.10) has joined #ceph
[23:15] * jeremydei (~jdeininge@64.125.69.200) Quit (Ping timeout: 480 seconds)
[23:15] <pingu> I'd like to build a tool that does some kind af a write-delete test via librados to every possible OSD. Is that possible? I'd have to craft OIDs that hit each one.
[23:16] <pingu> It certainly seems possible with brute force, but, I was hoping there was something a little more refined.
[23:16] <pingu> This way I can monitor which OSDs are troublesome and slow/broken.
[23:17] <fghaas> ceph osd pool create test 200 200
[23:17] <fghaas> ceph osd pool set test size 200
[23:17] <fghaas> (if you have 200 OSDs)
[23:17] <fghaas> means that every one of your objects is being kept on all OSDs
[23:18] <pingu> which would mean that a write would have to go to every OSD?
[23:18] <dmick> there's also osd bench which can target specific OSDs
[23:18] <pingu> dmick: that sounds like what I want
[23:19] <fghaas> pingu: yeah, that's what you asked for -- if what you *want* is something different, then, yes, go with ceph osd bench :)
[23:19] * markbby (~Adium@168.94.245.1) Quit (Quit: Leaving.)
[23:19] * imjustmatthew_ (~imjustmat@pool-72-84-198-231.rcmdva.fios.verizon.net) has joined #ceph
[23:19] <fghaas> (you did say "via librados", which ceph osd bench doesn't do)
[23:19] <pingu> fghaas: right, okay. Thanks.
[23:19] <pingu> I wasn't quite clear in that I want to hit all OSDs, but each OSD individually.
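The brute-force approach pingu mentions can be sketched as: keep generating object names until every OSD has been the primary for at least one of them, then write/delete those objects as probes. In a real cluster the placement lookup would come from CRUSH (e.g. `ceph osd map <pool> <oid>`, or librados); the hash below is a stand-in so the sketch stays self-contained:

```python
# Sketch of a per-OSD probe-object search.  `primary_osd_for` is a
# stand-in for the real CRUSH placement lookup, which you would get
# from `ceph osd map <pool> <oid>` or librados in practice.

import hashlib

NUM_OSDS = 8  # assumed cluster size for the sketch

def primary_osd_for(oid: str) -> int:
    """Stand-in placement: hash the object name onto an OSD id."""
    digest = hashlib.md5(oid.encode()).hexdigest()
    return int(digest, 16) % NUM_OSDS

def probe_oids(num_osds: int = NUM_OSDS, limit: int = 100_000) -> dict:
    """Return {osd_id: oid} with one crafted probe name per OSD."""
    found = {}
    for i in range(limit):
        oid = f"probe-{i}"
        found.setdefault(primary_osd_for(oid), oid)
        if len(found) == num_osds:
            break
    return found

probes = probe_oids()
print(len(probes))  # one OID per OSD; write/delete each to exercise that OSD
```

Since placement is roughly uniform, covering N OSDs only takes on the order of N·log(N) candidate names, so the brute force is cheap; the crafted names can then be cached and reused for periodic write-delete health probes.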
[23:20] <fghaas> yes -- do note however that if some of your OSDs get slow, Ceph will notice that and make the information available on a cluster level (slow requests)
[23:21] * imjustmatthew (~imjustmat@pool-71-176-218-130.rcmdva.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[23:21] <pingu> Hmmn. We just noticed the other day that when doing thousands of simultaneous writes to thousands of new OIDs, a few of our OSDs magically crashed, silently.
[23:22] <pingu> So I was hoping to replicate that to some extent too.
[23:22] <fghaas> silently? as in, never get marked down and don't drop out after the mon osd down out interval?
[23:23] * ScOut3R (~scout3r@54024282.dsl.pool.telekom.hu) Quit ()
[23:23] <fghaas> oh and pingu, say hi to Andrew Cowie when you see him ;)
[23:23] * jeremydei (~jdeininge@64.125.69.200) has joined #ceph
[23:24] <pingu> fghaas: sure ;)
[23:24] <pingu> fghaas: silently as in, we run ceph-osd under daemontools
[23:25] <pingu> and it restarted, apparently, with no segfault or anything of the sort.
[23:25] * diegows (~diegows@mail.bittanimation.com) Quit (Ping timeout: 481 seconds)
[23:26] <fghaas> /var/log/ceph/osd/{cluster}-{id}.log not saying anything meaningful at all?
[23:26] <fghaas> sure it wasn't some angel process sending it a -SIGTERM?
[23:26] <fghaas> (whatever monitoring or somethingorother you may have in place?)
[23:27] <pingu> I haven't looked into it just yet. I didn't see it first hand, thus I was hoping to replicate it so I could.
[23:27] <lurbs> That just made me think of something. Doesn't look like OSD or monitor processes are OOM adjusted by default at all. Should they be?
[23:28] <fghaas> oh hi lurbs, it must be too late for me to be up if this channel is filling up with aussies and kiwis :)
[23:29] <pingu> lurbs: that'd be a good idea I'd say. They don't seem to eat very much memory.
[23:29] <lurbs> There are Australians here? I didn't think it was that sort of channel.
[23:32] <lurbs> fghaas: Must be late for you, it's almost lunchtime here. After a simply delighful morning dealing with an NTP DDoS.
[23:32] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Remote host closed the connection)
[23:32] <fghaas> urgh, well yes it's going on midnight here, and NTP ddos does sound hilarious
[23:32] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[23:34] * jeremydei (~jdeininge@64.125.69.200) Quit (Ping timeout: 480 seconds)
[23:35] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[23:37] * allsystemsarego (~allsystem@188.25.135.30) Quit (Quit: Leaving)
[23:41] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) Quit (Quit: Leaving)
[23:42] * jeremydei (~jdeininge@64.125.69.200) has joined #ceph
[23:46] <fghaas> joshd, gregsfortytwo: could it be that the recommendation against using the qcow2 image format on rbd, and the fact that only raw images are supported for Qemu, aren't documented anywhere? Or is that no longer true and my knowledge is outdated?
[23:47] <gregsfortytwo> it could be; I have no idea
[23:47] <gregsfortytwo> in a meeting, sorry
[23:48] <joshd> fghaas: technically you can use qcow2, but it just adds overhead and makes live-migration with caching unsafe
[23:49] <joshd> fghaas: the docs could be more explicit about it though - it's just mentioned at the end of the qemu and openstack instructions iirc
[23:49] * garphy is now known as garphy`aw
[23:49] <fghaas> joshd, yes, plus the qemu examples all explicitly specify format=raw, but they don't really explain why
[23:51] * renzhi (~renzhi@122.226.73.152) Quit (Ping timeout: 480 seconds)
[23:51] <fghaas> also, if by "at the end of the qemu instructions" you refer to http://ceph.com/docs/next/rbd/qemu-rbd/, then that actually doesn't mention anything specific
[23:53] <joshd> yeah, I was thinking of http://ceph.com/docs/next/rbd/qemu-rbd/#running-qemu-with-rbd and http://ceph.com/docs/master/rbd/rbd-openstack/#booting-from-a-block-device
[23:54] * rendar (~s@87.19.183.241) Quit ()
[23:57] <andreask> joshd: and like the example in the first link, "qemu-img convert -f qcow2 -O rbd debian_squeeze.qcow2 rbd:data/squeeze" does in fact do the import and the convert in one step?
[23:57] * jeremydei (~jdeininge@64.125.69.200) Quit (Ping timeout: 480 seconds)
[23:58] <joshd> yes
[23:58] <andreask> great, thx
[23:59] * toutour_ is now known as toutour

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.