#ceph IRC Log

Index

IRC Log for 2015-12-22

Timestamps are in GMT/BST.

[0:04] * herrsergio (~herrsergi@104.194.26.170) has joined #ceph
[0:05] * herrsergio is now known as Guest2162
[0:06] * Guest2121 (~herrsergi@200.77.224.239) Quit (Ping timeout: 480 seconds)
[0:09] * squizzi (~squizzi@nat-pool-rdu-t.redhat.com) Quit (Ping timeout: 480 seconds)
[0:11] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Remote host closed the connection)
[0:11] * duderonomy (~duderonom@c-24-7-50-110.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[0:27] * fen (~fen@x5f72856b.dyn.telefonica.de) Quit (Quit: fen)
[0:29] * Zethrok (~martin@95.154.26.34) Quit (Remote host closed the connection)
[0:35] * Zethrok (~martin@95.154.26.34) has joined #ceph
[0:40] * oracular1 (~legion@84ZAAAIOR.tor-irc.dnsbl.oftc.net) has joined #ceph
[0:49] * Guest2162 (~herrsergi@104.194.26.170) Quit (Ping timeout: 480 seconds)
[0:52] * johnavp1989 (~jpetrini@pool-100-14-5-21.phlapa.fios.verizon.net) has joined #ceph
[0:54] <motk> hrm
[0:54] * fsimonce (~simon@host159-1-dynamic.54-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[0:55] <motk> why are the ceph udev rules for rh/centos in /usr/lib/udev?
[0:56] <motk> oh, that path is allowed
[0:56] <motk> hrm
[0:57] * KaneK (~kane@12.206.204.58) Quit (Quit: KaneK)
[1:00] * KaneK (~kane@12.206.204.58) has joined #ceph
[1:02] <motk> ok got it
[1:02] <motk> somewhere in ceph-disk activate my osd dirs are becoming root:root, on centos7
[1:03] <motk> ceph-disk prepare sets perms correctly, but by the time ceph-ansible goes to activate them the perms have changed
[1:03] <motk> any clues welcome
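[Editor's note] A quick way to see what motk is describing — a hedged sketch, assuming the default /var/lib/ceph/osd/ceph-* layout and the ceph:ceph user/group that the Infernalis-era packages use on CentOS 7; adjust for your deployment:

```shell
# Sketch: check OSD data-dir ownership after prepare/activate.
# The paths and the ceph:ceph owner are packaging defaults, not from the log.
for d in /var/lib/ceph/osd/ceph-*; do
    [ -e "$d" ] && stat -c '%U:%G %n' "$d"   # print owner:group for each OSD dir
done
# If something has reset them to root:root, hand them back to the ceph user:
#   chown -R ceph:ceph /var/lib/ceph/osd/ceph-*
```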
[1:04] * hgichon (~hgichon@112.220.91.130) has joined #ceph
[1:08] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[1:10] <motk> tumbleweeds
[1:10] * oracular1 (~legion@84ZAAAIOR.tor-irc.dnsbl.oftc.net) Quit ()
[1:12] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[1:13] * Rachana (~Rachana@2601:87:3:3601::65f2) Quit (Quit: Leaving)
[1:16] * xarses (~xarses@64.124.158.100) Quit (Ping timeout: 480 seconds)
[1:16] * johnavp1989 (~jpetrini@pool-100-14-5-21.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[1:20] * MACscr1 (~Adium@2601:247:4101:a0be:4517:3cfa:5d24:ff38) has joined #ceph
[1:23] * MACscr (~Adium@2601:247:4101:a0be:ad51:e14c:9763:2fab) Quit (Ping timeout: 480 seconds)
[1:25] * EinstCrazy (~EinstCraz@117.15.122.189) Quit (Remote host closed the connection)
[1:29] * ircolle (~Adium@2601:285:201:2bf9:8fb:624b:6c07:10a) Quit (Quit: Leaving.)
[1:32] * stiopa (~stiopa@cpc73828-dals21-2-0-cust630.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[1:34] * oms101 (~oms101@p20030057EA00F000C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:37] * notmyname1 (~SaneSmith@65-183-154-104-dhcp.burlingtontelecom.net) has joined #ceph
[1:43] * oms101 (~oms101@p20030057EA00D000C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:45] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[1:55] * cathode (~cathode@50-198-166-81-static.hfc.comcastbusiness.net) Quit (Quit: Leaving)
[1:55] <motk> alright, so ceph-disk prepare is definitely setting the wrong ownership
[2:01] * rendar (~I@host60-178-dynamic.251-95-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[2:01] * EinstCrazy (~EinstCraz@111.30.21.47) has joined #ceph
[2:02] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[2:07] * notmyname1 (~SaneSmith@4MJAAANDZ.tor-irc.dnsbl.oftc.net) Quit ()
[2:07] * angdraug (~angdraug@64.124.158.100) Quit (Quit: Leaving)
[2:09] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[2:09] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[2:10] * swami1 (~swami@116.50.70.73) has joined #ceph
[2:13] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[2:27] * linjan (~linjan@176.195.200.157) Quit (Ping timeout: 480 seconds)
[2:34] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:98ed:1d4e:2346:59fe) Quit (Ping timeout: 480 seconds)
[2:35] * wushudoin (~wushudoin@2601:646:8201:7769:2ab2:bdff:fe0b:a6ee) Quit (Ping timeout: 480 seconds)
[2:37] * yanzheng (~zhyan@182.149.64.193) has joined #ceph
[2:41] * shawniverson (~shawniver@199.66.65.48) Quit (Remote host closed the connection)
[2:43] * KaneK (~kane@12.206.204.58) Quit (Quit: KaneK)
[2:49] * i_m (~ivan.miro@88.206.113.199) has joined #ceph
[2:53] * KaneK (~kane@12.206.204.58) has joined #ceph
[2:54] * KaneK (~kane@12.206.204.58) Quit ()
[3:04] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[3:05] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[3:06] * doppelgrau_ (~doppelgra@p5DC074F3.dip0.t-ipconnect.de) has joined #ceph
[3:08] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[3:11] * Steki (~steki@87.116.182.17) has joined #ceph
[3:11] * doppelgrau (~doppelgra@p5DC06012.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[3:11] * doppelgrau_ is now known as doppelgrau
[3:11] * Kupo1 (~tyler.wil@23.111.254.159) Quit (Read error: Connection reset by peer)
[3:16] * johnavp1989 (~jpetrini@pool-100-14-5-21.phlapa.fios.verizon.net) has joined #ceph
[3:18] * BManojlovic (~steki@cable-89-216-175-201.dynamic.sbb.rs) Quit (Ping timeout: 480 seconds)
[3:24] * johnavp1989 (~jpetrini@pool-100-14-5-21.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[3:25] * zhaochao (~zhaochao@111.161.77.238) has joined #ceph
[3:30] * vbellur (~vijay@216.4.56.139) Quit (Ping timeout: 480 seconds)
[3:30] * kefu (~kefu@114.92.107.250) has joined #ceph
[3:41] * vbellur (~vijay@216.4.56.139) has joined #ceph
[3:44] * dneary (~dneary@pool-96-237-170-97.bstnma.fios.verizon.net) has joined #ceph
[3:55] * naoto (~naotok@2401:bd00:b001:8920:27:131:11:254) has joined #ceph
[3:56] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[3:57] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[3:57] * dyasny (~dyasny@dsl.198.58.171.119.ebox.ca) Quit (Ping timeout: 480 seconds)
[4:00] * aeroevan (~aeroevan@00015f77.user.oftc.net) Quit (Quit: ZNC 1.6.1 - http://znc.in)
[4:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[4:02] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[4:03] * aeroevan (~aeroevan@00015f77.user.oftc.net) has joined #ceph
[4:04] * _303 (~Azerothia@176.56.230.162) has joined #ceph
[4:05] * swami1 (~swami@116.50.70.73) Quit (Read error: Connection reset by peer)
[4:08] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[4:10] * davidzlap (~Adium@2605:e000:1313:8003:29cd:4074:9e2a:22f) has joined #ceph
[4:13] * davidzlap (~Adium@2605:e000:1313:8003:29cd:4074:9e2a:22f) Quit ()
[4:13] * davidz (~davidz@2605:e000:1313:8003:39f1:c7cd:d93b:b86f) Quit (Quit: Leaving.)
[4:13] * davidzlap (~Adium@2605:e000:1313:8003:29cd:4074:9e2a:22f) has joined #ceph
[4:18] * kefu (~kefu@114.92.107.250) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[4:19] * kefu (~kefu@114.92.107.250) has joined #ceph
[4:19] * doppelgrau (~doppelgra@p5DC074F3.dip0.t-ipconnect.de) Quit (Read error: Connection reset by peer)
[4:19] * doppelgrau (~doppelgra@p5DC074F3.dip0.t-ipconnect.de) has joined #ceph
[4:23] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[4:23] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[4:25] * johnavp1989 (~jpetrini@pool-100-14-5-21.phlapa.fios.verizon.net) has joined #ceph
[4:26] * naoto_ (~naotok@27.131.11.254) has joined #ceph
[4:31] * naoto (~naotok@2401:bd00:b001:8920:27:131:11:254) Quit (Ping timeout: 480 seconds)
[4:34] * _303 (~Azerothia@84ZAAAIXJ.tor-irc.dnsbl.oftc.net) Quit ()
[4:50] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[4:50] * kefu (~kefu@114.92.107.250) Quit (Max SendQ exceeded)
[4:51] * kefu (~kefu@114.92.107.250) has joined #ceph
[4:52] * emsnyder (~emsnyder@65.170.86.132) has joined #ceph
[4:52] * Jyron (~Kottizen@59-234-47-212.rev.cloud.scaleway.com) has joined #ceph
[4:54] * dneary (~dneary@pool-96-237-170-97.bstnma.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[4:54] * shohn1 (~shohn@dslb-088-075-234-032.088.075.pools.vodafone-ip.de) has joined #ceph
[4:57] * shohn (~shohn@dslc-082-082-190-255.pools.arcor-ip.net) Quit (Ping timeout: 480 seconds)
[5:00] * reed (~reed@75-101-54-18.dsl.static.fusionbroadband.com) Quit (Quit: Ex-Chat)
[5:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[5:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[5:03] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[5:05] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[5:13] * Vacuum__ (~Vacuum@88.130.220.13) has joined #ceph
[5:14] * nhm (~nhm@c-50-171-139-246.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[5:19] * haomaiwa_ (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[5:20] * Vacuum_ (~Vacuum@i59F7912C.versanet.de) Quit (Ping timeout: 480 seconds)
[5:22] * Jyron (~Kottizen@84ZAAAIZD.tor-irc.dnsbl.oftc.net) Quit ()
[5:23] * haomaiwa_ (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[5:23] * haomaiwa_ (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[5:24] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Ping timeout: 480 seconds)
[5:28] * kefu (~kefu@114.92.107.250) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[5:33] * kefu (~kefu@114.92.107.250) has joined #ceph
[5:36] <motk> hrm
[5:36] * cooldharma06 (~chatzilla@14.139.180.40) has joined #ceph
[5:36] <motk> have a couple of osds the osd tree claims to be up
[5:36] <motk> but there's no process for them
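[Editor's note] One way to cross-check the map against reality — a hedged sketch; the commands are standard ceph/Unix tooling, the OSD id is yours to fill in:

```shell
# Sketch: compare what the cluster map claims with what's actually running.
ceph osd tree                  # which OSDs the map reports as up/down
ps -ef | grep '[c]eph-osd'     # which ceph-osd processes really exist
# A dead OSD is only marked down after the reporting grace period
# ("mon osd report timeout"); to hurry it along, mark it down by hand:
#   ceph osd down <id>
```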
[5:43] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[5:44] * kefu (~kefu@114.92.107.250) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[5:45] * Chrissi_ (~Deiz@37.48.81.27) has joined #ceph
[6:00] * jamespage (~jamespage@2a00:1098:0:80:1000:42:0:1) Quit (Quit: Coyote finally caught me)
[6:01] * haomaiwa_ (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[6:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[6:08] * kefu (~kefu@114.92.107.250) has joined #ceph
[6:09] * zaitcev (~zaitcev@c-50-130-189-82.hsd1.nm.comcast.net) Quit (Quit: Bye)
[6:13] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[6:14] * Chrissi_ (~Deiz@84ZAAAI1P.tor-irc.dnsbl.oftc.net) Quit ()
[6:18] * kefu (~kefu@114.92.107.250) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:25] * kefu (~kefu@114.92.107.250) has joined #ceph
[6:28] * Revo84 (~Qiasfah@107.181.161.209) has joined #ceph
[6:50] * m8x (~dunrong@182.150.27.112) Quit (Ping timeout: 480 seconds)
[6:58] * m8x (~dunrong@182.150.27.112) has joined #ceph
[6:58] * Revo84 (~Qiasfah@76GAAAI1P.tor-irc.dnsbl.oftc.net) Quit ()
[6:59] * m8x (~dunrong@182.150.27.112) Quit ()
[6:59] * m8x (~dunrong@182.150.27.112) has joined #ceph
[7:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[7:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[7:07] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[7:08] * m8x (~dunrong@182.150.27.112) Quit (Ping timeout: 480 seconds)
[7:08] * bliu (~liub@203.192.156.9) Quit (Ping timeout: 480 seconds)
[7:08] * m8x (~dunrong@182.150.27.112) has joined #ceph
[7:09] * rdas (~rdas@121.244.87.116) has joined #ceph
[7:21] * amote (~amote@121.244.87.116) has joined #ceph
[7:33] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[7:34] * matx (~xanax`@94.242.195.186) has joined #ceph
[7:39] * codice (~toodles@75-128-34-237.static.mtpk.ca.charter.com) Quit (Quit: leaving)
[7:56] * bliu (~liub@203.192.156.9) has joined #ceph
[8:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[8:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[8:04] * matx (~xanax`@4MJAAANNL.tor-irc.dnsbl.oftc.net) Quit ()
[8:04] * CoZmicShReddeR (~SweetGirl@76GAAAI4Y.tor-irc.dnsbl.oftc.net) has joined #ceph
[8:17] * b0e (~aledermue@213.95.25.82) has joined #ceph
[8:26] * KaneK (~kane@cpe-172-88-240-14.socal.res.rr.com) has joined #ceph
[8:29] * nardial (~ls@dslb-088-076-176-152.088.076.pools.vodafone-ip.de) has joined #ceph
[8:30] * KaneK (~kane@cpe-172-88-240-14.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[8:31] * KaneK (~kane@cpe-172-88-240-14.socal.res.rr.com) has joined #ceph
[8:33] * Steki (~steki@87.116.182.17) Quit (Quit: Ja odoh a vi sta 'ocete... [I'm off; you do what you want...])
[8:34] * CoZmicShReddeR (~SweetGirl@76GAAAI4Y.tor-irc.dnsbl.oftc.net) Quit ()
[8:34] * KrimZon (~Revo84@ns316491.ip-37-187-129.eu) has joined #ceph
[8:37] * KaneK (~kane@cpe-172-88-240-14.socal.res.rr.com) Quit (Quit: KaneK)
[8:38] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[8:40] * enax (~enax@hq.ezit.hu) has joined #ceph
[8:46] * Kurt (~Adium@2001:628:1:5:98c1:3d88:2453:5e77) has joined #ceph
[8:57] * fen (~fen@x5f728be2.dyn.telefonica.de) has joined #ceph
[8:57] * KaneK (~kane@cpe-172-88-240-14.socal.res.rr.com) has joined #ceph
[9:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[9:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[9:04] * KrimZon (~Revo84@84ZAAAI7P.tor-irc.dnsbl.oftc.net) Quit ()
[9:07] * analbeard (~shw@support.memset.com) has joined #ceph
[9:17] * bara (~bara@ip4-83-240-10-82.cust.nbox.cz) has joined #ceph
[9:30] * fen (~fen@x5f728be2.dyn.telefonica.de) Quit (Quit: fen)
[9:31] * codice (~toodles@75-128-34-237.static.mtpk.ca.charter.com) has joined #ceph
[9:39] * kefu (~kefu@114.92.107.250) Quit (Max SendQ exceeded)
[9:39] * kefu (~kefu@114.92.107.250) has joined #ceph
[9:39] * fen (~fen@80.147.192.56) has joined #ceph
[9:43] * jasuarez (~jasuarez@237.Red-83-39-111.dynamicIP.rima-tde.net) has joined #ceph
[9:48] * fsimonce (~simon@host159-1-dynamic.54-79-r.retail.telecomitalia.it) has joined #ceph
[9:48] * bvi (~bastiaan@185.56.32.1) has joined #ceph
[9:48] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[9:49] * kefu (~kefu@114.92.107.250) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[9:50] * kefu (~kefu@114.92.107.250) has joined #ceph
[9:54] * DanFoster (~Daniel@office.34sp.com) has joined #ceph
[9:58] * haomaiwa_ (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[10:00] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[10:00] * zigo_ is now known as zigo
[10:01] * haomaiwa_ (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[10:01] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[10:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[10:08] * jamespage (~jamespage@2a00:1098:0:80:1000:42:0:1) has joined #ceph
[10:30] * ricin (~Hejt@206.54.167.90) has joined #ceph
[10:37] * overclk (~vshankar@59.93.69.9) has joined #ceph
[10:37] * fsimonce (~simon@host159-1-dynamic.54-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[10:44] * EinstCrazy (~EinstCraz@111.30.21.47) Quit (Remote host closed the connection)
[10:47] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:51] * fsimonce (~simon@host159-1-dynamic.54-79-r.retail.telecomitalia.it) has joined #ceph
[10:59] * ricin (~Hejt@4MJAAANSI.tor-irc.dnsbl.oftc.net) Quit ()
[11:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[11:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[11:13] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[11:25] * daviddcc (~dcasier@80.12.43.139) has joined #ceph
[11:30] * haomaiwa_ (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[11:32] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[11:33] * kefu (~kefu@114.92.107.250) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[11:49] * linjan (~linjan@176.195.74.42) has joined #ceph
[11:50] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[11:54] * EinstCrazy (~EinstCraz@117.15.122.189) has joined #ceph
[12:01] * haomaiwa_ (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[12:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[12:05] * rdas (~rdas@121.244.87.116) has joined #ceph
[12:06] <jayjay> hi guys
[12:07] <jayjay> ever seen this message in the osd log?
[12:07] <jayjay> bad crc in data 1243302609 != exp 2767557086
[12:08] <jayjay> followed by
[12:08] <jayjay> [IPADDRESS]:6812/4835 submit_message osd_op_reply(1193267 rbd_data.652ce437983aa.000000000000b1cd [set-alloc-hint object_size 4194304 write_size 4194304,write 0~2097152] v2433'1178 uv1178 ondisk = 0) v6 remote, [IPADDRESS]:0/4124572900, failed lossy con, dropping message 0x7f69ddb42580
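[Editor's note] The crc mismatch means a message payload failed its checksum on receipt, which usually points at the path between the hosts rather than at Ceph itself. A hedged first-checks sketch — the interface name is an assumption:

```shell
# Sketch: first checks for recurring "bad crc in data" messages.
ip -s link show eth0    # RX/TX error and drop counters (eth0 assumed)
dmesg | tail -n 50      # recent kernel/NIC complaints
# A one-off is usually harmless: the lossy connection is dropped and the
# client retries the op; persistent repeats suggest NIC, cabling, or RAM.
```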
[12:10] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:c44b:ab6e:2395:bd27) has joined #ceph
[12:10] * kefu (~kefu@114.92.107.250) has joined #ceph
[12:14] * nhm (~nhm@c-50-171-139-246.hsd1.mn.comcast.net) has joined #ceph
[12:14] * ChanServ sets mode +o nhm
[12:21] * nardial (~ls@dslb-088-076-176-152.088.076.pools.vodafone-ip.de) Quit (Quit: Leaving)
[12:29] <skoude> hmm.. any idea why I get this: Error EINVAL: key for client.cinder exists but cap osd does not match
[12:29] <skoude> I'm trying to give cinder user rwx acess to pools with command: ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=ceph_fast, allow rwx pool=ceph_slow, allow rx pool=images'
[12:30] * brian (~textual@109.255.114.93) Quit (Ping timeout: 480 seconds)
[12:35] * daviddcc (~dcasier@80.12.43.139) Quit (Ping timeout: 480 seconds)
[12:37] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[12:43] * zhaochao (~zhaochao@111.161.77.238) Quit (Quit: ChatZilla 0.9.92 [Iceweasel 38.5.0/20151216011944])
[12:45] <jayjay> you have to use ceph auth caps
[12:46] * analbeard (~shw@support.memset.com) has joined #ceph
[12:46] <jayjay> instead of ceph auth get-or-create
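[Editor's note] Concretely, `get-or-create` returns the existing key but refuses when the requested caps differ from the stored ones, which is the EINVAL skoude hit; `ceph auth caps` rewrites the caps on an existing key in place. A sketch reusing skoude's cap string from above (pool names are from their cluster):

```shell
# Sketch: update caps on the existing client.cinder key in place.
ceph auth caps client.cinder \
    mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=ceph_fast, allow rwx pool=ceph_slow, allow rx pool=images'
ceph auth get client.cinder    # confirm the new caps took effect
```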
[12:57] * drankis (~drankis__@89.111.13.198) has joined #ceph
[12:57] * drankis (~drankis__@89.111.13.198) Quit (Remote host closed the connection)
[13:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[13:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[13:06] * rwheeler (~rwheeler@pool-173-48-214-9.bstnma.fios.verizon.net) Quit (Quit: Leaving)
[13:13] * sudocat (~dibarra@2602:306:8bc7:4c50::46) Quit (Ping timeout: 480 seconds)
[13:18] * steveeJ (~junky@HSI-KBW-149-172-252-139.hsi13.kabel-badenwuerttemberg.de) Quit (Ping timeout: 480 seconds)
[13:22] * naoto_ (~naotok@27.131.11.254) Quit (Quit: Leaving...)
[13:23] * fen (~fen@80.147.192.56) Quit (Quit: fen)
[13:24] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[13:27] * LDA (~DM@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[13:28] * wjw-freebsd (~wjw@vpn.ecoracks.nl) has joined #ceph
[13:30] * hgichon (~hgichon@112.220.91.130) Quit (Ping timeout: 480 seconds)
[13:31] * brian (~textual@109.255.114.93) has joined #ceph
[13:37] * EinstCra_ (~EinstCraz@218.69.72.130) has joined #ceph
[13:38] * kefu is now known as kefu|afk
[13:38] * scuttlemonkey is now known as scuttle|afk
[13:42] * ade (~abradshaw@dslb-088-075-062-049.088.075.pools.vodafone-ip.de) has joined #ceph
[13:42] * georgem (~Adium@184.151.190.219) has joined #ceph
[13:42] * georgem (~Adium@184.151.190.219) Quit ()
[13:42] * georgem (~Adium@206.108.127.16) has joined #ceph
[13:42] * rdas (~rdas@121.244.87.116) has joined #ceph
[13:43] * EinstCrazy (~EinstCraz@117.15.122.189) Quit (Ping timeout: 480 seconds)
[13:46] * pabluk_ is now known as pabluk
[13:46] * diegows (~diegows@190.190.21.75) has joined #ceph
[13:47] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[13:48] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[13:53] * kefu|afk (~kefu@114.92.107.250) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[13:57] * fen (~fen@p578ac6ee.dip0.t-ipconnect.de) has joined #ceph
[14:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[14:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[14:03] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[14:03] * allaok (~allaok@machine107.orange-labs.com) has left #ceph
[14:04] * fen_ (~fen@p578ac6ee.dip0.t-ipconnect.de) has joined #ceph
[14:07] * fen_ (~fen@p578ac6ee.dip0.t-ipconnect.de) Quit ()
[14:09] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[14:10] * fen (~fen@p578ac6ee.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[14:13] * fen (~fen@p578ac6ee.dip0.t-ipconnect.de) has joined #ceph
[14:21] * kefu (~kefu@114.92.107.250) has joined #ceph
[14:22] * sudocat (~dibarra@173-11-146-140-houston.txt.hfc.comcastbusiness.net) has joined #ceph
[14:26] * LDA (~DM@host217-114-156-249.pppoe.mark-itt.net) Quit (Quit: Nettalk6 - www.ntalk.de)
[14:30] * sudocat (~dibarra@173-11-146-140-houston.txt.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[14:31] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[14:33] * duderonomy (~duderonom@c-24-7-50-110.hsd1.ca.comcast.net) has joined #ceph
[14:35] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[14:36] * drupal (~AotC@195-154-231-147.rev.poneytelecom.eu) has joined #ceph
[14:40] * fen (~fen@p578ac6ee.dip0.t-ipconnect.de) Quit (Quit: fen)
[14:42] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[14:43] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[14:45] * dlan_ (~dennis@116.228.88.131) Quit (Remote host closed the connection)
[14:52] * georgem (~Adium@206.108.127.16) has joined #ceph
[14:55] * Rachana (~Rachana@2601:87:3:3601::65f2) has joined #ceph
[14:59] * kefu (~kefu@114.92.107.250) Quit (Max SendQ exceeded)
[15:00] * kefu (~kefu@211.22.145.245) has joined #ceph
[15:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[15:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[15:06] * drupal (~AotC@76GAAAJKA.tor-irc.dnsbl.oftc.net) Quit ()
[15:07] * linjan (~linjan@176.195.74.42) Quit (Remote host closed the connection)
[15:08] * linjan (~linjan@176.195.74.42) has joined #ceph
[15:13] * TheSov2 (~TheSov@cip-248.trustwave.com) has joined #ceph
[15:16] * kefu_ (~kefu@114.92.107.250) has joined #ceph
[15:19] * dyasny (~dyasny@dsl.198.58.152.115.ebox.ca) has joined #ceph
[15:19] * kefu (~kefu@211.22.145.245) Quit (Ping timeout: 480 seconds)
[15:22] * daviddcc (~dcasier@nat-pool-cdg-u.redhat.com) has joined #ceph
[15:24] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[15:28] * Marqin (~marqin@spiramirabilis.net) has joined #ceph
[15:28] <Marqin> "Since monitors are light-weight, it is possible to run them on the same host as an OSD; however, we recommend running them on separate hosts, because fsync issues with the kernel may impair performance." -> are there some public benchmarks on this performance drop?
[15:29] * kanagaraj (~kanagaraj@27.7.8.160) has joined #ceph
[15:29] <TheSov2> Marqin, not that I know of; it becomes noticeable as you scale up your cluster
[15:29] <TheSov2> if your cluster is less than 20-30 osds you won't notice
[15:30] <TheSov2> that said, never keep your monitors on your osd servers; that's just a bad idea
[15:30] <Marqin> 2-4 osd :D
[15:30] <TheSov2> osd servers?
[15:31] <TheSov2> or osds total
[15:31] <Marqin> i mean, we have 4 servers, and we need to connect them with shared disk, so putting a monitor on 3 of them and an osd on all seems reasonable
[15:32] * kefu_ (~kefu@114.92.107.250) Quit (Max SendQ exceeded)
[15:33] * kefu (~kefu@114.92.107.250) has joined #ceph
[15:36] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) has joined #ceph
[15:37] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) Quit ()
[15:39] <doppelgrau> Marqin: I run the mons with OSDs, currently about 23 nodes, 80 OSDs
[15:40] <TheSov2> 4 servers, hah you wont notice anything
[15:40] <doppelgrau> Marqin: since the mon-DB is on a dedicated SSD, I do not see any mon-related trouble
[15:40] <TheSov2> doppelgrau, for my monitor configs i just mount /var to ssd
[15:41] * vbellur (~vijay@216.4.56.139) Quit (Ping timeout: 480 seconds)
[15:41] <doppelgrau> TheSov2: I use /var/lib/ceph :)
[15:41] * kefu (~kefu@114.92.107.250) Quit (Max SendQ exceeded)
[15:41] <TheSov2> have you tried infernalis yet?
[15:41] <TheSov2> I am curious about the possible speed increases
[15:42] * steveeJ (~junky@141.37.31.187) has joined #ceph
[15:42] <TheSov2> im tired of people saying ceph is slow
[15:42] * yanzheng (~zhyan@182.149.64.193) Quit (Quit: This computer has gone to sleep)
[15:42] * kefu (~kefu@211.22.145.245) has joined #ceph
[15:42] <doppelgrau> no, currently neither time nor ressources for setting up a testcluster
[15:43] * XMVD (~Xmd@78.85.35.236) has joined #ceph
[15:51] * Xmd (~Xmd@78.85.35.236) Quit (Ping timeout: 480 seconds)
[15:56] * daviddcc (~dcasier@nat-pool-cdg-u.redhat.com) Quit (Ping timeout: 480 seconds)
[15:58] <Marqin> does ceph-deploy do everything via ssh, or does it need to connect to the cluster? (I want the cluster to use only the private network, but deploy it via the public network)
[16:00] * vbellur (~vijay@216.4.56.146) has joined #ceph
[16:00] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[16:00] * TheSov3 (~TheSov@38.106.143.234) has joined #ceph
[16:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[16:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[16:03] * enax (~enax@hq.ezit.hu) Quit (Ping timeout: 480 seconds)
[16:05] <doppelgrau> Marqin: no idea, I use ansible for setup
[16:07] * TheSov2 (~TheSov@cip-248.trustwave.com) Quit (Ping timeout: 480 seconds)
[16:07] * TheSov2 (~TheSov@204.13.200.248) has joined #ceph
[16:08] * TheSov3 (~TheSov@38.106.143.234) Quit (Ping timeout: 480 seconds)
[16:10] * vbellur (~vijay@216.4.56.146) Quit (Ping timeout: 480 seconds)
[16:14] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[16:14] * amote (~amote@121.244.87.116) Quit (Quit: Leaving)
[16:18] * Joppe4899 (~SinZ|offl@46.19.139.126) has joined #ceph
[16:19] * bara (~bara@ip4-83-240-10-82.cust.nbox.cz) Quit (Quit: Bye guys!)
[16:19] * kefu (~kefu@211.22.145.245) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[16:21] * Xmd (~Xmd@78.85.35.236) has joined #ceph
[16:24] * XMVD (~Xmd@78.85.35.236) Quit (Ping timeout: 480 seconds)
[16:25] * cooldharma06 (~chatzilla@14.139.180.40) Quit (Quit: ChatZilla 0.9.92 [Iceweasel 21.0/20130515140136])
[16:27] * steveeJ (~junky@141.37.31.187) Quit (Ping timeout: 480 seconds)
[16:28] * vbellur (~vijay@nat-pool-bos-u.redhat.com) has joined #ceph
[16:34] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[16:34] * lincolnb (~lincoln@c-71-57-68-189.hsd1.il.comcast.net) has joined #ceph
[16:36] * ircolle (~Adium@2601:285:201:2bf9:4417:f326:1f3e:2410) has joined #ceph
[16:40] * b0e (~aledermue@213.95.25.82) Quit (Ping timeout: 480 seconds)
[16:48] * Joppe4899 (~SinZ|offl@84ZAAAJQO.tor-irc.dnsbl.oftc.net) Quit ()
[16:52] * neurodrone (~neurodron@pool-108-50-194-171.nwrknj.fios.verizon.net) has joined #ceph
[16:53] <neurodrone> Hello! Has anyone witnessed a problem with how levelDB files consume space on mons?
[16:53] <neurodrone> I am seeing my `/var/lib/ceph/mon/ceph-test-mon01/store.db` slowly grow and then saturate completely.
[16:53] * b0e (~aledermue@213.95.25.82) has joined #ceph
[16:53] <neurodrone> How much storage should `/var/lib/ceph/` have on the mons?
[16:55] <doppelgrau> neurodrone: I had that problem when my storage was too slow => had to manually restart the mons with compact on restart
[16:55] <doppelgrau> neurodrone: since putting the leveldbs on SSDs the problem seems to be gone
[16:55] <neurodrone> I see. How big was your space?
[16:55] <neurodrone> Mine is around 20G for that mount.
[16:56] <neurodrone> Is there a general recommendation on how big it should be?
[16:56] <doppelgrau> IIRC 40G or 60G
[16:56] <neurodrone> I see.
[16:56] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[16:56] <neurodrone> Also, how should I restart mons with compaction?
[16:57] <doppelgrau> but I compacted the leveldb when it was growing beyond (about) 16GB
[16:57] <neurodrone> Is there a particular command I can run without using any level-db specific tools?
[16:57] <neurodrone> I see.
[16:57] * mattbenjamin (~mbenjamin@aa2.linuxbox.com) has joined #ceph
[16:57] <neurodrone> I think that's a good approach.
[16:57] <doppelgrau> you can also use ceph tell to compact
[16:57] <doppelgrau> but only one mon at a time; they can leave the quorum during compaction
[16:57] <neurodrone> I see.
[16:58] <neurodrone> Cool, I will do this one at a time. Thanks for the tips! :)
[16:58] <doppelgrau> so "ad hoc": compact the dbs; in the long run: faster storage
[16:58] <neurodrone> Indeed. That makes sense.
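[Editor's note] The two compaction routes doppelgrau mentions can be sketched as follows — the mon id `a` is a placeholder for your own monitor name:

```shell
# Sketch: compact a monitor's leveldb store, one mon at a time
# (a mon can fall out of quorum while it compacts).
ceph tell mon.a compact
# Or compact automatically at every (re)start via ceph.conf:
#   [mon]
#   mon compact on start = true
```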
[17:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[17:01] <TheSov2> akhh the storage subreddit makes me want to tear my hair out. it's just full of marketing guys for big storage
[17:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[17:01] <TheSov2> sorry for the rant
[17:02] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[17:03] * dlan (~dennis@116.228.88.131) has joined #ceph
[17:05] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[17:05] <rkeene> :-D
[17:06] * SaneSmith (~chrisinaj@93.158.215.170) has joined #ceph
[17:09] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:09] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[17:09] * wushudoin (~wushudoin@2601:646:8201:7769:2ab2:bdff:fe0b:a6ee) has joined #ceph
[17:16] * wjw-freebsd (~wjw@vpn.ecoracks.nl) Quit (Ping timeout: 480 seconds)
[17:19] * reed (~reed@75-101-54-18.dsl.static.fusionbroadband.com) has joined #ceph
[17:23] <Marqin> TheSov2: pssst, do you want to buy some big storage?
[17:24] <TheSov2> ....
[17:24] <TheSov2> i dont mind storage vendors
[17:24] <TheSov2> i hate storage vendors trying to bullshit me
[17:24] <TheSov2> tintri was just here trying to sell me storage for a million bucks
[17:25] <TheSov2> 200grand per array for 3 years.
[17:25] <TheSov2> support
[17:25] <TheSov2> so tco on 4 arrays without personnel for 3 years is 1.8 mill
[17:26] <TheSov2> for 250tb of storage....
[17:26] <Marqin> usd?
[17:26] <TheSov2> yes
[17:26] <TheSov2> oh and get this. they didnt mention this but they are working on SDS
[17:26] <TheSov2> i found out from their company prospectus
[17:26] <TheSov2> so they see the writing on the wall with SDS
[17:27] <TheSov2> yet they want to sell me their all flash POS for 2 million dollars
[17:27] <Marqin> :D
[17:27] <Marqin> maybe they just got tons of cheap pendrives from china
[17:28] <TheSov2> and all flash storage makes no sense
[17:28] <TheSov2> any system that needs that kind of performance should have an all flash DAS
[17:29] <TheSov2> the purpose of a san is shared resources and management; if you have a system that needs 0.05ms latency, a san is not for you
[17:30] <Marqin> yup
[17:32] <rkeene> If you need 50 microseconds of latency for a block write of 4096 bytes you will need to modulate at 12 nanosecond intervals -- 81MHz
[17:32] <rkeene> Rounding up, 82MHz
[17:32] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Read error: Connection reset by peer)
[17:33] <rkeene> So you could accomplish that with a Fast Ethernet bus for short distances, and Gigabit Ethernet for pretty long distances, 10GbE/8GBFC for a SAN
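rkeene's 81/82 MHz figure checks out if the 4096-unit block is treated as 4096 symbols pushed inside the 50 µs (0.05 ms) latency budget — a minimal sketch of that back-of-the-envelope, one symbol per unit:

```python
# Back-of-the-envelope check of rkeene's numbers: a 4096-symbol block
# inside a 50 microsecond latency budget (his figures line up with
# treating the 4096-byte block as 4096 symbols).
LATENCY_BUDGET_S = 50e-6   # 0.05 ms
BLOCK_SYMBOLS = 4096

interval_s = LATENCY_BUDGET_S / BLOCK_SYMBOLS   # time per symbol
freq_hz = 1.0 / interval_s                      # required modulation rate

print(f"symbol interval: {interval_s * 1e9:.1f} ns")  # ~12.2 ns
print(f"modulation rate: {freq_hz / 1e6:.1f} MHz")    # ~81.9 MHz, "82MHz" rounded up
```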
[17:33] <TheSov2> for 1 system rkeen
[17:33] <TheSov2> not put 200 on it
[17:34] <TheSov2> now*
[17:34] <TheSov2> and thats just the network side
[17:34] <TheSov2> now pass that into the nic's memory
[17:34] <rkeene> It really doesn't matter how many systems there are for distance with any modern bus (which will use microsegmentation)
[17:34] <TheSov2> and then down to OS level
[17:34] <rkeene> You'll have all the HBA/OS overhead no matter how far the storage is from the system
[17:35] <TheSov2> correct
[17:35] <rkeene> Even if you hook the storage right into the PCI bus (I forget what that's called, it's the new hotness)
[17:35] <TheSov2> DAS
[17:35] <TheSov2> its old LOL
[17:35] <rkeene> No, it's not DAS
[17:35] <TheSov2> oh theres something new?
[17:35] <TheSov2> you mean the DMA HBA?
[17:35] <TheSov2> has access to ram
[17:35] <rkeene> No
[17:36] <rkeene> Every HBA will do DMA (or RDMA)
[17:36] <TheSov2> not every hba believe me
[17:36] <rkeene> https://en.wikipedia.org/wiki/NVM_Express
[17:38] * SaneSmith (~chrisinaj@84ZAAAJSW.tor-irc.dnsbl.oftc.net) Quit (Ping timeout: 480 seconds)
[17:40] * tsg (~tgohad@fmdmzpr03-ext.fm.intel.com) has joined #ceph
[17:41] * overclk (~vshankar@59.93.69.9) Quit (Quit: Zzzzz...)
[17:44] * tuhnis (~CobraKhan@76GAAAJT9.tor-irc.dnsbl.oftc.net) has joined #ceph
[17:45] * saltsa (~joonas@dsl-hkibrasgw1-58c018-65.dhcp.inet.fi) Quit (Quit: Lost terminal)
[17:46] * saltsa (~joonas@dsl-hkibrasgw1-58c018-65.dhcp.inet.fi) has joined #ceph
[17:47] <rkeene> All I'm saying is, 0.5ms isn't setting the bar very high for differentiating between DAS and SAN, since we can modulate quickly enough to overcome both the distance and "virtual distance" as the bits filter through transforms at various levels (HBA, FC switch, SAN intake port, SAN cache lookup) and if you have enough machines you can usually get better performance out of it (due to economy of scales and not needing to support the full load of all clients at all times...
[17:48] * gregmark (~Adium@68.87.42.115) has joined #ceph
[17:48] * vata1 (~vata@207.96.182.162) has joined #ceph
[17:52] <TheSov2> so you would put a system that has a .5ms sensitivity on a san?
[17:52] <rkeene> I'm saying you wouldn't get 0.5ms latency from a SAN any more than any other "nearness" factor, it's not a low enough number to matter
[17:53] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:53] <rkeene> NVMe over FC would be nice though :-)
[17:54] <TheSov2> well then my compellents must be in bad shape cuz on a 8 gig fibre im getting 20-30 ms over fiber where as my das units are typically in the 1ms region
[17:54] * nardial (~ls@dslb-088-076-176-152.088.076.pools.vodafone-ip.de) has joined #ceph
[17:54] <TheSov2> and my das's have flash cache
[17:54] * nardial (~ls@dslb-088-076-176-152.088.076.pools.vodafone-ip.de) Quit ()
[17:54] <rkeene> 20ms is terrible service times
[17:55] * steveeJ (~junky@HSI-KBW-149-172-252-139.hsi13.kabel-badenwuerttemberg.de) has joined #ceph
[17:57] <TheSov2> according to what i read 8 gig fibre has typical latency of 15ms
[17:57] <TheSov2> for spinning disk systems
[17:58] <TheSov2> which is what i have.
[17:58] * vbellur (~vijay@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[17:59] * vbellur (~vijay@nat-pool-bos-u.redhat.com) has joined #ceph
[18:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[18:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[18:06] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[18:06] <DanFoster> http://www.thewhir.com/web-hosting-news/netapp-buys-all-flash-storage-vendor-solidfire-for-870m
[18:06] <rkeene> The speed the link modulates has rather little to do with it (as we've seen we can get 0.5ms out of 81MHz over a short cable)
[18:06] <rkeene> Oh God, NetApp is the worst NAS vendor
[18:06] <rkeene> It's even worse when people try to use NetApp devices as a SAN
[18:07] <TheSov2> DanFoster, that was my whole point the all flash companies are dying
[18:07] <TheSov2> solidfire sold, violin is having issues
[18:07] <TheSov2> and purestorage is stagnant
[18:07] <rkeene> RAMCloud is still going strong :-)
[18:07] <DanFoster> I reckon Solidfired's investors are pretty happy ;)
[18:08] <TheSov2> basically all flash is not selling well
[18:08] <TheSov2> indeed since the company was hemmoraging money lol
[18:08] * emsnyder (~emsnyder@65.170.86.132) Quit (Ping timeout: 480 seconds)
[18:08] <DanFoster> That's not a problem if you end up getting bought for a crapload of money :)
[18:08] <TheSov2> im just saying any system that NEEEDS that kind of performance, is better off with like a intel 750 inside the box
[18:09] <DanFoster> So long as you can stick the burn rate, it's pretty standard tech exit strategy.
[18:09] <rkeene> (RAMCloud is by the guy who came up with Tcl, he talked about it at his keynote speech at the Tcl Conference I attended)
[18:09] <TheSov2> do people just add cloud to the name and get money thrown at them
[18:09] <DanFoster> You don't think that the reducing cost of solid state means the life of hdds is limited in any case?
[18:10] * johnavp1989 (~jpetrini@pool-100-14-5-21.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[18:11] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[18:12] <rkeene> TheSov2, RAMCloud has latency specifications of 0.05ms for reads and 0.15ms for writes (over the SAN)
[18:12] <TheSov2> oh?
[18:12] <TheSov2> how affordable are they?
[18:12] <TheSov2> will they sell me 180tb for less than a mill?
[18:13] * vbellur (~vijay@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[18:13] <rkeene> /JOIN RAMCloud
[18:14] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[18:14] <TheSov2> on what irc server?
[18:14] <DanFoster> I haven't asked, but prices keep coming down at a good clip.
[18:14] <rkeene> Freenode
[18:14] * tuhnis (~CobraKhan@76GAAAJT9.tor-irc.dnsbl.oftc.net) Quit ()
[18:14] <rkeene> I though this was freenode, sorry, I'm on a bunch
[18:18] * kefu (~kefu@114.92.107.250) has joined #ceph
[18:18] * dugravot6 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) has joined #ceph
[18:18] * dugravot61 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[18:18] <TheSov2> rkeene, i dont think ramcloud has commercial support
[18:19] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[18:19] <rkeene> There's definitely commercial support, according to the founder (J. Ousterhout)
[18:19] <TheSov2> im looking on google
[18:20] <TheSov2> unless my googlefu is weak today
[18:20] <rkeene> Let me find the recordings from the conference
[18:20] <rkeene> TheSov2, It's kind of long, but here's the founder talking about it and Tcl: https://www.youtube.com/watch?v=L0Qjk1BZDww
[18:21] <TheSov2> ill get on that
[18:21] * kefu (~kefu@114.92.107.250) Quit (Max SendQ exceeded)
[18:22] * kefu (~kefu@211.22.145.245) has joined #ceph
[18:23] * herrsergio (~herrsergi@201.141.118.69) has joined #ceph
[18:23] * herrsergio is now known as Guest2272
[18:24] <rkeene> (Tcl is awesome, btw)
[18:26] * dugravot6 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[18:27] * johnavp19891 (~jpetrini@166.170.34.130) has joined #ceph
[18:32] * bvi (~bastiaan@185.56.32.1) Quit (Ping timeout: 480 seconds)
[18:32] * dugravot61 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit (Quit: Leaving.)
[18:37] * LDA (~DM@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[18:43] * pabluk is now known as pabluk_
[18:45] * DanFoster (~Daniel@office.34sp.com) Quit (Quit: Leaving)
[18:50] <evilrob> right... so how fucked am I if I have 2 boxes in a cluster (dev stole one of the three) and one went down?
[18:50] <evilrob> I'm guessing the answer is "quite"
[18:52] <T1> it depends on your configuration, the location of MONs and how many copies of data your pool had
[18:52] <T1> so.. perhaps, perhaps not
[18:53] * johnavp19891 (~jpetrini@166.170.34.130) Quit (Ping timeout: 480 seconds)
[18:56] * jasuarez (~jasuarez@237.Red-83-39-111.dynamicIP.rima-tde.net) Quit (Quit: WeeChat 1.3)
[18:57] * mgolub (~Mikolaj@91.225.203.114) has joined #ceph
[18:57] * mykola (~Mikolaj@91.225.203.114) has joined #ceph
[18:57] * Guest2272 (~herrsergi@201.141.118.69) Quit (Ping timeout: 480 seconds)
[18:59] * adept256 (~Phase@h-213.61.149.100.host.de.colt.net) has joined #ceph
[19:21] * shohn1 (~shohn@dslb-088-075-234-032.088.075.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[19:21] * herrserg1o (~herrsergi@201.141.118.69) has joined #ceph
[19:29] * adept256 (~Phase@84ZAAAJYS.tor-irc.dnsbl.oftc.net) Quit ()
[19:34] * scuttle|afk is now known as scuttlemonkey
[19:36] * kefu (~kefu@211.22.145.245) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)

[19:38] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[19:38] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[19:40] * angdraug (~angdraug@64.124.158.100) has joined #ceph
[19:50] * vbellur (~vijay@205.197.242.157) has joined #ceph
[19:51] * mtb` (~mtb`@157.130.171.46) has joined #ceph
[19:54] * stiopa (~stiopa@cpc73828-dals21-2-0-cust630.20-2.cable.virginm.net) has joined #ceph
[19:54] * mtb` (~mtb`@157.130.171.46) Quit (Read error: Connection reset by peer)
[20:04] <TheSov2> have any of you guys used ceph-dokan for windows?
[20:05] * Kupo1 (~tyler.wil@23.111.254.159) has joined #ceph
[20:10] <rkeene> No, I've looked at porting my FUSE-based applications to Dokan though and I got the impression that Dokan was less stable than FUSE... But no actual experience
[20:11] * mtb` (~mtb`@157.130.171.46) has joined #ceph
[20:11] * mtb` (~mtb`@157.130.171.46) Quit ()
[20:12] * mtb` (~mtb`@157.130.171.46) has joined #ceph
[20:14] * xarses (~xarses@50.255.30.155) has joined #ceph
[20:15] * xarses (~xarses@50.255.30.155) Quit (Remote host closed the connection)
[20:15] * xarses (~xarses@50.255.30.155) has joined #ceph
[20:17] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[20:17] * zaitcev (~zaitcev@c-50-130-189-82.hsd1.nm.comcast.net) has joined #ceph
[20:21] * vbellur (~vijay@205.197.242.157) Quit (Ping timeout: 480 seconds)
[20:23] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) has joined #ceph
[20:23] * linuxkidd (~linuxkidd@49.sub-70-209-96.myvzw.com) Quit (Quit: Leaving)
[20:26] * linuxkidd (~linuxkidd@49.sub-70-209-96.myvzw.com) has joined #ceph
[20:29] * vbellur (~vijay@205.197.242.183) has joined #ceph
[20:31] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[20:37] * vbellur (~vijay@205.197.242.183) Quit (Ping timeout: 480 seconds)
[20:41] * sudocat (~dibarra@2602:306:8bc7:4c50::46) has joined #ceph
[20:42] * Rachana (~Rachana@2601:87:3:3601::65f2) Quit (Quit: Leaving)
[20:43] * Rachana (~Rachana@2601:87:3:3601::65f2) has joined #ceph
[20:43] * Rachana (~Rachana@2601:87:3:3601::65f2) Quit ()
[20:44] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[20:50] * Rachana (~Rachana@2601:87:3:3601::65f2) has joined #ceph
[20:52] <TheSov2> crap
[20:52] <TheSov2> theres no good way to setup iscsi for ceph
[20:53] <TheSov2> i want to be able to port RBD's to iSCSI targets and have them both highly available and multipath
[20:53] <TheSov2> but linux doesnt do active/active
[20:53] <Kvisle> tgt?
[20:53] <TheSov2> well ceph doesnt i mean
[20:53] <TheSov2> it doesnt seem to pass persistent reservation
[20:54] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[20:54] <TheSov2> Kvisle, any
[20:58] * ade (~abradshaw@dslb-088-075-062-049.088.075.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[21:04] * Rachana (~Rachana@2601:87:3:3601::65f2) Quit (Quit: Leaving)
[21:06] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[21:08] * mgolub (~Mikolaj@91.225.203.114) Quit (Quit: away)
[21:08] * mykola (~Mikolaj@91.225.203.114) Quit (Quit: away)
[21:09] * Rachana (~Rachana@2601:87:3:3601::65f2) has joined #ceph
[21:09] * Rachana (~Rachana@2601:87:3:3601::65f2) Quit ()
[21:13] * ychen (~ychen@69.25.143.32) has joined #ceph
[21:15] <T1> TheSov2: IMO iscsi-support is a bit outside cephs core functionality
[21:15] * ychen (~ychen@69.25.143.32) Quit ()
[21:16] <TheSov2> yes which is problematic for anything not linux
[21:17] <T1> I'd like to think there are enough iscsi targets available for *nix that should be able to serve an rbd over iscsi
[21:17] <TheSov2> there are many that can
[21:17] <T1> I havn't really looked around, but I'd like to think so
[21:17] <TheSov2> but im not going to run production from them
[21:18] <TheSov2> i need HA
[21:18] <T1> last time I used iscsi it was for service a zfs block device for my wifes macbook for timemachine
[21:18] <TheSov2> I use it everyday along fibre channel
[21:18] <TheSov2> the iscsi is the lower end storage we got here
[21:19] <T1> choose $vendor with commercial support and just ignore that the backend block device is an rbd?
[21:19] <TheSov2> T1, ? the goal is to use ceph
[21:19] * Rachana (~Rachana@2601:87:3:3601::65f2) has joined #ceph
[21:20] <T1> I mean that there should be some commercial iscsi target vendors out there that can provide HA
[21:20] <TheSov2> hmmm
[21:20] <T1> it's just some software
[21:20] <joshd> HA iscsi w/rbd is in progress with lio
[21:20] <TheSov2> but LIO sucks
[21:20] <TheSov2> lol
[21:21] <TheSov2> wait the lio does that allow active active?
[21:21] <joshd> yes
[21:21] <TheSov2> so why cant i use LIO now?
[21:21] <TheSov2> oh cuz it doesnt speak to ceph directly
[21:21] <TheSov2> i have to map rbds first
[21:22] <TheSov2> well, show me the direction in which to throw money
[21:22] <T1> what was it you said about non-persistent reservations?
[21:22] <joshd> you can use lio now, but not active/active
[21:22] <rkeene> You can map the RBDs on multiple nodes and do iSCSI multipathing out them (I've done it before, using Linux as an FC target)
[21:22] <joshd> http://tracker.ceph.com/projects/ceph/wiki/Clustered_SCSI_target_using_RBD
[21:22] * kanagaraj (~kanagaraj@27.7.8.160) Quit (Quit: Leaving)
[21:22] <rkeene> (But not with Ceph, using DRBD and shared-nothing disks)
[21:22] <TheSov2> rkeene, if you dont pass persistend reservations u get screwed
[21:23] <T1> we use UUID on the rbds to ensure correct mapping of rbds after reboot/adding/removing runtime
[21:23] <joshd> an updated patchset should show up on linux-scsi any day now
[21:23] <rkeene> T1, Yeah -- that's similar to what I did, I updated the Page 83 data with the serial number so Solaris MPxIO would understand it
[21:23] <TheSov2> joshd, are you telling me that the ceph iscsi LIO module is coming out any day now?
[21:24] <T1> rkeene: sorry, you lost me at "page 83 data".. :)
[21:24] <rkeene> Using DRBD in primary/primary with mode... A ? I can't remember which mode it was, but it worked well enough for ZFS (other modes did not) -- so I had two independant nodes that shared nothing serving data
[21:24] <rkeene> T1, Part of the SCSI specification
[21:25] <T1> ah, as in disk id?
[21:25] <rkeene> Yeah
[21:25] <T1> or similar
[21:25] <joshd> TheSov2: yes, it should be all ready this spring
[21:25] <TheSov2> holy shit
[21:25] <TheSov2> with persistent reservations?
[21:25] <joshd> yup
[21:25] <TheSov2> so i can use LIO with ceph to vmware and shit wont break?
[21:26] <TheSov2> say yes josh
[21:26] <joshd> that's the idea
[21:26] <TheSov2> SAY YES
[21:26] <TheSov2> OMG
[21:26] <joshd> :)
[21:26] <TheSov2> I LOVE YOU MAN
[21:26] <T1> "wont break" ..
[21:26] <T1> last famous words... ;)
[21:26] <TheSov2> T1, you dont understand
[21:26] <joshd> thank Mike Christie - he's the one doing the work
[21:26] <TheSov2> all thats been holding me back is the lack of HA iscsi for vmware
[21:26] <TheSov2> where is he
[21:26] <TheSov2> im going to make it rain on him
[21:27] <joshd> haha, he's at Red Hat, unfortunately not in this channel atm
[21:28] <TheSov2> i need to know what redhat charges for ceph support, on ubuntu
[21:29] <T1> I heard numbers a while ago for "ceph storage" licenses
[21:30] <T1> started at ~30.000DKK for each node for the first up to 20 or 30 nodes
[21:30] <TheSov2> oh god are gonna do that by the osd crap
[21:30] <T1> then getting lower..
[21:30] <joshd> it's capacity based these days
[21:30] <T1> but still a shitload
[21:30] <joshd> not per-osd
[21:30] <TheSov2> 30K per node...
[21:30] <TheSov2> thats not usd right
[21:31] <TheSov2> because thats bullshit
[21:31] <T1> no, DKK
[21:31] <T1> divde by 5.5
[21:31] <T1> (on average)
[21:31] <TheSov2> 4k per node
[21:31] <TheSov2> ok
[21:31] <TheSov2> phew
[21:31] <TheSov2> much better
[21:31] <T1> but if it's capacity based now
[21:31] <T1> still a shitload
[21:32] <T1> and probably with build in expectations on yearly growth
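Checking T1's conversion of the quoted license price: ~30,000 DKK per node at the stated average rate of 5.5 DKK per USD.

```python
# Converting T1's quoted per-node license price from DKK to USD at the
# "divide by 5.5" average rate given in the channel.
dkk_per_node = 30_000
dkk_per_usd = 5.5

usd_per_node = dkk_per_node / dkk_per_usd
print(f"~${usd_per_node:,.0f} per node")  # ~$5,455
```

That comes out a bit above the "4k per node" rounded off in the channel.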
[21:32] <joshd> TheSov2: if you like I can point you to RH folks who can talk about it
[21:32] <TheSov2> i wouldnt mind
[21:33] <TheSov2> if its negotiable thats better
[21:33] <TheSov2> we stopped using redhat OS because of the cost
[21:33] <T1> like IBMs TSM capacity based licenses there you cannot turn down a 10% yearly added data, so the license always gets 10% larger year after year aftre year
[21:34] * JWilbur (~AG_Scott@162.216.46.176) has joined #ceph
[21:39] <TheSov2> T1, wtf
[21:39] <T1> mmm - gotta love it
[21:39] <TheSov2> 10 percent per year
[21:40] <TheSov2> thats a doubling every 7 yeras
[21:40] <T1> thats why we havn't switched to capacity based and still keep our existing cpu based licensing model
[21:40] <T1> yup
[21:40] <TheSov2> whats the natural log of 10
[21:43] <TheSov2> yep
[21:43] <TheSov2> every 7 years that doubles
[21:44] <TheSov2> so in 7 years 100TB would be 200TB, then 7 after that 400TB, then 7 after that 800TB, then 7 after that 1.6PB
[21:44] <TheSov2> hmmm considering growth it may not be that bad but ibm is too expensive in general
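The "doubles every 7 years" estimate follows from the doubling-time formula ln(2)/ln(1 + r) rather than the natural log of 10 — a sketch of the compounding T1 and TheSov2 are describing, using the hypothetical 100 TB starting point from the messages above:

```python
import math

# Doubling time under the 10%/year capacity growth T1 describes:
# ln(2) / ln(1.1), close to the "every 7 years" eyeballed in the channel.
growth = 1.10
doubling_years = math.log(2) / math.log(growth)
print(f"doubling time: {doubling_years:.2f} years")  # ~7.27

# Projecting a hypothetical 100 TB license through four doublings,
# matching the 100 -> 200 -> 400 -> 800 -> 1600 TB progression above.
tb = 100.0
for step in range(1, 5):
    tb *= 2
    print(f"after ~{step * 7} years: {tb:.0f} TB")
```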
[21:46] * dyasny (~dyasny@dsl.198.58.152.115.ebox.ca) Quit (Ping timeout: 480 seconds)
[21:49] * linuxkidd (~linuxkidd@49.sub-70-209-96.myvzw.com) Quit (Ping timeout: 480 seconds)
[21:49] * linuxkidd (~linuxkidd@49.sub-70-209-96.myvzw.com) has joined #ceph
[21:50] * tsg (~tgohad@fmdmzpr03-ext.fm.intel.com) Quit (Remote host closed the connection)
[22:02] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[22:03] <zao> T1: At least they straightened out the CPU-flavored license recently, right?
[22:03] <zao> It was a nice mess with HT and sockets in the past.
[22:04] * JWilbur (~AG_Scott@162.216.46.176) Quit ()
[22:04] <TheSov2> hyperthreading is bullshit
[22:04] <TheSov2> every test i have ever done shows me how bullshit it is
[22:04] * dyasny (~dyasny@dsl.198.58.172.15.ebox.ca) has joined #ceph
[22:04] <T1> zao: afaik there has been no real changes to VU licensing the last 5 or 6 years
[22:04] <TheSov2> im down with AMD cores, not threads
[22:05] <zao> Huh. Maybe my tape wrangler is spinning tales of times past then.
[22:05] <T1> the last big change was that newer CPU types cost 70VU compared to older types that only cost 50VU
[22:05] <T1> then I started with TSM for 10 or 12 years ago every core cost 50VU
[22:05] <zao> Also, surely you mean IBM Spectrum Protect[TM]? :P
[22:06] <T1> now it's really 70VU for everything
[22:06] <T1> eeeeeek.. yeah
[22:06] <T1> somewhere in boss-land that name change probably makes sense
[22:06] <zao> Good thing that 'ISP' doesn't overload any existing term.
[22:06] <T1> _everything_ apart fro IBM will keep TSM
[22:07] <zao> indeed
[22:07] * cathode (~cathode@50-198-166-81-static.hfc.comcastbusiness.net) has joined #ceph
[22:07] <T1> hell - someone still uses adstar
[22:07] <T1> the commandline tools are still named dsmc for the BA client
[22:08] <T1> and dsmadmc for the admin client
[22:08] <T1> s/someone/ some
[22:09] <zao> I guess that the capacity bloat might make sense if you're keeping up with the LTO revisions and keep the same number of slots in your robot?
[22:09] <T1> perhaps
[22:10] <T1> IMO it's a catch 22 just to get more money without anyone noticing
[22:10] <T1> most of it is not listed along with the license costs
[22:12] <T1> but once you read the fine print and ask for an example on how much we'll have to cough up in 5 years time you can see it
[22:24] * georgem (~Adium@206.108.127.16) Quit (Ping timeout: 480 seconds)
[22:25] * LDA (~DM@host217-114-156-249.pppoe.mark-itt.net) Quit (Quit: Nettalk6 - www.ntalk.de)
[22:28] <mnaser> docs.ceph.com not loading? :(
[22:31] * diegows (~diegows@190.190.21.75) Quit (Quit: Leaving)
[22:35] <mnaser> 404 now
[22:35] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[22:39] * i_m (~ivan.miro@88.206.113.199) Quit (Ping timeout: 480 seconds)
[22:47] * herrserg1o (~herrsergi@201.141.118.69) Quit (Ping timeout: 480 seconds)
[22:48] <evilrob> planning training and conferences for my team for the next year. Are there options for ceph other than the openstack summit?
[22:53] <joshd> evilrob: there are a bunch of ceph days http://tracker.ceph.com/projects/ceph/wiki/CAB_2015-12-09
[22:54] * georgem (~Adium@206.108.127.16) has joined #ceph
[22:56] <evilrob> thanks for that link. the main ceph days page didn't have anything for next year yet.
[22:56] <evilrob> I bet I can swing the San Jose one. I work for cisco. so a trip to the mother-ship isn't out of the question.
[22:57] <evilrob> and I'm in Austin where the 25 Apr event probably is (coordinating with OS summit)
[22:58] <joshd> yeah, I think that's with the OS summit there
[23:05] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[23:05] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[23:06] * mattbenjamin (~mbenjamin@aa2.linuxbox.com) Quit (Quit: Leaving.)
[23:06] * vata1 (~vata@207.96.182.162) Quit (Ping timeout: 480 seconds)
[23:07] * Diablodoct0r (~Heliwr@46.166.188.222) has joined #ceph
[23:08] * xarses (~xarses@50.255.30.155) Quit (Remote host closed the connection)
[23:08] * xarses (~xarses@50.255.30.155) has joined #ceph
[23:13] * TheSov2 (~TheSov@204.13.200.248) Quit (Ping timeout: 480 seconds)
[23:19] * wkennington (~william@c-50-184-242-109.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[23:22] * wkennington (~william@c-50-184-242-109.hsd1.ca.comcast.net) has joined #ceph
[23:25] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[23:26] * georgem (~Adium@206.108.127.16) has joined #ceph
[23:28] * joshd (~jdurgin@206.169.83.146) Quit (Ping timeout: 480 seconds)
[23:28] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) Quit (Ping timeout: 480 seconds)
[23:33] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[23:33] * vbellur (~vijay@209.117.45.10) has joined #ceph
[23:34] * samx (~Adium@cpe-172-90-97-62.socal.res.rr.com) has joined #ceph
[23:35] <samx> Anyone using the chef-ceph cookbook here? I'm trying to run kitchen converge and having some issues.
[23:37] * Diablodoct0r (~Heliwr@4MJAAAOIM.tor-irc.dnsbl.oftc.net) Quit ()
[23:41] * Rachana (~Rachana@2601:87:3:3601::65f2) Quit (Quit: Leaving)
[23:42] * Rachana (~Rachana@2601:87:3:3601::65f2) has joined #ceph
[23:43] <Marqin> Hm, ceph website and debian repo are down
[23:43] * joshd (~jdurgin@66-194-8-225.static.twtelecom.net) has joined #ceph
[23:49] * mtb` (~mtb`@157.130.171.46) Quit (Ping timeout: 480 seconds)
[23:50] * xarses (~xarses@50.255.30.155) Quit (Ping timeout: 480 seconds)
[23:53] * qhartman (~qhartman@den.direwolfdigital.com) has joined #ceph
[23:55] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[23:56] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.