#ceph IRC Log


IRC Log for 2015-06-16

Timestamps are in GMT/BST.

[0:01] <TheSov2> and if u want to use raspberry pis, 1tb of replicated storage will only cost 670 bucks
[0:02] <Sysadmin88> ceph on raspberry Pis... interesting
[0:02] <Sysadmin88> benchmark away :)
[0:03] <TheSov2> obviously 1 osd per system...
[0:03] <TheSov2> and u would have to use normal computers for monitors
[0:03] <Sysadmin88> 100mbit :(
[0:03] <TheSov2> yeah but so what
[0:03] <TheSov2> the scale is what you want
[0:03] <TheSov2> stick a 1 tb usb drive on each pi
[0:03] <TheSov2> setup 3 monitors
[0:03] <Sysadmin88> don't forget IOPS
[0:04] * jskinner (~jskinner@host-95-2-129.infobunker.com) Quit (Ping timeout: 480 seconds)
[0:04] <TheSov2> the latency will be shit but you get a fuck ton of storage at a decent bandwidth
[0:04] <TheSov2> if you are doing object store, its perfect
[0:04] <Sysadmin88> probably spend a ton on switches as well for all those Pis
[0:05] <TheSov2> naw
[0:05] <TheSov2> get the dell OS-less switches
[0:05] <TheSov2> and install the free switch os
[0:05] <TheSov2> forgot its name
[0:05] <TheSov2> u can get a 48 port for like 250 bucks
[0:05] * visbits (~textual@8.29.138.28) Quit (Quit: Textual IRC Client: www.textualapp.com)
[0:05] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[0:05] <TheSov2> dell power connects
[0:06] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[0:06] * Schaap (~Vidi@8Q4AABKBM.tor-irc.dnsbl.oftc.net) Quit ()
[0:07] * bildramer (~poller@exit1.tor-proxy.net.ua) has joined #ceph
[0:07] <TheSov2> like seriously i would use that for backups
[0:07] <TheSov2> a cluster to just throw backups in
[0:08] <TheSov2> thats with 128gig memory sticks, just throw a 1tb usb hard drive on instead
[0:09] * dmatson (~david.mat@192.41.52.12) has joined #ceph
[0:09] * linuxkidd (~linuxkidd@209.163.164.50) Quit (Quit: Leaving)
[0:10] <TheSov2> 1000 raspberry pi's with 2tb external usb disks attached and memory cards, with case, and power cord, would cost 200k and give you 2PB raw
[0:10] <TheSov2> its shitty
[0:10] <TheSov2> LOL
[0:10] <doppelgrau> and a nightmare in cabling :)
[0:10] <jidar> lawl
[0:11] <jidar> are you trying to make people cringe in here?
[0:11] <TheSov2> lol
[0:11] <TheSov2> no but for building a home lab 5 of them is pretty cheap
[0:11] <TheSov2> 970 bucks
[0:11] * puffy (~puffy@161.170.193.99) has joined #ceph
[0:11] <TheSov2> 10TB
[0:12] * oro (~oro@79.120.135.209) Quit (Ping timeout: 480 seconds)
[0:12] <TheSov2> ceph is already ported to arm, just add the repo
[0:13] <TheSov2> if u bought no disks at all, the price for 5 of them complete is super cheap: 320 bucks with all the fixings
[0:14] <TheSov2> someone has already started on it http://millibit.blogspot.co.uk/2015/01/ceph-pi-adding-osd-and-more-performance.html
[0:15] <TheSov2> bah only 3 units
[0:15] <TheSov2> u cant build a decent cluster on 3 units
[0:15] <TheSov2> you got to stay ahead of the size you need
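(Checking the arithmetic above: 1000 Pis × 2TB ≈ 2PB raw, so $200k works out to about $100/TB raw, or roughly $300/TB usable at the usual 3× replication; the 5-node lab at $970 for 10TB raw comes to a similar ~$97/TB raw.)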
[0:17] * joao (~joao@249.38.136.95.rev.vodafone.pt) Quit (Quit: Leaving)
[0:19] * analbeard (~shw@5.80.205.222) Quit (Quit: Leaving.)
[0:21] <Sysadmin88> at least ceph makes it easy to migrate off them... lol
[0:23] * Concubidated (~Adium@199.119.131.10) has joined #ceph
[0:29] * diq (~diq@c-50-161-114-166.hsd1.ca.comcast.net) Quit (Quit: Leaving...)
[0:30] * vata (~vata@cable-21.246.173-197.electronicbox.net) has joined #ceph
[0:30] * tomc (~tomc@192.41.52.12) has joined #ceph
[0:31] <tomc> Anyone have any idea about getting a mon db to compact while the cluster is not healthy?
[0:31] <tomc> is it possible?
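(Aside: mon store compaction can be triggered on a live monitor; a minimal sketch, assuming a monitor named mon.a:

    # ask the monitor to compact its leveldb store right now
    ceph tell mon.a compact
    # or compact automatically on every daemon restart, via ceph.conf:
    [mon]
    mon compact on start = true

While the cluster is unhealthy the monitors keep old maps around for recovery, so compaction may not reclaim much until the PGs go clean again.)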
[0:33] * midnightrunner (~midnightr@216.113.160.71) Quit (Ping timeout: 480 seconds)
[0:34] <doppelgrau> tomc: I would try asking on the mailing list, I think more people read that
[0:35] * TheSov2 (~TheSov@cip-248.trustwave.com) Quit (Quit: Leaving)
[0:35] * fsimonce (~simon@host253-71-dynamic.3-87-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[0:35] * midnightrunner (~midnightr@216.113.160.71) has joined #ceph
[0:36] * lcurtis (~lcurtis@47.19.105.250) Quit (Remote host closed the connection)
[0:36] <cholcombe> is the ceph.com/debian-hammer link down?
[0:36] * bildramer (~poller@8Q4AABKCJ.tor-irc.dnsbl.oftc.net) Quit ()
[0:36] <cholcombe> correction, is ceph.com down? lol
[0:38] * LRWerewolf (~Bonzaii@194.63.142.220) has joined #ceph
[0:40] * diq (~diq@c-50-161-114-166.hsd1.ca.comcast.net) has joined #ceph
[0:42] * danieagle (~Daniel@187.35.206.151) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[0:42] * diq (~diq@c-50-161-114-166.hsd1.ca.comcast.net) Quit ()
[0:43] <dmick> looks like someone is mirroring repos
[0:52] * rlrevell (~leer@184.52.129.221) has joined #ceph
[0:53] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[0:53] * fdmanana__ (~fdmanana@bl13-129-165.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[0:54] * Concubidated (~Adium@199.119.131.10) Quit (Quit: Leaving.)
[0:57] * moore_ (~moore@64.202.160.88) Quit (Remote host closed the connection)
[1:03] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[1:05] * tomc (~tomc@192.41.52.12) Quit (Quit: tomc)
[1:07] * LRWerewolf (~Bonzaii@5NZAADV3P.tor-irc.dnsbl.oftc.net) Quit ()
[1:12] * puffy1 (~puffy@161.170.193.99) has joined #ceph
[1:12] * puffy (~puffy@161.170.193.99) Quit (Read error: Connection reset by peer)
[1:12] * rlrevell (~leer@184.52.129.221) Quit (Read error: Connection reset by peer)
[1:19] <ska> How many mds stand-bys am I allowed to have?
[1:22] <doppelgrau> ska: one or two standby mds seems common, but I do not know if there is an upper limit
[1:28] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) has joined #ceph
[1:28] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) Quit (Remote host closed the connection)
[1:29] <dmick> ceph.com back
[1:34] * beardo_ (~sma310@207-172-244-241.c3-0.atw-ubr5.atw.pa.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[1:37] <ska> doppelgrau: I'd like to (for simplicity) set up each node with osd, mon, and mds servers so that my deploy is symmetric.
[1:38] <ska> This is for a test system only; performance is not important for us.
[1:38] <doppelgrau> ska: then test it :)
[1:38] <ska> doppelgrau: it's legal?
[1:39] <doppelgrau> ska: but I've heard that very large numbers of monitors (>7 or 9) somehow tend to make trouble
[1:39] <doppelgrau> ska: in the worst case you'll need to remove some mds servers I guess
[1:40] * towen (~towen@c-98-230-203-84.hsd1.nm.comcast.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[1:40] <ska> doppelgrau: sure.. I only intend to have 3 nodes total..
[1:40] <ska> 2 mds on standby and one active.
[1:41] <doppelgrau> sounds reasonable
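(Aside: no special configuration is needed for standbys; every ceph-mds daemon beyond the active one registers itself as a standby. A minimal ceph.conf sketch for the three-node layout described above, assuming hostnames node1..node3:

    [mds.node1]
        host = node1
    [mds.node2]
        host = node2
    [mds.node3]
        host = node3

With one filesystem and default settings, `ceph mds stat` should then report something like "1/1/1 up {0=node1=up:active}, 2 up:standby".)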
[1:42] * marrusl (~mark@rrcs-70-60-101-195.midsouth.biz.rr.com) Quit (Ping timeout: 480 seconds)
[1:47] * beardo_ (~sma310@207-172-244-241.c3-0.atw-ubr5.atw.pa.cable.rcn.com) has joined #ceph
[1:48] * bene-at-car-repair (~ben@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[1:51] * nsoffer (~nsoffer@bzq-79-180-80-9.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[1:55] <fxmulder> does the ceph-users list have a spam issue? I was just unsubscribed from it because of bounce issues
[1:56] * oms101 (~oms101@p20030057EA098700C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[2:04] * oms101 (~oms101@p20030057EA087400C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[2:10] * tmrz (~quassel@198-84-192-38.cpe.teksavvy.com) has joined #ceph
[2:11] * jskinner (~jskinner@173-28-1-197.client.mchsi.com) has joined #ceph
[2:11] * jskinner (~jskinner@173-28-1-197.client.mchsi.com) Quit ()
[2:12] * puffy (~puffy@161.170.193.99) has joined #ceph
[2:12] * puffy1 (~puffy@161.170.193.99) Quit (Read error: Connection reset by peer)
[2:22] * marrusl (~mark@rrcs-70-60-101-195.midsouth.biz.rr.com) has joined #ceph
[2:23] * haomaiwa_ (~haomaiwan@183.206.171.154) Quit (Remote host closed the connection)
[2:26] * xarses (~xarses@166.175.190.188) Quit (Ping timeout: 480 seconds)
[2:27] * jclm1 (~jclm@ip-64-134-187-212.public.wayport.net) has joined #ceph
[2:28] * puffy1 (~puffy@161.170.193.135) has joined #ceph
[2:33] * jclm (~jclm@ip-64-134-187-212.public.wayport.net) Quit (Ping timeout: 480 seconds)
[2:34] * t4nk262 (~oftc-webi@67-43-142-107.border7-dynamic.dsl.sentex.ca) Quit (Remote host closed the connection)
[2:34] * yanzheng (~zhyan@125.71.107.110) has joined #ceph
[2:36] * puffy (~puffy@161.170.193.99) Quit (Ping timeout: 480 seconds)
[2:40] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Ping timeout: 480 seconds)
[2:42] * puffy1 (~puffy@161.170.193.135) Quit (Quit: Leaving.)
[2:43] * ibravo (~ibravo@72.83.69.64) Quit (Quit: Leaving)
[2:48] * xarses (~xarses@12.10.113.130) has joined #ceph
[2:50] * Concubidated (~Adium@199.119.131.10) has joined #ceph
[2:51] * LeaChim (~LeaChim@host86-175-32-176.range86-175.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:59] * Debesis (~0x@140.217.38.86.mobile.mezon.lt) Quit (Ping timeout: 480 seconds)
[3:03] * haomaiwang (~haomaiwan@218.94.96.134) has joined #ceph
[3:09] <TheSov> im doing it....im gonna build a raspi cluster
[3:14] * yguang11 (~yguang11@nat-dip30-wl-d.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[3:14] * scuttlemonkey is now known as scuttle|afk
[3:14] * yguang11 (~yguang11@nat-dip30-wl-d.cfw-a-gci.corp.yahoo.com) has joined #ceph
[3:16] * fam is now known as fam_away
[3:17] * fam_away is now known as fam
[3:22] * jbautista- (~wushudoin@transit-86-181-132-209.redhat.com) has joined #ceph
[3:22] * yguang11 (~yguang11@nat-dip30-wl-d.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[3:24] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[3:25] * georgem (~Adium@192-171-33-102.cpe.pppoe.ca) has joined #ceph
[3:30] * wushudoin_ (~wushudoin@38.140.108.2) Quit (Ping timeout: 480 seconds)
[3:30] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[3:31] * jbautista- (~wushudoin@transit-86-181-132-209.redhat.com) Quit (Ping timeout: 480 seconds)
[3:31] * shyu (~Shanzhi@119.254.120.66) has joined #ceph
[3:31] * jbautista- (~wushudoin@38.140.108.2) has joined #ceph
[3:37] * dgbaley27 (~matt@c-67-176-93-83.hsd1.co.comcast.net) has joined #ceph
[3:40] * yguang11 (~yguang11@12.31.82.125) has joined #ceph
[3:43] * kefu (~kefu@114.92.97.251) has joined #ceph
[3:46] * marrusl (~mark@rrcs-70-60-101-195.midsouth.biz.rr.com) Quit (Quit: sync && halt)
[3:52] * shang (~ShangWu@111-83-153-202.EMOME-IP.hinet.net) has joined #ceph
[3:55] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[3:56] * yguang11 (~yguang11@12.31.82.125) Quit (Remote host closed the connection)
[3:56] * yguang11 (~yguang11@12.31.82.125) has joined #ceph
[4:04] * yguang11 (~yguang11@12.31.82.125) Quit (Ping timeout: 480 seconds)
[4:06] * eqhmcow_ (~eqhmcow@adsl-74-242-202-15.rmo.bellsouth.net) Quit (Quit: leaving)
[4:08] * flisky (~Thunderbi@106.39.60.34) has joined #ceph
[4:08] * delaf (~delaf@legendary.xserve.fr) Quit (Ping timeout: 480 seconds)
[4:09] * sankarshan (~sankarsha@183.87.39.242) has joined #ceph
[4:12] * midnightrunner (~midnightr@216.113.160.71) Quit (Remote host closed the connection)
[4:12] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[4:12] * kefu (~kefu@114.92.97.251) has joined #ceph
[4:22] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[4:22] * kefu (~kefu@114.92.97.251) has joined #ceph
[4:27] * zhaochao (~zhaochao@125.39.8.226) has joined #ceph
[4:27] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[4:29] * kefu (~kefu@114.92.97.251) has joined #ceph
[4:29] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[4:31] * bkopilov (~bkopilov@bzq-79-183-58-206.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[4:32] * yguang11 (~yguang11@12.31.82.125) has joined #ceph
[4:33] * yguang11_ (~yguang11@2001:4998:effd:7801::1024) has joined #ceph
[4:40] * yguang11 (~yguang11@12.31.82.125) Quit (Ping timeout: 480 seconds)
[4:41] * shang_ (~ShangWu@111-83-7-68.EMOME-IP.hinet.net) has joined #ceph
[4:48] * shang (~ShangWu@111-83-153-202.EMOME-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[5:00] * wushudoin_ (~wushudoin@transit-86-181-132-209.redhat.com) has joined #ceph
[5:02] * georgem1 (~Adium@23-91-150-96.cpe.pppoe.ca) has joined #ceph
[5:04] * vata (~vata@cable-21.246.173-197.electronicbox.net) Quit (Quit: Leaving.)
[5:05] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[5:08] * jbautista- (~wushudoin@38.140.108.2) Quit (Ping timeout: 480 seconds)
[5:08] * georgem (~Adium@192-171-33-102.cpe.pppoe.ca) Quit (Ping timeout: 480 seconds)
[5:09] * wushudoin_ (~wushudoin@transit-86-181-132-209.redhat.com) Quit (Ping timeout: 480 seconds)
[5:09] * wushudoin_ (~wushudoin@38.140.108.2) has joined #ceph
[5:11] * overclk (~overclk@121.244.87.117) has joined #ceph
[5:13] * Vacuum_ (~Vacuum@88.130.200.173) has joined #ceph
[5:19] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[5:19] * Vacuum__ (~Vacuum@88.130.212.48) Quit (Ping timeout: 480 seconds)
[5:24] * squ (~Thunderbi@46.109.36.167) has joined #ceph
[5:26] * ivotron (uid25461@id-25461.brockwell.irccloud.com) Quit (Quit: Connection closed for inactivity)
[5:33] * jclm (~jclm@ip-64-134-187-212.public.wayport.net) has joined #ceph
[5:34] * shang_ (~ShangWu@111-83-7-68.EMOME-IP.hinet.net) Quit (Quit: Ex-Chat)
[5:38] * jclm1 (~jclm@ip-64-134-187-212.public.wayport.net) Quit (Ping timeout: 480 seconds)
[5:49] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) has joined #ceph
[5:52] * flisky (~Thunderbi@106.39.60.34) Quit (Remote host closed the connection)
[5:52] * flisky (~Thunderbi@106.39.60.34) has joined #ceph
[6:06] * KevinPerks (~Adium@2606:a000:80ad:1300:597a:9e58:f677:e520) Quit (Quit: Leaving.)
[6:13] * georgem1 (~Adium@23-91-150-96.cpe.pppoe.ca) Quit (Quit: Leaving.)
[6:18] * puffy (~puffy@c-50-131-179-74.hsd1.ca.comcast.net) has joined #ceph
[6:20] * jclm (~jclm@ip-64-134-187-212.public.wayport.net) Quit (Ping timeout: 480 seconds)
[6:21] * tganguly (~tganguly@121.244.87.117) has joined #ceph
[6:47] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[6:54] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[7:05] * yguang11_ (~yguang11@2001:4998:effd:7801::1024) Quit ()
[7:08] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:09] * sjm (~sjm@49.32.0.234) has joined #ceph
[7:13] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[7:14] * kefu (~kefu@114.92.97.251) has joined #ceph
[7:27] * shylesh (~shylesh@121.244.87.118) has joined #ceph
[7:29] * m0zes (~mozes@beocat.cis.ksu.edu) Quit (Ping timeout: 480 seconds)
[7:36] <snerd> can anyone suggest the best way to expose an existing rados object with radosgw?
[7:39] * puffy (~puffy@c-50-131-179-74.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[7:41] * karnan (~karnan@121.244.87.117) has joined #ceph
[7:43] * sjm (~sjm@49.32.0.234) Quit (Ping timeout: 480 seconds)
[7:45] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[7:45] <kuroneko> ls
[7:45] <kuroneko> ugh. sorry
[7:46] * kefu (~kefu@114.92.97.251) has joined #ceph
[7:46] * ifur (~osm@0001f63e.user.oftc.net) Quit (Ping timeout: 480 seconds)
[7:48] * rdas (~rdas@121.244.87.116) has joined #ceph
[7:51] * m0zes (~mozes@beocat.cis.ksu.edu) has joined #ceph
[7:55] * b0e (~aledermue@213.95.25.82) has joined #ceph
[8:01] * calvinx (~calvin@101.100.172.246) has joined #ceph
[8:05] * hostranger (~rulrich@2a02:41a:3999::85) has joined #ceph
[8:06] * hostranger (~rulrich@2a02:41a:3999::85) has left #ceph
[8:10] * kefu (~kefu@114.92.97.251) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[8:13] * atc (~oftc-webi@122.146.93.152) has joined #ceph
[8:13] * atc (~oftc-webi@122.146.93.152) Quit ()
[8:13] * cok (~chk@2a02:2350:18:1010:9000:4f10:3a02:d14c) has joined #ceph
[8:14] * kefu (~kefu@114.92.97.251) has joined #ceph
[8:15] * Mika_c (~Mk@122.146.93.152) has joined #ceph
[8:17] <snerd> hrm, looking at the rados clonedata cmd
[8:18] <snerd> what's --object-locator?
[8:18] * sjm (~sjm@49.32.0.234) has joined #ceph
[8:18] * Sysadmin88 (~IceChat77@2.124.164.69) Quit (Quit: IceChat - Its what Cool People use)
[8:18] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[8:28] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[8:29] * kefu (~kefu@192.154.200.66) has joined #ceph
[8:30] * bobrik_______ (~bobrik@83.243.64.45) Quit (Quit: (null))
[8:35] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[8:37] * kefu (~kefu@192.154.200.66) Quit (Max SendQ exceeded)
[8:38] * kefu (~kefu@192.154.200.66) has joined #ceph
[8:38] * Nacer (~Nacer@2001:41d0:fe82:7200:44ab:210b:5810:a626) Quit (Remote host closed the connection)
[8:38] * dugravot6 (~dugravot6@nat-persul-montet.wifi.univ-lorraine.fr) has joined #ceph
[8:40] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[8:48] * zhaochao_ (~zhaochao@111.161.77.241) has joined #ceph
[8:52] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) has joined #ceph
[8:53] <freman> hi! anyone able to provide some tips on a performance issue on a newly installed all-flash ceph cluster? When we do write tests we get 900MB/s write, but read tests are only 200MB/s. all servers are on 10Gbit connections.
[8:54] * zhaochao (~zhaochao@125.39.8.226) Quit (Ping timeout: 480 seconds)
[8:59] * oro (~oro@79.120.135.209) has joined #ceph
[9:00] * thomnico (~thomnico@cro38-2-88-180-16-18.fbx.proxad.net) has joined #ceph
[9:02] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[9:06] * dgurtner (~dgurtner@178.197.231.240) has joined #ceph
[9:06] * kefu_ (~kefu@114.92.97.251) has joined #ceph
[9:07] * dgbaley27 (~matt@c-67-176-93-83.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[9:08] * analbeard (~shw@support.memset.com) has joined #ceph
[9:09] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[9:10] * kefu (~kefu@192.154.200.66) Quit (Ping timeout: 480 seconds)
[9:12] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) has joined #ceph
[9:12] * ivotron (uid25461@id-25461.brockwell.irccloud.com) has joined #ceph
[9:14] * bitserker (~toni@88.87.194.130) has joined #ceph
[9:14] * m0zes (~mozes@beocat.cis.ksu.edu) Quit (Ping timeout: 480 seconds)
[9:18] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[9:26] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[9:32] * bobrik_______ (~bobrik@109.167.249.178) has joined #ceph
[9:32] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) has joined #ceph
[9:34] * nardial (~ls@dslb-178-009-182-197.178.009.pools.vodafone-ip.de) has joined #ceph
[9:34] * bobrik________ (~bobrik@109.167.249.178) has joined #ceph
[9:35] * kefu_ (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[9:35] * kefu (~kefu@114.92.97.251) has joined #ceph
[9:36] * ksperis (~ksperis@46.218.42.103) Quit (Quit: Leaving)
[9:38] * fsimonce (~simon@host253-71-dynamic.3-87-r.retail.telecomitalia.it) has joined #ceph
[9:40] * haomaiwa_ (~haomaiwan@218.94.96.134) has joined #ceph
[9:40] * haomaiwang (~haomaiwan@218.94.96.134) Quit (Read error: Connection reset by peer)
[9:40] * bobrik_______ (~bobrik@109.167.249.178) Quit (Ping timeout: 480 seconds)
[9:43] * yanzheng1 (~zhyan@182.139.21.245) has joined #ceph
[9:44] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:47] * yanzheng (~zhyan@125.71.107.110) Quit (Ping timeout: 480 seconds)
[9:55] * m0zes (~mozes@beocat.cis.ksu.edu) has joined #ceph
[9:58] <freman> hi! anyone able to provide some tips on a performance issue on a newly installed all-flash ceph cluster? When we do write tests we get 900MB/s write, but read tests are only 200MB/s. all servers are on 10Gbit connections.
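(Aside: a quick way to separate cluster read throughput from client effects is rados bench; a sketch, assuming a scratch pool named testpool:

    # write benchmark objects and keep them for the read passes
    rados bench -p testpool 60 write --no-cleanup
    # sequential and random read passes over those objects
    rados bench -p testpool 60 seq
    rados bench -p testpool 60 rand
    # delete the benchmark objects afterwards
    rados -p testpool cleanup

If rados bench reads fast but an RBD client still reads at ~200MB/s, the bottleneck is often client-side read-ahead/queue depth, e.g. /sys/block/rbd0/queue/read_ahead_kb for kernel RBD.)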
[10:00] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[10:01] * delaf (~delaf@legendary.xserve.fr) has joined #ceph
[10:02] * kefu (~kefu@114.92.97.251) has joined #ceph
[10:03] * cok (~chk@2a02:2350:18:1010:9000:4f10:3a02:d14c) Quit (Quit: Leaving.)
[10:04] * flisky (~Thunderbi@106.39.60.34) Quit (Quit: flisky)
[10:04] * flisky (~Thunderbi@106.39.60.34) has joined #ceph
[10:05] * nsoffer (~nsoffer@bzq-79-182-131-63.red.bezeqint.net) has joined #ceph
[10:06] * cok (~chk@2a02:2350:18:1010:38b2:ee8f:97c0:2fdd) has joined #ceph
[10:07] * oro (~oro@79.120.135.209) Quit (Ping timeout: 480 seconds)
[10:07] * mattch (~mattch@pcw3047.see.ed.ac.uk) has joined #ceph
[10:08] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[10:10] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[10:11] * kefu (~kefu@114.92.97.251) has joined #ceph
[10:15] * capri_oner (~capri@212.218.127.222) has joined #ceph
[10:15] * ghartz_ (~ghartz@AStrasbourg-651-1-211-89.w109-221.abo.wanadoo.fr) has joined #ceph
[10:15] <JarekO_> hi
[10:16] * flisky1 (~Thunderbi@106.39.60.34) has joined #ceph
[10:16] <JarekO_> is it normal when i try to run the cleanup in a cosbench test:
[10:16] <JarekO_> <cls> cls/rgw/cls_rgw.cc:1947: ERROR: rgw_obj_remove(): cls_cxx_remove returned -2
[10:16] <JarekO_> ?
[10:16] * branto (~branto@178-253-136-248.3pp.slovanet.sk) has joined #ceph
[10:16] <JarekO_> this is from ceph-osd.XX.log
[10:17] * mivaho (~quassel@xternal.xs4all.nl) has joined #ceph
[10:17] * haomaiwang (~haomaiwan@218.94.96.134) has joined #ceph
[10:19] * Hau_MI (~HauM1@login.univie.ac.at) has joined #ceph
[10:19] * infinity1 (~brendon@web2.artsopolis.com) has joined #ceph
[10:19] * kaisan (~kai@zaphod.xs4all.nl) has joined #ceph
[10:19] * BranchPr1dictor (branch@predictor.org.pl) has joined #ceph
[10:19] * kingcu_ (~kingcu@kona.ridewithgps.com) has joined #ceph
[10:19] * liiwi_ (liiwi@idle.fi) has joined #ceph
[10:19] * CSa__ (~christian@mintzer.imp.fu-berlin.de) has joined #ceph
[10:19] * frickler_ (~jens@v1.jayr.de) has joined #ceph
[10:19] * Zethrok_ (~martin@95.154.26.34) has joined #ceph
[10:19] * kefu_ (~kefu@114.92.97.251) has joined #ceph
[10:19] * vsi_ (vsi@kapsi.fi) has joined #ceph
[10:20] * kefu (~kefu@114.92.97.251) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * flisky (~Thunderbi@106.39.60.34) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * haomaiwa_ (~haomaiwan@218.94.96.134) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * analbeard (~shw@support.memset.com) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * zhaochao_ (~zhaochao@111.161.77.241) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * Hemanth (~Hemanth@121.244.87.117) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * T1w (~jens@node3.survey-it.dk) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * vikhyat (~vumrao@121.244.87.116) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * overclk (~overclk@121.244.87.117) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * [arx] (~arx@sniff-the.kittypla.net) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * liiwi (liiwi@idle.fi) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * BranchPredictor (branch@predictor.org.pl) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * zimboboyd (~zimboboyd@ip5b43818b.dynamic.kabel-deutschland.de) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * kingcu (~kingcu@kona.ridewithgps.com) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * infinity_ (~brendon@web2.artsopolis.com) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * JFQ (~ghartz@AStrasbourg-651-1-211-89.w109-221.abo.wanadoo.fr) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * vsi (vsi@kapsi.fi) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * Krazypoloc (~Krazypolo@rrcs-67-52-43-151.west.biz.rr.com) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * raso (~raso@deb-multimedia.org) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * capri_on (~capri@212.218.127.222) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * kaisan_ (~kai@zaphod.xs4all.nl) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * mivaho_ (~quassel@xternal.xs4all.nl) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * Zethrok (~martin@95.154.26.34) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * frickler (~jens@v1.jayr.de) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * jamespage (~jamespage@culvain.gromper.net) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * philips (~philips@ec2-54-196-103-51.compute-1.amazonaws.com) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * HauM1 (~HauM1@login.univie.ac.at) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * CSa_ (~christian@mintzer.imp.fu-berlin.de) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * sep (~sep@95.62-50-191.enivest.net) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * nwf (~nwf@00018577.user.oftc.net) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * beardo (~beardo__@beardo.cc.lehigh.edu) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * MaZ- (~maz@00016955.user.oftc.net) Quit (magnet.oftc.net helix.oftc.net)
[10:20] * vsi_ (vsi@kapsi.fi) Quit ()
[10:22] * joao (~joao@249.38.136.95.rev.vodafone.pt) has joined #ceph
[10:22] * ChanServ sets mode +o joao
[10:22] * [arx] (~arx@sniff-the.kittypla.net) has joined #ceph
[10:23] * delaf (~delaf@legendary.xserve.fr) Quit (Quit: leaving)
[10:23] * jamespage (~jamespage@culvain.gromper.net) has joined #ceph
[10:23] * beardo (~beardo__@beardo.cc.lehigh.edu) has joined #ceph
[10:23] * delaf (~delaf@legendary.xserve.fr) has joined #ceph
[10:23] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[10:23] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[10:23] * zhaochao (~zhaochao@111.161.77.241) has joined #ceph
[10:24] * overclk (~overclk@121.244.87.117) has joined #ceph
[10:24] * dgurtner_ (~dgurtner@178.197.231.81) has joined #ceph
[10:24] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[10:24] * sep (~sep@2a04:2740:1:0:52e5:49ff:feeb:32) has joined #ceph
[10:24] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[10:24] * philips (~philips@ec2-54-196-103-51.compute-1.amazonaws.com) has joined #ceph
[10:25] * MaZ- (~maz@00016955.user.oftc.net) has joined #ceph
[10:25] * analbeard (~shw@support.memset.com) has joined #ceph
[10:25] * raso (~raso@deb-multimedia.org) has joined #ceph
[10:25] * gaveen (~gaveen@123.231.121.26) has joined #ceph
[10:25] * dgurtner (~dgurtner@178.197.231.240) Quit (Ping timeout: 480 seconds)
[10:25] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[10:25] * delaf (~delaf@legendary.xserve.fr) Quit ()
[10:26] * delaf (~delaf@legendary.xserve.fr) has joined #ceph
[10:26] * delaf (~delaf@legendary.xserve.fr) Quit ()
[10:28] * delaf (~delaf@legendary.xserve.fr) has joined #ceph
[10:32] * fdmanana__ (~fdmanana@bl13-129-165.dsl.telepac.pt) has joined #ceph
[10:33] * nsoffer (~nsoffer@bzq-79-182-131-63.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[10:38] * delaf (~delaf@legendary.xserve.fr) Quit (Quit: leaving)
[10:39] * delaf (~delaf@legendary.xserve.fr) has joined #ceph
[10:40] * delaf (~delaf@legendary.xserve.fr) Quit ()
[10:40] * delaf (~delaf@legendary.xserve.fr) has joined #ceph
[10:45] * nwf (~nwf@00018577.user.oftc.net) has joined #ceph
[10:46] * boolman (boolman@79.138.78.238) has joined #ceph
[10:47] <boolman> I think my crushmap is bad. my PGs are broken
[10:47] <boolman> stuck unclean since forever, current state active+remapped
[10:48] <boolman> http://pastebin.com/EmGEnW4Q
[10:49] * cok (~chk@2a02:2350:18:1010:38b2:ee8f:97c0:2fdd) has left #ceph
[10:50] <boolman> pg_num and pgp_num are set to 200
[10:52] <boolman> anyone? :)
[10:52] <Mika_c> how many replicas did you set on your pool?
[10:54] <boolman> min_size 1, size 2
[10:55] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[10:56] <boolman> hm, if I create another pool it seems fine
[10:56] <boolman> so its just with the rbd pool
[10:57] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:57] * dugravot6 (~dugravot6@nat-persul-montet.wifi.univ-lorraine.fr) Quit (Quit: Leaving.)
[10:58] <boolman> worked after I recreated it
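(Aside: active+remapped with everything else healthy usually means CRUSH could not find enough OSDs for the pool's size. One way to confirm, sketched here assuming the rbd pool used rule 0:

    # which pgs are stuck, and which osds they currently map to
    ceph pg dump_stuck unclean
    # extract the crush map and test the rule offline
    ceph osd getcrushmap -o crushmap.bin
    crushtool -i crushmap.bin --test --rule 0 --num-rep 2 --show-bad-mappings

--show-bad-mappings prints every input for which the rule returned fewer than num-rep OSDs, which is the signature of a crush map that cannot satisfy size=2.)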
[10:59] * kefu_ (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[10:59] * ifur (~osm@0001f63e.user.oftc.net) has joined #ceph
[10:59] * kefu (~kefu@114.92.97.251) has joined #ceph
[11:03] * MrHome (~jonas@85.115.12.132) has joined #ceph
[11:05] <MrHome> hi, i ran exactly into this issue http://lists.opennebula.org/pipermail/ceph-users-ceph.com/2014-December/045188.html . i ran into it after i removed buckets within a crush map, where a rule was left which was still assigned to a pool. until now the ceph cluster is not responding and i don't know how to recover. luckily this is a demo setup, where no critical data is inside. does anyone know how to recover from this issue?
[11:06] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[11:09] <MrHome> ceph -s on a monitor loops in 2015-06-16 09:09:15.021114 7f5090ef8700 0 -- 10.1.128.6:0/2648337 >> 10.1.128.7:6789/0 pipe(0x2c32a00 sd=7 :0 s=1 pgs=0 cs=0 l=1 c=0x2c2aae0).fault
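(Aside: that repeating "pipe ... fault" line just means the client cannot reach the monitor at 10.1.128.7:6789. A first step is to check monitor quorum directly on each mon host via the admin socket; a sketch, assuming default socket paths:

    # does the local monitor process think it is in quorum?
    ceph daemon mon.$(hostname -s) mon_status
    # equivalent long form:
    ceph --admin-daemon /var/run/ceph/ceph-mon.<id>.asok mon_status

If the daemons are up and in quorum, the problem is elsewhere; if not, the mon logs on 10.1.128.7 are the place to start.)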
[11:11] * kapil_ (~ksharma@2620:113:80c0:5::2222) Quit (Quit: Leaving)
[11:19] * overclk (~overclk@121.244.87.117) Quit (Ping timeout: 480 seconds)
[11:23] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[11:23] * sjm (~sjm@49.32.0.234) Quit (Ping timeout: 480 seconds)
[11:29] * Debesis (~0x@140.217.38.86.mobile.mezon.lt) has joined #ceph
[11:31] * haomaiwang (~haomaiwan@218.94.96.134) Quit (Remote host closed the connection)
[11:45] * jks (~jks@178.155.151.121) has joined #ceph
[11:47] * capri_oner (~capri@212.218.127.222) Quit (Ping timeout: 480 seconds)
[11:48] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[11:49] * kefu (~kefu@114.92.97.251) has joined #ceph
[11:50] * mattch (~mattch@pcw3047.see.ed.ac.uk) Quit (Remote host closed the connection)
[11:52] * mattch (~mattch@pcw3047.see.ed.ac.uk) has joined #ceph
[11:52] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[11:53] * kefu (~kefu@114.92.97.251) has joined #ceph
[11:54] * sjm (~sjm@49.32.0.234) has joined #ceph
[11:55] * frickler_ is now known as frickler
[11:56] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[11:56] * kefu (~kefu@114.92.97.251) has joined #ceph
[11:57] * kapil (~ksharma@2620:113:80c0:5::2222) has joined #ceph
[12:00] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[12:01] * kefu (~kefu@114.92.97.251) has joined #ceph
[12:06] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[12:07] * kefu (~kefu@114.92.97.251) has joined #ceph
[12:09] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[12:10] * kefu (~kefu@114.92.97.251) has joined #ceph
[12:12] * gaveen (~gaveen@123.231.121.26) Quit (Ping timeout: 480 seconds)
[12:15] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[12:17] * kefu (~kefu@114.92.97.251) has joined #ceph
[12:20] * gaveen (~gaveen@175.157.145.65) has joined #ceph
[12:23] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[12:23] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[12:24] * kefu (~kefu@li336-244.members.linode.com) has joined #ceph
[12:25] * Mika_c (~Mk@122.146.93.152) Quit (Quit: Konversation terminated!)
[12:25] * sjm (~sjm@49.32.0.234) Quit (Read error: Connection reset by peer)
[12:26] * overclk (~overclk@121.244.87.124) has joined #ceph
[12:26] * lordjumblebee (~oftc-webi@124-171-65-67.dyn.iinet.net.au) has joined #ceph
[12:26] <lordjumblebee> hi all!
[12:26] * sjm (~sjm@49.32.0.234) has joined #ceph
[12:26] <lordjumblebee> anyone here able to explain the rgw_gc settings and how tweaking them can change the way it does things
[12:27] <lordjumblebee> the documentation doesn't really explain them in much depth, just "change this for this", but what impact they have on performance isn't explained
[12:27] * haomaiwang (~haomaiwan@183.206.171.154) has joined #ceph
[12:29] * sleinen1 (~Adium@macsl.switch.ch) has joined #ceph
[12:30] * Concubidated (~Adium@199.119.131.10) Quit (Ping timeout: 480 seconds)
[12:32] * kefu (~kefu@li336-244.members.linode.com) Quit (Ping timeout: 480 seconds)
[12:32] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[12:33] * haomaiwa_ (~haomaiwan@183.206.163.154) has joined #ceph
[12:35] * kefu (~kefu@114.92.97.251) has joined #ceph
[12:35] * haomaiwang (~haomaiwan@183.206.171.154) Quit (Ping timeout: 480 seconds)
[12:36] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) Quit (Ping timeout: 480 seconds)
[12:37] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[12:38] * kefu (~kefu@114.92.97.251) has joined #ceph
[12:39] * bilco105 is now known as bilco105_
[12:40] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[12:41] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) has joined #ceph
[12:41] * kefu (~kefu@ec2-54-92-37-227.ap-northeast-1.compute.amazonaws.com) has joined #ceph
[12:43] * kefu (~kefu@ec2-54-92-37-227.ap-northeast-1.compute.amazonaws.com) Quit (Max SendQ exceeded)
[12:48] * cok (~chk@2a02:2350:18:1010:6018:e57d:b611:883e) has joined #ceph
[12:52] * kefu (~kefu@114.92.97.251) has joined #ceph
[12:55] * flisky1 (~Thunderbi@106.39.60.34) Quit (Quit: flisky1)
[12:55] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[12:57] * bilco105_ is now known as bilco105
[12:58] * arbrandes (~arbrandes@177.45.221.205) has joined #ceph
[12:59] * kefu (~kefu@114.92.97.251) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[13:01] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[13:02] * sjm (~sjm@49.32.0.234) Quit (Ping timeout: 480 seconds)
[13:03] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[13:10] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[13:10] * fdmanana__ (~fdmanana@bl13-129-165.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[13:19] * KevinPerks (~Adium@2606:a000:80ad:1300:20bd:93ac:b7f8:8712) has joined #ceph
[13:23] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) has joined #ceph
[13:25] * liiwi_ is now known as liiwi
[13:29] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[13:29] * dalgaaf (uid15138@id-15138.charlton.irccloud.com) has joined #ceph
[13:30] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[13:31] * fdmanana__ (~fdmanana@bl13-129-165.dsl.telepac.pt) has joined #ceph
[13:34] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[13:41] * sleinen1 (~Adium@macsl.switch.ch) Quit (Read error: Connection reset by peer)
[13:43] * sleinen (~Adium@macsl.switch.ch) has joined #ceph
[13:45] * fdmanana__ (~fdmanana@bl13-129-165.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[13:46] * sjm (~sjm@49.32.0.242) has joined #ceph
[13:50] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[13:50] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has joined #ceph
[13:51] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has left #ceph
[13:52] * i_m (~ivan.miro@deibp9eh1--blueice4n2.emea.ibm.com) has joined #ceph
[13:52] * zhaochao (~zhaochao@111.161.77.241) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 38.0.1/20150526223604])
[13:53] * bobrik_________ (~bobrik@109.167.249.178) has joined #ceph
[13:57] * overclk (~overclk@121.244.87.124) Quit (Ping timeout: 480 seconds)
[13:58] * shylesh (~shylesh@121.244.87.118) Quit (Ping timeout: 480 seconds)
[13:58] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[13:59] * bobrik________ (~bobrik@109.167.249.178) Quit (Ping timeout: 480 seconds)
[13:59] * dugravot61 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) has joined #ceph
[14:01] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) has joined #ceph
[14:03] * linjan (~linjan@195.110.41.9) has joined #ceph
[14:04] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[14:06] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[14:06] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[14:06] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[14:06] * lucas1 (~Thunderbi@218.76.52.64) Quit ()
[14:10] * shyu (~Shanzhi@119.254.120.66) Quit (Remote host closed the connection)
[14:11] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[14:13] * gaveen (~gaveen@175.157.145.65) Quit (Ping timeout: 480 seconds)
[14:17] * tganguly (~tganguly@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:17] * bobrik_________ (~bobrik@109.167.249.178) Quit (Quit: (null))
[14:20] * bobrik__________ (~bobrik@109.167.249.178) has joined #ceph
[14:21] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[14:23] * ifur (~osm@0001f63e.user.oftc.net) Quit (Quit: leaving)
[14:24] * fdmanana__ (~fdmanana@bl13-129-165.dsl.telepac.pt) has joined #ceph
[14:25] * ifur (~osm@0001f63e.user.oftc.net) has joined #ceph
[14:26] * shaunm (~shaunm@74.215.76.114) has joined #ceph
[14:27] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) has joined #ceph
[14:33] * gaveen (~gaveen@123.231.127.40) has joined #ceph
[14:36] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[14:39] * fam is now known as fam_away
[14:42] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[14:42] * kefu (~kefu@114.92.97.251) has joined #ceph
[14:45] * bene-at-car-repair (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[14:46] * squ (~Thunderbi@46.109.36.167) Quit (Quit: squ)
[14:46] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[14:47] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[14:48] * kefu (~kefu@114.92.97.251) has joined #ceph
[14:50] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[14:50] * amote (~amote@121.244.87.116) Quit (Quit: Leaving)
[14:51] * kefu (~kefu@114.92.97.251) has joined #ceph
[14:55] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Remote host closed the connection)
[14:55] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[14:56] * kefu (~kefu@114.92.97.251) has joined #ceph
[14:58] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[14:58] * kanagaraj (~kanagaraj@nat-pool-bos-t.redhat.com) has joined #ceph
[14:59] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[14:59] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[15:00] * kefu (~kefu@114.92.97.251) has joined #ceph
[15:02] * scuttle|afk is now known as scuttlemonkey
[15:03] * overclk (~overclk@121.244.87.117) has joined #ceph
[15:03] * nardial (~ls@dslb-178-009-182-197.178.009.pools.vodafone-ip.de) Quit (Quit: Leaving)
[15:03] * madkiss (~madkiss@2001:6f8:12c3:f00f:9195:a3e6:fa22:e3bd) Quit (Quit: Leaving.)
[15:04] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[15:05] * kefu (~kefu@114.92.97.251) has joined #ceph
[15:07] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[15:08] * kefu (~kefu@114.92.97.251) has joined #ceph
[15:10] * tupper (~tcole@2001:420:2280:1272:8900:f9b8:3b49:567e) has joined #ceph
[15:13] * xarses (~xarses@12.10.113.130) Quit (Ping timeout: 480 seconds)
[15:13] * jrankin (~jrankin@d53-64-170-236.nap.wideopenwest.com) has joined #ceph
[15:15] * rwheeler (~rwheeler@pool-173-48-214-9.bstnma.fios.verizon.net) has joined #ceph
[15:18] * dgurtner (~dgurtner@178.197.231.243) has joined #ceph
[15:20] * dgurtner_ (~dgurtner@178.197.231.81) Quit (Ping timeout: 480 seconds)
[15:22] * oro (~oro@79.120.135.209) has joined #ceph
[15:23] * bilco105 is now known as bilco105_
[15:27] * dugravot61 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[15:28] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[15:29] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[15:29] * marrusl (~mark@nat-pool-rdu-u.redhat.com) has joined #ceph
[15:29] * kefu (~kefu@114.92.97.251) has joined #ceph
[15:30] * lordjumblebee (~oftc-webi@124-171-65-67.dyn.iinet.net.au) Quit (Quit: Page closed)
[15:30] * marrusl (~mark@nat-pool-rdu-u.redhat.com) Quit ()
[15:30] * marrusl (~mark@nat-pool-rdu-u.redhat.com) has joined #ceph
[15:30] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[15:32] * bilco105_ is now known as bilco105
[15:32] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[15:32] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[15:33] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[15:33] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit ()
[15:33] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[15:34] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[15:34] * kefu (~kefu@114.92.97.251) has joined #ceph
[15:34] * sjm (~sjm@49.32.0.242) has left #ceph
[15:36] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[15:36] * RomeroJnr (~h0m3r@hosd.leaseweb.net) has joined #ceph
[15:37] * ChrisNBl_ (~ChrisNBlu@dhcp-ip-217.dorf.rwth-aachen.de) has joined #ceph
[15:37] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Quit: Goodbye)
[15:37] * ChrisNBl_ is now known as ChrisNBlum
[15:39] <RomeroJnr> Hi, after removing all used pools, Ceph still claims to have 700GB used... how long does it usually take for it to realize that it doesn't have any data on it?
[15:40] <m0zes> does anyone know if the metadata pool can be erasure encoded?
[15:41] <m0zes> for cephfs?
[15:41] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[15:42] * kefu (~kefu@114.92.97.251) has joined #ceph
[15:46] * bobrik__________ (~bobrik@109.167.249.178) Quit (Quit: (null))
[15:49] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) has joined #ceph
[15:49] * Concubidated (~Adium@aptilo1-uspl.us.ericsson.net) has joined #ceph
[15:52] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[15:53] * linjan (~linjan@195.110.41.9) Quit (Ping timeout: 480 seconds)
[15:53] * peem (~piotr@office.forlinux.co.uk) has joined #ceph
[15:56] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[15:57] * kefu (~kefu@114.92.97.251) has joined #ceph
[15:57] <gregsfortytwo> m0zes: definitely not; the metadata pool makes extensive use of omap (leveldb), which EC pools don't support :(
[15:57] <m0zes> gregsfortytwo: fair enough.
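(Aside: the data pool is a different story; in this era an erasure-coded pool could back CephFS data as long as it sat behind a replicated cache tier. A sketch, with hypothetical pool names:

    ceph osd erasure-code-profile set ecprofile k=8 m=4
    ceph osd pool create fs_data_ec 512 512 erasure ecprofile
    ceph osd pool create fs_cache 128
    ceph osd tier add fs_data_ec fs_cache
    ceph osd tier cache-mode fs_cache writeback
    ceph osd tier set-overlay fs_data_ec fs_cache
    ceph osd pool create fs_metadata 128
    ceph fs new cephfs fs_metadata fs_data_ec

The metadata pool itself must stay replicated, for the omap reason gregsfortytwo gives above.)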
[15:59] <m0zes> the reason I was asking was that we recently had a power outage, nodes went down. We fired everything back up, things started recovering (massive writes moving data around), and someone accidentally triggered an epo. xfs corruption all over the place on the metadata osds.
[16:00] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) has joined #ceph
[16:00] * yanzheng1 (~zhyan@182.139.21.245) Quit (Quit: This computer has gone to sleep)
[16:00] <m0zes> was hoping there would be a way to spread the chunks around and make it more likely to recover in that situation. when only 2/3 of the osds came back.
[16:01] <m0zes> I haven't done a lot of work trying to recover some of them yet. I am sure we can get it all back in our current situation. I was just thinking of the future.
[16:02] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[16:02] * linjan (~linjan@195.110.41.9) has joined #ceph
[16:03] * xarses (~xarses@166.175.189.90) has joined #ceph
[16:04] <m0zes> the corruption only happened on the pools that were on our ssds. so, I am sure the SSDs are buffering/caching things they shouldn't have.
[16:05] <gregsfortytwo> :(
[16:06] <flaf> m0zes: or maybe your ssds don't have (good) power loss protection? Which model of SSD is it?
[16:07] <m0zes> flaf: they are terrible. Lite-On ECT-480N9S
[16:08] * dgbaley27 (~matt@c-67-176-93-83.hsd1.co.comcast.net) has joined #ceph
[16:09] * bkopilov (~bkopilov@bzq-79-183-58-206.red.bezeqint.net) has joined #ceph
[16:10] * capri (~capri@212.218.127.222) has joined #ceph
[16:11] * linuxkidd (~linuxkidd@209.163.164.50) has joined #ceph
[16:12] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[16:13] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit ()
[16:13] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[16:13] <flaf> m0zes: When you say "terrible", is it positive or negative?
[16:14] <m0zes> negative. I would get rid of them in a heartbeat if we had the money to replace them with intel DC class SSDs.
[16:15] * linjan (~linjan@195.110.41.9) Quit (Ping timeout: 480 seconds)
[16:16] <flaf> Ok m0zes, I see. Thx for the feedback. ;)
[16:17] * vata (~vata@207.96.182.162) has joined #ceph
[16:18] * bobrik__________ (~bobrik@83.243.64.45) has joined #ceph
[16:19] * delaf (~delaf@legendary.xserve.fr) Quit (Ping timeout: 480 seconds)
[16:22] * kanagaraj (~kanagaraj@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[16:22] * bkopilov (~bkopilov@bzq-79-183-58-206.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[16:22] * calvinx (~calvin@101.100.172.246) Quit (Quit: calvinx)
[16:25] * reed (~reed@net-2-40-202-79.cust.dsl.teletu.it) has joined #ceph
[16:27] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[16:28] * kefu (~kefu@114.92.97.251) has joined #ceph
[16:29] * lovejoy (~lovejoy@213.83.69.6) has joined #ceph
[16:29] * lovejoy (~lovejoy@213.83.69.6) Quit ()
[16:30] * jyoti-ranjan (~ranjanj@idp01webcache2-z.apj.hpecore.net) has joined #ceph
[16:31] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[16:31] * bkopilov (~bkopilov@bzq-79-183-58-206.red.bezeqint.net) has joined #ceph
[16:31] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit (Quit: Leaving.)
[16:32] <jyoti-ranjan> Not able to create a bucket using radosgw client code
[16:32] <jyoti-ranjan> can anyone help me to triage
[16:36] * jskinner (~jskinner@host-95-2-129.infobunker.com) has joined #ceph
[16:37] * TheSov2 (~TheSov@cip-248.trustwave.com) has joined #ceph
[16:43] * xarses_ (~xarses@166.175.59.43) has joined #ceph
[16:44] * xarses_ (~xarses@166.175.59.43) Quit (Remote host closed the connection)
[16:44] * xarses_ (~xarses@166.175.59.43) has joined #ceph
[16:45] * bobrik___________ (~bobrik@83.243.64.45) has joined #ceph
[16:46] * dgurtner_ (~dgurtner@178.197.231.57) has joined #ceph
[16:46] * towen (~towen@c-98-230-203-84.hsd1.nm.comcast.net) has joined #ceph
[16:47] * analbeard (~shw@support.memset.com) Quit (Remote host closed the connection)
[16:47] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[16:48] * dgurtner (~dgurtner@178.197.231.243) Quit (Ping timeout: 480 seconds)
[16:49] * bobrik__________ (~bobrik@83.243.64.45) Quit (Ping timeout: 480 seconds)
[16:50] * xarses (~xarses@166.175.189.90) Quit (Ping timeout: 480 seconds)
[16:50] * analbeard (~shw@support.memset.com) has joined #ceph
[16:50] * yghannam (~yghannam@0001f8aa.user.oftc.net) has joined #ceph
[16:53] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[16:57] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[16:58] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) Quit (Remote host closed the connection)
[16:59] * gaveen (~gaveen@123.231.127.40) Quit (Remote host closed the connection)
[16:59] <peem> Hi. I'm trying to set up apache as a proxy for radosgw secure traffic in hammer. "s3cmd ls" works fine, but "s3cmd rb s3://bucket" does not, returning "S3 error: 405 (MethodNotAllowed):" any hints on what to look for to debug it?
[17:00] * cok (~chk@2a02:2350:18:1010:6018:e57d:b611:883e) Quit (Quit: Leaving.)
[17:01] * kefu (~kefu@114.92.97.251) has joined #ceph
[17:01] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) has joined #ceph
[17:04] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:05] * bitserker1 (~toni@88.87.194.130) has joined #ceph
[17:06] * bitserker (~toni@88.87.194.130) Quit (Read error: Connection reset by peer)
[17:07] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) has joined #ceph
[17:08] * rwheeler (~rwheeler@pool-173-48-214-9.bstnma.fios.verizon.net) Quit (Quit: Leaving)
[17:10] * yanzheng (~zhyan@182.139.21.245) has joined #ceph
[17:14] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) Quit (Quit: Leaving)
[17:15] * reed (~reed@net-2-40-202-79.cust.dsl.teletu.it) Quit (Quit: Ex-Chat)
[17:16] <tuxcrafter> i bought a bunch of ssds that i want to use for journaling in ceph
[17:16] <tuxcrafter> i saw some benchmark options to compare the performance
[17:17] <tuxcrafter> but is there some recommended way of benchmarking?
[17:21] <boolman> fio ?
[17:22] <boolman> https://wiki.ceph.com/Guides/How_To/Benchmark_Ceph_Cluster_Performance
[17:22] <m0zes> so, I've got a few pgs in an EC pool (k=8, m=4) that are stuck down+remapped+peering. 'pg 25.7f1 is stuck inactive for 588999.255200, current state down+remapped+peering, last acting [210,234,99,123,183,2147483647,154,339,38,399,216,2147483647]'
[17:22] <m0zes> it looks like there are only two chunks missing. since there are more than 8 chunks, I would have assumed it would be able to rebuild the pg without the missing osd.
[17:23] <m0zes> "peering is blocked due to down osds"
[17:23] <m0zes> I would prefer not to mark the osd as lost, as I'm not sure it is, yet.
[17:24] <m0zes> and if only 1 osd is missing, why are 2 chunks left unmapped?
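(Aside: 2147483647 is 2^31-1, CRUSH's "no OSD" placeholder (ITEM_NONE); each occurrence in the acting set is a shard that currently has no OSD assigned, not a literal OSD id, so two NONE entries need not mean two lost OSDs — it can simply mean CRUSH currently has nowhere to put those two shards. The pg itself will say what it is waiting for:

    ceph pg 25.7f1 query
    # look at "recovery_state" near the bottom: "peering_blocked_by" and
    # "down_osds_we_would_probe" name the exact osds peering wants back

)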
[17:26] * shohn (~shohn@88.128.80.231) has joined #ceph
[17:26] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[17:27] <peem> Hi. I'm trying to set up apache as a proxy for radosgw secure traffic in hammer. "s3cmd ls" works fine, but "s3cmd rb s3://bucket" does not, returning "S3 error: 405 (MethodNotAllowed):" any hints on what to look for to debug it?
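(Aside: one common cause of a 405 here, offered as a guess: s3cmd defaults to virtual-host-style URLs (bucket.host), and if the proxy rewrites the Host header or rgw has no "rgw dns name" set, the DELETE lands on the service root instead of the bucket, which answers 405. A sketch of the two settings to check, with a hypothetical hostname:

    # ceph.conf on the gateway
    [client.radosgw.gateway]
    rgw dns name = gw.example.com

    # apache vhost doing the proxying
    ProxyPreserveHost On

A path-style request straight at the gateway (e.g. curl -i -X DELETE http://gw.example.com/bucket) will return 403 without auth, but anything other than 405 at least shows the method is getting through.)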
[17:29] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[17:31] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:31] * Hemanth (~Hemanth@121.244.87.117) Quit (Ping timeout: 480 seconds)
[17:31] * kefu (~kefu@114.92.97.251) has joined #ceph
[17:34] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[17:35] * madkiss (~madkiss@2001:6f8:12c3:f00f:bc6e:3384:e615:862b) has joined #ceph
[17:38] * delaf (~delaf@legendary.xserve.fr) has joined #ceph
[17:41] * oro (~oro@79.120.135.209) Quit (Ping timeout: 480 seconds)
[17:47] * ChrisNBlum (~ChrisNBlu@dhcp-ip-217.dorf.rwth-aachen.de) Quit (Quit: ZNC - http://znc.in)
[17:49] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[17:49] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[17:58] * kefu (~kefu@114.92.97.251) Quit (Ping timeout: 480 seconds)
[17:59] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[18:02] * moore (~moore@64.202.160.88) has joined #ceph
[18:03] * moore_ (~moore@64.202.160.88) has joined #ceph
[18:03] * moore (~moore@64.202.160.88) Quit (Read error: Connection reset by peer)
[18:06] <tuxcrafter> boolman: thx
[18:06] <tuxcrafter> i know i saw some specific benchmarks for ssds
[18:06] <tuxcrafter> i will try to find them
[18:08] * jrankin (~jrankin@d53-64-170-236.nap.wideopenwest.com) Quit (Quit: Leaving)
[18:10] * xarses (~xarses@172.56.12.120) has joined #ceph
[18:11] * xarses_ (~xarses@166.175.59.43) Quit (Ping timeout: 480 seconds)
[18:12] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[18:15] * kawa2014 (~kawa@212.110.41.244) has joined #ceph
[18:15] * Concubidated (~Adium@aptilo1-uspl.us.ericsson.net) Quit (Ping timeout: 480 seconds)
[18:16] * kefu (~kefu@114.92.97.251) has joined #ceph
[18:17] <tuxcrafter> http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
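(Aside: the test in that post boils down to single-threaded O_DSYNC 4k writes against the raw device, which is exactly the journal write pattern. A sketch with fio, assuming the SSD is /dev/sdX; this destroys data on the device:

    # DESTRUCTIVE: writes directly to the raw device
    fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=journal-test

Journal-worthy SSDs sustain thousands of these synchronous 4k IOPS; many consumer drives collapse to a few hundred.)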
[18:17] * madkiss (~madkiss@2001:6f8:12c3:f00f:bc6e:3384:e615:862b) Quit (Quit: Leaving.)
[18:17] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[18:17] * ChrisNBlum (~ChrisNBlu@dhcp-ip-217.dorf.rwth-aachen.de) has joined #ceph
[18:18] * dyasny (~dyasny@198.251.58.23) has joined #ceph
[18:19] * sjm (~sjm@183.87.82.210) has joined #ceph
[18:20] * linjan (~linjan@213.8.240.146) has joined #ceph
[18:21] * peem (~piotr@office.forlinux.co.uk) Quit (Quit: Konversation terminated!)
[18:22] * madkiss (~madkiss@2001:6f8:12c3:f00f:adb6:92d5:ccee:4dc4) has joined #ceph
[18:23] * naga1 (~oftc-webi@idp01webcache6-z.apj.hpecore.net) has joined #ceph
[18:23] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[18:24] * kefu (~kefu@114.92.97.251) has joined #ceph
[18:25] <naga1> i configured radosgw with my ceph cluster, when i do swift --debug -A http://10.1.195.32/auth/1.0 -U testuser:swift -K 'UnKUdpMv5l4VBAv1+EFLYWUm46kxlGwLyfQUcI\/M' list, it is throwing ClientException: Auth GET failed: http://10.1.195.32/auth/1.0 404 Not Found Account not found
[18:25] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) Quit (Remote host closed the connection)
[18:25] <naga1> can somebody help me please?
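(Aside: a 404 "Account not found" from /auth/1.0 usually means rgw does not recognize the swift subuser/key pair as given. Two things worth checking, sketched with the uid from above:

    # is testuser:swift actually listed, with a swift key?
    radosgw-admin user info --uid=testuser
    # if not, (re)create the subuser and generate a swift secret
    radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full
    radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret

Also note the backslash in the pasted key ('...cI\/M'): radosgw-admin's JSON output escapes "/" as "\/", and passing the backslash through literally will fail auth.)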
[18:26] * shohn (~shohn@88.128.80.231) Quit (Quit: Leaving.)
[18:29] * jyoti-ranjan (~ranjanj@idp01webcache2-z.apj.hpecore.net) Quit (Read error: Connection reset by peer)
[18:31] * yanzheng (~zhyan@182.139.21.245) Quit (Quit: This computer has gone to sleep)
[18:33] * i_m (~ivan.miro@deibp9eh1--blueice4n2.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[18:34] * zenpac (~zenpac3@66.55.33.66) Quit (Ping timeout: 480 seconds)
[18:34] * zenpac (~zenpac3@66.55.33.66) has joined #ceph
[18:34] * xarses_ (~xarses@166.175.59.43) has joined #ceph
[18:35] * Hemanth (~Hemanth@117.213.179.217) has joined #ceph
[18:37] * vbellur (~vijay@pax.operations.onair.aero) has joined #ceph
[18:39] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[18:40] * branto (~branto@178-253-136-248.3pp.slovanet.sk) has left #ceph
[18:41] * xarses (~xarses@172.56.12.120) Quit (Ping timeout: 480 seconds)
[18:42] * sleinen (~Adium@macsl.switch.ch) Quit (Ping timeout: 480 seconds)
[18:42] * scuttlemonkey is now known as scuttle|afk
[18:44] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[18:45] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[18:45] * Concubidated (~Adium@aptilo1-uspl.us.ericsson.net) has joined #ceph
[18:54] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[18:57] * bjornar_ (~bjornar@ti0099a430-1131.bb.online.no) has joined #ceph
[18:57] * scuttle|afk is now known as scuttlemonkey
[18:59] * ngoswami (~ngoswami@1.39.15.158) has joined #ceph
[19:02] * dugravot6 (~dugravot6@2a01:e35:8bbf:4060:90e8:7025:df13:1773) has joined #ceph
[19:03] * dugravot6 (~dugravot6@2a01:e35:8bbf:4060:90e8:7025:df13:1773) Quit ()
[19:04] * MrHome (~jonas@85.115.12.132) Quit (Quit: This computer has gone to sleep)
[19:05] * kefu (~kefu@114.92.97.251) Quit (Quit: Textual IRC Client: www.textualapp.com)
[19:09] * Jase (~Tarazed@46.36.36.127) has joined #ceph
[19:11] * kawa2014 (~kawa@212.110.41.244) Quit (Quit: Leaving)
[19:12] * burley (~khemicals@cpe-98-28-239-78.cinci.res.rr.com) Quit (Ping timeout: 480 seconds)
[19:12] * jrankin (~jrankin@d53-64-170-236.nap.wideopenwest.com) has joined #ceph
[19:13] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[19:14] * sleinen1 (~Adium@2001:620:0:82::105) has joined #ceph
[19:15] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) Quit (Quit: Leaving)
[19:17] * yguang11 (~yguang11@2001:4998:effd:600:3564:d366:2d8e:c224) has joined #ceph
[19:21] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:23] * dyasny (~dyasny@198.251.58.23) Quit (Ping timeout: 480 seconds)
[19:24] * burley (~khemicals@cpe-98-28-239-78.cinci.res.rr.com) has joined #ceph
[19:25] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[19:25] * bjornar_ (~bjornar@ti0099a430-1131.bb.online.no) Quit (Ping timeout: 480 seconds)
[19:28] * sleinen1 (~Adium@2001:620:0:82::105) Quit (Ping timeout: 480 seconds)
[19:30] * Hemanth (~Hemanth@117.213.179.217) Quit (Ping timeout: 480 seconds)
[19:32] * Hemanth (~Hemanth@117.192.228.38) has joined #ceph
[19:32] * t4nk085 (~oftc-webi@64-7-156-32.border8-dynamic.dsl.sentex.ca) has joined #ceph
[19:33] <t4nk085> Hello, we're repurposing 4 SuperMicro Servers - Supermicro (2U) 8 SATA Bay with 6025B-URB X7DBU 2x Intel Xeon Quad Core and 32GB RAM each. I'll be adding a dual Infiniband QDR MHQH29B-XTR card to each of the 4 servers so they can talk to each other and to our two Proxmox Hosts. Ceph will be used as virtual storage to run our VMs. I'd like to add 1 SSD drive to each of these servers but I don't want to lose a SATA bay for it.
[19:33] <t4nk085> I'm thinking of using a PCIe SSD instead but the prices I've seen for ones recommended to work with Ceph are high for us. Are there any tested, cheaper (~$500 range) PCIe SSD cards this community thinks would work for us?
[19:33] <t4nk085> The other option for me is to put our journaling on a SATA disk. Is this advisable or should we use up a SATA bay for an SSD drive and when we are ready spend the money on purchasing a PCIe SSD to reclaim that SATA bay for an OSD? Any advice you can provide for me would be greatly appreciated.
[19:34] * dyasny (~dyasny@198.251.58.23) has joined #ceph
[19:35] <Lyncos> t4nk085 cheap PCIe SSDs are not that reliable... and since you will put many journals on one, it is a bigger risk
[19:35] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[19:36] <Lyncos> We tried some cheap OCZ PCIe SSDs .. it works.. but the monitoring options (especially for wear) are kinda weak
[19:36] <Lyncos> we're currently using dual Micron P320h but it's not cheap...
[19:36] <Lyncos> if you have RAID controllers with cache .. you can enable the cache and not use ssd at all.. we got good results
[19:37] <Lyncos> we did one RAID-0 per drive
[19:37] * jbautista- (~wushudoin@transit-86-181-132-209.redhat.com) has joined #ceph
[19:37] * Jase (~Tarazed@7R2AABRE0.tor-irc.dnsbl.oftc.net) Quit ()
[19:38] * puffy (~puffy@216.207.42.129) has joined #ceph
[19:42] * vbellur (~vijay@pax.operations.onair.aero) Quit (Read error: Connection timed out)
[19:42] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Quit: Exeunt dneary)
[19:43] * PeterRabbit (~QuantumBe@cloud.tor.ninja) has joined #ceph
[19:44] * wushudoin_ (~wushudoin@38.140.108.2) Quit (Ping timeout: 480 seconds)
[19:45] * jbautista- (~wushudoin@transit-86-181-132-209.redhat.com) Quit (Ping timeout: 480 seconds)
[19:46] * jbautista- (~wushudoin@38.140.108.2) has joined #ceph
[19:46] * dgurtner (~dgurtner@178.197.231.156) has joined #ceph
[19:46] * vbellur (~vijay@pax.operations.onair.aero) has joined #ceph
[19:47] <monsted> t4nk085: the Intel 750 series seems like a good balance of price and quality
[19:48] * dgurtner_ (~dgurtner@178.197.231.57) Quit (Ping timeout: 480 seconds)
[19:54] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[19:56] <t4nk085> Lyncos: Thank you for the reply. Our servers will have 2 of the 8 SATA bays in Raid 1 for the O/S...the remaining 6 bays will have 4 used as OSD, 1 spare and 1 for journaling. My thought was these 6 bays (not used for the O/S) would be JBOD. At least that's what I was thinking of doing. Are you saying that if my Raid controller for the 6 OSD disks has cache on it we might not need to use a SATA disk/bay for journalling?
[19:56] <t4nk085> monsted: Thank you for the suggestion. I will look at these Intel 750 series for sure.
[19:57] <Lyncos> t4nk085: I would put journalling on same drives
[19:57] <Lyncos> t4nk085: we also tried the Intel 750 and it is a good choice.. but we prefer the microns :-)
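The journal placement being debated above is fixed at OSD creation time with filestore. A minimal sketch using Hammer-era ceph-deploy, with hypothetical host and device names (node1, /dev/sdb as the data disk, /dev/sde as a shared journal SSD):

    # journal on a separate SSD; omit the third field to colocate the
    # journal on the data disk, as Lyncos suggests above
    ceph-deploy osd prepare node1:/dev/sdb:/dev/sde
    ceph-deploy osd activate node1:/dev/sdb1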
[20:01] * bjornar_ (~bjornar@ti0099a430-1131.bb.online.no) has joined #ceph
[20:05] * vbellur (~vijay@pax.operations.onair.aero) Quit (Read error: Connection reset by peer)
[20:08] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[20:08] * thomnico (~thomnico@cro38-2-88-180-16-18.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[20:12] * sjm (~sjm@183.87.82.210) Quit (Quit: Leaving.)
[20:12] * nsoffer (~nsoffer@bzq-79-180-80-9.red.bezeqint.net) has joined #ceph
[20:13] * sleinen1 (~Adium@vpn-ho-b-132.switch.ch) has joined #ceph
[20:13] * PeterRabbit (~QuantumBe@5NZAADXTT.tor-irc.dnsbl.oftc.net) Quit ()
[20:13] * pico1 (~Jase@tor-exit1.arbitrary.ch) has joined #ceph
[20:14] * fdmanana__ (~fdmanana@bl13-129-165.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[20:16] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[20:17] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[20:17] * naga1 (~oftc-webi@idp01webcache6-z.apj.hpecore.net) Quit (Remote host closed the connection)
[20:19] * fdmanana__ (~fdmanana@bl13-129-165.dsl.telepac.pt) has joined #ceph
[20:23] * linuxkidd (~linuxkidd@209.163.164.50) Quit (Quit: Leaving)
[20:26] * ivotron (uid25461@id-25461.brockwell.irccloud.com) Quit (Quit: Connection closed for inactivity)
[20:28] * midnightrunner (~midnightr@216.113.160.71) has joined #ceph
[20:30] * MrBy2 (~jonas@HSI-KBW-46-223-128-56.hsi.kabel-badenwuerttemberg.de) has joined #ceph
[20:32] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[20:32] * sleinen2 (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[20:32] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Read error: Connection reset by peer)
[20:33] * sjm (~sjm@183.87.82.210) has joined #ceph
[20:33] <scheuk> anyone familiar with ceph-disk and using dmcrypt?
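No one picks this one up in the log. For reference, ceph-disk of this era accepts a --dmcrypt flag that encrypts the data and journal partitions and stores the keys in a key directory. A sketch, device name assumed:

    # prepare an encrypted OSD; keys land under /etc/ceph/dmcrypt-keys by default
    ceph-disk prepare --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys /dev/sdb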
[20:34] * sleinen (~Adium@2001:620:0:82::105) has joined #ceph
[20:34] * MrBy2 (~jonas@HSI-KBW-46-223-128-56.hsi.kabel-badenwuerttemberg.de) Quit (Read error: Connection reset by peer)
[20:35] * MrBy2 (~jonas@HSI-KBW-46-223-128-56.hsi.kabel-badenwuerttemberg.de) has joined #ceph
[20:36] * MrBy2 (~jonas@HSI-KBW-46-223-128-56.hsi.kabel-badenwuerttemberg.de) Quit ()
[20:38] * sleinen1 (~Adium@vpn-ho-b-132.switch.ch) Quit (Ping timeout: 480 seconds)
[20:39] * midnightrunner (~midnightr@216.113.160.71) Quit (Remote host closed the connection)
[20:40] * LeaChim (~LeaChim@host86-132-233-125.range86-132.btcentralplus.com) has joined #ceph
[20:40] * sleinen2 (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[20:40] * midnightrunner (~midnightr@216.113.160.71) has joined #ceph
[20:43] * pico1 (~Jase@9S0AAA7L0.tor-irc.dnsbl.oftc.net) Quit ()
[20:43] * Pirate (~Grum@torrouter.ml-ext.ucar.edu) has joined #ceph
[20:44] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[20:50] <t4nk085> Lyncos: Ahh, now I see. So use 5 of my 6 SATA bays for OSD with journalling on these 5 SATA disks (I'll leave one for spare). Thank you.
[20:50] <Lyncos> yeah .. why do you need 1 spare?
[20:51] <burley> leave your spares on a shelf somewhere, not in a chassis :)
[20:51] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[20:52] <m0zes> maybe a spare slot, in case they need to purchase a real journal?
[20:55] * ngoswami (~ngoswami@1.39.15.158) Quit (Quit: Leaving)
[20:58] * dgbaley27 (~matt@c-67-176-93-83.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[21:07] * Hemanth (~Hemanth@117.192.228.38) Quit (Quit: Leaving)
[21:13] * Pirate (~Grum@5NZAADXW3.tor-irc.dnsbl.oftc.net) Quit ()
[21:21] * ibravo (~ibravo@72.198.142.104) has joined #ceph
[21:26] <TheSov2> why is it recommended to not use raid cards with ceph?
[21:28] * evilrob00 (~evilrob00@cpe-72-179-3-209.austin.res.rr.com) has joined #ceph
[21:28] * evilrob00 just upgraded to hammer.
[21:30] <evilrob00> I tried to `ceph-deploy rgw create node1` and get the error about the missing bootstrap-rgw keyring. It seems that civetweb is going to be more maintainable than fcgi going forward. how do I get my giant-upgraded-to-hammer cluster working with ceph-deploy for rgw nodes?
[21:33] <evilrob00> is it the same key as the ceph.client.radosgw.keyring that was created earlier?
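It is not the same key: client.radosgw.keyring authenticates the gateway itself, while ceph-deploy wants a separate bootstrap key that clusters installed before Hammer never created. The commonly cited fix is to create it by hand on a monitor node, then let ceph-deploy gather it (paths per the stock layout; node name assumed):

    # create the missing bootstrap-rgw key with the bootstrap profile
    ceph auth get-or-create client.bootstrap-rgw mon 'allow profile bootstrap-rgw' \
        -o /var/lib/ceph/bootstrap-rgw/ceph.keyring
    # re-collect keys so ceph-deploy can use it
    ceph-deploy gatherkeys node1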
[21:34] * ira (~ira@208.217.184.210) has joined #ceph
[21:37] * Hemanth (~Hemanth@117.192.228.38) has joined #ceph
[21:38] * sankarshan (~sankarsha@183.87.39.242) Quit (Quit: Are you sure you want to quit this channel (Cancel/Ok) ?)
[21:53] * mgolub (~Mikolaj@91.225.200.116) has joined #ceph
[21:59] * Larsen (~andreas@larsen.pl) Quit (Remote host closed the connection)
[21:59] <rlrevell> what does libceph: error connecting to $MON:IP:6789 error -101 mean?
[21:59] <rlrevell> i get it when i try to reboot the whole cluster, nodes hang at shutdown trying to unmount their RBD devices
[22:00] <rlrevell> (the ceph nodes themselves mount RBD devices to back up their root filesystems)
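Kernel clients report negated errno values, and 101 is ENETUNREACH ("Network is unreachable"): by the time the unmount runs at shutdown, networking has already been torn down, so libceph can no longer reach the monitor. A quick way to decode such codes:

    # kernel return codes are -errno; look the number up
    grep -w 101 /usr/include/asm-generic/errno.h
    # -> #define ENETUNREACH 101 /* Network is unreachable */

The usual cure is ordering: unmount the filesystems and rbd unmap the devices before the network goes down.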
[22:07] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[22:13] * mgolub (~Mikolaj@91.225.200.116) Quit (Quit: away)
[22:18] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[22:23] * marrusl (~mark@nat-pool-rdu-u.redhat.com) Quit (Quit: sync && halt)
[22:24] * marrusl (~mark@nat-pool-rdu-u.redhat.com) has joined #ceph
[22:24] * ibravo (~ibravo@72.198.142.104) Quit (Quit: Leaving)
[22:25] * Sysadmin88 (~IceChat77@2.124.164.69) has joined #ceph
[22:27] * rlrevell (~leer@vbo1.inmotionhosting.com) Quit (Quit: Leaving.)
[22:37] * nwf (~nwf@00018577.user.oftc.net) Quit (Max SendQ exceeded)
[22:39] * Destreyf (~quassel@email.newagecomputers.info) has joined #ceph
[22:42] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) has left #ceph
[22:43] <Destreyf> Question for you genius folk, when doing a SSD journal, if i have 4 OSD's in a machine do i need 4 SSD's to journal to, or can i use just one and symlink the journal?
[22:46] * Destreyf_ (~quassel@host-24-49-108-79.beyondbb.com) has joined #ceph
[22:46] <doppelgrau> Destreyf: you can use one SSD (if it's fast enough and durable enough).
[22:46] <m0zes> Destreyf: 4 separate journal partitions on the ssd. one for each osd.
[22:47] <doppelgrau> Destreyf: downside is, if you lose the SSD, you lose all 4 OSDs => larger failure (but with default crush rule that means "only" more data movement for repair)
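Carving one SSD into per-OSD journal partitions is something ceph-disk/ceph-deploy will do automatically when handed the whole journal device, but it can also be done by hand. A sketch with sgdisk, sizes and device name assumed (stock filestore journals of this era defaulted to 5 GB):

    # four 10 GiB journal partitions on one SSD, one per OSD
    for i in 1 2 3 4; do
        sgdisk --new=$i:0:+10G --change-name=$i:"ceph journal" /dev/sde
    done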
[22:49] <TheSov2> would you guys recommend size 3 for production data?
[22:49] <TheSov2> or bigger
[22:49] <TheSov2> ?
[22:51] <m0zes> depends on the failure domain. I use size 2 on data I don't care about. size 4 (min_size 3) on data I do care about. and ec 8+4 for data I cannot loose.
[22:51] <m0zes> s/oo/o/
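In command form, the replica counts m0zes is describing are per-pool settings (pool name assumed):

    # three copies, keep serving I/O as long as two survive
    ceph osd pool set rbd size 3
    ceph osd pool set rbd min_size 2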
[22:51] * bene-at-car-repair (~ben@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[22:51] * sleinen (~Adium@2001:620:0:82::105) Quit (Ping timeout: 480 seconds)
[22:52] * nwf (~nwf@00018577.user.oftc.net) has joined #ceph
[22:52] * nardial (~ls@dslb-178-009-182-197.178.009.pools.vodafone-ip.de) has joined #ceph
[22:53] * Destreyf (~quassel@email.newagecomputers.info) Quit (Ping timeout: 480 seconds)
[22:53] * yguang11 (~yguang11@2001:4998:effd:600:3564:d366:2d8e:c224) Quit (Remote host closed the connection)
[22:58] * ivotron (uid25461@id-25461.brockwell.irccloud.com) has joined #ceph
[23:01] * yguang11 (~yguang11@2001:4998:effd:600:8080:3c3f:c4f8:d6d1) has joined #ceph
[23:01] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) has joined #ceph
[23:01] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) Quit ()
[23:02] * notarima (~drdanick@193.33.216.23) has joined #ceph
[23:03] <Destreyf_> doppelgrau, m0zes thank you for your feedback, this is exactly what i needed to hear, we'll have 3 machines with 4 osd's each so we should be good
[23:04] <Destreyf_> Other question i have is how much logging is done by ceph by default? i'd rather not have 2 SSD's + 4 OSD drives in each machine (budget deployment) - would a HDD be okay for the OS?
[23:05] * tupper (~tcole@2001:420:2280:1272:8900:f9b8:3b49:567e) Quit (Ping timeout: 480 seconds)
[23:05] <Sysadmin88> 2 OSD per SSD, why not get an extra HDD in that SATA port instead of an SSD?
[23:07] <doppelgrau> Sysadmin88: depends on your workload what makes more sense for you
[23:07] <Sysadmin88> on a low budget probably struggling for capacity
[23:07] <doppelgrau> Sysadmin88: capacity or IO/s
[23:08] <doppelgrau> Sysadmin88: with the journal on the same disk a HDD manages less than 100 IO/s
[23:08] <doppelgrau> (usually)
[23:09] <Sysadmin88> indeed, depends what his workload is :) is he driving massive IOPS on a budget 3 node system... or is he needing capacity. it's his choice.
[23:09] <Sysadmin88> hopefully nothing too mission critical with low budget...
[23:11] * bjornar_ (~bjornar@ti0099a430-1131.bb.online.no) Quit (Ping timeout: 480 seconds)
[23:12] * jschmid (~jxs@ip9234f579.dynamic.kabel-deutschland.de) has joined #ceph
[23:14] * jwilkins (~jwilkins@2601:9:4580:f4c:ea2a:eaff:fe08:3f1d) Quit (Remote host closed the connection)
[23:15] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:16] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[23:16] <Destreyf_> Sysadmin88: It -is- and isn't mission critical
[23:16] <Destreyf_> using proxmox to do VM's
[23:16] <Destreyf_> however its only going to be 4-5 VM's
[23:17] <Destreyf_> 2 of which are "critical"
[23:17] <Destreyf_> but if we have performance degradation it won't be a terrible hit
[23:17] <Destreyf_> just has to be better than our "jumpy" speeds. we have a DRBD NFS share and it goes between 2MB/s and 90MB/s
[23:18] <Destreyf_> (I didn't set the DRBD/NFS share up, so it could be misconfigured but meh)
[23:18] <TheSov2> what exactly does erasure coding in ceph do?
[23:18] <TheSov2> i mean it doesn't have a raid so to speak so how is it doing EC
[23:19] * jwilkins (~jwilkins@c-67-180-123-48.hsd1.ca.comcast.net) has joined #ceph
[23:21] * ichavero (~ichavero@189.231.108.162) has joined #ceph
[23:22] <Destreyf_> TheSov2: there's a diagram here
[23:22] <Destreyf_> http://ceph.com/docs/master/rados/operations/erasure-code/
[23:22] <Destreyf_> that might explain more
[23:22] * georgem (~Adium@184.151.178.68) has joined #ceph
[23:22] * georgem (~Adium@184.151.178.68) Quit ()
[23:22] <ichavero> hello i have a problem mounting my cluster, i get this error in dmesg: libceph: auth method 'x' error -1 can somebody give me a hint?
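ichavero's question goes unanswered here. Error -1 is EPERM, which from the kernel client almost always means cephx ('x') rejected the credentials - a missing or wrong secret rather than a transport problem. A mount sketch with an explicit user and secret file (address and paths assumed):

    # kernel CephFS mount authenticating as client.admin
    mount -t ceph 192.168.1.10:6789:/ /mnt/ceph \
        -o name=admin,secretfile=/etc/ceph/admin.secret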
[23:22] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[23:23] <TheSov2> ok i see you are using more space to setup erasure codes
[23:23] <Destreyf_> Basically it works under the same principle as RAID 5
[23:23] <Destreyf_> split + bitmask
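Concretely, rather than striping parity across whole disks as RAID 5 does, ceph computes the coding per object. A worked example for the 8+4 profile discussed below, object size assumed:

    # how a 4 MiB object lands in an 8+4 erasure-coded pool
    obj=$((4 * 1024 * 1024)); k=8; m=4
    chunk=$(( obj / k ))   # 524288 bytes (512 KiB) per chunk
    echo "$((k + m)) chunks of $chunk bytes, each on a different OSD"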
[23:25] * Hemanth (~Hemanth@117.192.228.38) Quit (Ping timeout: 480 seconds)
[23:25] * jrankin (~jrankin@d53-64-170-236.nap.wideopenwest.com) Quit (Quit: Leaving)
[23:28] <TheSov2> how much is the EC overhead?
[23:28] <TheSov2> 25 percent?
[23:28] <gleam> depends on how you configure it
[23:29] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[23:30] * rendar (~I@host180-128-dynamic.61-82-r.retail.telecomitalia.it) has joined #ceph
[23:30] * xarses_ (~xarses@166.175.59.43) Quit (Remote host closed the connection)
[23:32] * notarima (~drdanick@8Q4AABLOL.tor-irc.dnsbl.oftc.net) Quit ()
[23:32] * hifi1 (~Uniju@198.50.231.22) has joined #ceph
[23:33] * rwheeler (~rwheeler@pool-173-48-214-9.bstnma.fios.verizon.net) has joined #ceph
[23:34] * vbellur (~vijay@107-1-123-195-ip-static.hfc.comcastbusiness.net) has joined #ceph
[23:35] * Lyncos (~lyncos@208.71.184.41) has left #ceph
[23:36] <m0zes> k+m (k is how many chunks to split the block into, m is the number of checksum blocks to create. more or less)
[23:37] * m0zes does 8+4 for his EC pool.
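A sketch of how such a profile and pool are created (profile/pool names and PG counts assumed; in Hammer the failure-domain option is still spelled ruleset-failure-domain):

    # define an 8+4 profile, then build an erasure-coded pool on it
    ceph osd erasure-code-profile set ec84 k=8 m=4 ruleset-failure-domain=host
    ceph osd pool create ecpool 128 128 erasure ec84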
[23:37] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[23:46] <Destreyf_> m0zes: that's evil :P
[23:48] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[23:49] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) has joined #ceph
[23:51] * jskinner (~jskinner@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[23:54] * rlrevell (~leer@184.52.129.221) has joined #ceph
[23:57] * t4nk085 (~oftc-webi@64-7-156-32.border8-dynamic.dsl.sentex.ca) Quit (Remote host closed the connection)
[23:58] * vata (~vata@207.96.182.162) Quit (Quit: Leaving.)
[23:59] <TheSov2> m0zes, 8+4 is essentially 1/3rd overhead correct?
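The arithmetic checks out: every 8 data chunks are stored as 12, so parity consumes m/(k+m) = 4/12, one third of the raw capacity, and each usable byte costs (k+m)/k = 1.5 raw bytes - versus 3 for size-3 replication. A one-liner to verify:

    k=8; m=4
    awk -v k=$k -v m=$m 'BEGIN { printf "parity share: %.0f%%, raw per usable: %.2fx\n", 100*m/(k+m), (k+m)/k }'
    # -> parity share: 33%, raw per usable: 1.50x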

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.