#ceph IRC Log

Index

IRC Log for 2015-06-17

Timestamps are in GMT/BST.

[0:00] <gleam> 1GB of data takes 1.5GB to store on disk
[0:00] <gleam> whether you call that 50% overhead or 33% overhead..
[0:00] * Concubidated (~Adium@aptilo1-uspl.us.ericsson.net) Quit (Quit: Leaving.)
[0:01] * sjm (~sjm@183.87.82.210) Quit (Quit: Leaving.)
[0:01] * linjan (~linjan@213.8.240.146) Quit (Ping timeout: 480 seconds)
[0:01] * marrusl (~mark@nat-pool-rdu-u.redhat.com) Quit (Quit: sync && halt)
[0:01] <TheSov2> well 66 percent is data
[0:01] <TheSov2> and 33 percent is overhead
[0:02] <TheSov2> thats how i look at it
[0:02] * hifi1 (~Uniju@9S0AAA7VY.tor-irc.dnsbl.oftc.net) Quit ()
[0:05] * yguang11 (~yguang11@2001:4998:effd:600:8080:3c3f:c4f8:d6d1) Quit (Remote host closed the connection)
[0:05] <TheSov2> EC is prod ready?
[0:05] <TheSov2> and i assume there is performance loss?
[0:06] * murmur1 (~datagutt@8Q4AABLQY.tor-irc.dnsbl.oftc.net) has joined #ceph
[0:11] <gleam> 33% of your storage capacity is lost to overhead, yeah
[0:11] <gleam> and yeah
[0:11] <gleam> on writes and on recovery
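The two percentages gleam and TheSov2 are juggling come from the same layout, just measured against different bases. A quick sanity check of the arithmetic, assuming a hypothetical erasure-code profile with k=4 data chunks and m=2 coding chunks (any profile where raw usage is 1.5x the data gives the same numbers):

    k=4; m=2
    # overhead measured against the data you store: m/k
    echo "overhead vs data stored:  $(( 100 * m / k ))%"        # 50%
    # overhead measured against the raw capacity consumed: m/(k+m)
    echo "overhead vs raw capacity: $(( 100 * m / (k + m) ))%"  # 33%
    # raw bytes written per byte of user data: (k+m)/k = 1.5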
[0:12] * dyasny (~dyasny@198.251.58.23) Quit (Ping timeout: 480 seconds)
[0:12] * rlrevell (~leer@184.52.129.221) has left #ceph
[0:14] * yguang11 (~yguang11@nat-dip30-wl-d.cfw-a-gci.corp.yahoo.com) has joined #ceph
[0:14] * Vacuum_ (~Vacuum@88.130.200.173) Quit (Ping timeout: 480 seconds)
[0:17] * nardial (~ls@dslb-178-009-182-197.178.009.pools.vodafone-ip.de) Quit (Quit: Leaving)
[0:20] * TheSov2 (~TheSov@cip-248.trustwave.com) Quit (Read error: Connection reset by peer)
[0:22] * debian112 (~bcolbert@24.126.201.64) Quit (Quit: Leaving.)
[0:22] * oro (~oro@79.120.135.209) has joined #ceph
[0:23] * rendar (~I@host180-128-dynamic.61-82-r.retail.telecomitalia.it) Quit ()
[0:23] * rendar (~I@host180-128-dynamic.61-82-r.retail.telecomitalia.it) has joined #ceph
[0:25] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[0:25] <aarontc> erasure coded data pool for cephfs? good idea, yes? :)
[0:27] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[0:28] * Concubidated (~Adium@199.119.131.10) has joined #ceph
[0:28] * arbrandes (~arbrandes@177.45.221.205) Quit (Quit: Leaving)
[0:30] <aarontc> also, speaking of erasure coding... the docs say with jerasure (the default), even with m=4, you need k+(m-1) OSDs up to recover
[0:30] <aarontc> how does that work?
[0:30] <aarontc> "For instance if jerasure is configured with k=8 and m=4, losing one OSD requires reading from the eleven others to repair."
[0:31] * jschmid (~jxs@ip9234f579.dynamic.kabel-deutschland.de) Quit (Ping timeout: 480 seconds)
[0:31] <gleam> probably an error
[0:31] <gleam> oh, actually, no
[0:31] <gleam> because osds aren't dedicated to parity
[0:32] <aarontc> so I guess I'm not understanding the point of m > 1
[0:32] <aarontc> if I can't have 4 down OSDs, with m=4
[0:32] <gleam> i think it's saying it will access all 11 drives
[0:32] <gleam> not that you need all 11 to repair
[0:32] <aarontc> hmm. I'll have to test this
[0:32] <monsted> yeah, i think it'll use 11 if they're there, but accept less
[0:33] <gleam> right
[0:33] <aarontc> I'm going to have to blow my cluster away anyway since there is no way to mark incomplete PGs as up in hammer
[0:34] <aarontc> probably have some fun testing 9.0
[0:35] <davidz1> gleam: I think that k=8 m=4 means that you can lose 4 disks and rebuild from the remainder, so only 8 disks need to be read.
[0:36] <aarontc> davidz1: that's what I thought too, but the docs disagree. gleam was saying the docs might be in error
[0:36] * murmur1 (~datagutt@8Q4AABLQY.tor-irc.dnsbl.oftc.net) Quit ()
[0:37] <aarontc> davidz1: ref: http://ceph.com/docs/master/rados/operations/erasure-code-lrc/
[0:37] * Vale (~Nanobot@x1-6-28-c6-8e-70-f5-a4.cpe.webspeed.dk) has joined #ceph
[0:37] <aarontc> " recovering from the loss of one OSD requires reading from all the others"
[0:39] <gleam> i think that that's ambiguous but not incorrect
[0:40] <gleam> it gives hte wrong impression certainly
[0:40] <gleam> if you have k8/m4 and you lose one, it will read from the 11 remaining drives to recover
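For reference, a minimal sketch of creating the k=8/m=4 jerasure profile being discussed (the profile name, pool name and PG count are made up; the failure-domain key is the spelling used in the Hammer-era docs):

    ceph osd erasure-code-profile set ec-8-4 k=8 m=4 ruleset-failure-domain=host
    ceph osd erasure-code-profile get ec-8-4     # shows plugin=jerasure and the remaining defaults
    ceph osd pool create ecpool 256 256 erasure ec-8-4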
[0:42] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) Quit (Quit: Leaving.)
[0:52] <rotbeard> in the red hat FAQs, they say " Not directly, but you can take advantage of erasure-coded based pools if you use a replication-based caching pool in front. However, this configuration is not yet recommended for production deployments
[0:52] <rotbeard> is that the fact in hammer release?
[0:57] <davidz1> gleam aarontc: Based on that link it is plug-in dependent
[0:58] <gleam> fun
[1:01] <doppelgrau> rotbeard: haven't heard of any "technical" problems, but since a cache miss or cache-purge is really expensive and the logic for cache promote/demote is really basic, it should hurt in reality
[1:02] * moore_ (~moore@64.202.160.88) Quit (Remote host closed the connection)
[1:03] <rotbeard> well i see
[1:06] * Vale (~Nanobot@9S0AAA7Y6.tor-irc.dnsbl.oftc.net) Quit ()
[1:07] * Enikma (~hassifa@nx-01.tor-exit.network) has joined #ceph
[1:10] * KevinPerks (~Adium@2606:a000:80ad:1300:20bd:93ac:b7f8:8712) Quit (Quit: Leaving.)
[1:10] * fsimonce (~simon@host253-71-dynamic.3-87-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[1:11] <doppelgrau> rotbeard: Some applikations like virtual tape libraries might work well I guess, but something like storage for virtual servers would need either a very large cache or sometimes very bad latencies
[1:19] <rotbeard> I thought of using an EC base pool + a replicated cache pool for the openstack/ceph setting I talked about last week (10 rooms, the loss of 1 should be covered). the idea to use 4 replicas + do not automatically rebalance when 1 room goes down leads to a lot of "storage loss" for the vms (net capacity will be gross / 4)
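A rough sketch of the layout rotbeard describes, an erasure-coded base pool with a replicated cache tier in front (pool names, PG counts and the eviction threshold are all made up, and it assumes an EC profile such as the ec-8-4 sketched above already exists):

    ceph osd pool create ec-base 1024 1024 erasure ec-8-4
    ceph osd pool create hot-cache 1024 1024 replicated
    ceph osd tier add ec-base hot-cache
    ceph osd tier cache-mode hot-cache writeback
    ceph osd tier set-overlay ec-base hot-cache       # clients address ec-base, I/O is served via the cache
    ceph osd pool set hot-cache hit_set_type bloom
    ceph osd pool set hot-cache target_max_bytes 1099511627776   # ~1 TiB before eviction starts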
[1:27] * rendar (~I@host180-128-dynamic.61-82-r.retail.telecomitalia.it) Quit ()
[1:28] <doppelgrau> rotbeard: I think the problems are scenarios like "updatedb" where the whole filesystem is scanned => every block that contains a single directory inode gets promoted to the cache, resulting in large parts of the image getting promoted to the cache
[1:31] * fam_away is now known as fam
[1:31] * fam is now known as fam_away
[1:34] <rotbeard> seems legit. maybe someone can work around that while using the same 4TB disks in the cache pool as in the base pool. that could be enough to handle scenarios like updatedb + increase the overall capacity
[1:35] <rotbeard> i do not understand EC in ceph clearly, but are things like rbd striping also possible?
[1:36] * Enikma (~hassifa@7R2AABRPP.tor-irc.dnsbl.oftc.net) Quit ()
[1:38] * dalgaaf (uid15138@id-15138.charlton.irccloud.com) Quit (Quit: Connection closed for inactivity)
[1:40] * OutOfNoWhere (~rpb@199.68.195.101) has joined #ceph
[1:41] <doppelgrau> rotbeard: If I understood the rbd-striping documentation correctly, it should since the striping happens on the client side using different objects => no difference for the pool/EC. But it might make the promote/demote problem even worse (e.g. using 8kb stripes over 4 4MB objects; a read of 32kb would lead to 16MB promoted to cache => 16MB reads on the OSDs, 16MB(x2 with journal)xreplica write to the OSDs and after that the data is delive
[1:43] <doppelgrau> rotbeard: so a 32kb read could lead to (size=4 on the cache-pool) 16MB reads and 64x2MB writes
[1:44] <doppelgrau> rotbeard: I think that is the reason why the people say you want the cache pool so large, promotes/demoted barely happens
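Doppelgrau's amplification example, spelled out (all numbers are the hypothetical ones from the messages above):

    stripe_count=4; object_mb=4; cache_size=4
    # a 32 KB read striped at 8 KB across 4 objects touches all 4 objects,
    # so 4 x 4 MB = 16 MB gets promoted into the cache tier
    promoted_mb=$(( stripe_count * object_mb ))
    # each promoted object is written cache_size times, and every OSD write hits journal + data
    written_mb=$(( promoted_mb * cache_size * 2 ))
    echo "32 KB read -> ${promoted_mb} MB promoted -> ${written_mb} MB written"   # 16 MB and 128 MB (= 64 x 2 MB)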
[1:44] <rotbeard> so large in case of capacity or performance?
[1:45] <rotbeard> but: isn't rbd striping _just_ writing to all regarding OSDs in parallel? but maybe I mixed some terms/technologies here.
[1:46] <doppelgrau> rotbeard: capacity (but of course fast enough too)
[1:47] <doppelgrau> rotbeard: If I don't mix something up, rbd striping is like this: instead of dividing the image into 4 MB blocks, each a single object, you group a few of these objects together and divide the blocks in a round-robin style every few kb
[1:50] * Enikma (~Bj_o_rn@anonymous6.sec.nl) has joined #ceph
[1:50] <doppelgrau> rotbeard: so if you have a very hot small (<=1 object, default 4MB) part of a disk it would hit only 1 object => only the OSDs associated with that object. with striping you usually have a bit larger "regions", but these are divided in smaller parts over the different objects => this "hotspot" IO will get more evenly distributed
[1:51] <rotbeard> well I think I thought about 2 different things. I talked about writing an object to a rbd will result in [osd performance] multiplies with the replica count because of the ability to write across OSDs
[1:52] <doppelgrau> rotbeard: as far as I understand, the object itself is written "normally", so going to the primary, then from the primary to the others
[1:53] <doppelgrau> rotbeard: rbd-striping "only" switches the primary more often since the object changes more often (although in a short round-robin fashion till the end of the "disk area" is reached)
[1:54] <doppelgrau> my spelling is getting really bad, I should go to bed :)
[1:55] <rotbeard> I guess I am completely wrong in understand all the things. I thought that is the reason why we see a lot more write performance than read performance.
[1:55] * oms101 (~oms101@p20030057EA087400C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:56] <rotbeard> doppelgrau, of course: I have to hit a meeting in about 8 hours so I should go too ;)
[1:56] <doppelgrau> rotbeard: with your setup writes go first to the journal => SSD => fast; reads come from the platter...
[1:57] <rotbeard> mh. I really have to read a lot more than I currently did.
[1:58] <rotbeard> but thanks so far for pointing things out
[1:58] <doppelgrau> rotbeard: no problem, see you friday :)
[1:58] <rotbeard> yep ;)
[2:03] * yuanz (~yzhou67@192.102.204.38) Quit (Quit: Leaving)
[2:04] * oms101 (~oms101@p20030057EA084600C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[2:04] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[2:07] * yanzheng (~zhyan@182.139.21.245) has joined #ceph
[2:17] * LeaChim (~LeaChim@host86-132-233-125.range86-132.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:18] <TheSov> i just ordered 6 raspberry pi's
[2:18] <TheSov> gonna build me one of them pi lab clusters
[2:20] * Enikma (~Bj_o_rn@8Q4AABLUD.tor-irc.dnsbl.oftc.net) Quit ()
[2:20] * Debesis (~0x@140.217.38.86.mobile.mezon.lt) Quit (Quit: Leaving)
[2:21] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[2:21] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit ()
[2:22] * lxo (~aoliva@lxo.user.oftc.net) Quit (Contact kline@freenode.net with questions. (2015-06-17 00:22:55))
[2:23] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[2:25] <TheSov> for building a ceph cluster and getting it working very cheap 400 for the full setup
[2:25] <TheSov> but it doesnt scale well per osd
[2:27] <TheSov> for a single osd, you need the pi, a power adaptor, a microsd card, a usb to sata converter and finally a disk. the pi is 35, a power adaptor is 5, micro sd card is 5 usb to sata is 10, and a disk is around 50 for a cost drive, and another 10 for a case for the pi so its 110 dollars per osd.
[2:28] <TheSov> slow as molasses too, the network card on the pi is hung onto the usb. so it shares bandwidth.
[2:29] * Shnaw (~Sliker@readme.tor.camolist.com) has joined #ceph
[2:29] <TheSov> at 110 per osd, you reach parity at 12 disks in terms of speed, but for space you reach parity at 24 drives
[2:30] <TheSov> it would not be cost effective to build a giant cluster of raspberry pi's
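Adding up TheSov's per-OSD parts list, with the prices exactly as quoted in the channel (the itemized parts actually come to $115, slightly above the quoted $110):

    pi=35; psu=5; sdcard=5; usb_sata=10; disk=50; case=10
    echo "per-OSD cost: \$$(( pi + psu + sdcard + usb_sata + disk + case ))"   # $115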
[2:31] * dgurtner (~dgurtner@178.197.231.156) Quit (Ping timeout: 480 seconds)
[2:34] * haomaiwa_ (~haomaiwan@183.206.163.154) Quit (Remote host closed the connection)
[2:38] <blahnana> that doesn't really surprise me all that much
[2:38] * Concubidated1 (~Adium@199.119.131.10) has joined #ceph
[2:38] * Concubidated (~Adium@199.119.131.10) Quit (Read error: Connection reset by peer)
[2:39] * lxo (~aoliva@lxo.user.oftc.net) Quit (autokilled: This host is believed to be a source of spam. - Contact support@oftc.net for help. (2015-06-17 00:39:01))
[2:39] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[2:41] * Concubidated (~Adium@199.119.131.10) has joined #ceph
[2:41] * Concubidated1 (~Adium@199.119.131.10) Quit (Read error: Connection reset by peer)
[2:42] <ichavero> hello, i'm getting this error: health HEALTH_WARN 33 pgs peering; 16 pgs stuck inactive; 33 pgs stuck unclean; too few pgs per osd (13 < min 20); mds cluster is degraded. I've checked the docs and didn't find how to fix this. I also get an I/O error while trying to mount the cluster. can somebody give me a hint?
[2:44] * oro (~oro@79.120.135.209) Quit (Ping timeout: 480 seconds)
[2:44] <TheSov> blahnana, the idea of self enclosed ssd's per disk is not a bad one though you would just have to scale it better, get a properly fast arm processor and use GPIO direct to ethernet, with 2 sata/sas ports, 1 rust 1 ssd configured via SPI or tftp or something
[2:45] <TheSov> buying an entire system a few osd is more costly than it should be, people should be able to scale ceph granularly
[2:46] <TheSov> ichavero, did u lose an osd?
[2:48] * towen (~towen@c-98-230-203-84.hsd1.nm.comcast.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[2:51] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[2:54] * ira (~ira@208.217.184.210) Quit (Ping timeout: 480 seconds)
[2:59] * Shnaw (~Sliker@8Q4AABLVQ.tor-irc.dnsbl.oftc.net) Quit ()
[3:06] * marrusl (~mark@rrcs-70-60-101-195.midsouth.biz.rr.com) has joined #ceph
[3:07] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) Quit (Quit: Leaving.)
[3:10] <ichavero> TheSov: i added one by mistake and i removed it
[3:12] <ichavero> TheSov: this are my osd http://paste.fedoraproject.org/232864/14345035/
[3:12] * shyu (~Shanzhi@119.254.120.66) has joined #ceph
[3:18] * vata (~vata@cable-21.246.173-197.electronicbox.net) has joined #ceph
[3:20] * yguang11 (~yguang11@nat-dip30-wl-d.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[3:21] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) has joined #ceph
[3:23] * georgem (~Adium@23.91.150.96) has joined #ceph
[3:24] * shang (~ShangWu@175.41.48.77) has joined #ceph
[3:24] * sankarshan (~sankarsha@183.87.39.242) has joined #ceph
[3:32] * puffy (~puffy@216.207.42.129) Quit (Ping timeout: 480 seconds)
[3:33] <TheSov> you have to remove it from your crush map
[3:33] <TheSov> did u do that
[3:33] <TheSov> 0 0.9 osd.0 DNE
[3:34] <TheSov> that says its down
[3:36] * nsoffer (~nsoffer@bzq-79-180-80-9.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[3:40] * kefu (~kefu@114.92.97.251) has joined #ceph
[3:43] * Kupo1 (~tyler.wil@23.111.254.159) Quit (Read error: Connection reset by peer)
[3:44] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Ping timeout: 480 seconds)
[3:48] * kefu (~kefu@114.92.97.251) Quit (Ping timeout: 480 seconds)
[3:55] <ichavero> TheSov: ohh i actually did a ceph osd rm 0
[3:56] * MrHeavy_ (~MrHeavy@pool-108-54-190-117.nycmny.fios.verizon.net) has joined #ceph
[3:56] * MrHeavy (~MrHeavy@pool-108-54-190-117.nycmny.fios.verizon.net) Quit (Read error: Connection reset by peer)
[3:56] * kefu (~kefu@114.92.97.251) has joined #ceph
[3:56] * zhaochao (~zhaochao@111.161.77.241) has joined #ceph
[4:01] * towen (~towen@c-98-230-203-84.hsd1.nm.comcast.net) has joined #ceph
[4:06] * ivotron (uid25461@id-25461.brockwell.irccloud.com) Quit (Quit: Connection closed for inactivity)
[4:08] <ichavero> TheSov: i still get an I/O error while mounting but the logs are different: http://paste.fedoraproject.org/232871/45068541/
[4:08] * Mika_c (~Mk@122.146.93.152) has joined #ceph
[4:19] * moore (~moore@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) has joined #ceph
[4:20] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[4:21] * kefu (~kefu@114.92.97.251) has joined #ceph
[4:22] * flisky (~Thunderbi@106.39.60.34) has joined #ceph
[4:24] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[4:30] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[4:31] * kefu (~kefu@114.92.97.251) has joined #ceph
[4:32] <TheSov> you there?
[4:32] * flisky1 (~Thunderbi@106.39.60.34) has joined #ceph
[4:33] <TheSov> did u remove the auth key
[4:34] <TheSov> and did you remove it from the crush map?
[4:34] <TheSov> http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing-the-osd
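For reference, roughly the sequence that page describes, applied to the osd.0 from ichavero's paste ("DNE" in the ceph osd tree output means the OSD id no longer exists, but a stale CRUSH entry and auth key can still linger; skip any step already done or already gone):

    ceph osd out 0                  # mark it out if it is still in
    ceph osd crush remove osd.0     # drop it from the CRUSH map
    ceph auth del osd.0             # remove its authentication key
    ceph osd rm 0                   # finally remove the OSD id itself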
[4:36] * flisky (~Thunderbi@106.39.60.34) Quit (Ping timeout: 480 seconds)
[4:39] * haomaiwang (~haomaiwan@218.94.96.134) has joined #ceph
[4:42] * Georgyo (~georgyo@shamm.as) Quit (Remote host closed the connection)
[4:43] * moore (~moore@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) Quit (Remote host closed the connection)
[4:47] * KevinPerks (~Adium@2606:a000:80ad:1300:4f:5f59:58ac:9398) has joined #ceph
[4:47] * lucas1 (~Thunderbi@218.76.52.64) Quit (Read error: Connection reset by peer)
[4:50] <evilrob00> when I run ceph-deploy mon create-initial it dies "KeyNotFoundError: Could not find keyring file: /etc/ceph/ceph.client.admin.keyring on host phl-ceph-01" where that node is my admin node
[4:50] <evilrob00> it is indeed not there, but that step should create it right?
[4:51] * midnightrunner (~midnightr@216.113.160.71) Quit (Ping timeout: 480 seconds)
[4:52] <evilrob00> there is a file "ceph.client.admin.keyring.6322.tmp" and a tmp file. both zero length
[4:52] * yguang11 (~yguang11@12.31.82.125) has joined #ceph
[4:59] * ivotron (uid25461@id-25461.brockwell.irccloud.com) has joined #ceph
[5:06] <evilrob00> can I get more debugging out of ceph-deploy on that step?
[5:10] * shang (~ShangWu@175.41.48.77) Quit (Remote host closed the connection)
[5:14] <evilrob00> this just worked for me on a VM. is there some way to tell why it's not working on my box?
[5:14] * Georgyo (~georgyo@2600:3c03:e000:71::1) has joined #ceph
[5:20] * rahatm1 (~rahatm1@d108-180-151-4.bchsia.telus.net) has joined #ceph
[5:23] * OutOfNoWhere (~rpb@199.68.195.101) Quit (Ping timeout: 480 seconds)
[5:23] * calvinx (~calvin@101.100.172.246) has joined #ceph
[5:26] <evilrob00> anyone?
[5:27] * blahnana (~bman@2602:ffe8:200:1::6019:93b3) Quit (Read error: Connection reset by peer)
[5:27] * rahatm1 (~rahatm1@d108-180-151-4.bchsia.telus.net) Quit (Remote host closed the connection)
[5:27] * yguang11_ (~yguang11@nat-dip15.fw.corp.yahoo.com) has joined #ceph
[5:29] <badone> ceph-deploy new ?
[5:30] * blahnana (~bman@2602:ffe8:200:1::6019:93b3) has joined #ceph
[5:30] <evilrob00> that creates the mon keyring fine for me
[5:33] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[5:33] <badone> I run "ceph-deploy new mon1 mon2 mon3" and then "ceph-deploy mon create-initial" and don't get your error
[5:34] * yguang11 (~yguang11@12.31.82.125) Quit (Ping timeout: 480 seconds)
[5:34] <badone> selinux maybe?
[5:35] <TheSov> no
[5:35] <TheSov> no create initial
[5:35] <TheSov> ceph-deploy new mon1 mon2 mon3 etc
[5:35] <TheSov> makes the keys
[5:35] <TheSov> after that
[5:35] <TheSov> u do ceph-deploy gatherkeys
[5:35] <badone> depending on version
[5:35] <TheSov> you should be using the latest
[5:36] <badone> and in the latest I don't need gatherkeys
[5:36] <TheSov> im pretty sure you do to deploy osds
[5:36] <TheSov> unless you are installing osd's on the monitors
[5:36] <badone> we're not talking about OSDs though?
[5:36] <TheSov> in which case.... yuck
[5:37] <evilrob00> badone: I ran it that way and it worked, but I've been doign so much cleansing on this machine trying to make it pristine, not sure if that was all it was.
[5:37] <evilrob00> new cluster, I think in the morning I'm going to blow it away and start over
[5:37] <badone> evilrob00: sure, have fun
[5:38] * squ (~Thunderbi@46.109.36.167) has joined #ceph
[5:38] <TheSov> i must have done the deployment process like 20 times by now trying to get it right
[5:38] <TheSov> so i wrote a cheat sheet
[5:39] <TheSov> prepare nodes
[5:39] <TheSov> ufw disable,edit hosts file,static ips,resolvers,updates
[5:39] <TheSov> move /var to ssd on monitors
[5:39] <TheSov> kernel.pid_max = 4194303 on OSD's
[5:39] <TheSov> basically its a huge list of stuff like that
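For context, a minimal ceph-deploy flow of the kind being debated above (hostnames and the device path are placeholders; on newer ceph-deploy releases mon create-initial also gathers the bootstrap keys, while older ones need an explicit gatherkeys):

    ceph-deploy new mon1 mon2 mon3
    ceph-deploy install mon1 mon2 mon3 osd1 osd2
    ceph-deploy mon create-initial
    ceph-deploy gatherkeys mon1                 # only needed on older ceph-deploy releases
    ceph-deploy osd create osd1:/dev/sdb        # or: osd prepare followed by osd activate
    ceph-deploy admin mon1 osd1 osd2            # push ceph.conf and the admin keyring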
[5:42] * marrusl (~mark@rrcs-70-60-101-195.midsouth.biz.rr.com) Quit (Ping timeout: 480 seconds)
[5:44] <evilrob00> 34 4TB drives per node. three nodes I'll have plenty of space, but it won't be screaming fast.
[5:44] * overclk (~overclk@121.244.87.117) has joined #ceph
[5:44] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) Quit (Quit: Leaving.)
[5:50] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[5:50] * kefu (~kefu@114.92.97.251) has joined #ceph
[5:53] * tganguly (~tganguly@121.244.87.117) has joined #ceph
[5:53] * georgem (~Adium@23.91.150.96) Quit (Quit: Leaving.)
[5:56] * puffy (~puffy@c-50-131-179-74.hsd1.ca.comcast.net) has joined #ceph
[5:57] * puffy (~puffy@c-50-131-179-74.hsd1.ca.comcast.net) Quit ()
[5:57] * puffy (~puffy@c-50-131-179-74.hsd1.ca.comcast.net) has joined #ceph
[5:59] * jwilkins (~jwilkins@c-67-180-123-48.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[6:03] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[6:06] * jamespage (~jamespage@culvain.gromper.net) Quit (Quit: Coyote finally caught me)
[6:06] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[6:06] * jamespage (~jamespage@culvain.gromper.net) has joined #ceph
[6:07] * kefu (~kefu@114.92.97.251) has joined #ceph
[6:10] * markl (~mark@knm.org) Quit (Remote host closed the connection)
[6:11] * tganguly (~tganguly@121.244.87.117) Quit (Ping timeout: 480 seconds)
[6:14] * rektide_ (~rektide@eldergods.com) Quit (Quit: Lost terminal)
[6:14] * rektide (~rektide@eldergods.com) has joined #ceph
[6:20] * tganguly (~tganguly@121.244.87.117) has joined #ceph
[6:25] * towen (~towen@c-98-230-203-84.hsd1.nm.comcast.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:32] * yguang11_ (~yguang11@nat-dip15.fw.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[6:34] * rektide (~rektide@eldergods.com) Quit (Remote host closed the connection)
[6:39] * flisky1 (~Thunderbi@106.39.60.34) Quit (Ping timeout: 480 seconds)
[6:39] * amote (~amote@121.244.87.116) has joined #ceph
[6:46] <ichavero> TheSov: now i've removed the key and removed it from the crush map, and yes i have osd's on the monitors
[6:50] * dgbaley27 (~matt@c-67-176-93-83.hsd1.co.comcast.net) has joined #ceph
[6:55] * aarontc_ (~aarontc@2001:470:e893::1:1) has joined #ceph
[6:56] * yguang11 (~yguang11@12.31.82.125) has joined #ceph
[6:57] * yguang11_ (~yguang11@nat-dip15.fw.corp.yahoo.com) has joined #ceph
[6:59] * aarontc (~aarontc@2001:470:e893::1:1) Quit (Ping timeout: 480 seconds)
[6:59] * kefu is now known as kefu|afk
[7:03] * flisky (~Thunderbi@106.39.60.34) has joined #ceph
[7:04] * puffy (~puffy@c-50-131-179-74.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[7:04] * yguang11 (~yguang11@12.31.82.125) Quit (Ping timeout: 480 seconds)
[7:06] * flisky (~Thunderbi@106.39.60.34) Quit (Remote host closed the connection)
[7:07] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[7:07] * flisky (~Thunderbi@106.39.60.34) has joined #ceph
[7:09] * kefu|afk (~kefu@114.92.97.251) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[7:13] * ska (~skatinolo@cpe-173-174-111-177.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[7:15] * kefu (~kefu@114.92.97.251) has joined #ceph
[7:17] * haomaiwang (~haomaiwan@218.94.96.134) Quit (Ping timeout: 480 seconds)
[7:18] * tganguly (~tganguly@121.244.87.117) Quit (Ping timeout: 480 seconds)
[7:29] * tganguly (~tganguly@121.244.87.117) has joined #ceph
[7:32] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[7:33] * kefu (~kefu@114.92.97.251) has joined #ceph
[7:35] * yguang11_ (~yguang11@nat-dip15.fw.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[7:43] * MrBy2 (~jonas@85.115.23.2) has joined #ceph
[7:48] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[7:57] * aarontc_ (~aarontc@2001:470:e893::1:1) Quit (Quit: Bye!)
[7:59] * haomaiwang (~haomaiwan@218.94.96.134) has joined #ceph
[8:01] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[8:01] * shohn (~shohn@dslb-178-002-076-138.178.002.pools.vodafone-ip.de) has joined #ceph
[8:01] * kefu (~kefu@114.92.97.251) has joined #ceph
[8:03] * cok (~chk@2a02:2350:18:1010:da5:9741:2646:970b) has joined #ceph
[8:07] * peeejayz (~peeejayz@isis57186.sci.rl.ac.uk) Quit (Ping timeout: 480 seconds)
[8:07] * karnan (~karnan@121.244.87.117) has joined #ceph
[8:09] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[8:10] * kefu (~kefu@114.92.97.251) has joined #ceph
[8:11] * KevinPerks (~Adium@2606:a000:80ad:1300:4f:5f59:58ac:9398) Quit (Quit: Leaving.)
[8:12] * peeejayz (~peeejayz@isis57186.sci.rl.ac.uk) has joined #ceph
[8:14] * sjm (~sjm@49.32.0.218) has joined #ceph
[8:15] * kefu (~kefu@114.92.97.251) Quit (Max SendQ exceeded)
[8:16] * kefu (~kefu@114.92.97.251) has joined #ceph
[8:18] * Concubidated (~Adium@199.119.131.10) Quit (Quit: Leaving.)
[8:18] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[8:19] * Larsen (~andreas@2a01:2b0:2000:11::cafe) has joined #ceph
[8:20] * kefu_ (~kefu@114.92.125.213) has joined #ceph
[8:22] * Sysadmin88 (~IceChat77@2.124.164.69) Quit (Quit: IceChat - Its what Cool People use)
[8:24] * kefu_ (~kefu@114.92.125.213) Quit (Max SendQ exceeded)
[8:24] * kefu (~kefu@114.92.97.251) Quit (Ping timeout: 480 seconds)
[8:24] * kefu (~kefu@114.92.125.213) has joined #ceph
[8:25] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[8:27] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[8:27] * Targren (~chatzilla@fl-74-4-66-138.dhcp.embarqhsd.net) has joined #ceph
[8:27] * b0e (~aledermue@213.95.25.82) has joined #ceph
[8:28] * Targren (~chatzilla@fl-74-4-66-138.dhcp.embarqhsd.net) Quit ()
[8:34] * aarontc (~aarontc@2001:470:e893::1:1) has joined #ceph
[8:38] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) Quit (Remote host closed the connection)
[8:44] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit (Quit: Leaving.)
[8:45] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[8:46] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit ()
[8:47] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[8:47] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[8:47] * kefu (~kefu@114.92.125.213) Quit (Max SendQ exceeded)
[8:48] * kefu (~kefu@114.92.125.213) has joined #ceph
[8:48] * sleinen (~Adium@2001:620:0:82::101) has joined #ceph
[8:55] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[8:56] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[8:56] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[8:57] <Be-El> hi
[8:58] * sleinen (~Adium@2001:620:0:82::101) Quit (Ping timeout: 480 seconds)
[8:58] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[9:02] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[9:09] * Kioob`Taff (~plug-oliv@2a01:e35:2e8a:1e0::42:10) has joined #ceph
[9:12] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[9:12] * analbeard (~shw@support.memset.com) has joined #ceph
[9:12] * analbeard (~shw@support.memset.com) has left #ceph
[9:14] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[9:15] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[9:19] * andrein (~oftc-webi@office.smart-x.net) has joined #ceph
[9:21] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:23] <andrein> Hi guys, I've just deployed a ceph cluster for testing, and I'm trying to deploy a rgw instance. Unfortunately, ceph-deploy fails and I have no idea where to begin troubleshooting it. Ceph-deploy output is here: http://fpaste.org/232925/43452568/
[9:26] * nsoffer (~nsoffer@bzq-79-180-80-9.red.bezeqint.net) has joined #ceph
[9:26] <anorak> hi
[9:28] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[9:29] <anorak> Hi everyone! I have a question. If I have requested a block device from an rbd image for Client A , mounted it as well on Client A. If Client A crashed, would it be possible to re-map the SAME block device on client B?
[9:29] * andrein (~oftc-webi@office.smart-x.net) Quit (Remote host closed the connection)
[9:29] <anorak> If am in the process of setting up client B but how does client B becomes aware of existing block devices?
[9:30] <anorak> I* am in the process...
[9:30] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[9:31] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[9:31] <Be-El> anorak: you can mount a rbd image on multiple clients, yes. but you DO NOT WANT to mount it multiple times without taking precautions
[9:32] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[9:33] <anorak> Be-El: If you mean that I should not mount the block device on multiple clients at the same time. Then yes, i do not intend to. But I am setting up a scenario where I have mounted a block device on Client A and in case Client A crrashed, i would need to re-map the block device on Client B.
[9:33] * fsimonce (~simon@host253-71-dynamic.3-87-r.retail.telecomitalia.it) has joined #ceph
[9:34] <Be-El> anorak: you
[9:34] * dugravot6 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) has joined #ceph
[9:34] <Be-El> anorak: you'll run into a similar problem
[9:34] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[9:34] <Be-El> anorak: client A might not have flushed its buffers, so the filesystem on the rbd may not be consistent
[9:35] <anorak> Be-El: oh
[9:35] <Be-El> anorak: the buffers involved are the vm internal buffers (linux page cache etc.), rbd caches (if enabled) etc.
[9:36] <anorak> Be-El: then I am open to suggestions in this case :)
[9:37] <Be-El> anorak: you have to ensure that all write operations are synchronous on the rbd image. this means disabling all the cache between the processes in the vm and the rbd image
[9:37] <anorak> Be-El: ook.
[9:37] <Be-El> anorak: i think the best way to get a comprehensive list of configuration settings for this case is asking on the mailing list. there may be people who have done a similar setup
[9:37] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:38] <Be-El> anorak: simultanous access from client A and client B is easier since you can use filesystems like OCFS2 for the synchronisation
[9:39] <anorak> Be-El: Yeah. I read a similar thread about setting up OCFS2 on top of the rbd image
[9:39] <Be-El> anorak: if you want to have a kind of cold standby for a service, you can also use something like OCFS2 on the rbd and pacemaker for service switching
[9:40] <Be-El> but both are way beyond my own experience
[9:40] <anorak> Be-El: Thanks for the tip! So all clients of ceph clusters are aware of the existing block devices...yes?
[9:41] <Be-El> anorak: no, but a client may be aware of other clients accessing a block device, if the client is using locks or watchers
[9:41] <Be-El> anorak: a client can list all block devices of cause
[9:42] <anorak> Be-El: Gotcha! Thanks!
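A short sketch of the checks anorak could run before re-mapping on client B (pool and image names are made up; this only covers the mechanics, not the flush/consistency caveats Be-El raises):

    rbd ls -p rbd                     # list block device images in the 'rbd' pool
    rbd lock list myimage -p rbd      # any advisory locks left behind by client A
    rbd showmapped                    # what is currently mapped on this host
    rbd map myimage -p rbd            # map on client B once client A is known to be gone
    # after a hard crash of client A, fsck the filesystem on the device before mounting,
    # since unflushed buffers may have left it inconsistent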
[9:45] * vikhyat is now known as vikhyat|away
[9:46] * jordanP (~jordan@213.215.2.194) has joined #ceph
[9:47] * linjan (~linjan@213.8.240.146) has joined #ceph
[9:50] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[9:51] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[9:51] * andrein (~oftc-webi@office.smart-x.net) has joined #ceph
[9:52] <andrein> Hi guys, sorry, think my client timed out, anyone had a chance to look over the ceph-deploy log I posted earlier?
[9:56] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[9:57] * flisky (~Thunderbi@106.39.60.34) Quit (Quit: flisky)
[9:57] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit ()
[9:58] * a1-away (~jelle@62.27.85.48) Quit (Remote host closed the connection)
[9:59] <anorak> Another question. Is it possible to map rbd images to specific OSDs? I would take an educational guess and say no...but I have been wrong before :)
[10:02] * loicd1 (~loic@193.54.227.109) has joined #ceph
[10:03] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[10:05] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[10:07] <Be-El> anorak: rbd images operates on pool. and pools can be restricted to certain osds by their associated crush map
[10:07] * rwheeler (~rwheeler@pool-173-48-214-9.bstnma.fios.verizon.net) Quit (Quit: Leaving)
[10:07] <Be-El> eh..crush rule
[10:10] <cloud_vision> andrein: by the look of the log you posted, it looks like the issue is not with ceph-deploy but with starting the service on the designated host. run systemctl status ceph-radosgw.service to get some more info on why it fails to start. On a side note... ceph-deploy has some small bugs that need to be resolved directly on the hosts
[10:10] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[10:13] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[10:14] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[10:15] * Vacuum_ (~Vacuum@i59F79018.versanet.de) has joined #ceph
[10:16] <RomeroJnr> Hi everyone, yesterday I posted a question but I guess I got no replies, it's fairly simple... After setting up a 100 TB cluster for a POC, I created a single pool with somewhat ~6k placement groups (got that number from the pg calc), there we created a few rbd images (40gb, 40gb, 20gb, 20gb). We did some wild 'dd' testing and rados benchmarking on the plataform in order to better understand it's
[10:16] <RomeroJnr> performance. After testing I removed all the images and the pools, however, 'ceph -w' is still showing 695 GB of raw used space. I couldn't find anything related to this behaviour on the official documentation (neither anywhere else). Is this expected?
[10:21] <anorak> Be-El: Ah ok. Can you give me an example of such a rule set? .i.e. Syntax etc
[10:22] <Be-El> anorak: see http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/ for a setup with harddisk based pools and ssd pools
[10:23] * kawa2014 (~kawa@212.77.3.187) has joined #ceph
[10:23] <anorak> Be-El: Thanks!
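The short version of what that post does, as a hedged sketch (rule, root and pool names are made up; it assumes the CRUSH map already has a separate root, e.g. "ssd", containing the target OSDs, and uses the Hammer-era setting name):

    ceph osd crush rule create-simple ssd-rule ssd host    # pick OSDs under root 'ssd', one per host
    ceph osd crush rule dump ssd-rule                      # note the rule id
    ceph osd pool create rbd-ssd 512 512 replicated
    ceph osd pool set rbd-ssd crush_ruleset 1              # the rule id from the dump above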
[10:23] <Be-El> RomeroJnr: do you use external journals for the osds or a cache pool in front of the rbd pool?
[10:24] * andrein (~oftc-webi@office.smart-x.net) Quit (Remote host closed the connection)
[10:25] <RomeroJnr> Be-El: i do use external journals
[10:26] * cok (~chk@2a02:2350:18:1010:da5:9741:2646:970b) Quit (Quit: Leaving.)
[10:26] <Mika_c> Hello, all. Has anyone ever built radosgw and used it with swift? I followed the document https://ceph.com/docs/v0.78/radosgw/config/
[10:27] <Mika_c> and execute command "swift -V 1.0 -A http://172.23.107.7/auth -U test:swift -K 'gNps4kABZnrKt8hBoJTP1Ev2W1KXuMDSsQdxzJC5' list"
[10:27] <Mika_c> always display "Account not found"
[10:27] * vikhyat|away is now known as vikhyat
[10:28] <Be-El> RomeroJnr: so the space is not taken by the journals. does 'rados ls' still list objects in the pool?
[10:28] <Be-El> Mika_c: 0.78 is ancient
[10:28] <Mika_c> any ideas???
[10:29] <Be-El> Mika_c: if you do not use that ceph version, try the manual for the current version (or the version you use) first
[10:30] <Mika_c> Ok, Got it!
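Besides the outdated docs, one of the usual suspects for "Account not found" is a missing swift subuser/key on the gateway side. A possible check, mirroring the test:swift user from Mika_c's command:

    radosgw-admin user create --uid=test --display-name="test user"
    radosgw-admin subuser create --uid=test --subuser=test:swift --access=full
    radosgw-admin key create --subuser=test:swift --key-type=swift --gen-secret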
[10:30] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) has joined #ceph
[10:32] * kawa2014 (~kawa@212.77.3.187) Quit (Ping timeout: 480 seconds)
[10:32] <RomeroJnr> Be-El: as I said, the pool was even removed... i have created a different pool afterwards and it shows: rados ls -p POOL-6kPG | wc -l > 159
[10:32] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[10:32] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[10:33] <Be-El> RomeroJnr: so the space is taken by the objects in the different pool
[10:33] <RomeroJnr> Be-El: ok, so what if i remove all pools, leaving nothing. will this space be reclaimed?
[10:34] <Be-El> RomeroJnr: in that case almost all space should be freed
[10:34] <Be-El> RomeroJnr: you can also have a more direct look at the occupied space
[10:34] <Be-El> RomeroJnr: ceph pg dump lists the number of objects in each PG and their size
[10:35] <Be-El> eh...accumulated size
[10:36] * andrein (~oftc-webi@office.smart-x.net) has joined #ceph
[10:37] <andrein> cloud_vision: Tried to run the commands ceph-deploy runs by hand, this is what I have. looks like an issue with cephx: http://fpaste.org/232943/45298451/
[10:37] <andrein> cloud_vision: also found the following in the ceph log: 2015-06-17 04:30:33.544622 mon.0 [INF] from='client.? <IP>:0/1027059' entity='client.bootstrap-rgw' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.ceph-rgw0", "caps": ["osd", "allow", "rwx", "mon", "allow", "rw"]}]: access denied
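A first check for that "access denied", assuming default paths: confirm the bootstrap-rgw key exists and carries the bootstrap profile (clusters created before Hammer often lack it), then retry the ceph-deploy step:

    ceph auth get client.bootstrap-rgw        # should show: caps mon = "allow profile bootstrap-rgw"
    # if it is missing entirely, it can be created and written into place:
    ceph auth get-or-create client.bootstrap-rgw mon 'allow profile bootstrap-rgw' \
        -o /var/lib/ceph/bootstrap-rgw/ceph.keyring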
[10:38] <RomeroJnr> Be-El: so, all pools removed (too few PGs per OSD (0 < min 30), 699 GB used, 410 TB / 410 TB avail (64 pgs: 64 active+clean; 8 bytes data, 697 GB used, 410 TB / 410 TB avail)
[10:39] <Be-El> RomeroJnr: how many osd do you have in that cluster?
[10:39] <RomeroJnr> Be-El: 300
[10:40] <boolman> I have 101 pg's in stuck in stale. what to do? what I did was stop the osd, flush the journal and recreate it
[10:40] <Be-El> RomeroJnr: so it's about 2 gb / osd....maybe it's the filesystem overhead, e.g. the amount of space used on an otherwise empty osd
[10:40] <boolman> and I can't query stale pg's
[10:41] <tuxcrafter> 2015-06-17 10:40:15.539908 mon.0 [INF] pgmap v31278: 320 pgs: 1 active+clean+scrubbing+deep, 133 active+clean, 186 down+peering; 1285 GB data, 2358 GB used, 2296 GB / 4655 GB avail
[10:41] <Be-El> RomeroJnr: how much space is occupied on the osd partitions right now?
[10:41] <tuxcrafter> how do i calculate how long it is going to take to be in an active+clean state again
[10:41] * jrocha (~jrocha@vagabond.cern.ch) has joined #ceph
[10:41] <RomeroJnr> Be-El: around 2.4G
[10:42] <Be-El> RomeroJnr: and 2.4G * 300 ~ 699 G
[10:42] <Be-El> RomeroJnr: so it's just the housekeeping overhead
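To see that baseline directly, the per-OSD mounts and the cluster-wide view can be compared (default OSD mount path assumed):

    df -h /var/lib/ceph/osd/ceph-*     # per-OSD filesystem usage on an OSD host
    ceph df detail                     # raw used vs per-pool usage for the whole cluster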
[10:42] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[10:43] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Read error: Connection reset by peer)
[10:43] <Be-El> tuxcrafter: with that setup its taking forever, since no pgs are backfilled at all
[10:43] * shylesh (~shylesh@121.244.87.118) has joined #ceph
[10:46] <tuxcrafter> Be-El: i forgot to "ceph osd set noout" before turning all my nodes off
[10:46] <tuxcrafter> i turned them back on one by one
[10:46] <tuxcrafter> and this is the state it is in now
[10:47] <tuxcrafter> it is working for a few hours now
[10:47] <tuxcrafter> working as in doing some scrubbing
[10:47] <Be-El> tuxcrafter: some osd seems to be missing or being stuck. you need to find out which are affected
[10:50] <tuxcrafter> Be-El: thx i found it, one node with two osds didnt come up
[10:50] <tuxcrafter> now it is doing some rebuilding again
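For the record, the step tuxcrafter mentions forgetting, for any planned shutdown of nodes:

    ceph osd set noout      # before taking nodes down, so their OSDs are not marked out and rebalanced
    # ... maintenance / reboot ...
    ceph osd unset noout    # once everything is back up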
[10:54] <boolman> any tip on my stuck in stale pg's?
[10:54] <T1w> wooo
[10:54] <Mika_c> Be-El: Hi, I followed the newer version of the document. The status is still "Account not found", but now I get the error "ClientException: Auth GET failed: http://192.168.11.10/auth 404 Not Found".
[10:54] <T1w> I've gotten a go for ceph
[10:54] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[10:55] <T1w> with 10gbit network etc
[10:55] <Be-El> Mika_c: and at this point i have to delegate you to someone with more experience in setting up radosgw
[10:55] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[10:56] * andrein (~oftc-webi@office.smart-x.net) Quit (Remote host closed the connection)
[10:59] * analbeard (~shw@support.memset.com) has joined #ceph
[11:00] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[11:03] * treenerd (~treenerd@85.193.140.98) has joined #ceph
[11:04] <treenerd> Hi; does anyone have an Idea of the Average Latency value in rados bench? Is that seconds milliseconds microseconds; I can't get any information about that?
[11:08] <gregsfortytwo> it's seconds, but you shouldn't read too much into them if they're a bit large (it's not a very efficient program and it generally overloads the system)
[11:12] <treenerd> so if it is the value around 0.200s for write performance it would be a common value you think?
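For context, a typical invocation of the benchmark treenerd is reading (pool name is a placeholder); the "Average Latency" line in its summary is in seconds, so a value around 0.200 means roughly 200 ms per 4 MB write at the chosen concurrency:

    rados bench -p testpool 60 write -b 4194304 -t 16 --no-cleanup
    rados bench -p testpool 60 seq -t 16      # sequential-read pass over the objects left behind
    rados -p testpool cleanup                 # remove the benchmark objects afterwards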
[11:14] * nigwil (~Oz@li1101-124.members.linode.com) Quit (Quit: leaving)
[11:23] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[11:24] * nigwil (~Oz@li1101-124.members.linode.com) has joined #ceph
[11:28] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[11:29] * overclk (~overclk@121.244.87.117) Quit (Ping timeout: 480 seconds)
[11:29] * zhaochao_ (~zhaochao@111.161.77.241) has joined #ceph
[11:35] * zhaochao (~zhaochao@111.161.77.241) Quit (Ping timeout: 480 seconds)
[11:35] * zhaochao_ is now known as zhaochao
[11:36] * haomaiwang (~haomaiwan@218.94.96.134) Quit (Remote host closed the connection)
[11:39] * ChrisNBl_ (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[11:39] * ChrisNBl_ (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) Quit ()
[11:42] * daviddcc (~dcasier@LCaen-656-1-144-187.w217-128.abo.wanadoo.fr) has joined #ceph
[11:42] * analbeard (~shw@support.memset.com) Quit (Read error: No route to host)
[11:44] * nardial (~ls@dslb-178-009-182-197.178.009.pools.vodafone-ip.de) has joined #ceph
[11:49] * analbeard (~shw@support.memset.com) has joined #ceph
[11:50] * ZyTer (~ZyTer@ghostbusters.apinnet.fr) Quit (Remote host closed the connection)
[11:50] * ZyTer (~ZyTer@ghostbusters.apinnet.fr) has joined #ceph
[11:55] * linjan (~linjan@213.8.240.146) Quit (Ping timeout: 480 seconds)
[11:57] * treenerd (~treenerd@85.193.140.98) Quit (Ping timeout: 480 seconds)
[11:57] * dgurtner (~dgurtner@178.197.231.156) has joined #ceph
[11:58] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[12:00] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[12:05] * treenerd (~treenerd@85.193.140.98) has joined #ceph
[12:05] * linjan (~linjan@213.8.240.146) has joined #ceph
[12:06] * thomnico (~thomnico@2a01:e35:8b41:120:7dec:d817:cfd5:d106) has joined #ceph
[12:08] * oro (~oro@91.146.191.230) has joined #ceph
[12:08] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Remote host closed the connection)
[12:08] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[12:12] * sjm (~sjm@49.32.0.218) Quit (Ping timeout: 480 seconds)
[12:12] * kefu is now known as kefu|afk
[12:16] * oro (~oro@91.146.191.230) Quit (Ping timeout: 480 seconds)
[12:24] * sjm (~sjm@49.32.0.218) has joined #ceph
[12:26] * daviddcc (~dcasier@LCaen-656-1-144-187.w217-128.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[12:27] * dugravot6 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[12:28] * Mika_c (~Mk@122.146.93.152) Quit (Quit: Konversation terminated!)
[12:28] * haomaiwang (~haomaiwan@183.206.160.253) has joined #ceph
[12:30] * kefu (~kefu@114.92.125.213) has joined #ceph
[12:30] * kefu|afk (~kefu@114.92.125.213) Quit (Read error: No route to host)
[12:33] * kefu (~kefu@114.92.125.213) Quit (Read error: Connection reset by peer)
[12:33] * kefu (~kefu@114.92.125.213) has joined #ceph
[12:33] * dgbaley27 (~matt@c-67-176-93-83.hsd1.co.comcast.net) Quit (Remote host closed the connection)
[12:34] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[12:35] * thomnico (~thomnico@2a01:e35:8b41:120:7dec:d817:cfd5:d106) Quit (Ping timeout: 480 seconds)
[12:36] * loicd2 (~loic@80.12.35.18) has joined #ceph
[12:37] * shyu (~Shanzhi@119.254.120.66) Quit (Remote host closed the connection)
[12:40] * Debesis (~0x@140.217.38.86.mobile.mezon.lt) has joined #ceph
[12:43] * loicd1 (~loic@193.54.227.109) Quit (Ping timeout: 480 seconds)
[12:44] * kefu (~kefu@114.92.125.213) Quit (Max SendQ exceeded)
[12:45] * kefu (~kefu@114.92.125.213) has joined #ceph
[12:56] * vpol (~vpol@000131a0.user.oftc.net) has joined #ceph
[12:58] * kefu (~kefu@114.92.125.213) Quit (Max SendQ exceeded)
[12:58] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) has joined #ceph
[13:00] * kefu (~kefu@114.92.125.213) has joined #ceph
[13:14] * wushudoin_ (~wushudoin@transit-86-181-132-209.redhat.com) has joined #ceph
[13:20] * oro (~oro@91.146.191.230) has joined #ceph
[13:21] * kefu (~kefu@114.92.125.213) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[13:22] * jbautista- (~wushudoin@38.140.108.2) Quit (Ping timeout: 480 seconds)
[13:23] * wushudoin_ (~wushudoin@transit-86-181-132-209.redhat.com) Quit (Ping timeout: 480 seconds)
[13:23] * wushudoin_ (~wushudoin@38.140.108.2) has joined #ceph
[13:25] * ngoswami_ (~ngoswami@121.244.87.116) has joined #ceph
[13:26] * ngoswami (~ngoswami@121.244.87.116) Quit (Ping timeout: 480 seconds)
[13:33] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has joined #ceph
[13:33] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has left #ceph
[13:39] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[13:46] * marrusl (~mark@nat-pool-rdu-u.redhat.com) has joined #ceph
[13:50] * KevinPerks (~Adium@2606:a000:80ad:1300:316f:d3e:da99:54d1) has joined #ceph
[13:55] * sleinen (~Adium@2001:620:0:82::101) has joined #ceph
[13:56] * thomnico (~thomnico@cro38-2-88-180-16-18.fbx.proxad.net) has joined #ceph
[13:57] * zhaochao (~zhaochao@111.161.77.241) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 38.0.1/20150526223604])
[13:57] * nardial (~ls@dslb-178-009-182-197.178.009.pools.vodafone-ip.de) Quit (Read error: Connection reset by peer)
[14:01] * overclk (~overclk@121.244.87.117) has joined #ceph
[14:02] * tganguly (~tganguly@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:03] * RomeroJnr (~h0m3r@hosd.leaseweb.net) Quit ()
[14:04] * thomnico (~thomnico@cro38-2-88-180-16-18.fbx.proxad.net) Quit (Quit: Ex-Chat)
[14:04] * georgem (~Adium@184.151.190.178) has joined #ceph
[14:05] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[14:06] * overclk (~overclk@121.244.87.117) Quit ()
[14:06] * overclk (~overclk@121.244.87.117) has joined #ceph
[14:06] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has joined #ceph
[14:06] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has left #ceph
[14:07] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit ()
[14:08] * loicd1 (~loic@193.54.227.109) has joined #ceph
[14:08] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[14:10] * smerz (~ircircirc@37.74.194.90) has joined #ceph
[14:12] * naga1 (~oftc-webi@idp01webcache4-z.apj.hpecore.net) has joined #ceph
[14:12] * loicd2 (~loic@80.12.35.18) Quit (Ping timeout: 480 seconds)
[14:15] * naga1 (~oftc-webi@idp01webcache4-z.apj.hpecore.net) Quit ()
[14:18] * naga1 (~oftc-webi@idp01webcache4-z.apj.hpecore.net) has joined #ceph
[14:24] * vbellur (~vijay@107-1-123-195-ip-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[14:25] * naga1 (~oftc-webi@idp01webcache4-z.apj.hpecore.net) Quit (Quit: Page closed)
[14:26] * oro (~oro@91.146.191.230) Quit (Ping timeout: 480 seconds)
[14:28] * georgem (~Adium@184.151.190.178) Quit (Quit: Leaving.)
[14:37] * bilco105 is now known as bilco105_
[14:38] * bilco105_ is now known as bilco105
[14:40] <magicrob1tmonkey> does anyone have an example of a working osd crush location hook script?
[14:44] * magicrob1tmonkey is now known as magicrobotmonkey
[14:46] * nardial (~ls@dslb-178-009-182-197.178.009.pools.vodafone-ip.de) has joined #ceph
[14:49] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[14:51] * kanagaraj (~kanagaraj@nat-pool-bos-t.redhat.com) has joined #ceph
[14:51] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[14:52] * oro (~oro@91.146.191.230) has joined #ceph
[14:53] * squ (~Thunderbi@46.109.36.167) Quit (Quit: squ)
[14:56] * trociny (~mgolub@93.183.239.2) Quit (Quit: ??????????????)
[14:58] * Concubidated (~Adium@129.192.176.66) has joined #ceph
[14:59] * Destreyf_ (~quassel@host-24-49-108-79.beyondbb.com) Quit (Ping timeout: 480 seconds)
[15:03] * vbellur (~vijay@nat-pool-bos-u.redhat.com) has joined #ceph
[15:09] * jrankin (~jrankin@d53-64-170-236.nap.wideopenwest.com) has joined #ceph
[15:10] * dyasny (~dyasny@198.251.58.23) has joined #ceph
[15:13] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit (Remote host closed the connection)
[15:14] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[15:16] * MrBy2 (~jonas@85.115.23.2) Quit (Quit: This computer has gone to sleep)
[15:20] * kefu (~kefu@114.92.125.213) has joined #ceph
[15:23] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit (Quit: Leaving.)
[15:24] * yanzheng (~zhyan@182.139.21.245) Quit (Quit: This computer has gone to sleep)
[15:24] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[15:25] * yanzheng (~zhyan@182.139.21.245) has joined #ceph
[15:30] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[15:35] * JFQ (~ghartz@AStrasbourg-651-1-152-164.w90-40.abo.wanadoo.fr) has joined #ceph
[15:36] * fmardini (~fmardini@213.61.152.126) has joined #ceph
[15:36] * jskinner (~jskinner@host-95-2-129.infobunker.com) has joined #ceph
[15:39] * DV (~veillard@2001:41d0:1:d478::1) Quit (Remote host closed the connection)
[15:41] * ksperis (~ksperis@46.218.42.103) has joined #ceph
[15:41] * ghartz_ (~ghartz@AStrasbourg-651-1-211-89.w109-221.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[15:42] * yanzheng (~zhyan@182.139.21.245) Quit (Quit: This computer has gone to sleep)
[15:44] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[15:44] * yanzheng (~zhyan@182.139.21.245) has joined #ceph
[15:47] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[15:56] * trociny (~mgolub@93.183.239.2) has joined #ceph
[15:57] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[15:58] * amote (~amote@121.244.87.116) Quit (Quit: Leaving)
[15:59] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[16:00] * kanagaraj (~kanagaraj@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[16:04] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[16:05] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) has joined #ceph
[16:05] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[16:06] * linuxkidd (~linuxkidd@209.163.164.50) has joined #ceph
[16:08] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[16:08] * yanzheng (~zhyan@182.139.21.245) Quit (Quit: This computer has gone to sleep)
[16:09] * vpol (~vpol@000131a0.user.oftc.net) Quit (Quit: vpol)
[16:10] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[16:11] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[16:12] * tupper (~tcole@173.38.117.84) has joined #ceph
[16:17] * cok (~chk@2a02:2350:18:1010:558b:ce58:a3a2:f7d) has joined #ceph
[16:17] * yanzheng (~zhyan@182.139.21.245) has joined #ceph
[16:18] * MrBy2 (~jonas@HSI-KBW-46-223-128-56.hsi.kabel-badenwuerttemberg.de) has joined #ceph
[16:20] * MrBy2 (~jonas@HSI-KBW-46-223-128-56.hsi.kabel-badenwuerttemberg.de) Quit (Read error: Connection reset by peer)
[16:21] * MrBy2 (~jonas@HSI-KBW-46-223-128-56.hsi.kabel-badenwuerttemberg.de) has joined #ceph
[16:24] * kanagaraj (~kanagaraj@nat-pool-bos-t.redhat.com) has joined #ceph
[16:27] * MrBy2 (~jonas@HSI-KBW-46-223-128-56.hsi.kabel-badenwuerttemberg.de) Quit (Quit: Leaving)
[16:28] <jeroenvh> Hi, does anyone know more about "osd recovery max active". I was wondering, when I set it to 1, is there anyway to calculate the bandwidth per OSD? We are a bit concerned with our SSD-only ceph cluster that even with max_active=1 the recovery will take lots of bw
[16:31] * dgurtner (~dgurtner@178.197.231.156) Quit (Ping timeout: 480 seconds)
[16:31] * vikhyat (~vumrao@49.248.194.83) has joined #ceph
[16:32] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) Quit (Quit: Leaving)
[16:33] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) has joined #ceph
[16:33] * ChanServ sets mode +o elder
[16:36] * daviddcc (~dcasier@LCaen-656-1-144-187.w217-128.abo.wanadoo.fr) has joined #ceph
[16:36] * dgurtner (~dgurtner@178.197.231.156) has joined #ceph
[16:40] * vpol (~vpol@000131a0.user.oftc.net) has joined #ceph
[16:41] * markl (~mark@knm.org) has joined #ceph
[16:47] * sjm (~sjm@49.32.0.218) has left #ceph
[16:47] * cdelatte (~cdelatte@vlandnat.mystrotv.com) has joined #ceph
[16:52] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[16:56] * shylesh (~shylesh@121.244.87.118) Quit (Remote host closed the connection)
[17:01] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:03] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:03] * zaitcev (~zaitcev@2001:558:6001:10:61d7:f51f:def8:4b0f) has joined #ceph
[17:08] * magicboiz (~magicboiz@2a01:7d00:501:c000::1a0b) has joined #ceph
[17:08] * magicboiz (~magicboiz@2a01:7d00:501:c000::1a0b) Quit (Read error: Connection reset by peer)
[17:09] * magicboiz (~magicboiz@2a01:7d00:501:c000::1a0b) has joined #ceph
[17:09] * vikhyat (~vumrao@49.248.194.83) Quit (Quit: Leaving)
[17:10] * yguang11 (~yguang11@2001:4998:effd:600:5103:fe60:609:5578) has joined #ceph
[17:10] * dopesong (~dopesong@lb1.mailer.data.lt) has joined #ceph
[17:12] * magicboiz (~magicboiz@2a01:7d00:501:c000::1a0b) Quit ()
[17:12] * javcasalc (~javcasalc@2a01:7d00:501:c000::1a0b) has joined #ceph
[17:13] * ircolle (~Adium@2601:285:201:2bf9:e965:3daa:abd9:324c) has joined #ceph
[17:13] * javcasalc (~javcasalc@2a01:7d00:501:c000::1a0b) Quit ()
[17:13] * magicboiz (~magicboiz@2a01:7d00:501:c000::1a0b) has joined #ceph
[17:16] * marrusl (~mark@nat-pool-rdu-u.redhat.com) Quit (Ping timeout: 480 seconds)
[17:16] * nardial (~ls@dslb-178-009-182-197.178.009.pools.vodafone-ip.de) Quit (Quit: Leaving)
[17:18] * wschulze1 (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) has joined #ceph
[17:21] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[17:22] * moore (~moore@64.202.160.88) has joined #ceph
[17:23] * wschulze1 (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) Quit ()
[17:24] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[17:26] * towen (~towen@c-98-230-203-84.hsd1.nm.comcast.net) has joined #ceph
[17:27] * ira (~ira@167.220.23.74) has joined #ceph
[17:28] * treenerd (~treenerd@85.193.140.98) Quit (Quit: Verlassend)
[17:29] * loicd1 (~loic@193.54.227.109) Quit (Quit: Leaving.)
[17:30] * sleinen (~Adium@2001:620:0:82::101) Quit (Ping timeout: 480 seconds)
[17:33] * Administrator_ (~Administr@172.245.26.218) Quit (Ping timeout: 480 seconds)
[17:35] * towen (~towen@c-98-230-203-84.hsd1.nm.comcast.net) Quit (Ping timeout: 480 seconds)
[17:36] * kefu is now known as kefu|afk
[17:38] * marrusl (~mark@nat-pool-rdu-u.redhat.com) has joined #ceph
[17:39] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) Quit (Quit: Leaving.)
[17:39] <magicboiz> hi
[17:39] * jwilkins (~jwilkins@2601:9:4580:f4c:ea2a:eaff:fe08:3f1d) has joined #ceph
[17:40] <magicboiz> I'm planning to setup a ceph cluster with 3 nodes, each node with 6 OSD (SATA 7200), 10Gb network. How could I predict the performance (read, write, IOPS, latency...)
[17:40] <magicboiz> ?
[17:40] <magicboiz> thx :)
[17:40] * kefu|afk is now known as kefu
[17:41] * fmardini (~fmardini@213.61.152.126) Quit (Ping timeout: 480 seconds)
[17:43] <gleam> rados bench, fio
[17:44] * dgurtner (~dgurtner@178.197.231.156) Quit (Ping timeout: 480 seconds)
[17:44] * puffy (~puffy@c-50-131-179-74.hsd1.ca.comcast.net) has joined #ceph
[17:45] * smerz (~ircircirc@37.74.194.90) Quit (Remote host closed the connection)
[17:47] * Wes_d (~wes_d@140.247.242.44) has joined #ceph
[17:47] * kefu is now known as kefu|afk
[17:48] * cok (~chk@2a02:2350:18:1010:558b:ce58:a3a2:f7d) Quit (Quit: Leaving.)
[17:48] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[17:48] * Hemanth (~Hemanth@121.244.87.117) Quit (Ping timeout: 480 seconds)
[17:49] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[17:51] * shaunm (~shaunm@74.215.76.114) Quit (Ping timeout: 480 seconds)
[17:51] * Mika_c (~Mk@59-115-159-173.dynamic.hinet.net) has joined #ceph
[17:54] <magicboiz> gleam: but is there any way (formula, whatever...) to PREDICT a performance (with some degree of error, of course)??
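Predicting this precisely is hard, which is why gleam's answer is to measure it, e.g. with fio's rbd engine against a test image (pool/image names are made up and fio must be built with rbd support). As a very rough ceiling for 18 x 7200 rpm OSDs at the default 3 replicas, journals and caching aside, something like 18 x ~100 IOPS / 3 ≈ 600 sustained random-write IOPS is often quoted:

    fio --name=rbd-4k-randwrite --ioengine=rbd --clientname=admin \
        --pool=rbd --rbdname=bench-img --rw=randwrite --bs=4k \
        --iodepth=32 --runtime=60 --time_based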
[17:56] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Remote host closed the connection)
[17:56] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[17:58] * steve_d (~oftc-webi@sdainard.pcic.uvic.ca) has joined #ceph
[17:59] <steve_d> Hello, I'm wondering if rbd locks are based on the client auth key, or on host id (name/ip/etc), or some other method?
[17:59] * jordanP (~jordan@213.215.2.194) Quit (Quit: Leaving)
[18:04] * marrusl (~mark@nat-pool-rdu-u.redhat.com) Quit (Quit: sync && halt)
[18:04] <steve_d> so if I have two hosts which are both using the same client auth key, is it possible they could both mount the same rbd volume at the same time without explicitly setting the shared lock option?
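For reference, rbd advisory locks can be inspected and manipulated directly from the CLI; a quick sketch (pool, image, lock id, and locker id are all placeholders):

    # show current lockers: the output includes each locker's client instance id and address
    rbd lock list rbd/myimage
    # take an advisory lock with an arbitrary lock id
    rbd lock add rbd/myimage mylock
    # release it; the locker id (client.XXXX) comes from 'rbd lock list'
    rbd lock remove rbd/myimage mylock client.4567

Note these locks are advisory: nothing in the cluster stops two clients, even ones sharing the same auth key, from mapping the same image unless they actually check or take the lock themselves.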
[18:04] * marrusl (~mark@nat-pool-rdu-u.redhat.com) has joined #ceph
[18:06] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[18:07] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Read error: Connection reset by peer)
[18:07] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[18:09] * cmdrk (~lincoln@c-71-194-163-11.hsd1.il.comcast.net) Quit (Ping timeout: 480 seconds)
[18:10] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[18:10] * kawa2014 (~kawa@89.184.114.246) Quit (Remote host closed the connection)
[18:12] * cmdrk (~lincoln@c-71-194-163-11.hsd1.il.comcast.net) has joined #ceph
[18:12] * cok (~chk@2a02:2350:18:1010:88f7:bbe6:2bb4:7a66) has joined #ceph
[18:13] * Debesis_ (~0x@5.254.46.84.mobile.mezon.lt) has joined #ceph
[18:13] * kefu|afk (~kefu@114.92.125.213) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[18:14] * MACscr1 (~Adium@2601:247:4102:c3ac:a023:ab4e:88c5:7062) has joined #ceph
[18:15] * nhm (~nhm@184-97-242-33.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[18:16] * MACscr (~Adium@2601:d:c800:de3:3ce6:91bf:607a:52a1) Quit (Ping timeout: 480 seconds)
[18:16] * xarses (~xarses@166.175.58.7) has joined #ceph
[18:16] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) has joined #ceph
[18:18] * pdrakeweb (~pdrakeweb@cpe-65-185-74-239.neo.res.rr.com) has joined #ceph
[18:19] * Debesis (~0x@140.217.38.86.mobile.mezon.lt) Quit (Ping timeout: 480 seconds)
[18:20] * vpol (~vpol@000131a0.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:21] * Mika_c (~Mk@59-115-159-173.dynamic.hinet.net) Quit (Remote host closed the connection)
[18:21] * Mika_c (~Mk@59-115-159-173.dynamic.hinet.net) has joined #ceph
[18:21] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit (Quit: Leaving.)
[18:26] * bjornar_ (~bjornar@ti0099a430-1131.bb.online.no) has joined #ceph
[18:27] * puffy (~puffy@c-50-131-179-74.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[18:28] * cok (~chk@2a02:2350:18:1010:88f7:bbe6:2bb4:7a66) Quit (Quit: Leaving.)
[18:31] * georgem1 (~Adium@fwnat.oicr.on.ca) has joined #ceph
[18:31] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Read error: Connection reset by peer)
[18:32] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[18:32] * bjornar_ (~bjornar@ti0099a430-1131.bb.online.no) Quit (Remote host closed the connection)
[18:35] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[18:36] * scuttlemonkey is now known as scuttle|afk
[18:39] * tganguly (~tganguly@122.172.151.28) has joined #ceph
[18:45] <tuxcrafter> 2015-06-17 18:44:24.865190 mon.0 [INF] pgmap v38779: 320 pgs: 319 active+clean, 1 active+clean+inconsistent; 1417 GB data, 2835 GB used, 3681 GB / 6517 GB avail
[18:45] <tuxcrafter> can i fix that?
[18:45] <tuxcrafter> i don't see ceph working on rebalancing anymore
[18:46] <tuxcrafter> 2015-06-17 18:45:47.057110 mon.0 [INF] pgmap v38788: 320 pgs: 318 active+clean, 1 active+clean+inconsistent, 1 active+clean+scrubbing+deep; 1417 GB data, 2835 GB used, 3681 GB / 6517 GB avail; 6485 B/s wr, 2 op/s
[18:46] <tuxcrafter> oh it is still doing some things
[18:46] * scuttle|afk is now known as scuttlemonkey
[18:47] * logan (~a@63.143.49.103) has joined #ceph
[18:49] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[18:49] * Wes_d (~wes_d@140.247.242.44) Quit (Quit: Wes_d)
[18:51] * Wes_d (~wes_d@65.112.10.214) has joined #ceph
[18:54] <Be-El> tuxcrafter: inconsistent means that the copies of the pg in question are different
[18:55] <Be-El> tuxcrafter: if you are sure that the copy on the acting osd is valid, you can repair it with ceph repair
[18:55] <Be-El> tuxcrafter: if that copy is the broken one, ceph repair may result in data loss
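A minimal sketch of the repair sequence Be-El is describing (the pg id 2.3f below is just a placeholder):

    # identify which pg is flagged inconsistent
    ceph health detail | grep inconsistent
    # optionally deep-scrub it again and check the osd logs to see which copy is bad
    ceph pg deep-scrub 2.3f
    # then tell ceph to repair it; per the warning above, this can copy a bad primary over good replicas
    ceph pg repair 2.3f

Checking the primary OSD's log for the scrub error before repairing is the usual way to decide whether the acting copy is the good one.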
[18:58] * Hemanth (~Hemanth@117.213.183.238) has joined #ceph
[19:04] * haomaiwa_ (~haomaiwan@183.206.168.253) has joined #ceph
[19:05] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[19:07] * haomaiwang (~haomaiwan@183.206.160.253) Quit (Ping timeout: 480 seconds)
[19:13] * mgolub (~Mikolaj@91.225.200.117) has joined #ceph
[19:14] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[19:17] * shylesh (~shylesh@1.22.49.83) has joined #ceph
[19:19] * LeaChim (~LeaChim@host86-132-233-125.range86-132.btcentralplus.com) has joined #ceph
[19:21] * Sysadmin88 (~IceChat77@2.124.164.69) has joined #ceph
[19:22] * marrusl (~mark@nat-pool-rdu-u.redhat.com) Quit (Remote host closed the connection)
[19:24] * tganguly (~tganguly@122.172.151.28) Quit (Quit: Leaving)
[19:25] * mgolub (~Mikolaj@91.225.200.117) Quit (Ping timeout: 480 seconds)
[19:25] * mgolub (~Mikolaj@91.225.200.223) has joined #ceph
[19:32] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has joined #ceph
[19:34] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has left #ceph
[19:34] * Mika_c (~Mk@59-115-159-173.dynamic.hinet.net) Quit (Quit: Konversation terminated!)
[19:41] * marrusl (~mark@nat-pool-rdu-u.redhat.com) has joined #ceph
[19:41] * linjan (~linjan@213.8.240.146) Quit (Ping timeout: 480 seconds)
[19:46] * lovejoy (~lovejoy@213.83.69.6) has joined #ceph
[19:46] * lovejoy (~lovejoy@213.83.69.6) Quit ()
[19:50] * linjan (~linjan@176.195.244.194) has joined #ceph
[19:52] * jks (~jks@178.155.151.121) Quit (Ping timeout: 480 seconds)
[19:52] * Hemanth (~Hemanth@117.213.183.238) Quit (Ping timeout: 480 seconds)
[19:53] * Hemanth (~Hemanth@117.192.232.78) has joined #ceph
[19:53] * moore (~moore@64.202.160.88) Quit (Remote host closed the connection)
[19:57] * davidz1 (~davidz@cpe-23-242-27-128.socal.res.rr.com) Quit (Quit: Leaving.)
[19:59] * dgbaley27 (~matt@c-67-176-93-83.hsd1.co.comcast.net) has joined #ceph
[19:59] * dopesong (~dopesong@lb1.mailer.data.lt) Quit (Quit: Leaving...)
[20:01] * puffy (~puffy@216.207.42.144) has joined #ceph
[20:02] * jrankin (~jrankin@d53-64-170-236.nap.wideopenwest.com) Quit (Quit: Leaving)
[20:04] * dopesong (~dopesong@88-119-94-55.static.zebra.lt) has joined #ceph
[20:05] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[20:06] * xarses_ (~xarses@172.56.12.132) has joined #ceph
[20:09] * davidz (~davidz@cpe-23-242-27-128.socal.res.rr.com) has joined #ceph
[20:09] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[20:12] * xarses (~xarses@166.175.58.7) Quit (Ping timeout: 480 seconds)
[20:13] * moore (~moore@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) has joined #ceph
[20:13] * linjan (~linjan@176.195.244.194) Quit (Ping timeout: 480 seconds)
[20:23] * georgem1 (~Adium@fwnat.oicr.on.ca) Quit (Ping timeout: 480 seconds)
[20:30] * midnightrunner (~midnightr@216.113.160.71) has joined #ceph
[20:30] * tserong (~tserong@203-214-92-220.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[20:32] * linjan (~linjan@213.8.240.146) has joined #ceph
[20:34] * championofcyrodi (~championo@50-205-35-98-static.hfc.comcastbusiness.net) Quit (Remote host closed the connection)
[20:35] * dopesong_ (~dopesong@88-119-94-55.static.zebra.lt) has joined #ceph
[20:35] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) Quit (Read error: Connection reset by peer)
[20:40] * shaunm (~shaunm@172.56.28.184) has joined #ceph
[20:42] * dopesong (~dopesong@88-119-94-55.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[20:43] * dopesong_ (~dopesong@88-119-94-55.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[20:49] * georgem (~Adium@184.151.190.178) has joined #ceph
[20:51] * tserong (~tserong@203-214-92-220.dyn.iinet.net.au) has joined #ceph
[20:51] * championofcyrodi (~championo@50-205-35-98-static.hfc.comcastbusiness.net) has joined #ceph
[20:52] * shohn (~shohn@dslb-178-002-076-138.178.002.pools.vodafone-ip.de) Quit (Quit: Leaving.)
[20:53] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[20:55] * sleinen1 (~Adium@2001:620:0:82::101) has joined #ceph
[20:55] * xarses_ (~xarses@172.56.12.132) Quit (Remote host closed the connection)
[20:58] * oro (~oro@91.146.191.230) Quit (Ping timeout: 480 seconds)
[21:01] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[21:01] * xarses (~xarses@172.56.12.132) has joined #ceph
[21:01] * xarses (~xarses@172.56.12.132) Quit (Remote host closed the connection)
[21:02] * xarses (~xarses@172.56.12.132) has joined #ceph
[21:04] * xarses_ (~xarses@166.175.58.7) has joined #ceph
[21:05] * Wes_d (~wes_d@65.112.10.214) Quit (Quit: Wes_d)
[21:07] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[21:08] * championofcyrodi (~championo@50-205-35-98-static.hfc.comcastbusiness.net) Quit (Quit: Leaving.)
[21:08] * rektide (~rektide@eldergods.com) has joined #ceph
[21:10] * shylesh (~shylesh@1.22.49.83) Quit (Remote host closed the connection)
[21:10] * xarses (~xarses@172.56.12.132) Quit (Ping timeout: 480 seconds)
[21:13] * Wes_d (~wes_d@65.112.10.214) has joined #ceph
[21:17] * wesbell_ (~wesbell@vpngac.ccur.com) has joined #ceph
[21:23] * ngoswami_ (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[21:25] * yanzheng (~zhyan@182.139.21.245) Quit (Quit: This computer has gone to sleep)
[21:25] * moore (~moore@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) Quit (Remote host closed the connection)
[21:27] * mattronix (~quassel@mail.mattronix.nl) Quit (Read error: Connection reset by peer)
[21:28] * mattronix (~quassel@mail.mattronix.nl) has joined #ceph
[21:30] * championofcyrodi (~championo@50-205-35-98-static.hfc.comcastbusiness.net) has joined #ceph
[21:32] * dopesong (~dopesong@78-56-228-178.static.zebra.lt) has joined #ceph
[21:34] * dopesong_ (~dopesong@78-56-228-178.static.zebra.lt) has joined #ceph
[21:34] * dopesong (~dopesong@78-56-228-178.static.zebra.lt) Quit (Read error: Connection reset by peer)
[21:36] * daviddcc (~dcasier@LCaen-656-1-144-187.w217-128.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[21:39] * georgem (~Adium@184.151.190.178) Quit (Quit: Leaving.)
[21:44] * shaunm (~shaunm@172.56.28.184) Quit (Ping timeout: 480 seconds)
[21:44] * Wes_d (~wes_d@65.112.10.214) Quit (Quit: Wes_d)
[21:47] * steve_d (~oftc-webi@sdainard.pcic.uvic.ca) Quit (Quit: Page closed)
[21:50] * yanzheng (~zhyan@182.139.21.245) has joined #ceph
[21:50] * yanzheng (~zhyan@182.139.21.245) Quit ()
[21:51] * as0bu (~as0bu@c-98-230-203-84.hsd1.nm.comcast.net) has joined #ceph
[21:52] * nhm (~nhm@172.56.8.122) has joined #ceph
[21:52] * ChanServ sets mode +o nhm
[21:53] * Wes_d (~wes_d@65.112.10.214) has joined #ceph
[21:53] * fdmanana__ (~fdmanana@bl13-129-165.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[21:56] * moore (~moore@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) has joined #ceph
[21:57] * Wes_d (~wes_d@65.112.10.214) Quit ()
[21:59] * mattronix (~quassel@mail.mattronix.nl) Quit (Remote host closed the connection)
[21:59] * mattronix (~quassel@mail.mattronix.nl) has joined #ceph
[22:02] * dupont-y (~dupont-y@2a01:e34:ec92:8070:e412:c5f7:a8d4:fc09) has joined #ceph
[22:02] * calvinx (~calvin@101.100.172.246) Quit (Quit: calvinx)
[22:02] * dyasny (~dyasny@198.251.58.23) Quit (Remote host closed the connection)
[22:02] * dopesong (~dopesong@lb1.mailer.data.lt) has joined #ceph
[22:05] * mattronix (~quassel@mail.mattronix.nl) Quit (Remote host closed the connection)
[22:07] * linjan (~linjan@213.8.240.146) Quit (Ping timeout: 480 seconds)
[22:07] * mattronix (~quassel@mail.mattronix.nl) has joined #ceph
[22:08] * dopesong_ (~dopesong@78-56-228-178.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[22:11] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[22:17] * Flynn (~stefan@ip-81-30-69-189.fiber.nl) has joined #ceph
[22:17] * mattronix_ (~quassel@mail.mattronix.nl) has joined #ceph
[22:17] * vbellur (~vijay@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[22:18] <Flynn> Good day all. I use ceph-deploy. Do I have to add the monitor config by hand to ceph.conf and then distribute to all clients/nodes? Why doesn't ceph-deploy update ceph.conf itself? Or am I wrong here?
[22:18] * nsoffer (~nsoffer@bzq-79-180-80-9.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[22:21] * mattronix_ (~quassel@mail.mattronix.nl) Quit (Remote host closed the connection)
[22:23] * mattronix (~quassel@mail.mattronix.nl) Quit (Ping timeout: 480 seconds)
[22:24] * mattronix (~quassel@mail.mattronix.nl) has joined #ceph
[22:24] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[22:28] * jschmid (~jxs@ip9234f579.dynamic.kabel-deutschland.de) has joined #ceph
[22:30] * mattronix_ (~quassel@mail.mattronix.nl) has joined #ceph
[22:31] * dyasny (~dyasny@198.251.58.23) has joined #ceph
[22:31] <ira> loicd: Ping?
[22:31] <ira> pardon.
[22:32] * dyasny (~dyasny@198.251.58.23) Quit ()
[22:32] * dyasny (~dyasny@198.251.58.23) has joined #ceph
[22:33] * mattronix (~quassel@mail.mattronix.nl) Quit (Ping timeout: 480 seconds)
[22:33] <monsted> Flynn: ceph-deploy can deploy it all
[22:33] * wer_ (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Ping timeout: 480 seconds)
[22:34] <Flynn> Monsted: when I follow the procedure and add extra monitor hosts, ceph-deploy does not add them to ceph.conf. It does, however, create them correctly.
[22:35] <Flynn> same holds for OSDs. I would have expected ceph-deploy to add them to ceph.conf as well.
[22:36] <alfredodeza> Flynn: that is because, for a few releases now, ceph doesn't need/require every OSD/MON in ceph.conf
[22:36] * wer_ (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[22:36] <alfredodeza> there is a way to 'push' a ceph.conf with ceph-deploy
[22:37] <alfredodeza> which is with `ceph-deploy admin push` I think
[22:37] * mattronix_ (~quassel@mail.mattronix.nl) Quit (Remote host closed the connection)
[22:39] * mgolub (~Mikolaj@91.225.200.223) Quit (Quit: away)
[22:40] * mattronix (~quassel@mail.mattronix.nl) has joined #ceph
[22:44] * kanagaraj (~kanagaraj@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[22:44] * vbellur (~vijay@107-1-123-195-ip-static.hfc.comcastbusiness.net) has joined #ceph
[22:44] <Flynn> OK, thanks. I'll leave them out then. Thanks a lot for your answer.
[22:44] * dopesong_ (~dopesong@78-56-228-178.static.zebra.lt) has joined #ceph
[22:46] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[22:46] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[22:47] * Alssi_ (~Alssi@114.111.60.56) has joined #ceph
[22:51] * shaunm (~shaunm@cpe-65-185-127-82.cinci.res.rr.com) has joined #ceph
[22:51] * dopesong (~dopesong@lb1.mailer.data.lt) Quit (Ping timeout: 480 seconds)
[22:52] * nils_ (~nils@doomstreet.collins.kg) Quit (Ping timeout: 480 seconds)
[22:54] * Hemanth (~Hemanth@117.192.232.78) Quit (Ping timeout: 480 seconds)
[22:54] * dupont-y (~dupont-y@2a01:e34:ec92:8070:e412:c5f7:a8d4:fc09) Quit (Quit: Ex-Chat)
[22:54] * jskinner (~jskinner@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[23:04] * cdelatte (~cdelatte@vlandnat.mystrotv.com) Quit (Ping timeout: 480 seconds)
[23:07] * nhm (~nhm@172.56.8.122) Quit (Ping timeout: 480 seconds)
[23:08] * nils_ (~nils@doomstreet.collins.kg) has joined #ceph
[23:08] * rlrevell (~leer@vbo1.inmotionhosting.com) Quit (Quit: Leaving.)
[23:08] <evilrob00> yeah "ceph-deploy admin <node> [<node> .. <node>]"
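A short sketch of the two ceph-deploy commands involved here (hostnames are placeholders):

    # push the local ceph.conf to the listed hosts; --overwrite-conf replaces an existing copy
    ceph-deploy --overwrite-conf config push node1 node2 node3
    # 'admin' pushes ceph.conf plus the client.admin keyring so the hosts can run ceph commands
    ceph-deploy admin node1 node2 node3

Assuming this matches the ceph-deploy version in use; older releases may differ slightly.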
[23:09] * linuxkidd (~linuxkidd@209.163.164.50) Quit (Quit: Leaving)
[23:11] * Vacuum__ (~Vacuum@i59F796E7.versanet.de) has joined #ceph
[23:12] * wschulze1 (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) has joined #ceph
[23:15] * tupper (~tcole@173.38.117.84) Quit (Ping timeout: 480 seconds)
[23:15] * marrusl (~mark@nat-pool-rdu-u.redhat.com) Quit (Quit: sync && halt)
[23:16] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[23:16] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit ()
[23:16] * wer_ (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Ping timeout: 480 seconds)
[23:17] * Vacuum_ (~Vacuum@i59F79018.versanet.de) Quit (Ping timeout: 480 seconds)
[23:18] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[23:21] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[23:22] * shaunm (~shaunm@cpe-65-185-127-82.cinci.res.rr.com) Quit (Ping timeout: 480 seconds)
[23:24] * sleinen1 (~Adium@2001:620:0:82::101) Quit (Ping timeout: 480 seconds)
[23:26] * wer_ (~wer@2600:1003:b849:eebe:49d0:5fd6:1a69:c315) has joined #ceph
[23:30] * diq (~diq@2601:646:200:be8:34f7:8c7c:633b:a1ba) has joined #ceph
[23:31] * ira (~ira@167.220.23.74) Quit (Ping timeout: 480 seconds)
[23:32] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[23:33] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[23:37] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[23:37] * bilco105 is now known as bilco105_
[23:41] * diq (~diq@2601:646:200:be8:34f7:8c7c:633b:a1ba) Quit (Quit: Linkinus - http://linkinus.com)
[23:43] * ira (~ira@208.217.184.210) has joined #ceph
[23:53] * jclm (~jclm@ip-64-134-187-212.public.wayport.net) has joined #ceph
[23:59] * fdmanana__ (~fdmanana@bl13-129-165.dsl.telepac.pt) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.