#ceph IRC Log

IRC Log for 2014-06-30

Timestamps are in GMT/BST.

[0:00] * cronix1 (~cronix@5.199.139.166) has joined #ceph
[0:01] * sarob (~sarob@2601:9:1d00:c7f:3cb3:e0c0:4b0a:8a64) has joined #ceph
[0:04] * dis (~dis@109.110.67.36) has joined #ceph
[0:05] * cronix1 (~cronix@5.199.139.166) Quit (Read error: Operation timed out)
[0:09] * sarob (~sarob@2601:9:1d00:c7f:3cb3:e0c0:4b0a:8a64) Quit (Ping timeout: 480 seconds)
[0:16] * analbeard (~shw@host81-147-14-90.range81-147.btcentralplus.com) Quit (Quit: Leaving.)
[0:24] * rendar (~I@host173-6-dynamic.55-79-r.retail.telecomitalia.it) Quit ()
[0:32] * amaron (~amaron@cable-178-148-239-68.dynamic.sbb.rs) Quit (Ping timeout: 480 seconds)
[0:38] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[0:42] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[0:49] * Zethrok_ (~martin@95.154.26.34) has joined #ceph
[0:51] * Zethrok (~martin@95.154.26.34) Quit (Ping timeout: 480 seconds)
[1:01] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[1:07] * BManojlovic (~steki@cable-94-189-160-74.dynamic.sbb.rs) Quit (Ping timeout: 480 seconds)
[1:09] * newbie|2 (~kvirc@117.151.54.178) Quit (Ping timeout: 480 seconds)
[1:09] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[1:16] * lupu (~lupu@86.107.101.214) has joined #ceph
[1:20] * lupu1 (~lupu@86.107.101.246) has joined #ceph
[1:30] * codice_ (~toodles@97-94-175-73.static.mtpk.ca.charter.com) has joined #ceph
[1:32] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) Quit (Ping timeout: 480 seconds)
[1:43] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) has joined #ceph
[2:01] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[2:06] * fmanana (~fdmanana@bl5-3-231.dsl.telepac.pt) Quit (Quit: Leaving)
[2:07] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[2:08] * zack_dolby (~textual@pdf8519e7.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:11] * newbie|2 (~kvirc@111.174.239.37) has joined #ceph
[2:13] * huangjun (~kvirc@111.174.239.37) has joined #ceph
[2:19] * newbie|2 (~kvirc@111.174.239.37) Quit (Ping timeout: 480 seconds)
[2:24] * lucas1 (~Thunderbi@218.76.25.66) has joined #ceph
[2:28] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) Quit (Remote host closed the connection)
[2:35] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) has joined #ceph
[2:37] * LeaChim (~LeaChim@host86-161-90-122.range86-161.btcentralplus.com) Quit (Read error: Operation timed out)
[2:39] * jtaguinerd (~jtaguiner@103.14.60.184) has joined #ceph
[2:50] * tdasilva_ (~quassel@dhcp-0-26-5a-b5-f8-68.cpe.townisp.com) has joined #ceph
[2:51] * dmsimard_away (~dmsimard@198.72.123.202) has joined #ceph
[2:51] * dmsimard_away is now known as dmsimard
[2:51] * dmsimard (~dmsimard@198.72.123.202) Quit ()
[2:54] * tdasilva (~quassel@dhcp-0-26-5a-b5-f8-68.cpe.townisp.com) Quit (Ping timeout: 480 seconds)
[2:56] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) Quit (Quit: Leaving.)
[2:56] * flaxy (~afx@78.130.171.69) Quit (Quit: halt)
[3:01] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[3:09] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:11] * tdasilva_ (~quassel@dhcp-0-26-5a-b5-f8-68.cpe.townisp.com) Quit (Remote host closed the connection)
[3:12] * sz0 (~sz0@46.197.48.116) Quit ()
[3:17] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) has joined #ceph
[3:20] * ifur (~osm@hornbill.csc.warwick.ac.uk) Quit (Ping timeout: 480 seconds)
[3:24] * ifur (~osm@hornbill.csc.warwick.ac.uk) has joined #ceph
[3:26] * tiger (~textual@58.213.102.114) has joined #ceph
[3:27] * sz0 (~sz0@46.197.48.116) has joined #ceph
[3:34] * DV_ (~veillard@libvirt.org) Quit (Remote host closed the connection)
[3:34] * DV_ (~veillard@veillard.com) has joined #ceph
[3:41] <MACscr> darkfader: did you manually setup the partitions for the journal?
[3:42] <MACscr> also, https://ceph.com/docs/master/rados/operations/add-or-rm-osds/ doesnt mention anything about specifying a pool
[3:43] <MACscr> also, i deleted the pool called rbd as i created my own. is that a problem? lol
[3:51] * zack_dolby (~textual@e0109-49-132-43-197.uqwimax.jp) has joined #ceph
[3:55] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) Quit (Remote host closed the connection)
[3:57] <JCL> MACscr: You have to build your CRUSH hierarchy and rulesets so that you can update your pool parameters to choose to which OSDs your pool data will be distributed
[3:58] <JCL> MACscr: You can delete pool rbd absolutely. It is just here as a helper to start quicker for newbies
[4:00] * markbby (~Adium@168.94.245.3) has joined #ceph
[4:00] <JCL> MACscr: Read this http://ceph.com/docs/master/rados/operations/crush-map/
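
For reference, the workflow behind JCL's pointer is: extract the CRUSH map, decompile it to text, edit the buckets and rules, recompile, inject it back, and then point the pool at the ruleset you want. A minimal sketch, assuming a hypothetical pool named mypool and ruleset id 1:

    # grab the current CRUSH map and decompile it to editable text
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # edit crushmap.txt (buckets, rules), then recompile and inject it
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new

    # point an existing pool at a ruleset (pool name and id are placeholders)
    ceph osd pool set mypool crush_ruleset 1
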
[4:01] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[4:02] * diegows (~diegows@190.190.5.238) Quit (Read error: Operation timed out)
[4:02] * markbby (~Adium@168.94.245.3) Quit (Remote host closed the connection)
[4:02] * tiger (~textual@58.213.102.114) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[4:05] * markbby (~Adium@168.94.245.3) has joined #ceph
[4:06] * shang (~ShangWu@175.41.48.77) has joined #ceph
[4:07] * tiger (~textual@58.213.102.114) has joined #ceph
[4:09] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:41] * bkopilov (~bkopilov@213.57.17.98) Quit (Ping timeout: 480 seconds)
[4:45] * chrisjones (~chrisjone@12.237.137.162) Quit (Quit: chrisjones)
[4:52] * chrisjones (~chrisjone@12.237.137.162) has joined #ceph
[4:58] * zhaochao (~zhaochao@106.38.204.67) has joined #ceph
[5:01] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[5:03] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) Quit (Quit: Thanks for everything! :-) inte+ :-))
[5:06] * Vacum_ (~vovo@88.130.204.185) has joined #ceph
[5:09] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:10] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) has joined #ceph
[5:13] * Vacum (~vovo@i59F7A287.versanet.de) Quit (Ping timeout: 480 seconds)
[5:28] * tiger (~textual@58.213.102.114) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[5:31] * shang (~ShangWu@175.41.48.77) Quit (Ping timeout: 480 seconds)
[5:44] * shang (~ShangWu@175.41.48.77) has joined #ceph
[5:50] * chrisjones (~chrisjone@12.237.137.162) Quit (Quit: chrisjones)
[5:51] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[5:55] * chrisjones (~chrisjone@12.237.137.162) has joined #ceph
[5:57] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[5:58] * DV__ (~veillard@2001:41d0:1:d478::1) has joined #ceph
[6:01] * sarob_ (~sarob@2601:9:1d00:c7f:2188:4bdd:11a1:971b) has joined #ceph
[6:02] * sarob__ (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[6:03] * DV_ (~veillard@veillard.com) Quit (Ping timeout: 480 seconds)
[6:05] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[6:06] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) has joined #ceph
[6:09] * chrisjones (~chrisjone@12.237.137.162) Quit (Quit: chrisjones)
[6:09] * sarob_ (~sarob@2601:9:1d00:c7f:2188:4bdd:11a1:971b) Quit (Ping timeout: 480 seconds)
[6:10] * sarob__ (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[6:12] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[6:18] <MACscr> JCL: damn, that doesnt sound like fun at all
[6:18] <MACscr> and doesnt seem like it should be that complicated
[6:19] * Muhlemmer (~kvirc@cable-90-50.zeelandnet.nl) has joined #ceph
[6:19] <MACscr> no idea why they dont just allow you to specify what pool an osd belongs to. Seems like it could be so much simpler
[6:21] <MACscr> also, seems a bit odd, but none of my ceph.conf files have any type of mapping listed
[6:22] <JCL> MACscr: Cause a default map is built when you create your cluster with 1 ruleset in firefly and 3 rulesets in dumpling if I remember correctly.
[6:23] <JCL> Read the doc so that you can extract the crushmap and edit it
[6:23] <JCL> The URL I gave you
[6:23] <MACscr> JCL: is there an easier way? aka, deleting the cluster and starting over from scratch?
[6:24] <JCL> Why would you do this?
[6:24] <JCL> The default crush and the rules are dead simple
[6:25] <JCL> And strating over will not change the default crush map being built
[6:25] <JCL> s/strating/starting/
[6:25] <kraken> JCL meant to say: And starting over will not change the default crush map being built
[6:25] <MACscr> so the answer is simply no. lol
[6:26] <JCL> Learn and practice, young padawan.
[6:26] <JCL> :-)
[6:27] <MACscr> damn, this is going to take a long time
[6:27] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[6:28] <JCL> Not that long. Don't worry. But sure it will be a lot of reading ;-P
[6:30] <JCL> Patience and time are better than force and rage an old proverb says
[6:30] <MACscr> im not that style of learner. I learn by step by step howto's versus man pages
[6:31] <JCL> So do I. And web pages contain everything step by step
[6:31] <MACscr> no, they contain every step, even ones not needed
[6:31] <MACscr> right now its information overload
[6:32] <MACscr> and honestly more complicated than it should be for such basic settings
[6:32] <JCL> Because Ceph is complex and very rich
[6:32] <MACscr> yes, but that doesnt mean its tool set has to be that way unless you have very specific needs
[6:33] <MACscr> simply doing two different pools shouldnt be considered an advanced feature
[6:33] <MACscr> should just be a flag to specify the pool when its created. case closed
[6:34] <JCL> It's not an advanced feature. We teach it in the basic class called Ceph Fundamentals
[6:34] <MACscr> er, when the osd is created
[6:34] <MACscr> or even easier, just a flag to assign it to a pool
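
There is no single "assign this OSD to a pool" flag, but the effect MACscr is after — steering a pool's data onto a particular set of OSDs — can be done from the CLI without hand-editing the map. A rough sketch, assuming a hypothetical root named ssd, a host node1 that already carries the OSDs in question, and a pool named mypool:

    # create a separate CRUSH root and move a host (with its OSDs) under it
    ceph osd crush add-bucket ssd root
    ceph osd crush move node1 root=ssd

    # create a simple rule that selects from that root
    ceph osd crush rule create-simple ssd-rule ssd host
    ceph osd crush rule dump ssd-rule      # note the ruleset number it was given

    # assign that ruleset to the pool (the id 1 below is a placeholder)
    ceph osd pool set mypool crush_ruleset 1
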
[6:38] * mtl1 (~Adium@c-67-174-109-212.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[6:39] <sherry> JCL: what other classes do you teach besides the fundamentals?
[6:40] <JCL> Perf and Tuning
[6:40] <JCL> Ceph & OpenStack
[6:40] <sherry> is there one related to cache tiering?
[6:41] * mtl1 (~Adium@c-67-174-109-212.hsd1.co.comcast.net) has joined #ceph
[6:41] <JCL> That will be when the material is refreshed next for FireFly
[6:41] <JCL> EC and Cache Tiering
[6:41] <sherry> ahh.. that's too far! I'm struggling with that at the moment!
[6:41] <JCL> Primary Affinity
[6:42] <sherry> could u provide me the link related to your courses?
[6:42] <JCL> So you'll be a guru before then ;-)
[6:42] <JCL> www.inktank.com
[6:43] <sherry> don't think so! since I reported a bug and I haven't received any confirmation or solution related to that!
[6:44] <sherry> thanks for the link btw
[6:44] <JCL> Sherry: Your tracker entry if I remember is related to CephFS and Cache Tiering. Correct?
[6:44] <sherry> yeah...
[6:46] <sherry> not sure if I'm doing something really wrong or the tiering agent doesn't flush the objects to the cold storage when writes come from the CephFS
[6:46] <kraken> ???_???
[6:47] <MACscr> hmm, so i installed ceph-deploy on a system that isnt part of the ceph cluster. Seems like it can only be used for basic initial launch, but then i need to pretty much do all the customizations from an actual ceph node. is that correct?
[6:49] <JCL> The ceph-deploy node can be used to customize the cluster if the admin keyring is copied onto the ceph-deploy node
[6:50] <MACscr> hmm, i did i thought
[6:51] <MACscr> ceph.client.admin.keyring?
[6:51] <JCL> If you can issue a ceph -s command from the ceph-deploy node then there is a good chance you have the admin keyring
[6:51] <JCL> Yup
[6:51] <MACscr> i cant
[6:51] <MACscr> 2014-06-29 23:50:32.465696 7fdcfe520700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
[6:51] <MACscr> 2014-06-29 23:50:32.465740 7fdcfe520700 0 librados: client.admin initialization error (2) No such file or directory
[6:52] <sherry> MACscr: change the permissions of /etc/ceph/ceph.client.admin.keyring to 644!
[6:52] <MACscr> it is
[6:53] <MACscr> http://pastie.org/pastes/9339195/text?key=cenwpdjyue53vayklbrvpg
[6:53] <sherry> not that
[6:54] <sherry> the one which is in /etc/ceph!
[6:54] <MACscr> ah, my bad
[6:54] <MACscr> so im assuming i need to move it there since it doesnt exist there?
[6:55] <sherry> no, u don't need to do it manually
[6:56] <sherry> maybe try "ceph-deploy admin host-name"
[6:57] <v2> MACscr: I think you need to run "gatherkeys" from /etc/ceph directory?
[6:57] <sherry> v2 is right
[6:57] <MACscr> ah, that seems rather odd. If it should be there, why not put it there by default?
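
Putting v2's and sherry's suggestions together: gather the keys from a monitor, push ceph.conf plus the admin keyring to the admin host, then test. A minimal sketch, run from the ceph-deploy working directory, with hostnames as placeholders:

    # pull the cluster keyrings from a monitor into the working directory
    ceph-deploy gatherkeys mon1

    # copy ceph.conf and ceph.client.admin.keyring into /etc/ceph on the admin node
    ceph-deploy admin admin-node

    # verify the admin node can now reach the cluster
    ceph -s
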
[6:57] * tiger (~textual@58.213.102.114) has joined #ceph
[6:59] * vbellur (~vijay@122.178.240.55) Quit (Read error: Operation timed out)
[7:01] * rdas (~rdas@121.244.87.115) has joined #ceph
[7:01] <MACscr> also, why is it that my ceph.conf file has very little info set up in it? seems like it should have assignments other than just a few monitor assignments like so: http://pastie.org/9339207
[7:02] <MACscr> and i apologize if i seem a bit too critical, but UX design/development is a big part of my job
[7:02] <MACscr> and typically developers are terrible at it since its not really their job
[7:03] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[7:03] <MACscr> though they obviously need someone doing that as part of their team
[7:04] <v2> MACscr: That's the initial configuration dumped by ceph-deploy. I guess as you add OSDs or MDSs the configuration file needs to be updated to reflect those.
[7:04] * saurabh (~saurabh@121.244.87.117) has joined #ceph
[7:04] <MACscr> v2: so as they are added, it has to be manually updated after running the commands?
[7:04] <sherry> MACscr: u also need to add public_network/cluster_network into ur ceph.conf
[7:05] <MACscr> so many hoops
[7:05] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[7:05] <v2> MACscr: I'm not sure if ceph-deploy does that for you. Maybe someone else can answer that. But, configuration entries for osd/mds are very much required to be there.
[7:06] * Muhlemmer (~kvirc@cable-90-50.zeelandnet.nl) Quit (Ping timeout: 480 seconds)
[7:06] <JCL> As long as you use the default settings, nothing needs to be added in the ceph.conf file.
[7:06] <MACscr> well i meant more after ceph deploy is used, i would think normal ceph commands like creating an osd, etc, would do it
[7:06] <JCL> ceph.conf will only need to be expanded in very specific cases in production environments
[7:07] <MACscr> so all the tutorials that show ceph.conf's are doing things that arent needed?
[7:07] <MACscr> or are they just outdated docs?
[7:07] <JCL> Creating an OSD does not require any additional parameter as long as you use the default install paths
[7:10] <JCL> And don't worry, ceph and its various components have somewhere between 400 and 500 parameters so it may require tuning and adjustments and entries on the node side and the client side sometimes.
[7:10] <JCL> Although client parameters are not that numerous
[7:11] <JCL> And the fact that it does not require a lot of parameters in the ceph.conf file makes it easier for beginners to set up theu first cluster too :-)
[7:12] <JCL> s/theu/their/
[7:12] <kraken> JCL meant to say: And the fact that it does not require a lot of parameters in the ceph.conf file makes it easier for beginners to set up their first cluster too :-)
[7:12] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (Remote host closed the connection)
[7:13] * mjeanson (~mjeanson@bell.multivax.ca) has joined #ceph
[7:14] <MACscr> well the ceph.conf all mapped out made it look easier to make assignments of osd's, etc
[7:15] * lucas1 (~Thunderbi@218.76.25.66) Quit (Quit: lucas1)
[7:16] <MACscr> ok, so trying to wrap my head around the ceph public and cluster networks. So is the public network just the traffic between the client and the ceph storage and the cluster is just communication between each monitor/osd's?
[7:18] <MACscr> i apologize for the 20 questions and i really appreciate the help. Im a single employee IT consultant, so unfortunately that means I have to wear a lot of hats
[7:19] <lurbs> MACscr: Mostly. Monitors exist on the public network, because clients need to reach them.
[7:19] * michalefty (~micha@p20030071CF588100F890BDDEC535DA67.dip0.t-ipconnect.de) has joined #ceph
[7:19] <MACscr> so clients connect to the monitors and the monitors connect to the osd's?
[7:20] <MACscr> note, my monitors are actually on the same nodes as the osd's
[7:21] <v2> MACscr: clients contact the OSDs directly for I/O
[7:23] <MACscr> ha, ok. So this doesnt really help me answer my original question =P
[7:24] <JCL> Public network is used for client2mon, mon2osd and client2osd
[7:24] <JCL> Cluster network is used for osd2osd
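
In ceph.conf terms that split is just two directives under [global]; each daemon binds to whichever local interface sits in the given subnet. A sketch with placeholder subnets:

    # /etc/ceph/ceph.conf (excerpt) -- subnets below are placeholders
    [global]
    public network = 192.168.1.0/24
    cluster network = 192.168.2.0/24
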
[7:26] * iggy (~iggy@theiggy.com) Quit (Quit: leaving)
[7:30] * saurabh (~saurabh@121.244.87.117) Quit (Read error: Operation timed out)
[7:30] <MACscr> ok, so should my "mon_host" ip's be the public network ip's or does it matter?
[7:30] <MACscr> i have 4 different networks (wan, lan, cluster, and storage)
[7:30] <MACscr> so just trying to figure this all out
[7:31] <MACscr> right now i have cluster and storage networks assigned as cluster and public
[7:31] <MACscr> but when i deployed things initially, it used the lan network
[7:34] * rongze (~rongze@114.54.30.94) has joined #ceph
[7:35] * vbellur (~vijay@209.132.188.8) has joined #ceph
[7:36] <MACscr> also, right now im reading http://ceph.com/docs/master/rados/configuration/ceph-conf/ and it definitely seems to imply that all the [mon] and [osd] settings should be configured there. I have at least setup the network settings. It shows how to see the running config for a node, but how are we supposed to push out the config changes to all the nodes and have them apply?
[7:45] <MACscr> hmm, looks like i am just supposed to scp them to each host and then restart ceph?
[7:45] <sherry> ceph-deploy --overwrite-conf config push [hostname1 hostname2 ... hostnameX]
[7:47] <MACscr> ah, thanks
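
Combining sherry's command with the restart step MACscr guessed at above gives roughly the following; the hostnames and init system are assumptions:

    # from the ceph-deploy working directory, push the edited ceph.conf to every node
    ceph-deploy --overwrite-conf config push node1 node2 node3

    # then restart the daemons on each node so they pick up the change
    # (with sysvinit; upstart/systemd setups use their own equivalents)
    sudo service ceph restart
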
[7:49] <MACscr> sherry: seems to me that if i specify a cluster/public network, i have to specify the ip's for each host in the conf file or else it wont know what ip's to use to contact each other
[7:49] <MACscr> so im not sure why JCL said these arent needed
[7:52] * iggy (~iggy@theiggy.com) has joined #ceph
[7:53] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:55] <JCL> No, you specify a subnet for each and the hosts will bind automatically
[7:55] <JCL> 192.168.1.0/24
[7:55] <JCL> But you can operate the cluster without these directives. That's what my lines meant.
[7:57] <sherry> MACscr: do u follow this document? https://ceph.com/docs/master/start/quick-ceph-deploy/ It says: If you have more than one network interface, add the public network setting under the [global] section of your Ceph configuration file. public network = {ip-address}/{netmask}
[7:59] <MACscr> so if i have 3 monitors, am i supposed to just identify them as [mon.a] and [mon.b], etc?
[8:00] * saurabh (~saurabh@121.244.87.117) has joined #ceph
[8:00] <MACscr> its a bit more obvious for osd's
[8:13] * lucas1 (~Thunderbi@222.240.148.130) has joined #ceph
[8:13] * sherry (~sherry@mike-alien.esc.auckland.ac.nz) Quit (Remote host closed the connection)
[8:14] <loicd> my cluster is HEALTH_WARN (mixture of emperor & dumpling), but ceph -s does not show the reason for the warning: http://paste.ubuntu.com/7724939/ Is there a way to figure out why?
[8:14] * Sysadmin88 (~IceChat77@94.4.20.0) Quit (Quit: Pull the pin and count to what?)
[8:15] * sherry (~sherry@mike-alien.esc.auckland.ac.nz) has joined #ceph
[8:17] <JCL> ceph health detail maybe
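
For loicd's question, the detailed health output usually names the check behind a HEALTH_WARN; two commands worth trying (no particular output implied):

    ceph health detail    # lists each condition contributing to the warning
    ceph -w               # follow the cluster log for ongoing warnings
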
[8:20] * hasues1 (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[8:20] <Nats_> anyone who has upgraded from emperor to firefly - is recovery/backfill performance any better?
[8:23] * Cube (~Cube@66-87-130-37.pools.spcsdns.net) has joined #ceph
[8:24] <MACscr> lol, well crap, i guess pushing out network config changes really screws up a cluster
[8:24] * vbellur (~vijay@209.132.188.8) Quit (Read error: Operation timed out)
[8:25] * sherry (~sherry@mike-alien.esc.auckland.ac.nz) Quit (Remote host closed the connection)
[8:25] <MACscr> everything is just faulting when i try to start it
[8:26] * madkiss (~madkiss@p5099fdaa.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[8:27] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has joined #ceph
[8:27] * sherry (~sherry@mike-alien.esc.auckland.ac.nz) has joined #ceph
[8:29] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has left #ceph
[8:36] * vbellur (~vijay@121.244.87.117) has joined #ceph
[8:37] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[8:40] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[8:40] * imriz (~imriz@82.81.163.130) has joined #ceph
[8:43] * iggy (~iggy@theiggy.com) Quit (Quit: leaving)
[8:44] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has joined #ceph
[8:45] * iggy (~iggy@theiggy.com) has joined #ceph
[8:47] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has left #ceph
[8:53] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Quit: Leaving.)
[8:54] * saurabh (~saurabh@121.244.87.117) Quit (Ping timeout: 480 seconds)
[8:55] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has joined #ceph
[8:55] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has left #ceph
[8:57] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[8:58] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Remote host closed the connection)
[9:03] * Cube1 (~Cube@66-87-64-58.pools.spcsdns.net) has joined #ceph
[9:03] * Cube (~Cube@66-87-130-37.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[9:04] * saurabh (~saurabh@121.244.87.117) has joined #ceph
[9:06] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:07] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[9:08] * jordanP (~jordan@185.23.92.11) has joined #ceph
[9:18] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:20] * thb (~me@2a02:2028:ba:ecc0:158e:5f8e:fc39:24ef) has joined #ceph
[9:21] * madkiss (~madkiss@p549FC20E.dip0.t-ipconnect.de) has joined #ceph
[9:23] * rendar (~I@host149-182-dynamic.37-79-r.retail.telecomitalia.it) has joined #ceph
[9:26] * longguang (~chatzilla@123.126.33.253) Quit (Remote host closed the connection)
[9:26] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[9:27] * tiger (~textual@58.213.102.114) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[9:30] * analbeard (~shw@support.memset.com) has joined #ceph
[9:31] * lucas1 (~Thunderbi@222.240.148.130) Quit (Quit: lucas1)
[9:37] * tiger (~textual@58.213.102.114) has joined #ceph
[9:40] * fsimonce (~simon@host13-187-dynamic.26-79-r.retail.telecomitalia.it) has joined #ceph
[9:41] * bitserker (~toni@70.59.79.188.dynamic.jazztel.es) has joined #ceph
[9:45] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[9:46] * amaron (~amaron@cable-178-148-239-68.dynamic.sbb.rs) has joined #ceph
[9:51] * ikrstic (~ikrstic@c82-214-88-26.loc.akton.net) has joined #ceph
[9:52] * cronix1 (~cronix@5.199.139.166) has joined #ceph
[9:52] * ikrstic (~ikrstic@c82-214-88-26.loc.akton.net) Quit ()
[9:54] * ikrstic (~ikrstic@c82-214-88-26.loc.akton.net) has joined #ceph
[9:57] * bitserker (~toni@70.59.79.188.dynamic.jazztel.es) Quit (Ping timeout: 480 seconds)
[9:58] * Zethrok_ (~martin@95.154.26.34) Quit (Ping timeout: 480 seconds)
[10:01] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) Quit (Remote host closed the connection)
[10:04] * cronix1 (~cronix@5.199.139.166) Quit (Ping timeout: 480 seconds)
[10:05] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[10:05] * ChanServ sets mode +v andreask
[10:12] * Guest709 (~root@c39a0907.test.dnsbl.oftc.net) has joined #ceph
[10:13] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[10:17] * Guest709 (~root@c39a0907.test.dnsbl.oftc.net) Quit (Remote host closed the connection)
[10:17] * LeaChim (~LeaChim@host86-161-90-122.range86-161.btcentralplus.com) has joined #ceph
[10:21] * beardo (~sma310@beardo.cc.lehigh.edu) Quit (Ping timeout: 480 seconds)
[10:23] * cabrillo (~cabrillo@usimovil.ifca.es) Quit (Quit: Leaving)
[10:25] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[10:31] * lolo (~lol@c39a0907.test.dnsbl.oftc.net) has joined #ceph
[10:32] * thomnico (~thomnico@2a01:e35:8b41:120:98ea:a568:8f1e:a5b4) has joined #ceph
[10:33] * amaron (~amaron@cable-178-148-239-68.dynamic.sbb.rs) Quit (Ping timeout: 480 seconds)
[10:35] * lolo (~lol@c39a0907.test.dnsbl.oftc.net) Quit (Read error: Connection reset by peer)
[10:47] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[10:48] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[10:48] * ChanServ sets mode +v andreask
[10:53] * LeaChim (~LeaChim@host86-161-90-122.range86-161.btcentralplus.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * ikrstic (~ikrstic@c82-214-88-26.loc.akton.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * fsimonce (~simon@host13-187-dynamic.26-79-r.retail.telecomitalia.it) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * jordanP (~jordan@185.23.92.11) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * iggy (~iggy@theiggy.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * vbellur (~vijay@121.244.87.117) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * rongze (~rongze@114.54.30.94) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * rdas (~rdas@121.244.87.115) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * huangjun (~kvirc@111.174.239.37) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * codice_ (~toodles@97-94-175-73.static.mtpk.ca.charter.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * lupu1 (~lupu@86.107.101.246) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * JCL (~JCL@2601:9:5980:39b:bcf1:8f20:1986:d04d) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * imjustmatthew (~imjustmat@pool-74-110-226-158.rcmdva.fios.verizon.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * lincolnb (~lincoln@c-67-165-142-226.hsd1.il.comcast.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * zigo (quasselcor@ipv6-ftp.gplhost.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * swills (~swills@mouf.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * qhartman (~qhartman@64.207.33.50) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * nolan (~nolan@2001:470:1:41:a800:ff:fe3e:ad08) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * aarcane (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * sjust (~sjust@2607:f298:a:607:f003:5cd3:300e:a2db) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * tws_ (~traviss@rrcs-24-123-86-154.central.biz.rr.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * gregsfortytwo (~Adium@2607:f298:a:607:40c8:3887:435f:7674) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * lpabon (~lpabon@nat-pool-bos-t.redhat.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * Andreas-IPO (~andreas@2a01:2b0:2000:11::cafe) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * houkouonchi-home (~linux@2001:470:c:c69::2) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * kfei (~root@114-27-89-92.dynamic.hinet.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * jksM_ (~jks@3e6b5724.rev.stofanet.dk) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * sverrest (~sverrest@cm-84.208.166.184.getinternet.no) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * mdjp (~mdjp@2001:41d0:52:100::343) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * grepory (uid29799@id-29799.uxbridge.irccloud.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * saaby (~as@mail.saaby.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * joerocklin_ (~joe@cpe-65-185-149-56.woh.res.rr.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * sage (~quassel@cpe-23-242-158-79.socal.res.rr.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * simulx (~simulx@66-194-114-178.static.twtelecom.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * mattch (~mattch@pcw3047.see.ed.ac.uk) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * hflai (~hflai@alumni.cs.nctu.edu.tw) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * MACscr (~Adium@c-50-158-183-38.hsd1.il.comcast.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * v2 (~venky@ov42.x.rootbsd.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * garphy`aw (~garphy@frank.zone84.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * carter (~carter@li98-136.members.linode.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * ponyofdeath (~vladi@cpe-66-27-98-26.san.res.rr.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * philips (~philips@ec2-54-196-103-51.compute-1.amazonaws.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * cce (~cce@50.56.54.167) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * benner (~benner@162.243.49.163) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * shk (sid33582@id-33582.charlton.irccloud.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * devicenull (sid4013@id-4013.ealing.irccloud.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * Gugge-47527 (gugge@kriminel.dk) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * kwmiebach__ (sid16855@id-16855.charlton.irccloud.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * designated (~rroberts@host-177-39-52-24.midco.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * stj (~stj@tully.csail.mit.edu) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * eternaleye (~eternaley@50.245.141.73) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * \ask (~ask@oz.develooper.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * higebu (~higebu@www3347ue.sakura.ne.jp) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * guppy (~quassel@guppy.xxx) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * hughsaunders (~hughsaund@wherenow.org) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * Georgyo (~georgyo@shamm.as) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * oblu (~o@62.109.134.112) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * ccourtaut_ (~ccourtaut@ks362468.kimsufi.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * Jahkeup (Jahkeup@gemini.ca.us.panicbnc.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * bkero (~bkero@216.151.13.66) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * portante (~portante@nat-pool-bos-t.redhat.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * sadbox (~jmcguire@sadbox.org) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * eightyeight (~atoponce@atoponce.user.oftc.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * KindOne (kindone@0001a7db.user.oftc.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * kraken (~kraken@gw.sepia.ceph.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * houkouonchi-work (~linux@12.248.40.138) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * leochill (~leochill@nyc-333.nycbit.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * ivan` (~ivan`@000130ca.user.oftc.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * phantomcircuit (~phantomci@blockchain.ceo) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * aarontc (~aarontc@static-50-126-79-230.hlbo.or.frontiernet.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * leseb_ (~leseb@88-190-214-97.rev.dedibox.fr) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * rturk|afk (~rturk@nat-pool-rdu-t.redhat.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * Daviey (~DavieyOFT@bootie.daviey.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * rendar (~I@host149-182-dynamic.37-79-r.retail.telecomitalia.it) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * saurabh (~saurabh@121.244.87.117) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * Cube1 (~Cube@66-87-64-58.pools.spcsdns.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * imriz (~imriz@82.81.163.130) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * lalatenduM (~lalatendu@121.244.87.117) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * sherry (~sherry@mike-alien.esc.auckland.ac.nz) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * mtl1 (~Adium@c-67-174-109-212.hsd1.co.comcast.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * shang (~ShangWu@175.41.48.77) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * zhaochao (~zhaochao@106.38.204.67) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * markbby (~Adium@168.94.245.3) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * sz0 (~sz0@46.197.48.116) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * lollipop (~s51itxsyc@23.94.38.19) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * beardo_ (~sma310@216-164-125-67.c3-0.drf-ubr1.atw-drf.pa.cable.rcn.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * dmit2k (~Adium@balticom-131-176.balticom.lv) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * djh-work1 (~daniel@141.52.73.152) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * alexbligh1 (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * allig8r (~allig8r@128.135.219.116) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * fretb (~fretb@drip.frederik.pw) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * pmatulis (~peter@ec2-23-23-42-7.compute-1.amazonaws.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * runfromnowhere (~runfromno@pool-108-29-25-203.nycmny.fios.verizon.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * athrift (~nz_monkey@203.86.205.13) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * jlogan (~Thunderbi@72.5.59.176) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * terje (~joey@184-96-155-130.hlrn.qwest.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * boichev (~boichev@213.169.56.130) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * mnash (~chatzilla@66-194-114-178.static.twtelecom.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * scuttle|afk (~scuttle@nat-pool-rdu-t.redhat.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * funnel (~funnel@0001c7d4.user.oftc.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * nwf (~nwf@67.62.51.95) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * Meths (~meths@2.25.191.11) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * magicrobotmonkey (~abassett@ec2-50-18-55-253.us-west-1.compute.amazonaws.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * schmee (~quassel@phobos.isoho.st) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * saturnine (~saturnine@ashvm.saturne.in) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * amospalla (~amospalla@amospalla.es) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * cephalobot (~ceph@ds2390.dreamservers.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * brambles (~xymox@s0.barwen.ch) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * winston-d (~ubuntu@ec2-54-244-213-72.us-west-2.compute.amazonaws.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * singler (~singler@zeta.kirneh.eu) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * tchmnkyz (~jeremy@0001638b.user.oftc.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * gleam (gleam@dolph.debacle.org) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * wmat (wmat@wallace.mixdown.ca) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * wedge (lordsilenc@bigfoot.xh.se) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * Fetch (fetch@gimel.cepheid.org) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * jackhill (~jackhill@bog.hcoop.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * ismell (~ismell@host-64-17-89-79.beyondbb.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * bens (~ben@c-71-231-52-111.hsd1.wa.comcast.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * dignus (~jkooijman@53520F05.cm-6-3a.dynamic.ziggo.nl) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * mondkalbantrieb (~quassel@mondkalbantrieb.de) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * fouxm (~foucault@ks3363630.kimsufi.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * `10 (~10@69.169.91.14) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * darkfader (~floh@88.79.251.60) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * tcatm (~quassel@mneme.draic.info) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * tank100 (~tank@84.200.17.138) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * blynch (~blynch@vm-nat.msi.umn.edu) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * purpleidea (~james@199.180.99.171) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * tnt_ (~tnt@ec2-54-200-98-43.us-west-2.compute.amazonaws.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * kevincox (~kevincox@4.s.kevincox.ca) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * lkoranda (~lkoranda@nat-pool-brq-t.redhat.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * nhm (~nhm@184-97-129-14.mpls.qwest.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * joshd (~joshd@38.122.20.226) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * rturk-away (~rturk@ds2390.dreamservers.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * jnq (~jnq@0001b7cc.user.oftc.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * mongo (~gdahlman@voyage.voipnw.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * Azrael (~azrael@terra.negativeblue.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * s3an2 (~root@korn.s3an.me.uk) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * loicd (~loicd@54.242.96.84.rev.sfr.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * sage___ (~quassel@38.122.20.226) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * [caveman] (~quassel@boxacle.net) Quit (reticulum.oftc.net resistance.oftc.net)
[10:53] * SpamapS (~clint@xencbyrum2.srihosting.com) Quit (reticulum.oftc.net resistance.oftc.net)
[10:56] * bkero (~bkero@216.151.13.66) has joined #ceph
[10:56] * kraken (~kraken@gw.sepia.ceph.com) has joined #ceph
[10:56] * aarontc (~aarontc@static-50-126-79-230.hlbo.or.frontiernet.net) has joined #ceph
[10:56] * houkouonchi-work (~linux@12.248.40.138) has joined #ceph
[10:56] * designated (~rroberts@host-177-39-52-24.midco.net) has joined #ceph
[10:56] * stj (~stj@tully.csail.mit.edu) has joined #ceph
[10:56] * Jahkeup (Jahkeup@gemini.ca.us.panicbnc.net) has joined #ceph
[10:56] * Daviey (~DavieyOFT@bootie.daviey.com) has joined #ceph
[10:56] * eternaleye (~eternaley@50.245.141.73) has joined #ceph
[10:56] * \ask (~ask@oz.develooper.com) has joined #ceph
[10:56] * leochill (~leochill@nyc-333.nycbit.com) has joined #ceph
[10:56] * higebu (~higebu@www3347ue.sakura.ne.jp) has joined #ceph
[10:56] * guppy (~quassel@guppy.xxx) has joined #ceph
[10:56] * hughsaunders (~hughsaund@wherenow.org) has joined #ceph
[10:56] * Georgyo (~georgyo@shamm.as) has joined #ceph
[10:56] * phantomcircuit (~phantomci@blockchain.ceo) has joined #ceph
[10:56] * eightyeight (~atoponce@atoponce.user.oftc.net) has joined #ceph
[10:56] * oblu (~o@62.109.134.112) has joined #ceph
[10:56] * portante (~portante@nat-pool-bos-t.redhat.com) has joined #ceph
[10:56] * KindOne (kindone@0001a7db.user.oftc.net) has joined #ceph
[10:56] * ivan` (~ivan`@000130ca.user.oftc.net) has joined #ceph
[10:56] * ccourtaut_ (~ccourtaut@ks362468.kimsufi.com) has joined #ceph
[10:56] * sadbox (~jmcguire@sadbox.org) has joined #ceph
[10:56] * leseb_ (~leseb@88-190-214-97.rev.dedibox.fr) has joined #ceph
[10:56] * rturk|afk (~rturk@nat-pool-rdu-t.redhat.com) has joined #ceph
[11:03] * rturk|afk (~rturk@nat-pool-rdu-t.redhat.com) Quit (charon.oftc.net resistance.oftc.net)
[11:03] * ccourtaut_ (~ccourtaut@ks362468.kimsufi.com) Quit (charon.oftc.net resistance.oftc.net)
[11:03] * Georgyo (~georgyo@shamm.as) Quit (charon.oftc.net resistance.oftc.net)
[11:03] * hughsaunders (~hughsaund@wherenow.org) Quit (charon.oftc.net resistance.oftc.net)
[11:03] * guppy (~quassel@guppy.xxx) Quit (charon.oftc.net resistance.oftc.net)
[11:03] * higebu (~higebu@www3347ue.sakura.ne.jp) Quit (charon.oftc.net resistance.oftc.net)
[11:03] * eternaleye (~eternaley@50.245.141.73) Quit (charon.oftc.net resistance.oftc.net)
[11:03] * stj (~stj@tully.csail.mit.edu) Quit (charon.oftc.net resistance.oftc.net)
[11:03] * designated (~rroberts@host-177-39-52-24.midco.net) Quit (charon.oftc.net resistance.oftc.net)
[11:03] * houkouonchi-work (~linux@12.248.40.138) Quit (charon.oftc.net resistance.oftc.net)
[11:03] * aarontc (~aarontc@static-50-126-79-230.hlbo.or.frontiernet.net) Quit (charon.oftc.net resistance.oftc.net)
[11:03] * kraken (~kraken@gw.sepia.ceph.com) Quit (charon.oftc.net resistance.oftc.net)
[11:03] * sadbox (~jmcguire@sadbox.org) Quit (charon.oftc.net resistance.oftc.net)
[11:03] * \ask (~ask@oz.develooper.com) Quit (charon.oftc.net resistance.oftc.net)
[11:03] * portante (~portante@nat-pool-bos-t.redhat.com) Quit (charon.oftc.net resistance.oftc.net)
[11:03] * oblu (~o@62.109.134.112) Quit (charon.oftc.net resistance.oftc.net)
[11:03] * bkero (~bkero@216.151.13.66) Quit (charon.oftc.net resistance.oftc.net)
[11:03] * leseb_ (~leseb@88-190-214-97.rev.dedibox.fr) Quit (charon.oftc.net resistance.oftc.net)
[11:03] * phantomcircuit (~phantomci@blockchain.ceo) Quit (charon.oftc.net resistance.oftc.net)
[11:03] * eightyeight (~atoponce@atoponce.user.oftc.net) Quit (charon.oftc.net resistance.oftc.net)
[11:03] * Jahkeup (Jahkeup@gemini.ca.us.panicbnc.net) Quit (charon.oftc.net resistance.oftc.net)
[11:03] * KindOne (kindone@0001a7db.user.oftc.net) Quit (charon.oftc.net resistance.oftc.net)
[11:03] * leochill (~leochill@nyc-333.nycbit.com) Quit (charon.oftc.net resistance.oftc.net)
[11:03] * ivan` (~ivan`@000130ca.user.oftc.net) Quit (charon.oftc.net resistance.oftc.net)
[11:03] * Daviey (~DavieyOFT@bootie.daviey.com) Quit (charon.oftc.net resistance.oftc.net)
[11:04] * bkero (~bkero@216.151.13.66) has joined #ceph
[11:04] * kraken (~kraken@gw.sepia.ceph.com) has joined #ceph
[11:04] * aarontc (~aarontc@static-50-126-79-230.hlbo.or.frontiernet.net) has joined #ceph
[11:04] * houkouonchi-work (~linux@12.248.40.138) has joined #ceph
[11:04] * designated (~rroberts@host-177-39-52-24.midco.net) has joined #ceph
[11:04] * stj (~stj@tully.csail.mit.edu) has joined #ceph
[11:04] * Jahkeup (Jahkeup@gemini.ca.us.panicbnc.net) has joined #ceph
[11:04] * Daviey (~DavieyOFT@bootie.daviey.com) has joined #ceph
[11:04] * eternaleye (~eternaley@50.245.141.73) has joined #ceph
[11:04] * \ask (~ask@oz.develooper.com) has joined #ceph
[11:04] * leochill (~leochill@nyc-333.nycbit.com) has joined #ceph
[11:04] * higebu (~higebu@www3347ue.sakura.ne.jp) has joined #ceph
[11:04] * guppy (~quassel@guppy.xxx) has joined #ceph
[11:04] * hughsaunders (~hughsaund@wherenow.org) has joined #ceph
[11:04] * Georgyo (~georgyo@shamm.as) has joined #ceph
[11:04] * phantomcircuit (~phantomci@blockchain.ceo) has joined #ceph
[11:04] * eightyeight (~atoponce@atoponce.user.oftc.net) has joined #ceph
[11:04] * oblu (~o@62.109.134.112) has joined #ceph
[11:04] * portante (~portante@nat-pool-bos-t.redhat.com) has joined #ceph
[11:04] * KindOne (kindone@0001a7db.user.oftc.net) has joined #ceph
[11:04] * ivan` (~ivan`@000130ca.user.oftc.net) has joined #ceph
[11:04] * ccourtaut_ (~ccourtaut@ks362468.kimsufi.com) has joined #ceph
[11:04] * sadbox (~jmcguire@sadbox.org) has joined #ceph
[11:04] * leseb_ (~leseb@88-190-214-97.rev.dedibox.fr) has joined #ceph
[11:04] * rturk|afk (~rturk@nat-pool-rdu-t.redhat.com) has joined #ceph
[11:06] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[11:06] * brambles (~xymox@s0.barwen.ch) has joined #ceph
[11:06] * rendar (~I@host149-182-dynamic.37-79-r.retail.telecomitalia.it) has joined #ceph
[11:06] * saurabh (~saurabh@121.244.87.117) has joined #ceph
[11:06] * imriz (~imriz@82.81.163.130) has joined #ceph
[11:06] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[11:06] * sherry (~sherry@mike-alien.esc.auckland.ac.nz) has joined #ceph
[11:06] * mtl1 (~Adium@c-67-174-109-212.hsd1.co.comcast.net) has joined #ceph
[11:06] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[11:06] * shang (~ShangWu@175.41.48.77) has joined #ceph
[11:06] * zhaochao (~zhaochao@106.38.204.67) has joined #ceph
[11:06] * markbby (~Adium@168.94.245.3) has joined #ceph
[11:06] * sz0 (~sz0@46.197.48.116) has joined #ceph
[11:06] * lollipop (~s51itxsyc@23.94.38.19) has joined #ceph
[11:06] * wmat (wmat@wallace.mixdown.ca) has joined #ceph
[11:06] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[11:06] * beardo_ (~sma310@216-164-125-67.c3-0.drf-ubr1.atw-drf.pa.cable.rcn.com) has joined #ceph
[11:06] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[11:06] * dmit2k (~Adium@balticom-131-176.balticom.lv) has joined #ceph
[11:06] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[11:06] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) has joined #ceph
[11:06] * djh-work1 (~daniel@141.52.73.152) has joined #ceph
[11:06] * alexbligh1 (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) has joined #ceph
[11:06] * allig8r (~allig8r@128.135.219.116) has joined #ceph
[11:06] * fretb (~fretb@drip.frederik.pw) has joined #ceph
[11:06] * pmatulis (~peter@ec2-23-23-42-7.compute-1.amazonaws.com) has joined #ceph
[11:06] * runfromnowhere (~runfromno@pool-108-29-25-203.nycmny.fios.verizon.net) has joined #ceph
[11:06] * athrift (~nz_monkey@203.86.205.13) has joined #ceph
[11:06] * jlogan (~Thunderbi@72.5.59.176) has joined #ceph
[11:06] * terje (~joey@184-96-155-130.hlrn.qwest.net) has joined #ceph
[11:06] * boichev (~boichev@213.169.56.130) has joined #ceph
[11:06] * mnash (~chatzilla@66-194-114-178.static.twtelecom.net) has joined #ceph
[11:06] * wedge (lordsilenc@bigfoot.xh.se) has joined #ceph
[11:06] * scuttle|afk (~scuttle@nat-pool-rdu-t.redhat.com) has joined #ceph
[11:06] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[11:06] * nwf (~nwf@67.62.51.95) has joined #ceph
[11:06] * Meths (~meths@2.25.191.11) has joined #ceph
[11:06] * magicrobotmonkey (~abassett@ec2-50-18-55-253.us-west-1.compute.amazonaws.com) has joined #ceph
[11:06] * schmee (~quassel@phobos.isoho.st) has joined #ceph
[11:06] * saturnine (~saturnine@ashvm.saturne.in) has joined #ceph
[11:06] * amospalla (~amospalla@amospalla.es) has joined #ceph
[11:06] * cephalobot (~ceph@ds2390.dreamservers.com) has joined #ceph
[11:06] * winston-d (~ubuntu@ec2-54-244-213-72.us-west-2.compute.amazonaws.com) has joined #ceph
[11:06] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) has joined #ceph
[11:06] * singler (~singler@zeta.kirneh.eu) has joined #ceph
[11:06] * tchmnkyz (~jeremy@0001638b.user.oftc.net) has joined #ceph
[11:06] * gleam (gleam@dolph.debacle.org) has joined #ceph
[11:06] * lkoranda (~lkoranda@nat-pool-brq-t.redhat.com) has joined #ceph
[11:06] * SpamapS (~clint@xencbyrum2.srihosting.com) has joined #ceph
[11:06] * darkfader (~floh@88.79.251.60) has joined #ceph
[11:06] * [caveman] (~quassel@boxacle.net) has joined #ceph
[11:06] * joshd (~joshd@38.122.20.226) has joined #ceph
[11:06] * kevincox (~kevincox@4.s.kevincox.ca) has joined #ceph
[11:06] * jnq (~jnq@0001b7cc.user.oftc.net) has joined #ceph
[11:06] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[11:06] * sage___ (~quassel@38.122.20.226) has joined #ceph
[11:06] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[11:06] * `10 (~10@69.169.91.14) has joined #ceph
[11:06] * fouxm (~foucault@ks3363630.kimsufi.com) has joined #ceph
[11:06] * tcatm (~quassel@mneme.draic.info) has joined #ceph
[11:06] * loicd (~loicd@54.242.96.84.rev.sfr.net) has joined #ceph
[11:06] * mondkalbantrieb (~quassel@mondkalbantrieb.de) has joined #ceph
[11:06] * dignus (~jkooijman@53520F05.cm-6-3a.dynamic.ziggo.nl) has joined #ceph
[11:06] * nhm (~nhm@184-97-129-14.mpls.qwest.net) has joined #ceph
[11:06] * bens (~ben@c-71-231-52-111.hsd1.wa.comcast.net) has joined #ceph
[11:06] * mongo (~gdahlman@voyage.voipnw.net) has joined #ceph
[11:06] * tnt_ (~tnt@ec2-54-200-98-43.us-west-2.compute.amazonaws.com) has joined #ceph
[11:06] * ismell (~ismell@host-64-17-89-79.beyondbb.com) has joined #ceph
[11:06] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) has joined #ceph
[11:06] * jackhill (~jackhill@bog.hcoop.net) has joined #ceph
[11:06] * rturk-away (~rturk@ds2390.dreamservers.com) has joined #ceph
[11:06] * purpleidea (~james@199.180.99.171) has joined #ceph
[11:06] * s3an2 (~root@korn.s3an.me.uk) has joined #ceph
[11:06] * tank100 (~tank@84.200.17.138) has joined #ceph
[11:06] * Fetch (fetch@gimel.cepheid.org) has joined #ceph
[11:06] * blynch (~blynch@vm-nat.msi.umn.edu) has joined #ceph
[11:06] * Azrael (~azrael@terra.negativeblue.com) has joined #ceph
[11:06] * brambles (~xymox@s0.barwen.ch) Quit (Max SendQ exceeded)
[11:07] * brambles (~xymox@s0.barwen.ch) has joined #ceph
[11:07] * funnel (~funnel@23.226.237.192) has joined #ceph
[11:07] * LeaChim (~LeaChim@host86-161-90-122.range86-161.btcentralplus.com) has joined #ceph
[11:07] * ikrstic (~ikrstic@c82-214-88-26.loc.akton.net) has joined #ceph
[11:07] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[11:07] * fsimonce (~simon@host13-187-dynamic.26-79-r.retail.telecomitalia.it) has joined #ceph
[11:07] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[11:07] * jordanP (~jordan@185.23.92.11) has joined #ceph
[11:07] * iggy (~iggy@theiggy.com) has joined #ceph
[11:07] * rongze (~rongze@114.54.30.94) has joined #ceph
[11:07] * mjeanson (~mjeanson@00012705.user.oftc.net) has joined #ceph
[11:07] * rdas (~rdas@121.244.87.115) has joined #ceph
[11:07] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) has joined #ceph
[11:07] * huangjun (~kvirc@111.174.239.37) has joined #ceph
[11:07] * codice_ (~toodles@97-94-175-73.static.mtpk.ca.charter.com) has joined #ceph
[11:07] * lupu1 (~lupu@86.107.101.246) has joined #ceph
[11:07] * JCL (~JCL@2601:9:5980:39b:bcf1:8f20:1986:d04d) has joined #ceph
[11:07] * imjustmatthew (~imjustmat@pool-74-110-226-158.rcmdva.fios.verizon.net) has joined #ceph
[11:07] * lincolnb (~lincoln@c-67-165-142-226.hsd1.il.comcast.net) has joined #ceph
[11:07] * zigo (quasselcor@ipv6-ftp.gplhost.com) has joined #ceph
[11:07] * swills (~swills@mouf.net) has joined #ceph
[11:07] * qhartman (~qhartman@64.207.33.50) has joined #ceph
[11:07] * nolan (~nolan@2001:470:1:41:a800:ff:fe3e:ad08) has joined #ceph
[11:07] * aarcane (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[11:07] * sjust (~sjust@2607:f298:a:607:f003:5cd3:300e:a2db) has joined #ceph
[11:07] * tws_ (~traviss@rrcs-24-123-86-154.central.biz.rr.com) has joined #ceph
[11:07] * gregsfortytwo (~Adium@2607:f298:a:607:40c8:3887:435f:7674) has joined #ceph
[11:07] * lpabon (~lpabon@nat-pool-bos-t.redhat.com) has joined #ceph
[11:07] * Andreas-IPO (~andreas@2a01:2b0:2000:11::cafe) has joined #ceph
[11:07] * houkouonchi-home (~linux@2001:470:c:c69::2) has joined #ceph
[11:07] * kfei (~root@114-27-89-92.dynamic.hinet.net) has joined #ceph
[11:07] * jksM_ (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[11:07] * sverrest (~sverrest@cm-84.208.166.184.getinternet.no) has joined #ceph
[11:07] * mdjp (~mdjp@2001:41d0:52:100::343) has joined #ceph
[11:07] * grepory (uid29799@id-29799.uxbridge.irccloud.com) has joined #ceph
[11:07] * saaby (~as@mail.saaby.com) has joined #ceph
[11:07] * joerocklin_ (~joe@cpe-65-185-149-56.woh.res.rr.com) has joined #ceph
[11:07] * sage (~quassel@cpe-23-242-158-79.socal.res.rr.com) has joined #ceph
[11:07] * simulx (~simulx@66-194-114-178.static.twtelecom.net) has joined #ceph
[11:07] * mattch (~mattch@pcw3047.see.ed.ac.uk) has joined #ceph
[11:07] * hflai (~hflai@alumni.cs.nctu.edu.tw) has joined #ceph
[11:07] * MACscr (~Adium@c-50-158-183-38.hsd1.il.comcast.net) has joined #ceph
[11:07] * v2 (~venky@ov42.x.rootbsd.net) has joined #ceph
[11:07] * garphy`aw (~garphy@frank.zone84.net) has joined #ceph
[11:07] * carter (~carter@li98-136.members.linode.com) has joined #ceph
[11:07] * ponyofdeath (~vladi@cpe-66-27-98-26.san.res.rr.com) has joined #ceph
[11:07] * philips (~philips@ec2-54-196-103-51.compute-1.amazonaws.com) has joined #ceph
[11:07] * cce (~cce@50.56.54.167) has joined #ceph
[11:07] * benner (~benner@162.243.49.163) has joined #ceph
[11:07] * kwmiebach__ (sid16855@id-16855.charlton.irccloud.com) has joined #ceph
[11:07] * Gugge-47527 (gugge@kriminel.dk) has joined #ceph
[11:07] * devicenull (sid4013@id-4013.ealing.irccloud.com) has joined #ceph
[11:07] * shk (sid33582@id-33582.charlton.irccloud.com) has joined #ceph
[11:07] * brambles (~xymox@s0.barwen.ch) Quit (Max SendQ exceeded)
[11:11] * dennis_ is now known as denn1s
[11:11] * jordanP (~jordan@185.23.92.11) Quit (Ping timeout: 480 seconds)
[11:11] * brambles (~xymox@s0.barwen.ch) has joined #ceph
[11:14] * dneary (~dneary@87-231-145-225.rev.numericable.fr) has joined #ceph
[11:15] * zack_dol_ (~textual@e0109-49-132-43-197.uqwimax.jp) has joined #ceph
[11:16] * zack_dolby (~textual@e0109-49-132-43-197.uqwimax.jp) Quit (Ping timeout: 480 seconds)
[11:18] * flaxy (~afx@78.130.171.69) has joined #ceph
[11:21] * zack_dol_ (~textual@e0109-49-132-43-197.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[11:22] * jordanP (~jordan@185.23.92.11) has joined #ceph
[11:25] * vbellur (~vijay@122.178.240.55) has joined #ceph
[11:26] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[11:28] * flaxy (~afx@78.130.171.69) Quit (Quit: WeeChat 0.4.2)
[11:29] * flaxy (~afx@78.130.171.69) has joined #ceph
[11:29] * tiger (~textual@58.213.102.114) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[11:31] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[11:36] * michalefty (~micha@p20030071CF588100F890BDDEC535DA67.dip0.t-ipconnect.de) has left #ceph
[11:41] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) has joined #ceph
[11:53] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[11:56] * ufven (~ufven@130-229-28-186-dhcp.cmm.ki.se) Quit ()
[12:00] * ufven (~ufven@130-229-29-136-dhcp.cmm.ki.se) has joined #ceph
[12:01] * cronix1 (~cronix@5.199.139.166) has joined #ceph
[12:02] * pressureman (~pressurem@62.217.45.26) has joined #ceph
[12:03] * lucas1 (~Thunderbi@222.240.148.130) has joined #ceph
[12:04] * dvanders (~dvanders@pb-d-128-141-237-218.cern.ch) has joined #ceph
[12:06] * cronix1 (~cronix@5.199.139.166) Quit (Read error: Operation timed out)
[12:13] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[12:14] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[12:17] * cronix1 (~cronix@5.199.139.166) has joined #ceph
[12:21] * cronix1 (~cronix@5.199.139.166) Quit (Read error: Operation timed out)
[12:22] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) Quit (Quit: koleosfuscus)
[12:23] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) has joined #ceph
[12:25] * zhaochao_ (~zhaochao@106.38.204.75) has joined #ceph
[12:26] * zhaochao_ (~zhaochao@106.38.204.75) Quit ()
[12:29] * zhaochao (~zhaochao@106.38.204.67) Quit (Remote host closed the connection)
[12:29] * cronix1 (~cronix@5.199.139.166) has joined #ceph
[12:35] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Ping timeout: 480 seconds)
[12:37] * lucas1 (~Thunderbi@222.240.148.130) Quit (Ping timeout: 480 seconds)
[12:40] * jordanP (~jordan@185.23.92.11) Quit (Read error: Operation timed out)
[12:42] * DV__ (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[12:44] * jordanP (~jordan@185.23.92.11) has joined #ceph
[12:47] * amaron (~amaron@cable-178-148-239-68.dynamic.sbb.rs) has joined #ceph
[12:48] * cronix1 (~cronix@5.199.139.166) Quit (Read error: Operation timed out)
[12:49] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) has joined #ceph
[12:51] * DV__ (~veillard@veillard.com) has joined #ceph
[12:56] * Zethrok (~martin@95.154.26.254) has joined #ceph
[12:56] * beardo (~sma310@beardo.cc.lehigh.edu) has joined #ceph
[12:57] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[12:58] * DV__ (~veillard@veillard.com) Quit (Remote host closed the connection)
[12:59] * DV_ (~veillard@2001:41d0:1:d478::1) has joined #ceph
[12:59] * huangjun (~kvirc@111.174.239.37) Quit (Read error: Connection reset by peer)
[12:59] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[13:02] * rwheeler (~rwheeler@173.48.207.57) has joined #ceph
[13:02] * markbby (~Adium@168.94.245.3) Quit (Remote host closed the connection)
[13:08] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) Quit (Quit: Leaving.)
[13:09] * lupu (~lupu@86.107.101.214) Quit (Ping timeout: 480 seconds)
[13:12] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[13:13] * chrisjones (~chrisjone@12.237.137.162) has joined #ceph
[13:15] <_Tass4dar> hi, Dan Mick in the house?
[13:16] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has left #ceph
[13:18] * amaron (~amaron@cable-178-148-239-68.dynamic.sbb.rs) Quit (Ping timeout: 480 seconds)
[13:20] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[13:21] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[13:23] * Vacum_ is now known as Vacum
[13:24] <Vacum> Hi. Is it possible to restrict cluster wide (not per osd) the number of parallel backfilling PGs?
[13:31] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[13:34] * tiger (~textual@114.221.52.150) has joined #ceph
[13:35] * tiger (~textual@114.221.52.150) Quit ()
[13:35] * b0e1 (~aledermue@juniper1.netways.de) has joined #ceph
[13:37] * fdmanana (~fdmanana@bl5-3-231.dsl.telepac.pt) has joined #ceph
[13:39] <absynth> Vacum: well, you inject the same command to all OSDs
[13:39] <absynth> but i think you can only restrict num of threads, not PGs (not sure though)
[13:40] * b0e (~aledermue@juniper1.netways.de) Quit (Ping timeout: 480 seconds)
[13:45] * saurabh (~saurabh@121.244.87.117) Quit (Quit: Leaving)
[13:52] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[13:56] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[14:05] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[14:08] * diegows (~diegows@190.190.5.238) has joined #ceph
[14:14] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[14:14] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has joined #ceph
[14:15] * ChanServ sets mode +v sage
[14:15] * ChanServ sets mode -o scuttle|afk
[14:15] * ChanServ sets mode +v joao
[14:17] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[14:22] * lupu (~lupu@86.107.101.214) has joined #ceph
[14:24] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[14:29] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[14:29] * ChanServ sets mode +v andreask
[14:29] <Vacum> absynth: I only found "osd max backfills", which is per OSD, not cluster wide. so with an increasing number of OSDs, the number of parallel backfills will increase too?
[14:31] * b0e1 (~aledermue@juniper1.netways.de) Quit (Ping timeout: 480 seconds)
[14:34] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has left #ceph
[14:34] * fghaas (~florian@91-119-141-13.dynamic.xdsl-line.inode.at) has joined #ceph
[14:35] * rongze (~rongze@114.54.30.94) Quit (Read error: Connection reset by peer)
[14:35] * nhm (~nhm@184-97-129-14.mpls.qwest.net) Quit (Quit: Lost terminal)
[14:35] * rongze (~rongze@114.54.30.94) has joined #ceph
[14:35] * nhm (~nhm@184-97-129-14.mpls.qwest.net) has joined #ceph
[14:35] * ChanServ sets mode +o nhm
[14:36] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[14:38] * markbby (~Adium@168.94.245.3) has joined #ceph
[14:44] <imriz> absynth, all the recovery limits are per OSD
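A minimal sketch of the per-OSD throttle being discussed, assuming an admin client and that the same value is injected into every OSD (the value 1 is only illustrative):

    # lower the per-OSD backfill limit on all OSDs at runtime
    ceph tell osd.* injectargs '--osd-max-backfills 1'
    # to persist it across restarts, also set it under [osd] in ceph.conf:
    #   osd max backfills = 1

Because the limit is per OSD, the total number of concurrently backfilling PGs still scales with the number of OSDs, as Vacum notes above.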
[14:44] * rongze (~rongze@114.54.30.94) Quit (Read error: Connection reset by peer)
[14:45] * rongze (~rongze@114.54.30.94) has joined #ceph
[14:53] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[14:54] * huangjun (~kvirc@117.151.54.155) has joined #ceph
[14:56] * tdasilva (~quassel@nat-pool-bos-t.redhat.com) has joined #ceph
[14:58] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has left #ceph
[14:59] <magicrobotmonkey> what is the number after the / in this: "172.16.10.102:6813/32265 not 172.16.10.102:6813/24600 - wrong node"?
[15:00] * cronix1 (~cronix@5.199.139.166) has joined #ceph
[15:00] * marrusl_ (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[15:02] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[15:04] * mrjack (mrjack@office.smart-weblications.net) Quit (Ping timeout: 480 seconds)
[15:05] * steveeJ (~junky@client156.amh.kn.studentenwohnheim-bw.de) has joined #ceph
[15:06] * cronix1 (~cronix@5.199.139.166) Quit (Read error: Operation timed out)
[15:06] * KevinPerks (~Adium@nat-pool-rdu-u.redhat.com) has joined #ceph
[15:07] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[15:07] * marrusl_ (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[15:07] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[15:07] * primechuck (~primechuc@69.170.148.179) has joined #ceph
[15:11] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[15:12] * michalefty (~micha@p20030071CF4F7F00F890BDDEC535DA67.dip0.t-ipconnect.de) has joined #ceph
[15:15] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) has joined #ceph
[15:17] * cronix1 (~cronix@5.199.139.166) has joined #ceph
[15:21] * cronix1 (~cronix@5.199.139.166) Quit (Read error: Operation timed out)
[15:23] <blSnoopy> qemu-img create -f raw rbd:data/foo 10G <= where would i put ceph cluster name?
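One way this is commonly handled (a sketch, with the paths and client id as assumptions): qemu's rbd driver accepts colon-separated key=value options in the image spec, so rather than naming the cluster you point it at that cluster's conf file and user:

    qemu-img create -f raw \
        'rbd:data/foo:id=admin:conf=/etc/ceph/mycluster.conf' 10G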
[15:24] * rongze (~rongze@114.54.30.94) Quit (Remote host closed the connection)
[15:24] * rongze (~rongze@114.54.30.94) has joined #ceph
[15:32] * rongze (~rongze@114.54.30.94) Quit (Ping timeout: 480 seconds)
[15:34] * brad_mssw (~brad@shop.monetra.com) has joined #ceph
[15:35] * cronix1 (~cronix@5.199.139.166) has joined #ceph
[15:35] * jksM (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[15:39] * finster (~finster@cmdline.guru) has joined #ceph
[15:43] * michalefty (~micha@p20030071CF4F7F00F890BDDEC535DA67.dip0.t-ipconnect.de) has left #ceph
[15:44] * cronix1 (~cronix@5.199.139.166) Quit (Ping timeout: 480 seconds)
[15:44] * wmat (wmat@wallace.mixdown.ca) has left #ceph
[15:45] * jcsp (~Adium@0001bf3a.user.oftc.net) has joined #ceph
[15:48] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) has joined #ceph
[15:51] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[15:55] * todin (tuxadero@kudu.in-berlin.de) Quit (Remote host closed the connection)
[15:55] * todin (tuxadero@kudu.in-berlin.de) has joined #ceph
[15:55] * Coyo (~coyo@thinks.outside.theb0x.org) has joined #ceph
[15:55] * Coyo is now known as Guest40
[16:01] <Anticimex> can a client node be member of multiple clusters?
[16:03] * jtaguinerd (~jtaguiner@103.14.60.184) Quit (Quit: Leaving.)
[16:08] * b0e (~aledermue@juniper1.netways.de) Quit (Ping timeout: 480 seconds)
[16:10] * fsimonce` (~simon@host188-69-dynamic.53-79-r.retail.telecomitalia.it) has joined #ceph
[16:12] * fsimonc`` (~simon@host27-60-dynamic.26-79-r.retail.telecomitalia.it) has joined #ceph
[16:14] <_Tass4dar> Anticimex: yes
[16:14] <_Tass4dar> if you use one ceph cluster only, it is practical to just put a ceph.conf in /etc/ceph
[16:14] <_Tass4dar> but you don't have to
[16:15] <_Tass4dar> you can simply give all relevant options to your client application by command line arguments
[16:15] * fsimonce (~simon@host13-187-dynamic.26-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[16:15] <_Tass4dar> this works both for cephfs and for rbd related stuff
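A minimal sketch of that, assuming two conf files and keyrings under /etc/ceph (names are illustrative):

    # talk to cluster A
    ceph -c /etc/ceph/clusterA.conf --keyring /etc/ceph/clusterA.client.admin.keyring -s
    # talk to cluster B from the same client machine
    rbd -c /etc/ceph/clusterB.conf --keyring /etc/ceph/clusterB.client.admin.keyring ls

Where a client supports --cluster <name>, that flag is just shorthand for /etc/ceph/<name>.conf and the matching keyring.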
[16:16] * fsimonce` (~simon@host188-69-dynamic.53-79-r.retail.telecomitalia.it) Quit (Read error: Operation timed out)
[16:17] * Cube (~Cube@66-87-64-58.pools.spcsdns.net) has joined #ceph
[16:18] * gregsfortytwo1 (~Adium@cpe-107-184-64-126.socal.res.rr.com) has joined #ceph
[16:19] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[16:20] * fsimonc`` is now known as fsimonce
[16:20] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) Quit (Quit: koleosfuscus)
[16:22] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) has joined #ceph
[16:22] * zack_dolby (~textual@e0109-49-132-43-197.uqwimax.jp) has joined #ceph
[16:22] * FL1SK (~quassel@159.118.92.60) has joined #ceph
[16:23] * zack_dolby (~textual@e0109-49-132-43-197.uqwimax.jp) Quit ()
[16:23] * sz0_ (~sz0@141.196.61.55) has joined #ceph
[16:24] * jrankin (~jrankin@nat-pool-rdu-t.redhat.com) has joined #ceph
[16:25] * ircolle is now known as ircolle-away
[16:27] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[16:28] * ikrstic_ (~ikrstic@c82-214-88-26.loc.akton.net) has joined #ceph
[16:29] * lupu (~lupu@86.107.101.214) Quit (Ping timeout: 480 seconds)
[16:30] * sz0_ (~sz0@141.196.61.55) Quit ()
[16:32] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[16:32] * markbby (~Adium@168.94.245.3) has joined #ceph
[16:34] * ikrstic (~ikrstic@c82-214-88-26.loc.akton.net) Quit (Ping timeout: 480 seconds)
[16:35] * hufman (~hufman@cpe-184-58-235-28.wi.res.rr.com) has joined #ceph
[16:35] <hufman> gooooooood morning!
[16:35] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) has joined #ceph
[16:37] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[16:38] * steveeJ (~junky@client156.amh.kn.studentenwohnheim-bw.de) Quit (Remote host closed the connection)
[16:38] <hufman> i have a question about calamari
[16:39] <hufman> it seems to be showing a message on the top saying "Cluster Updates Are Stale. The Cluster isn't updating Calamari. Please contact Administrator"
[16:40] <hufman> and i've traced through the code, and it references a kraken update time
[16:40] <brad_mssw> is it possible to delete the default 'data' and 'metadata' pools if not using CephFS/MDS?
[16:41] <hufman> what is this kraken? i'm guessing it performs a similar role as cthulhu
[16:41] * maethor (~maethor@galactus.lahouze.org) has joined #ceph
[16:43] * bkopilov (~bkopilov@213.57.17.152) has joined #ceph
[16:44] * jordanP (~jordan@185.23.92.11) Quit (Quit: Leaving)
[16:44] <maethor> hi, I'm trying to set capabilities for a given entity on more than one pool
[16:46] <hufman> looking through the code for the dashboard, the warning pops up based on the cluster_update_time_unix, which seems to come from the /api/v1/cluster/{}/osd endpoint
[16:46] <hufman> which is supposed to return an epoch, a cluster_update_time_unix, a pg_state_counts, osds, changed, and removed
[16:46] <maethor> I found this command: ceph auth caps client.foo mon "allow r" osd "allow rwx pool=foo"
[16:46] <hufman> but the rest service is only returning osds and pg_state_counts
[16:47] <maethor> but I also want to access the "bar" pool
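A sketch of the usual answer: OSD caps take a comma-separated list of grants, so both pools go into the one osd cap string (pool names taken from the question above):

    ceph auth caps client.foo mon 'allow r' \
        osd 'allow rwx pool=foo, allow rwx pool=bar'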
[16:48] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[16:48] <magicrobotmonkey> hufman: http://karan-mj.blogspot.com/2014/01/kraken-first-free-ceph-dashboard-in-town.html
[16:49] <hufman> i know of that project, and i've installed it
[16:49] <hufman> but i don't think it's related to the calamari dashboard
[16:49] <magicrobotmonkey> mayve calamari uses some of its code?
[16:50] <magicrobotmonkey> Im still working on getting calamari deployed
[16:51] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[16:51] <maethor> I think calamari and kraken are two different projects
[16:51] <magicrobotmonkey> yea i thought so too
[16:51] <maethor> kraken was started because calamari wasn't free and open source
[16:52] <maethor> now, efforts will probably concentrate on calamari
[16:53] * simulx (~simulx@66-194-114-178.static.twtelecom.net) Quit (Read error: Connection reset by peer)
[16:54] <hufman> i think i have calamari installed, saltstack, diamond+graphite, cthulhu
[16:54] <hufman> all the graphs are populated, dashboards all work
[16:54] <hufman> except for this random bogus warning
[16:55] <magicrobotmonkey> what os are you guys running ceph on?
[16:55] <hufman> ubuntu 12.04
[16:55] * neurodrone (~neurodron@static-108-30-171-7.nycmny.fios.verizon.net) has joined #ceph
[16:56] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) Quit (Quit: koleosfuscus)
[16:56] <magicrobotmonkey> i have a POC running on ubuntu but i'm getting real sick of all its "quirks"
[16:57] <magicrobotmonkey> im thinking about trying it out on centos
[16:57] * adamcrume (~quassel@c-71-204-162-10.hsd1.ca.comcast.net) has joined #ceph
[16:57] * mnash (~chatzilla@66-194-114-178.static.twtelecom.net) Quit (Read error: Operation timed out)
[16:58] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) has joined #ceph
[16:58] * simulx (~simulx@vpn.expressionanalysis.com) has joined #ceph
[16:58] <jcsp> hufman: an internal component of an old version of calamari happened to be called kraken, no relation to the other project of that name
[16:58] <magicrobotmonkey> i guess it's a common cephalopod name
[16:58] <jcsp> there is some cruft in the UI mentioning it apparently, but it should be checking the cluster_update_time fields
[16:59] * andreask (~andreask@zid-vpnn080.uibk.ac.at) has joined #ceph
[16:59] * ChanServ sets mode +v andreask
[16:59] <jcsp> hufman: you might want to check the timezones on your browser/client machine vs your calamari server
[17:00] <jcsp> as the calamari client code is comparing browser time and server time to work out if it's too old
[17:00] <jcsp> they can be different timezones, but they both have to agree on what UTC is
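A quick way to sanity-check that, assuming shell access to both machines, is to compare UTC time on the calamari server and on the machine running the browser:

    date -u    # run on both; the output should match to within normal clock skew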
[17:01] * mnash (~chatzilla@vpn.expressionanalysis.com) has joined #ceph
[17:02] <hufman> hmmmm ok
[17:03] * simulx (~simulx@vpn.expressionanalysis.com) Quit (Quit: Nettalk6 - www.ntalk.de)
[17:03] <hufman> thanks :)
[17:04] * simulx (~simulx@66-194-114-178.static.twtelecom.net) has joined #ceph
[17:07] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) Quit (Quit: koleosfuscus)
[17:08] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:08] * thomnico (~thomnico@2a01:e35:8b41:120:98ea:a568:8f1e:a5b4) Quit (Quit: Ex-Chat)
[17:08] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[17:09] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) has joined #ceph
[17:09] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:09] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[17:10] * diegows (~diegows@190.190.5.238) has joined #ceph
[17:11] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[17:16] * xarses (~andreww@12.164.168.117) has joined #ceph
[17:20] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[17:23] * ircolle-away is now known as ircolle
[17:27] * rotbeard (~redbeard@2a02:908:df19:9900:76f0:6dff:fe3b:994d) has joined #ceph
[17:30] * narb (~Jeff@38.99.52.10) has joined #ceph
[17:30] * madkiss (~madkiss@p549FC20E.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[17:31] * leseb (~leseb@185.21.174.206) Quit (Killed (NickServ (Too many failed password attempts.)))
[17:31] * leseb (~leseb@185.21.174.206) has joined #ceph
[17:31] * ikrstic_ (~ikrstic@c82-214-88-26.loc.akton.net) Quit (Quit: Konversation terminated!)
[17:32] * keds (Ked@cpc6-pool14-2-0-cust202.15-1.cable.virginm.net) has joined #ceph
[17:39] * lupu (~lupu@86.107.101.214) has joined #ceph
[17:40] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) Quit (Quit: koleosfuscus)
[17:42] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) has joined #ceph
[17:42] * KevinPerks (~Adium@nat-pool-rdu-u.redhat.com) Quit (Quit: Leaving.)
[17:42] * cronix1 (~cronix@5.199.139.166) has joined #ceph
[17:45] * rdas (~rdas@121.244.87.115) Quit (Quit: Leaving)
[17:49] * KevinPerks (~Adium@nat-pool-rdu-u.redhat.com) has joined #ceph
[17:49] * KaZeR (~kazer@64.201.252.132) has joined #ceph
[17:50] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[17:51] * b0e (~aledermue@juniper1.netways.de) Quit (Quit: Leaving.)
[17:52] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Remote host closed the connection)
[17:53] * zack_dolby (~textual@pdf8519e7.tokynt01.ap.so-net.ne.jp) has joined #ceph
[17:53] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) Quit (Quit: koleosfuscus)
[17:53] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[18:00] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[18:00] * andreask (~andreask@zid-vpnn080.uibk.ac.at) Quit (Ping timeout: 480 seconds)
[18:01] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) has joined #ceph
[18:01] <lincolnb> anyone ever see this one? java.lang.NoClassDefFoundError: com/ceph/fs/CephFileAlreadyExistsException
[18:01] <kraken> GenericPrinterAutomationApplet
[18:01] <lincolnb> trying to set up hadoop to work with cephfs
[18:03] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[18:03] * ChanServ sets mode +v andreask
[18:03] <lincolnb> ah, perhaps im missing libcephfs-java
[18:03] <kraken> ExternalJDBCScriptExtractionModule
[18:04] * joef (~Adium@2620:79:0:131:e0fc:5391:9471:b51a) has joined #ceph
[18:05] * reed (~reed@75-101-54-131.dsl.static.sonic.net) has joined #ceph
[18:05] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[18:07] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[18:07] * ChanServ sets mode +o elder
[18:08] * thb (~me@0001bd58.user.oftc.net) Quit (Quit: Leaving.)
[18:09] <loicd> alfredodeza: what's that kraken feature ? java ?
[18:09] <kraken> ExternalResultDecorator
[18:09] <loicd> OMG
[18:09] <loicd> java
[18:09] <kraken> GenericAWTDataInterpolationAdapter
[18:09] <loicd> java
[18:09] <kraken> CompositeNullInstantiationPool
[18:09] * loicd goes PM on kraken
[18:09] * jrankin (~jrankin@nat-pool-rdu-t.redhat.com) Quit (Quit: Leaving)
[18:10] * cronix1 (~cronix@5.199.139.166) Quit (Read error: Operation timed out)
[18:10] * saurabh (~saurabh@103.6.159.182) has joined #ceph
[18:15] * markbby (~Adium@168.94.245.4) has joined #ceph
[18:19] * adamcrume (~quassel@c-71-204-162-10.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[18:20] <lincolnb> lol
[18:20] <Anticimex> is loic of erasure coding fame here?
[18:20] <Anticimex> oh, "loicd"
[18:20] <Anticimex> 80% correct nick :)
[18:20] <darkfader> erasure coded nickname?
[18:21] <Anticimex> haha
[18:21] <loicd> Anticimex: I'm indeed struggling with erasure code ;-)
[18:21] <Anticimex> i'm... struggling with customers who are pushing down per-byte costs so much i need erasure codes
[18:21] * rongze (~rongze@202.85.220.195) has joined #ceph
[18:21] <Anticimex> so i'm considering ~0.7%-size ssd pool above
[18:22] <Anticimex> (couple of TB ssd usable w/ 3 replicas)
[18:22] <Anticimex> i've seen a cloudwatt presentation on EC, iir
[18:22] <Anticimex> *iirc
[18:22] <Anticimex> i've read various blueprints etc, any pointers what to go look for?
[18:23] <Anticimex> i'm 5 days out from starting to spin up test cluster on some sort of per-hour billing cloud
[18:23] <Anticimex> maybe check ceph's bug thing too :)
[18:23] <Anticimex> if it works, i'm happy.
[18:23] <Anticimex> maybe even check the code
[18:30] * saurabh (~saurabh@103.6.159.182) Quit (Quit: Leaving)
[18:30] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[18:32] * KevinPerks (~Adium@nat-pool-rdu-u.redhat.com) Quit (Quit: Leaving.)
[18:34] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has left #ceph
[18:37] * cronix1 (~cronix@5.199.139.166) has joined #ceph
[18:37] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Quit: Leaving)
[18:37] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[18:37] * ChanServ sets mode +o elder
[18:39] <lincolnb> hmmm
[18:40] <lincolnb> so I got hadoop/ceph sort-of working.. however, the Ceph plugin for hadoop seems to look for /usr/lib64/libcephfs_jni.so but the file installed by libcephfs_jni1-0.80.1-0.el6.x86_64 is /usr/lib64/libcephfs_jni.so.1
[18:41] <lincolnb> so i did the nasty hack of just symlinking libcephfs_jni.so.1 to libcephfs_jni.so and I don't get java stack traces any more
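For reference, the workaround lincolnb describes amounts to something like the following (the path is the one from the package above; the cleaner fix would be for the plugin or the packaging to resolve the versioned soname):

    ln -s /usr/lib64/libcephfs_jni.so.1 /usr/lib64/libcephfs_jni.so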
[18:41] <kraken> RunnableXMLExtractionTokenizer
[18:41] * steveeJ (~junky@client156.amh.kn.studentenwohnheim-bw.de) has joined #ceph
[18:41] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) Quit (Quit: koleosfuscus)
[18:41] <lincolnb> not sure if bug or me doing something wrong, likely the latter
[18:41] <kraken> ???_???
[18:42] * rongze (~rongze@202.85.220.195) Quit (Ping timeout: 480 seconds)
[18:43] * kraken (~kraken@gw.sepia.ceph.com) Quit (Remote host closed the connection)
[18:43] * adamcrume (~quassel@50.247.81.99) has joined #ceph
[18:44] * kraken (~kraken@gw.sepia.ceph.com) has joined #ceph
[18:44] <alfredodeza> java
[18:44] * alfredodeza just took away the java noise from kraken
[18:44] <alfredodeza> thanks kraken
[18:44] * kraken is flabbergasted by the reinforced elevation of gratitude
[18:44] <lincolnb> yay
[18:46] * cronix1 (~cronix@5.199.139.166) Quit (Ping timeout: 480 seconds)
[18:52] * adamcrume (~quassel@50.247.81.99) Quit (Remote host closed the connection)
[18:56] * dgbaley27 (~matt@c-98-245-167-2.hsd1.co.comcast.net) has joined #ceph
[18:56] * mrjack_ (mrjack@office.smart-weblications.net) Quit (Ping timeout: 480 seconds)
[18:57] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[18:57] * cronix1 (~cronix@5.199.139.166) has joined #ceph
[18:58] * mrjack (mrjack@office.smart-weblications.net) has joined #ceph
[19:00] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[19:02] <MACscr> ok, so i redid the networking on the first pool i created and i pushed the conf to all my nodes (though i think i might need to expand the conf as i dont have every osd listed on each node)
[19:03] <loicd> Anticimex: do you have a description of what you would like to achieve ?
[19:03] <MACscr> anyway, now the osd's wont start because of some generic fault error
[19:03] <MACscr> here is my conf: http://pastie.org/pastes/9341020/text?key=s53aqcaatxlsranxagvrg
[19:04] <MACscr> another odd thing that im wrestling with is the lack of a /var/log/ceph directory and no logs. Im running debian wheezy if that makes a diff
[19:04] <MACscr> wheezy and firefly
[19:04] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[19:08] * primechu_ (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[19:11] * adamcrume (~quassel@c-71-204-162-10.hsd1.ca.comcast.net) has joined #ceph
[19:11] * BManojlovic (~steki@cable-94-189-160-74.dynamic.sbb.rs) has joined #ceph
[19:12] <Vacum> Midnightmyth: is changing the IPs of your cluster supported at all? :) everything is based on IPs, not host names. so the pgmap and osdmap will contain IPs? and the crushmap too? if you stop the cluster, change all IPs, start again - that is a complete mixup then?
[19:12] <Vacum> MACscr: ^^^ that was for you, sorry
[19:13] * primechuck (~primechuc@69.170.148.179) Quit (Remote host closed the connection)
[19:14] * cronix1 (~cronix@5.199.139.166) Quit (Read error: Operation timed out)
[19:19] * Discard (~discard@213-245-29-151.rev.numericable.fr) has joined #ceph
[19:19] <Discard> hi there
[19:21] * cronix1 (~cronix@5.199.139.166) has joined #ceph
[19:24] * scuttle|afk is now known as scuttlemonkey
[19:24] * davidzlap (~Adium@ip68-4-173-198.oc.oc.cox.net) has joined #ceph
[19:25] * Discard (~discard@213-245-29-151.rev.numericable.fr) Quit (Quit: Discard)
[19:26] * Discard (~discard@213-245-29-151.rev.numericable.fr) has joined #ceph
[19:29] <Discard> hi there
[19:30] <Discard> how could I retrieve rgw_max_chunk_size?
[19:33] * KevinPerks (~Adium@nat-pool-rdu-u.redhat.com) has joined #ceph
[19:35] * Discard (~discard@213-245-29-151.rev.numericable.fr) Quit (Quit: Discard)
[19:35] * davidzlap (~Adium@ip68-4-173-198.oc.oc.cox.net) Quit (Quit: Leaving.)
[19:35] * cronix1 (~cronix@5.199.139.166) Quit (Ping timeout: 480 seconds)
[19:39] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has left #ceph
[19:40] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[19:40] * Pedras (~Adium@216.207.42.132) has joined #ceph
[19:43] * Discard (~discard@213-245-29-151.rev.numericable.fr) has joined #ceph
[19:43] * rotbeard (~redbeard@2a02:908:df19:9900:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[19:46] * markbby (~Adium@168.94.245.4) Quit (Quit: Leaving.)
[19:47] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[19:48] * Muhlemmer (~kvirc@cable-90-50.zeelandnet.nl) has joined #ceph
[19:48] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[19:54] * steveeJ (~junky@client156.amh.kn.studentenwohnheim-bw.de) Quit (Remote host closed the connection)
[19:56] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[19:57] <Anticimex> loicd: at a high-level i just need to know it works and does not lose data.. for multiple PB production data :)
[19:59] * alram (~alram@cpe-76-167-62-129.socal.res.rr.com) has joined #ceph
[19:59] * joshd1 (~jdurgin@2602:306:c5db:310:a11e:97fc:5abf:4b72) has joined #ceph
[20:02] <MACscr> Vacum: i didnt have it stopped when i did it. should i stop it, change them back, start it and see if it works? if so, then shut it down, do the conf changes to the new network settings and then try starting it up
[20:02] <MACscr> ?
[20:05] * bjornar (~bjornar@ti0099a430-0158.bb.online.no) has joined #ceph
[20:08] * \ask (~ask@oz.develooper.com) Quit (Quit: Bye)
[20:08] <Discard> how to get rgw_max_chunk_size ?
[20:08] * \ask (~ask@oz.develooper.com) has joined #ceph
[20:19] <MACscr> Vacum: ok, was able to stop ceph, change the networking back, push out the config changes, start it again and everything worked
[20:19] <MACscr> darkfader: you around?
[20:23] * bjornar (~bjornar@ti0099a430-0158.bb.online.no) Quit (Ping timeout: 480 seconds)
[20:24] * ScOut3R (~ScOut3R@4E5CC1B5.dsl.pool.telekom.hu) has joined #ceph
[20:24] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Read error: Operation timed out)
[20:26] <loicd> Anticimex: it all depends on what you mean by "know" ;-) If knowing that someone in the world runs a multi PB production cluster without losing data is enough, you have it. If you need to experience that first hand, it's going to be quite difficult.
[20:28] * imriz (~imriz@82.81.163.130) Quit (Read error: Operation timed out)
[20:29] * markbby (~Adium@168.94.245.3) has joined #ceph
[20:31] <mongo> Note if you stay away from erasure coding, which also may not be a problem, it is possible to get your data out of rbd even if you uninstall ceph entirely.
[20:34] * rendar (~I@host149-182-dynamic.37-79-r.retail.telecomitalia.it) Quit (Read error: Operation timed out)
[20:37] * davidzlap (~Adium@ip68-4-173-198.oc.oc.cox.net) has joined #ceph
[20:39] * rendar (~I@host149-182-dynamic.37-79-r.retail.telecomitalia.it) has joined #ceph
[20:42] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) Quit (Remote host closed the connection)
[20:45] * glzhao (~glzhao@123.125.124.17) Quit (Read error: Connection reset by peer)
[20:46] * glzhao (~glzhao@123.125.124.17) has joined #ceph
[20:47] * dmick (~dmick@2607:f298:a:607:3b:58c6:7d4e:7ce4) has joined #ceph
[20:48] <MACscr> also, this aspect is really stupid. if I do "ceph-deploy purgedata" it's supposed to allow you to start over, and then there is an option of "ceph-deploy purge" if you want to even delete the packages as well. So purge does all, purgedata just does configs. Makes sense, but if you run purgedata, you get:
[20:48] <MACscr> [ceph_deploy.install][ERROR ] ceph is still installed on: ['stor1', 'stor2', 'stor3']
[20:48] <MACscr> [ceph_deploy]
[20:48] <MACscr> [ERROR ] RuntimeError: refusing to purge data while ceph is still installed
[20:48] <MACscr> wth
[20:49] <alfredodeza> MACscr: I would argue it is not stupid
[20:50] <MACscr> alfredodeza: why? the purgedata is supposed to simply delete all the settings so i can start over
[20:50] <alfredodeza> the reason why purge and purgedata are separate is because in a lot of situations you might want to remove configs or packages but not destroy data
[20:50] <MACscr> without having to reinstall ceph
[20:50] <alfredodeza> and viceversa
[20:50] <alfredodeza> MACscr: you are assuming that is the case, which is not
[20:51] <alfredodeza> if you want to start over you must run `purge` and then `purgedata`
[20:51] <alfredodeza> you need to reinstall ceph, yes
[20:51] <MACscr> my point is that i just want to purge the configs
[20:51] <alfredodeza> ceph-deploy prevents data removal if ceph is installed because it would get you into an inconsistent state
[20:52] <alfredodeza> if you need a new/different config you can do `ceph-deploy config push {nodes}`
[20:52] <MACscr> from the docs "If at any point you run into trouble and you want to start over, execute the following to purge the configuration:" ceph-deploy purgedata {ceph-node} [{ceph-node}]
[20:52] <Discard> could someone help me to retrieve this: rgw_max_chunk_size (command line?)
[20:53] * dgbaley27 (~matt@c-98-245-167-2.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[20:53] <MACscr> alfredodeza: i need to change networking and i cant simply push those to the cluster, everything faults when i try to start it
[20:53] * rweeks (~rweeks@c-24-6-118-113.hsd1.ca.comcast.net) has joined #ceph
[20:53] <alfredodeza> MACscr: that is just one of the suggestions. May need a bit better wording
[20:54] <MACscr> alfredodeza: so how do i start over without having to reinstall the packages?
[20:55] <alfredodeza> it really depends on what 'starting over' means to you. If you want to completely blow away everything, you need both purge and purgedata (in that order)
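As a sketch of that "blow away everything" path, using the node names from the error output above:

    ceph-deploy purge stor1 stor2 stor3       # remove the ceph packages first
    ceph-deploy purgedata stor1 stor2 stor3   # then remove /etc/ceph and /var/lib/ceph data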
[20:55] <alfredodeza> it is not possible to do that without removing ceph
[20:56] <MACscr> that doesnt make much sense. What does having system packages installed have to do with the settings and data? you should be able to wipe configs and start over without having to reinstall system files imho
[20:56] <janos> that seems odd. before ceph-deploy existed you could blow everything away with a few scripts
[20:57] <alfredodeza> MACscr: I am sorry it doesn't make sense to you. Distributed systems are hard :)
[20:57] <alfredodeza> a lot of commands in ceph-deploy have historical reasons for existing
[20:57] <MACscr> alfredodeza: that has nothing to do with good design
[20:58] <alfredodeza> I tried explaining the need for purge and purgedata (that behavior has not changed I think) but if that is not good design, doesn't make sense, and feels stupid, do open an issue on the tracker, or even better! send us a pull request with some changes
[20:58] <alfredodeza> I am not opposed to have a better discussion about the implementation and how we can make it better
[20:58] <janos> a few bash scripts should do it - blow away the mon directories, umount and blow away the ceph partitions
[20:58] * spredzy (~spredzy@62-210-239-87.rev.poneytelecom.eu) Quit (Ping timeout: 480 seconds)
[20:59] <janos> at least that's what i would do pre-ceph-deploy
[20:59] <alfredodeza> and maybe that is something ceph-deploy could help with
[21:00] <janos> possibly, though it's a grey area - i would avoid putting anything purely deployment-focused in something called "ceph-deply"
[21:00] <janos> it could be argued that cleaning the slate is deployment focused
[21:00] <MACscr> i mean, look at the doc here: http://ceph.com/docs/firefly/start/quick-ceph-deploy/#create-a-cluster. Its completely wrong and misleading versus just poorly worded from what you are telling me
[21:00] <janos> *deploy
[21:01] <alfredodeza> Completely wrong?
[21:01] <alfredodeza> where?
[21:01] <janos> time to write "ceph-destroy" haha
[21:01] <alfredodeza> if you could give me concrete examples maybe I could verify that before opening an issue on the tracker
[21:02] * spredzy (~spredzy@62-210-239-87.rev.poneytelecom.eu) has joined #ceph
[21:02] <alfredodeza> we can't fix things that are incorrect if the description says 'this is misleading' or 'this is completely wrong'
[21:02] <alfredodeza> wrong how? where? misleading? what should it say instead?
[21:03] <alfredodeza> those are the answers that would help us out in trying to get better docs if any issues exist
[21:03] <alfredodeza> generic (or too high level) opinions on docs/ceph-deploy are just not good enough
[21:07] * tws_1 (~traviss@rrcs-24-123-86-154.central.biz.rr.com) has joined #ceph
[21:07] * tws_ (~traviss@rrcs-24-123-86-154.central.biz.rr.com) Quit (Read error: Connection reset by peer)
[21:07] <MACscr> "If at any point you run into trouble and you want to start over, execute the following to purge the configuration:"
[21:07] <Discard> could someone help me to retrieve this: rgw_max_chunk_size (command line?)
[21:08] <MACscr> from what you are telling me, you cant do it
[21:08] <MACscr> unless you do the purge first. Which it says if you want to do them "too"
[21:08] <MACscr> which implies you dont have to and you do it second
[21:12] * Muhlemmer (~kvirc@cable-90-50.zeelandnet.nl) Quit (Ping timeout: 480 seconds)
[21:16] <MACscr> As you can see in this picture, ive outlined the issues: http://www.screencast.com/t/OW5TAaFJ2
[21:17] <MACscr> from what you are stating, you have to do purge first (which we assume actually would remove the data too), but you are stating that you have to then run purgedata after that?
[21:17] <MACscr> thats completely contrary to what the docs state
[21:23] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[21:24] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) has joined #ceph
[21:37] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[21:38] * sz0 (~sz0@46.197.48.116) Quit (Remote host closed the connection)
[21:47] * cronix1 (~cronix@5.199.139.166) has joined #ceph
[21:47] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[21:50] * Sysadmin88 (~IceChat77@94.4.20.0) has joined #ceph
[21:53] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) has joined #ceph
[21:54] * fghaas (~florian@91-119-141-13.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[21:56] * cronix1 (~cronix@5.199.139.166) Quit (Ping timeout: 480 seconds)
[21:56] * cronix1 (~cronix@5.199.139.166) has joined #ceph
[21:57] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[22:01] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[22:03] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[22:05] * markbby (~Adium@168.94.245.3) has joined #ceph
[22:05] * tws_1 (~traviss@rrcs-24-123-86-154.central.biz.rr.com) Quit (Ping timeout: 480 seconds)
[22:06] * joef (~Adium@2620:79:0:131:e0fc:5391:9471:b51a) Quit (Quit: Leaving.)
[22:06] * cronix1 (~cronix@5.199.139.166) Quit (Ping timeout: 480 seconds)
[22:19] * dis (~dis@109.110.67.36) Quit (Ping timeout: 480 seconds)
[22:22] * dis (~dis@109.110.67.116) has joined #ceph
[22:23] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[22:25] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[22:27] * mrjack (mrjack@office.smart-weblications.net) Quit (Ping timeout: 480 seconds)
[22:27] * joef (~Adium@2601:9:2a00:690:c924:898d:4768:de9d) has joined #ceph
[22:28] * mrjack (mrjack@pD95F2366.dip0.t-ipconnect.de) has joined #ceph
[22:29] * psieklFH (psiekl@wombat.eu.org) has joined #ceph
[22:29] * psiekl (psiekl@wombat.eu.org) Quit (Read error: Connection reset by peer)
[22:32] * rturk|afk is now known as rturk
[22:32] * scuttlemonkey is now known as scuttle|afk
[22:33] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[22:33] * ChanServ sets mode +v andreask
[22:34] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has left #ceph
[22:35] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[22:39] * tdasilva (~quassel@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[22:40] <MACscr> hmm, how do i list all the osd's?
[22:40] <MACscr> i have 4 showing up in ceph -s output, but i should only have 3
[22:42] <JCL> ceph osd tree
[22:43] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[22:45] <MACscr> JCL: thanks!
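If the fourth entry turns out to be a stray OSD that should not exist, the usual removal sequence is roughly the following, assuming the stray id is osd.3 (purely illustrative):

    ceph osd out 3
    ceph osd crush remove osd.3
    ceph auth del osd.3
    ceph osd rm 3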
[22:45] <MACscr> lol, im having a heck of a time getting things set up again. BTW, is the ceph-deploy tool trustworthy for aligning partitions correctly when creating osds?
[22:48] * KevinPerks (~Adium@nat-pool-rdu-u.redhat.com) Quit (Quit: Leaving.)
[22:53] * joef (~Adium@2601:9:2a00:690:c924:898d:4768:de9d) has left #ceph
[22:56] * mrjack (mrjack@pD95F2366.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[22:56] * mrjack (mrjack@office.smart-weblications.net) has joined #ceph
[23:00] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[23:05] * markbby (~Adium@168.94.245.3) has joined #ceph
[23:05] * andreask (~andreask@zid-vpnn083.uibk.ac.at) has joined #ceph
[23:05] * ChanServ sets mode +v andreask
[23:08] <MACscr> does my ceph-deploy node need access to both the cluster and public network or just the cluster?
[23:13] <hufman> i believe ceph-deploy just connects over ssh to the hostname you give
[23:14] * dis (~dis@109.110.67.116) Quit (Ping timeout: 480 seconds)
[23:15] * andreask (~andreask@zid-vpnn083.uibk.ac.at) has left #ceph
[23:15] * sarob (~sarob@2001:4998:effd:600:64af:33d:df0a:d075) has joined #ceph
[23:18] <MACscr> hufman: now that i think of it, i guess i was just testing with commands like ceph -s to check the status of it, etc, which obviously isnt a ceph-deploy tool
[23:19] <dmick> yeah. ceph-deploy can grant the ability to run ceph commands with "ceph-deploy admin <host>"
[23:19] <dmick> (basically it just copies keys)
[23:20] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) Quit (Quit: Leaving)
[23:20] <MACscr> anyone know the answer to my question about "ceph-deploy osd create" and partition alignment?
[23:21] * joef (~Adium@2620:79:0:131:e0fc:5391:9471:b51a) has joined #ceph
[23:25] * brad_mssw (~brad@shop.monetra.com) Quit (Read error: Operation timed out)
[23:33] * scuttle|afk is now known as scuttlemonkey
[23:34] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Remote host closed the connection)
[23:37] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[23:43] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Remote host closed the connection)
[23:45] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[23:45] * ChanServ sets mode +o elder
[23:46] * rotbeard (~redbeard@2a02:908:df19:9900:76f0:6dff:fe3b:994d) has joined #ceph
[23:46] * hufman (~hufman@cpe-184-58-235-28.wi.res.rr.com) Quit (Quit: leaving)
[23:46] <dmick> what do you mean by "alignment" MACscr
[23:52] <MACscr> dmick: hard drive partition alignment. its very important for proper drive performance
[23:52] <dmick> yes, but alignment implies "with respect to something"
[23:52] <MACscr> sectors
[23:52] <dmick> you don't think there are still any such thing as cylinders, right?
[23:53] <_Tass4dar> there are logical blocks
[23:53] <dmick> I'll guarantee you all the partitions created by ceph-deploy are sector-aligned :)
[23:53] * Hell_Fire__ (~HellFire@123-243-155-184.static.tpgi.com.au) Quit (Read error: Connection reset by peer)
[23:53] * Hell_Fire__ (~HellFire@123-243-155-184.static.tpgi.com.au) has joined #ceph
[23:53] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) has joined #ceph
[23:54] <MACscr> so the journals are at the first group of sectors and actual storage is second?
[23:54] <_Tass4dar> ceph-deploy simply uses generic tools for that, and those have aligning logic where applicable
[23:55] <dmick> if you believe you need careful control of partitions, it's best to create them before letting ceph-deploy create the OSDs
[23:55] <_Tass4dar> you want your journals on separate ssd's anyway ;)
[23:55] <MACscr> not always
[23:55] <dmick> ceph-deploy (or really ceph-disk in the ceph/ source tree) just allocates journal space based on the requested size, and data after, with sgdisk
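If you want to verify what ceph-disk/sgdisk produced, a quick check of the partition start sectors (device name assumed) is:

    sgdisk -p /dev/sdb    # prints the GPT; the Start column shows each partition's first sector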
[23:56] <dmick> I'm interested in what sort of alignment you're referring to still MACscr
[23:56] <_Tass4dar> dmick: thanks for your reply with regard to fedora packaging btw
[23:56] <_Tass4dar> i'll see if i can lend you a hand on the technical part
[23:57] <dmick> presumably you mean "starting the partition on a particular integral-number-of-blocks boundary"?
[23:57] <dmick> _Tass4dar: oh that was you? sure.
[23:57] <_Tass4dar> ceph has pretty nice .spec's already, specifically built for fedora 19 and 20; i don't know why they differ so much from the spec by fedora itself
[23:57] <dmick> the history was: the fedora pkging was done long ago when the .spec file was new
[23:58] <_Tass4dar> an
[23:58] <_Tass4dar> ah
[23:58] <dmick> and the fedora maintainer didn't really keep pace with what was happening in the repo
[23:58] <_Tass4dar> well it seems logical now to come to one unified spec
[23:58] <dmick> and we didn't play in the fedora space in any major way inside the Ceph project
[23:58] <dmick> yes
[23:58] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) Quit (Quit: Leaving.)
[23:58] <dmick> definitely
[23:58] <_Tass4dar> starting with the one from ceph, possibly tuned a bit to be in sync with current fedora best practice
[23:58] <_Tass4dar> but in principle maintained upstream
[23:59] <_Tass4dar> i noticed some dependency issues with the ceph-spec that the fedora-spec didn't have (other packages like qemu-disk depending on ceph in certain ways)
[23:59] <_Tass4dar> but those should be easily mitigated

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.