#ceph IRC Log

IRC Log for 2015-05-20

Timestamps are in GMT/BST.

[0:00] <srk> pg_num = 100 * (number of osds) / (replication size)
[0:01] <srk> you can also look at the official ceph website: http://ceph.com/pgcalc/
[0:01] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[0:01] <gleam> that should be # of pgs in the entire cluster, across all pools
[0:01] <gleam> fyi
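
As a worked instance of the rule of thumb srk quotes (not something stated in the log itself), a small 3-OSD cluster with 3 replicas would come out as:

    pg_num = 100 * 3 / 3 = 100  ->  round up to the next power of two = 128

which is the value ShaunR ends up using further down in this log; gleam's caveat is that this figure is the total across all pools in the cluster.
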
[0:03] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (Ping timeout: 480 seconds)
[0:03] * diq (~diq@nat8.460b.weebly.net) has joined #ceph
[0:04] <srk> jkt: ceph status should show status details as soon as a monitor is in quorum.
[0:04] <srk> In your case, since it is a single monitor, it should work right away..
[0:04] <diq> scuttle|afk: when you get back -> https://twitter.com/scuttlemonkey/status/595381421214212097
[0:05] * georgem (~Adium@72.28.92.15) has joined #ceph
[0:05] <srk> did you check ceph-mon process is running on the monitor node?
[0:06] <jkt> srk: yup; it seems however that I initially started with two monitors, one of them on a host which isn't installed yet, and that this persisted somewhere in /var/lib/ceph/
[0:06] * oro (~oro@72.28.92.10) Quit (Ping timeout: 480 seconds)
[0:06] <jkt> srk: nuking this restored functionality in this rather crude test :)
[0:07] <srk> :)
[0:07] <srk> yes, that /var/lib/ceph stuff hurts
[0:07] <jkt> why do you even need that annoying stuff
[0:07] <jkt> :)
[0:08] <srk> a lot of runtime stuff goes there..
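
A minimal way to check what srk asks about above (whether ceph-mon is running and whether that lone monitor has formed quorum); the mon id used here is assumed to be the short hostname:

    ps aux | grep ceph-mon
    ceph daemon mon.$(hostname -s) mon_status   # admin-socket query; works even while 'ceph -s' hangs
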
[0:08] <diq> anyone here running ceph with > 72 drives per physical node?
[0:08] <jkt> srk: that was a midnight attempt at joking
[0:08] <diq> I'll admit I haven't actually used ceph yet. I'm still in RTFM stage.
[0:08] * pvh_sa (~pvh@105-237-252-162.access.mtnbusiness.co.za) has joined #ceph
[0:08] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[0:09] <srk> diq: max i've tried so far per node is 24
[0:09] <diq> srk: does that mean you have to run 24 OSD's per node?
[0:09] <srk> yes
[0:09] <diq> ugh
[0:09] <diq> so 72 is pretty much not feasible.
[0:09] <srk> there will be 24 ceph-osd processes running
[0:10] * daniel2_ (~dshafer@0001b605.user.oftc.net) Quit (Remote host closed the connection)
[0:10] <diq> are there plans to change this?
[0:10] <diq> how do people handle denser ceph deployments?
[0:11] <MRay> my guess is that most people scale horizontally more than vertically
[0:12] * sjm (~sjm@173-14-159-105-NewEngland.hfc.comcastbusiness.net) Quit (Quit: Leaving.)
[0:13] <MRay> fewer OSDs per node and more nodes
[0:13] <diq> waste of power
[0:13] <diq> and adds complexity
[0:13] <MRay> you're planning to do what?
[0:13] <MRay> HDD with SSD Journals?
[0:14] <gleam> i don't see why you couldn't do 72 disk nodes, but i'd say only do it if you have at least double digit nodes
[0:14] <gleam> don't do 3x72..
[0:14] * rendar (~I@host65-93-dynamic.252-95-r.retail.telecomitalia.it) Quit ()
[0:15] <diq> was hoping to do 72 spinning metal drives per 4U node. No SSD, as our system is planned to be immutable.
[0:15] * SaneSmith (~straterra@2WVAACGN1.tor-irc.dnsbl.oftc.net) Quit ()
[0:15] <diq> gleam: oh yeah at least 40 nodes to start
[0:15] <MRay> how many nodes?
[0:15] <MRay> ok
[0:15] <MRay> how much replication?
[0:15] * Flynn (~stefan@ip-81-30-69-189.fiber.nl) Quit (Quit: Flynn)
[0:16] * loicd (~loicd@93.20.168.179) has joined #ceph
[0:16] <gleam> 40x72 i think is workable if the nodes have otherwise ok specs. are these chassis with one drive per tray or 2-3?
[0:16] <diq> gleam: 2 per tray
[0:16] <diq> supermicro makes one....lemme dig up part #
[0:16] <MRay> Never worked with an HDD-only cluster
[0:16] <gleam> i know the one you're talking about
[0:17] <MRay> but not sure how CPUs would react with 72 OSDs
[0:17] * shohn (~shohn@173-14-159-105-NewEngland.hfc.comcastbusiness.net) has left #ceph
[0:17] <MRay> with full SSDs or SSD for journal, CPU is pretty much a huge bottleneck
[0:17] <diq> http://www.supermicro.com/products/system/4U/6047/SSG-6047R-E1R72L2K.cfm
[0:18] <MRay> I have a similar model but 40 something drives
[0:19] <diq> yeah our storage system does de-duping before the object storage layer. so basically write once and the data is immutable
[0:19] <gleam> there's a 1u chassis from asus with 3 drives per tray, 12 drives per node. it's crazy
[0:19] <diq> we'll never append or delete
[0:19] <gleam> i think you could do 40x72 fine
[0:19] <diq> so I'm not really anticipating a need for SSD
[0:20] <diq> figure things would still be speedy enough with sufficient RAM and CPU
[0:20] <diq> just monitoring 72 OSD's per node isn't appealing
[0:21] <diq> speaking of multiple drives per tray, I'm guessing that the OSD's would die gracefully if their underlying storage were yanked out?
[0:21] <MRay> and you only keep 1 copy on the cluster?
[0:21] <diq> thinking of replacing a dead drive on a tray with another good drive
[0:22] <diq> MRay: 3 replicas in cluster, plus replication to another cluster in another DC
[0:22] <gleam> you'd want to keep the other osd on the tray from getting marked out
[0:22] <gleam> (down is what you want, out is what you don't)
[0:23] <Sysadmin88> what networking you planning?
[0:23] <MRay> I dont know how 3 replicas with a HDD-only cluster would react
[0:23] <diq> gleam: so some sort of operator intervention is required. It's not just yank and replace and hope the software handles it
[0:23] <MRay> but on an SSD base, 3 replicas kill the CPU
[0:23] <gleam> well, you could do that
[0:23] <gleam> but it would probably rebalance the data on the good osd
[0:23] <gleam> and then rebalance back
[0:24] <diq> gleam: I can live with that
[0:24] * oro (~oro@72.28.92.10) has joined #ceph
[0:24] <diq> going to test all of this stuff, just thought I'd poke in here and ask some questions first.
[0:24] <diq> thanks everyone!
[0:24] <gleam> if your hardware monkey does the swap fast enough it won't get marked out
[0:24] <gleam> so there's that too
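
One way to do what gleam describes here (keep the surviving OSD on the tray from being marked out while the dead drive is swapped) is to set the noout flag for the duration of the maintenance; this is a general Ceph maintenance pattern rather than something spelled out in the log:

    ceph osd set noout     # before pulling the tray
    # ...swap the drive, let the affected OSD come back up...
    ceph osd unset noout
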
[0:24] * oms101 (~oms101@a79-169-49-115.cpe.netcabo.pt) has joined #ceph
[0:24] <MRay> diq, I would be curious to get feedback on your POC :p
[0:25] <diq> sure thing
[0:25] <MRay> you should publish the results on the ML
[0:25] <diq> I'll be in touch with the InkTank folks soon enough
[0:25] <srk> yea, especially how you figure the network part out :)
[0:26] <gleam> dual 40gbit to each node ;)
[0:26] <diq> our POC doesn't have to be top-fuel-funny-car rubber burning fast. It just has to meet our requirements.
[0:26] <srk> are you doing a single 10Gb network or multiple networks?
[0:26] * alram (~alram@173-14-159-105-NewEngland.hfc.comcastbusiness.net) Quit (Quit: Lost terminal)
[0:26] <diq> multiple networks. switches are cheap.
[0:26] * mjeanson (~mjeanson@bell.multivax.ca) has joined #ceph
[0:26] <srk> ok
[0:26] <diq> Is there any software in Linux similar to dummynet on BSD, but for the storage layer? I need a way to simulate a dying drive.
[0:26] <srk> planning on using openstack ?
[0:27] <gleam> immutable makes it sound like big research data
[0:27] <diq> not at the moment
[0:27] <gleam> look at the cern deployment details too
[0:27] <diq> gleam: de-duped file uploads
[0:27] <gleam> ahh
[0:27] <diq> no need for SSD b/c we can cache things forever w/nginx, squid, etc
[0:28] <diq> apache traffic server
[0:28] <diq> the storage backend doesn't really get the "hot node" problem
[0:28] <gleam> sounds about right
[0:29] <diq> right now we're evaluating Riak CS, Ceph, swiftstack, and LeoFS
[0:29] <diq> Riak CS still requires RAID so we're not too hot on that
[0:29] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[0:29] <diq> swiftstack and leofs still require central dispatchers/gateways (which we're not huge fans of)
[0:30] * georgem (~Adium@72.28.92.15) Quit (Quit: Leaving.)
[0:30] <diq> we like ceph's architecture, but we're wary of 1 OSD per 1 drive
[0:31] <gleam> what model are you looking at w/ceph? object storage w/radosgw?
[0:31] <diq> our stuff on top of librados I think (I can double check that)
[0:32] <gleam> that probably makes the most sense
[0:33] <diq> that brings up a question I had, is there any practical limitation to the number of radosgw instances? Could I have 1 per node?
[0:33] <gleam> good question :)
[0:33] <diq> that way you could point at any live node and your gateway requests would work
[0:34] * pdrakeweb (~pdrakeweb@104.247.39.34) has joined #ceph
[0:38] <diq> just wondering if there's any kind of synchronization or communication between gateways
[0:38] <diq> if there is, an increased number of nodes would complicate things
[0:38] * oro (~oro@72.28.92.10) Quit (Ping timeout: 480 seconds)
[0:38] <diq> if they're completely standalone, it shouldn't matter (in theory)
[0:40] <ShaunR> can anybody help me with this? i can't for the life of me get it to a good health...
[0:40] <ShaunR> HEALTH_WARN 256 pgs degraded; 256 pgs stuck degraded; 256 pgs stuck inactive; 256 pgs stuck unclean; 256 pgs stuck undersized; 256 pgs undersized
[0:41] <ShaunR> i'm deploying a 3 server ceph test on virtual servers, each OSD is a 10gb drive.
[0:41] <ShaunR> each server is a mon and has a single OSD
[0:45] * LRWerewolf (~Fapiko@exit1.telostor.ca) has joined #ceph
[0:51] * gardenshed (~gardenshe@176.27.51.101) Quit (Remote host closed the connection)
[0:54] * rotbeard (~redbeard@2a02:908:df10:d300:6267:20ff:feb7:c20) Quit (Quit: Leaving)
[0:56] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[0:57] * rotbeard (~redbeard@2a02:908:df10:d300:6267:20ff:feb7:c20) has joined #ceph
[0:59] * nsoffer (~nsoffer@bzq-109-64-255-238.red.bezeqint.net) has joined #ceph
[1:00] * CheKoLyN (~saguilar@bender.parc.xerox.com) Quit (Quit: Leaving)
[1:13] <srk> ShaunR: What is your replication?
[1:15] * LRWerewolf (~Fapiko@5NZAACMG8.tor-irc.dnsbl.oftc.net) Quit ()
[1:15] * SquallSeeD311 (~Coe|work@tor-exit2-readme.puckey.org) has joined #ceph
[1:15] <ShaunR> 3
[1:16] <srk> so, the cluster has 3 osd hosts with 1 osd drive per host?
[1:17] <ShaunR> ya, 3 servers, each server has a single 10GB drive for an OSD and acts as a mon
[1:19] * dmick (~dmick@206.169.83.146) has left #ceph
[1:21] * fsimonce (~simon@host11-35-dynamic.32-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[1:22] <srk> that sounds right. Last time I saw an undersized problem, I had only one osd host and replicas set to 3
[1:23] <srk> once the replica count was set to 1 and the pools recreated, the cluster came to active+clean
[1:24] <srk> did you already try changing the pg count?
[1:26] <JoeJulian> Are the osds up and in?
[1:26] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[1:29] * rotbeard (~redbeard@2a02:908:df10:d300:6267:20ff:feb7:c20) Quit (Quit: Leaving)
[1:31] * oms101 (~oms101@a79-169-49-115.cpe.netcabo.pt) Quit (Ping timeout: 480 seconds)
[1:32] <ShaunR> i'm restarting from scratch, give me a sec, should be done in a min
[1:36] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[1:38] * oro (~oro@72.28.92.10) has joined #ceph
[1:38] <ShaunR> When doing a ceph osd tree
[1:38] <ShaunR> i show all 3 osd's as up
[1:38] <ShaunR> now i see 'HEALTH_WARN 64 pgs degraded; 64 pgs stuck unclean; 64 pgs undersized; too few PGs per OSD (21 < min 30)'
[1:39] * puffy (~puffy@c-50-131-179-74.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[1:42] * oms101 (~oms101@a79-169-49-115.cpe.netcabo.pt) has joined #ceph
[1:43] <JoeJulian> right, 64 pgs and 3 osds means 64/3 pgs per osd.
[1:44] <ShaunR> so what's that mean? i need more pgs from the looks of it
[1:45] * SquallSeeD311 (~Coe|work@7R2AAA3JQ.tor-irc.dnsbl.oftc.net) Quit ()
[1:45] * Vale (~ulterior@185.77.129.54) has joined #ceph
[1:46] <JoeJulian> yes, you'll need to increase your pgs then your pgps, ie "ceph osd pool set pg_num 128; ceph osd pool set pgp_num 128;"
[1:46] <JoeJulian> Meh, forgot to include $poolname
[1:47] <m0zes> s/pool /pool $poolname/g
[1:47] <m0zes> something like that.
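
Putting m0zes's correction together with JoeJulian's commands, the full form would be something like the following, using the default pool name rbd purely as an example:

    ceph osd pool set rbd pg_num 128
    ceph osd pool set rbd pgp_num 128
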
[1:47] <ShaunR> k, i bumped to 128
[1:48] <ShaunR> HEALTH_WARN 128 pgs degraded; 128 pgs stuck unclean; 128 pgs undersized
[1:49] <JoeJulian> ceph osd pool get $poolname size
[1:49] <JoeJulian> might want to "ceph osd pool get $poolname min_size" also
[1:49] <ShaunR> size: 2
[1:49] <ShaunR> min_size: 1
[1:53] * xarses (~andreww@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[1:56] <JoeJulian> fpaste the output of "ceph osd dump" (if you have the fpaste utility installed, you can pipe it, otherwise go to fpaste.org and copy/paste)
[1:58] * nsoffer (~nsoffer@bzq-109-64-255-238.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[2:04] * rlrevell (~leer@184.52.129.221) has joined #ceph
[2:10] * tsuraan (~tsuraan@c-71-195-10-137.hsd1.mn.comcast.net) has joined #ceph
[2:10] * rlrevell (~leer@184.52.129.221) Quit (Read error: Connection reset by peer)
[2:10] <tsuraan> anybody know why https://github.com/ceph/ceph/pull/2937 didn't get merged into the 0.80.9 release?
[2:12] <tsuraan> fwiw, master RWLock.h looks like https://github.com/ceph/ceph/blob/master/src/common/RWLock.h, and 0.80.9 RWLock.h looks like https://github.com/ceph/ceph/blob/v0.80.9/src/common/RWLock.h . Seems like a weird omission, and rbd doesn't work for me with glibc 2.20
[2:15] * Vale (~ulterior@8Q4AAAV1X.tor-irc.dnsbl.oftc.net) Quit ()
[2:16] * jclm1 (~jclm@ip24-253-45-236.lv.lv.cox.net) has joined #ceph
[2:17] <tsuraan> I'm trying a build with the RWLock.h from master. I'm guessing that maybe there was some work done to be sure that the implicit unlock wasn't being used anywhere, but I can't find any commits to that effect
[2:17] * wushudoin (~wushudoin@c-76-19-134-77.hsd1.ma.comcast.net) has joined #ceph
[2:19] * jclm (~jclm@ip24-253-45-236.lv.lv.cox.net) Quit (Ping timeout: 480 seconds)
[2:27] * MRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Ping timeout: 480 seconds)
[2:30] * diq (~diq@nat8.460b.weebly.net) Quit (Quit: Leaving...)
[2:31] * Concubidated (~Adium@173-14-159-105-NewEngland.hfc.comcastbusiness.net) has joined #ceph
[2:32] * oro (~oro@72.28.92.10) Quit (Ping timeout: 480 seconds)
[2:36] * oro (~oro@72.28.92.10) has joined #ceph
[2:37] * sjm (~sjm@173-14-159-105-NewEngland.hfc.comcastbusiness.net) has joined #ceph
[2:39] * gardenshed (~gardenshe@176.27.51.101) has joined #ceph
[2:45] * starcoder (~Nanobot@exit1.torproxy.org) has joined #ceph
[2:45] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[2:47] * repj (12bd4284@107.161.19.109) has joined #ceph
[2:47] * fam_away is now known as fam
[2:47] <tsuraan> just found the "firefly" branch on github, and the RWLock is the correct one there. weird.
[2:47] * gardenshed (~gardenshe@176.27.51.101) Quit (Ping timeout: 480 seconds)
[2:50] <tsuraan> ah, that patch (authored 2014-11-15) wasn't merged into firefly until 2015-03-11, two days after the 0.80.9 release. maybe just an oversight
[2:51] * alram (~alram@173-14-159-105-NewEngland.hfc.comcastbusiness.net) has joined #ceph
[2:54] * dneary (~dneary@209-82-80-116.dedicated.allstream.net) has joined #ceph
[2:56] * fam is now known as fam_away
[2:57] * fam_away is now known as fam
[3:00] * jclm1 (~jclm@ip24-253-45-236.lv.lv.cox.net) Quit (Quit: Leaving.)
[3:05] * oro (~oro@72.28.92.10) Quit (Ping timeout: 480 seconds)
[3:06] * dneary (~dneary@209-82-80-116.dedicated.allstream.net) Quit (Ping timeout: 480 seconds)
[3:13] * srk (~srk@2602:306:836e:91f0:f405:586a:d54d:60d1) Quit (Ping timeout: 480 seconds)
[3:15] * pdrakeweb (~pdrakeweb@104.247.39.34) Quit (Remote host closed the connection)
[3:15] * starcoder (~Nanobot@8Q4AAAV2Q.tor-irc.dnsbl.oftc.net) Quit ()
[3:15] * oms101 (~oms101@a79-169-49-115.cpe.netcabo.pt) Quit (Ping timeout: 480 seconds)
[3:15] * delcake1 (~datagutt@7R2AAA3MZ.tor-irc.dnsbl.oftc.net) has joined #ceph
[3:17] * haomaiwang (~haomaiwan@114.111.166.250) Quit (Ping timeout: 480 seconds)
[3:18] * hellertime (~Adium@pool-173-48-154-80.bstnma.fios.verizon.net) has joined #ceph
[3:26] * oms101 (~oms101@a79-169-49-115.cpe.netcabo.pt) has joined #ceph
[3:37] * georgem (~Adium@64.114.24.114) has joined #ceph
[3:39] * alram (~alram@173-14-159-105-NewEngland.hfc.comcastbusiness.net) Quit (Quit: Lost terminal)
[3:45] * delcake1 (~datagutt@7R2AAA3MZ.tor-irc.dnsbl.oftc.net) Quit ()
[3:45] * Concubidated (~Adium@173-14-159-105-NewEngland.hfc.comcastbusiness.net) Quit (Remote host closed the connection)
[3:45] * Concubidated (~Adium@173-14-159-105-NewEngland.hfc.comcastbusiness.net) has joined #ceph
[3:46] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[3:52] * joshd (~jdurgin@207.194.157.2) has joined #ceph
[3:53] * shyu (~Shanzhi@119.254.196.66) has joined #ceph
[3:53] * srk (~srk@104-54-233-31.lightspeed.austtx.sbcglobal.net) has joined #ceph
[3:53] * georgem (~Adium@64.114.24.114) Quit (Quit: Leaving.)
[3:53] * georgem (~Adium@64.114.24.114) has joined #ceph
[3:55] * zhaochao (~zhaochao@125.39.8.226) has joined #ceph
[3:59] * jeroenvh (~jeroen@37.74.194.90) Quit (Ping timeout: 480 seconds)
[4:02] * georgem (~Adium@64.114.24.114) Quit (Quit: Leaving.)
[4:03] * fam is now known as fam_away
[4:04] * fam_away is now known as fam
[4:06] * joshd (~jdurgin@207.194.157.2) Quit (Quit: Leaving.)
[4:08] * JV (~chatzilla@204.14.239.106) Quit (Ping timeout: 480 seconds)
[4:11] * kefu (~kefu@114.92.123.24) has joined #ceph
[4:14] * wschulze (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) Quit (Quit: Leaving.)
[4:15] * oms101 (~oms101@a79-169-49-115.cpe.netcabo.pt) Quit (Ping timeout: 480 seconds)
[4:19] * darkid (~CoZmicShR@176.10.99.207) has joined #ceph
[4:23] * flisky (~Thunderbi@106.39.60.34) has joined #ceph
[4:26] * oms101 (~oms101@a79-169-49-115.cpe.netcabo.pt) has joined #ceph
[4:28] * zhiqiang (~zhiqiang@shzdmzpr01-ext.sh.intel.com) has joined #ceph
[4:36] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[4:39] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[4:42] * RodrigoUSA (~RodrigoUS@24.41.238.33) has joined #ceph
[4:42] <RodrigoUSA> hi everyone
[4:42] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[4:44] <RodrigoUSA> i'm new to ceph. i was trying to add a new monitor to my test cluster, then i don't know what happened, it just sat there adding, and now i cannot execute anything with the ceph command because it sits there
[4:45] <RodrigoUSA> ceph -s sits there and nothing happens, i have to CTRL+C because it freezes there
[4:49] * darkid (~CoZmicShR@2WVAACGTU.tor-irc.dnsbl.oftc.net) Quit ()
[4:54] * Swompie` (~mLegion@37.48.65.122) has joined #ceph
[4:54] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[4:55] * MRay (~MRay@107.171.161.165) has joined #ceph
[4:59] * KevinPerks (~Adium@173-14-159-105-NewEngland.hfc.comcastbusiness.net) Quit (Quit: Leaving.)
[5:15] * oms101 (~oms101@a79-169-49-115.cpe.netcabo.pt) Quit (Ping timeout: 480 seconds)
[5:17] * hellertime (~Adium@pool-173-48-154-80.bstnma.fios.verizon.net) Quit (Quit: Leaving.)
[5:18] * wschulze (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) has joined #ceph
[5:18] * shylesh (~shylesh@121.244.87.124) has joined #ceph
[5:19] * Kupo1 (~tyler.wil@23.111.254.159) Quit (Read error: Connection reset by peer)
[5:24] * Swompie` (~mLegion@8Q4AAAV4M.tor-irc.dnsbl.oftc.net) Quit ()
[5:24] * ylmson (~Schaap@ncc-1701-d.tor-exit.network) has joined #ceph
[5:26] * oms101 (~oms101@a79-169-49-115.cpe.netcabo.pt) has joined #ceph
[5:29] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: No route to host)
[5:29] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[5:30] <flisky> RodrigoUSA: could you take a look at the log? usually, it's under /var/log/ceph/.
[5:33] * mtanski (~mtanski@65.244.82.98) Quit (Ping timeout: 480 seconds)
[5:34] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[5:35] <RodrigoUSA> flisky, already did, nothing there :/
[5:37] <flisky> RodrigoUSA: increase the log level in /etc/ceph/ceph.conf, and restart the service
[5:38] <flisky> such as 'debug_mon = 20', 'debug_rados = 10', 'debug_ms = 1'
[5:38] <RodrigoUSA> undel [global] ?
[5:38] <flisky> you can dynamically change it by 'ceph tell mon.* injectargs', but it may be stuck in your case.
[5:38] <RodrigoUSA> under*
[5:39] <flisky> yes
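
A sketch of what flisky is suggesting, as it would look in /etc/ceph/ceph.conf on the monitor node (the levels are the ones quoted above):

    [global]
        debug mon = 20
        debug rados = 10
        debug ms = 1
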
[5:39] * jwilkins (~jwilkins@2601:9:4580:f4c:ea2a:eaff:fe08:3f1d) Quit (Remote host closed the connection)
[5:43] <RodrigoUSA> ok testing
[5:43] * hellertime (~Adium@pool-173-48-154-80.bstnma.fios.verizon.net) has joined #ceph
[5:44] * hellertime (~Adium@pool-173-48-154-80.bstnma.fios.verizon.net) Quit ()
[5:44] * srk (~srk@104-54-233-31.lightspeed.austtx.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[5:46] * Mika_c (~Mk@122.146.93.152) has joined #ceph
[5:47] * Vacuum__ (~Vacuum@i59F79CAE.versanet.de) has joined #ceph
[5:47] <RodrigoUSA> flisky, probing other monitors
[5:47] * overclk (~overclk@121.244.87.117) has joined #ceph
[5:47] <flisky> can you telnet to the other monitor's 6789 port?
[5:52] * Mika_c (~Mk@122.146.93.152) Quit (Read error: Connection reset by peer)
[5:52] * Mika_c (~Mk@122.146.93.152) has joined #ceph
[5:53] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Remote host closed the connection)
[5:53] <RodrigoUSA> flisky, no
[5:53] <RodrigoUSA> other monitor is not working
[5:53] <RodrigoUSA> I want to remove it
[5:53] * Vacuum_ (~Vacuum@88.130.223.208) Quit (Ping timeout: 480 seconds)
[5:54] * ylmson (~Schaap@5NZAACMKV.tor-irc.dnsbl.oftc.net) Quit ()
[5:54] * lobstar1 (~WedTM@balo.jager.io) has joined #ceph
[5:54] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[5:55] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[5:57] * jeroen (~jeroen@37.74.194.90) has joined #ceph
[5:58] * jeroen is now known as Guest5761
[5:58] * alram (~alram@173-14-159-105-NewEngland.hfc.comcastbusiness.net) has joined #ceph
[6:00] * Mika_c (~Mk@122.146.93.152) Quit (Quit: Konversation terminated!)
[6:00] * Mika_c (~Mk@122.146.93.152) has joined #ceph
[6:02] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[6:11] * Mika_c (~Mk@122.146.93.152) Quit (Quit: Konversation terminated!)
[6:11] * fmanana (~fdmanana@bl13-155-240.dsl.telepac.pt) has joined #ceph
[6:12] * Mika_c (~Mk@122.146.93.152) has joined #ceph
[6:13] * wschulze (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:13] * Mika_c (~Mk@122.146.93.152) Quit (Remote host closed the connection)
[6:14] * Guest5761 (~jeroen@37.74.194.90) Quit (Ping timeout: 480 seconds)
[6:15] * oms101 (~oms101@a79-169-49-115.cpe.netcabo.pt) Quit (Ping timeout: 480 seconds)
[6:17] * b0e (~aledermue@p54AFF851.dip0.t-ipconnect.de) has joined #ceph
[6:19] * fdmanana (~fdmanana@bl13-130-47.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[6:24] * lobstar1 (~WedTM@53IAAA4XC.tor-irc.dnsbl.oftc.net) Quit ()
[6:24] * PierreW (~SquallSee@spftor1e1.privacyfoundation.ch) has joined #ceph
[6:25] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[6:26] * oms101 (~oms101@a79-169-49-115.cpe.netcabo.pt) has joined #ceph
[6:28] * Concubidated1 (~Adium@173-14-159-105-NewEngland.hfc.comcastbusiness.net) has joined #ceph
[6:33] * calvinx (~calvin@101.100.172.246) has joined #ceph
[6:35] * Concubidated (~Adium@173-14-159-105-NewEngland.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[6:36] * kefu (~kefu@114.92.123.24) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:40] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[6:42] * JV (~chatzilla@static-50-53-42-60.bvtn.or.frontiernet.net) has joined #ceph
[6:46] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[6:49] * b0e (~aledermue@p54AFF851.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[6:54] * PierreW (~SquallSee@2WVAACGWM.tor-irc.dnsbl.oftc.net) Quit ()
[7:00] * mtanski (~mtanski@65.244.82.98) has joined #ceph
[7:02] * alram (~alram@173-14-159-105-NewEngland.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[7:11] * RodrigoUSA (~RodrigoUS@24.41.238.33) Quit (Quit: Leaving)
[7:15] * tacticus (~tacticus@v6.kca.id.au) Quit (Ping timeout: 480 seconds)
[7:15] * oms101 (~oms101@a79-169-49-115.cpe.netcabo.pt) Quit (Ping timeout: 480 seconds)
[7:15] * JV_ (~chatzilla@204.14.239.106) has joined #ceph
[7:20] * JV (~chatzilla@static-50-53-42-60.bvtn.or.frontiernet.net) Quit (Ping timeout: 480 seconds)
[7:20] * JV_ is now known as JV
[7:22] * kefu (~kefu@114.92.123.24) has joined #ceph
[7:24] * neobenedict (~Curt`@torsrvu.snydernet.net) has joined #ceph
[7:24] * tacticus (~tacticus@v6.kca.id.au) has joined #ceph
[7:26] * wushudoin (~wushudoin@c-76-19-134-77.hsd1.ma.comcast.net) Quit (Ping timeout: 480 seconds)
[7:26] * oms101 (~oms101@a79-169-49-115.cpe.netcabo.pt) has joined #ceph
[7:32] * jeroen (~jeroen@37.74.194.90) has joined #ceph
[7:33] * jeroen is now known as Guest5765
[7:41] * gaveen (~gaveen@123.231.121.221) has joined #ceph
[7:41] * cholcombe (~chris@pool-108-42-124-94.snfcca.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[7:41] * rdas (~rdas@121.244.87.116) has joined #ceph
[7:42] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[7:43] * vbellur (~vijay@121.244.87.117) has joined #ceph
[7:51] * Flynn (~stefan@ip-81-30-69-189.fiber.nl) has joined #ceph
[7:51] * Guest5765 (~jeroen@37.74.194.90) Quit (Ping timeout: 480 seconds)
[7:52] * dmn (~dmn@43.224.156.114) has joined #ceph
[7:54] * neobenedict (~Curt`@5NZAACMMK.tor-irc.dnsbl.oftc.net) Quit ()
[7:54] * Qiasfah (~Aal@176.10.99.200) has joined #ceph
[7:56] * pvh_sa (~pvh@105-237-252-162.access.mtnbusiness.co.za) Quit (Ping timeout: 480 seconds)
[7:56] * dopesong (~dopesong@lb1.mailer.data.lt) has joined #ceph
[7:58] * jeroen_ (~jeroen@37.74.194.90) has joined #ceph
[8:00] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) Quit (Ping timeout: 480 seconds)
[8:00] * overclk (~overclk@121.244.87.117) Quit (Ping timeout: 480 seconds)
[8:00] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:02] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) has joined #ceph
[8:02] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[8:02] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:04] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:04] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[8:06] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:06] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[8:06] * Mika_c (~Mk@122.146.93.152) has joined #ceph
[8:08] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:08] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[8:08] * dopesong (~dopesong@lb1.mailer.data.lt) Quit (Remote host closed the connection)
[8:09] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[8:10] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:10] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: No route to host)
[8:12] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:12] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[8:13] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:13] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[8:15] * gardenshed (~gardenshe@176.27.51.101) has joined #ceph
[8:15] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[8:15] * oms101 (~oms101@a79-169-49-115.cpe.netcabo.pt) Quit (Ping timeout: 480 seconds)
[8:15] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:17] * bvivek (~oftc-webi@idp01webcache6-z.apj.hpecore.net) has joined #ceph
[8:17] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[8:17] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:19] <bvivek> hi
[8:19] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[8:19] <bvivek> I am trying to deploy ceph-hammer on 4 nodes (admin, monitor and 2 OSDs). My servers are behind a proxy server, so when I need to run an apt-get update I need to export our proxy settings.
[8:19] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:20] <bvivek> the ceph-deploy command fails as it's not able to download the key file from ceph.com due to a proxy error
[8:20] <bvivek> please tell me how to add the proxy on the server for the ceph user
[8:21] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[8:21] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:23] * dopesong (~dopesong@lb1.mailer.data.lt) has joined #ceph
[8:23] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[8:23] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:24] * Qiasfah (~Aal@0SGAAAVQI.tor-irc.dnsbl.oftc.net) Quit ()
[8:24] * Morde (~LRWerewol@tor-exit1-readme.dfri.se) has joined #ceph
[8:25] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: No route to host)
[8:25] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:26] * oms101 (~oms101@a79-169-49-115.cpe.netcabo.pt) has joined #ceph
[8:27] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: No route to host)
[8:27] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:29] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:29] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[8:29] * haomaiwang (~haomaiwan@114.111.166.250) has joined #ceph
[8:30] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: No route to host)
[8:31] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:31] * gardenshed (~gardenshe@176.27.51.101) Quit (Remote host closed the connection)
[8:32] * dopesong (~dopesong@lb1.mailer.data.lt) Quit (Remote host closed the connection)
[8:32] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[8:32] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: No route to host)
[8:33] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:34] * schmee (~quassel@phobos.isoho.st) Quit (Quit: No Ping reply in 180 seconds.)
[8:34] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[8:35] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:35] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[8:35] * Sysadmin88 (~IceChat77@054527d3.skybroadband.com) Quit (Quit: Always try to be modest, and be proud about it!)
[8:36] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: No route to host)
[8:36] * espeer (~quassel@phobos.isoho.st) Quit (Quit: No Ping reply in 180 seconds.)
[8:37] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) Quit (Remote host closed the connection)
[8:37] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:39] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:39] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[8:40] * jnq (~jnq@95.85.22.50) Quit (Remote host closed the connection)
[8:40] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: No route to host)
[8:40] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:41] * nardial (~ls@dslb-178-009-182-130.178.009.pools.vodafone-ip.de) has joined #ceph
[8:42] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:42] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[8:42] <Nats_> put the proxy into /etc/apt/apt.conf permanently
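
A minimal example of what Nats_ means, with a hypothetical proxy host and port:

    # /etc/apt/apt.conf (or a file under /etc/apt/apt.conf.d/)
    Acquire::http::Proxy "http://proxy.example.com:3128/";
    Acquire::https::Proxy "http://proxy.example.com:3128/";
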
[8:43] * cok (~chk@2a02:2350:18:1010:5cf1:3b0:a53e:a120) has joined #ceph
[8:43] * gardenshed (~gardenshe@176.27.51.101) has joined #ceph
[8:44] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:44] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[8:45] <SamYaple> Nats_: the key ceph-deploy downloads is from a hard-coded url, I believe
[8:45] <SamYaple> bvivek: try exporting the HTTP_PROXY environment variable. it might get pulled in
[8:46] <SamYaple> alternatively just tweak the ceph-deploy script
[8:46] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[8:46] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:46] <Nats_> there's also no requirement to use ceph-deploy to install ceph
[8:47] * treenerd (~treenerd@85.193.140.98) has joined #ceph
[8:47] * dschneller (~textual@89.246.242.138) has joined #ceph
[8:47] <Nats_> you can get the packages physically onto the host in the normal way, and then use ceph-deploy for everything else
[8:47] * espeer (~quassel@phobos.isoho.st) has joined #ceph
[8:47] * schmee (~quassel@phobos.isoho.st) has joined #ceph
[8:47] * kawa2014 (~kawa@212.110.41.244) has joined #ceph
[8:48] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[8:48] <SamYaple> its been a while since ive used ceph-deploy, but i believe it pulls down the key each time Nats_, so if that fails the script bombs out
[8:48] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:48] <Nats_> you may be right, i dont really know what key you're referring to
[8:49] <SamYaple> the asc key for the apt repo
[8:49] * MRay (~MRay@107.171.161.165) Quit (Quit: This computer has gone to sleep)
[8:50] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[8:50] <Nats_> fair enough
[8:50] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:50] * ira (~ira@ip4d1646da.dynamic.kabel-deutschland.de) has joined #ceph
[8:50] <SamYaple> bvivek: looks like you should be alright if you add the repo and key manually. its in a function that i believe only runs if it needs to update the apt repo
[8:52] <SamYaple> update* as in add a new repo
[8:52] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[8:52] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:53] <Nats_> i do this on every new host for example http://pastebin.com/fTRPK39G
[8:53] * espeer (~quassel@phobos.isoho.st) Quit (Quit: No Ping reply in 180 seconds.)
[8:53] <Nats_> then ceph-deploy osd create as per the manual
[8:53] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: No route to host)
[8:54] <SamYaple> yup. that would work. adding the repo and key manually
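
A sketch of the manual route Nats_ and SamYaple describe, assuming an Ubuntu trusty node behind the hypothetical proxy above and the hammer repo; the release.asc key URL is left as a placeholder rather than guessed:

    export http_proxy=http://proxy.example.com:3128
    wget -qO- '<release.asc key URL from the ceph docs>' | sudo apt-key add -
    echo "deb http://ceph.com/debian-hammer/ trusty main" | sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt-get update && sudo apt-get install ceph
    # after which ceph-deploy can be used for everything else, as Nats_ notes
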
[8:54] * Morde (~LRWerewol@2WVAACGY4.tor-irc.dnsbl.oftc.net) Quit ()
[8:54] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:54] * ira (~ira@ip4d1646da.dynamic.kabel-deutschland.de) Quit ()
[8:54] * JV (~chatzilla@204.14.239.106) Quit (Ping timeout: 480 seconds)
[8:55] * dschneller (~textual@89.246.242.138) Quit (Ping timeout: 480 seconds)
[8:56] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: No route to host)
[8:56] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:56] * AtuM (~atum@postar-b.abakus.si) has joined #ceph
[8:58] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[8:58] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: No route to host)
[8:58] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[8:58] * Freddy (~Scrin@bolobolo1.torservers.net) has joined #ceph
[8:58] <Be-El> hi
[8:59] <SamYaple> hello Be-El
[8:59] <AtuM> Hi! I'm trying to install ceph (hammer) onto an ubuntu node using ceph-deploy. I'm using the official online "tutorial", but I get many problems. I want to first test it on a single node. I get problems with "gatherkeys", then after I manually create them, I get problems with osd activation... is this a problem specific to ubuntu 14.04 or a common problem with ceph?
[8:59] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: No route to host)
[9:00] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:00] * espeer (~quassel@phobos.isoho.st) has joined #ceph
[9:01] * overclk (~overclk@121.244.87.124) has joined #ceph
[9:02] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: No route to host)
[9:04] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:05] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: No route to host)
[9:06] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:07] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) has joined #ceph
[9:07] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: No route to host)
[9:08] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:09] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:09] * dopesong (~dopesong@lb1.mailer.data.lt) has joined #ceph
[9:09] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[9:10] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:11] * Flynn (~stefan@ip-81-30-69-189.fiber.nl) Quit (Quit: Flynn)
[9:11] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: No route to host)
[9:12] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:13] * jks (~jks@178.155.151.121) Quit (Read error: Connection reset by peer)
[9:13] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: No route to host)
[9:13] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:14] * jeroen_ (~jeroen@37.74.194.90) Quit (Ping timeout: 480 seconds)
[9:14] * nsoffer (~nsoffer@bzq-109-64-255-238.red.bezeqint.net) has joined #ceph
[9:15] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:15] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: No route to host)
[9:17] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:17] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[9:19] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:19] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[9:20] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[9:21] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: No route to host)
[9:21] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:21] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[9:23] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:23] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[9:25] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:25] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[9:25] * thomnico (~thomnico@2a01:e35:8b41:120:6c3f:ca6d:2f60:51f7) has joined #ceph
[9:27] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:27] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[9:28] * Freddy (~Scrin@2WVAACGZ1.tor-irc.dnsbl.oftc.net) Quit ()
[9:28] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: No route to host)
[9:29] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:30] * jks (~jks@178.155.151.121) has joined #ceph
[9:30] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: No route to host)
[9:31] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:31] * fsimonce (~simon@host11-35-dynamic.32-79-r.retail.telecomitalia.it) has joined #ceph
[9:33] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:33] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[9:34] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[9:34] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[9:35] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:35] * b0e (~aledermue@213.95.25.82) has joined #ceph
[9:36] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:36] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[9:38] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:38] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[9:40] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:40] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[9:42] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:42] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[9:44] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:44] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[9:44] * bvivek (~oftc-webi@idp01webcache6-z.apj.hpecore.net) Quit (Remote host closed the connection)
[9:46] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:46] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[9:48] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:48] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[9:48] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[9:50] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:50] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[9:52] * jeroen (~jeroen@ip-213-127-160-90.ip.prioritytelecom.net) has joined #ceph
[9:52] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[9:52] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:52] * jeroen is now known as Guest5776
[9:53] * analbeard (~shw@support.memset.com) has joined #ceph
[9:53] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: No route to host)
[9:54] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:56] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:56] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[9:56] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[9:57] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: No route to host)
[9:57] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:58] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:59] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[9:59] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[10:01] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[10:01] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[10:03] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[10:03] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[10:04] * dmn (~dmn@43.224.156.114) Quit (Ping timeout: 480 seconds)
[10:05] * Concubidated1 (~Adium@173-14-159-105-NewEngland.hfc.comcastbusiness.net) Quit (Quit: Leaving.)
[10:05] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: No route to host)
[10:05] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[10:07] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[10:07] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[10:09] * dmn (~dmn@43.224.156.116) has joined #ceph
[10:09] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[10:09] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[10:11] * daviddcc (~dcasier@77.151.197.84) Quit (Ping timeout: 480 seconds)
[10:11] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: No route to host)
[10:11] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[10:11] * nardial (~ls@dslb-178-009-182-130.178.009.pools.vodafone-ip.de) Quit (Read error: Connection reset by peer)
[10:11] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[10:13] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[10:13] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[10:15] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[10:15] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[10:17] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[10:17] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[10:19] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[10:19] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[10:21] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[10:21] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[10:23] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[10:23] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[10:25] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[10:25] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[10:25] * qstion (~qstion@37.157.144.44) has joined #ceph
[10:25] * oro (~oro@207.194.125.34) has joined #ceph
[10:26] * zhaochao_ (~zhaochao@111.161.77.238) has joined #ceph
[10:26] * pdrakewe_ (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: No route to host)
[10:27] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[10:28] * toast (~Azerothia@exit1.torproxy.org) has joined #ceph
[10:28] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[10:28] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[10:29] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[10:30] * karnan (~karnan@121.244.87.117) has joined #ceph
[10:30] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[10:31] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) has joined #ceph
[10:31] * zhaochao__ (~zhaochao@125.39.8.226) has joined #ceph
[10:32] * rendar (~I@host51-182-dynamic.20-87-r.retail.telecomitalia.it) has joined #ceph
[10:32] * zhaochao (~zhaochao@125.39.8.226) Quit (Ping timeout: 480 seconds)
[10:32] * zhaochao__ is now known as zhaochao
[10:34] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[10:35] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[10:38] * zhaochao_ (~zhaochao@111.161.77.238) Quit (Ping timeout: 480 seconds)
[10:39] * pdrakeweb (~pdrakeweb@173-166-50-177-newengland.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[10:39] * pvh_sa (~pvh@197.79.2.8) has joined #ceph
[10:39] * madkiss (~madkiss@2001:6f8:12c3:f00f:74ac:a3fa:98ba:2c41) Quit (Quit: Leaving.)
[10:39] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[10:41] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[10:43] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[10:43] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) has joined #ceph
[10:46] * jyoti-ranjan (~ranjanj@idp01webcache6-z.apj.hpecore.net) has joined #ceph
[10:46] * jyoti-ranjan (~ranjanj@idp01webcache6-z.apj.hpecore.net) Quit ()
[10:48] * jyoti-ranjan (~ranjanj@idp01webcache6-z.apj.hpecore.net) has joined #ceph
[10:48] * jyoti-ranjan (~ranjanj@idp01webcache6-z.apj.hpecore.net) Quit ()
[10:51] * jyoti-ranjan (~ranjanj@idp01webcache6-z.apj.hpecore.net) has joined #ceph
[10:51] <jyoti-ranjan> #join ceph-dev
[10:51] * jyoti-ranjan (~ranjanj@idp01webcache6-z.apj.hpecore.net) Quit ()
[10:52] * zhaochao__ (~zhaochao@125.39.8.226) has joined #ceph
[10:53] * jyoti-ranjan (~ranjanj@idp01webcache6-z.apj.hpecore.net) has joined #ceph
[10:55] <anorak> AtuM: I believe Ubuntu does not have any issue with ceph.
[10:55] * zhaochao (~zhaochao@125.39.8.226) Quit (Ping timeout: 480 seconds)
[10:56] * zhaochao__ is now known as zhaochao
[10:56] <anorak> AtuM: Also, by single node you mean monitors, osds, admin node all in one?
[10:56] <AtuM> anorak, yes.. all in one.. for testing
[10:57] <AtuM> anorak, I'd like to see how to extend the cluster starting from a single host.. if possible
[10:57] <anorak> AtuM: oh ok. I have never tested it all in one node but surely, ubuntu does not have am y issue with ceph or even with ceph-deploy
[10:57] <SamYaple> you can certainly do an AIO, but im not sure ceph-deploy supports it. It may, i just havent used it in a while
[10:57] <anorak> any*
[10:57] <jyoti-ranjan> latest release for hammer is 0.94.1.
[10:58] <AtuM> I have read that two osds need to be set-up on a single node.
[10:58] <jyoti-ranjan> Is it correct?
[10:58] <anorak> yes, it is possible to setup two osds or more in a single node
[10:58] * toast (~Azerothia@8Q4AAAWBO.tor-irc.dnsbl.oftc.net) Quit ()
[10:58] * JWilbur (~Inuyasha@185.77.129.54) has joined #ceph
[10:58] <SamYaple> without editing the cluster map you cant get a "health_ok" environment
[10:58] <SamYaple> but you can setup with a single OSD AtuM
[11:00] <AtuM> SamYaple, yes, but then i would not have any logical redundancy.. except if I had a raid1 beneath
[11:00] <SamYaple> Sure, but with AIO you don't have any real redundancy anyway
[11:01] <SamYaple> you can still replicate objects in a single OSD if you modify the crush map
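
A sketch of the crush map change SamYaple refers to: on a one-node test cluster the default rule places replicas across hosts, so switching the failure domain to osd lets placement (and HEALTH_OK) succeed. This is the standard decompile/edit/recompile workflow rather than anything specific to this log:

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # in crush.txt change:  step chooseleaf firstn 0 type host
    #                 to:   step chooseleaf firstn 0 type osd
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new
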
[11:08] * oms101 (~oms101@a79-169-49-115.cpe.netcabo.pt) Quit (Ping timeout: 480 seconds)
[11:08] * oms101 (~oms101@a79-169-49-115.cpe.netcabo.pt) has joined #ceph
[11:09] * brutuscat (~brutuscat@196.Red-88-19-187.staticIP.rima-tde.net) has joined #ceph
[11:10] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[11:14] * linjan (~linjan@195.110.41.9) has joined #ceph
[11:15] <AtuM> when deploying from an admin server - must the username performing ceph-deploy commands be the same as the username running the ceph daemon on nodes?
[11:19] * kawa2014 (~kawa@212.110.41.244) Quit (Ping timeout: 480 seconds)
[11:19] * kawa2014 (~kawa@2001:67c:1560:8007::aac:c1a6) has joined #ceph
[11:20] <AtuM> uh - I have not run "ceph-deploy mon create-initial", just "ceph-deploy mon create <nodename>". this could be the root of the problem...
[11:21] <AtuM> uh-oh.. even worse.. i've used cuttlefish's instructions on hammer installation.. :/
[11:21] <AtuM> my bad
[11:28] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[11:28] * JWilbur (~Inuyasha@8Q4AAAWCA.tor-irc.dnsbl.oftc.net) Quit ()
[11:28] * dug (~Freddy@exit1.ipredator.se) has joined #ceph
[11:33] * hellertime (~Adium@pool-173-48-154-80.bstnma.fios.verizon.net) has joined #ceph
[11:37] * Concubidated (~Adium@173-14-159-105-NewEngland.hfc.comcastbusiness.net) has joined #ceph
[11:41] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[11:47] * raw_ (~raw@5.79.71.195) has joined #ceph
[11:50] * haomaiwang (~haomaiwan@114.111.166.250) Quit (Remote host closed the connection)
[11:52] <raw_> i have a debian 8.0/jessie with ceph 0.94. i have installed some osds with ceph-deploy prepare and activate on node:/dev/sdc3 and node:/dev/sdd3. they got formatted with xfs. the problem is that those osds are not getting activated/mounted at boot time. other osds on /dev/sda (whole drive) work just fine.
[11:52] * oms101 (~oms101@a79-169-49-115.cpe.netcabo.pt) Quit (Ping timeout: 480 seconds)
[11:52] <raw_> i have already manually mounted the osds and checked for the sysvinit file, which is there on all osds
[11:53] <raw_> im a bit out of ideas.
[11:57] * dmn (~dmn@43.224.156.116) Quit (Ping timeout: 480 seconds)
[11:58] * dug (~Freddy@5NZAACMR8.tor-irc.dnsbl.oftc.net) Quit ()
[11:58] * tritonx (~Frymaster@185.77.129.88) has joined #ceph
[12:04] * oms101 (~oms101@a79-169-49-115.cpe.netcabo.pt) has joined #ceph
[12:07] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[12:08] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[12:14] * flisky (~Thunderbi@106.39.60.34) Quit (Ping timeout: 480 seconds)
[12:17] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[12:17] <raw_> after more reading, i think my problem is that i have manually created the sda3 partition without putting in the correct labels that ceph expects.
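For reference, a sketch of the relabelling being described, assuming GPT partitions and the hammer-era ceph-disk/udev activation path (device and partition numbers are illustrative; the GUID is the usual "ceph data" partition type code):

    # tag partition 3 of sdc as a ceph data partition so udev activates it at boot
    sgdisk --typecode=3:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sdc
    partprobe /dev/sdc
    # mount and start the OSD now rather than waiting for the next reboot
    ceph-disk activate /dev/sdc3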
[12:28] * tritonx (~Frymaster@8Q4AAAWDK.tor-irc.dnsbl.oftc.net) Quit ()
[12:28] * zapu (~Nijikokun@2WVAACG47.tor-irc.dnsbl.oftc.net) has joined #ceph
[12:32] * gardenshed (~gardenshe@176.27.51.101) Quit (Remote host closed the connection)
[12:33] * gardenshed (~gardenshe@90.216.134.197) has joined #ceph
[12:35] * Concubidated (~Adium@173-14-159-105-NewEngland.hfc.comcastbusiness.net) Quit (Quit: Leaving.)
[12:37] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[12:39] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[12:40] * nardial (~ls@dslb-178-009-182-130.178.009.pools.vodafone-ip.de) has joined #ceph
[12:42] * haomaiwang (~haomaiwan@125.33.114.25) has joined #ceph
[12:42] * tw0fish (~tw0fish@UNIX5.ANDREW.CMU.EDU) Quit (Read error: Connection reset by peer)
[12:43] * kawa2014 (~kawa@2001:67c:1560:8007::aac:c1a6) Quit (Ping timeout: 480 seconds)
[12:48] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[12:52] * oms101 (~oms101@a79-169-49-115.cpe.netcabo.pt) Quit (Ping timeout: 480 seconds)
[12:53] * dmn (~dmn@43.224.156.116) has joined #ceph
[12:55] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[12:57] * fam is now known as fam_away
[12:58] * overclk (~overclk@121.244.87.124) Quit (Ping timeout: 480 seconds)
[12:58] * zapu (~Nijikokun@2WVAACG47.tor-irc.dnsbl.oftc.net) Quit ()
[12:58] * Mousey (~Wizeon@spftor1e1.privacyfoundation.ch) has joined #ceph
[13:01] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[13:02] * Be-El (~quassel@fb08-bioinf28.computational.bio.uni-giessen.de) has joined #ceph
[13:03] * oms101 (~oms101@a79-169-49-115.cpe.netcabo.pt) has joined #ceph
[13:04] * fam_away is now known as fam
[13:06] * overclk (~overclk@121.244.87.117) has joined #ceph
[13:07] * oro (~oro@207.194.125.34) Quit (Ping timeout: 480 seconds)
[13:13] * KevinPerks (~Adium@173-14-159-105-NewEngland.hfc.comcastbusiness.net) has joined #ceph
[13:15] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) has left #ceph
[13:18] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) has joined #ceph
[13:18] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) Quit ()
[13:18] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) has joined #ceph
[13:19] * oms101 (~oms101@a79-169-49-115.cpe.netcabo.pt) Quit (Ping timeout: 480 seconds)
[13:20] * oms101 (~oms101@a79-169-49-115.cpe.netcabo.pt) has joined #ceph
[13:23] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[13:25] * brutuscat (~brutuscat@196.Red-88-19-187.staticIP.rima-tde.net) Quit (Remote host closed the connection)
[13:27] * pvh_sa (~pvh@197.79.2.8) Quit (Ping timeout: 480 seconds)
[13:28] * Mousey (~Wizeon@8Q4AAAWEJ.tor-irc.dnsbl.oftc.net) Quit ()
[13:28] * Maariu5_ (~Kristophe@destiny.enn.lu) has joined #ceph
[13:29] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[13:31] * gardenshed (~gardenshe@90.216.134.197) Quit (Ping timeout: 480 seconds)
[13:31] * Mika_c (~Mk@122.146.93.152) Quit (Quit: Konversation terminated!)
[13:31] * sjm (~sjm@173-14-159-105-NewEngland.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[13:37] * Flynn (~stefan@89.207.24.152) has joined #ceph
[13:39] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[13:42] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[13:44] * shylesh (~shylesh@121.244.87.124) Quit (Remote host closed the connection)
[13:45] * shyu (~Shanzhi@119.254.196.66) Quit (Remote host closed the connection)
[13:47] * adeel (~adeel@2602:ffc1:1:face:88b4:4ded:6ea8:87b8) Quit (Remote host closed the connection)
[13:48] * adeel (~adeel@fw1.ridgeway.scc-zip.net) has joined #ceph
[13:49] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[13:52] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[13:54] * nsoffer (~nsoffer@bzq-109-64-255-238.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[13:56] * rotbeard (~redbeard@x5f74d7eb.dyn.telefonica.de) has joined #ceph
[13:57] * nardial (~ls@dslb-178-009-182-130.178.009.pools.vodafone-ip.de) Quit (Quit: Leaving)
[13:58] * xrsanet_ is now known as xrsanet
[13:58] * Maariu5_ (~Kristophe@7R2AAA35U.tor-irc.dnsbl.oftc.net) Quit ()
[13:58] * mollstam (~Zyn@ncc-1701-a.tor-exit.network) has joined #ceph
[14:00] * MACscr (~Adium@2601:d:c800:de3:7540:b508:7d79:f791) has joined #ceph
[14:08] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:12] * fam is now known as fam_away
[14:13] * KevinPerks (~Adium@173-14-159-105-NewEngland.hfc.comcastbusiness.net) Quit (Quit: Leaving.)
[14:14] <dmn> hi, I'm trying to install giant version of ceph and have been following this guide - http://ceph.com/docs/master/start/quick-ceph-deploy/
[14:14] <dmn> but when I try to run ceph-deploy install
[14:14] <dmn> it tries to install hammer version on the node specified
[14:15] <alfredodeza> dmn: with what distro, and if you could provide a full paste of the output somewhere that would be great
[14:15] <dmn> 1 sec
[14:17] * AtuM (~atum@postar-b.abakus.si) Quit (Quit: Leaving)
[14:18] <dmn> alfredodeza: http://pastebin.com/3bvKQ8at
[14:18] <dmn> distro is centos 6.6
[14:19] <alfredodeza> ok
[14:19] <alfredodeza> this looks like a yum/repo/ issue, you may try to clean everything
[14:19] <alfredodeza> `yum clean all` I think is the command
[14:19] <alfredodeza> and then try again
[14:19] <anorak> dmn: ceph-deploy install --release giant YOUR_CEPH_NODE
[14:20] <alfredodeza> oh
[14:20] <alfredodeza> dmn: you want to install *giant* and not hammer
[14:20] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[14:20] <alfredodeza> well that would apply too I guess
[14:20] <dmn> was looking for something like that
[14:20] <dmn> thanks
[14:21] <alfredodeza> yes if you want to install a specific release you can specify it with ``--release``. In `ceph-deploy install --help` you can see all the options
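Put together, a minimal sequence for this case (node name is illustrative):

    # on the target node: clear any stale yum repo metadata
    yum clean all
    # from the admin node: install the giant packages explicitly
    ceph-deploy install --release giant ceph-node1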
[14:23] * zhaochao (~zhaochao@125.39.8.226) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 38.0/20150517020246])
[14:23] * sjm (~sjm@173-14-159-105-NewEngland.hfc.comcastbusiness.net) has joined #ceph
[14:24] * vbellur (~vijay@121.244.87.124) has joined #ceph
[14:26] <dmn> thanks a lot for your help
[14:26] <dmn> alfredodeza, anorak
[14:26] <anorak> your welcome :)
[14:27] <alfredodeza> my welcome
[14:27] * srk (~srk@2602:306:836e:91f0:4834:b6bb:69a7:9246) has joined #ceph
[14:28] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has joined #ceph
[14:28] * mollstam (~Zyn@2FBAABZUF.tor-irc.dnsbl.oftc.net) Quit ()
[14:28] * raindog (~starcoder@5NZAACMU7.tor-irc.dnsbl.oftc.net) has joined #ceph
[14:29] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[14:30] * b0e (~aledermue@213.95.25.82) Quit (Ping timeout: 480 seconds)
[14:32] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has left #ceph
[14:33] * jdillaman (~jdillaman@c-69-143-187-98.hsd1.va.comcast.net) has joined #ceph
[14:34] <redf_> hi, is there a way to set preferred osd on a per client base?
[14:36] * dmn (~dmn@43.224.156.116) Quit (Quit: Leaving)
[14:38] * pvh_sa (~pvh@197.79.2.8) has joined #ceph
[14:45] * pdrakeweb (~pdrakeweb@104.247.39.34) has joined #ceph
[14:45] * oro (~oro@207.194.125.34) has joined #ceph
[14:45] * jdillaman (~jdillaman@c-69-143-187-98.hsd1.va.comcast.net) Quit (Quit: jdillaman)
[14:47] * jdillaman (~jdillaman@c-69-143-187-98.hsd1.va.comcast.net) has joined #ceph
[14:50] * jdillaman (~jdillaman@c-69-143-187-98.hsd1.va.comcast.net) Quit ()
[14:53] * sjm (~sjm@173-14-159-105-NewEngland.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[14:55] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[14:57] * wschulze (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) has joined #ceph
[14:58] * pvh_sa (~pvh@197.79.2.8) Quit (Ping timeout: 480 seconds)
[14:58] * raindog (~starcoder@5NZAACMU7.tor-irc.dnsbl.oftc.net) Quit ()
[14:58] * SinZ|offline (~Catsceo@185.77.129.54) has joined #ceph
[15:00] * daviddcc (~dcasier@LCaen-656-1-144-187.w217-128.abo.wanadoo.fr) has joined #ceph
[15:01] * kefu (~kefu@114.92.123.24) Quit (Max SendQ exceeded)
[15:02] * kefu (~kefu@114.92.123.24) has joined #ceph
[15:03] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[15:04] * KevinPerks (~Adium@nat-pool-bos-t.redhat.com) has joined #ceph
[15:06] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:07] * srk (~srk@2602:306:836e:91f0:4834:b6bb:69a7:9246) Quit (Ping timeout: 480 seconds)
[15:08] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[15:09] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[15:11] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit (Quit: Leaving.)
[15:11] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[15:13] * Concubidated (~Adium@nat-pool-bos-t.redhat.com) has joined #ceph
[15:15] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Quit: Konversation terminated!)
[15:16] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[15:19] * dyasny (~dyasny@198.251.54.234) has joined #ceph
[15:19] * dyasny_ (~dyasny@198.251.54.234) has joined #ceph
[15:19] * dyasny (~dyasny@198.251.54.234) Quit (Read error: Connection reset by peer)
[15:21] * sugoruyo (~georgev@paarthurnax.esc.rl.ac.uk) has joined #ceph
[15:21] <sugoruyo> anyone have any idea why I'd be getting 40MB/sec with Ceph on a local disk that does around 130MB/sec?
[15:22] <raw_> are you using btrfs?
[15:23] * nsoffer (~nsoffer@bzq-109-66-155-139.red.bezeqint.net) has joined #ceph
[15:23] <raw_> sugoruyo, have you tested directly in the osd mount point?
[15:24] <raw_> sugoruyo, long running deployment or fresh install?
[15:24] <burley> are you journaling to that same disk?
[15:24] * xarses (~andreww@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[15:25] * alram (~alram@nat-pool-bos-t.redhat.com) has joined #ceph
[15:25] <sugoruyo> raw_: it's XFS, installed a month ago, I get 40MB/sec throughput on a single writer, 120-140 if I dd to a file from /dev/zero on that mount point
[15:25] <sugoruyo> burley: I think so, let me double check
[15:26] <burley> if so, you aren't doing sequential writes, like you would be in your dd test
[15:26] <burley> you are doing the sequential journal write, and then another set back to the disk itself for the data write
[15:26] <sugoruyo> burley: yeah looks like I have a 10 gig journal file in the directory
[15:27] * brutuscat (~brutuscat@196.Red-88-19-187.staticIP.rima-tde.net) has joined #ceph
[15:27] <sugoruyo> burley: so I'm seeking back and forth between journal file and data file?
[15:27] <burley> if you use iostat -x 10 DEVICE -- you can watch the util%, which should be good for a single spinning device to tell you how hard it is being driven
[15:28] <burley> you can't trust it for a SSD or a RAID device or what not, but should be good on a normal hard drive
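A sketch of the comparison being made, assuming the OSD's data disk is /dev/sdb and the OSD mounts at the default path (both are illustrative):

    # watch how hard the OSD disk is being driven while the client write runs
    iostat -x 10 /dev/sdb
    # sequential-write baseline on the same filesystem, forcing data out to disk
    dd if=/dev/zero of=/var/lib/ceph/osd/ceph-0/ddtest bs=1M count=4096 conv=fdatasync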
[15:28] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Quit: Konversation terminated!)
[15:28] * SinZ|offline (~Catsceo@0SGAAAVZ3.tor-irc.dnsbl.oftc.net) Quit ()
[15:28] <sugoruyo> burley: lemme run a test and see
[15:28] * hyst (~Drezil@9U1AAA3P4.tor-irc.dnsbl.oftc.net) has joined #ceph
[15:30] * MrBy (~MrBy@85.115.23.2) has joined #ceph
[15:31] * daviddcc (~dcasier@LCaen-656-1-144-187.w217-128.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[15:31] <raw_> sugoruyo, btrfs has the advantage that it can write journal and data at once, so it is not written twice. i had a pool with btrfs in production. while this was fast in the beginning, it was crawling slow after 6 months or so.
[15:33] <sugoruyo> raw_: did you ever figure out why that happened? it'd be interesting to know
[15:33] * wushudoin (~wushudoin@nat-pool-bos-u.redhat.com) has joined #ceph
[15:33] <raw_> but i think thats a general problem with btrfs in write intensive environments. a btrfs balance fixes that to a point. i have read that starting from linux 3.18 btrfs does auto-balance so this could be resolved.
[15:33] <sugoruyo> burley: I'm seeing %util go to 99.XX% and avg-cpu iowait% to ~15%
[15:33] * raghu (~raghu@121.244.87.124) has joined #ceph
[15:34] <sugoruyo> I think it's the journal that's killing me... which is what I'd though of too...
[15:34] <sugoruyo> thought*
[15:35] <burley> so if you want more, break your journal out to a different device
[15:37] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[15:37] <sugoruyo> burley: yeah, that's a tough one to figure out...
[15:38] * pdrakeweb (~pdrakeweb@104.247.39.34) Quit (Read error: Connection reset by peer)
[15:38] <burley> we ended up doing that, its a cost effective way to get more out of the same hardware
[15:38] <raw_> sugoruyo, btrfs is a nice option, but it is not as stable as xfs, so an extra device for xfs journal is recommended. you also can use a single ssd as journal for multiple osds.
[15:38] * pdrakeweb (~pdrakeweb@104.247.39.34) has joined #ceph
[15:39] <sugoruyo> raw_: yeah, it's also a matter of having enough SSD space for the journals I think
[15:39] <burley> sugoruyo: How many OSDs per host?
[15:40] <raw_> journals are about 5-10gb each, thats not the problem
[15:40] <sugoruyo> burley: this is a "dev" cluster used for testing stuff out, it's got 6 disk hosts with I think 7 disks each
[15:40] <sugoruyo> they also have an SSD I'm checking to see what it is
[15:41] <raw_> sugoruyo, if you want to take the risk, you can try btrfs, but watch for the slowdowns after some months. actually, it is not hard to drop an osd and re-add it with a different filesystem backend in production if that occurs.
[15:41] <burley> so option one, to squeeze just a little bit more out, is to create the first partition on the disk as a raw partition to journal to; it requires no extra hardware for a little better performance
[15:41] <burley> then you don't have the filesystem overhead and the journal is on the fastest part of the disk
[15:42] * tw0fish (~tw0fish@UNIX3.ANDREW.CMU.EDU) has joined #ceph
[15:42] * treenerd (~treenerd@85.193.140.98) Quit (Ping timeout: 480 seconds)
[15:42] <sugoruyo> so they all have 128GB SSDs in them
[15:42] <sugoruyo> which are completely unused at the moment
[15:42] <raw_> sugoruyo, one? or more?
[15:42] <burley> but, IME you can't get better than ~4:1 ratio of disks to SSD journal drives via SATA
[15:43] <sugoruyo> raw_: each of the 6 machines has a single SSD of 128GB capacity
[15:43] <burley> so you'd probably be better off journaling to the drive itself, but its not too hard to set it up and test it
[15:44] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[15:44] <sugoruyo> burley, raw_ the problem is what do I do for the main cluster which doesn't have SSDs in it, maybe I should set a couple of the spinning disks aside for the journals... but they'd still suffer contention between all the journals
[15:44] * kefu (~kefu@114.92.123.24) Quit (Max SendQ exceeded)
[15:45] <burley> sugoruyo: Make the journal the first partition on the drive
[15:45] <sugoruyo> burley: yeah, I'll try that
[15:45] * kefu (~kefu@114.92.123.24) has joined #ceph
[15:45] <burley> that's free, and should help eke out a bit more with no money
[15:46] <raw_> sugoruyo, yes, first partition is also what im doing with xfs too.
[15:47] <burley> and remember, don't format the first partition, just use the raw device
[15:47] <sugoruyo> raw_: are you using both xfs and btrfs?
[15:47] * jclm (~jclm@ip24-253-45-236.lv.lv.cox.net) has joined #ceph
[15:47] <sugoruyo> burley: yeah, just tell ceph.conf to use the /dev/sdX# for the journal
[15:48] <burley> no, not in ceph.conf, just make the journal symlink off to the device
[15:48] <burley> else ceph.conf gets full of clutter fast
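A rough sketch of the journal move burley describes, assuming osd.0, a sysvinit system and a raw first partition /dev/sdb1 set aside for the journal (IDs and devices are illustrative; a /dev/disk/by-partuuid/ path is safer against device renames):

    service ceph stop osd.0
    ceph-osd -i 0 --flush-journal
    # replace the journal file with a symlink to the raw partition
    rm /var/lib/ceph/osd/ceph-0/journal
    ln -s /dev/sdb1 /var/lib/ceph/osd/ceph-0/journal
    ceph-osd -i 0 --mkjournal
    service ceph start osd.0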
[15:49] <raw_> sugoruyo, i have used btrfs for a half year, then redeployed all osds with xfs because that bit of performance does not actually matter to me. theoretically btrfs can offer better speed, but you have to expect problems.
[15:49] * b0e (~aledermue@213.95.25.82) has joined #ceph
[15:50] * rlrevell (~leer@vbo1.inmotionhosting.com) Quit (Ping timeout: 480 seconds)
[15:50] <raw_> in that half year, i only had that slowdown problem that is probably solved by autobalance in newer kernels. while i had btrfs filesystems crashing in some places, the ones serving ceph did not cause other problems.
[15:51] <raw_> if you dont use snapshots, btrfs-raid, compression and other fancy features :)
[15:52] <raw_> sugoruyo, im also using ssd tiering which helps a lot with everything
[15:52] <raw_> see here: http://ceph.com/docs/master/rados/operations/cache-tiering/ and http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
[15:53] <redf_> is there a way to set preferred osd on a per client base? :)
[15:53] <raw_> actually 80% of my data lives in that ssd tier, giving 300mb/s writes to all ceph clients even with xfs
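A condensed sketch of the cache-tier setup from the first link above, assuming an SSD-backed pool named "cache" in front of a spinning pool named "data" (pool names are illustrative; target sizes and flush thresholds are omitted):

    ceph osd tier add data cache
    ceph osd tier cache-mode cache writeback
    ceph osd tier set-overlay data cache
    # the cache pool needs a hit set so the tiering agent can track object usage
    ceph osd pool set cache hit_set_type bloom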
[15:54] <Be-El> raw_: if you use btrfs without snapshots as osd, you lose the parallel journal write feature
[15:54] <burley> redf_: no
[15:54] <Be-El> raw_: on the other hand the way an osd uses btrfs snapshots may result in stability problem with older kernels (e.g. ubuntu trusty kernels)
[15:55] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[15:55] <raw_> Be-El, "without snapshots"? i just mean that im not creating snapshots manually
[15:55] <Be-El> raw_: ah, ok
[15:55] <burley> redf_: though you could create a pool that uses a separate set of OSDs dedicated for some purpose and only used on one client
[15:55] <Be-El> raw_: there's also an option to disable btrfs snapshots for journal commits
[15:56] <raw_> Be-El, yeah, ok. i did not know.
[15:56] <Be-El> redf_: if you use librados directly you might be able to use different osds as primary for different clients
[15:57] <Be-El> redf_: but that's probably not what you want to do
[15:58] <redf_> nah, qemu
[15:58] * hyst (~Drezil@9U1AAA3P4.tor-irc.dnsbl.oftc.net) Quit ()
[15:58] * Lite (~Popz@53IAAA5HM.tor-irc.dnsbl.oftc.net) has joined #ceph
[15:59] <Be-El> brb, changing desktops
[15:59] <redf_> i want to "localize" vm ios, yeah, running osd with hv
[15:59] * Be-El (~quassel@fb08-bioinf28.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[15:59] <burley> redf_: You want all IOs on the local drives?
[16:00] <burley> and replication?
[16:00] <burley> use RAID
[16:00] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[16:00] <burley> and local storage
[16:01] * tsuraan (~tsuraan@c-71-195-10-137.hsd1.mn.comcast.net) Quit (Quit: leaving)
[16:02] <redf_> no
[16:02] <redf_> it is a poor man solution
[16:02] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[16:02] <Be-El> re
[16:02] <redf_> got some old boxes with qemu on them
[16:02] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[16:02] <redf_> want to setup a ceph
[16:02] <redf_> with slow storage net
[16:03] <redf_> thats why "localized" ios
[16:03] * JV (~chatzilla@204.14.239.107) has joined #ceph
[16:03] <redf_> free mem would do nicely as a fs read cache
[16:04] * gaveen (~gaveen@123.231.121.221) Quit (Remote host closed the connection)
[16:05] <sugoruyo> redf_: so what do you want Ceph for? 80% of the point of Ceph is to /distribute/ your writes uniformly
[16:06] <redf_> what for? to have all my data on all nodes?
[16:08] <sugoruyo> redf_: so you want to replicate all data to all nodes?
[16:08] <redf_> yup
[16:09] * rlrevell (~leer@vbo1.inmotionhosting.com) Quit (Remote host closed the connection)
[16:09] <sugoruyo> it doesn't sound to me like Ceph is what you want...
[16:11] <redf_> why not? i believe in this case setting client preference could be a great advantage
[16:14] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[16:15] * rlrevell (~leer@vbo1.inmotionhosting.com) Quit (Remote host closed the connection)
[16:16] <redf_> never did anything with crush but since it can be "rack" aware it should be possible to make it "node" aware... not sure
[16:16] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[16:16] <tw0fish> Anyone know if mod_fcgid is comparable to mod_fastcgi as far as working with the radosgw stuff?
[16:16] <redf_> The CRUSH algorithm determines how to store and retrieve data by computing data storage locations. CRUSH empowers Ceph clients to communicate with OSDs directly rather than through a centralized server or broker
[16:22] <tw0fish> Maybe an even better question would be has anyone gotten the radosgw setup on rhel/centos 7? :)
[16:28] * raghu (~raghu@121.244.87.124) Quit (Remote host closed the connection)
[16:28] * xoritor (~xoritor@cpe-72-177-85-116.austin.res.rr.com) Quit (Quit: Leaving)
[16:28] * Lite (~Popz@53IAAA5HM.tor-irc.dnsbl.oftc.net) Quit ()
[16:30] * nyov (~nyov@178.33.33.184) Quit (Ping timeout: 480 seconds)
[16:31] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[16:33] * KapiteinKoffie (~Random@178-175-128-50.ip.as43289.net) has joined #ceph
[16:34] * cok (~chk@2a02:2350:18:1010:5cf1:3b0:a53e:a120) Quit (Quit: Leaving.)
[16:34] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[16:35] <tw0fish> Note
[16:35] <tw0fish> Previous versions of Ceph shipped with mod_fastcgi. The current version ships with mod_proxy_fcgi instead.
[16:36] <tw0fish> not only do you have to deal with systemd in rhel 7, but also the fact that they renamed who knows how many RPMs. meh. </rant>
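For what it's worth, a minimal sketch of the Apache side with mod_proxy_fcgi on RHEL/CentOS 7 (the server name and the fastcgi endpoint are assumptions; the matching radosgw fastcgi settings in ceph.conf are left to the hammer docs):

    cat > /etc/httpd/conf.d/rgw.conf <<'EOF'
    <VirtualHost *:80>
        ServerName rgw.example.com
        # mod_proxy_fcgi hands requests to wherever radosgw exposes its fastcgi socket
        ProxyPass / fcgi://127.0.0.1:9000/
    </VirtualHost>
    EOF
    systemctl restart httpd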
[16:38] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[16:42] * shylesh (~shylesh@1.23.174.252) has joined #ceph
[16:43] * rlrevell (~leer@vbo1.inmotionhosting.com) has left #ceph
[16:43] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[16:43] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[16:44] * haomaiwang (~haomaiwan@125.33.114.25) Quit (Remote host closed the connection)
[16:46] * tw0fish (~tw0fish@UNIX3.ANDREW.CMU.EDU) Quit (Quit: bbl)
[16:46] <raw_> what is the best way to backup cephfs? currently i think its rsync right?
[16:47] * thomnico (~thomnico@2a01:e35:8b41:120:6c3f:ca6d:2f60:51f7) Quit (Quit: Ex-Chat)
[16:47] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[16:53] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[16:56] * nhm (~nhm@184-97-242-33.mpls.qwest.net) has joined #ceph
[16:56] * ChanServ sets mode +o nhm
[16:56] * Flynn (~stefan@89.207.24.152) Quit (Quit: Flynn)
[16:58] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[16:59] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[17:00] * MRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[17:01] * JV (~chatzilla@204.14.239.107) Quit (Ping timeout: 480 seconds)
[17:03] * KapiteinKoffie (~Random@2WVAACHBC.tor-irc.dnsbl.oftc.net) Quit ()
[17:03] * ain (~Kealper@7R2AAA4BM.tor-irc.dnsbl.oftc.net) has joined #ceph
[17:03] <Be-El> raw_: since cephfs is a filesystem you can use any backup solution
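A minimal sketch of the rsync approach, assuming cephfs is mounted at /mnt/cephfs and the target is plain storage on another host (paths are illustrative):

    # -aHAX keeps hard links, ACLs and xattrs; --delete mirrors removals
    rsync -aHAX --delete /mnt/cephfs/ backuphost:/backup/cephfs/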
[17:04] * srk (~srk@32.97.110.56) has joined #ceph
[17:05] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:07] * brutuscat (~brutuscat@196.Red-88-19-187.staticIP.rima-tde.net) Quit (Remote host closed the connection)
[17:13] * dopesong_ (~dopesong@88-119-94-55.static.zebra.lt) has joined #ceph
[17:14] * daniel2_ (~dshafer@0001b605.user.oftc.net) has joined #ceph
[17:15] <sugoruyo> redf_: CRUSH can be told to pick any node in the tree as long as it's described in the hierarchy definition. your problem is that you want your client-side logic to preferentially read/write data from your locally hosted OSDs, and AFAIK that's not part of the CRUSH feature set
[17:16] <sugoruyo> you can create a ruleset which will make sure all data is copied to all disks in however many replicas you want, but then you're left with as much capacity as your smallest disk has
[17:16] <sugoruyo> you'd still be missing the "preference" for the local disk though
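The kind of ruleset meant above would look roughly like this (a sketch against the default CRUSH map; the pool name and the three-node count are illustrative):

    # decompiled rule body: one replica per host
    #   step take default
    #   step chooseleaf firstn 0 type host
    #   step emit
    # then keep as many replicas as there are nodes, e.g. 3:
    ceph osd pool set rbd size 3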
[17:16] * dopesong_ (~dopesong@88-119-94-55.static.zebra.lt) Quit (Remote host closed the connection)
[17:17] <Be-El> what's "local"?
[17:17] <Be-El> a client usually does not know where an osd is located
[17:20] * dopesong (~dopesong@lb1.mailer.data.lt) Quit (Ping timeout: 480 seconds)
[17:20] <sugoruyo> Be-El: redf_ was talking about Ceph OSDs and VM hypervisors co-existing on the same machine
[17:21] <Be-El> sugoruyo: i know, and he wants to prefer the local disk. but there's no concept of 'local' in ceph
[17:21] * tupper (~tcole@nat-pool-bos-u.redhat.com) has joined #ceph
[17:22] * kefu is now known as kefu|afk
[17:22] <sugoruyo> Be-El: that's what we were saying: the logic for the concept of data locality is not part of Ceph
[17:27] <rkeene> My systems that use Ceph (i.e., consumers of Ceph resources) have no disks at all -- if they did, the individual disk would be slower than reading stripes from many Ceph servers anyway
[17:27] * kefu|afk (~kefu@114.92.123.24) Quit (Quit: Textual IRC Client: www.textualapp.com)
[17:28] <raw_> rkeene, hrhr, are you also booting your machines from ceph?
[17:29] <rkeene> Essentially, yes
[17:29] <rkeene> They're booted off the network -- the TFTP server their data is coming from reads the kernel and boot files from a filesystem that lives on a Ceph RBD
[17:33] * ain (~Kealper@7R2AAA4BM.tor-irc.dnsbl.oftc.net) Quit ()
[17:33] * brutuscat (~brutuscat@196.Red-88-19-187.staticIP.rima-tde.net) has joined #ceph
[17:33] * Aramande_ (~Kyso_@balo.jager.io) has joined #ceph
[17:34] * linjan (~linjan@195.110.41.9) Quit (Ping timeout: 480 seconds)
[17:38] * championofcyrodi (~championo@50-205-35-98-static.hfc.comcastbusiness.net) has joined #ceph
[17:38] * daniel2_ (~dshafer@0001b605.user.oftc.net) Quit (Read error: Connection reset by peer)
[17:42] * nils__ (~nils@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[17:43] * thomnico (~thomnico@2a01:e35:8b41:120:6c3f:ca6d:2f60:51f7) has joined #ceph
[17:44] * ifur (~osm@0001f63e.user.oftc.net) Quit (Quit: leaving)
[17:46] * bandrus (~brian@nat-pool-bos-t.redhat.com) has joined #ceph
[17:46] * ifur (~osm@0001f63e.user.oftc.net) has joined #ceph
[17:47] * ifur (~osm@0001f63e.user.oftc.net) Quit ()
[17:48] * pdrakewe_ (~pdrakeweb@104.247.39.34) has joined #ceph
[17:48] * pdrakeweb (~pdrakeweb@104.247.39.34) Quit (Read error: Connection reset by peer)
[17:49] * srk (~srk@32.97.110.56) Quit (Ping timeout: 480 seconds)
[17:50] * ifur (~osm@0001f63e.user.oftc.net) has joined #ceph
[17:51] * bandrus (~brian@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving.)
[17:51] * bandrus (~brian@nat-pool-bos-t.redhat.com) has joined #ceph
[17:55] * madkiss (~madkiss@2001:6f8:12c3:f00f:a14e:d843:ebf3:1930) has joined #ceph
[17:56] * masterom1 (~ivan@93-142-228-77.adsl.net.t-com.hr) has joined #ceph
[17:58] * thomnico (~thomnico@2a01:e35:8b41:120:6c3f:ca6d:2f60:51f7) Quit (Quit: Ex-Chat)
[17:59] * haomaiwang (~haomaiwan@114.111.166.250) has joined #ceph
[17:59] * JV (~chatzilla@204.14.239.54) has joined #ceph
[17:59] * rotbeard (~redbeard@x5f74d7eb.dyn.telefonica.de) Quit (Quit: Leaving)
[18:00] * srk (~srk@32.97.110.56) has joined #ceph
[18:01] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[18:03] * Aramande_ (~Kyso_@8Q4AAAWKQ.tor-irc.dnsbl.oftc.net) Quit ()
[18:03] * clarjon1 (~Crisco@37.187.129.166) has joined #ceph
[18:03] * masteroman (~ivan@93-139-178-8.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[18:04] * dopesong (~dopesong@78-60-74-130.static.zebra.lt) has joined #ceph
[18:05] * tupper (~tcole@nat-pool-bos-u.redhat.com) Quit (Remote host closed the connection)
[18:06] * thomnico (~thomnico@2a01:e35:8b41:120:6c3f:ca6d:2f60:51f7) has joined #ceph
[18:06] * pdrakewe_ (~pdrakeweb@104.247.39.34) Quit (Read error: Connection reset by peer)
[18:06] * pdrakeweb (~pdrakeweb@104.247.39.34) has joined #ceph
[18:07] * rlrevell (~leer@vbo1.inmotionhosting.com) has left #ceph
[18:09] * sugoruyo (~georgev@paarthurnax.esc.rl.ac.uk) Quit (Quit: I'm going home!)
[18:11] * thomnico (~thomnico@2a01:e35:8b41:120:6c3f:ca6d:2f60:51f7) Quit ()
[18:11] * hfu (~hfu@58.40.124.211) has joined #ceph
[18:11] * pdrakewe_ (~pdrakeweb@104.247.39.34) has joined #ceph
[18:12] * vbellur (~vijay@122.171.86.240) has joined #ceph
[18:12] * pvh_sa (~pvh@105-237-252-162.access.mtnbusiness.co.za) has joined #ceph
[18:13] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[18:14] * KevinPerks (~Adium@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving.)
[18:15] * jeroen_ (~jeroen@37.74.194.90) has joined #ceph
[18:17] * Guest5776 (~jeroen@ip-213-127-160-90.ip.prioritytelecom.net) Quit (Ping timeout: 480 seconds)
[18:17] * jwilkins (~jwilkins@2601:9:4580:f4c:ea2a:eaff:fe08:3f1d) has joined #ceph
[18:17] * pdrakeweb (~pdrakeweb@104.247.39.34) Quit (Ping timeout: 480 seconds)
[18:23] * kanagaraj (~kanagaraj@27.7.34.85) has joined #ceph
[18:23] * pvh_sa (~pvh@105-237-252-162.access.mtnbusiness.co.za) Quit (Ping timeout: 480 seconds)
[18:26] * dneary (~dneary@72.28.92.10) has joined #ceph
[18:27] * hfu (~hfu@58.40.124.211) Quit (Remote host closed the connection)
[18:28] <magicrobotmonkey> let's say i accidentally did an `auth del` on the wrong osd
[18:28] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[18:32] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[18:33] * clarjon1 (~Crisco@2WVAACHDR.tor-irc.dnsbl.oftc.net) Quit ()
[18:36] * jyoti-ranjan (~ranjanj@idp01webcache6-z.apj.hpecore.net) Quit (Ping timeout: 480 seconds)
[18:38] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[18:39] * Concubidated (~Adium@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving.)
[18:40] * bandrus (~brian@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving.)
[18:42] * gardenshed (~gardenshe@176.27.51.101) has joined #ceph
[18:46] * reed (~reed@72.28.92.10) has joined #ceph
[18:46] * gregmark (~Adium@68.87.42.115) has joined #ceph
[18:47] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Read error: Connection reset by peer)
[18:47] * Sysadmin88 (~IceChat77@054527d3.skybroadband.com) has joined #ceph
[18:48] * pvh_sa (~pvh@105-237-252-162.access.mtnbusiness.co.za) has joined #ceph
[18:50] * gardenshed (~gardenshe@176.27.51.101) Quit (Remote host closed the connection)
[18:50] * fmanana (~fdmanana@bl13-155-240.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[18:53] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[18:53] * dneary (~dneary@72.28.92.10) Quit (Ping timeout: 480 seconds)
[18:55] * qstion (~qstion@37.157.144.44) Quit (Remote host closed the connection)
[18:57] <stupidnic> I am using ceph rbd as a backend for cinder and everything is working correctly. However I have a problem that I can't understand how to get myself out of.
[18:58] <stupidnic> I have a volume that we created a snapshot of, and then I used that snapshot to create a new volume for a new instance
[18:58] <stupidnic> I now want to remove that snapshot and the underlying volume, but I can't
[18:58] <SamYaple> stupidnic: you cant from ceph or cinder?
[18:59] <stupidnic> SamYaple: both actually
[18:59] <stupidnic> I looked at the info for the volume in rbd and I show it has the parent of the snapshot
[18:59] <stupidnic> volumes/volume-ead2bcd2-bff5-46d9-8305-6c9a2bb4f298@snapshot-149e73bb-8d75-4de7-9fc4-43770c1caebe
[18:59] <SamYaple> what is the cinder error?
[19:00] <stupidnic> well cinder is telling me it is locked
[19:00] <stupidnic> rbd tells me it is protected
[19:00] <SamYaple> ceph you can definitely blow them away, but youll screw up the cinder DB
[19:00] <stupidnic> yeah that's what I was trying to prevent
[19:00] <SamYaple> what version of openstack?
[19:00] <stupidnic> Juno via RDO 2.2
[19:01] * rkeene hates OpenStack
[19:01] <stupidnic> If I unprotect the snapshot in rbd will that allow cinder to delete it?
[19:01] <SamYaple> /kick rkeene
[19:02] <stupidnic> Openstack is where the dedicated server market is heading
[19:02] <SamYaple> stupidnic: possibly, you can try it. but i would like to find the degradation here. is it procedure/user/bug
[19:02] <stupidnic> SamYaple: alright... tell me what you need
[19:03] <rkeene> SamYaple, No reason to kick me. I used it, it's terrible software.
[19:03] * Thononain (~kiasyn@nx-01.tor-exit.network) has joined #ceph
[19:03] <SamYaple> if you do the same thing again with a fresh volume, can you reproduce
[19:03] <stupidnic> please note this is production so I would like to not piss my customers off
[19:03] <stupidnic> SamYaple: give me a few minutes to try
[19:03] <SamYaple> rkeene: lol I cant kick you anyway. and i dont disagree that it has bugs, but it can be very powerful, all bit it unstable
[19:04] <SamYaple> s/bit/be/
[19:04] <rkeene> SamYaple, I switched to OpenNebula and have been significantly happier
[19:04] <SamYaple> that being said the whole point of it is so the system can be unstable if your app is built correctly
[19:04] <SamYaple> yea i cant argue with you there, but openstack has a wider net right now
[19:06] <rkeene> Yeah, that's real annoying too. We deal with Cisco a lot and they're constantly wanting us to do OpenStack.
[19:07] <stupidnic> translated: buy our UCS
[19:07] <SamYaple> lol
[19:07] <rkeene> Yeah, we're using ACI fabric, etc
[19:09] * gardenshed (~gardenshe@176.27.51.101) has joined #ceph
[19:10] <stupidnic> SamYaple: okay. I was able to replicate the behavior
[19:11] <rkeene> So I'm annoyed with proponents of OpenStack because then people want me to defend my position of NOT using it
[19:11] <SamYaple> do you have a cinder stack trace you can post (paste.openstack.org)
[19:11] <stupidnic> SamYaple: well it isn't dumping a trace.
[19:11] <SamYaple> rkeene: ill be honest here, i really like opennebula i just cant sell people on the idea
[19:11] <SamYaple> youll need to turn on vebose+debug
[19:11] <stupidnic> I do have debug on though
[19:12] <stupidnic> hmm yeah verbose and debug ar on, but no traces
[19:13] <stupidnic> do you want to see the relevant sections of volume.log?
[19:13] <rkeene> SamYaple, I've been using it in my product for several months now and it's great
[19:13] <SamYaple> stupidnic: throw up anything you think is relevant
[19:14] <rkeene> We just started with it and whenever people have wanted us to switch, we've been able to defend our position: OpenStack is terrible, we can make OpenNebula do whatever, and it'll be too costly to switch now :-)
[19:14] <SamYaple> rkeene: all i truly need is SDNnetwork+compute+image/volume support. would be nice to have bare metal provisioning though
[19:14] * alram (~alram@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[19:15] <rkeene> I do the bare metal provisioning (of Compute and Storage nodes, not of guest workloads -- but easy enough to add to OpenNebula if that's what you're into) myself as a part of my larger problem
[19:15] <rkeene> That was one of my problems with OpenStack, it tried to do things that I could do outside OpenStack much better, and it did them poorly
[19:15] <rkeene> And fixing it was a huge PITA
[19:15] <SamYaple> rkeene: bare metal as in launch an instance that IS bare metal, you can do that?
[19:15] <stupidnic> SamYaple: http://paste.openstack.org/show/229579/
[19:16] <rkeene> SamYaple, OpenNebula could be extended to do it (obviously) without TOO much work
[19:16] <stupidnic> maas
[19:17] <stupidnic> not that I have used it, but I have heard good things about it
[19:17] <rkeene> It'd be integrating into the IPMI to boot the machine from the stub-node to the workload, and just consuming it at 100%
[19:18] <stupidnic> SamYaple: there is a bit of excess in there, but I figured grab it all and sort it out later
[19:18] <rkeene> So from OpenNebula's perspective the "host" would be 100% used (so nothing would try to schedule on it) by 1 "VM" (which is just running on bare metal) and the "driver" for it would talk to IPMI to do the real actions to the server
[19:18] <rkeene> The rest would just be glue... It'd probably take a week to get it implemented and tested
[19:19] <SamYaple> rkeene: right, but its about performance. i dont want a VM, I want bare metal. this is especially true for things that would use additional hardware
[19:19] * dneary (~dneary@72.28.92.10) has joined #ceph
[19:19] <rkeene> SamYaple, Right, it would be bare-metal
[19:20] <rkeene> SamYaple, OpenNebula doesn't have the concept of "bare metal", so it would appear to be a "VM", that consumed 100% of the resources of a host
[19:20] * gardenshed (~gardenshe@176.27.51.101) Quit (Remote host closed the connection)
[19:20] <rkeene> (Well, it has the concept of Infrastructure Hosts, which it presumes are running some sort of hypervisor, either Libvirt-based or otherwise)
[19:21] * alram (~alram@nat-pool-bos-t.redhat.com) has joined #ceph
[19:21] <stupidnic> rkeene: I know you said you went with ON but did you try Ironic at all?
[19:22] <Anticimex> does ironic integrate into ceph?
[19:22] <SamYaple> ironic just came around stupidnic
[19:22] <stupidnic> SamYaple: right. I was mainly curious
[19:22] <Anticimex> ironic has been around more than 1 year
[19:22] <stupidnic> well I think the draft has been around but the code hasn't been
[19:23] <stupidnic> at least that's my understanding of it
[19:23] <SamYaple> Anticimex: it was in juno, but well call it experimental
[19:23] <Anticimex> the problem with doing hosted maas for customer with ceph i guess is that i can't give customers cluster access
[19:23] * raw_ (~raw@5.79.71.195) Quit (Remote host closed the connection)
[19:23] <rkeene> stupidnic, No -- I did try OpenStack for about a year, and it was terrible. I got it running, despite its best efforts but it never worked how I wanted -- it kept wanting to manage things for me that I didn't want it to manage. It would delete firewall rules I added and put its own broken ones there... It tried to use iptables to do "floating" IPs, it was really terrible.
[19:23] <Anticimex> so if customer owns the hardware i don't see how i can use ceph without some iscsi gateway and similar
[19:24] <Anticimex> sounds like you were beaten by neutron
[19:24] <Anticimex> happens to the best of us ;)
[19:24] <rkeene> Anticimex, I'm writing a Ceph RBD-based NFS server (i.e., serves RBD objects out as files over NFS)
[19:24] <SamYaple> I dont think you can win against neutron. you can jsut make it stop hitting its head against the wall
[19:24] <Anticimex> yeah, how do you solve hA?
[19:24] <rkeene> So if you had the object rbd/blah, and mounted it on /mnt, you'd have /mnt/rbd/blah
[19:24] <SamYaple> temporarily
[19:24] <Anticimex> * HA
[19:25] * pvh_sa (~pvh@105-237-252-162.access.mtnbusiness.co.za) Quit (Ping timeout: 480 seconds)
[19:25] <rkeene> HA for the NFS server ? pNFS is option A; UDP-based (I was only planning to support UDP anyway) failover is option B
[19:25] <stupidnic> Anticimex: it would be nice if there was a way to port rbd into ipxe... that would be cool
[19:25] <Anticimex> SamYaple: we run opencontrail and life is sweet
[19:26] <Anticimex> stupidnic: with current ceph authentication regime i see no way around using a proxy
[19:26] <rkeene> OpenNebula just does what I want, if the bridge is already there it just adds devices to it
[19:26] <stupidnic> Anticimex: that's something I hadn't considered
[19:27] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[19:27] <Anticimex> and best bet seems to me to be some iscsi thingy which can mount rbds in parallel and do regular multipath failover-stuff
[19:27] <SamYaple> rkeene: whats the deployment scheme for ON look like? any good tools out there? or is this a by hand/custom adventure
[19:27] <stupidnic> but if you are using ipxe it might be possible to inject the cluster keys using some other method
[19:27] <stupidnic> ipxe is really flexible
[19:27] <rkeene> SamYaple, Deployment scheme ? You mean to hosts ?
[19:27] <SamYaple> s/scheme/scene/
[19:27] <Anticimex> yeah but if you are selling machines to customers and want security=on you cannot let them have direct cluster access
[19:28] <stupidnic> give them their own pool?
[19:28] <stupidnic> but no way to limit what they can use I guess
[19:28] <SamYaple> yea but then you cant isolate guests, one guest accesses everyone's data
[19:28] <Anticimex> insufficient
[19:28] <Anticimex> yeah the "RBAC" in Ceph is well, not
[19:29] <rkeene> SamYaple, OpenNebula is really simple.. Deployment is easy -- you set up the "frontend" node, and tell it about backend nodes. It SSHes from the frontend node to the backend nodes and copies the files it needs to /var/tmp on it
[19:30] * reed (~reed@72.28.92.10) Quit (Ping timeout: 480 seconds)
[19:32] * treenerd (~treenerd@cpe90-146-100-181.liwest.at) has joined #ceph
[19:33] * Thononain (~kiasyn@2WVAACHFL.tor-irc.dnsbl.oftc.net) Quit ()
[19:33] * Scrin (~Aethis@212.7.194.71) has joined #ceph
[19:37] * dneary (~dneary@72.28.92.10) Quit (Ping timeout: 480 seconds)
[19:37] * shylesh (~shylesh@1.23.174.252) Quit (Remote host closed the connection)
[19:38] <stupidnic> SamYaple: do you think if I set the snapshot to be unprotected that would allow cinder to delete it?
[19:38] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[19:39] <stupidnic> And what happens to the volume that is based off that snapshot when the snapshot is removed? Will its parent pivot?
[19:39] <SamYaple> stupidnic: you can try it, it wont hurt
[19:39] <SamYaple> the key is to just not delete anything on the ceph backend without going through cinder
[19:39] <stupidnic> okay. trying that now
[19:40] * pvh_sa (~pvh@105-237-252-162.access.mtnbusiness.co.za) has joined #ceph
[19:40] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[19:41] * Kupo1 (~tyler.wil@23.111.254.159) has joined #ceph
[19:41] <stupidnic> nope
[19:41] <stupidnic> unprotecting snap failed: (16) Device or resource busy
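That error is what rbd returns while a protected snapshot still has clones. A sketch of the ceph-side view, reusing the names from the paste above (and keeping in mind SamYaple's warning that touching these behind cinder's back desyncs the cinder DB):

    # list the clones that keep the snapshot protected
    rbd children volumes/volume-ead2bcd2-bff5-46d9-8305-6c9a2bb4f298@snapshot-149e73bb-8d75-4de7-9fc4-43770c1caebe
    # flattening a clone copies the parent data in and detaches it from the snapshot;
    # <child-volume> stands for whatever the command above lists
    rbd flatten volumes/<child-volume>
    # only then will unprotect (and a delete of the snapshot) succeed
    rbd snap unprotect volumes/volume-ead2bcd2-bff5-46d9-8305-6c9a2bb4f298@snapshot-149e73bb-8d75-4de7-9fc4-43770c1caebe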
[19:43] * fmanana (~fdmanana@bl13-155-240.dsl.telepac.pt) has joined #ceph
[19:44] <SamYaple> and the volume is "available" in cinder stupidnic?
[19:44] * debian112 (~bcolbert@173.225.179.34) has joined #ceph
[19:44] <stupidnic> it's attached
[19:44] <SamYaple> you cant delete attached volumes
[19:44] <stupidnic> let me detach it from the instance
[19:44] <stupidnic> well
[19:45] <stupidnic> I am not trying to delete the volume, I am trying to delete the snapshot that the volume was based on
[19:45] * dneary (~dneary@72.28.92.10) has joined #ceph
[19:46] <SamYaple> you want to delete the _base_?
[19:46] <SamYaple> youre not going to be able to do that
[19:47] * JV_ (~chatzilla@204.14.239.107) has joined #ceph
[19:47] <stupidnic> hmm okay. I guess this might be a gap in the way I am thinking of my volumes
[19:47] <SamYaple> what are you doing to create the snapshot?
[19:48] <stupidnic> Just taking a snapshot of the volume in Horizon
[19:48] <rkeene> I think he wants a clone and then to delete the snapshot and the base
[19:48] * brutuscat (~brutuscat@196.Red-88-19-187.staticIP.rima-tde.net) Quit (Remote host closed the connection)
[19:48] <stupidnic> yeah, and I guess that's what I thought creating a new volume from a snapshot would do
[19:48] <stupidnic> rebase the snapshot to a new volume
[19:49] <stupidnic> but that doesn't seem to be what is actually happening, it is just using the snapshot as a parent
[19:49] <SamYaple> i am pretty sure there is a clone method
[19:50] <SamYaple> yea that was added way back in grizzly
[19:50] <stupidnic> Well I have a volume that was created from a snapshot using the "Boot from volume snapshot (creates a new volume)" option on the new instance
[19:51] <stupidnic> And I thought that would have cloned it, but looking at the info on the new volume I see that it is using the snapshot as a parent
[19:51] * sjm (~sjm@209.117.47.248) has joined #ceph
[19:51] * georgem (~Adium@72.28.92.15) has joined #ceph
[19:51] <SamYaple> dont trust horizon. youll want to create a clone of the volume and then boot from the clone
[19:52] * JV (~chatzilla@204.14.239.54) Quit (Ping timeout: 480 seconds)
[19:52] * JV_ is now known as JV
[19:52] <stupidnic> sadly my customers do use horizon
[19:53] <SamYaple> they may be able to do the clone first, but it will not be "one-click"
[19:54] <stupidnic> SamYaple: okay... so what are my options for correcting this issue on the one production server I have this issue on?
[19:54] <SamYaple> clone the volume instead of snapshotting it
[19:54] <stupidnic> Should I take the current volume and clone that? And then make an instance boot off the clone?
[19:54] * gardenshed (~gardenshe@176.27.51.101) has joined #ceph
[19:55] <SamYaple> you could certainly do that stupidnic
[19:57] <stupidnic> SamYaple: what is the command for clone? I am looking at the help output and there is nothing named clone
[19:57] <stupidnic> re: cinder
[19:58] * MRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Quit: This computer has gone to sleep)
[20:00] * dneary (~dneary@72.28.92.10) Quit (Ping timeout: 480 seconds)
[20:00] * georgem (~Adium@72.28.92.15) Quit (Quit: Leaving.)
[20:02] * sjm1 (~sjm@209.117.47.248) has joined #ceph
[20:03] * Scrin (~Aethis@8Q4AAAWN2.tor-irc.dnsbl.oftc.net) Quit ()
[20:03] * Mattress (~KeeperOfT@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[20:03] * sjm (~sjm@209.117.47.248) Quit (Read error: Connection reset by peer)
[20:04] <cmdrk> does anyone have experience with CONFIG_CEPH_FSCACHE ?
[20:04] <cmdrk> I have it enabled in my kernel, and i'm running cachefilesd.. but I have no idea how to get it working. is it automagic or does it require config?
[20:05] <cmdrk> i don't see anything in dmesg about a file cache for my cephfs mount being registered
[20:05] * Concubidated (~Adium@nat-pool-bos-t.redhat.com) has joined #ceph
[20:05] * wushudoin (~wushudoin@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[20:06] * fxmulder (~fxmulder@cpe-24-55-6-128.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[20:06] * dopesong_ (~dopesong@lb1.mailer.data.lt) has joined #ceph
[20:07] * oro (~oro@207.194.125.34) Quit (Ping timeout: 480 seconds)
[20:09] * sjm (~sjm@172.56.34.224) has joined #ceph
[20:11] * vbellur (~vijay@122.171.86.240) Quit (Read error: Connection reset by peer)
[20:11] * nhm (~nhm@184-97-242-33.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[20:12] * linjan (~linjan@213.8.240.146) has joined #ceph
[20:12] * dopesong (~dopesong@78-60-74-130.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[20:13] * daniel2_ (~dshafer@0001b605.user.oftc.net) has joined #ceph
[20:14] * vbellur (~vijay@122.171.86.240) has joined #ceph
[20:15] * sjm1 (~sjm@209.117.47.248) Quit (Ping timeout: 480 seconds)
[20:16] * oro (~oro@207.194.125.34) has joined #ceph
[20:17] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Ping timeout: 480 seconds)
[20:20] <rlrevell> how do most people back up ceph? to another cluster? treating snapshots as backups is a huge mistake, correct?
[20:21] <florz> rlrevell: no clue, yes
[20:21] * sjm (~sjm@172.56.34.224) has left #ceph
[20:22] <rlrevell> i'm thinking of using the slowest oldest storage we have to build a backup cluster
[20:22] * gardenshed (~gardenshe@176.27.51.101) Quit (Remote host closed the connection)
[20:24] <florz> snapshots do provide some of the benefits of backups, of course (protection against accidental deletion, for example), but not all of them (protection against failed storage devices, for example)
[20:25] <florz> so, in the end, it all depends on how important your data is
[20:28] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[20:29] * kanagaraj (~kanagaraj@27.7.34.85) Quit (Quit: Leaving)
[20:31] * alram (~alram@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[20:31] <rlrevell> florz: pretty important, it's our customers' data. so there needs to be a physically separate copy. i still the occasional "all my data got eaten" post on ceph-users ;-)
[20:32] * alram (~alram@nat-pool-bos-t.redhat.com) has joined #ceph
[20:33] * Mattress (~KeeperOfT@8Q4AAAWO6.tor-irc.dnsbl.oftc.net) Quit ()
[20:33] * luckz (~hyst@spftor4e1.privacyfoundation.ch) has joined #ceph
[20:34] <rkeene> rlrevell, We only use RBD so we use snapshots and then incremental snapshots
[20:34] <rkeene> (export and export-diff)
[20:34] <rlrevell> do you then back up the snapshots or are they your backups?
[20:35] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) has left #ceph
[20:35] <rkeene> The exports are backed up
[20:35] <rlrevell> yeah that's what we're thinking too
[20:35] <rkeene> But they are incremental backups from a base
[20:35] <rkeene> Right now we send them to another Ceph cluster where they are online, but we'll eventually put them offline
[20:36] <rkeene> But they are at a different site and not part of the same cluster, so they're safe
[20:36] <rkeene> Unless someone kills both independent clusters :-P
[20:36] <rlrevell> we'll probably do something similar but expire them rather than go offline, no requirement to keep data forever
[20:37] * pdrakeweb (~pdrakeweb@104.247.39.34) has joined #ceph
[20:37] * pdrakewe_ (~pdrakeweb@104.247.39.34) Quit (Read error: Connection reset by peer)
[20:37] <rkeene> We might have really long retention policies (7 years) so, we're working on planning for that
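A sketch of the export/export-diff cycle described above, assuming an image volumes/foo and illustrative snapshot and file names:

    # one-time full export of a base snapshot
    rbd snap create volumes/foo@base
    rbd export volumes/foo@base /backup/foo-base.img
    # periodic incrementals relative to the base (or the previous snapshot)
    rbd snap create volumes/foo@2015-05-20
    rbd export-diff --from-snap base volumes/foo@2015-05-20 /backup/foo-2015-05-20.diff
    # on the backup cluster: rbd import-diff /backup/foo-2015-05-20.diff volumes/foo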
[20:40] * gardenshed (~gardenshe@176.27.51.101) has joined #ceph
[20:41] * fmanana (~fdmanana@bl13-155-240.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[20:41] <m0zes> a group at my institution has some silly retention policies. nightly, forever.
[20:42] * m0zes doesn't have to support them, though.
[20:43] <rkeene> This would most likely be 7 yearlies, in addition to more shorter-lived snapshots
[20:47] * jsfrerot (~jsfrerot@192.252.133.70) has joined #ceph
[20:48] <jsfrerot> hi all, I did a "ceph osd reweight-by-utilization 110" but now ceph status reports: "HEALTH_WARN 29 pgs stuck unclean;"
[20:48] <jsfrerot> anyone know what to do ?
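A few commands that usually narrow this down (the OSD id in the last one is a placeholder):

    # see which pgs are unclean and where they are mapped
    ceph health detail
    ceph pg dump_stuck unclean
    # if the reweight was too aggressive, nudge an individual OSD back up
    ceph osd reweight <osd-id> 1.0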
[20:48] * diq (~diq@nat2.460b.weebly.net) has joined #ceph
[20:53] * rendar (~I@host51-182-dynamic.20-87-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[20:54] * georgem (~Adium@72.28.92.10) has joined #ceph
[20:55] * linjan (~linjan@213.8.240.146) Quit (Ping timeout: 480 seconds)
[20:56] * rendar (~I@host51-182-dynamic.20-87-r.retail.telecomitalia.it) has joined #ceph
[20:58] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[21:00] * _prime_ (~oftc-webi@199.168.44.192) Quit (Quit: Page closed)
[21:03] * luckz (~hyst@7R2AAA4JA.tor-irc.dnsbl.oftc.net) Quit ()
[21:03] * hgjhgjh1 (~Xeon06@2WVAACHI4.tor-irc.dnsbl.oftc.net) has joined #ceph
[21:04] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[21:04] * alram (~alram@nat-pool-bos-t.redhat.com) Quit (Quit: leaving)
[21:04] * Concubidated (~Adium@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving.)
[21:05] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[21:08] * gardenshed (~gardenshe@176.27.51.101) Quit (Remote host closed the connection)
[21:08] * oro (~oro@207.194.125.34) Quit (Ping timeout: 480 seconds)
[21:09] * nsoffer (~nsoffer@bzq-109-66-155-139.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[21:10] * dneary (~dneary@72.28.92.10) has joined #ceph
[21:11] * vbellur (~vijay@122.171.86.240) Quit (Ping timeout: 480 seconds)
[21:31] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[21:31] * dopesong_ (~dopesong@lb1.mailer.data.lt) Quit (Remote host closed the connection)
[21:32] * fxmulder (~fxmulder@cpe-24-55-6-128.austin.res.rr.com) has joined #ceph
[21:33] * hgjhgjh1 (~Xeon06@2WVAACHI4.tor-irc.dnsbl.oftc.net) Quit ()
[21:35] * dneary (~dneary@72.28.92.10) Quit (Ping timeout: 480 seconds)
[21:44] * georgem (~Adium@72.28.92.10) Quit (Quit: Leaving.)
[21:48] * nsoffer (~nsoffer@bzq-109-64-255-238.red.bezeqint.net) has joined #ceph
[21:48] * bobrik_______ (~bobrik@83.243.64.45) Quit (Ping timeout: 480 seconds)
[21:49] * nsoffer (~nsoffer@bzq-109-64-255-238.red.bezeqint.net) Quit ()
[21:52] * b0e (~aledermue@p54AFF851.dip0.t-ipconnect.de) has joined #ceph
[21:55] * wesd (~wesd@140.247.242.44) has joined #ceph
[21:56] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[21:56] <wesd> i am deploying an initial cluster using ceph-deploy, and i am at the stage where I would like to deploy my MDS. i have some disks set aside that I would like to use for my MDS - how is the location of the metadata server set? How can i specify it?
[21:58] * bobrik_______ (~bobrik@83.243.64.45) has joined #ceph
[22:01] <m0zes> wesd: the mds data is in your metadata pool. You'd simply want to define a CRUSH rule that used those disks, create the pool and change the pool's crush rule number to match the newly created rule.
[22:02] <wesd> thanks, ill try and roll with that.
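A compressed sketch of what m0zes suggests, assuming the chosen disks already sit under a CRUSH bucket called "ssd-root" and a hammer cluster where the pool property is still called crush_ruleset (bucket, rule and pool names are illustrative):

    # rule that only draws from the ssd-root subtree, one replica per host
    ceph osd crush rule create-simple metadata-on-ssd ssd-root host
    # find the new rule's id, then point the cephfs metadata pool at it
    ceph osd crush rule dump
    ceph osd pool set metadata crush_ruleset <rule-id>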
[22:03] * richard (~diq@nat2.460b.weebly.net) has joined #ceph
[22:03] * diq (~diq@nat2.460b.weebly.net) Quit (Read error: Connection reset by peer)
[22:06] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Remote host closed the connection)
[22:08] <championofcyrodi> if i were to go with a switch like http://www.nextwarehouse.com/item/?841430_g10e (10gbe) for ceph... and my nodes are 1u supermicros, running centos6... what is the best 10Gbe NIC to go with?
[22:08] <richard> probably something supermicro to keep things simple
[22:09] <richard> we actually use Mellanox adapters
[22:10] <richard> pretty pleased with them over the Intel's. Intel adapters don't have PXE enabled out of the box (which is annoying)
[22:10] <championofcyrodi> yea.. i have to be careful here because i'm running ceph via fuel openstack deployer... So there is a dependency on the kernel modules supported via the installed OS. (centos 6.5)
[22:10] <championofcyrodi> i've seen support for mellanox via fuel...so that might be a good option.
[22:11] * oms101 (~oms101@a79-169-49-115.cpe.netcabo.pt) Quit (Ping timeout: 480 seconds)
[22:11] <richard> our stuff is centos 6.old
[22:11] <championofcyrodi> however, my radosgw is running 10Gbe fiber to the LAN... so i just need to upgrade the backend switch w/ the ceph nodes attached.
[22:11] <championofcyrodi> of course i have this pipe dream that it will be 'easy'. :(
[22:11] <championofcyrodi> but nothing ever is
[22:13] <championofcyrodi> looks like i might want to go w/ a switch that has cx4 connectors? instead of RJ-45...
[22:13] <championofcyrodi> the RJ-45 NICs seem to be twice the cost as the CX4 ones...
[22:13] * nhm (~nhm@172.56.36.142) has joined #ceph
[22:13] * ChanServ sets mode +o nhm
[22:14] <richard> CX4?!
[22:14] <richard> you mean SFP+
[22:14] <m0zes> I've had good luck with the Arctica line from Penguin Computing, but I'd be comfortable going with any switch that supports Cumulus Linux :D
[22:14] <championofcyrodi> well i see SFP+ also...
[22:15] <championofcyrodi> I'm new to 10gbe ethernet... so i'm not sure what the 'standard' is.
[22:15] <richard> I'd use SFP+ or cat6. Prefer SFP+ over cat6 due to power draw
[22:15] <championofcyrodi> all the 1gbe seems to be rj-45
[22:15] <m0zes> also, we purchased quite a few 10Gb Mellanox ConnectX-2 10Gb cards off ebay.
[22:15] <richard> 10gig cat6 UTP draws a lot of power
[22:15] <m0zes> and latency is slightly higher over cat6
[22:15] <richard> we also like the flexibility of SFP+
[22:15] <championofcyrodi> so like this then? https://www.google.com/shopping/product/18110663480585530647?sclient=psy-ab&es_sm=122&biw=1341&bih=869&q=10gbe+switch+sfp%2B&oq=10gbe+switch+sfp%2B&pbx=1&bav=on.2,or.&bvm=bv.93756505,d.eXY&tch=1&ech=1&psi=zupcVYz6LMKzggSU3YHIAQ.1432152783443.9&prds=paur:ClkAsKraX45eNNzYNr9bzHOSLJ3Y4r7LzslevE6bmr95Ib4Ct1KYE1kAB1koe2h-y66E5Ek6hyP7Nq_4Xhtg76xxB4v_S0rfB3zIrLMlwlgzrTyJ6NKzuMT0AxIZAFPVH73Kj8l7ElZfCDn-3i9Wi
[22:15] <championofcyrodi> whoa... sorry.
[22:16] <richard> use DAC SFP+ cables in the rack and fiber if it needs to go farther
[22:16] <championofcyrodi> https://www.google.com/shopping/product/18110663480585530647
[22:16] <richard> I'd go with something Quanta/Penguin/etc before that switch
[22:17] <m0zes> championofcyrodi: that is a 1Gb switch fwict.
[22:17] <championofcyrodi> gah.
[22:18] <championofcyrodi> so i'm looking at about 10k to upgrade a 4 node cluster...
[22:18] <richard> not at all
[22:18] * gardenshed (~gardenshe@176.27.51.101) has joined #ceph
[22:18] <championofcyrodi> all these sfp+ quanta switches are like 5-7k
[22:19] <championofcyrodi> (using google shopping w/ keywords like, 'quanta 10gbe sfp+'
[22:20] <richard> you could get the older 4804x for under $9k
[22:20] <richard> penguin ^^
[22:20] <richard> there's a newer model which I'm guessing is cheaper or same price
[22:20] <richard> Quanta also doesn't really deal with singles
[22:20] <richard> you'd order a dozen or more to deal with them directly and get good pricing
[22:23] <m0zes> anymore, I'd just purchase 40Gb switches even for 10Gb uses, due to the 4x10Gb port breakouts.
[22:23] <m0zes> but I need lots of ports ;)
[22:23] <richard> no affiliation with these guys but here's one: http://whiteboxswitch.com/collections/10-gigabit-ethernet-switches/products/quantamesh-bms-t3048-ly2r-with-onie
[22:23] * PaulC (~paul@222-153-122-160.jetstream.xtra.co.nz) has joined #ceph
[22:24] <rkeene> Heh
[22:24] <rkeene> Forwarding 960Mbps
[22:25] <rkeene> Probably not the units they meant :-)
[22:25] <m0zes> hopefully ;)
[22:26] * gardenshed (~gardenshe@176.27.51.101) Quit (Ping timeout: 480 seconds)
[22:28] <championofcyrodi> well, i have an issue that management doesn't want to spend more money, since the 12k we dropped on 4 decent nodes *only* gave us IaaS, but it has too much latency for VPCs with stricter disk i/o requirements, e.g. quorum consensus.
[22:29] * RodrigoUSA (~RodrigoUS@24.41.238.33) has joined #ceph
[22:29] * treenerd (~treenerd@cpe90-146-100-181.liwest.at) Quit (Quit: Verlassend)
[22:29] <championofcyrodi> so i'm trying to cobble together what i can and boost ceph performance, to show that VPCs are possible to operate in house without locking in w/ AWS.
[22:29] <championofcyrodi> aka, they'd spend 10k+ if I can convince them it will work.
[22:30] <RodrigoUSA> does someone know how to recover a corrupted ceph monitor? I was trying to add a second monitor but it fucked up the primary, and now I can't even get ceph status lol
[22:30] <championofcyrodi> it's the convincing w/o >1gbe that is difficult.
[22:30] <championofcyrodi> RodrigoUSA: I went deep down that rabbit hole a couple of months ago...
[22:30] <championofcyrodi> 1st, do you have backups of the monitor's data directory?
[22:31] <RodrigoUSA> championofcyrodi, this is a test environment doesn't matter the data :)
[22:31] <RodrigoUSA> btw I'm ceph rookie :D
[22:32] <championofcyrodi> so i had to cobble together the actual ceph objects (minus the replicas) and use 'dd' to append them together in order to get my rbd images back...
[22:32] <championofcyrodi> since then i learned to add:
[22:32] <championofcyrodi> scp /var/backup/ceph-mon-backup_$(date +'%m-%d-%Y').tar.gz user@backup-server:/media/mirror/
[22:32] <championofcyrodi> to my cron.daily
[22:33] <championofcyrodi> well... stop ceph, tar czf /var/backup/ceph-mon-backup_$(date +'%m-%d-%Y').tar.gz /var/lib/ceph/mon, start ceph, then copy the tar.gz
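A minimal sketch of the daily mon-backup job described above, assuming a sysvinit Ceph install, the default /var/lib/ceph/mon path, and a hypothetical backup host and destination directory (adjust paths and service commands for your setup):

    #!/bin/sh
    # /etc/cron.daily/ceph-mon-backup -- sketch only
    BACKUP=/var/backup/ceph-mon-backup_$(date +'%m-%d-%Y').tar.gz
    service ceph stop mon                            # quiesce the monitor so its store is archived consistently
    tar czf "$BACKUP" /var/lib/ceph/mon
    service ceph start mon                           # bring the monitor back up before shipping the archive
    scp "$BACKUP" user@backup-server:/media/mirror/  # backup-server and path are placeholders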
[22:33] <championofcyrodi> here is my poor attempt at documenting my recovery process: http://championofcyrodiil.blogspot.com/2015/02/recover-openstack-ceph-data-with.html
[22:34] <RodrigoUSA> ok let me check
[22:34] * pvh_sa (~pvh@105-237-252-162.access.mtnbusiness.co.za) Quit (Ping timeout: 480 seconds)
[22:35] <championofcyrodi> that just gets your data back... ultimately i had to scp all my rbd images to another location... once done, blow away ceph, rebuild it, and copy rbd images back in.
[22:35] <championofcyrodi> then make sure to backup ceph mons daily.
[22:35] <championofcyrodi> there might be a way to manually clean up pg_map and crush map... but i'm not aware of it.
[22:36] <championofcyrodi> my guess is that the jenkins hash math gets pretty intense when you start looking at that process.
[22:38] * PaulC (~paul@222-153-122-160.jetstream.xtra.co.nz) Quit (Ping timeout: 480 seconds)
[22:41] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[22:44] <championofcyrodi> okay, so here is a crazy idea... what if i ditch the switch all together and used bonded looping? http://www.clustermonkey.net/Interconnects/experiments-with-switchless-10gige-the-bonded-loop.html
[22:44] * oro (~oro@72.28.92.15) has joined #ceph
[22:45] * nsoffer (~nsoffer@109.64.255.238) has joined #ceph
[22:46] <rkeene> How many nodes ?
[22:47] <rkeene> And do you have enough network interfaces for it ? (4 per server)
[22:50] <kblin> hi folks
[22:50] <rkeene> Oh, they're not using bonding for multiple channels between targets, so 2 per server
[22:51] <kblin> can I get ceph-deploy to install ceph from a local repository? It's trying to grab packages from ceph.com, but that doesn't have debian jessie packages
[22:51] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) Quit (Ping timeout: 480 seconds)
[22:51] <kblin> and I've already set up a local repository with custom debs
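If the installed ceph-deploy version supports them, the --repo-url and --gpg-url options to ceph-deploy install are the usual way to point it at a local mirror instead of ceph.com (the mirror URLs and node names below are placeholders):

    # pull packages from a local jessie mirror instead of ceph.com
    ceph-deploy install --repo-url http://mirror.example.local/ceph-jessie \
                        --gpg-url http://mirror.example.local/release.asc \
                        node1 node2 node3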
[22:52] <rkeene> And FWIW, you could get it working better (i.e., smarter) than that with a bit more work -- also, in their configuration, if the link between n0 and HN or n2 and HN dies, there will be packet loss to n1
[22:55] * oro (~oro@72.28.92.15) Quit (Ping timeout: 480 seconds)
[22:55] * pvh_sa (~pvh@105-237-252-162.access.mtnbusiness.co.za) has joined #ceph
[22:55] * richard (~diq@nat2.460b.weebly.net) Quit (Read error: Connection reset by peer)
[22:55] * diq (~diq@nat2.460b.weebly.net) has joined #ceph
[22:57] <championofcyrodi> rkeene: 4 nodes
[22:59] <rkeene> How many NICs do you have in each node ?
[23:02] <rkeene> Anyway, you can just turn off STP and use ebtables to prevent traffic from looping rather than doing bonding, and that will be more efficient... But still won't automatically failover in reaction to topology changes like STP would
[23:02] * diq (~diq@nat2.460b.weebly.net) Quit (Read error: Connection reset by peer)
[23:02] * diq (~diq@nat2.460b.weebly.net) has joined #ceph
[23:03] * Shesh (~Redshift@185.77.129.88) has joined #ceph
[23:03] <rkeene> You'd have to write a small daemon that probes periodically and updates the rules if a link is lost
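A rough sketch of the approach rkeene describes, assuming each node bridges its two ring-facing ports into a Linux bridge (br0, eth2 and eth3 are placeholder names): turn STP off everywhere, then on exactly one node use ebtables to refuse to forward frames from one ring port to the other, so the physical ring behaves as a chain (the job an STP blocked port would otherwise do):

    # on every node: turn spanning tree off on the ring bridge
    brctl stp br0 off
    # on one designated node only: break the loop by not forwarding frames
    # straight from one ring port to the other
    ebtables -A FORWARD -i eth2 -o eth3 -j DROP
    ebtables -A FORWARD -i eth3 -o eth2 -j DROP
    # a small watchdog daemon would need to drop these rules here and add them
    # on another node if a ring link dies, since nothing reconverges by itself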
[23:03] * oro (~oro@72.28.92.15) has joined #ceph
[23:04] * nhm_ (~nhm@184-97-242-33.mpls.qwest.net) has joined #ceph
[23:04] * rlrevell (~leer@vbo1.inmotionhosting.com) Quit (Ping timeout: 480 seconds)
[23:06] * nhm (~nhm@172.56.36.142) Quit (Ping timeout: 480 seconds)
[23:07] * wesd (~wesd@140.247.242.44) Quit (Ping timeout: 480 seconds)
[23:12] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[23:13] * jnq (~jnq@95.85.22.50) has joined #ceph
[23:14] * jkt (~jkt@latimerie.flaska.net) Quit (Remote host closed the connection)
[23:16] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) has joined #ceph
[23:17] * shakamunyi (~shakamuny@166.170.43.200) has joined #ceph
[23:20] * jkt (~jkt@latimerie.flaska.net) has joined #ceph
[23:20] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:21] * shakamunyi (~shakamuny@166.170.43.200) Quit (Remote host closed the connection)
[23:22] * richard (~diq@nat2.460b.weebly.net) has joined #ceph
[23:22] * diq (~diq@nat2.460b.weebly.net) Quit (Read error: Connection reset by peer)
[23:25] * jkt (~jkt@latimerie.flaska.net) Quit (Remote host closed the connection)
[23:30] * jkt (~jkt@latimerie.flaska.net) has joined #ceph
[23:32] * diq (~diq@nat2.460b.weebly.net) has joined #ceph
[23:32] * richard (~diq@nat2.460b.weebly.net) Quit (Read error: Connection reset by peer)
[23:32] <RodrigoUSA> :s
[23:33] * Shesh (~Redshift@5NZAACM80.tor-irc.dnsbl.oftc.net) Quit ()
[23:33] * oro (~oro@72.28.92.15) Quit (Ping timeout: 480 seconds)
[23:39] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) Quit (Quit: Leaving)
[23:40] <Anticimex> rkeene: i'm watching https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/storage-security-in-a-critical-enterprise-openstack-environment now
[23:40] <Anticimex> 10 minutes in and at least pretty good description of the weaknesses in the security constructs of ceph to date. i'm crossing fingers for some segregation efforts :]
[23:42] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) has joined #ceph
[23:42] * ChanServ sets mode +o elder
[23:44] * oro (~oro@72.28.92.15) has joined #ceph
[23:46] <RodrigoUSA> does someone know how to remove a broken mon? wow, it can't be possible that I have to reinstall everything again :/
[23:47] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) Quit (Ping timeout: 480 seconds)
[23:47] * IndLogSeo (~indlogseo@2a02:8109:8680:118:e5e7:8ec7:2a86:9016) has joined #ceph
[23:47] * IndLogSeo (~indlogseo@2a02:8109:8680:118:e5e7:8ec7:2a86:9016) Quit ()
[23:47] <RodrigoUSA> i was trying to add another mon but the add command just sat there, so I killed it; now the first mon is broken and I cannot execute any ceph command
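When quorum is lost and ceph commands hang, the documented way out is monmap surgery on the surviving monitor: stop it, extract its monmap, remove the half-added mon, and inject the edited map back. A hedged sketch, assuming the surviving mon id is mon-a and the broken one is mon-b (ids and paths are placeholders, and the init commands assume sysvinit):

    service ceph stop mon                               # stop the surviving monitor first
    ceph-mon -i mon-a --extract-monmap /tmp/monmap      # dump its current monmap to a file
    monmaptool /tmp/monmap --print                      # verify which monitors it lists
    monmaptool /tmp/monmap --rm mon-b                   # remove the broken/half-added monitor
    ceph-mon -i mon-a --inject-monmap /tmp/monmap       # write the edited map back into the store
    service ceph start mon                              # the remaining mon should form quorum alone again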
[23:48] * srk (~srk@32.97.110.56) Quit (Read error: Connection reset by peer)
[23:50] * MRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[23:52] * dneary (~dneary@72.28.92.10) has joined #ceph
[23:55] * diq (~diq@nat2.460b.weebly.net) Quit (Quit: Leaving...)
[23:56] * diq (~diq@nat2.460b.weebly.net) has joined #ceph
[23:56] * diq (~diq@nat2.460b.weebly.net) Quit ()
[23:58] * b0e (~aledermue@p54AFF851.dip0.t-ipconnect.de) Quit (Quit: Leaving.)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.