#ceph IRC Log


IRC Log for 2013-08-24

Timestamps are in GMT/BST.

[0:00] * jeff-YF (~jeffyf@67.23.117.122) Quit (Read error: Operation timed out)
[0:02] * symmcom (~wahmed@S0106001143030ade.cg.shawcable.net) has left #ceph
[0:05] * BillK (~BillK-OFT@124-148-252-83.dyn.iinet.net.au) has joined #ceph
[0:06] * ismell (~ismell@host-24-56-171-198.beyondbb.com) has joined #ceph
[0:08] * LPG_ (~LPG@c-76-104-197-224.hsd1.wa.comcast.net) Quit (Remote host closed the connection)
[0:11] * madkiss (~madkiss@207.239.114.206) Quit (Quit: Leaving.)
[0:14] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[0:14] * sjustlaptop (~sam@38.122.20.226) Quit (Quit: Leaving.)
[0:14] * markbby (~Adium@168.94.245.4) Quit (Quit: Leaving.)
[0:15] * sjustlaptop (~sam@38.122.20.226) has joined #ceph
[0:18] * LeaChim (~LeaChim@176.24.168.228) Quit (Ping timeout: 480 seconds)
[0:18] * Mrwoofer (~oftc-webi@office.fortressitx.com) has joined #ceph
[0:19] <Mrwoofer> Hey guys, another question. Can I run LVM on top of ceph block storage?
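The LVM question above goes unanswered in-channel; for reference, a kernel-mapped RBD device behaves like an ordinary block device, so it can back LVM. A minimal sketch, with made-up image and volume-group names:

    rbd create --size 102400 rbd/lvmtest   # 100 GB image in the default 'rbd' pool
    rbd map rbd/lvmtest                    # appears as e.g. /dev/rbd0 (check 'rbd showmapped')
    pvcreate /dev/rbd0                     # use the mapped device as an LVM physical volume
    vgcreate vg_rbd /dev/rbd0
    lvcreate -L 50G -n lv0 vg_rbd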
[0:19] * smiley (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) has joined #ceph
[0:23] <sagewk> alfredodeza: still around?
[0:23] * twx (~twx@rosamoln.org) has joined #ceph
[0:24] <sagewk> woot, we just closed more than half of the ceph core bugs.
[0:28] <xarses> woot
[0:28] * sjustlaptop (~sam@38.122.20.226) Quit (Ping timeout: 480 seconds)
[0:30] <Mrwoofer> awesome!
[0:30] * Mrwoofer (~oftc-webi@office.fortressitx.com) Quit (Quit: Page closed)
[0:36] * rturk is now known as rturk-away
[0:41] * buck (~buck@c-24-6-91-4.hsd1.ca.comcast.net) has joined #ceph
[0:52] * xmltok (~xmltok@pool101.bizrate.com) Quit (Quit: Leaving...)
[0:53] * xmltok (~xmltok@pool101.bizrate.com) has joined #ceph
[0:53] * xmltok (~xmltok@pool101.bizrate.com) Quit ()
[0:55] * sleinen1 (~Adium@2001:620:0:26:399e:e592:2a41:bc7d) Quit (Quit: Leaving.)
[0:55] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[1:04] * KindTwo (~KindOne@198.14.198.204) has joined #ceph
[1:06] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:06] * KindTwo is now known as KindOne
[1:09] * tnt (~tnt@109.130.102.13) Quit (Ping timeout: 480 seconds)
[1:12] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) Quit (Ping timeout: 480 seconds)
[1:19] * yehudasa_ (~yehudasa@2607:f298:a:607:ea03:9aff:fe98:e8ff) Quit (Ping timeout: 480 seconds)
[1:25] <alfredodeza> sagewk: I can be for a few minutes
[1:25] <alfredodeza> whats up
[1:25] <sagewk> np, i found your github comment
[1:25] <sagewk> file is closed so we're all set
[1:26] <alfredodeza> excellent
[1:26] <alfredodeza> high five
[1:26] <kraken> \o
[1:26] <alfredodeza> :)
[1:32] <sagewk> :)
[1:33] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph
[1:36] <janos> i have one host with OSDs that invariably fill up more than the OSDs on the other hosts (3 hosts)
[1:37] <janos> can i force a rebalance of sorts by creating a new pool and copying contents of one to another?
[1:37] <janos> or is there a more official way
[1:38] <janos> this is bobtail btw
[1:38] <sjust> janos: how many pgs?
[1:38] <janos> i forget how to tell
[1:38] <sjust> ceph -s
[1:39] <janos> ok that makes me feel like a noob ;)
[1:39] <janos> 1856 total
[1:39] <sjust> how many pools/
[1:39] <sjust> ?
[1:39] <janos> 4
[1:39] <sjust> is most of the data in 1 pool?
[1:39] <janos> data, rbd, metadata, and one i added which carries the most - media
[1:39] <janos> yeah
[1:39] <janos> by far
[1:39] <sjust> ok, how many pgs in media?
[1:40] <janos> not sure how to tell that either (sorry)
[1:40] <sjust> ceph osd dump | grep media
[1:40] <janos> ok ;)
[1:40] <janos> 512
[1:40] <sjust> replication level?
[1:40] <janos> 2
[1:40] <sjust> how many osds?
[1:41] <janos> 9
[1:41] <sjust> 3 on each host/
[1:41] <sjust> ?
[1:41] <janos> 3 per host
[1:41] <janos> yeah
[1:41] <sjust> how unbalanced is it?
[1:42] <janos> the first host always has one or two hovering at or over 80% full. the other two hosts' OSDs sit around 50-60%
[1:42] <sjust> what are the objects like?
[1:42] <sjust> cephfs?
[1:42] <sjust> radosgw?
[1:42] <janos> today i did some reweighting on the heavy host to get a disk out of near-full warning
[1:42] <janos> hrmm, trying to recall
[1:42] <janos> rbd i believe
[1:43] <janos> mapped and shared out over samba
[1:43] <sjust> post the output of ceph osd tree?
[1:43] <janos> ah dur, not 3 per host. 3 2 4 (all 1TB except one 2TB, hence the oddity)
[1:43] <janos> getting
[1:44] <janos> http://paste.fedoraproject.org/34371/37730144
[1:44] <janos> host allon is the one with the heaviness
[1:44] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Remote host closed the connection)
[1:44] <janos> been playing weight games today to kill the warning
[1:44] <janos> osd.0 was at 86% full earlier
[1:45] <sjust> it seems that you don't have enough pgs
[1:45] <janos> currently just shy of 80% and osd.2 is 81%
[1:45] <sjust> on newer versions, you could increase the pool's pg count online
[1:45] <janos> can that be altered in bobtail? i was thinking of doing a dumpling changeover monday
[1:45] <sjust> janos: probably, but it's way less tested on bobtail
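For reference, what sjust is describing, raising a pool's pg count online, looks like this on cuttlefish and later (the target value here is only illustrative):

    ceph osd pool set media pg_num 1024    # create the additional placement groups
    ceph osd pool set media pgp_num 1024   # let data rebalance onto them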
[1:45] <janos> wondering how recommended/not-recommended that is in this state
[1:46] <janos> upgrading that is
[1:46] <janos> could i make another pool and move objects to it?
[1:46] <sjust> maybe, there might be some rbd related gotchas with that
[1:46] <janos> eventually move all to the new pool and share that out instead
[1:47] * john (~shu@75-149-80-169-Illinois.hfc.comcastbusiness.net) has joined #ceph
[1:47] * john (~shu@75-149-80-169-Illinois.hfc.comcastbusiness.net) Quit ()
[1:47] <janos> i wasn't sure if i should hop, or snag the cuttlefish fedora 18 rpm's off the site
[1:47] <janos> and do the mid-step
[1:48] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[1:48] <janos> well i can make a new pool with more pg's and start moving objects to it
[1:48] <janos> and see how i come out
[1:49] <janos> not everything in that pool is in fully active use
[1:49] <sjust> janos: if it's rbd, that probably won't work
[1:49] <janos> dang
[1:49] <sjust> joshd: what's involved in moving an rbd image from 1 pool to another?
[1:49] <sjust> in bobtail
[1:49] * snapple (~snapple@75-149-80-169-Illinois.hfc.comcastbusiness.net) has joined #ceph
[1:49] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[1:50] <janos> i could do the "external" way - mount the rbd on hosts and move them effectively outside of ceph
[1:50] <joshd> just copying the data - you can't preserve any clone relationships, although you can clone from one pool to another and flatten later if you like
[1:50] <joshd> for bobtail clone + flatten for each image will be a bit faster
[1:50] <joshd> in dumpling rbd cp should be just as fast
[1:51] <janos> i'll read up on clone + flatten
[1:51] <snapple> hi all
[1:51] <janos> i'd like this cluster to be not so borderline before i upgrade
[1:51] <joshd> snapshots won't exist on the copies either, so keep those around if you need them
[1:51] <janos> i don't have any snapshots that i know of
[1:52] <joshd> if they're format 1 you have to use rbd cp anyway, since it doesn't support cloning
[1:52] <janos> they are rbd 1
[1:52] <joshd> or export/import, doesn't really matter how you do it
[1:52] <janos> cool
[1:53] <joshd> just make sure they aren't in use while they're copied, of course
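A rough sketch of the two approaches joshd outlines, assuming a source pool 'media' and a new, larger pool 'media2' (pool and image names are hypothetical):

    # format 1 images: plain copy (or rbd export / rbd import)
    rbd cp media/myimage media2/myimage
    # format 2 images: clone into the new pool, then flatten to break the dependency
    rbd snap create media/myimage@migrate
    rbd snap protect media/myimage@migrate
    rbd clone media/myimage@migrate media2/myimage
    rbd flatten media2/myimage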
[1:53] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[1:53] <janos> i'll get on that once everyone else is asleep basically
[1:53] <janos> east coast - still have a few hours
[1:53] <janos> thanks for the advice sjust/joshd
[1:54] <snapple> can anyone assist me with a quick question regarding ceph deployment ? i am not sure if this is a known issue or not.
[1:55] <dmick> it's hard to say without knowing what the question is :)
[1:55] <snapple> i am having trouble installing ceph dumpling via ceph-deploy on ubuntu 13.04, i fail to get all the keys using the gatherkeys command as per the docs.. been stuck on it for a few days, and have tried reinstalling and re-following the instructions; i have disabled ufw and apparmor and still somehow those keys don't get generated when new monitors are created
[1:56] <dmick> 1) current version of ceph-deploy?
[1:56] <dmick> (1.2.2 is latest)
[1:56] <snapple> 1.2.2 yeah
[1:58] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[1:58] <snapple> it only picks up ceph.mon.keyring which i think was locally created; the rest of the keyrings from the mon hosts don't exist in /var/lib/ceph/bootstrap-*/ or in /etc/ceph/
[1:58] * KindTwo (~KindOne@h185.53.186.173.dynamic.ip.windstream.net) has joined #ceph
[1:59] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:59] * KindTwo is now known as KindOne
[1:59] <snapple> I do see a ceph-create-key process hanging around on the monitor nodes, not sure what to do next.
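snapple's question gets no reply in this log; for context, the quick-start flow being followed is roughly the one below (hostnames are placeholders), and gatherkeys can only succeed after the monitors have formed a quorum and ceph-create-keys has finished generating the bootstrap keyrings:

    ceph-deploy new mon1 mon2 mon3          # writes ceph.conf and the initial mon keyring
    ceph-deploy install mon1 mon2 mon3      # installs the ceph packages
    ceph-deploy mon create mon1 mon2 mon3   # starts the monitors
    ceph-deploy gatherkeys mon1             # fetches the admin and bootstrap keys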
[2:02] * alram (~alram@38.122.20.226) Quit (Quit: leaving)
[2:09] * KindTwo (~KindOne@h244.54.186.173.dynamic.ip.windstream.net) has joined #ceph
[2:10] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Leaving...)
[2:11] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[2:11] * KindTwo is now known as KindOne
[2:18] * xarses (~andreww@204.11.231.50.static.etheric.net) Quit (Ping timeout: 480 seconds)
[2:19] * diegows (~diegows@190.190.11.42) Quit (Ping timeout: 480 seconds)
[2:19] * yasu` (~yasu`@dhcp-59-179.cse.ucsc.edu) Quit (Remote host closed the connection)
[2:20] * symmcom (~wahmed@S0106001143030ade.cg.shawcable.net) has joined #ceph
[2:21] <symmcom> Can somebody please tell me how i can create a bucket and assign it to a particular user? do i need a pool first to put the bucket in? how do i put a bucket in a pool?
[2:22] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[2:24] * sagelap1 (~sage@17.sub-70-197-65.myvzw.com) has joined #ceph
[2:26] * sagelap (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) Quit (Ping timeout: 480 seconds)
[2:46] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[2:47] * xarses (~andreww@c-50-136-199-72.hsd1.ca.comcast.net) has joined #ceph
[2:51] <symmcom> how can i create a bucket and link it to a user
[3:18] <Kioob> I think you can restrict a bucket to a RULESET, then you can use that specific ruleset for a POOL
[3:19] * sagelap1 (~sage@17.sub-70-197-65.myvzw.com) Quit (Ping timeout: 480 seconds)
[3:20] * houkouonchi-work (~sandon@gw.sepia.ceph.com) has joined #ceph
[3:20] <Kioob> symmcom: you can look at that example, which uses a specific bucket ("ssd") via a ruleset
[3:20] <Kioob> http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
[3:21] <Kioob> If that bucket is not in your default "root" bucket, only that pool will be able to use it
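In outline, the approach Kioob points to works like this (pool name and ruleset id are illustrative): decompile the CRUSH map, add a separate root/bucket plus a rule that draws only from it, recompile, and point the pool at that rule:

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt: add an 'ssd' root containing only the desired OSDs and a rule that takes from it
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new
    ceph osd pool set mypool crush_ruleset 3   # ruleset id taken from the edited map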
[3:24] * yanzheng (~zhyan@101.82.245.74) has joined #ceph
[3:31] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[3:32] * sagelap (~sage@76.89.177.113) has joined #ceph
[3:33] * yanzheng (~zhyan@101.82.245.74) Quit (Ping timeout: 480 seconds)
[3:36] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) has joined #ceph
[3:43] * alfredodeza (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) Quit (Remote host closed the connection)
[3:51] * yanzheng (~zhyan@101.83.41.204) has joined #ceph
[3:51] * kraken (~kraken@c-24-131-46-23.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[4:01] * denken (~denken@dione.pixelchaos.net) has left #ceph
[4:07] * sandon_ (~sandon@gw.sepia.ceph.com) has joined #ceph
[4:07] * sandon_ is now known as houkouonchi
[4:09] * houkouonchi-work (~sandon@gw.sepia.ceph.com) Quit (Ping timeout: 480 seconds)
[4:15] * snapple (~snapple@75-149-80-169-Illinois.hfc.comcastbusiness.net) Quit (Quit: snapple)
[4:16] * snapple (~snapple@75-149-80-169-Illinois.hfc.comcastbusiness.net) has joined #ceph
[4:25] * BillK (~BillK-OFT@124-148-252-83.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[4:26] * BillK (~BillK-OFT@220-253-162-118.dyn.iinet.net.au) has joined #ceph
[4:29] * snapple (~snapple@75-149-80-169-Illinois.hfc.comcastbusiness.net) Quit (Quit: snapple)
[4:38] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Leaving...)
[4:45] * indeed (~indeed@206.124.126.33) Quit (Remote host closed the connection)
[4:45] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) Quit (Remote host closed the connection)
[4:58] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[5:13] <mikedawson> dmick: are you still around?
[5:28] <dmick> briefly. sup mikedawson?
[5:29] <mikedawson> dmick: what do you make of the missing lines here? http://pastebin.com/raw.php?i=U6U9YvkY
[5:30] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Leaving...)
[5:31] <dmick> um....rbd bench-write not being ... super careful?
[5:31] <mikedawson> I'm trying to get you guys a repeatable test that shows RBD looking bursty like I see in my workload. If I throw rand writes at it with 'rbd bench-write', I get seemingly inconsistent benchmark performance
[5:31] <dmick> mm. is cache on? joshd suggests maybe it is
[5:32] <mikedawson> dmick: it is. with sequential writes, I get every line... very consistent
[5:34] <mikedawson> dmick: cache off, sequential writes http://pastebin.com/raw.php?i=kRy10Jcw
[5:34] <mikedawson> er.. rand writes
[5:36] * carif (~mcarifio@cpe-74-78-54-137.maine.res.rr.com) Quit (Ping timeout: 480 seconds)
[5:38] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) Quit (Ping timeout: 480 seconds)
[5:39] <dmick> hm. I dunno then really. I didn't even know rbd had a bench-write until you mentioned it :)
[5:40] <mikedawson> dmick: just found it yesterday, myself
[5:51] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[6:15] <sage> mikedawson: that may just be the cache's write throttling
[6:16] <sage> it wants to limit the amount of dirty data it has and may not be very good at doing that smoothly
[6:16] <sage> it would be interesting to see a debug ms = 1 log with the cached case
[6:16] <sage> --log-file foo --debug-ms 1
[6:18] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Leaving...)
[6:19] <mikedawson> sage: mikedawson-rbd-with-cache is on cephdrop. Let me know if you want a longer run
[6:19] <sage> do you see the stalls in the output?
[6:20] <mikedawson> sage: can't tell when logging is on because it seems to send logging to strout
[6:20] <sage> --no-log-to-stderr
[6:20] <mikedawson> s/strout/stdout
[6:22] <mikedawson> sagewk: now there is a foo on cephdrop. Saw this stall 23-27 http://pastebin.com/raw.php?i=cKgHqK3n
[6:25] <mikedawson> sagewk: an earlier run with rbd perf dump. Not sure if anything sticks out
[6:25] <mikedawson> http://pastebin.com/raw.php?i=VtQUaqbg
[6:26] <sage> do you have a log to go with the 23-27 stall?
[6:27] <mikedawson> yes, foo on cephdrop
[6:27] <mikedawson> it may have the mikedawson-rbd-with-cache contents at the beginning
[6:27] <sage> k
[6:30] <sage> hrm i don't see the gap. can you do a run with a date at the start/end of the bench-write so i can correlate to the log?
[6:34] <mikedawson> sage: http://pastebin.com/jiWreEyv and mikedawson-rbd-4 on cephdrop
[6:39] <sage> i think it is a funny interaction with the cache
[6:39] <sage> can you repeat with --debug-objectcacher 20
[6:39] <sage> and --debug-rbd 20
[6:39] <sage> it's not the osds; something client side
[6:43] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[6:44] <mikedawson> sage: http://pastebin.com/raw.php?i=RYL9jcgp and mikedawson-rbd-5
[6:44] <mikedawson> ahh, missed rbd. I'll try again
[6:47] * jlogan1 (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph
[6:51] * zhyan_ (~zhyan@101.82.49.246) has joined #ceph
[6:52] <mikedawson> sage: tougher to reproduce with --debug-rbd. Seems to go at about 40% slower throughput
[6:52] <sage> i'll take a look at the other log then
[6:54] * KindTwo (~KindOne@h99.41.186.173.dynamic.ip.windstream.net) has joined #ceph
[6:57] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:57] * KindTwo is now known as KindOne
[6:58] * yanzheng (~zhyan@101.83.41.204) Quit (Ping timeout: 480 seconds)
[7:21] <sage> mikedawson: pushed wip-cache-stall, which might help.. it is either the cache holding the lock for too long, and/or that combined with the librados in-flight limits.
[7:22] <sage> maybe try setting --objecter-inflight-ops 1000000 --objecter-inflight-op-bytes 100000000 and see if that helps?
[7:22] <mikedawson> sage: can i update the box running the test client by itself, or do I need to hit the mons/osds, too?
[7:23] <sage> that's just librbd
[7:23] <sage> though i would make librados2 match librbd1
[7:24] <mikedawson> sage: I also saw a couple stalls with --no-rbd-cache, but it was less pronounced
[7:25] * KindTwo (~KindOne@198.14.194.105) has joined #ceph
[7:25] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[7:26] * KindTwo is now known as KindOne
[7:28] * jantje (~jan@paranoid.nl) Quit (Read error: Connection reset by peer)
[7:28] * jantje (~jan@paranoid.nl) has joined #ceph
[7:39] * Pretztail (~kid@CPE-72-135-237-220.wi.res.rr.com) has joined #ceph
[8:05] * houkouonchi (~sandon@gw.sepia.ceph.com) Quit (Read error: Connection reset by peer)
[8:05] * houkouonchi (~sandon@gw.sepia.ceph.com) has joined #ceph
[8:08] * KindTwo (~KindOne@h49.175.17.98.dynamic.ip.windstream.net) has joined #ceph
[8:10] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[8:10] * KindTwo is now known as KindOne
[8:21] * smiley (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) Quit (Quit: smiley)
[8:40] * tnt (~tnt@109.130.102.13) has joined #ceph
[8:43] * zhyan__ (~zhyan@101.83.48.246) has joined #ceph
[8:44] * AfC (~andrew@2001:44b8:31cb:d400:31b2:f929:558b:657f) has joined #ceph
[8:49] * zhyan_ (~zhyan@101.82.49.246) Quit (Ping timeout: 480 seconds)
[8:53] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[9:00] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[9:07] * rongze (~quassel@106.120.176.78) has joined #ceph
[9:07] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) Quit (Remote host closed the connection)
[9:09] * pipat (~chatzilla@ppp-58-8-244-121.revip2.asianet.co.th) has joined #ceph
[9:12] * houkouonchi (~sandon@gw.sepia.ceph.com) Quit (Quit: Leaving)
[9:13] <pipat> Hi, I am new to ceph and need some advice on starting a system. I am trying to use ceph with a Proxmox cluster and plan to install ceph onto three hardware nodes, each with 1 SSD + 2 x 1TB SATA3 HDs. Any recommendations on how I should install and test the system?
[9:13] <pipat> Also this is my first time using IRC so I may not use the right command.
[9:15] * rongze_ (~quassel@notes4.com) Quit (Ping timeout: 480 seconds)
[9:17] <buck> Maybe configure one OSD per SATA hard drive and sue the SATA drive for the journals and the OS ? And run a monitor on each node? So you'd have 3 (odd numbers are the way to go)
[9:17] <buck> er use, not sue
[9:17] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[9:18] * Pretztail (~kid@CPE-72-135-237-220.wi.res.rr.com) Quit (Quit: Leaving)
[9:18] * rongze_ (~quassel@211.155.113.206) has joined #ceph
[9:18] * mtl2 (~Adium@c-67-176-54-246.hsd1.co.comcast.net) has joined #ceph
[9:19] * sleinen1 (~Adium@2001:620:0:25:98fb:5007:e492:9c69) has joined #ceph
[9:23] * rongze (~quassel@106.120.176.78) Quit (Ping timeout: 480 seconds)
[9:24] * mtl1 (~Adium@c-67-176-54-246.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[9:24] <pipat> I am thinking of using the SSD for journals, 1 old disk for the OS, and the two new 1 TB SATA disks for data. This is the max number of connectors I have on each test PC.
[9:25] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[9:28] * mtl2 (~Adium@c-67-176-54-246.hsd1.co.comcast.net) Quit (Read error: Operation timed out)
[9:28] <pipat> What I am concerned about is the I/O performance I can provide to each VM using a standard network switch (1 Gb ports) and SATA disks. Also the reliability in case of power failure (the UPS runs out after a half-day utility outage). I have been reading about Proxmox clusters of the Pacemaker type and am beginning to worry about fencing devices. If ceph uses similar technology, it would be quite complex to...
[9:28] <pipat> ...maintain and recover.
[9:30] <buck> I'm not familiar with Proxmox, sorry :/
[9:30] <buck> I believe that if a ceph cluster died due to power loss, then the journal should protect the data. Granted, with a half day of power, I'd shut it down before it came to that :)
[9:31] <buck> I like your plan to put the OS on a 3rd drive and only use the SSD for 2 journals. That seems more like what I've seen others doing
[9:31] <buck> (on IRC and on the mailing list)
[9:35] * haomaiwa_ (~haomaiwan@125.108.227.141) Quit (Remote host closed the connection)
[9:35] * haomaiwang (~haomaiwan@li498-162.members.linode.com) has joined #ceph
[9:40] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) has joined #ceph
[9:41] <pipat> In my country, there are a lot of construction projects which require moving power poles along the roads. That is why there are long downtimes. Also, an electrician can always switch the wrong main breakers in the computer room. I like ceph's philosophy of assuming failure is a part of life and making sure we can survive the disasters. Tsunamis, floods, fires, typhoons are all here and IT...
[9:41] <pipat> ...staff have to deal with them. So I hope to build a cluster of nodes on the local LAN and one node at another site for DR. Someone has done this on a Proxmox+OpenVPN platform linking low-speed ADSL links with asynchronous updates. I am hoping that the advanced features in ceph will enable this type of capability. My project is to enable such a virtualized environment.
[9:41] <pipat> Can I just follow the wiki installation instructions to do this project?
[9:45] <buck> those tend to be pretty current in my experience
[9:46] <buck> if you run into snags and IRC isn't yielding fruitful advice, the ceph mailing list is usually pretty helpful (just a tip). Good luck
[9:47] * buck (~buck@c-24-6-91-4.hsd1.ca.comcast.net) has left #ceph
[10:22] * LeaChim (~LeaChim@176.24.168.228) has joined #ceph
[10:25] * haomaiwa_ (~haomaiwan@125.108.227.141) has joined #ceph
[10:30] * haomaiwang (~haomaiwan@li498-162.members.linode.com) Quit (Ping timeout: 480 seconds)
[10:38] * tobru_ (~quassel@217-162-50-53.dynamic.hispeed.ch) has joined #ceph
[10:41] * foosinn (~stefan@office.unitedcolo.de) has joined #ceph
[10:41] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[10:44] * KindTwo (~KindOne@h62.41.186.173.dynamic.ip.windstream.net) has joined #ceph
[10:46] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[10:46] * KindTwo is now known as KindOne
[11:01] * pipat (~chatzilla@ppp-58-8-244-121.revip2.asianet.co.th) Quit (Ping timeout: 480 seconds)
[11:10] * mtl1 (~Adium@c-67-176-54-246.hsd1.co.comcast.net) has joined #ceph
[11:12] * `10` (~10@juke.fm) Quit (Read error: Operation timed out)
[11:17] * lx0 is now known as lxo
[11:18] * pipat (~chatzilla@183.89.157.233) has joined #ceph
[11:27] * `10` (~10@juke.fm) has joined #ceph
[11:38] * zhyan__ (~zhyan@101.83.48.246) Quit (Ping timeout: 480 seconds)
[11:39] * zhyan__ (~zhyan@101.84.65.185) has joined #ceph
[11:42] * art (~artworklv@brln-4dbc27dc.pool.mediaWays.net) has joined #ceph
[11:43] * jjgalvez (~jjgalvez@ip72-193-217-254.lv.lv.cox.net) Quit (Quit: Leaving.)
[11:43] * art is now known as artworklv
[11:45] * jjgalvez (~jjgalvez@ip72-193-217-254.lv.lv.cox.net) has joined #ceph
[11:49] <artworklv> Hi guys! Maybe someone knows… Does radosgw have a file size limit? I tried to upload 7GB of data using swift with no luck. Is it possible to increase this limit?
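No one answers artworklv here; a common workaround for very large objects, assuming the gateway build supports Swift segmented uploads, is to split the upload into segments on the client side (auth URL and credentials below are placeholders):

    swift -A http://radosgw.example.com/auth -U myuser:swift -K SECRETKEY \
          upload mycontainer big.img --segment-size 1073741824   # 1 GB segments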
[11:59] * toMeloos (~tom@53545693.cm-6-5b.dynamic.ziggo.nl) has joined #ceph
[12:04] * odyssey4me (~odyssey4m@165.233.71.2) has joined #ceph
[12:14] <vipr> :q
[12:22] * pipat_ (~chatzilla@183.89.157.233) has joined #ceph
[12:23] * pipat (~chatzilla@183.89.157.233) Quit (Ping timeout: 480 seconds)
[12:23] * pipat_ is now known as pipat
[12:27] * odyssey4me (~odyssey4m@165.233.71.2) Quit (Quit: odyssey4me)
[12:30] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[12:35] * pipat (~chatzilla@183.89.157.233) Quit (Ping timeout: 480 seconds)
[12:36] * jjgalvez (~jjgalvez@ip72-193-217-254.lv.lv.cox.net) Quit (Quit: Leaving.)
[12:36] * jjgalvez (~jjgalvez@ip72-193-217-254.lv.lv.cox.net) has joined #ceph
[12:44] * jjgalvez (~jjgalvez@ip72-193-217-254.lv.lv.cox.net) Quit (Ping timeout: 480 seconds)
[13:02] * carif (~mcarifio@cpe-74-78-54-137.maine.res.rr.com) has joined #ceph
[13:06] * carif (~mcarifio@cpe-74-78-54-137.maine.res.rr.com) Quit ()
[13:22] * Vjarjadian (~IceChat77@90.214.208.5) Quit (Quit: A day without sunshine is like .... night)
[13:59] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Read error: No route to host)
[14:13] * wonkotheinsane (~jf@jf.ccs.usherbrooke.ca) Quit (Ping timeout: 480 seconds)
[14:21] * artworklv (~artworklv@brln-4dbc27dc.pool.mediaWays.net) Quit (Quit: This computer has gone to sleep)
[14:23] * wonkotheinsane (~jf@jf.ccs.usherbrooke.ca) has joined #ceph
[14:27] * zhyan_ (~zhyan@101.83.189.230) has joined #ceph
[14:33] * zhyan__ (~zhyan@101.84.65.185) Quit (Ping timeout: 480 seconds)
[14:43] * smiley (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) has joined #ceph
[14:48] * jlogan1 (~Thunderbi@2600:c00:3010:1:1::40) Quit (Ping timeout: 480 seconds)
[15:06] * smiley (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) Quit (Quit: smiley)
[15:12] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) Quit (Remote host closed the connection)
[15:12] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) has joined #ceph
[15:20] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[15:21] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph
[15:24] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[15:29] * cjh_ (~cjh@ps123903.dreamhost.com) Quit (Ping timeout: 480 seconds)
[15:31] * sleinen1 (~Adium@2001:620:0:25:98fb:5007:e492:9c69) Quit (Ping timeout: 480 seconds)
[15:41] * sleinen1 (~Adium@2001:620:0:26:cdd0:cb13:5b45:1898) has joined #ceph
[15:44] * AfC (~andrew@2001:44b8:31cb:d400:31b2:f929:558b:657f) Quit (Quit: Leaving.)
[15:47] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[15:56] * toMeloos (~tom@53545693.cm-6-5b.dynamic.ziggo.nl) Quit (Quit: Ex-Chat)
[15:57] * aciancaglini (~quassel@78.134.20.174) Quit (Remote host closed the connection)
[15:59] * snapple (~snapple@75-149-80-169-Illinois.hfc.comcastbusiness.net) has joined #ceph
[15:59] * snapple (~snapple@75-149-80-169-Illinois.hfc.comcastbusiness.net) Quit ()
[16:02] * KindTwo (~KindOne@h19.19.131.174.dynamic.ip.windstream.net) has joined #ceph
[16:04] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[16:04] * KindTwo is now known as KindOne
[16:05] * sig_wall (~adjkru@185.14.185.91) Quit (Read error: Connection reset by peer)
[16:07] * sig_wall (~adjkru@185.14.185.91) has joined #ceph
[16:09] * mschiff (~mschiff@xdsl-81-173-176-242.netcologne.de) has joined #ceph
[16:11] * mschiff (~mschiff@xdsl-81-173-176-242.netcologne.de) Quit (Remote host closed the connection)
[16:18] * KindTwo (~KindOne@h202.175.17.98.dynamic.ip.windstream.net) has joined #ceph
[16:20] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[16:20] * KindTwo is now known as KindOne
[16:41] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) has joined #ceph
[16:46] * ssejour (~sebastien@lif35-1-78-232-187-11.fbx.proxad.net) has joined #ceph
[16:53] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[16:54] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Remote host closed the connection)
[16:54] * xmltok (~xmltok@pool101.bizrate.com) has joined #ceph
[16:58] * xmltok_ (~xmltok@pool101.bizrate.com) Quit (Quit: Bye!)
[16:59] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[17:06] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) Quit (Remote host closed the connection)
[17:06] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) has joined #ceph
[17:07] * xmltok (~xmltok@pool101.bizrate.com) Quit (Ping timeout: 480 seconds)
[17:14] * ssejour (~sebastien@lif35-1-78-232-187-11.fbx.proxad.net) Quit (Quit: Leaving.)
[17:14] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[17:15] * BillK (~BillK-OFT@220-253-162-118.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[17:19] * artworklv (~artworklv@brln-4dbc27dc.pool.mediaWays.net) has joined #ceph
[17:22] * ssejour (~sebastien@lif35-1-78-232-187-11.fbx.proxad.net) has joined #ceph
[17:23] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) has joined #ceph
[17:27] * zhyan__ (~zhyan@101.83.206.80) has joined #ceph
[17:27] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) Quit (Remote host closed the connection)
[17:28] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) has joined #ceph
[17:34] * zhyan_ (~zhyan@101.83.189.230) Quit (Ping timeout: 480 seconds)
[17:36] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[17:38] * ssejour (~sebastien@lif35-1-78-232-187-11.fbx.proxad.net) Quit (Quit: Leaving.)
[17:45] * rendar (~s@host119-183-dynamic.16-87-r.retail.telecomitalia.it) has joined #ceph
[17:49] <symmcom> I have set up the RADOS Gateway, created a user, got the user's access keys, and installed an S3-compatible program, but now how can i control where that user can store files through the program
[17:52] <symmcom> if i enter this command it doesn't show any list, so i'm guessing i need to create a bucket? #radosgw-admin bucket list
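Buckets are normally created by the user through the S3 (or Swift) API rather than with radosgw-admin; a minimal sketch with s3cmd, using a placeholder bucket name and user id:

    s3cmd --configure                        # enter the user's access/secret keys and point it at the radosgw host
    s3cmd mb s3://mybucket                   # create a bucket owned by that user
    s3cmd ls                                 # list the user's buckets
    radosgw-admin bucket list --uid=myuser   # confirm from the admin side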
[18:06] <foosinn> is it possible to create an osd with a folder for the cache using ceph-deploy?
[18:06] <foosinn> i have an ssd that's used as cache for multiple osds
[18:06] <foosinn> or do i need to set up partitions for this?
[18:09] <symmcom> foosinn> u mean cache or journal?
[18:13] <foosinn> journal
[18:14] <foosinn> i'll go with partitions. a 9gb journal per hdd should be enough.
[18:14] <symmcom> you definitely need a partition as far as i know
[18:14] <foosinn> ok fine
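A hedged example of what that ends up looking like with ceph-deploy, giving each OSD its own journal partition on the shared SSD (hostname and device names are made up):

    # data disks /dev/sdb and /dev/sdc, journal partitions /dev/sdf1 and /dev/sdf2 on the SSD
    ceph-deploy osd create node1:/dev/sdb:/dev/sdf1 node1:/dev/sdc:/dev/sdf2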
[18:15] <foosinn> one more question: i never read anything about cache o.O is there a cache for cephfs?
[18:15] <foosinn> or for an osd?
[18:15] <symmcom> in my cluster i don't use a separate disk for journaling, each osd has its journal on its own disk
[18:15] <foosinn> i have a mirrored ssd for all osds in the server
[18:16] <symmcom> i think the journal is the cache, in the sense that everything gets written to the journal first and then to the OSD
[18:16] <foosinn> i think so too
[18:16] <foosinn> thanks for your help symmcom :)
[18:17] <symmcom> i tried a mirrored SSD for journaling, but realized that's just an extra layer of management and almost a single point of failure. with the journal on each osd, if a hdd dies i just lose the journal for that osd and the whole cluster continues to run as it is
[18:17] <symmcom> no problem foosinn
[18:18] <foosinn> symmcom, that's why i wanted to have at least a mirror. having two ssds for 6 osds each didn't seem to make sense
[18:20] <symmcom> a lot of people are set up that way, using mirrored SSDs for journaling. i just went a different way. :)
[18:22] <foosinn> one more question: how do i set up the public / storage network? do i have to edit the config file on every server? or can i set this up with ceph-deploy? the guide doesn't mention this.
[18:23] <symmcom> foosinn> not sure i understand what u mean by public/storage network. u mean like SAN?
[18:24] <foosinn> the guide mentions this http://ceph.com/docs/master/rados/configuration/network-config-ref/
[18:24] <foosinn> to keep storage traffic on a dedicated nic
[18:27] <symmcom> i use my cluster for a virtual environment. the virtual nodes and CEPH nodes are on 2 separate dedicated networks, on 2 switches and 2 subnets
[18:28] <symmcom> there are no settings in CEPH really for this, since it is not CEPH specific; as long as you put the CEPH nodes on a different switch, with a different subnet, it becomes a dedicated network of its own
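For completeness, the network-config-ref page foosinn linked does describe optional ceph.conf settings for separating the two networks; they live in [global] and the edited file is then pushed to every node (subnets are placeholders):

    [global]
        public network  = 192.168.1.0/24   # client and monitor traffic
        cluster network = 192.168.2.0/24   # OSD replication / heartbeat traffic

    # with ceph-deploy, edit ceph.conf in the admin working directory, then:
    ceph-deploy --overwrite-conf config push node1 node2 node3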
[18:34] * KindTwo (~KindOne@h231.60.186.173.dynamic.ip.windstream.net) has joined #ceph
[18:35] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:35] * KindTwo is now known as KindOne
[18:37] * symmcom (~wahmed@S0106001143030ade.cg.shawcable.net) has left #ceph
[18:52] * diegows (~diegows@190.190.11.42) has joined #ceph
[18:56] * DarkAceZ (~BillyMays@50.107.55.36) Quit (Ping timeout: 480 seconds)
[18:58] * wonkotheinsane (~jf@jf.ccs.usherbrooke.ca) Quit (Read error: Operation timed out)
[19:01] * joao (~joao@89.181.146.94) Quit (Quit: Leaving)
[19:03] * tobru_ (~quassel@217-162-50-53.dynamic.hispeed.ch) Quit (Remote host closed the connection)
[19:07] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Leaving...)
[19:11] * DarkAceZ (~BillyMays@50.107.55.36) has joined #ceph
[19:13] * wonkotheinsane (~jf@jf.ccs.usherbrooke.ca) has joined #ceph
[19:14] * KindTwo (~KindOne@h102.32.186.173.dynamic.ip.windstream.net) has joined #ceph
[19:16] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:16] * KindTwo is now known as KindOne
[19:20] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[19:26] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) Quit (Ping timeout: 480 seconds)
[19:30] * KindTwo (~KindOne@h152.33.28.71.dynamic.ip.windstream.net) has joined #ceph
[19:32] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:32] * KindTwo is now known as KindOne
[19:52] * tnt_ (~tnt@91.177.230.140) has joined #ceph
[19:54] * tnt (~tnt@109.130.102.13) Quit (Ping timeout: 480 seconds)
[20:20] * foosinn (~stefan@office.unitedcolo.de) Quit (Quit: Leaving)
[20:21] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph
[20:28] * zhyan_ (~zhyan@101.83.198.103) has joined #ceph
[20:34] * zhyan__ (~zhyan@101.83.206.80) Quit (Ping timeout: 480 seconds)
[20:36] * KindTwo (~KindOne@198.14.199.128) has joined #ceph
[20:38] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[20:38] * KindTwo is now known as KindOne
[20:40] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Leaving...)
[20:45] * KindTwo (~KindOne@h249.171.17.98.dynamic.ip.windstream.net) has joined #ceph
[20:46] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[20:46] * KindTwo is now known as KindOne
[20:54] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[21:14] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[21:21] * cmdrk (~lincoln@c-24-12-206-91.hsd1.il.comcast.net) Quit (Read error: Connection reset by peer)
[21:30] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Leaving...)
[21:53] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Remote host closed the connection)
[22:00] * madkiss (~madkiss@2001:6f8:12c3:f00f:fdeb:c788:7082:e4d1) has joined #ceph
[22:09] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[22:09] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit ()
[22:10] * smiley (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) has joined #ceph
[22:18] * madkiss (~madkiss@2001:6f8:12c3:f00f:fdeb:c788:7082:e4d1) Quit (Quit: Leaving.)
[22:38] * danieagle (~Daniel@177.97.250.22) has joined #ceph
[22:50] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[22:57] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) Quit (Ping timeout: 480 seconds)
[22:59] * miniyo (~miniyo@0001b53b.user.oftc.net) Quit (Ping timeout: 480 seconds)
[23:08] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Leaving...)
[23:15] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) has joined #ceph
[23:18] * rendar (~s@host119-183-dynamic.16-87-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[23:23] * miniyo (~miniyo@0001b53b.user.oftc.net) has joined #ceph
[23:27] * zhyan__ (~zhyan@101.82.163.108) has joined #ceph
[23:31] * jjgalvez (~jjgalvez@ip72-193-217-254.lv.lv.cox.net) has joined #ceph
[23:34] * miniyo (~miniyo@0001b53b.user.oftc.net) Quit (Read error: Connection reset by peer)
[23:35] * zhyan_ (~zhyan@101.83.198.103) Quit (Ping timeout: 480 seconds)
[23:35] * miniyo (~miniyo@0001b53b.user.oftc.net) has joined #ceph
[23:40] * jjgalvez (~jjgalvez@ip72-193-217-254.lv.lv.cox.net) Quit (Ping timeout: 480 seconds)
[23:54] * miniyo (~miniyo@0001b53b.user.oftc.net) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.