#ceph IRC Log

IRC Log for 2012-11-17

Timestamps are in GMT/BST.

[0:00] * Leseb (~Leseb@LRouen-151-71-115-203.w193-253.abo.wanadoo.fr) has joined #ceph
[0:02] * mdrnstm (~mdrnstm@206.169.78.213) Quit (Remote host closed the connection)
[0:06] <flesh> thanks gregaf
[0:06] <flesh> I tried a 2 MDS, 2 OSD configuration
[0:07] <flesh> where a lot of clients were creating files at the same time, and one of the MDSes was consuming way too much RAM
[0:07] <flesh> I was wondering if maybe I was just creating too many files, or something was going wrong
[0:08] <gregaf> you might just be creating faster than it can get them onto disk
[0:08] <flesh> I wanted the 2 MDS to be active, so they could share the workload. But it didn't really help
[0:08] <gregaf> are they both active?
[0:08] <gregaf> what's the output of ceph -s?
[0:09] <flesh> http://mibpaste.com/A9Mpok
[0:10] <gregaf> yep, both active
[0:10] <gregaf> (thought maybe you hadn't increased the max mds count)
[0:10] * PerlStalker (~PerlStalk@perlstalker-1-pt.tunnel.tserv8.dal1.ipv6.he.net) Quit (Remote host closed the connection)
[0:11] <flesh> yep, but I don't really think the disk is the problem
[0:11] <flesh> I did
[0:11] <flesh> well
[0:11] <flesh> actually that was the only way I could make them both active
[0:11] <flesh> I don't know if there is any other way
[0:11] <ghbizness> gregaf, i am back now
[0:11] <flesh> if I could specify something on the ceph.conf
[0:11] <ghbizness> gregaf, i used the following command... ceph osd pool set data size 3
[0:12] <ghbizness> as far as writing data to it... i did a dd if=/dev/zero of=jghksdjhg
[0:12] <ghbizness> for 10gigs of data
[0:12] <ghbizness> as far as osds.... 10 osds
[0:12] <ghbizness> 5 hosts, 2 osds per host
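    [sketch] A write test like the one described would look roughly like the following; the mount point, filename, and block size are illustrative assumptions, not taken from the log:
        dd if=/dev/zero of=/mnt/ceph/testfile bs=1M count=10240 conv=fdatasync   # ~10 GB of zeros, flushed to disk before dd exits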
[0:13] <gregaf> flesh: you did it right
[0:13] <flesh> good
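    [sketch] Making two MDS daemons active at once means raising the max MDS count; with the ceph CLI of this era that was roughly the following (exact syntax may differ between versions):
        ceph mds set_max_mds 2   # allow 2 active MDS daemons; any additional daemons stay as standbys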
[0:13] <flesh> but the RAM consumption... any thoughts apart from the disk contention?
[0:14] <gregaf> ghbizness: probably you just didn't wait long enough for the replication to get very far then; it throttles recovery some
[0:14] <ghbizness> n; 39033 MB data, 79186 MB used,
[0:14] <ghbizness> these are test boxes so not much IO going on
[0:15] <ghbizness> i would like to note that my write speeds are now pretty bad after making that change
[0:15] <ghbizness> i was getting 200+ MB / sec
[0:15] <gregaf> ghbizness: is this via rbd or via the filesystem?
[0:15] <gregaf> flesh: unfortunately we don't have a good debug tree I can give you for the mds (yet)
[0:15] <ghbizness> filesystem
[0:16] <ghbizness> getting 100MB/s now
[0:16] <ghbizness> so it is a clear 1/2
[0:16] <flesh> ohh, thanks anyway for the help!
[0:17] <gregaf> flesh: you can dump the perfcounters for each MDS daemon and compare them
[0:18] <gregaf> that'll at least show if both of them are doing anything
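    [sketch] Perf counters are read through each daemon's admin socket, e.g. something like the following per MDS; the socket path assumes the default location, and the command name varies by version ("perf dump" on newer builds, "perfcounters_dump" on older ones):
        ceph --admin-daemon /var/run/ceph/ceph-mds.<name>.asok perf dump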
[0:18] <ghbizness> after writing another 1GB, counters went up by 1GB data / 2GB used
[0:18] <gregaf> also, are you creating all your files in one folder?
[0:19] <gregaf> sjust: can you help out ghbizness here? mismatch with his pool size and the data usage he's seeing
[0:19] <ghbizness> gregaf, the mismatch is we should see 3GB usage for 1GB data
[0:20] <ghbizness> our goal is to have 3 replicas of each block
[0:20] * slang (~slang@ace.ops.newdream.net) has left #ceph
[0:20] * slang (~slang@ace.ops.newdream.net) has joined #ceph
[0:20] <slang> looks like teuthology just got restarted?
[0:20] <sjust> ghbizness: can you post the output of ceph osd dump, ceph osd tree, ceph pg dump, and ceph -s?
[0:21] <ghbizness> ceph osd dump | grep 'rep size'
[0:21] <ghbizness> pool 0 'data' rep size 3 crush_ruleset 0 object_hash rjenkins pg_num 12928 pgp_num 12928 last_change 160 owner 0 crash_replay_interval 45
[0:21] <ghbizness> ceph -s
[0:21] <ghbizness> health HEALTH_OK
[0:21] <ghbizness> monmap e1: 5 mons at {1=172.21.1.1:6789/0,2=172.21.1.2:6789/0,3=172.21.1.3:6789/0,4=172.21.1.4:6789/0,5=172.21.1.5:6789/0}, election epoch 46, quorum 0,1,2,3,4 1,2,3,4,5
[0:21] <ghbizness> osdmap e161: 10 osds: 10 up, 10 in
[0:21] <ghbizness> pgmap v173336: 38784 pgs: 38784 active+clean; 40059 MB data, 81240 MB used, 26746 GB / 27945 GB avail
[0:21] <ghbizness> mdsmap e33: 1/1/1 up {0=3=up:active}, 4 up:standby
[0:21] <gregaf> slang: yeah, dmick and Sandon have been spamming irc and the email lists with that for the last 30 minutes or hour? :)
[0:22] <ghbizness> hmm... let me pastebin this
[0:22] <sjust> ghbizness: yeah
[0:22] <dmick> spamming?
[0:22] <ghbizness> sorry
[0:22] <slang> oh hey look at that
[0:22] <dmick> I sent one for down, one for up; jeez :-P
[0:22] <slang> a NOTICE email
[0:23] * slang notices it
[0:23] * tnt (~tnt@140.20-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[0:23] <gregaf> I didn't say it was inappropriate spam
[0:23] <flesh> gregaf I'm creating them in the root folder where I mount the client. Why are you asking?
[0:24] <gregaf> flesh: ah, so right now Ceph can't split up directories across MDSes
[0:24] <gregaf> you can, for an additional drop in stability, turn that feature on if you wish
[0:24] <slang> dmick: did it already have 6gigs?
[0:24] <flesh> ohhh. I see
[0:25] <gregaf> it's the "mds bal frag" option; set it to true if you're feeling particularly brave and/or foolhardy ;)
[0:25] <flesh> yes, I would like to
[0:25] <ghbizness> http://pastebin.com/Mhk9Eps1
[0:25] <SpamapS> dmick: re how to proceed on the lack of a -v, I'd say file a bug, and perhaps even suggest a patch if you have an idea of why it's broken
[0:25] <dmick> slang: the vm host, not the vm per se
[0:25] <flesh> hahaha I think I do :)
[0:25] <ghbizness> that link has all except pg dump
[0:25] <dmick> and afaik we didn't change the vm allocations
[0:25] <SpamapS> dmick: and keep bugging me to fix it :)
[0:25] <dmick> yet
[0:25] <dmick> SpamapS: the problem is I have no idea what happens between our git tree and your build
[0:25] * ctrl (~Nrg3tik@78.25.73.250) Quit ()
[0:26] <sjust> ghbizness: you are using rbd?
[0:26] <ghbizness> yes
[0:26] <sjust> ok, ceph pg dump?
[0:26] <ghbizness> pg dump is HUGE..
[0:26] <sjust> yes
[0:26] <ghbizness> u want whole thing
[0:26] <sjust> I do
[0:26] <rweeks> pastebin it
[0:26] <ghbizness> rweeks, LOL ... of course
[0:26] <flesh> gregaf, so by default directories are not shared among MDSs?
[0:26] <SpamapS> dmick: well it's not that hard to diff the two source packages
[0:27] <gregaf> flesh: you will also need to do it in something besides the root folder; that's protected a bit :)
[0:27] <flesh> gregaf so by default directories are not shared among MDSs?
[0:27] <gregaf> correct; it's a stability issue
[0:27] <flesh> oh, ok. no problem :)
[0:27] <flesh> Ok, understood
[0:27] <gregaf> anyway, just set the option on the MDS nodes (in the MDS section) and reboot them and it should come on
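    [sketch] In ceph.conf that would look like the following on each MDS node, followed by a restart of the MDS daemons:
        [mds]
            mds bal frag = true   ; let directories be fragmented and balanced across the active MDSes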
[0:30] * maxiz__ (~pfliu@111.192.242.110) Quit (Remote host closed the connection)
[0:31] <ghbizness> so... that just killed firefox for a few mins.... then pastebin came up with a nice message...
[0:31] <ghbizness> You have exceeded the maximum file size of 500 kilobytes per paste. PRO users don't have this limit!
[0:31] <ghbizness> :-)
[0:31] <sjust> you can gzip it and upload it via sftp to ceph.com
[0:32] <ghbizness> i can do that
[0:34] <sjust> ghbizness: looks like your data is in rbd
[0:35] <sjust> ceph osd pool set rbd size 3
[0:35] <ghbizness> i did the data pool... should that matter ?
[0:35] <sjust> not really, it's empty
[0:35] <ghbizness> should i change that back to a different default number?
[0:35] <sjust> you can set it back to 2 if you want
[0:35] <ghbizness> k
[0:37] <ghbizness> g; 40064 MB data, 81471 MB used, 26746 GB / 27945 GB avail; 6529/23483 degraded (27.803%)
[0:37] <ghbizness> nice...
[0:37] <ghbizness> looks good now...
[0:38] <ghbizness> sjust, thanks for the help. looks like i just did the wrong pool
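    [sketch] The numbers above line up with that: ~40 GB of data at 2 replicas is ~80 GB used, which is what ceph -s reported; with the rbd pool at size 3, usage should head toward ~120 GB once re-replication finishes. The commands involved, as used in this session:
        ceph osd dump | grep 'rep size'   # check each pool's replication size
        ceph osd pool set rbd size 3      # the pool that actually holds the RBD images
        ceph osd pool set data size 2     # optionally drop the unused 'data' pool back to 2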
[0:39] <ghbizness> any advantage to setting 3 for the other pools like the metadata pool ?
[0:39] <gregaf> not if you aren't using them; you could delete them if you aren't using the filesystem
[0:39] <gregaf> (it doesn't really hurt either, though)
[0:40] <ghbizness> dont the mds use that pool ?
[0:42] <dmick> yes, but you only use the mds if you're using the Ceph filesystem
[0:42] <ghbizness> i see
[0:43] <ghbizness> we are using this for VMs where we are exporting an RBD mount over NFS...
[0:43] <ghbizness> we are also using it for VM block devices via KVM / Qemu
[0:43] <ghbizness> so i guess both are RBD pool
[0:43] <sjust> yeah, they are
[0:44] <ghbizness> so the mds never gets used ?
[0:44] <ghbizness> only the mons ?
[0:44] <sjust> not with your setup
[0:45] <sjust> if you were using cephfs, the mds would be needed
[0:45] <ghbizness> i see
[0:46] <ghbizness> any advantage to more than 3 mon processes ?
[0:46] <ghbizness> besides a quorum
[0:46] <ghbizness> as in... any performance advantage ?
[0:47] <gregaf> nope
[0:48] <ghbizness> ok, thank you all for your help today, looks like im out for a bit
[0:55] * tnt (~tnt@140.20-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[1:10] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[1:19] * Leseb (~Leseb@LRouen-151-71-115-203.w193-253.abo.wanadoo.fr) Quit (Quit: Leseb)
[1:30] * jlogan1 (~Thunderbi@2600:c00:3010:1:1ccf:467e:284:aea8) Quit (Ping timeout: 480 seconds)
[1:31] * verwilst (~verwilst@dD5769628.access.telenet.be) Quit (Quit: Ex-Chat)
[1:33] * jjgalvez (~jjgalvez@12.248.40.138) Quit (Quit: Leaving.)
[1:38] <wer> can I specify the default content-type for a bucket?
[1:44] * flesh (547908cc@ircip1.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[1:53] * mikeryan (mikeryan@lacklustre.net) has left #ceph
[2:02] * sagewk sigh
[2:03] <sjust> indeed
[2:03] <rweeks> that bad, sage?
[2:03] <yehudasa> wer: no
[2:04] <wer> yehudasa: interesting. Thank you.
[2:04] <rweeks> yehudasa: is that a limitation of ceph, or of the s3 api?
[2:04] <yehudasa> rweeks: the s3 api
[2:05] <rweeks> I suspected so
[2:08] <wer> Thanks!
[2:15] * dmick (~dmick@2607:f298:a:607:75cc:429e:ce3b:50cd) Quit (Quit: Leaving.)
[2:29] * rweeks (~rweeks@c-24-4-66-108.hsd1.ca.comcast.net) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[2:29] * scalability-junk (~stp@188-193-202-99-dynip.superkabel.de) Quit (Ping timeout: 480 seconds)
[2:49] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) has joined #ceph
[3:00] <wer> Is there a way to change the Content-Encoding when using the radosgw? Apache doesn't seem to add headers like Content-Encoding: x-gzip and I am not sure how to get that in there?
[3:01] * Cube1 (~Cube@12.248.40.138) Quit (Read error: Operation timed out)
[3:03] * adjohn (~adjohn@69.170.166.146) Quit (Quit: adjohn)
[3:19] * adjohn (~adjohn@69.170.166.146) has joined #ceph
[3:22] * adjohn (~adjohn@69.170.166.146) Quit ()
[3:24] * miroslav1 (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) Quit (Quit: Leaving.)
[3:45] <jefferai> sagewk: don't suppose you're still around?
[3:54] <jefferai> elder: or you, maybe?
[4:06] * Cube (~Cube@184.255.135.27) has joined #ceph
[4:10] * JoDarc (~Adium@c-98-234-186-68.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[4:17] * Cube (~Cube@184.255.135.27) Quit (Quit: Leaving.)
[4:36] * LarsFronius (~LarsFroni@95-91-242-149-dynip.superkabel.de) Quit (Quit: LarsFronius)
[4:48] * Cube (~Cube@184.255.135.27) has joined #ceph
[4:58] * Cube (~Cube@184.255.135.27) Quit (Quit: Leaving.)
[5:01] <jefferai> So I'm stuck...
[5:01] <jefferai> I'm on Precise, and using Ganeti, so using the RBD kernel client
[5:02] <jefferai> I was told that I really should use the 3.6 kernel, so I've tried the 3.6.1 and 3.6.3 kernels for Precise that you guys build
[5:02] <jefferai> and something is wrong with the bridge networking and it drops packets to the VM
[5:02] <jefferai> if I go back to the 3.2 kernel, the packets aren't dropped -- but then there are some amount of known issues/problems/gotchas with the RBD kernel client
[5:03] <jefferai> (although I don't know offhand what those are...maybe they are things I can work around?)
[5:04] <jefferai> I guess one question is, is it at all possible that the newer RBD/libceph modules could be built against the older kernel, or are there changes/bugfixes in the kernel outside those modules that are important?
[5:05] <jefferai> I could also try building interim releases of the kernel and try to figure out what broke (git-bisect-ish), but if it's beyond my ken to fix (which is likely) then I'm not sure that will do much good
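    [sketch] The git-bisect approach mentioned above, run against a mainline kernel tree (tags assumed; each step needs a rebuild and reboot of the test host):
        git bisect start
        git bisect bad v3.6    # bridge networking drops packets here
        git bisect good v3.2   # bridge networking works here
        # build and boot the commit git checks out, test the bridge, then mark it:
        git bisect good        # or: git bisect bad
        # repeat until git reports the first bad commit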
[5:47] * rweeks (~rweeks@c-24-4-66-108.hsd1.ca.comcast.net) has joined #ceph
[6:05] * gaveen (~gaveen@112.134.112.168) has joined #ceph
[6:05] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) Quit (Quit: Leaving.)
[6:37] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[6:53] * The_Bishop (~bishop@2001:470:50b6:0:6d92:c796:36e6:174b) has joined #ceph
[7:30] * Cube (~Cube@184.255.135.27) has joined #ceph
[7:32] * rweeks (~rweeks@c-24-4-66-108.hsd1.ca.comcast.net) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[7:40] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) has joined #ceph
[7:50] * Cube (~Cube@184.255.135.27) Quit (Quit: Leaving.)
[8:09] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Quit: tryggvil)
[8:42] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[9:00] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[9:05] * Cube (~Cube@184.255.135.27) has joined #ceph
[9:12] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[9:16] * Cube (~Cube@184.255.135.27) Quit (Quit: Leaving.)
[9:17] * loicd (~loic@magenta.dachary.org) has joined #ceph
[9:23] * MikeMcClurg (~mike@cpc10-cmbg15-2-0-cust205.5-4.cable.virginmedia.com) has joined #ceph
[9:37] * Leseb (~Leseb@LRouen-151-71-115-203.w193-253.abo.wanadoo.fr) has joined #ceph
[9:37] * LarsFronius (~LarsFroni@95-91-242-149-dynip.superkabel.de) has joined #ceph
[9:49] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[9:50] * loicd (~loic@magenta.dachary.org) has joined #ceph
[10:11] * MikeMcClurg (~mike@cpc10-cmbg15-2-0-cust205.5-4.cable.virginmedia.com) Quit (Quit: Leaving.)
[10:11] * MikeMcClurg (~mike@cpc10-cmbg15-2-0-cust205.5-4.cable.virginmedia.com) has joined #ceph
[10:14] * Leseb (~Leseb@LRouen-151-71-115-203.w193-253.abo.wanadoo.fr) Quit (Quit: Leseb)
[10:15] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Quit: adjohn)
[10:17] * gaveen (~gaveen@112.134.112.168) Quit (Ping timeout: 480 seconds)
[10:18] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[10:26] * gaveen (~gaveen@112.134.113.124) has joined #ceph
[10:35] * baozich (~cbz@118.249.94.252) has joined #ceph
[10:40] <baozich> hi, everyone. I'm trying to build the latest version from the git repo on my fedora 17 box. However, it failed with the following error message:
[10:40] <baozich> CXXLD radosgw
[10:40] <baozich> /usr/bin/ld: radosgw-rgw_resolve.o: undefined reference to symbol '__res_nquery@@GLIBC_2.2.5'
[10:40] <baozich> /usr/bin/ld: note: '__res_nquery@@GLIBC_2.2.5' is defined in DSO /lib64/libresolv.so.2 so try adding it to the linker command line
[10:40] <baozich> /lib64/libresolv.so.2: could not read symbols: Invalid operation
[10:41] <baozich> collect2: error: ld returned 1 exit status
[10:41] <baozich> make[3]: *** [radosgw] Error 1
[10:41] <baozich> make[3]: Leaving directory `/home/cbz/src/ceph/src'
[10:41] <baozich> make[2]: *** [all-recursive] Error 1
[10:41] <baozich> make[2]: Leaving directory `/home/cbz/src/ceph/src'
[10:41] <baozich> make[1]: *** [all] Error 2
[10:41] <baozich> make[1]: Leaving directory `/home/cbz/src/ceph/src'
[10:41] <baozich> make: *** [all-recursive] Error 1
[10:41] <baozich> any suggestion?
[10:42] * Cube (~Cube@184.255.135.27) has joined #ceph
[10:45] <dweazle> baozich: uhm.. yum install glibc-devel?
[10:49] <baozich> yup, glibc-devel-2.15-57.fc17.x86_64
[10:50] <baozich> i built the srpm from fedora repo successfully
[10:50] <baozich> which is 0.53
[10:50] <baozich> weird
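    [sketch] The linker error above means libresolv is missing from the radosgw link line; one common workaround (an assumption here, not something confirmed in this log) is to pass it explicitly to the autotools build:
        ./autogen.sh
        ./configure LDFLAGS="-lresolv"
        make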
[10:59] * Cube (~Cube@184.255.135.27) Quit (Quit: Leaving.)
[11:26] * LarsFronius (~LarsFroni@95-91-242-149-dynip.superkabel.de) Quit (Quit: LarsFronius)
[11:52] * Cube (~Cube@184.255.135.27) has joined #ceph
[11:53] * Cube (~Cube@184.255.135.27) Quit ()
[11:55] * Cube (~Cube@184.255.135.27) has joined #ceph
[11:56] * Cube (~Cube@184.255.135.27) Quit ()
[12:07] * baozich (~cbz@118.249.94.252) has left #ceph
[12:13] * baozich (~cbz@118.249.94.252) has joined #ceph
[12:16] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[12:23] * guigouz (~guigouz@177.33.216.27) Quit (Quit: Computer has gone to sleep.)
[13:13] * guigouz1 (~guigouz@201.83.213.121) has joined #ceph
[13:29] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[13:29] * loicd (~loic@magenta.dachary.org) has joined #ceph
[13:47] * maxiz (~pfliu@111.194.214.36) has joined #ceph
[14:16] * vaab (~vaab@cha28-1-82-240-147-17.fbx.proxad.net) has joined #ceph
[14:17] <vaab> Hello all !
[14:18] <vaab> I'm pretty new to ceph technology, and I have some questions.
[14:20] <vaab> I don't have access to great hardware, but my hardware meets the requirements on ceph.org except for the ethernet bandwidth between the hosts. Mine is 100 Mbit/s ... can I still test ceph, and use it in a very small semi-prod environment (with a limited number of clients)?
[14:22] <vaab> Second question: when used in a cloud environment (I'm new to this too), are the VM images stored on the ceph rbd device, or are the VMs clients that mount the device in their environment? Are both sensible solutions?
[14:25] <vaab> Does it make sense to have ceph provided by one or more LXC containers running on a host?
[14:28] <vaab> Are there any filesystems that are better suited to being built on top of an rbd device?
[14:37] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) has joined #ceph
[14:50] <vaab> I'm heading towards serverfault to ask these questions... question1: http://serverfault.com/questions/449699/is-ceph-usable-with-only-100mbps-bandwidth-between-nodes
[14:55] <Meyer__> vaab: it will work, but will of course not be as fast as with 1G
[14:57] <vaab> well, that I would have guessed. My question is: is it usable in a low-requirement environment? If I have to wait 5 minutes for each file access, that will be an issue for me, even if I have low requirements ;).
[14:59] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[15:01] <vaab> I've tested Ceph thanks to the 5min quickstart guide on my own computer in a local environment and I experienced some big lags, but that remains usable for me (and it seems that's normal on a new fs created on the distributed device).
[15:01] <vaab> But if this sort of lag is continuous and a little worse on two hosts separated by a 100 Mbps link, it wouldn't meet my requirements.
[15:05] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[15:06] * loicd (~loic@magenta.dachary.org) has joined #ceph
[15:12] <vaab> The second question is here: http://serverfault.com/questions/449706/how-exactly-are-distributed-file-systems-used-in-cloud-environment
[15:21] <vaab> the third: http://serverfault.com/questions/449709/are-there-any-filesystem-better-suited-for-ceph-rbd
[15:23] <vaab> If these questions are improper (I'm really new to all these concepts), please take the time to explain why. Thank you!
[15:25] * gaveen (~gaveen@112.134.113.124) Quit (Remote host closed the connection)
[15:28] * gaveen (~gaveen@112.134.113.124) has joined #ceph
[15:30] * maxiz (~pfliu@111.194.214.36) Quit (Ping timeout: 480 seconds)
[15:39] * maxiz (~pfliu@111.194.210.108) has joined #ceph
[16:19] * plut0 (~cory@pool-96-236-43-69.albyny.fios.verizon.net) has joined #ceph
[16:24] * gaveen (~gaveen@112.134.113.124) Quit (Remote host closed the connection)
[16:26] * gaveen (~gaveen@112.134.113.124) has joined #ceph
[16:26] <plut0> is there any documentation on architect sizing?
[16:41] <darkfader> vaab: i replied to one of your q's
[16:41] <darkfader> i think
[16:41] <darkfader> if you wanna use ceph for real over 100mbit, that is a no-go
[16:42] <darkfader> it's not the standard linux toy fs that does everything asynchronously and leaves you with lost data if the connection breaks
[16:59] * guigouz1 (~guigouz@201.83.213.121) Quit (Quit: Computer has gone to sleep.)
[17:07] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[17:08] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[17:19] <vaab> darkfader: thanks for your answer ! This is a clean answer, even if I would have preferred to hear the opposite conclusion ;) ...
[17:19] <darkfader> hehe
[17:20] <darkfader> i didn't read the use case very carefully. as long as it stays one that just keeps 2 copies of a given piece of data, i'd probably pick drbd with not-fully-synchronous replication
[17:20] <plut0> all writes are synchronous to each replica right?
[17:20] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[17:21] * loicd (~loic@magenta.dachary.org) has joined #ceph
[17:21] <darkfader> plut0: i'm not even sure, there used to be different replication topologies but i think they removed one
[17:21] <darkfader> vaab's q was easy enough for me
[17:32] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[17:40] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Remote host closed the connection)
[17:40] <vaab> darkfader: I'm getting a shower of -1 for 2 of my questions. Could you explain to me why? (maybe I could improve my questions) (cf: http://serverfault.com/questions/449706/how-exactly-are-distributed-file-systems-used-in-cloud-environment)
[17:43] <plut0> vaab: i don't believe DFS have a high market share
[17:44] <plut0> vaab: probably a good idea to have at least some of your DFS infrastructure on physical hardware or you'll have a circular dependency
[17:46] <vaab> plut0: I'm quite new to these topics, so I'm ready to give your beliefs credit ;) ... Do you know then how DFS is used at amazon, for instance? (I'm thinking of S3) If not, do you have some links or google clues to give me so I can try to find answers myself?
[17:49] <vaab> my best bet, from what you said, would be that they have a shitload of physical hosts serving their DFS, but do they use the DFS to put VMs on it for services, or is it not really made for this, and they just have another shitload of physical hosts serving services that are just clients to the DFS when needed?
[17:49] <vaab> I'm not sure I'm very clear on this. Most of these notions are quite fresh for me.
[17:50] <darkfader> vaab: if you have 30 minutes, try to google for the original paper on ceph and then read it slowly
[17:50] <darkfader> ceph is just "one" distributed filesystem, but the architecture is very clean so there's nothing wrong starting to learn there
[17:55] <vaab> Won't I only find info on how ceph works and its internal structure? That probably won't answer my practical question: how is it effectively deployed in relation to clouds? Well, I guess I'll probably go read the paper you mentioned even if I'm not sure it'll help me.
[18:00] <darkfader> it will not answer the direct question
[18:01] <darkfader> but cover a dozen others and give you some understanding and then you can ask your question more specifically :)
[18:38] * baozich (~cbz@118.249.94.252) has left #ceph
[19:08] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[19:08] * iltisanni (d4d3c928@ircip1.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[19:14] * Leseb (~Leseb@LRouen-151-71-115-203.w193-253.abo.wanadoo.fr) has joined #ceph
[19:21] * Leseb_ (~Leseb@LRouen-151-71-115-203.w193-253.abo.wanadoo.fr) has joined #ceph
[19:21] * Leseb (~Leseb@LRouen-151-71-115-203.w193-253.abo.wanadoo.fr) Quit (Read error: Connection reset by peer)
[19:21] * Leseb_ is now known as Leseb
[19:39] * rweeks (~rweeks@c-24-4-66-108.hsd1.ca.comcast.net) has joined #ceph
[19:49] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Quit: adjohn)
[20:05] * loicd (~loic@2a01:e35:2eba:db10:120b:a9ff:feb7:cce0) has joined #ceph
[20:08] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[20:08] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[20:11] * Leseb (~Leseb@LRouen-151-71-115-203.w193-253.abo.wanadoo.fr) Quit (Quit: Leseb)
[20:12] <plut0> is there any documentation on architect capacity planning?
[20:17] * gaveen (~gaveen@112.134.113.124) Quit (Ping timeout: 480 seconds)
[20:26] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[20:26] * gaveen (~gaveen@112.134.112.156) has joined #ceph
[20:27] * danieagle (~Daniel@177.99.134.146) has joined #ceph
[20:27] * gaveen (~gaveen@112.134.112.156) Quit (Remote host closed the connection)
[20:33] * Leseb (~Leseb@LRouen-151-71-115-203.w193-253.abo.wanadoo.fr) has joined #ceph
[20:34] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Quit: adjohn)
[20:37] * Leseb (~Leseb@LRouen-151-71-115-203.w193-253.abo.wanadoo.fr) Quit ()
[20:39] * rweeks (~rweeks@c-24-4-66-108.hsd1.ca.comcast.net) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[20:39] <plut0> anyone?
[20:46] * cypher497 (~jay@76.175.167.163) has joined #ceph
[20:59] * calebamiles (~caleb@c-24-128-194-192.hsd1.vt.comcast.net) Quit (Ping timeout: 480 seconds)
[21:01] * calebamiles (~caleb@c-24-128-194-192.hsd1.vt.comcast.net) has joined #ceph
[21:08] * buck (~buck@bender.soe.ucsc.edu) has joined #ceph
[21:08] * buck (~buck@bender.soe.ucsc.edu) has left #ceph
[21:20] <jefferai> plut0: "architect"?
[21:20] <jefferai> can you describe what you're trying to figure out a bit more?
[21:22] <plut0> jefferai: sizing of infrastructure
[21:22] <jefferai> to meet what needs?
[21:22] <jefferai> do you mean in terms of hardware requirements?
[21:22] <jefferai> that's really vague
[21:23] <plut0> jefferai: yes hardware, cpu, ram, disk, number of hosts, etc.
[21:25] <plut0> minimum requirements, how to size, calculator perhaps, etc.
[21:36] <plut0> nothing out there for this?
[21:49] * rweeks (~rweeks@c-24-4-66-108.hsd1.ca.comcast.net) has joined #ceph
[21:56] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) has joined #ceph
[22:04] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[22:18] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[22:19] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[22:20] * brambles (xymox@grip.espace-win.org) Quit (Ping timeout: 480 seconds)
[22:29] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[22:44] * sjustlaptop (~sam@68-119-138-53.dhcp.ahvl.nc.charter.com) has joined #ceph
[23:04] <jefferai> plut0: besides the ceph documentation?
[23:04] <jefferai> http://ceph.com/docs/master/install/
[23:05] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Quit: adjohn)
[23:25] * jluis (~JL@89.181.157.220) has joined #ceph
[23:31] * joao (~JL@89.181.156.186) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.