#ceph IRC Log


IRC Log for 2012-11-24

Timestamps are in GMT/BST.

[0:18] * karl_k (~karl@213.47.43.12) has joined #ceph
[0:19] <karl_k> hi, a quick question: how is replication influencing available storage space ? i.e. on raid5 i get 2 disks from 3, how is that in ceph ?
[0:32] <iggy> karl_k: replication is customizable per pool. the default is 2 which is equivalent to mirroring
[0:33] <karl_k> hi ! so if i want replication i will lose half the storage space right ?
[0:36] <iggy> you will lose however much your replication level is set to
[0:36] <iggy> so using defaults, yes, you have half the usable space
[0:38] <karl_k> ok, and there is no way around that, if i want any security against hdd failure i have to sacrifice half the space of my cluster. Thats a lot !
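A quick arithmetic sketch of the capacity math discussed above (illustrative only; disk counts and sizes are hypothetical): with N-way replication usable space is raw space divided by N, while a 3-disk RAID5 array keeps 2 of its 3 disks.

    # Illustrative capacity math only; numbers are hypothetical.
    def ceph_usable_tb(raw_tb, replicas=2):
        # n-way replication stores every object n times
        return raw_tb / replicas

    def raid5_usable_tb(disks_per_array, disk_tb):
        # RAID5 loses one disk per array to parity
        return (disks_per_array - 1) * disk_tb

    raw = 12 * 2.0                              # 12 x 2 TB disks
    print(ceph_usable_tb(raw, replicas=2))      # -> 12.0 TB usable
    print(4 * raid5_usable_tb(3, 2.0))          # four 3-disk RAID5 arrays -> 16.0 TB usable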
[0:39] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[0:44] <iggy> hard drives are cheap
[0:46] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[0:47] <karl_k> is there a reason why there is no raid5 like algorithm on the ceph pool level ?
[0:50] <Robe> complexity most likely
[0:50] <Robe> and latency
[0:50] <karl_k> i need to build a cheap solution to storage of large datasets from human genome sequencing
[0:51] <Robe> and a replica count of 2 is already too expensive?
[0:51] <CristianDM> Is it possible change the journal size?
[0:51] <Robe> CristianDM: yes, see the docs
[0:52] <CristianDM> Yes, but in production
[0:52] <CristianDM> I set it to 1000 but now need to put it to 2000
[0:52] <Robe> ah
[0:52] <CristianDM> I can't find any information in the docs
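For reference, a rough sketch of the usual way to grow a FileStore journal on a cluster of that era, wrapped in Python for illustration. The OSD id, the init-script invocation, and the flow are assumptions, not taken from the log; the idea is to raise "osd journal size" (in MB) in ceph.conf first, then recreate the journal one OSD at a time.

    # Hedged sketch only: grow an OSD journal in place (2012-era FileStore).
    # Assumes "osd journal size = 2000" is already set in ceph.conf and that
    # osd.0 is a hypothetical example id.
    import subprocess

    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.check_call(cmd)

    osd = "0"
    run("service", "ceph", "stop", "osd." + osd)     # quiesce the OSD
    run("ceph-osd", "-i", osd, "--flush-journal")    # drain pending journal entries
    run("ceph-osd", "-i", osd, "--mkjournal")        # recreate the journal at the new size
    run("service", "ceph", "start", "osd." + osd)    # bring the OSD back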
[0:52] <karl_k> in a 12slot case i could fit 4 raid5 arrays of 3 disks each and use these 4 as osds, thus i would get 8 disks out of 12 with redundancy, right ? is that a bad idea ?
[0:53] <Robe> karl_k: most people use one osd per disk and let rados/ceph take care of redundancy
[0:55] <karl_k> so your advice is to stick to redundancy=2 and live with 50% capacity loss?
[0:58] <karl_k> which would on the other hand keep my data safe until half of the disks are dead ... :)
[1:02] <iggy> yes and no, there is time for data to redistribute after an osd is marked out
[1:02] <iggy> if you only have 1 server for storage, why are you looking at ceph?
[1:04] <karl_k> well, we will generate about 1-2tb of data per week. i cant buy too much at a time so i want something that i can incrementally increase
[1:05] <karl_k> i got a 19" rack and can fill that with servers with 12 drive bays and then buy disks on a regular basis
[1:06] <iggy> what's the workload?
[1:06] <karl_k> this is a university institute....
[1:06] <karl_k> not much, basically static data
[1:07] * tnt (~tnt@48.29-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[1:07] <iggy> it would probably be more economical to do higher disk density than 2u servers
[1:09] <iggy> 1 dual cpu motherboard with tons of memory can probably push closer to 36-48 disks in that kind of scenario
[1:09] * xiaoxi (~xiaoxiche@jfdmzpr04-ext.jf.intel.com) has joined #ceph
[1:10] <karl_k> blushing i have to admit that any single item on my shopping list cant be above 400 € .... university ...
[1:12] <karl_k> any other ideas how to do a big storage without losing 50% capacity and still have some protection against disk failure ?
[1:18] <iggy> do raid in the storage server and use replication=1?
[1:18] <iggy> but if anything happens to a server, the entire fs is unusable
[1:19] <karl_k> yep i also think to throw bunches of 3 disks into raid5 arrays as osds and pool these
[1:20] <karl_k> i need storage space, availability is not so much an issue
[1:20] <iggy> then ceph is probably more complexity than you need
[1:21] <karl_k> any alternative that is easily expandable ?
[1:21] <iggy> why 3 disk raid 5 arrays?
[1:22] * yanzheng (~zhyan@jfdmzpr05-ext.jf.intel.com) has joined #ceph
[1:22] <karl_k> 4 osds, each a 3-disk raid5 = 12 disks per server
[1:22] <karl_k> or something along that lines
[1:22] <iggy> why not just put all 12 drives in the same array (raid 5 or 6)
[1:23] <karl_k> basically i could use raid5 for redundancy and ceph for scalability
[1:23] <karl_k> what i dont know is if this is a sane setup :)
[1:24] <iggy> not really, ceph isn't going to give you any more scalability than nfs at that point
[1:24] <iggy> add to that the fact that the ceph filesystem bits aren't suggested for production yet
[1:25] * xiaoxi (~xiaoxiche@jfdmzpr04-ext.jf.intel.com) Quit (Ping timeout: 480 seconds)
[1:26] <karl_k> oh, seems i need to read up on nfs, didnt know it scales
[1:27] <iggy> it doesn't, but neither does ceph in that kind of setup
[1:27] <karl_k> sorry, wrong wording, didnt think about how to grow a nfs volume
[1:28] <iggy> it grows with the underlying filesystem
[1:28] <karl_k> ok, so i do raid5, lvm, ext4, nfs right ?
[1:29] * BManojlovic (~steki@212.69.24.38) Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:29] <iggy> that's an option
[1:29] <karl_k> can i span that on a second server ?
[1:30] <iggy> not the same fs, no
[1:30] <karl_k> so then iscsi and lvm comes to mind...
[1:30] <iggy> well, pnfs might allow that
[1:34] <iggy> i wasn't thinking of multiple servers when you said scalability... i normally think performance when scalability is mentioned
[1:34] <iggy> so yeah, for that ceph with repl=1 could make sense
[1:35] <iggy> but there's still the issue of cephfs not being ready for production yet
[1:37] <karl_k> hmm, how far from production is cephfs ?
[1:37] <iggy> next year?
[1:38] <iggy> nobody can say for sure... i doubt if the devs working on it even know
[1:39] <karl_k> ok, so i think along the lines: buy cases for 24 disks, buy disks in bundles of 3-5, raid5 the bundles, add them as osds to a ceph pool
[1:40] <karl_k> thus i get cheap safety from 1 failing disk (or do raid6) and i can infinitely grow the space
[1:40] <karl_k> but i need to wait for cephfs to be ready
[1:41] <iggy> i wouldn't make multiple raid arrays per server
[1:41] <iggy> you lose more capacity that way
[1:42] <karl_k> yep, could do servers of 12 disks with one large raid6 array
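To make the trade-off above concrete, a small back-of-the-envelope comparison for a hypothetical 12-bay server (replication either handled by RAID with ceph replication=1, or by ceph itself):

    # Usable disks out of 12 under the layouts discussed; illustrative only.
    layouts = {
        "4 x RAID5(3 disks)":  12 - 4 * 1,   # one parity disk per array -> 8 usable
        "1 x RAID6(12 disks)": 12 - 2,       # two parity disks          -> 10 usable
        "ceph replication=2":  12 // 2,      # every object stored twice -> 6 usable
    }
    for name, usable in layouts.items():
        print("%-22s %d/12 disks usable" % (name, usable))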
[1:43] <karl_k> i wonder if i could just use iscsi and lvm to bundle multiple servers
[1:43] <karl_k> i.e. iscsi, lvm, nfs
[1:45] <karl_k> btw, just found a nice powerpoint about "alternative reliability models in ceph"
[1:45] <karl_k> http://www.google.com/url?sa=t&rct=j&q=alternative%20reliability%20models%20in%20ceph&source=web&cd=2&cad=rja&ved=0CEEQFjAB&url=http%3A%2F%2Finstitutes.lanl.gov%2Fisti%2Fissdm%2Fprojects%2Fissdm-Bigelow-CephReliability-R11.pdf&ei=nhiwUPXsNaWl4gSY7ICwCQ&usg=AFQjCNHeyh7l3MKKBXM6L-TqdyAr4kS9OQ
[1:45] <karl_k> ah, sorry
[1:48] <karl_k> http://institutes.lanl.gov/isti/issdm/projects/issdm-Bigelow-CephReliability-R11.pdf
[1:50] <karl_k> ok, thanks for the discussion, need sleep, cheers !
[1:50] * karl_k (~karl@213.47.43.12) Quit (Quit: Verlassend)
[2:00] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Read error: Connection reset by peer)
[2:00] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[2:13] * benner (~benner@193.200.124.63) Quit (Read error: Connection reset by peer)
[2:16] * benner (~benner@193.200.124.63) has joined #ceph
[2:50] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[3:30] * benner (~benner@193.200.124.63) Quit (Read error: Connection reset by peer)
[3:30] * benner (~benner@193.200.124.63) has joined #ceph
[3:32] <plut0> tell me if i'm understanding this correctly... files are striped across many objects, objects are mapped to PG's 1:1, PG's are mapped to N OSD's where N is the number of replicas + 1?
[3:35] <yanzheng> right
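A simplified sketch of the placement path being described: a file is striped into objects, each object hashes to a placement group (in practice many objects share one PG, via a hash of the object name modulo the pool's pg count), and CRUSH maps each PG to a set of OSDs. The hash, pg count, and osd list below are stand-ins, not the real algorithm.

    # Simplified stand-in for Ceph placement (real Ceph uses rjenkins hashing and CRUSH).
    import zlib

    OBJECT_SIZE = 4 * 1024 * 1024          # roughly the default object/stripe size
    PG_NUM = 128                           # hypothetical pool pg count
    OSDS = 9                               # hypothetical osd count
    REPLICAS = 2

    def file_to_objects(name, size_bytes):
        # a file is striped into ceil(size / OBJECT_SIZE) objects
        count = (size_bytes + OBJECT_SIZE - 1) // OBJECT_SIZE
        return ["%s.%08d" % (name, i) for i in range(count)]

    def object_to_pg(obj_name):
        # many objects share one PG: hash the name, mod the pg count
        return zlib.crc32(obj_name.encode()) % PG_NUM

    def pg_to_osds(pg):
        # stand-in for CRUSH: pick REPLICAS distinct osds deterministically
        return [(pg + r * 4) % OSDS for r in range(REPLICAS)]

    for obj in file_to_objects("sample.dat", 10 * 1024 * 1024):
        pg = object_to_pg(obj)
        print(obj, "-> pg", pg, "-> osds", pg_to_osds(pg))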
[3:45] * scuttlemonkey (~scuttlemo@96-42-136-136.dhcp.trcy.mi.charter.com) has joined #ceph
[3:45] * ChanServ sets mode +o scuttlemonkey
[3:47] <plut0> how do you know how many objects are mapped to a file?
[3:49] <iggy> size of file / object size
[3:49] <plut0> is the object size configurable?
[3:49] <plut0> is that like a stripe size?
[3:50] <iggy> there's probably a ceph util to get it too
[3:50] <iggy> it is, i don't know what granularity though (pool, file, etc)
[3:51] <iggy> *it is configurable
[3:51] <plut0> i would think you'd want the object size to match the number of osd's, right?
[3:51] <iggy> doubtful
[3:51] <plut0> you'd want to stripe across almost all osd's right?
[3:52] <iggy> what happens when you have 100,000 OSDs
[3:52] <plut0> are you saying the effectiveness of the stripe would become overhead at some point?
[3:52] <iggy> the default is 1M or 4M maybe
[3:53] <plut0> work against you perhaps
[3:54] <iggy> you don't want to have to send 1000s of packets to retrieve a 14K file
[3:54] <plut0> so how do you know where the sweet spot is?
[3:54] <plut0> is there an equation to calculate?
[3:57] * scuttlemonkey (~scuttlemo@96-42-136-136.dhcp.trcy.mi.charter.com) Quit (Quit: This computer has gone to sleep)
[3:57] * The_Bishop (~bishop@2001:470:50b6:0:c853:26b5:8bac:d3b3) Quit (Ping timeout: 480 seconds)
[4:01] <iggy> workload testing
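As a rough illustration of the object-size trade-off iggy is pointing at (all sizes hypothetical): smaller objects spread a file over more OSDs but turn even modest files into many requests.

    # How many objects (and hence OSD requests) a file turns into at a given object size.
    def object_count(file_bytes, object_bytes):
        return max(1, (file_bytes + object_bytes - 1) // object_bytes)

    KB, MB = 1024, 1024 * 1024
    for file_size in (14 * KB, 100 * MB, 10 * 1024 * MB):
        for obj_size in (64 * KB, 4 * MB):
            print("%8.2f MB file / %4d KB objects -> %6d objects"
                  % (file_size / MB, obj_size // KB, object_count(file_size, obj_size)))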
[4:02] * xiaoxi (~xiaoxiche@134.134.137.73) has joined #ceph
[4:04] <xiaoxi> Hi, does anyone have an idea why the dirty pagecache has some relationship with performance? Meaning, if I clean the pagecache on all ceph nodes, the performance drops a lot
[4:06] * The_Bishop (~bishop@2001:470:50b6:0:418:7473:300c:5f1d) has joined #ceph
[4:07] <plut0> iggy: it's configurable though? not dynamic?
[4:11] <iggy> plut0: correct
[4:11] * yanzheng (~zhyan@jfdmzpr05-ext.jf.intel.com) Quit (Remote host closed the connection)
[4:12] <plut0> iggy: so you may have to adjust as you add more osd's?
[4:14] <iggy> nah, you'd adjust more based on workload than OSD count
[4:15] <plut0> what does daemon refer to as in these docs? http://ceph.com/docs/master/install/hardware-recommendations/
[4:16] <iggy> depends which part you are reading
[4:17] <iggy> in general it just refers to the MDS, OSD, and CMON processes that run
[4:17] <plut0> under the hardware recommendation section
[4:18] <iggy> for best performance, you would want to put MDS and MON daemons on servers that aren't also OSDs
[4:18] <plut0> RAM - 500MB per daemon, etc.
[4:19] <iggy> if you are running multiple processes on the same server, you want to have 500 per
[4:19] <iggy> so if you have 12 disks with an OSD on each one, that's 6G of ram
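A back-of-the-envelope RAM budget along the lines of the figures quoted above (500 MB per OSD daemon, about 1 GB per monitor); the host layout here is hypothetical.

    # Rough per-host RAM budget, illustrative only.
    OSD_MB, MON_MB = 500, 1024

    def host_ram_mb(osds, mons=0):
        return osds * OSD_MB + mons * MON_MB

    print(host_ram_mb(osds=12), "MB")           # 12 OSD daemons -> 6000 MB (~6 GB)
    print(host_ram_mb(osds=12, mons=1), "MB")   # plus a co-located monitor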
[4:20] * CristianDM (~CristianD@186.153.251.60) Quit ()
[4:20] <plut0> so if we're talking osd, each osd is a daemon?
[4:20] <iggy> yes
[4:21] <plut0> what about monitor?
[4:21] <iggy> it says 1G each on that page
[4:21] <plut0> how many monitors would you have though?
[4:21] <iggy> daemon is a general unix/linux term
[4:22] <plut0> you'd run only 1 monitor per host right?
[4:22] <iggy> oh, 1 or 3+
[4:22] <iggy> per cluster
[4:22] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) has joined #ceph
[4:22] <plut0> ok
[4:22] <iggy> you probably don't want a mon per host (unless it's a very small number of hosts)
[4:23] <plut0> i'd probably do 3 dedicated hosts each with a monitor
[4:24] <plut0> mds is only relevant for cephfs right?
[4:25] <iggy> correct
[4:25] <plut0> can't be used for rbd right?
[4:31] <iggy> it's just not necessary
[4:31] <plut0> whys that?
[4:34] <plut0> doesn't it split the metadata from osd?
[4:50] * scuttlemonkey (~scuttlemo@96-42-136-136.dhcp.trcy.mi.charter.com) has joined #ceph
[4:50] * ChanServ sets mode +o scuttlemonkey
[4:55] * loicd (~loic@2a01:e35:2eba:db10:ecfc:5795:a1de:9b71) Quit (Quit: Leaving.)
[4:55] * loicd (~loic@magenta.dachary.org) has joined #ceph
[5:18] * scuttlemonkey (~scuttlemo@96-42-136-136.dhcp.trcy.mi.charter.com) Quit (Quit: This computer has gone to sleep)
[5:29] * scuttlemonkey (~scuttlemo@96-42-136-136.dhcp.trcy.mi.charter.com) has joined #ceph
[5:29] * ChanServ sets mode +o scuttlemonkey
[5:42] * themgt (~themgt@96-37-21-211.dhcp.gnvl.sc.charter.com) has joined #ceph
[5:43] <themgt> if I have fubared my mon setup, is there a way to recover?
[5:44] <themgt> I tried to add monitors using this approximately: http://ceph.com/docs/master/rados/operations/add-or-rm-mons/ and there's been no more output from 'ceph status' etc
[5:49] * scuttlemonkey (~scuttlemo@96-42-136-136.dhcp.trcy.mi.charter.com) Quit (Quit: This computer has gone to sleep)
[6:04] <themgt> ahh nm, was able to force remap
[6:04] * themgt (~themgt@96-37-21-211.dhcp.gnvl.sc.charter.com) has left #ceph
[6:15] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[6:16] * loicd (~loic@magenta.dachary.org) has joined #ceph
[6:21] <iggy> plut0: rbd volumes don't have metadata
[6:21] * cypher6877 (~jay@cpe-76-175-167-163.socal.res.rr.com) Quit ()
[6:21] <plut0> iggy: thanks
[6:45] <phantomcircuit> is it safe to expose ceph to the outside world with cephx for authentication?
[6:45] <phantomcircuit> im guessing probably it's a bad idea
[6:45] * gaveen (~gaveen@112.134.113.147) has joined #ceph
[6:46] * plut0 (~cory@pool-96-236-43-69.albyny.fios.verizon.net) has left #ceph
[7:12] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) Quit (Remote host closed the connection)
[7:36] * yanzheng (~zhyan@134.134.139.76) has joined #ceph
[7:49] * gregaf1 (~Adium@cpe-76-174-249-52.socal.res.rr.com) has joined #ceph
[7:54] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[7:54] * loicd (~loic@magenta.dachary.org) has joined #ceph
[8:14] * yanzheng (~zhyan@134.134.139.76) Quit (Remote host closed the connection)
[8:24] * yanzheng (~zhyan@jfdmzpr05-ext.jf.intel.com) has joined #ceph
[8:26] * gregaf1 (~Adium@cpe-76-174-249-52.socal.res.rr.com) Quit (Quit: Leaving.)
[8:29] * topro (~quassel@46.115.25.194) has joined #ceph
[8:47] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[8:49] <topro> hi, I'm trying to figure out whether ceph might be the right choice for my storage needs by doing some testing. a question that I got from that is how data replication placement is done as that would have a big impact on hardware arrangement to get a good compromise of best performance and best disk-/host-failure-tolerance
[8:50] <topro> as it's stated on several ceph resources, one should run 1 osd per storage disk per host...
[8:50] <topro> if one of my constraints is to have exactly 3 hosts, I would run one mds and one mon per host and one osd per storage spindle per host
[8:51] <topro> I think I would need about 9 spindles in total, three disks/osds per host, to satisfy io-performance. that would mean that if 1 host dies I would lose 3 of 9 osds at a time.
[8:52] <topro> how can ceph handle such a situation? what design/configuration decisions to take into account?
[9:08] * contrl (~Nrg3tik@78.25.73.250) has joined #ceph
[9:08] * ctrl (~Nrg3tik@78.25.73.250) Quit (Read error: Connection reset by peer)
[9:27] * xiaoxi (~xiaoxiche@134.134.137.73) Quit (Remote host closed the connection)
[9:29] * tnt (~tnt@48.29-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[9:29] <iggy> topro: depends on replication settings
[9:34] <iggy> as long as you have enough available space to rebalance the data and have the required replication, it would continue fine
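A quick sanity check of that caveat for the 3-host, 9-OSD layout described above (all numbers hypothetical): after a host failure the cluster can only restore full replication if usage was below the degraded usable capacity.

    # Will the survivors have room to re-replicate after one host fails?
    hosts, osds_per_host, osd_tb, replicas = 3, 3, 2.0, 2

    healthy_usable = hosts * osds_per_host * osd_tb / replicas          # 9.0 TB
    degraded_usable = (hosts - 1) * osds_per_host * osd_tb / replicas   # 6.0 TB
    print("usable healthy: %.1f TB, after losing a host: %.1f TB"
          % (healthy_usable, degraded_usable))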
[9:47] * xiaoxi (~xiaoxiche@134.134.137.73) has joined #ceph
[9:59] <topro> iggy: thats what I didn't hope to learn ;)
[10:00] <topro> so generally rep size should be higher than max. osds per host to prevent data loss when a host dies, right?
[10:02] * sukiyaki (~Tecca@114.91.114.121) has joined #ceph
[10:02] <topro> does crush respect which osd runs on which host to try to distribute replicas on different hosts or does it simply replicate on random osds, no matter what host they are on?
[10:02] * loicd (~loic@163.5.222.250) has joined #ceph
[10:03] * Karcaw (~evan@68-186-68-219.dhcp.knwc.wa.charter.com) Quit (Read error: Connection reset by peer)
[10:04] * loicd (~loic@163.5.222.250) Quit ()
[10:04] * Karcaw (~evan@68-186-68-219.dhcp.knwc.wa.charter.com) has joined #ceph
[10:04] <NaioN> topro: that's encoded in the crushmap
[10:06] <NaioN> in the crushmap you can define the weights of the osds (weight means the chance it has to get data) and you can make a hierarchy
[10:07] <NaioN> and with that hierarchy you can define that replicas have to reside in different parts of the tree
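A toy illustration of what that hierarchy buys you. The real mechanism is a CRUSH rule step such as "step chooseleaf firstn 0 type host"; the code below only mimics the "replicas on distinct hosts" constraint with a hypothetical 3-host, 9-OSD layout.

    # Toy stand-in for host-aware replica placement; real Ceph uses CRUSH.
    import hashlib

    OSD_HOST = {"osd.%d" % i: "host%d" % (i // 3) for i in range(9)}   # 3 hosts, 3 osds each

    def place(pg_id, replicas=2):
        # rank osds pseudo-randomly per pg, then keep at most one per host
        ranked = sorted(OSD_HOST,
                        key=lambda o: hashlib.md5(("%s:%s" % (pg_id, o)).encode()).hexdigest())
        chosen, used_hosts = [], set()
        for osd in ranked:
            host = OSD_HOST[osd]
            if host not in used_hosts:
                chosen.append(osd)
                used_hosts.add(host)
            if len(chosen) == replicas:
                break
        return chosen

    for pg in range(4):
        print("pg", pg, "->", place(pg))   # each pg's replicas land on different hosts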
[10:07] <topro> NaioN: ah, ok. that really sounds interesting
[10:07] <topro> gotta leave for the moment, bb
[10:10] * xiaoxi (~xiaoxiche@134.134.137.73) Quit (Ping timeout: 480 seconds)
[10:12] * contrl (~Nrg3tik@78.25.73.250) Quit (Ping timeout: 480 seconds)
[10:24] * sukiyaki (~Tecca@114.91.114.121) Quit (Ping timeout: 480 seconds)
[10:38] * loicd (~loic@163.5.222.250) has joined #ceph
[10:46] * loicd1 (~loic@163.5.222.250) has joined #ceph
[10:46] * loicd (~loic@163.5.222.250) Quit (Write error: connection closed)
[11:00] * loicd1 (~loic@163.5.222.250) Quit (Quit: Leaving.)
[11:06] * Kioob (~kioob@luuna.daevel.fr) has joined #ceph
[11:06] <Kioob> Hi !
[11:07] <Kioob> I didn't find a lot of info on the wiki about Btrfs vs XFS vs ext4, for OSD.
[11:07] <Kioob> some time ago, there was a performance problem with OSD over ext4, no ?
[11:08] * loicd (~loic@163.5.222.250) has joined #ceph
[11:08] <Kioob> Since Btrfs is not stable, I'm not sure to want to use it...
[11:08] <Kioob> (I have a lot of problems with Btrfs on my backup systems, with kernel 3.6.x too)
[11:09] <Kioob> oh, I found the "good" page on the wiki : http://ceph.com/docs/master/rados/configuration/filesystem-recommendations/
[11:10] <Kioob> thanks :D
[11:10] * loicd (~loic@163.5.222.250) Quit ()
[11:17] * yanzheng (~zhyan@jfdmzpr05-ext.jf.intel.com) Quit (Remote host closed the connection)
[11:17] * verwilst (~verwilst@dD5769628.access.telenet.be) has joined #ceph
[11:19] <Kioob> so XFS, it's ok
[11:20] * loicd (~loic@163.5.222.250) has joined #ceph
[11:28] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) has joined #ceph
[11:34] * loicd (~loic@163.5.222.250) Quit (Ping timeout: 480 seconds)
[12:06] * loicd (~loic@163.5.222.250) has joined #ceph
[12:09] * loicd (~loic@163.5.222.250) Quit ()
[12:11] * gucki (~smuxi@80-218-32-183.dclient.hispeed.ch) has joined #ceph
[12:21] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) Quit (Ping timeout: 480 seconds)
[12:26] * maxiz (~pfliu@221.223.237.201) has joined #ceph
[12:45] * sukiyaki (~Tecca@199.241.203.104) has joined #ceph
[13:35] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) Quit (Remote host closed the connection)
[13:36] * sukiyaki (~Tecca@199.241.203.104) Quit (Ping timeout: 480 seconds)
[13:38] * deepsa (~deepsa@122.172.214.44) has joined #ceph
[13:46] * loicd (~loic@163.5.222.250) has joined #ceph
[13:51] * sukiyaki (~Tecca@199.241.203.65) has joined #ceph
[13:53] * loicd (~loic@163.5.222.250) Quit (Quit: Leaving.)
[13:58] * gaveen (~gaveen@112.134.113.147) Quit (Ping timeout: 480 seconds)
[14:12] * sukiyaki (~Tecca@199.241.203.65) Quit (Ping timeout: 480 seconds)
[14:18] * topro (~quassel@46.115.25.194) Quit (Ping timeout: 480 seconds)
[14:38] * deepsa_ (~deepsa@115.184.51.77) has joined #ceph
[14:39] * deepsa (~deepsa@122.172.214.44) Quit (Ping timeout: 480 seconds)
[14:39] * deepsa_ is now known as deepsa
[14:56] * plut0 (~cory@pool-96-236-43-69.albyny.fios.verizon.net) has joined #ceph
[14:57] <plut0> what are people using for hard drives? sas? nl-sas? sata?
[15:07] * verwilst (~verwilst@dD5769628.access.telenet.be) Quit (Quit: Ex-Chat)
[15:13] * yanzheng (~zhyan@134.134.139.76) has joined #ceph
[15:21] * gucki (~smuxi@80-218-32-183.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[15:28] * illuminatis (~illuminat@89-76-193-235.dynamic.chello.pl) has joined #ceph
[15:30] * maxiz (~pfliu@221.223.237.201) Quit (Ping timeout: 480 seconds)
[15:31] * loicd (~loic@163.5.222.250) has joined #ceph
[15:31] * deepsa_ (~deepsa@122.172.213.104) has joined #ceph
[15:36] * deepsa (~deepsa@115.184.51.77) Quit (Ping timeout: 480 seconds)
[15:36] * deepsa_ is now known as deepsa
[15:38] * illuminatis (~illuminat@89-76-193-235.dynamic.chello.pl) Quit (Quit: WeeChat 0.3.9.2)
[16:01] * gregaf1 (~Adium@cpe-76-174-249-52.socal.res.rr.com) has joined #ceph
[16:19] * gregaf1 (~Adium@cpe-76-174-249-52.socal.res.rr.com) Quit (Quit: Leaving.)
[16:21] * Kioob (~kioob@luuna.daevel.fr) Quit (Ping timeout: 480 seconds)
[16:22] * sbadia (~sbadia@yasaw.net) has joined #ceph
[16:27] * illuminatis (~illuminat@89-76-193-235.dynamic.chello.pl) has joined #ceph
[17:00] * madkiss1 (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[17:00] * loicd (~loic@163.5.222.250) Quit (Quit: Leaving.)
[17:05] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[17:15] * gucki (~smuxi@80-218-32-183.dclient.hispeed.ch) has joined #ceph
[17:27] * gregaf1 (~Adium@cpe-76-174-249-52.socal.res.rr.com) has joined #ceph
[17:46] * gaveen (~gaveen@112.134.112.49) has joined #ceph
[18:11] * yanzheng (~zhyan@134.134.139.76) Quit (Remote host closed the connection)
[18:18] * nhm (~nh@184-97-251-146.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[18:21] * yanzheng (~zhyan@134.134.139.74) has joined #ceph
[18:38] * gregaf1 (~Adium@cpe-76-174-249-52.socal.res.rr.com) Quit (Quit: Leaving.)
[18:46] * KindOne (KindOne@h4.176.130.174.dynamic.ip.windstream.net) Quit (Read error: Connection reset by peer)
[18:49] * KindOne (KindOne@h4.176.130.174.dynamic.ip.windstream.net) has joined #ceph
[19:02] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) has joined #ceph
[19:27] <iggy> plut0: ceph was designed around running on cheap hardware... so any of those should work
[19:47] * yanzheng (~zhyan@134.134.139.74) Quit (Remote host closed the connection)
[20:03] * yanzheng (~zhyan@134.134.139.74) has joined #ceph
[20:06] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[20:10] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit ()
[20:26] * madkiss1 (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Quit: Leaving.)
[20:37] * yanzheng (~zhyan@134.134.139.74) Quit (Remote host closed the connection)
[21:03] * danieagle (~Daniel@186.214.60.197) has joined #ceph
[21:19] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (Quit: Leaving)
[21:21] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[21:21] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[21:21] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[21:23] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit ()
[21:29] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) Quit (Remote host closed the connection)
[21:47] * Kioob (~kioob@luuna.daevel.fr) has joined #ceph
[22:16] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[22:16] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit ()
[22:45] * loicd (~loic@magenta.dachary.org) has joined #ceph
[22:45] * loicd (~loic@magenta.dachary.org) Quit ()
[22:46] * loicd (~loic@2a01:e35:2eba:db10:88e2:8f6e:7515:3e8f) has joined #ceph
[22:47] * loicd (~loic@2a01:e35:2eba:db10:88e2:8f6e:7515:3e8f) Quit ()
[22:52] * loicd (~loic@magenta.dachary.org) has joined #ceph
[23:01] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[23:13] * loicd (~loic@magenta.dachary.org) has joined #ceph
[23:48] * tpeb (~root@dan75-10-83-157-22-200.fbx.proxad.net) has joined #ceph
[23:48] <tpeb> hi guys !
[23:49] <tpeb> I have a little question about ceph !
[23:49] <tpeb> I'm testing it for openstack
[23:49] <tpeb> in a test infra
[23:50] <tpeb> I have all ceph daemons running on a server
[23:51] <tpeb> I want to split the data location on two raid (one fast, the other is slow)
[23:51] <tpeb> the first for block storage, the second as a fs
[23:51] <tpeb> how can I do that ?
[23:54] * gucki (~smuxi@80-218-32-183.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.