#ceph IRC Log


IRC Log for 2013-07-21

Timestamps are in GMT/BST.

[0:02] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:17] * sleinen1 (~Adium@2001:620:0:25:c585:5ee2:52e1:36ad) Quit (Quit: Leaving.)
[0:17] * rudolfsteiner (~federicon@220-122-245-190.fibertel.com.ar) has joined #ceph
[0:23] * rudolfsteiner (~federicon@220-122-245-190.fibertel.com.ar) Quit (Quit: rudolfsteiner)
[0:25] * DaChun (~quassel@222.76.57.24) has joined #ceph
[0:26] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[0:26] * Meths_ is now known as Meths
[0:27] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit ()
[0:29] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[0:40] * DaChun (~quassel@222.76.57.24) Quit (Read error: Connection reset by peer)
[0:47] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Leaving...)
[0:55] * rudolfsteiner (~federicon@220-122-245-190.fibertel.com.ar) has joined #ceph
[0:56] * LeaChim (~LeaChim@90.210.148.5) Quit (Ping timeout: 480 seconds)
[0:59] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[1:08] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[1:09] * tremendous (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[1:10] * testarossa (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[1:10] * tremendous (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[1:10] * tremendous (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[1:10] * testarossa (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[1:11] * testarossa (~xmltok@relay.els4.ticketmaster.com) has joined #ceph
[1:11] * tremendous (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[1:14] * mozg (~andrei@host109-151-35-94.range109-151.btcentralplus.com) has joined #ceph
[1:14] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[1:16] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[1:18] <mozg> hello guys
[1:18] <mozg> has anyone used xenserver with ceph/rbd support?
[1:18] <mozg> any results/benchmarks/feedback?
[1:20] * rudolfsteiner (~federicon@220-122-245-190.fibertel.com.ar) Quit (Quit: rudolfsteiner)
[1:22] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[1:51] * rudolfsteiner (~federicon@220-122-245-190.fibertel.com.ar) has joined #ceph
[1:53] * MooingLemur (~troy@phx-pnap.pinchaser.com) Quit (Remote host closed the connection)
[1:55] * jakes (~oftc-webi@128-107-239-233.cisco.com) has joined #ceph
[1:59] * rudolfsteiner (~federicon@220-122-245-190.fibertel.com.ar) Quit (Quit: rudolfsteiner)
[2:10] * rudolfsteiner (~federicon@220-122-245-190.fibertel.com.ar) has joined #ceph
[2:10] * rudolfsteiner (~federicon@220-122-245-190.fibertel.com.ar) Quit ()
[2:15] * MooingLemur (~troy@phx-pnap.pinchaser.com) has joined #ceph
[2:21] * diegows (~diegows@190.190.2.126) has joined #ceph
[2:28] * testarossa (~xmltok@relay.els4.ticketmaster.com) Quit (Ping timeout: 480 seconds)
[2:31] * MooingLemur (~troy@phx-pnap.pinchaser.com) Quit (Remote host closed the connection)
[2:31] * MooingLemur (~troy@phx-pnap.pinchaser.com) has joined #ceph
[2:37] * rudolfsteiner (~federicon@220-122-245-190.fibertel.com.ar) has joined #ceph
[2:41] * sprachgenerator (~sprachgen@c-50-141-192-36.hsd1.il.comcast.net) has joined #ceph
[2:45] * jakes (~oftc-webi@128-107-239-233.cisco.com) Quit (Quit: Page closed)
[2:53] * rudolfsteiner (~federicon@220-122-245-190.fibertel.com.ar) Quit (Quit: rudolfsteiner)
[3:00] * waxzce (~waxzce@2a01:e35:2e1e:260:155e:8a04:8c10:cb57) Quit (Read error: Connection reset by peer)
[3:01] * waxzce (~waxzce@2a01:e35:2e1e:260:7dcb:1273:5f81:fc73) has joined #ceph
[3:02] * Garen (~garen@69.76.17.207) has joined #ceph
[3:23] * BillK (~BillK-OFT@203-214-147-30.perm.iinet.net.au) Quit (Ping timeout: 480 seconds)
[3:47] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) has joined #ceph
[3:50] * sprachgenerator (~sprachgen@c-50-141-192-36.hsd1.il.comcast.net) Quit (Quit: sprachgenerator)
[4:11] * diegows (~diegows@190.190.2.126) Quit (Ping timeout: 480 seconds)
[4:17] * stxShadow (~Jens@ip-88-152-161-249.unitymediagroup.de) has joined #ceph
[4:32] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[4:40] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[4:49] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[4:52] * mozg (~andrei@host109-151-35-94.range109-151.btcentralplus.com) Quit (Read error: Operation timed out)
[4:58] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[5:06] * fireD_ (~fireD@93-142-209-141.adsl.net.t-com.hr) has joined #ceph
[5:07] * fireD (~fireD@93-139-147-194.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[5:14] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[5:20] * jjgalvez1 (~jjgalvez@ip72-193-215-88.lv.lv.cox.net) has joined #ceph
[5:20] * jjgalvez (~jjgalvez@ip72-193-215-88.lv.lv.cox.net) Quit (Read error: Connection reset by peer)
[5:33] * DaChun (~quassel@222.76.57.24) has joined #ceph
[5:37] * stxShadow1 (~Jens@jump.filoo.de) has joined #ceph
[5:42] * stxShadow (~Jens@ip-88-152-161-249.unitymediagroup.de) Quit (Ping timeout: 480 seconds)
[6:34] * BillK (~BillK-OFT@203-214-147-30.perm.iinet.net.au) has joined #ceph
[6:56] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[7:01] * huangjun (~huangjun@221.234.36.134) has joined #ceph
[7:02] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[7:05] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) has joined #ceph
[7:07] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[7:08] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) has joined #ceph
[7:08] * stxShadow1 (~Jens@jump.filoo.de) Quit (Read error: Connection reset by peer)
[7:10] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[7:32] * DaChun (~quassel@222.76.57.24) Quit (Read error: Connection reset by peer)
[7:46] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[7:55] * smiley (~smiley@c-71-200-71-128.hsd1.md.comcast.net) Quit (Quit: smiley)
[7:59] * smiley (~smiley@c-71-200-71-128.hsd1.md.comcast.net) has joined #ceph
[8:13] * matt__ (~matt@220-245-1-152.static.tpgi.com.au) has joined #ceph
[8:27] * KindTwo (KindOne@h61.211.89.75.dynamic.ip.windstream.net) has joined #ceph
[8:29] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[8:29] * KindTwo is now known as KindOne
[8:39] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) has joined #ceph
[9:40] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[9:44] <huangjun> if i set a parameter dynamically, using "ceph osd tell 0 injectargs '--debug_osd 20'", but then get the settings by using "ceph-osd -i 0 --show-config | grep debug_osd", it shows 'debug_osd 5'?
[9:48] * smiley (~smiley@c-71-200-71-128.hsd1.md.comcast.net) Quit (Quit: smiley)
[9:52] * Garen (~garen@69.76.17.207) has left #ceph
[9:53] * KindTwo (KindOne@50.96.82.87) has joined #ceph
[9:55] * infinitytrapdoor (~infinityt@p5DDD72A1.dip0.t-ipconnect.de) has joined #ceph
[9:55] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[9:55] * KindTwo is now known as KindOne
[9:55] * madkiss (~madkiss@089144192063.atnat0001.highway.a1.net) has joined #ceph
[10:12] * BillK (~BillK-OFT@203-214-147-30.perm.iinet.net.au) Quit (Ping timeout: 480 seconds)
[10:38] * infinitytrapdoor (~infinityt@p5DDD72A1.dip0.t-ipconnect.de) Quit ()
[10:43] * madkiss (~madkiss@089144192063.atnat0001.highway.a1.net) Quit (Ping timeout: 480 seconds)
[10:46] * infinitytrapdoor (~infinityt@p5DDD72A1.dip0.t-ipconnect.de) has joined #ceph
[10:53] <Gugge-47527> huangjun: try '--debug-osd 20'
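A likely explanation for huangjun's question above, with a minimal sketch of how to verify the injected value: "ceph-osd -i 0 --show-config" starts a fresh process and prints the configuration it reads from ceph.conf plus the built-in defaults, so it will not reflect a value injected into the already-running daemon. Querying the running osd through its admin socket should show the change; the socket path below assumes the default location and a cluster named "ceph".

    # inject the runtime change (dashes and underscores are interchangeable)
    ceph osd tell 0 injectargs '--debug-osd 20'
    # ask the running daemon, not a fresh process, what its current value is
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep debug_osd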
[11:00] * AfC (~andrew@2001:44b8:31cb:d400:b1be:84ef:3a2a:984f) has joined #ceph
[11:17] * infinitytrapdoor (~infinityt@p5DDD72A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[11:42] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[12:07] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[12:10] * sleinen1 (~Adium@2001:620:0:26:dd4c:f155:afbe:7232) has joined #ceph
[12:10] * Machske (~Bram@d5152D87C.static.telenet.be) Quit (Read error: Connection reset by peer)
[12:11] * Machske (~Bram@d5152D87C.static.telenet.be) has joined #ceph
[12:15] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[12:16] * madkiss (~madkiss@089144192063.atnat0001.highway.a1.net) has joined #ceph
[12:23] * haomaiwang (~haomaiwan@117.79.232.209) Quit (Ping timeout: 480 seconds)
[12:27] * haomaiwang (~haomaiwan@117.79.232.171) has joined #ceph
[12:50] * leafs32 (~nowhat@50.7.1.114) has joined #ceph
[12:53] * leafs32 (~nowhat@50.7.1.114) Quit ()
[13:02] * madkiss (~madkiss@089144192063.atnat0001.highway.a1.net) Quit (Ping timeout: 480 seconds)
[13:50] * lautriv (~lautriv@f050082113.adsl.alicedsl.de) has joined #ceph
[13:55] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) has joined #ceph
[14:17] <lautriv> could anyone link me to a proper howto for ceph? the one on the homepage using ceph-deploy craps out for no reason :(
[14:21] * smiley (~smiley@c-71-200-71-128.hsd1.md.comcast.net) has joined #ceph
[14:27] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) has joined #ceph
[14:39] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[14:41] * madkiss (~madkiss@178.188.60.118) has joined #ceph
[15:12] <lautriv> ... another dead channel ?
[15:13] <Gugge-47527> nahh
[15:14] <Gugge-47527> but asking about something thats available on ceph.com is not interesting enough for anyone to answer :)
[15:15] <Gugge-47527> http://ceph.com/docs/master/start/quick-start/
[15:15] <joao> even more on a Sunday
[15:15] <Gugge-47527> sunday is quite dead :)
[15:18] <lautriv> Gugge-47527, i was there and followed it step by step, but it never comes back from creating the mds. right now i checked what i have, and it seems that ceph-deploy did not even do half of it.
[15:20] <Gugge-47527> ceph-deploy is not perfect yet, i usually run ceph status to check after each step :)
[15:21] <Gugge-47527> but i would expect it to work fine for the simple quick-start guide
[15:21] <Gugge-47527> actually the quick-start guide does not even use ceph-deploy i see :)
[15:22] <Gugge-47527> so if you use the quick-start, dont use ceph-deploy i guess :)
[15:22] <lautriv> or rather if the tool is crap, the cluster is probably too :P
[15:22] <Gugge-47527> if you do use ceph-deploy, your config file wont contain much info
[15:23] <Gugge-47527> and there i lost all interest in helping :)
[15:25] <lautriv> i'll just use the almighty "purge" and look for a sane implementation of a distributed FS, wish you much luck with this one ...
[15:25] <joao> I smell trolls
[15:26] <joao> and not even good ones at it
[15:26] <Gugge-47527> judge the system by the new "unfinished" deployment tool :)
[15:26] <lautriv> joao, i'm no troll, just disappointed by sites that can't keep the howto at the same level as their versions.
[15:27] <joao> the howto works perfectly well as long as you're running a recent version
[15:27] <joao> might not work 100% on argonaut though
[15:27] <Gugge-47527> the quick-start guide just doesn't use ceph-deploy
[15:28] <joao> but here's the thing
[15:28] <lautriv> i assume 0.61.5 is not outdated.
[15:29] <joao> if you run into trouble, considering this is a fast-paced project whose docs have trouble keeping up, imo the proper way to present your issues would be to point out where you hit a wall and ask for help
[15:29] <joao> not just slam the project
[15:29] <Gugge-47527> and dont mix a ceph-deploy setup with a non ceph-deploy setup :)
[15:29] <lautriv> joao, the howto said, other methods are deprecated.
[15:30] <joao> besides, you say you want sane, but I bet you haven't read the docs beyond the deployment phase
[15:31] <joao> if you had, you probably would have considered the architecture sane enough
[15:31] <joao> (to justify a couple of hiccups along the way)
[15:31] <joao> (or wouldn't consider the architecture sane enough to even have the trouble)
[15:32] * diegows (~diegows@190.190.2.126) has joined #ceph
[15:32] <Gugge-47527> lautriv: which ceph-deploy howto did you follow?
[15:32] <joao> anyway, yeah, other methods are deprecated; sometimes, however, you mess up a command in some way and ceph-deploy goes to shit; tell us what you did, what you followed and what you see, and we may be able to help
[15:32] <lautriv> Gugge-47527, some page from ceph.com.
[15:32] <joao> doing that on a monday instead could help too
[15:33] <joao> well, lunch time
[15:35] <lautriv> joao, i already did the MON, the OSDs, zapped the disks and waited forever for the MDS to finish. then i had a look and everything related to the setup is missing, but i have a key for the mds and a touched "done".......whatever failed, it should not have come back and told me "OK".
[15:38] <lautriv> Gugge-47527, about the "sane" part, i was already on lustre-FS, which worked well, but after Oracle acquired Sun it is no longer an option. also, recent NFS tends to kill data and freeze servers, so it is not really as stable as it was in the past.
[15:39] * markit (~marco@88-149-177-66.v4.ngi.it) has joined #ceph
[15:41] <Gugge-47527> you know cephfs is not considered stable yet right?
[15:41] <Gugge-47527> (the only part requiring the mds)
[15:41] <lautriv> Gugge-47527, some things are never considered stable and still work better than those that are
[15:42] <Gugge-47527> sure :)
[15:42] <lautriv> NFS and ext4 are considered stable and both have issues.
[15:42] <lautriv> even the latest changes to XFS cause headaches.
[15:43] <lautriv> d
[15:44] <Gugge-47527> are your mons and osds running, and is the ceph status HEALTH_OK?
[15:44] <Gugge-47527> if yes, did you check the mds log to see why it won't run?
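A short sketch of the checks Gugge-47527 suggests here; the log path assumes the default locations and that the mds id is the hostname, which may differ on other setups.

    # cluster-wide state; HEALTH_OK means mons, osds and placement groups are fine
    ceph status
    ceph health detail
    # if the mds never becomes active, its log usually says why
    less /var/log/ceph/ceph-mds.$(hostname).log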
[15:44] <lautriv> Gugge-47527, so assuming that ceph-deploy is just unusable, what way will work for a simple test setup of 2 OSDs over a private switch?
[15:45] <Gugge-47527> both ceph-deploy and the quick-start method should work fine
[15:46] <Gugge-47527> but if ceph-deploy does not work for you, use the other way :)
[15:46] <lautriv> __should__ was already proved wrong.
[15:47] <Gugge-47527> not really
[15:47] <Gugge-47527> it will work in most cases, but something must be special in your setup
[15:48] <lautriv> one may assume it's the private switch but that is proper and tested.
[15:49] <lautriv> also, i found on the site for "deploy" that Ceph defaults to XFS, so the quickstart will use ext4 instead?
[15:50] <Gugge-47527> yes?
[15:50] <Gugge-47527> both work, ceph-deploy defaults to XFS
[15:50] <Gugge-47527> but can be told to use EXT4 too
[15:51] <lautriv> so i set the omap to false ?
[15:52] <Gugge-47527> if you want to use the fs xattr yes
[15:53] <Gugge-47527> true will work for all filesystems
[15:53] <Gugge-47527> false wont work for ext4
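The setting being discussed appears to be "filestore xattr use omap"; a minimal ceph.conf sketch, assuming the OSD data disks are formatted with ext4, whose extended-attribute space is too small for ceph's metadata unless the omap fallback is enabled:

    [osd]
        ; needed for ext4-backed osds; also safe (if a bit slower) on xfs or btrfs
        filestore xattr use omap = true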
[15:53] <lautriv> since ext4 killed all data on 2.6.31 it is no longer a choice.
[15:54] <Gugge-47527> if you rule out all systems that made a fatal error at some point in time, nothing is an option :)
[15:55] <Gugge-47527> but hey, i use XFS for my OSD filesystem too
[15:55] <lautriv> i rule them out if they crap out while being called stable and recommended, however ext4 never reached XFS performance.
[15:56] <lautriv> hmm, kbd asking for new batteries :(
[15:57] <markit> Gugge-47527: I've not found a way to tell ceph-deploy to use anything other than XFS, did you? man ceph-deploy or ceph-deploy --help (I can't reach my test ceph installation now) don't show any useful parameter
[15:58] <markit> (wanted to test btrfs)
[15:59] <lautriv> Gugge-47527, ok, just one question which is unclear, then i will see how far the non-deploy way might go ............. the sections [osd.0], [osd.1] and so forth: do they all count toward the same pool, or do i need several host/drive entries below one of them?
[15:59] <Gugge-47527> osd.0 is the settings for the osd with id 0
[15:59] <Gugge-47527> you have a bunch of osd's
[15:59] <lautriv> markit, i would not test btrfs on anything via network.
[15:59] <Gugge-47527> and they form a cluster
[15:59] <Gugge-47527> in that cluster you can create pools
[16:00] <markit> lautriv: why "via network" makes the difference?
[16:00] * eternaleye (~eternaley@2002:3284:29cb::1) Quit (Ping timeout: 480 seconds)
[16:00] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) Quit (Remote host closed the connection)
[16:01] <Gugge-47527> markit: i dont know, i only remember ceph-deploy calling ceph-disk, and ceph-disk having options for ext4 and btrfs inside it too :)
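Since ceph-deploy drives ceph-disk under the hood, one hedged way to get a non-XFS osd filesystem is to call ceph-disk directly; the flag below existed in ceph-disk of that era, but whether the ceph-deploy wrapper markit is running exposes it is uncertain, so verify with --help first. The device path is hypothetical.

    # prepare an osd data disk with btrfs instead of the xfs default
    ceph-disk prepare --fs-type btrfs /dev/sdb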
[16:01] <lautriv> markit, not the network itself but the locking and meta-handling. since btrfs is a COW-FS it'll confuse a bunch.
[16:02] <markit> lautriv: it should be the preferred Ceph fs, they use XFS only until btrfs is ready enough (and why is it not yet? mystery...)
[16:02] <Gugge-47527> follow the btrfs list a couple of days, and you will see how much is still changed all the time in that fs :)
[16:03] <Gugge-47527> its still new and missing a bit :)
[16:03] * eternaleye (~eternaley@2002:3284:29cb::1) has joined #ceph
[16:04] <lautriv> markit, try to export some btrfs via NFS and you will see ..... distribution does not make it any easier, so ceph must plan for / have some mechanism to handle that internally.
[16:04] <markit> Gugge-47527: ok, so it will be ready when no one will care anymore? With reiserfs4 we missed a great opportunity some years ago...
[16:05] <Gugge-47527> i dont know if btrfs will ever be stable enough for me :)
[16:05] <markit> lautriv: oh, I'm using ceph as RBD, so probably is just used "locally" in the ODS
[16:05] <Gugge-47527> but i hope :)
[16:08] <lautriv> Gugge-47527, i'm missing a part about the clustering/pool thingie. the test will have 2 same-sized OSDs, but i may add some much larger OSDs later on; they should not disturb the running FS but add more space somewhere else. do i need some [pool.0] in the initial config to prevent such behaviour?
[16:15] <Gugge-47527> when you add another osd
[16:15] <Gugge-47527> data will be redistributed according to the new crushmap
[16:16] <Gugge-47527> you should watch some of the webinars on inktank :)
[16:18] <lautriv> i prefer to read but having insufficient input makes it harder.
[16:19] <Gugge-47527> some of the webinar videos explain a lot :)
[16:19] <Gugge-47527> all of the info is in the documentation too though
[16:20] <Gugge-47527> http://ceph.com/docs/master/rados/operations/crush-map/
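A hedged sketch of the behaviour described above, using command forms from the docs of that era; the pool name, weight and host are hypothetical. Keeping larger disks from affecting an existing pool would need a dedicated CRUSH rule or root, which is what the linked crush-map page covers.

    # pools are logical and span the cluster; create one with a placement-group count
    ceph osd pool create testpool 128
    # add a new, larger osd into the crush hierarchy; data then rebalances
    # according to the updated crush map
    ceph osd crush add osd.2 2.0 host=server3
    # watch the rebalance happen
    ceph -w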
[16:26] <lautriv> assuming i have one interface facing the clients and one facing the nodes, the mon addr should be on the cluster side, i guess?
[16:26] * smiley (~smiley@c-71-200-71-128.hsd1.md.comcast.net) Quit (Quit: smiley)
[16:26] <Gugge-47527> no
[16:26] <Gugge-47527> the clients need access to the mons
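A minimal ceph.conf sketch of the split being discussed: monitors sit on the client-facing (public) network, with a separate cluster network carrying osd replication traffic. The subnets, host and address are hypothetical.

    [global]
        ; clients and mons talk here
        public network  = 192.168.1.0/24
        ; osd replication and heartbeat traffic only
        cluster network = 10.0.0.0/24

    [mon.a]
        host     = node1
        mon addr = 192.168.1.10:6789   ; public side, reachable by clients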
[16:27] <lautriv> ok
[16:33] <lautriv> does this look ok --> http://pastebin.com/fvvZxNnN
[16:36] <Gugge-47527> yes
[16:39] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[16:48] * smiley (~smiley@c-71-200-71-128.hsd1.md.comcast.net) has joined #ceph
[16:48] * madkiss (~madkiss@178.188.60.118) Quit (Quit: Leaving.)
[16:56] * diegows (~diegows@190.190.2.126) Quit (Read error: Operation timed out)
[16:57] * leseb (~Adium@bea13-1-82-228-104-16.fbx.proxad.net) has joined #ceph
[17:02] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[17:15] * madkiss (~madkiss@178.188.60.118) has joined #ceph
[17:16] * AfC (~andrew@2001:44b8:31cb:d400:b1be:84ef:3a2a:984f) Quit (Quit: Leaving.)
[17:19] * madkiss (~madkiss@178.188.60.118) Quit ()
[17:20] <huangjun> Good evening!
[17:27] * huangjun (~huangjun@221.234.36.134) Quit (Quit: HydraIRC -> http://www.hydrairc.com <- Nine out of ten l33t h4x0rz prefer it)
[17:37] * leseb (~Adium@bea13-1-82-228-104-16.fbx.proxad.net) Quit (Quit: Leaving.)
[17:43] * madkiss (~madkiss@178.188.60.118) has joined #ceph
[17:50] * madkiss (~madkiss@178.188.60.118) Quit (Quit: Leaving.)
[17:53] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Leaving...)
[17:54] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[18:05] * markit (~marco@88-149-177-66.v4.ngi.it) Quit ()
[18:09] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Leaving...)
[18:11] * jmlowe1 (~Adium@2601:d:a800:97:c5bd:db07:ec9a:3a90) has joined #ceph
[18:16] * jmlowe (~Adium@c-98-223-198-138.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[18:20] * allsystemsarego (~allsystem@188.27.167.90) has joined #ceph
[18:25] * madkiss (~madkiss@178.188.60.118) has joined #ceph
[18:33] * madkiss (~madkiss@178.188.60.118) Quit (Quit: Leaving.)
[18:37] * drokita (~drokita@97-92-254-72.dhcp.stls.mo.charter.com) has joined #ceph
[18:51] * jmlowe1 (~Adium@2601:d:a800:97:c5bd:db07:ec9a:3a90) has left #ceph
[18:58] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[19:03] * matt__ (~matt@220-245-1-152.static.tpgi.com.au) Quit (Ping timeout: 480 seconds)
[19:15] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Remote host closed the connection)
[19:15] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[19:46] <lautriv> what is this error about ulimit -n 8192? ulimit is from the stone age.
[19:50] <Gugge-47527> so is a lot of the stuff we still use :)
[19:51] <Gugge-47527> and i dont think anyone can guess what "this error about ulimit" is :)
[19:57] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Remote host closed the connection)
[19:57] * xmltok (~xmltok@pool101.bizrate.com) has joined #ceph
[20:01] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[20:03] <lautriv> Gugge-47527, i assume many see that on starting a cluster.
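As far as I can tell, the message lautriv is seeing comes from the sysvinit start script, which runs "ulimit -n" with the configured open-file limit before launching each daemon. A hedged ceph.conf sketch for raising it:

    [global]
        ; the init script applies this with "ulimit -n" before starting each daemon;
        ; raise it if the start-up output complains about the file-descriptor limit
        max open files = 131072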
[20:08] * sleinen1 (~Adium@2001:620:0:26:dd4c:f155:afbe:7232) Quit (Ping timeout: 480 seconds)
[20:09] * xmltok (~xmltok@pool101.bizrate.com) Quit (Quit: Bye!)
[20:09] * sleinen1 (~Adium@2001:620:0:26:14f3:2759:33e5:57db) has joined #ceph
[20:10] <darkfaded> lautriv: ok, then we'll wait for them
[20:10] <darkfaded> ;)
[20:16] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[20:19] <lautriv> darkfaded, i meant it more as a "well known" output, isn't it?
[20:23] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[20:30] <lautriv> ok, this leads to nothing but a waste of time, i will be back in a decade........or two. have fun ;)
[20:35] * diegows (~diegows@190.190.2.126) has joined #ceph
[20:49] <lautriv> finally i found out why ceph-deploy failed. even though ceph advertises a subnet split, ceph-deploy doesn't handle it properly.
[20:51] * madkiss (~madkiss@2001:6f8:12c3:f00f:929:73b3:76ea:52fe) has joined #ceph
[21:29] * fridudad (~oftc-webi@p4FC2DE23.dip0.t-ipconnect.de) has joined #ceph
[21:34] * yanzheng (~zhyan@jfdmzpr02-ext.jf.intel.com) has joined #ceph
[21:38] * madkiss1 (~madkiss@2001:6f8:12c3:f00f:a1d5:d5c1:87ff:8925) has joined #ceph
[21:38] * madkiss (~madkiss@2001:6f8:12c3:f00f:929:73b3:76ea:52fe) Quit (Ping timeout: 480 seconds)
[21:41] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[21:44] * fridudad (~oftc-webi@p4FC2DE23.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[21:45] * madkiss1 (~madkiss@2001:6f8:12c3:f00f:a1d5:d5c1:87ff:8925) Quit (Quit: Leaving.)
[21:48] * gregaf1 (~Adium@38.122.20.226) has joined #ceph
[21:56] * gregaf (~Adium@2607:f298:a:607:e44a:4714:6b0f:b2a7) Quit (Ping timeout: 480 seconds)
[22:05] * ntranger (~ntranger@c-98-228-58-167.hsd1.il.comcast.net) has joined #ceph
[22:06] <ntranger> I have a couple questions about ceph, and was wondering if anyone might be willing to help me out? I'm fairly new to this, so the questions are probably pretty easy
[22:09] * _robbat21irssi (nobody@www2.orbis-terrarum.net) Quit (Quit: leaving)
[22:10] * danieagle (~Daniel@177.99.135.10) has joined #ceph
[22:10] * _robbat2|irssi (nobody@www2.orbis-terrarum.net) has joined #ceph
[22:10] <ntranger> I have 3 different servers, each with 12 2TB drives and a 500GB OS drive. I'm curious as to how, without raiding the 12 drives, I would configure ceph to use them? I've been looking online, and either I'm overlooking it, or it's not documented.
[22:12] <mikedawson> ntranger: deploy one Ceph OSD process per drive
[22:13] * allsystemsarego (~allsystem@188.27.167.90) Quit (Quit: Leaving)
[22:14] <ntranger> so, for example, I would set [osd.0] host=server1 and then set multiple devs under that?
[22:15] * fireD_ is now known as fireD
[22:16] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[22:17] <mikedawson> ntranger: [osd.0] host=server1 .... [osd.1] host=server1 ... [osd.11] host=server1 ... [osd.12] host=server2 ... [osd.35] host=server6
[22:18] <mikedawson> s/server6/server3/
[22:18] <ntranger> ah! ok. That makes more sense. Thanks so much!
[22:20] <mikedawson> ntranger: at this point (3 servers), each should have a Ceph Monitor. If you scale to more servers, you can most likely stick to three monitors, but you may want to move them to dedicated servers if performance demands the change
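A minimal ceph.conf sketch of what mikedawson describes: one [osd.N] section per physical data drive and one monitor per server. Hostnames and device paths are hypothetical, and the "devs" key reflects the mkcephfs-style setups of that era.

    [mon.a]
        host = server1
    [mon.b]
        host = server2
    [mon.c]
        host = server3

    [osd.0]
        host = server1
        devs = /dev/sdb     ; first 2TB data drive
    [osd.1]
        host = server1
        devs = /dev/sdc
    ; ... continue through osd.11 on server1, osd.12-23 on server2, osd.24-35 on server3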
[22:22] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[22:24] <_robbat2|irssi> is it worthwhile putting dedicated monitors in VMs (and ensuring that the monitors aren't on the same physical host)?
[22:26] <ntranger> yeah, this is pretty much in its testing phase at this point. They are just going to be used as file dumps. they really aren't meant for crazy speed (at this point).
[22:26] <ntranger> Thanks so much for your help, Mike!
[22:29] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Read error: Operation timed out)
[22:42] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[22:46] * drokita (~drokita@97-92-254-72.dhcp.stls.mo.charter.com) Quit (Quit: Leaving.)
[22:49] * mschiff (~mschiff@port-33202.pppoe.wtnet.de) has joined #ceph
[22:52] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[23:04] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[23:05] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) has joined #ceph
[23:22] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[23:29] * yanzheng (~zhyan@jfdmzpr02-ext.jf.intel.com) Quit (Remote host closed the connection)
[23:34] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[23:43] * sleinen1 (~Adium@2001:620:0:26:14f3:2759:33e5:57db) Quit (Quit: Leaving.)
[23:46] * LeaChim (~LeaChim@90.210.148.5) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.