#ceph IRC Log

IRC Log for 2016-10-05

Timestamps are in GMT/BST.

[0:01] * dgurtner (~dgurtner@109.236.136.226) Quit (Ping timeout: 480 seconds)
[0:02] * aNuposic (~aNuposic@192.55.54.43) has joined #ceph
[0:07] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[0:14] * ntpttr_ (~ntpttr@192.55.54.42) Quit (Remote host closed the connection)
[0:15] * ntpttr_ (~ntpttr@192.55.54.42) has joined #ceph
[0:23] * valeech (~valeech@pool-96-247-203-33.clppva.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[0:23] * pinkypie (~AusGersw6@S010600212986d401.ed.shawcable.net) Quit ()
[0:24] * Kingrat (~shiny@2605:6000:1526:4063:61da:62:ac47:a080) Quit (Ping timeout: 480 seconds)
[0:24] * valeech (~valeech@pool-96-247-203-33.clppva.fios.verizon.net) has joined #ceph
[0:26] * Hemanth (~hkumar_@103.228.221.190) Quit (Ping timeout: 480 seconds)
[0:28] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[0:28] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[0:28] * mattbenjamin (~mbenjamin@12.118.3.106) Quit (Ping timeout: 480 seconds)
[0:29] * cathode (~cathode@50-232-215-114-static.hfc.comcastbusiness.net) has joined #ceph
[0:32] * andreww (~xarses@64.124.158.3) Quit (Ping timeout: 480 seconds)
[0:33] * bene2 (~bene@2601:193:4101:f410:ea2a:eaff:fe08:3c7a) Quit (Quit: Konversation terminated!)
[0:37] * Kingrat (~shiny@2605:6000:1526:4063:2c6d:6d69:9355:642c) has joined #ceph
[0:39] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[0:39] * andreww (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[0:46] * ira (~ira@12.118.3.106) Quit (Quit: Leaving)
[0:53] * colde1 (~anadrom@exit0.radia.tor-relays.net) has joined #ceph
[0:57] * sudocat1 (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[1:04] <lincolnb> gregsfortytwo: just to follow up from earlier, i cranked the mds cache up by a factor of 10, restarted the mds, and now i'm seeing about 30G utilized vs the 50g before. also 'failing to respond to cache pressure' stuff has gone away. for now, anyhow :)
[1:04] <gregsfortytwo> have to wait and see how it goes long-term ;)
[1:05] <lincolnb> ya
[1:05] <lincolnb> first time i've seen HEALTH_OK in a while!
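
lincolnb's change above corresponds to the pre-Luminous "mds cache size" option, which is counted in inodes (default 100000), not bytes. A minimal sketch of what a 10x bump might look like in ceph.conf; the exact value is illustrative only and costs RAM on the MDS host:

    [mds]
        mds cache size = 1000000    # default 100000 inodes; a larger cache trades MDS memory
                                    # for fewer "failing to respond to cache pressure" warnings

The MDS needs a restart (as lincolnb did) or an injectargs call for the new value to take effect.
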
[1:14] * cathode (~cathode@50-232-215-114-static.hfc.comcastbusiness.net) Quit (Quit: Leaving)
[1:14] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Quit: ...)
[1:14] * ledgr (~ledgr@88-222-11-185.meganet.lt) Quit (Remote host closed the connection)
[1:15] * KindOne (kindone@h229.169.16.98.dynamic.ip.windstream.net) has joined #ceph
[1:15] * ledgr (~ledgr@88-222-11-185.meganet.lt) has joined #ceph
[1:23] * colde1 (~anadrom@exit0.radia.tor-relays.net) Quit ()
[1:23] * ledgr (~ledgr@88-222-11-185.meganet.lt) Quit (Ping timeout: 480 seconds)
[1:25] * oms101 (~oms101@p20030057EA000200C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:29] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[1:33] * oms101 (~oms101@p20030057EA49CC00C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:34] <diq> I haven't seen one in ages
[1:34] <diq> cache pressure is annoying
[1:34] <diq> and meaningless TBH
[1:34] * vbellur (~vijay@71.234.224.255) has joined #ceph
[1:36] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[1:39] * jiffe (~jiffe@nsab.us) Quit (Ping timeout: 480 seconds)
[1:40] <diq> has anyone noticed any real world performance differences in running 4K native drives vs 512e drives with Ceph OSD's?
[1:40] <diq> I know about the theoretical hit of read-modify-writes on 512e
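
Whether a drive is 4K native or 512e can be checked before building OSDs on it; a quick sketch, with the device name as a placeholder:

    # 512e drives report 512-byte logical / 4096-byte physical sectors; 4Kn drives report 4096 for both
    blockdev --getss --getpbsz /dev/sdX
    cat /sys/block/sdX/queue/logical_block_size /sys/block/sdX/queue/physical_block_size

The read-modify-write penalty diq mentions only bites when writes are not aligned to the 4K physical sector, so measured differences tend to be workload-dependent.
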
[1:41] * ntpttr_ (~ntpttr@192.55.54.42) Quit (Remote host closed the connection)
[1:41] * ntpttr_ (~ntpttr@192.55.54.42) has joined #ceph
[1:41] * vata (~vata@207.96.182.162) Quit (Quit: Leaving.)
[1:47] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[1:47] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[1:48] * wushudoin (~wushudoin@2601:646:8200:c9f0:2ab2:bdff:fe0b:a6ee) Quit (Ping timeout: 480 seconds)
[1:54] * Concubidated (~cube@68.140.239.164) Quit (Quit: Leaving.)
[2:01] * kfox1111 (bob@leary.csoft.net) has joined #ceph
[2:01] <kfox1111> ok. could really use some help...
[2:02] <kfox1111> I'm deploying ceph in containers, in a vm in a gate test. (yeah, kind of scary. mostly seems to work)
[2:02] <kfox1111> but, if you look at the bottom bit of this:
[2:02] <kfox1111> http://logs.openstack.org/41/381041/30/experimental/gate-kolla-kubernetes-deploy-centos-binary-ceph-nv/2c11084/console.html
[2:03] <kfox1111> I see it get stuck often now when I try and create a rbd volume.
[2:03] * salwasser (~Adium@2601:197:101:5cc1:49f:18f0:e1f0:3629) has joined #ceph
[2:03] <kfox1111> I put a 4 minute timeout on it and the cluster seems stuck. there's only one mon, one osd.
[2:03] <kfox1111> As far as I can tell, everything's fine, it just never makes progress. any thoughts on what that can be?
[2:06] * ntpttr_ (~ntpttr@192.55.54.42) Quit (Remote host closed the connection)
[2:08] * jarrpa (~jarrpa@2602:3f:e183:a600:a4c6:1a92:820f:bb6) Quit (Ping timeout: 480 seconds)
[2:08] * billwebb (~billwebb@66.56.15.14) has joined #ceph
[2:09] * aNuposic (~aNuposic@192.55.54.43) Quit (Remote host closed the connection)
[2:13] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) has joined #ceph
[2:15] * Concubidated (~cube@h4.246.129.40.static.ip.windstream.net) has joined #ceph
[2:21] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[2:31] * sudocat (~dibarra@2602:306:8bc7:4c50:602b:9200:85ea:6bd7) has joined #ceph
[2:34] * salwasser (~Adium@2601:197:101:5cc1:49f:18f0:e1f0:3629) Quit (Quit: Leaving.)
[2:36] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[2:36] * sudocat1 (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) has joined #ceph
[2:36] * sudocat (~dibarra@2602:306:8bc7:4c50:602b:9200:85ea:6bd7) Quit ()
[2:38] * georgem (~Adium@69-165-135-139.dsl.teksavvy.com) has joined #ceph
[2:44] * sudocat1 (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[2:50] <minnesotags> tessier_; That seems like an odd arrangement. The OSDs aren't machines, they are drives. Are you saying you have two machines with raids of 4 disks? Or a machine that is a monitor with two raid arrays? If the first, scrap the raids and make those machines monitors with 4 OSDs (make sure to have an odd number of monitors), if it is the second case (which I suspect), scrap the raids and have the one monitor with 8 OSDs.
[2:54] <minnesotags> But aside from that, I don't know Centos7. If you were using Debian, I could help. Centos7 though seems like one of the "preferred" operating systems, so you should be able to follow the step by step instructions and get a working system.
[3:01] * sudocat (~dibarra@45-17-188-191.lightspeed.hstntx.sbcglobal.net) has joined #ceph
[3:09] * jfaj (~jan@p20030084AD1B01005EC5D4FFFEBB68A4.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[3:09] * sudocat (~dibarra@45-17-188-191.lightspeed.hstntx.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[3:14] * billwebb (~billwebb@66.56.15.14) Quit (Quit: billwebb)
[3:19] * jfaj (~jan@p20030084AD152E005EC5D4FFFEBB68A4.dip0.t-ipconnect.de) has joined #ceph
[3:21] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[3:42] * mLegion (~Popz@tor-exit.squirrel.theremailer.net) has joined #ceph
[3:46] * yanzheng1 (~zhyan@125.70.23.147) has joined #ceph
[4:01] * cyphase (~cyphase@000134f2.user.oftc.net) Quit (Ping timeout: 480 seconds)
[4:02] * cyphase (~cyphase@000134f2.user.oftc.net) has joined #ceph
[4:12] * mLegion (~Popz@tor-exit.squirrel.theremailer.net) Quit ()
[4:21] * Racpatel (~Racpatel@2601:87:3:31e3::34db) Quit (Ping timeout: 480 seconds)
[4:26] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Read error: Connection reset by peer)
[4:31] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) has joined #ceph
[5:02] * Vacuum__ (~Vacuum@i59F796CE.versanet.de) has joined #ceph
[5:09] * Vacuum_ (~Vacuum@88.130.203.105) Quit (Ping timeout: 480 seconds)
[5:12] * kefu (~kefu@114.92.125.128) has joined #ceph
[5:22] * rotbeard (~redbeard@aftr-109-90-233-215.unity-media.net) has joined #ceph
[5:28] * Bonzaii (~Maza@h-185-239.a322.priv.bahnhof.se) has joined #ceph
[5:31] * intr1nsic (~textual@96.228.196.111) has joined #ceph
[5:32] <intr1nsic> If we have PG that we are ok that the data is considered lost, is there a way to either delete the PG's or "reset" them to get back to a health ok state?
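
Nobody picked this up here; for reference, the Jewel-era knobs for deliberately giving up on a PG's data look roughly like the sketch below. All of it is destructive and last-resort, and the PG id / OSD id are placeholders:

    ceph health detail                          # identify the stuck or incomplete PGs
    ceph osd lost 12 --yes-i-really-mean-it     # declare a dead OSD permanently gone
    ceph pg 2.5 mark_unfound_lost delete        # discard unfound objects in that PG
    ceph pg force_create_pg 2.5                 # recreate a PG whose copies are all lost
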
[5:37] * georgem (~Adium@69-165-135-139.dsl.teksavvy.com) Quit (Quit: Leaving.)
[5:42] * jdillaman (~jdillaman@pool-108-18-97-95.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[5:52] * vimal (~vikumar@114.143.160.250) has joined #ceph
[5:54] * Hemanth (~hkumar_@103.228.221.190) has joined #ceph
[5:56] * evelu (~erwan@37.160.181.8) Quit (Ping timeout: 480 seconds)
[5:58] * kefu (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz...)
[5:58] * Bonzaii (~Maza@h-185-239.a322.priv.bahnhof.se) Quit ()
[5:59] * Vacuum_ (~Vacuum@88.130.222.224) has joined #ceph
[6:03] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[6:05] * evelu (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) has joined #ceph
[6:06] * Vacuum__ (~Vacuum@i59F796CE.versanet.de) Quit (Ping timeout: 480 seconds)
[6:07] * Kurt (~Adium@2001:628:1:5:cdee:9999:a7:3f78) Quit (Read error: No route to host)
[6:07] * Kurt (~Adium@2001:628:1:5:3cbc:8252:2296:6255) has joined #ceph
[6:08] * vimal (~vikumar@114.143.160.250) Quit (Quit: Leaving)
[6:11] * walcubi (~walcubi@p5797A087.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[6:11] * walcubi (~walcubi@p5797A06E.dip0.t-ipconnect.de) has joined #ceph
[6:13] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[6:15] * rdas (~rdas@121.244.87.116) has joined #ceph
[6:16] * ledgr (~ledgr@88-222-11-185.meganet.lt) has joined #ceph
[6:17] * wkennington (~wak@0001bde8.user.oftc.net) has joined #ceph
[6:18] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[6:21] <tessier_> minnesotags: I mean I have two machines with 4 OSDs each.
[6:21] <tessier_> minnesotags: No RAID involved.
[6:21] <tessier_> minnesotags: The quick start guide seems to imply that I can get by with 1 monitor. Is that not correct?
[6:22] <tessier_> minnesotags: Did you see the timeouts in the log I pastebin'd?
[6:22] <tessier_> Been following the step by step instructions...no luck. :(
[6:24] * ledgr (~ledgr@88-222-11-185.meganet.lt) Quit (Ping timeout: 480 seconds)
[6:24] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) has joined #ceph
[6:29] * toastydeath (~toast@pool-71-255-253-39.washdc.fios.verizon.net) has joined #ceph
[6:36] * vimal (~vikumar@121.244.87.116) has joined #ceph
[6:52] * danielsj (~Ian2128@108.61.166.139) has joined #ceph
[6:58] * vikhyat (~vumrao@49.248.86.245) has joined #ceph
[7:01] * jeh (~jeh@76.16.206.198) has joined #ceph
[7:22] * danielsj (~Ian2128@108.61.166.139) Quit ()
[7:23] * krashcan (~Krash@117.205.150.31) has joined #ceph
[7:24] * intr1nsic (~textual@96.228.196.111) Quit (Ping timeout: 480 seconds)
[7:26] <krashcan> hey,I am new to open source and I wanted to contribute.
[7:26] <krashcan> Where should I start?
[7:28] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[7:30] * kefu (~kefu@114.92.125.128) has joined #ceph
[7:31] * branto (~branto@transit-86-181-132-209.redhat.com) has joined #ceph
[7:35] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) Quit (Quit: treenerd_)
[7:38] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[7:39] * krashcan (~Krash@117.205.150.31) has left #ceph
[7:53] * karnan (~karnan@125.16.34.66) has joined #ceph
[8:06] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) Quit (Quit: treenerd_)
[8:07] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[8:11] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) Quit (Read error: Connection reset by peer)
[8:11] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[8:12] * Kvisle (~tv@tv.users.bitbit.net) Quit (Remote host closed the connection)
[8:13] * Ivan (~ipencak@213.151.95.130) has joined #ceph
[8:17] * kefu_ (~kefu@114.92.125.128) has joined #ceph
[8:17] * kefu (~kefu@114.92.125.128) Quit (Read error: Connection reset by peer)
[8:18] * krashcan (~Krash@117.205.150.31) has joined #ceph
[8:18] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) Quit (Quit: treenerd_)
[8:19] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[8:20] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) Quit ()
[8:20] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[8:20] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) Quit ()
[8:22] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[8:23] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) Quit ()
[8:23] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[8:25] <IcePic> kfox1111: do you have a crushmap that allows the pgs to be placed on your one osd?
[8:26] <IcePic> or a policy that says "keep only one copy, fine if it goes away" ?
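
For a throwaway one-mon/one-OSD cluster like the one kfox1111 describes, that "policy" is usually expressed by dropping the pool size to 1 and letting CRUSH choose OSDs rather than hosts; a sketch of the ceph.conf settings, suitable for test rigs only:

    [global]
        osd pool default size = 1         # keep a single copy
        osd pool default min size = 1
        osd crush chooseleaf type = 0     # pick OSDs instead of hosts, so one OSD can satisfy placement

Pools created before the change keep their old size, so "ceph osd pool set <pool> size 1" may also be needed on those.
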
[8:27] * jcsp (~jspray@62.214.2.210) has joined #ceph
[8:29] * kefu_ is now known as kefu|afk
[8:31] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[8:33] * kefu|afk (~kefu@114.92.125.128) Quit (Max SendQ exceeded)
[8:33] * kefu (~kefu@li1456-173.members.linode.com) has joined #ceph
[8:34] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) Quit (Quit: treenerd_)
[8:34] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[8:34] * lmb (~Lars@ip5b404bab.dynamic.kabel-deutschland.de) Quit (Ping timeout: 480 seconds)
[8:37] * krypto (~krypto@G68-121-13-162.sbcis.sbc.com) has joined #ceph
[8:39] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) Quit (Read error: Connection reset by peer)
[8:39] * treenerd__ (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[8:40] * treenerd__ is now known as treenerd_
[8:43] * jcsp (~jspray@62.214.2.210) Quit (Ping timeout: 480 seconds)
[8:50] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) has joined #ceph
[8:52] * ade (~abradshaw@p200300886B2F0600A6C494FFFE000780.dip0.t-ipconnect.de) has joined #ceph
[8:53] * jcsp (~jspray@62.214.2.210) has joined #ceph
[8:55] <FidoNet> hi ... so ... with cephfs I thought specifying mon01,mon02,mon03 in /etc/fstab would cause cephfs to auto switch if one of the mons vanishes ... that seems not to be the case as my cephfs mount is currently stuck in fault mode and in fact the node itself is hanging on ceph -s ...
[8:56] <FidoNet> other ceph clients are fine and see that mon01 has gone away
[8:56] <FidoNet> should we be exposing a single load balanced IP for the mons or is there something else afoot here ?
[8:56] * krashcan (~Krash@117.205.150.31) has left #ceph
[8:57] <FidoNet> 2016-10-05 07:53:46.749365 7f985146f700 0 -- 80.252.116.205:0/2718052439 >> 80.252.116.195:6789/0 pipe(0x7f984800ad70 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f9848006bc0).fault
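
For reference, a kernel-client fstab entry that lists all three mons looks roughly like this (hostnames, mountpoint and secret path are placeholders); the kernel client is expected to fail over between the listed mons, which is the behaviour FidoNet is missing here:

    mon01:6789,mon02:6789,mon03:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,_netdev,noatime  0  0
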
[8:58] * trociny (~mgolub@93.183.239.2) Quit (Ping timeout: 480 seconds)
[9:01] <FidoNet> https://pastebin.com/WciexJ1U
[9:02] * jcsp (~jspray@62.214.2.210) Quit (Ping timeout: 480 seconds)
[9:02] <Be-El> FidoNet: nope, ceph client should listen to mon map changes
[9:03] <FidoNet> ok thanks for confirming .. not sure why this missed the change
[9:03] <Be-El> do you use the kernel client or ceph-fuse?
[9:03] <FidoNet> kernel
[9:04] <thoht> hi. is it a good idea to modify "filestore max sync interval" to 15 (default 5) and filestore min sync interval to 5 (default: .01). according to doc : "Less frequent synchronization allows the backing filesystem to coalesce small writes and metadata updates more optimally - potentially resulting in more efficient synchronization.".
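
thoht's question goes unanswered in this log; for reference, the two options live in the [osd] section and the change being considered would look like the sketch below. Whether it actually helps depends on journal size and workload, since the journal has to absorb all writes between syncs:

    [osd]
        filestore min sync interval = 5      # default 0.01 seconds
        filestore max sync interval = 15     # default 5 seconds; longer intervals coalesce more small
                                             # writes but need a journal large enough to cover them
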
[9:04] <Be-El> and ceph -s should choose one of the mons listed in ceph.conf (and maybe even iterating over them if the first selected mon is not available)
[9:05] <FidoNet> ceph -s just sort of hung and more fail messages appeared
[9:05] <FidoNet> rebooting it seems to have fixed it short term
[9:05] <Be-El> are you sure all your mons are accessible from the clients?
[9:05] <FidoNet> still seems to think one of the mons is down though
[9:06] <FidoNet> seem to be
[9:06] * jcsp (~jspray@62.214.2.210) has joined #ceph
[9:06] * Hemanth (~hkumar_@103.228.221.190) Quit (Ping timeout: 480 seconds)
[9:06] <Be-El> FidoNet: if all mons are accessible the clients should react to mon changes
[9:07] <Be-El> FidoNet: just try to validate the connectivity (e.g. telnet to port 6789 for all mon on the failing client)
[9:07] <FidoNet> we rebooted mon01 ... seems like mon03 is also unwell .. I can ping it, but that's about it
[9:07] * AlexeyAbashkin (~AlexeyAba@91.207.132.67) Quit (Quit: Leaving)
[9:07] <FidoNet> 6789 responds ... ssh doesn't let me in
[9:07] <FidoNet> weird
[9:07] <FidoNet> debugging
[9:07] <Be-El> are you able to log into the hosts of the other mons?
[9:08] <FidoNet> yup
[9:08] <Be-El> you can use the ceph daemon command locally to retrieve information about running daemons
[9:08] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Quit: Leaving)
[9:08] * dgurtner (~dgurtner@178.197.236.72) has joined #ceph
[9:13] <FidoNet> never quite figured out how that works ... I just seem to get permission denied no matter what I try
[9:14] <FidoNet> s'ok .. think I've sussed it
[9:14] <FidoNet> deph dameon that is
[9:14] <FidoNet> daemon even
[9:19] <Be-El> ceph daemon mon.XYZ status
[9:20] <Be-El> the help command gives you an overview of available commands
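
The "ceph daemon" form Be-El is describing talks to the daemon's local admin socket, so it has to be run on the host where that daemon actually lives (which is what FidoNet works out just below). A few illustrative invocations, with the daemon names as placeholders:

    ceph daemon mon.mon01 help           # list everything this daemon's admin socket supports
    ceph daemon mon.mon01 mon_status     # quorum and rank info straight from the local mon
    ceph daemon osd.0 perf dump          # the same mechanism works for OSDs
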
[9:21] * krypto (~krypto@G68-121-13-162.sbcis.sbc.com) Quit (Ping timeout: 480 seconds)
[9:24] * derjohn_mob (~aj@42.red-176-83-90.dynamicip.rima-tde.net) has joined #ceph
[9:24] * lmb (~Lars@62.214.2.210) has joined #ceph
[9:24] * Hemanth (~hkumar_@103.228.221.190) has joined #ceph
[9:28] <FidoNet> yup .. on the remote host .. not on the "local" admin host ... terminology thing
[9:30] <FidoNet> same as I keep getting confused with the references to "public" network .. which is actually a private network ... guess we (ISPs) do things differently :) (public to us means must be routable and accessible by the public) ... private is RFC1918 style but routed locally to the LAN/WAN ... internal is not exposed at all outside of the VRF
[9:30] <FidoNet> (at least here it is :)
[9:31] <Be-El> public means accessible to clients in the case of ceph
[9:32] <FidoNet> like I say it's a terminology thing
[9:33] <FidoNet> for us, clients connect to the mons ... the mons connect to the private network and talk to the osd's which talk to each other on an internal network
[9:34] * Flynn (~stefan@89.207.24.152) has joined #ceph
[9:34] <FidoNet> but I'm all confused and it's been a long day :)
[9:35] <Be-El> the mons do not talk to osd on behalf of the clients....the clients talk to the osds ;-)
[9:35] <Be-El> and the private/internal/cluster network is only used for inter-osd traffic
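
In ceph.conf this split is just two settings; a sketch with placeholder subnets. In Ceph's vocabulary "public" only means reachable by clients and mons, not the public internet, which is the terminology clash being discussed here:

    [global]
        public network  = 192.0.2.0/24      # mons and client-facing OSD traffic
        cluster network = 198.51.100.0/24   # OSD-to-OSD replication, backfill and heartbeats only
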
[9:36] * Hemanth (~hkumar_@103.228.221.190) Quit (Quit: Leaving)
[9:37] <FidoNet> yup .. that was what confused me to start with ... was trying to work out where to put the 10 Gig NICs, where to put the 1Gig NICs and where we might one day need 40G/100G NICs
[9:37] * analbeard (~shw@support.memset.com) has joined #ceph
[9:37] <Be-El> FidoNet: well, there's no easy answer for that question
[9:38] <FidoNet> :)
[9:38] * Flynn (~stefan@89.207.24.152) Quit ()
[9:38] <FidoNet> tbh the online docs need some work ... they seem on the whole to be 3 years behind ... and there were a few major gotchas that cost me a week trying to fathom .. only answered by going back over the forums/lists ....
[9:38] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Ping timeout: 480 seconds)
[9:40] <IcePic> well, on the terminology, one would have to get into the storage network mindset, even if you happen to be an ISP
[9:42] <Be-El> 'client facing' might be a better term than 'public'
[9:43] * jcsp (~jspray@62.214.2.210) Quit (Ping timeout: 480 seconds)
[9:46] * AlexeyAbashkin (~AlexeyAba@91.207.132.76) has joined #ceph
[9:49] * peetaur2 (~peter@i4DF67CD2.pool.tripleplugandplay.com) Quit (Remote host closed the connection)
[9:49] <FidoNet> :)
[9:50] * dgurtner (~dgurtner@178.197.236.72) Quit (Read error: Connection reset by peer)
[9:50] <IcePic> well, I can see how an ISP would make that to be the internet outward ips once again
[9:51] <FidoNet> from a security perspective we don't really want any of the "gubbins" to be routed outside ... but then we need to work out what has to be exposed ... initially I thought it was the mons ... now I've had time to play more I realise that it's none of the above and that we need to build gateway nodes (rgw/cifs/etc) and expose those to customers ...
[9:51] * jcsp (~jspray@62.214.2.210) has joined #ceph
[9:51] * lmb (~Lars@62.214.2.210) Quit (Ping timeout: 480 seconds)
[9:51] * peetaur2 (~peter@i4DF67CD2.pool.tripleplugandplay.com) has joined #ceph
[9:52] * peetaur2 (~peter@i4DF67CD2.pool.tripleplugandplay.com) Quit ()
[9:52] <FidoNet> the diagrams / docs miss a lot of that / take it for granted (which to someone coming in to this "anew" leaves a big gap to cover)
[9:52] * trociny (~mgolub@93.183.239.2) has joined #ceph
[9:53] * dgurtner (~dgurtner@195.238.25.37) has joined #ceph
[9:53] <FidoNet> and currently leaves me trying to work out how I renumber the whole lot without trashing 50TB of data :)
[9:55] <FidoNet> (although most of it is "test data" and backups of backups so not the end of the world .. but a pain to rebuild)
[9:57] * lmb (~Lars@62.214.2.210) has joined #ceph
[10:04] <IcePic> rgw and similar machines can well expose a separate interface to $whoever with the public services they offer, and have 1-2-3 internal-only interfaces to talk to all kinds of ceph stuff internally
[10:05] * bvi (~Bastiaan@185.56.32.1) has joined #ceph
[10:08] * derjohn_mob (~aj@42.red-176-83-90.dynamicip.rima-tde.net) Quit (Read error: Connection timed out)
[10:09] * derjohn_mob (~aj@42.red-176-83-90.dynamicip.rima-tde.net) has joined #ceph
[10:16] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) Quit (Quit: treenerd_)
[10:17] * EinstCrazy (~EinstCraz@110.84.163.88) has joined #ceph
[10:19] * sickology (~root@vpn.bcs.hr) Quit (Read error: Connection reset by peer)
[10:19] * sickology (~root@vpn.bcs.hr) has joined #ceph
[10:23] * fridim (~fridim@56-198-190-109.dsl.ovh.fr) has joined #ceph
[10:25] * derjohn_mob (~aj@42.red-176-83-90.dynamicip.rima-tde.net) Quit (Read error: Connection timed out)
[10:26] * derjohn_mob (~aj@42.red-176-83-90.dynamicip.rima-tde.net) has joined #ceph
[10:28] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[10:29] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[10:29] * briner (~briner@2001:620:600:1000:fab1:56ff:fece:3849) has joined #ceph
[10:36] * ashah (~ashah@125.16.34.66) has joined #ceph
[10:36] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[10:36] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[10:37] * DanFoster (~Daniel@office.34sp.com) has joined #ceph
[10:38] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[10:38] * krypto (~krypto@125.16.137.146) has joined #ceph
[10:47] * kefu (~kefu@li1456-173.members.linode.com) Quit (Quit: Textual IRC Client: www.textualapp.com)
[10:59] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) has joined #ceph
[10:59] * Jamana (~Borf@5.153.234.146) has joined #ceph
[11:02] * efirs (~firs@98.207.153.155) Quit (Quit: Leaving.)
[11:05] * nilez (~nilez@104.129.29.42) Quit (Ping timeout: 480 seconds)
[11:08] * peetaur2 (~peter@i4DF67CD2.pool.tripleplugandplay.com) has joined #ceph
[11:13] * masber (~masber@129.94.15.152) Quit (Read error: Connection reset by peer)
[11:15] * rotbeard (~redbeard@aftr-109-90-233-215.unity-media.net) Quit (Read error: Connection reset by peer)
[11:18] <peetaur2> I just discovered rbd-fuse...the strangest thing. truncate -s 100 testfile makes a 100 byte rbd image and then cp testfile testfile2 makes it 1GB :D I wonder if I can use that for proxmox iso images... is there a way to tell it not to make everything 1GB by default?
[11:18] * rotbeard (~redbeard@aftr-109-90-233-215.unity-media.net) has joined #ceph
[11:22] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) Quit (Quit: doppelgrau)
[11:23] * EinstCrazy (~EinstCraz@110.84.163.88) Quit (Remote host closed the connection)
[11:24] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:84c4:a35d:32e9:a5e9) has joined #ceph
[11:28] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Ping timeout: 480 seconds)
[11:29] * Jamana (~Borf@5.153.234.146) Quit ()
[11:30] * KindOne_ (kindone@h229.169.16.98.dynamic.ip.windstream.net) has joined #ceph
[11:31] <Tetard> peetaur2: man cp, search for sparse ?
[11:32] * EinstCrazy (~EinstCraz@110.84.163.88) has joined #ceph
[11:32] <peetaur2> oh that actually worked
[11:33] <peetaur2> so is it sane to put some iso images in there for proxmox using this?
[11:33] <peetaur2> if someone builds a ceph cluster and openstack, they'd use rbd for their disk images...but where do they put iso files?
[11:34] * EinstCrazy (~EinstCraz@110.84.163.88) Quit (Remote host closed the connection)
[11:34] <Tetard> good Q - NFS mount across all compute nodes ?
[11:35] <peetaur2> but where is the nfs server? would I want my compute nodes to rely on themselves to run a vm for iso images?
[11:35] <peetaur2> or some other hardware wasted for this purpose?
[11:35] <Tetard> then you can do that, but have to sync ISOs across all nodes
[11:35] * KindOne- (kindone@h21.177.190.173.ip.windstream.net) has joined #ceph
[11:35] <Tetard> or make one of the nodes be the NFS
[11:35] <peetaur2> putting it on a ceph node would be kinda wrong too since it's not going to migrate to another node if the first is down
[11:35] <Tetard> or use GFS/OCFS on top of a shared mount on an RBD device
[11:36] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[11:36] * KindOne- is now known as KindOne
[11:36] <peetaur2> I was also thinking of a cluster fs...and people say they all suck...is that just iops which doesn't matter for read only iso images?
[11:36] <peetaur2> ..on shared rbd as you say
[11:39] <Gugge-47527> cant you just make a rbd pr iso, and attach them as a dvd type block device to the vm? :)
[11:39] <Gugge-47527> or use cephfs
[11:40] <peetaur2> cephfs isn't as production ready as rbd though... although I plan to use jewel, I wonder what people on hammer do
[11:40] <Tetard> Gugge-47527: good one :)
[11:40] * derjohn_mob (~aj@42.red-176-83-90.dynamicip.rima-tde.net) Quit (Ping timeout: 480 seconds)
[11:41] <Tetard> cluster FS suck differently but for sharing a mostly static cache of ISO images, OCFS2 or GFS2 would work fine
[11:41] <Gugge-47527> peetaur2: iso storage is not _that_ heavy :)
[11:41] <peetaur2> yeah but I don't want kernel panics or whatever...
[11:41] <Gugge-47527> its not like rbd-fuse is more production ready than cephfs :)
[11:41] <peetaur2> I tried cephfs years ago with cuttlefish/dumpling and every client kernel I tried would panic eventually :D
[11:42] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[11:42] <Tetard> I've used OCFS2 for doing sharing of VM images on KVM clusters, and it worked fine. No panics
[11:42] <Gugge-47527> then dont use the kernel client :)
[11:42] * KindOne_ (kindone@h229.169.16.98.dynamic.ip.windstream.net) Quit (Ping timeout: 480 seconds)
[11:42] <peetaur2> so I'm scarred for life...and I plan to use cephfs, but I wanted to avoid it on the VM compute nodes
[11:42] <Tetard> but the suggestions of one ISO = one RBD, shared read only as a block dvd device = cool
[11:42] <peetaur2> and also I considered using the ceph-fuse for cephfs; is that the best choice then?
[11:43] <peetaur2> and yeah one iso is one rbd sounds nice, but for proxmox, I think it makes more sense to do it like it normally does, with a directory of files... it doesn't have any UI for sharing disks between guests
[11:43] * rraja (~rraja@125.16.34.66) has joined #ceph
[11:44] <peetaur2> so that's what I thought of using rbd-fuse for...which looks good for that, except that it will give me wrong file sizes (but in theory not use more space since rbd is thin provisioned)
[11:44] <peetaur2> like on a normal fs, open file, write n bytes, flush, close file, and the file is n bytes... but with rbd-fuse, it's n>1GB?n:1GB
[11:45] <peetaur2> solved by cp --sparse=always ...but dunno what other weird behavior I'll find
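
The combination peetaur2 ends up with looks roughly like this (pool name, image and mountpoint are examples); without the sparse flag the copy balloons to the 1 GB size he observed above:

    rbd-fuse -p isos /mnt/rbd                      # expose each image in pool "isos" as a file
    cp --sparse=always debian-8.iso /mnt/rbd/      # sparse copy avoids allocating the whole image
    fusermount -u /mnt/rbd                         # unmount when done
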
[11:46] <Gugge-47527> i would trust the cephfs fuse client more than rbd-fuse :)
[11:46] * KindOne_ (kindone@h229.169.16.98.dynamic.ip.windstream.net) has joined #ceph
[11:46] <Gugge-47527> you could setup radosgw and use s3fs for iso storage too :)
[11:47] <peetaur2> rbd-fuse seems so simple though...how can it fail? ;) mkdir blah mkdir: cannot create directory 'blah': Function not implemented
[11:47] <peetaur2> but I know people use the cephfs fuse client for big things without issues, so by comparison my use case is so small that it also likely will never fail
[11:48] <peetaur2> and I'll end up using cephfs fuse too...since apparently you need a very new kernel with it (newer than ubuntu 14.04 kernel 3.13.0-96)
[11:48] * rdas (~rdas@121.244.87.113) has joined #ceph
[11:48] <peetaur2> but I wanted to be extra cautious on the vm compute nodes
[11:49] <peetaur2> (I don't have any fencing hardware, so no automatic failover)
[11:50] * derjohn_mob (~aj@23.red-176-83-91.dynamicip.rima-tde.net) has joined #ceph
[11:52] * KindOne- (kindone@h137.226.28.71.dynamic.ip.windstream.net) has joined #ceph
[11:52] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[11:52] * KindOne- is now known as KindOne
[11:53] * lmb (~Lars@62.214.2.210) Quit (Ping timeout: 480 seconds)
[11:54] <brians_> Hi Guys - one of my mons was using 7GB of ram so I just restarted. my ceph health is health_ok but this mon isn't right
[11:54] <brians_> it just shows 0 mon.1@1(peon).data_health(164) update_stats avail 81% total 24062 MB, used 3192 MB, avail 19625 MB in the logs over and over -
[11:54] <brians_> this isn't up to date cluster representation
[11:55] <brians_> cluster looks like this pgmap v9120168: 1536 pgs: 1536 active+clean; 207 GB data, 422 GB used, 11494 GB / 11917 GB avail; 93698 B/s wr, 19 op/s
[11:55] <peetaur2> what do I do if my ceph seems not to have rbd-fuse or a new enough kernel to map rbd images...? (ubuntu 14.04 with ceph 10.2.3)
[11:55] * jeroen_ (~jeroen@37.74.194.90) has joined #ceph
[11:55] * jeroen_ (~jeroen@37.74.194.90) Quit ()
[11:55] <brians_> should I remove this mon and add mon again to this node? Is that safe only leaving 2 other monitors?
[11:56] <brians_> while doing the process
[11:56] * jcsp (~jspray@62.214.2.210) Quit (Ping timeout: 480 seconds)
[11:57] * KindOne_ (kindone@h229.169.16.98.dynamic.ip.windstream.net) Quit (Ping timeout: 480 seconds)
[11:57] * jcsp (~jspray@62.214.2.210) has joined #ceph
[11:59] <brians_> please ignore :)
[11:59] <brians_> DERP
[12:00] <peetaur2> brians_: no...you can't leave us hanging... you must tell us how you fixed it now.
[12:00] <brians_> I was reading logs incorrectly
[12:00] <brians_> I don't think what I'm seeing is actually a problem
[12:00] <brians_> I think there was definitely a memory leak in the mon process (which I'll have to monitor more closely)
[12:01] <brians_> but I was putting 2 and 2 together and getting 7.6
[12:01] <brians_> :)
[12:01] <peetaur2> my guess is that 81% is the mon's disk space and the other is the pg/osd disk space?
[12:01] * dgurtner (~dgurtner@195.238.25.37) Quit (Ping timeout: 480 seconds)
[12:01] <peetaur2> I know 2nd, but not sure what the 81% is....where can I see that in my system?
[12:02] <peetaur2> oh I found it with grep update_stats\ avail... so it matches mon hd disk space. /dev/sda1 3.9G 2.1G 1.6G 58% / update_stats avail 40% total 3903 MB, used 2101 MB, avail 1581 MB
[12:03] <brians_> you were correct peetaur2
[12:03] <brians_> :)
[12:03] * nardial (~ls@p54894EE9.dip0.t-ipconnect.de) has joined #ceph
[12:04] * jcsp (~jspray@62.214.2.210) Quit (Quit: Ex-Chat)
[12:04] * jcsp (~jspray@62.214.2.210) has joined #ceph
[12:06] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[12:09] * doppelgrau (~doppelgra@132.252.235.172) has joined #ceph
[12:11] <peetaur2> ah a simple "apt-get install rbd-fuse" fixes my issue... I just assumed it would be together with the other package like it was on another machine
[12:12] <peetaur2> but what do other people do when they want to map an rbd image on a client that doesn't support it?
[12:12] <peetaur2> is rbd-fuse the proper way to map rbd without the kernel support?
[12:20] * dugravot6 (~dugravot6@l-p-dn-in-4a.lionnois.site.univ-lorraine.fr) Quit (Quit: Leaving.)
[12:21] * dugravot6 (~dugravot6@l-p-dn-in-4a.lionnois.site.univ-lorraine.fr) has joined #ceph
[12:24] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[12:33] <peetaur2> 403 iops on ocfs2 on rbd ... that's not bad
[12:33] <peetaur2> randwrite
[12:34] <peetaur2> with rbd-fuse that is...
[12:34] * [0x4A6F]_ (~ident@p4FC26E9D.dip0.t-ipconnect.de) has joined #ceph
[12:34] <peetaur2> with ceph-fuse (cephfs), I got only 236 iops
[12:35] <peetaur2> (and 2000 with cephfs kernel driver)
[12:37] * [0x4A6F] (~ident@0x4a6f.user.oftc.net) Quit (Ping timeout: 480 seconds)
[12:37] * [0x4A6F]_ is now known as [0x4A6F]
[12:38] * nilez (~nilez@104.129.29.42) has joined #ceph
[12:39] * percevalbot (~supybot@pct-empresas-83.uc3m.es) Quit (Remote host closed the connection)
[12:41] * percevalbot (~supybot@pct-empresas-83.uc3m.es) has joined #ceph
[12:44] * KindOne (kindone@0001a7db.user.oftc.net) has joined #ceph
[12:47] * karnan (~karnan@125.16.34.66) Quit (Ping timeout: 480 seconds)
[12:49] * dgurtner (~dgurtner@178.197.235.135) has joined #ceph
[12:50] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:84c4:a35d:32e9:a5e9) Quit (Ping timeout: 480 seconds)
[12:50] * ade (~abradshaw@p200300886B2F0600A6C494FFFE000780.dip0.t-ipconnect.de) Quit (Quit: Too sexy for his shirt)
[12:50] <peetaur2> is ocfs2 supposed to do this.... I did hostname>blah and then cat blah shows different hostname on different machines.
[12:51] <peetaur2> ran hostname>blah on both
[12:55] * krypto (~krypto@125.16.137.146) Quit (Ping timeout: 480 seconds)
[12:56] * krypto (~krypto@G68-121-13-175.sbcis.sbc.com) has joined #ceph
[12:57] * ade (~abradshaw@p200300886B2F0600A6C494FFFE000780.dip0.t-ipconnect.de) has joined #ceph
[13:03] * vimal (~vikumar@121.244.87.116) Quit (Quit: Leaving)
[13:03] * jcsp (~jspray@62.214.2.210) Quit (Ping timeout: 480 seconds)
[13:10] * jonas1 (~jonas@193.71.65.246) Quit (Quit: WeeChat 1.3)
[13:10] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[13:10] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[13:10] <darkfader> peetaur2: nope, it should be in sync
[13:11] <darkfader> there might be a little caching but it sounds like there's a communication issue or so
[13:12] <darkfader> write into the file once more, from node A, then plus umount + mount it on node B
[13:12] <darkfader> will the file then say "A"?
[13:12] * vimal (~vikumar@121.244.87.116) has joined #ceph
[13:12] <darkfader> i don't remember a lot of ocfs2 but there's some daemons that need to be able to talk and it also somehow needs to invalidate the local cache of linux
[13:13] <darkfader> it might also just be a fuse issue
[13:13] <darkfader> since fuse doesn't do direct io
[13:13] <thoht> hi. is it a good idea to modify "filestore max sync interval" to 15 (default 5) and filestore min sync interval to 5 (default: .01). according to doc : "Less frequent synchronization allows the backing filesystem to coalesce small writes and metadata updates more optimally - potentially resulting in more efficient synchronization.".
[13:13] <darkfader> i'm not sure if any kind of real cluster can work like that
[13:13] <Be-El> i would propose to use kernel rbd instead of rbd-fuse, since you'll get the same context switching overhead as with ceph-fuse
[13:13] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Ping timeout: 480 seconds)
[13:14] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Quit: Leaving)
[13:14] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[13:14] <Be-El> otherwise it's comparing apples with oranges...
[13:18] <peetaur2> Be-El: the ubuntu 14.04 kernel driver doesn't work with ceph 10.2.3
[13:20] * bniver (~bniver@pool-71-174-250-171.bstnma.fios.verizon.net) Quit (Remote host closed the connection)
[13:22] * rf`2 (~DoDzy@46.166.190.130) has joined #ceph
[13:25] * hk135 (~horner@rs-mailrelay1.hornerscomputer.co.uk) has joined #ceph
[13:26] <hk135> Hi all, I was wondering if anyone could tell me what http://pastebin.com/0eyrtXkg was about
[13:26] <hk135> other than that my mon is broken!!
[13:26] <peetaur2> hk135: should post more that is above that
[13:27] <hk135> peetaur2: okay, so I was trying to deploy cephfs, ran ceph fs new <meta> <data>, and started getting this on the monitor
[13:27] * georgem (~Adium@24.114.79.169) has joined #ceph
[13:28] * georgem (~Adium@24.114.79.169) Quit ()
[13:28] * georgem (~Adium@206.108.127.16) has joined #ceph
[13:33] * atheism (~atheism@182.48.117.114) Quit (Remote host closed the connection)
[13:33] * atheism (~atheism@182.48.117.114) has joined #ceph
[13:36] * jdillaman (~jdillaman@pool-108-18-97-95.washdc.fios.verizon.net) has joined #ceph
[13:41] * natarej (~natarej@101.188.54.14) has joined #ceph
[13:42] * evelu (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) Quit (Ping timeout: 480 seconds)
[13:45] * rdas (~rdas@121.244.87.113) Quit (Quit: Leaving)
[13:49] * evelu (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) has joined #ceph
[13:52] * rf`2 (~DoDzy@46.166.190.130) Quit ()
[13:52] * krypto (~krypto@G68-121-13-175.sbcis.sbc.com) Quit (Remote host closed the connection)
[13:55] * badone (~badone@66.187.239.16) has joined #ceph
[14:01] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[14:04] <kfox1111> IcePic: the config's here: http://pastebin.com/7U45svRS
[14:05] <kfox1111> it does occasionally work in the gate, and the same config works in a different vm I have. So I'm not sure why it doesn't work in the gate most of the time.
[14:06] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[14:10] * Racpatel (~Racpatel@2601:87:3:31e3::34db) has joined #ceph
[14:11] * Racpatel (~Racpatel@2601:87:3:31e3::34db) Quit ()
[14:14] * nardial (~ls@p54894EE9.dip0.t-ipconnect.de) Quit (Quit: Leaving)
[14:14] * bene2 (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[14:16] * jcsp (~jspray@62.214.2.210) has joined #ceph
[14:20] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) has joined #ceph
[14:22] * bene3 (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[14:23] * bene2 (~bene@nat-pool-bos-t.redhat.com) Quit (Read error: Connection reset by peer)
[14:24] * georgem (~Adium@206.108.127.16) has joined #ceph
[14:26] * mattbenjamin (~mbenjamin@76-206-42-50.lightspeed.livnmi.sbcglobal.net) has joined #ceph
[14:28] * VampiricPadraig (~bret@108.61.123.88) has joined #ceph
[14:28] * lmb (~Lars@62.214.2.210) has joined #ceph
[14:32] * b0e (~aledermue@213.95.25.82) has joined #ceph
[14:34] * ashah (~ashah@125.16.34.66) Quit (Quit: Leaving)
[14:34] * jcsp (~jspray@62.214.2.210) Quit (Ping timeout: 480 seconds)
[14:35] * rdias (~rdias@2001:8a0:749a:d01:22cf:30ff:fe65:690e) Quit (Ping timeout: 480 seconds)
[14:48] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[14:49] * billwebb (~billwebb@50-203-47-138-static.hfc.comcastbusiness.net) has joined #ceph
[14:57] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) Quit (Ping timeout: 480 seconds)
[14:57] * hommie (~oftc-webi@2a00:ec8:404:1113:a8b5:a2ea:b6c0:d1f5) has joined #ceph
[14:58] <hommie> guys, "ceph-deploy install [host] --release hammer" doesn't seem to find the packages ceph-osd anymore
[14:58] * VampiricPadraig (~bret@108.61.123.88) Quit ()
[14:58] * rotbeard (~redbeard@aftr-109-90-233-215.unity-media.net) Quit (Quit: Leaving)
[14:59] <hommie> even though I have the repo added to my sources.list (deb https://download.ceph.com/debian-hammer/ trusty main)
[14:59] * jcsp (~jspray@62.214.2.210) has joined #ceph
[14:59] <hommie> any ideas?
[15:10] * natarej__ (~natarej@101.188.54.14) Quit (Read error: Connection reset by peer)
[15:10] * hommie (~oftc-webi@2a00:ec8:404:1113:a8b5:a2ea:b6c0:d1f5) Quit (Remote host closed the connection)
[15:11] * natarej__ (~natarej@101.188.54.14) has joined #ceph
[15:14] * EinstCrazy (~EinstCraz@110.84.163.88) has joined #ceph
[15:15] * karnan (~karnan@106.51.139.218) has joined #ceph
[15:15] * EinstCrazy (~EinstCraz@110.84.163.88) Quit (Remote host closed the connection)
[15:18] <etienneme> apt-cache show ceph-osd works ? hoonetorg
[15:18] <etienneme> hommie sorry
[15:20] * mattbenjamin (~mbenjamin@76-206-42-50.lightspeed.livnmi.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[15:22] * lmb (~Lars@62.214.2.210) Quit (Ping timeout: 480 seconds)
[15:27] * fridim (~fridim@56-198-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[15:33] * dlan (~dennis@116.228.88.131) Quit (Ping timeout: 480 seconds)
[15:34] * ira (~ira@12.118.3.106) has joined #ceph
[15:34] * jcsp (~jspray@62.214.2.210) Quit (Ping timeout: 480 seconds)
[15:36] * blynch (~blynch@vm-nat.msi.umn.edu) Quit (Ping timeout: 480 seconds)
[15:37] * mattbenjamin (~mbenjamin@12.118.3.106) has joined #ceph
[15:38] * fridim (~fridim@56-198-190-109.dsl.ovh.fr) has joined #ceph
[15:39] * Nicho1as (~nicho1as@00022427.user.oftc.net) has joined #ceph
[15:40] * yanzheng1 (~zhyan@125.70.23.147) Quit (Quit: This computer has gone to sleep)
[15:44] * blynch (~blynch@vm-nat.msi.umn.edu) has joined #ceph
[15:45] * Xerati (~Pommesgab@86.104.15.15) has joined #ceph
[15:46] * lmb (~Lars@62.214.2.210) has joined #ceph
[15:47] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[15:47] * krypto (~krypto@G68-121-13-175.sbcis.sbc.com) has joined #ceph
[15:55] * bene2 (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[15:59] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[15:59] * bene3 (~bene@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[15:59] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[16:01] * briner (~briner@2001:620:600:1000:fab1:56ff:fece:3849) Quit (Quit: briner)
[16:02] * briner (~briner@129.194.16.54) has joined #ceph
[16:03] * dgurtner (~dgurtner@178.197.235.135) Quit (Read error: Connection reset by peer)
[16:13] * evelu (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) Quit (Remote host closed the connection)
[16:14] * dlan (~dennis@116.228.88.131) has joined #ceph
[16:15] * Xerati (~Pommesgab@86.104.15.15) Quit ()
[16:15] * derjohn_mob (~aj@23.red-176-83-91.dynamicip.rima-tde.net) Quit (Ping timeout: 480 seconds)
[16:17] * andreww (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:18] * dgurtner (~dgurtner@178.197.227.188) has joined #ceph
[16:19] * salwasser (~Adium@72.246.3.14) has joined #ceph
[16:22] * lmb (~Lars@62.214.2.210) Quit (Ping timeout: 480 seconds)
[16:24] * derjohn_mob (~aj@70.red-176-83-182.dynamicip.rima-tde.net) has joined #ceph
[16:25] * branto (~branto@transit-86-181-132-209.redhat.com) Quit (Quit: ZNC 1.6.3 - http://znc.in)
[16:26] * Meths (~meths@95.151.244.152) Quit (Ping timeout: 480 seconds)
[16:28] * xinli (~charleyst@32.97.110.55) has joined #ceph
[16:33] * bla (~b.laessig@chimeria.ext.pengutronix.de) Quit (Remote host closed the connection)
[16:33] <thoht> i'm replacing an OSD device with a new one device/ what is best to proceed. add the new one in cluster then remove the old OR remove the old and add the new one ?
[16:34] <thoht> i got replica 3 with 3 servers acting as OSD
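
thoht's question comes up again later in the day; with size=3 on only three hosts there is nowhere to re-replicate while an OSD is missing, so adding the new disk before draining the old one is the gentler order. A sketch of the removal half, once the replacement is up and the cluster is healthy again (osd.7 is a placeholder):

    ceph osd out 7                    # start draining the old OSD
    # wait for "ceph -s" to return to active+clean, then stop the ceph-osd process
    ceph osd crush remove osd.7
    ceph auth del osd.7
    ceph osd rm 7
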
[16:34] * vikhyat (~vumrao@49.248.86.245) Quit (Quit: Leaving)
[16:36] * bla (~b.laessig@chimeria.ext.pengutronix.de) has joined #ceph
[16:42] <minnesotags> tessier_ : ok, that makes more sense and that is the source of at least one of your problems. You need to set up and install ceph on each of the physical machines (nodes) as a monitor with 4 OSDs. Get it set up on the first one and come back. Don't try to add both. But when you add the second you end up with two monitors in the cluster and you need an odd number in the same cluster. But like I say, I am not a Ceph expert and I'm un
[16:43] <minnesotags> That said, did you wipe everything completely, including users and user directories and start from the preflight from scratch?
[16:43] * Meths (~meths@95.151.244.152) has joined #ceph
[16:46] * vata (~vata@207.96.182.162) has joined #ceph
[16:50] <tessier_> minnesotags: I did ceph-deploy purge but have not wiped the ceph user dirs.
[16:50] <tessier_> I'll do that too.
[16:53] * wjw-freebsd (~wjw@vpn.ecoracks.nl) has joined #ceph
[16:54] * Concubidated (~cube@h4.246.129.40.static.ip.windstream.net) Quit (Quit: Leaving.)
[16:57] * vbellur (~vijay@71.234.224.255) Quit (Ping timeout: 480 seconds)
[16:57] <minnesotags> Just warning you the way you intend to do your architecture is going to be a problem. Start with just the one monitor node on the one machine, get that working first, then add nodes (in even numbers) to the cluster. That way you have an odd number of monitors.
[16:59] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[16:59] <minnesotags> Delete all of the ceph directories in /var/lib, in /etc, etc..
[17:00] * jarrpa (~jarrpa@2602:3f:e183:a600:a4c6:1a92:820f:bb6) has joined #ceph
[17:01] * georgem1 (~Adium@206.108.127.16) has joined #ceph
[17:01] * georgem (~Adium@206.108.127.16) Quit (Read error: Connection reset by peer)
[17:02] * georgem (~Adium@206.108.127.16) has joined #ceph
[17:02] * georgem1 (~Adium@206.108.127.16) Quit (Read error: Connection reset by peer)
[17:03] * dugravot6 (~dugravot6@l-p-dn-in-4a.lionnois.site.univ-lorraine.fr) Quit (Quit: Leaving.)
[17:08] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Ping timeout: 480 seconds)
[17:09] * evelu (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) has joined #ceph
[17:10] * wushudoin (~wushudoin@38.140.108.2) has joined #ceph
[17:10] * Ivan (~ipencak@213.151.95.130) Quit (Quit: Leaving.)
[17:12] * TMM (~hp@dhcp-077-248-009-229.chello.nl) Quit (Ping timeout: 480 seconds)
[17:16] * aNuposic (~aNuposic@fmdmzpr02-ext.fm.intel.com) has joined #ceph
[17:18] <peetaur2> darkfader: it seems the rbd image is same seen anywhere but the loop device is different (on every node, and different than rbd image); is there a way to mount rbd ocfs2 without the kernel rbd driver?
[17:19] <peetaur2> and dropping caches and restarting o2cb didn't fix it
[17:19] <darkfader> peetaur2: look fuse is a toy
[17:20] <darkfader> same for loop imo
[17:20] <darkfader> we pretty much agreed here you should use the kernel rbd for this case
[17:20] * kristen (~kristen@134.134.139.76) has joined #ceph
[17:22] * The_Ball (~pi@20.92-221-43.customer.lyse.net) has joined #ceph
[17:22] * mykola (~Mikolaj@91.245.79.65) has joined #ceph
[17:23] * TMM (~hp@dhcp-077-248-009-229.chello.nl) has joined #ceph
[17:24] * vbellur (~vijay@nat-pool-bos-t.redhat.com) has joined #ceph
[17:24] * natarej__ (~natarej@101.188.54.14) Quit (Quit: Leaving)
[17:26] <Be-El> what about rbd-nbd?
[17:26] <dillaman> +1 -- I wouldn't recommend ceph-fuse for any real workloads
[17:27] <dillaman> rbd-nbd would be a better alternative -- but you would need to disable all librbd client-side caching, so realistically you'd probably be better off directly using krbd
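
A sketch of what dillaman's suggestion looks like in practice (pool and image names are examples); rbd-nbd ships with Jewel and maps through the generic nbd driver, so it sidesteps the kernel feature mismatch that comes up below:

    # per dillaman's caveat, turn off librbd client-side caching on this client, e.g. in ceph.conf:
    [client]
        rbd cache = false

    rbd-nbd map rbd/isoimage          # prints the device it attached, e.g. /dev/nbd0
    mount -o ro /dev/nbd0 /mnt/iso
    rbd-nbd unmap /dev/nbd0
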
[17:28] <peetaur2> darkfader: I know loop is not a good idea here, and I agree fuse is not the greatest, but it isn't the problem here... but how can I do this if the kernel driver won't map the image? https://bpaste.net/show/4d5350d845fa
[17:28] * lmb (~Lars@62.214.2.210) has joined #ceph
[17:28] <peetaur2> I already tried: rbd feature disable test/testfile1 exclusive-lock, object-map, fast-diff, deep-flatten
[17:28] <dillaman> peetaur2: you need to set your cluster's tunables to legacy
[17:29] <dillaman> http://cephnotes.ksperis.com/blog/2014/01/21/feature-set-mismatch-error-on-ceph-kernel-client
[17:29] * Concubidated (~cube@68.140.239.164) has joined #ceph
[17:29] <dillaman> peetaur2: note: changing the tunables will result in lots of data movement
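
For reference, the commands involved (and both directions reshuffle a lot of data on a populated cluster, as dillaman warns):

    ceph osd crush show-tunables      # show the profile the cluster is currently using
    ceph osd crush tunables legacy    # profile that old kernel clients can cope with
    ceph osd crush tunables optimal   # back to the current release's defaults
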
[17:31] * ircolle (~Adium@2601:285:201:633a:ded:c0e5:7272:e568) has joined #ceph
[17:34] * vbellur (~vijay@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[17:34] <peetaur2> ok so legacy fixes it... so what do I lose if I use this?
[17:34] * vbellur (~vijay@nat-pool-bos-t.redhat.com) has joined #ceph
[17:35] * dgurtner (~dgurtner@178.197.227.188) Quit (Read error: Connection reset by peer)
[17:38] <dillaman> peetaur2: there are lots of little tweaks associated with legacy vs optimal tunables
[17:38] <dillaman> peetaur2: http://docs.ceph.com/docs/jewel/rados/operations/crush-map/#tunables
[17:39] <Be-El> peetaur2: you can also use an up to date kernel from the kernel mainline ppa
[17:41] * rwheeler (~rwheeler@46.189.28.237) has joined #ceph
[17:45] * lmb (~Lars@62.214.2.210) Quit (Ping timeout: 480 seconds)
[17:46] <peetaur2> ok well I think I want it at optimal... don't want to change it just for this ocfs2 thing. I'll think of something else for iso images.
[17:46] * vimal (~vikumar@121.244.87.116) Quit (Quit: Leaving)
[17:46] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) Quit (Remote host closed the connection)
[17:47] <peetaur2> (or use ceph-fuse or rbd-fuse which seem fine...)
[17:47] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) has joined #ceph
[17:47] <peetaur2> thanks for the help all 3 of you and more I forgot...goin home
[17:49] * ntpttr_ (~ntpttr@134.134.139.82) has joined #ceph
[17:49] * starcoder (~Plesioth@tor01.prd.kista.ovpn.se) has joined #ceph
[17:50] * andreww (~xarses@64.124.158.3) has joined #ceph
[17:51] * aNuposic (~aNuposic@fmdmzpr02-ext.fm.intel.com) Quit (Remote host closed the connection)
[17:52] * Nicho1as (~nicho1as@00022427.user.oftc.net) Quit (Quit: A man from the Far East; using WeeChat 1.5)
[17:54] <hk135> Hi All, I am getting an issue where my mon crashes at startup after I have created a cephfs
[17:54] <hk135> I was wondering if I have hit a know bug or anything like that
[17:55] <hk135> http://pastebin.com/0eyrtXkg
[17:56] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[17:58] * efirs (~firs@98.207.153.155) has joined #ceph
[17:58] * sudocat1 (~dibarra@192.185.1.20) has joined #ceph
[17:59] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) Quit (Quit: WeeChat 1.5)
[18:01] * davidzlap (~Adium@2605:e000:1313:8003:2076:903a:7dd0:df52) has joined #ceph
[18:02] * sudocat1 (~dibarra@192.185.1.20) Quit ()
[18:05] * dgurtner (~dgurtner@178.197.227.188) has joined #ceph
[18:05] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) has joined #ceph
[18:06] * rwheeler (~rwheeler@46.189.28.237) Quit (Remote host closed the connection)
[18:06] * ntpttr_ (~ntpttr@134.134.139.82) Quit (Remote host closed the connection)
[18:07] * georgem1 (~Adium@206.108.127.16) has joined #ceph
[18:07] * georgem (~Adium@206.108.127.16) Quit (Read error: Connection reset by peer)
[18:11] * bvi (~Bastiaan@185.56.32.1) Quit (Quit: Leaving)
[18:12] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:84c4:a35d:32e9:a5e9) has joined #ceph
[18:19] * starcoder (~Plesioth@tor01.prd.kista.ovpn.se) Quit ()
[18:21] * mitchty (~quassel@130-245-47-212.rev.cloud.scaleway.com) Quit (Quit: No Ping reply in 180 seconds.)
[18:22] * efirs1 (~firs@98.207.153.155) has joined #ceph
[18:22] * mitchty (~quassel@130-245-47-212.rev.cloud.scaleway.com) has joined #ceph
[18:23] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) Quit (Quit: Leaving.)
[18:25] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[18:28] * newbie87 (~kvirc@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[18:29] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:32] * ntpttr_ (~ntpttr@134.134.139.83) has joined #ceph
[18:34] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) Quit (Ping timeout: 480 seconds)
[18:34] * derjohn_mob (~aj@70.red-176-83-182.dynamicip.rima-tde.net) Quit (Read error: Connection reset by peer)
[18:35] * rraja (~rraja@125.16.34.66) Quit (Ping timeout: 480 seconds)
[18:36] * ntpttr__ (~ntpttr@134.134.139.83) has joined #ceph
[18:36] * cathode (~cathode@50-232-215-114-static.hfc.comcastbusiness.net) has joined #ceph
[18:36] * Racpatel (~Racpatel@2601:87:3:31e3::34db) has joined #ceph
[18:38] * newbie87 (~kvirc@host217-114-156-249.pppoe.mark-itt.net) Quit (Ping timeout: 480 seconds)
[18:40] * ntpttr_ (~ntpttr@134.134.139.83) Quit (Remote host closed the connection)
[18:42] * ntpttr__ (~ntpttr@134.134.139.83) Quit (Quit: Leaving)
[18:42] * wjw-freebsd (~wjw@vpn.ecoracks.nl) Quit (Ping timeout: 480 seconds)
[18:43] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[18:44] * kristen (~kristen@134.134.139.76) Quit (Remote host closed the connection)
[18:44] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[18:45] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:84c4:a35d:32e9:a5e9) Quit (Ping timeout: 480 seconds)
[18:51] * _kelv (uid73234@id-73234.highgate.irccloud.com) Quit (Quit: Connection closed for inactivity)
[18:56] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[18:58] * dgurtner_ (~dgurtner@109.236.136.226) has joined #ceph
[19:00] * dgurtner (~dgurtner@178.197.227.188) Quit (Ping timeout: 480 seconds)
[19:02] * DanFoster (~Daniel@office.34sp.com) Quit (Quit: Leaving)
[19:03] <tessier_> minnesotags: Understood, but this is just a test. Not production. I'm just following the way the quickstart guide recommends getting started. Is the quick start guide no longer accurate or a good way to go? It says to start with 1 monitor (an odd number) and an even number of nodes (2). That's exactly what I'm doing.
[19:04] <tessier_> minnesotags: But would any of the things you say I am doing wrong here cause the timeout which we see in the logs which I pasted?
[19:04] * KindOne_ (kindone@h4.129.30.71.dynamic.ip.windstream.net) has joined #ceph
[19:09] * aNuposic (~aNuposic@134.134.139.83) has joined #ceph
[19:11] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:11] * KindOne_ is now known as KindOne
[19:12] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:84c4:a35d:32e9:a5e9) has joined #ceph
[19:13] * Nacer (~Nacer@37.167.55.154) has joined #ceph
[19:18] * Nacer (~Nacer@37.167.55.154) Quit (Remote host closed the connection)
[19:19] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) Quit (Quit: treenerd_)
[19:57] * blynch (~blynch@vm-nat.msi.umn.edu) Quit (Remote host closed the connection)
[19:57] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[19:58] * BrianA (~BrianA@c-73-189-153-151.hsd1.ca.comcast.net) has joined #ceph
[20:00] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:84c4:a35d:32e9:a5e9) Quit (Ping timeout: 480 seconds)
[20:01] * ledgr (~ledgr@88-222-11-185.meganet.lt) has joined #ceph
[20:04] * SamYaple_ (~SamYaple@162.209.126.134) has joined #ceph
[20:04] * SamYaple_ (~SamYaple@162.209.126.134) Quit ()
[20:04] * SamYaple (~SamYaple@162.209.126.134) Quit (Read error: Connection reset by peer)
[20:04] * SamYaple (~SamYaple@162.209.126.134) has joined #ceph
[20:05] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) Quit (Remote host closed the connection)
[20:07] <tessier_> Just posted to ceph-users looking for a paid consultant. We need to get this project moving.
[20:09] * efirs1 (~firs@98.207.153.155) Quit (Remote host closed the connection)
[20:12] * efirs1 (~firs@98.207.153.155) has joined #ceph
[20:13] * aNuposic (~aNuposic@134.134.139.83) Quit (Remote host closed the connection)
[20:14] <kfox1111> what are the reasons why pgs might get stuck forever on a new cluster?
[20:14] <kfox1111> I keep getting clusters that don't schedule them and get stuck on initial install. not much useful in the logs though. They seem pretty happy otherwise. :/
[20:15] * efirs1 (~firs@98.207.153.155) Quit ()
[20:15] <T1> kfox1111: default CRUSH map? how many nodes?
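
When PGs sit in creating/undersized on a brand-new cluster, the usual first pulls are below; on a one-node or one-OSD install the cause is normally the pool size / chooseleaf settings discussed earlier this morning (pool name is a placeholder):

    ceph health detail               # names the stuck PGs and the reason
    ceph pg dump_stuck unclean       # also accepts inactive, stale, undersized
    ceph osd tree                    # are the OSDs up/in, and where does CRUSH think they live?
    ceph osd pool get rbd size       # replica count vs. the hosts/OSDs actually available
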
[20:15] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[20:19] * jeh (~jeh@76.16.206.198) Quit ()
[20:25] * ledgr (~ledgr@88-222-11-185.meganet.lt) Quit (Remote host closed the connection)
[20:25] <tessier_> Hah...two posts to ceph list with a problem, no reply over the course of days. One post saying "$$$!!!" and two replies in minutes. :)
[20:26] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) has joined #ceph
[20:26] <tessier_> I'll give it until the end of the day to see who else replies.
[20:27] <T1> well.. $$$ changes a lot of things
[20:28] * doppelgrau (~doppelgra@132.252.235.172) Quit (Quit: Leaving.)
[20:28] <T1> apart from that - unless you have the time to learn yourself (or others in your organization) you end up paying a consultant the same amount as the regular paycheck would have cost..
[20:35] <m0zes> gregsfortytwo: thoughts on what I need to run to reset the mdsmap/flush the mds journal on my cluster?
[20:36] <gregsfortytwo> m0zes: there's a doc in the FS section about disaster recovery
[20:36] <gregsfortytwo> do the journal scavenging and flushing steps
[20:36] <gregsfortytwo> then the fs reset command via the ceph tool
[20:36] <m0zes> thanks
[20:38] <m0zes> I somehow missed that when looking.
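
For anyone following along, the disaster-recovery document gregsfortytwo refers to boils down to roughly this sequence; it is destructive, last-resort territory, and the filesystem name is a placeholder:

    cephfs-journal-tool journal export backup.bin         # keep a copy of the journal first
    cephfs-journal-tool event recover_dentries summary    # scavenge what the journal still holds
    cephfs-journal-tool journal reset
    cephfs-table-tool all reset session
    ceph fs reset myfs --yes-i-really-mean-it
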
[20:40] * jeh (~jeh@76.16.206.198) has joined #ceph
[20:48] * Hemanth (~hkumar_@103.228.221.190) has joined #ceph
[20:52] * xinli (~charleyst@32.97.110.55) Quit (Ping timeout: 480 seconds)
[21:01] * evelu (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) Quit (Ping timeout: 480 seconds)
[21:04] * evelu (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) has joined #ceph
[21:07] <thoht> to replace a node (MON+OSD) by a new one (new hardware), on a cluster of 3 nodes, what is the best ?
[21:07] <thoht> going from 3 to 4 nodes then go to 3 (by removing the old one)
[21:07] <thoht> going from 3 to 2 nodes (removing the old one at first) then go to 3
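
On the mon side the usual advice is to grow before shrinking, so quorum never has to survive on only two monitors; a hedged sketch, assuming ceph-deploy is in use and the host names are placeholders:

    ceph-deploy mon add newhost      # 3 -> 4 monitors; or follow the manual add/remove-mons procedure
    # wait until "ceph -s" shows all four mons in quorum, then retire the old node:
    ceph mon remove oldhost

The OSDs on the old node can then be drained with the same out/crush remove/auth del/rm steps sketched earlier this afternoon.
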
[21:09] * efirs1 (~firs@209.49.4.114) has joined #ceph
[21:09] * billwebb (~billwebb@50-203-47-138-static.hfc.comcastbusiness.net) Quit (Quit: billwebb)
[21:13] * bjozet (~bjozet@82-183-17-144.customers.ownit.se) has joined #ceph
[21:13] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) has joined #ceph
[21:18] * aNuposic (~aNuposic@192.55.55.41) has joined #ceph
[21:20] * m0zes (~mozes@ns1.beocat.ksu.edu) Quit (Ping timeout: 480 seconds)
[21:23] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[21:23] * ledgr_ (~ledgr@88-222-11-185.meganet.lt) has joined #ceph
[21:24] * rwheeler (~rwheeler@46.189.28.81) has joined #ceph
[21:25] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[21:29] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[21:36] * m0zes (~mozes@ns1.beocat.ksu.edu) has joined #ceph
[21:38] * xinli (~charleyst@32.97.110.55) has joined #ceph
[21:38] * kristen (~kristen@134.134.139.76) has joined #ceph
[21:41] <m0zes> gregsfortytwo: Thanks for all your help on this, we've got the filesystem mounting again.
[21:47] * lmb (~Lars@ip5b404bab.dynamic.kabel-deutschland.de) has joined #ceph
[21:49] * salwasser (~Adium@72.246.3.14) Quit (Quit: Leaving.)
[21:53] * krypto (~krypto@G68-121-13-175.sbcis.sbc.com) Quit (Read error: Connection reset by peer)
[21:55] * karnan (~karnan@106.51.139.218) Quit (Ping timeout: 480 seconds)
[21:56] <gregsfortytwo> hurray
[21:57] * Dinnerbone (~Thononain@212.7.196.82) has joined #ceph
[22:03] * ade (~abradshaw@p200300886B2F0600A6C494FFFE000780.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[22:03] * mykola (~Mikolaj@91.245.79.65) Quit (Quit: away)
[22:09] * Hemanth (~hkumar_@103.228.221.190) Quit (Ping timeout: 480 seconds)
[22:13] * ade (~abradshaw@p4FF7B414.dip0.t-ipconnect.de) has joined #ceph
[22:18] * georgem1 (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[22:25] * wushudoin (~wushudoin@38.140.108.2) Quit (Ping timeout: 480 seconds)
[22:27] * vZerberus (~dogtail@00021993.user.oftc.net) Quit (Quit: Coyote finally caught me)
[22:27] * Dinnerbone (~Thononain@212.7.196.82) Quit ()
[22:31] <minnesotags> tessier_; Time is money. If you don't have the time, prepare to pay the money.
[22:32] * bjozet (~bjozet@82-183-17-144.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[22:36] <minnesotags> Ok, I can see you still have a misunderstanding of the terminology, which is understandable. You start with one node. A node is a monitor is a host machine. All the same, basically. Under that, they say "two OSD Daemons". In your case, that means hard drives. Once you get your one monitor up and running, it is easy to add OSDs, so just add all four at one time.
[22:37] <minnesotags> By "all four", I mean all four of the drives on your host machine.
[22:42] <minnesotags> The "admin-node" can be the same machine as your "node1" monitor. That is how people (like me) do it on single machines. In my case, I have a server with 8 ssd drives, two are bound in a redundant raid and is where my os is located, the other six drives are OSDs on a single monitor at localhost (also with the localhost's network ip address).
[22:52] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:84c4:a35d:32e9:a5e9) has joined #ceph
[22:54] * jarrpa (~jarrpa@2602:3f:e183:a600:a4c6:1a92:820f:bb6) Quit (Ping timeout: 480 seconds)
[22:54] * valeech (~valeech@pool-96-247-203-33.clppva.fios.verizon.net) Quit (Remote host closed the connection)
[22:55] * valeech (~valeech@pool-96-247-203-33.clppva.fios.verizon.net) has joined #ceph
[22:58] * davidzlap (~Adium@2605:e000:1313:8003:2076:903a:7dd0:df52) Quit (Quit: Leaving.)
[23:02] * haplo37 (~haplo37@199.91.185.156) Quit (Remote host closed the connection)
[23:13] * vZerberus (~dogtail@00021993.user.oftc.net) has joined #ceph
[23:15] * georgem (~Adium@24.114.71.50) has joined #ceph
[23:16] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[23:18] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) Quit (Ping timeout: 480 seconds)
[23:21] * vbellur (~vijay@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[23:24] * bniver (~bniver@pool-71-174-250-171.bstnma.fios.verizon.net) has joined #ceph
[23:25] * georgem (~Adium@24.114.71.50) has left #ceph
[23:28] * kfox1111 (bob@leary.csoft.net) Quit (Quit: leaving)
[23:28] * aNuposic (~aNuposic@192.55.55.41) Quit (Quit: Leaving)
[23:38] * ceph-ircslackbot4 (~ceph-ircs@ds9536.dreamservers.com) Quit (Remote host closed the connection)
[23:38] * ceph-ircslackbot (~ceph-ircs@ds9536.dreamservers.com) has joined #ceph
[23:44] * fridim (~fridim@56-198-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[23:44] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[23:46] * rdias (~rdias@2001:8a0:74a0:bf01:22cf:30ff:fe65:690e) has joined #ceph
[23:49] * mattbenjamin (~mbenjamin@12.118.3.106) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.