#ceph IRC Log

IRC Log for 2014-03-15

Timestamps are in GMT/BST.

[0:08] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[0:11] * BillK (~BillK-OFT@58-7-191-145.dyn.iinet.net.au) has joined #ceph
[0:17] * ircolle (~Adium@2601:1:8380:2d9:b9d1:af93:7aba:738f) Quit (Quit: Leaving.)
[0:19] <elyograg> I started looking at the ceph docs for getting started, or quickstart, or whatever it's called. Those help you set up something that is NOT fault tolerant. Are there instructions anywhere for setting up something that is fault tolerant from the beginning?
[0:20] <elyograg> I don't want fault tolerance to be something that I 'add on'.
[0:20] <bitblt> ok so it's not a disk partition issue, at this point it's a mounting issue..
[0:21] <bitblt> elyograg, define fault tolerant?
[0:22] <elyograg> able to keep running with minimal outage in the event that one box (server, switch, etc) fails hard.
[0:22] <Sysadmin88> then build your system to do that :)
[0:22] <elyograg> I need instructions.
[0:22] <Sysadmin88> multiple nodes with multiple disks and multiple switches
[0:22] <bitblt> minimum 3 mon nodes and 2 osd nodes
[0:22] <elyograg> I don't want to set up a non-redundant setup, then have to scour the reference materials for information about making every piece fault tolerant.
[0:23] <Sysadmin88> ceph accepts failure and rebalances
[0:23] <bitblt> failure is considered normal/acceptable with ceph
[0:23] <Sysadmin88> but it is self healing and self replicating
[0:24] <elyograg> I know that it supports that. But the step-by-step documentation seems to be all about a toy setup without that capability.
[0:24] <Sysadmin88> maybe that's aimed at people testing it
[0:24] <Sysadmin88> ceph will work with what you give it...
[0:25] <bitblt> are you looking at the quickstart?
[0:25] <elyograg> not right this minute. I have looked at it before, though.
[0:25] <bitblt> i'm not sure what you are looking at that makes you think it's a toy setup?
[0:26] <elyograg> it doesn't talk about redundant boxes.
[0:26] <Sysadmin88> if the quickstart tells you how to make OSDs, MONs etc. then you can adapt that to whatever setup you need/can afford.
[0:26] <Sysadmin88> everything in ceph is a redundant box
[0:26] <bitblt> it's redundant by design.
[0:26] <Sysadmin88> if a drive dies, its data is recreated somewhere else from replicas
[0:27] <bitblt> you don't need load balancers or anything like that either, unless you are using the object storage bit, and then you would put LBs in front of the rados gateways
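
A rough sketch of the minimal fault-tolerant layout being described above, using ceph-deploy; the hostnames (node1-node3), the /dev/sdb devices and the replica count are placeholders, not anything the channel specified:

    # From an admin host that can SSH to the three nodes without a password.
    ceph-deploy new node1 node2 node3        # writes ceph.conf with three monitors
    ceph-deploy install node1 node2 node3
    ceph-deploy mon create-initial           # brings up the monitor quorum

    # One OSD per data disk, spread over at least two hosts so replicas
    # can live on a different box than the primary copy.
    ceph-deploy osd prepare  node1:/dev/sdb node2:/dev/sdb node3:/dev/sdb
    ceph-deploy osd activate node1:/dev/sdb1 node2:/dev/sdb1 node3:/dev/sdb1

    # Keep two copies of every object; the default CRUSH rule places the
    # copies on different hosts, which is what survives a dead box.
    ceph osd pool set data size 2
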
[0:27] * bitblt just wants to know why his osds aren't mounting on rhel 6.5 :(
[0:28] <bitblt> i see stuff like this and wonder https://github.com/ceph/ceph/blob/master/src/ceph-disk#L65
[0:31] <elyograg> I have six boxes in my testbed to work with. So I need three monitor nodes and two OSD nodes for a minimum setup. Are there other roles involved? Which roles can share hardware, and which shouldn't?
[0:31] <bitblt> ideally you don't want to share any for best performance
[0:31] <bitblt> although depending on the box, many people do cohabitate mons and osds
[0:32] <bitblt> depends on which services you use too. eg block, object, or ceph fs
[0:32] <elyograg> feel free to point me at URLs for further reading on these questions, too.
[0:32] <Sysadmin88> youtube has lots of videos by the ceph team
[0:33] <Sysadmin88> but you should research a lot before trusting something with your data
[0:33] <elyograg> I'll be doing the ceph fs. if it's possible to have both an object store access and filesystem access to the same data, we'll want to do that.
[0:33] <Sysadmin88> iirc that's one thing that ceph doesn't do 'yet'
[0:33] <Sysadmin88> watch the video on ceph vs glusterfs
[0:34] <bitblt> it does, but that's the one bit that's not prod ready..^^^
[0:34] <bitblt> for that you need the additional MDS role
[0:34] <bitblt> iirc it's supposed to be more polished in firefly
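
The extra MDS role mentioned above is a one-line addition once a cluster exists; a sketch assuming ceph-deploy is being used and "node1" is a placeholder hostname:

    # Add a CephFS metadata server; the mons and OSDs are unaffected.
    ceph-deploy mds create node1

    # Verify that an MDS is registered and active before mounting CephFS.
    ceph mds stat
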
[0:34] <elyograg> I did some evaluations on various distributed filesystems. Ceph never even entered the running because the docs said that it was unstable when re-exported via NFS, and we have Solaris clients that we can't yet eliminate. We went with the G-word, and have had nothing but problems.
[0:36] * alram (~alram@38.122.20.226) Quit (Read error: Operation timed out)
[0:36] <bitblt> yeah nfs is hairy
[0:37] <elyograg> gluster has had problems that did not appear in my testing, even though I did tests that should have exploded. Of course I was unable to test with anywhere near as much data as we have in production.
[0:40] * xmltok (~xmltok@216.103.134.250) Quit (Quit: Leaving...)
[0:41] * sarob (~sarob@ip-64-134-225-149.public.wayport.net) Quit (Remote host closed the connection)
[0:41] <elyograg> be back later. going home.
[0:41] * elyograg (~oftc-webi@client175.mainstreamdata.com) has left #ceph
[0:45] * sjustwork (~sam@2607:f298:a:607:45ad:a4fc:7bbd:392c) Quit (Quit: Leaving.)
[0:47] <bitblt> hmm this looks about right http://tracker.ceph.com/issues/5194
[0:59] * BillK (~BillK-OFT@58-7-191-145.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[1:00] * BillK (~BillK-OFT@124-149-88-171.dyn.iinet.net.au) has joined #ceph
[1:09] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Remote host closed the connection)
[1:09] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[1:21] * funnel (~funnel@0001c7d4.user.oftc.net) Quit (Remote host closed the connection)
[1:21] * funnel (~funnel@0001c7d4.user.oftc.net) has joined #ceph
[1:28] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:28] * hasues (~hazuez@12.216.44.38) has joined #ceph
[1:30] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[1:32] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[1:35] * danieagle (~Daniel@179.186.127.193.dynamic.adsl.gvt.net.br) has joined #ceph
[1:39] * skeenan (~Adium@8.21.68.242) has joined #ceph
[1:40] <skeenan> hi all, using ceph-0.67-7 I'm getting mount error 5 = Input/output error when I try to mount. pgmap v85: 192 pgs: 192 active+clean, mds health is ok
[1:41] <skeenan> any pointers to what I might be doing wrong
[1:41] <skeenan> [root@sk-chef ~]# ceph -w
[1:41] <skeenan> cluster 61b6dda1-5412-41f7-9769-3ae7e47241b7
[1:41] <skeenan> health HEALTH_OK
[1:41] <skeenan> monmap e1: 1 mons at {ceph-mds1=10.9.53.53:6789/0}, election epoch 1, quorum 0 ceph-mds1
[1:41] <skeenan> osdmap e14: 2 osds: 2 up, 2 in
[1:41] <skeenan> pgmap v86: 192 pgs: 192 active+clean; 14540 bytes data, 26388 MB used, 50148 MB / 80631 MB avail
[1:41] <skeenan> mdsmap e7: 0/0/1 up
[1:42] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[1:44] <skeenan> trying to mount with fuse just sits there and never returns too???
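
The "mdsmap e7: 0/0/1 up" line in the paste above means no MDS daemon is actually running, which is a common cause of "mount error 5 = Input/output error" from the kernel client and of ceph-fuse hanging. A rough way to check and fix that, assuming the MDS was meant to run on ceph-mds1 (the host named in the monmap) and a sysvinit-style install of that era:

    # "0/0/1 up" in the mdsmap means zero MDS daemons are up.
    ceph mds stat

    # If the daemon was deployed but isn't running, start it on the MDS host.
    service ceph start mds.ceph-mds1

    # If no MDS was ever created, add one from the ceph-deploy admin node.
    ceph-deploy mds create ceph-mds1
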
[1:47] * mdxi (~mdxi@50-199-109-154-static.hfc.comcastbusiness.net) Quit (Quit: leaving)
[1:49] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[1:50] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[1:52] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[1:52] * zerick (~eocrospom@190.114.248.34) Quit (Remote host closed the connection)
[1:52] * mdxi (~mdxi@50-199-109-154-static.hfc.comcastbusiness.net) has joined #ceph
[1:58] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) has joined #ceph
[2:00] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[2:00] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[2:03] <bitblt> is it normal that if i have a mon down ceph status hangs?
[2:03] <bitblt> or when i first build a cluster and have multiple mons specified in ceph.conf already i get segfaults
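
On the "ceph status hangs" question: the status command has to reach a monitor that is part of a quorum, so with the only mon (or a majority of mons) down it will simply block. A hedged sketch of how to poke at a monitor locally in that state; the mon id "a" and the socket path are the usual defaults, not something from this log:

    # The admin socket answers on the mon host even when there is no quorum.
    ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok mon_status

    # Once a majority of monitors is back, quorum can be checked cluster-wide.
    ceph quorum_status
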
[2:08] * Cube (~Cube@12.248.40.138) Quit (Remote host closed the connection)
[2:08] * Cube (~Cube@12.248.40.138) has joined #ceph
[2:08] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[2:13] * Cube (~Cube@12.248.40.138) Quit (Quit: Leaving.)
[2:14] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[2:15] * bandrus (~Adium@75.5.250.197) Quit (Quit: Leaving.)
[2:16] * funnel (~funnel@0001c7d4.user.oftc.net) Quit (Remote host closed the connection)
[2:16] * funnel (~funnel@0001c7d4.user.oftc.net) has joined #ceph
[2:19] * jeff-YF (~jeffyf@67.23.117.122) Quit (Quit: jeff-YF)
[2:19] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[2:23] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[2:28] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[2:34] * dtalton2 (~don@128-107-239-234.cisco.com) has joined #ceph
[2:34] * bitblt (~don@128-107-239-233.cisco.com) Quit (Ping timeout: 480 seconds)
[2:36] * bdonnahue (~James@24-148-64-18.c3-0.mart-ubr2.chi-mart.il.cable.rcn.com) has joined #ceph
[2:37] <bdonnahue> hey guys. I've got some VMs hosted on ceph. they keep complaining that the filesystem has become read-only. Ceph health is ok though
[2:37] <bdonnahue> anyone know what this could be from?
[2:38] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:44] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) has joined #ceph
[2:45] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[2:46] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[2:49] * andrein (~andrein@188.27.121.224) Quit (Quit: Konversation terminated!)
[2:49] * andrein (~andrein@188.27.121.224) has joined #ceph
[2:53] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[2:54] <janos_> usually you'll see that behavior as a defense mechanism after the disks have been disconnected from a guest
[2:54] <janos_> left and came back sort of thing
[2:54] <janos_> i recall that with weepy eyes when one day i accidentally disconnected all iscsi sessions from the wrong machine
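
For completeness: once the underlying RBD/iSCSI path is healthy again, a guest that has flipped its filesystem read-only like that usually needs a remount or a clean fsck-and-reboot. A sketch run inside the affected guest; the device, mount point and the ext4 "errors=remount-ro" assumption are placeholders:

    dmesg | tail                 # confirm the I/O errors that triggered the read-only flip
    mount -o remount,rw /        # often enough when ext4 was mounted with errors=remount-ro
    # If the remount is refused, schedule an fsck and reboot instead
    # (sysvinit-era distros honor /forcefsck):
    touch /forcefsck && reboot
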
[2:56] * Guest3354 (~me@2a02:2028:6b:a9d0:6267:20ff:fec9:4e40) Quit (Ping timeout: 480 seconds)
[2:57] * joao (~joao@a95-92-33-54.cpe.netcabo.pt) Quit (Quit: Leaving)
[2:58] * bitblt (~don@128-107-239-233.cisco.com) has joined #ceph
[2:59] * danieagle (~Daniel@179.186.127.193.dynamic.adsl.gvt.net.br) Quit (Quit: Muito Obrigado por Tudo! :-))
[3:03] * dtalton2 (~don@128-107-239-234.cisco.com) Quit (Read error: Operation timed out)
[3:07] * Cube (~Cube@66-87-65-221.pools.spcsdns.net) has joined #ceph
[3:13] * BillK (~BillK-OFT@124-149-88-171.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[3:13] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[3:16] * andrein (~andrein@188.27.121.224) Quit (Ping timeout: 480 seconds)
[3:19] * erkules (~erkules@port-92-193-121-243.dynamic.qsc.de) has joined #ceph
[3:26] * erkules_ (~erkules@port-92-193-70-78.dynamic.qsc.de) Quit (Ping timeout: 480 seconds)
[3:30] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[3:34] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[3:36] * hasues (~hazuez@12.216.44.38) Quit (Remote host closed the connection)
[3:39] * alram (~alram@cpe-76-167-50-51.socal.res.rr.com) has joined #ceph
[3:39] * alram (~alram@cpe-76-167-50-51.socal.res.rr.com) Quit ()
[3:45] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[3:46] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[3:46] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) Quit (Quit: valeech)
[3:50] * cerealkillr (~neil@c-24-5-71-242.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:53] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[4:07] * bitblt (~don@128-107-239-233.cisco.com) Quit (Quit: Leaving)
[4:19] * dalgaaf (uid15138@id-15138.ealing.irccloud.com) Quit (Quit: Connection closed for inactivity)
[4:27] * LeaChim (~LeaChim@host86-166-182-74.range86-166.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[4:40] * fdmanana_ (~fdmanana@bl10-140-160.dsl.telepac.pt) has joined #ceph
[4:43] * cerealkillr (~neil@c-24-5-71-242.hsd1.ca.comcast.net) has joined #ceph
[4:47] * fdmanana (~fdmanana@bl10-252-34.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[4:48] * rustam (~rustam@90.208.236.210) has joined #ceph
[4:49] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[4:51] * rustam (~rustam@90.208.236.210) Quit ()
[4:52] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) Quit (Quit: Leaving.)
[4:52] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) has joined #ceph
[4:57] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:00] * hasues (~hazuez@12.216.44.38) has joined #ceph
[5:17] * Vacum (~vovo@i59F79973.versanet.de) has joined #ceph
[5:17] * Vacum_ (~vovo@88.130.223.207) Quit (Read error: Connection reset by peer)
[5:24] * cerealkillr (~neil@c-24-5-71-242.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:49] * sarob (~sarob@2601:9:7080:13a:d0b7:8e9:2593:32f7) has joined #ceph
[5:53] * JCL1 (~JCL@2601:9:5980:39b:d040:8876:5c07:6db3) Quit (Quit: Leaving.)
[5:54] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[5:54] * ChanServ sets mode +v andreask
[5:57] * sarob (~sarob@2601:9:7080:13a:d0b7:8e9:2593:32f7) Quit (Ping timeout: 480 seconds)
[6:02] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has left #ceph
[6:06] * Boltsky (~textual@office.deviantart.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[6:26] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[6:41] * Boltsky (~textual@cpe-198-72-138-106.socal.res.rr.com) has joined #ceph
[6:53] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[6:56] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[6:59] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[7:01] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[7:02] * piezo (~piezo@108-88-37-13.lightspeed.iplsin.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[7:08] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Ping timeout: 480 seconds)
[7:21] * Meistarin (sid19523@0001c3c8.user.oftc.net) Quit (Ping timeout: 480 seconds)
[7:21] * kwmiebach (sid16855@id-16855.charlton.irccloud.com) Quit (Ping timeout: 480 seconds)
[7:32] * kwmiebach (sid16855@charlton.irccloud.com) has joined #ceph
[7:38] * wrale__ (~wrale@cpe-107-9-20-3.woh.res.rr.com) Quit (Quit: Leaving...)
[7:44] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[7:44] * elyograg (~oftc-webi@client175.mainstreamdata.com) has joined #ceph
[7:49] * hasues (~hazuez@12.216.44.38) Quit (Quit: Leaving.)
[7:50] * Cube (~Cube@66-87-65-221.pools.spcsdns.net) Quit (Quit: Leaving.)
[7:52] * Cube (~Cube@66-87-65-221.pools.spcsdns.net) has joined #ceph
[7:54] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[7:55] * Cube (~Cube@66-87-65-221.pools.spcsdns.net) Quit ()
[8:02] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:12] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) has joined #ceph
[8:18] * Cube (~Cube@66-87-65-221.pools.spcsdns.net) has joined #ceph
[8:19] * Cube (~Cube@66-87-65-221.pools.spcsdns.net) Quit ()
[8:49] * Cube (~Cube@66-87-65-221.pools.spcsdns.net) has joined #ceph
[8:57] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[9:01] * Cube (~Cube@66-87-65-221.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[9:02] * rendar (~s@host94-191-dynamic.11-87-r.retail.telecomitalia.it) has joined #ceph
[9:04] * BillK (~BillK-OFT@124-149-88-171.dyn.iinet.net.au) has joined #ceph
[9:05] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[9:05] * Psi-Jack (~Psi-Jack@psi-jack.user.oftc.net) Quit (Remote host closed the connection)
[9:07] * Psi-Jack (~Psi-Jack@psi-jack.user.oftc.net) has joined #ceph
[9:08] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) Quit (Ping timeout: 480 seconds)
[9:29] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[9:31] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) has joined #ceph
[9:33] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) Quit ()
[9:53] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[9:57] * sarob (~sarob@2601:9:7080:13a:30a7:6800:f7a6:6ce9) has joined #ceph
[10:06] * sarob (~sarob@2601:9:7080:13a:30a7:6800:f7a6:6ce9) Quit (Ping timeout: 480 seconds)
[10:07] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[10:11] * BillK (~BillK-OFT@124-149-88-171.dyn.iinet.net.au) Quit (Quit: ZNC - http://znc.in)
[10:14] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[10:27] * rotbeard (~redbeard@2a02:908:df10:6f00:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[10:28] * simulx (~simulx@66-194-114-178.static.twtelecom.net) has joined #ceph
[10:28] * mnash_ (~chatzilla@vpn.expressionanalysis.com) has joined #ceph
[10:32] * mnash (~chatzilla@66-194-114-178.static.twtelecom.net) Quit (Read error: Operation timed out)
[10:32] * mnash_ is now known as mnash
[10:32] * simulx2 (~simulx@vpn.expressionanalysis.com) Quit (Ping timeout: 480 seconds)
[10:37] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) Quit (Quit: Siva)
[10:45] * BillK (~BillK-OFT@124-149-88-171.dyn.iinet.net.au) has joined #ceph
[10:56] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[11:01] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[11:09] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[11:15] * fedgoat (~fedgoat@cpe-68-203-10-64.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[11:16] * fedgoat (~fedgoat@cpe-68-203-10-64.austin.res.rr.com) has joined #ceph
[11:17] * Psi-Jack (~Psi-Jack@psi-jack.user.oftc.net) Quit (Remote host closed the connection)
[11:20] * Psi-Jack (~Psi-Jack@psi-jack.user.oftc.net) has joined #ceph
[11:35] * steki (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[11:46] * The_Bishop_ (~bishop@g229096164.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[11:48] * Meistarin (sid19523@0001c3c8.user.oftc.net) has joined #ceph
[11:57] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[12:00] * The_Bishop_ (~bishop@g229096164.adsl.alicedsl.de) has joined #ceph
[12:01] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[12:09] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 481 seconds)
[12:11] * sarob (~sarob@2601:9:7080:13a:6164:4ae4:530b:902d) has joined #ceph
[12:12] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) Quit (Quit: leaving)
[12:19] * sarob (~sarob@2601:9:7080:13a:6164:4ae4:530b:902d) Quit (Ping timeout: 480 seconds)
[12:22] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) has joined #ceph
[12:40] * LeaChim (~LeaChim@host86-166-182-74.range86-166.btcentralplus.com) has joined #ceph
[12:54] * TMM (~hp@c97185.upc-c.chello.nl) has joined #ceph
[12:58] * rendar (~s@host94-191-dynamic.11-87-r.retail.telecomitalia.it) Quit ()
[13:00] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) has joined #ceph
[13:02] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[13:04] * rendar (~s@host94-191-dynamic.11-87-r.retail.telecomitalia.it) has joined #ceph
[13:06] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[13:11] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) Quit (Ping timeout: 480 seconds)
[13:15] * steki (~steki@fo-d-130.180.254.37.targo.rs) Quit (Ping timeout: 480 seconds)
[13:16] * ChrisNBlum (~Adium@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[13:33] * TMM (~hp@c97185.upc-c.chello.nl) Quit (Quit: Ex-Chat)
[13:47] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[13:49] * BillK (~BillK-OFT@124-149-88-171.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[13:51] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[13:53] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) has joined #ceph
[14:01] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) Quit (Quit: Computer has gone to sleep.)
[14:03] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[14:04] * Sysadmin88 (~IceChat77@176.254.32.31) Quit (Quit: If your not living on the edge, you're taking up too much space)
[14:07] * garphy`aw is now known as garphy
[14:09] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[14:16] * thomnico (~thomnico@2a01:e35:8b41:120:9594:e292:4ecb:f444) has joined #ceph
[14:20] * thomnico (~thomnico@2a01:e35:8b41:120:9594:e292:4ecb:f444) Quit ()
[14:44] * garphy is now known as garphy`aw
[14:53] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) has joined #ceph
[15:06] * sarob (~sarob@2601:9:7080:13a:352f:20c6:2b91:60cc) has joined #ceph
[15:14] * sarob (~sarob@2601:9:7080:13a:352f:20c6:2b91:60cc) Quit (Ping timeout: 480 seconds)
[15:24] * kevincox (~kevincox@CPE68b6fc405da3-CM68b6fc405da0.cpe.net.cable.rogers.com) has joined #ceph
[15:32] * kevincox (~kevincox@CPE68b6fc405da3-CM68b6fc405da0.cpe.net.cable.rogers.com) Quit (Quit: Leaving)
[15:35] * diegows (~diegows@190.190.5.238) has joined #ceph
[15:41] * kevincox (~kevincox@4.s.kevincox.ca) has joined #ceph
[15:42] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[15:44] <kevincox> Hey guys, I was thinking that adding wireshark decoding for the ceph protocol would be a fun GSOC project and was just wondering what the current state is. I see https://github.com/ceph/ceph/tree/master/wireshark in the tree and am wondering if anyone knows where it stands ATM.
[15:46] * SpamapS (~clint@xencbyrum2.srihosting.com) Quit (Remote host closed the connection)
[15:46] * SpamapS (~clint@xencbyrum2.srihosting.com) has joined #ceph
[15:58] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[16:07] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[16:11] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) has joined #ceph
[16:11] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[16:15] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:17] * jeff-YF (~jeffyf@pool-173-66-76-78.washdc.fios.verizon.net) has joined #ceph
[16:17] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[16:24] * jeff-YF_ (~jeffyf@67.23.123.228) has joined #ceph
[16:29] * jeff-YF (~jeffyf@pool-173-66-76-78.washdc.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[16:29] * jeff-YF_ is now known as jeff-YF
[16:33] * zackc (~zackc@0001ba60.user.oftc.net) has joined #ceph
[16:48] * fedgoat (~fedgoat@cpe-68-203-10-64.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[16:55] * nuved (~novid@81.31.238.20) has joined #ceph
[16:57] <nuved> hello all, how can I deploy a btrfs osd with the ceph-deploy tool?
[16:57] <nuved> i get an error with this command: ceph-deploy osd prepare --fs-type btrfs ceph-3:/dev/sdb1
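
Without the actual error text it is hard to be sure, but two things that commonly break that command are missing btrfs userspace tools on the OSD host and handing ceph-deploy an existing partition instead of the whole disk it expects to partition itself. A sketch of the usual whole-disk form; the host and device come from the question above, the package names and zap step are assumptions:

    # btrfs userspace tools have to be installed on the OSD host.
    yum install -y btrfs-progs          # Debian/Ubuntu: apt-get install -y btrfs-tools

    # Wipe and prepare the whole device; ceph-disk creates the data and
    # journal partitions itself.
    ceph-deploy disk zap ceph-3:/dev/sdb
    ceph-deploy osd prepare --fs-type btrfs ceph-3:/dev/sdb
    ceph-deploy osd activate ceph-3:/dev/sdb1
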
[17:05] * madkiss (~madkiss@bzq-218-11-179.cablep.bezeqint.net) has joined #ceph
[17:06] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[17:09] * flaxy (~afx@78.130.174.164) Quit (Quit: WeeChat 0.4.2)
[17:10] * sarob (~sarob@2601:9:7080:13a:8d6:fb2d:319d:894) has joined #ceph
[17:11] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) has joined #ceph
[17:14] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[17:18] <elyograg> i've been banging my head against the docs for getting a ceph fs started. After more missteps than I want to admit to, I finally reached the point where I have a monitor node and two OSD nodes, and the 'ceph-deploy mon create-initial' command worked.
[17:18] * sarob (~sarob@2601:9:7080:13a:8d6:fb2d:319d:894) Quit (Ping timeout: 480 seconds)
[17:20] <elyograg> the next steps and the reference that it points to don't seem to cover having a btrfs filesystem ready to go at /dev/md0, mounted as /storage.
[17:21] * hasues (~hazuez@12.216.44.38) has joined #ceph
[17:21] <elyograg> If my testing proves ceph can fill our needs, then when I do this for real, I *will* have actual disk devices I can point it at - /dev/sdb, /dev/sdc ... but right now it's /dev/md0.
[17:23] <darkfader> elyograg: you might save some head-banging if you do the tests during weekdays, there's many more people here
[17:24] <darkfader> i'm not sure what the easiest way is for your test. if you wrote the ceph.conf manually you can more easily adjust the devices but then you get a lot of other things to do manually
[17:24] <elyograg> true. I just installed the systems friday evening, now it's Saturday morning. We're in a time crunch, I don't have the luxury of waiting.
[17:25] <elyograg> I guess I can just point it at my mounted directory, worry about "best practices" for production.
[17:25] <darkfader> yes for the first test
[17:27] <darkfader> skip anything that keeps you from getting it up and running - unless it's important for the things you actually want to test
[17:27] <darkfader> if you really want to have it mount the /dev/mdX please pastebin some of the errors you get
[17:28] * jwillem (~jwillem@thuiscomputer.xs4all.nl) Quit ()
[17:28] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[17:30] <elyograg> I didn't try mounting it. When I ran the 'disk list' command it wasn't listed.
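
The 'disk list' output generally covers plain disks and their partitions, so an md device that is already formatted and mounted at /storage may well not appear there, consistent with what was seen. For a test like this, pointing ceph-deploy at a directory on that filesystem is the usual workaround; "osdhost" is a placeholder hostname, the /storage path comes from the discussion:

    # On the OSD host: give ceph a directory on the existing btrfs mount.
    mkdir -p /storage/osd0

    # From the ceph-deploy admin node: prepare and activate the directory OSD.
    ceph-deploy osd prepare  osdhost:/storage/osd0
    ceph-deploy osd activate osdhost:/storage/osd0
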
[17:31] <elyograg> i have found an anomaly - the docs say the health check should report 'active + clean' ... but it actually says HEALTH_OK
[17:33] <elyograg> reading further ... says it works with only one metadata server. what happens if that server dies?
[17:40] <darkfader> if you're using the ceph fs filesystem layer (and only then, i think, is an mds really used) you _lose_ access
[17:40] <darkfader> you want to run multiple mds with the filesystem
[17:41] <darkfader> for testing i usually had 3x mon 3x mds + osds
[17:41] * jeff-YF (~jeffyf@67.23.123.228) Quit (Quit: jeff-YF)
[17:41] <darkfader> you can put them on osd nodes while doing the tests
[17:42] <darkfader> for prod i would have the mon's on separate servers
[17:42] <darkfader> you can also manually add more mds / mon later
[17:43] * The_Bishop_ (~bishop@g229096164.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[17:43] <elyograg> jumping ahead in concepts: Do clients need to talk to all servers, or is there a role that provides a network access point?
[17:46] <darkfader> all servers, you can have a separate backend network for osd<->osd comms to keep the sync traffic out of the "front" net
[17:47] <darkfader> the client will ask a mon for a list of which mds / osd to talk to
[17:47] <darkfader> and every metadata access goes via the mds, and data is "just" on the osds
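
The front/back split described above is two settings in the [global] section of ceph.conf, applied on every node before the OSDs start; the subnets below are placeholders:

    [global]
        # "front" network: clients and monitors reach the OSDs here
        public network = 192.168.10.0/24
        # "back" network: OSD<->OSD replication and recovery traffic
        cluster network = 192.168.20.0/24
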
[17:47] <elyograg> background: we've got a gluster deployment, but we keep having tons of trouble with it misbehaving. One misbehavior caused us to entirely lose 91 files. We figured out the bug that caused it and have upgraded ... but we continue to have errors and weird behavior. Ceph didn't even make it to testing on our last evaluation because it could not provide stable unique IDs for NFS sharing, and we have Solaris clients we won't be able to eliminate
[17:48] <darkfader> if you reexport nfs you can just (and should) set fsids
[17:48] <darkfader> wouldn't that be enough?
[17:49] <darkfader> i think it's less pain to really really manage those than rely on nfs handles set by whatever the filesystem said
[17:49] <elyograg> it's my understanding that emperor fixes the problem with NFS. It's been over a year since I did that last evaluation, though.
[17:49] * Underbyte (~jerrad@pat-global.macpractice.net) has joined #ceph
[17:50] <darkfader> might be true, i'm just buying hardware for a new lab, so you know more than me there
[17:50] <darkfader> still, i think with linux+nfs it's better to go nazi and control all of it
[17:51] <darkfader> especially if you would like a soft mount here and there
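
Pinning the fsids darkfader is talking about happens per export on the NFS gateway that re-exports the ceph fs mount; a sketch with placeholder path, client subnet and fsid value:

    # Append an export with a fixed fsid so NFS file handles stay stable
    # across remounts and reboots, then reload the export table.
    echo '/mnt/cephfs  10.0.0.0/24(rw,sync,fsid=20,no_subtree_check)' >> /etc/exports
    exportfs -ra
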
[17:51] * hasues (~hazuez@12.216.44.38) Quit (Remote host closed the connection)
[17:51] <darkfader> do you have some link where you read about the nfs fix?
[17:52] <elyograg> I asked in here just before emperor was released. I'll see if I can find the other info i read after it was released.
[17:53] <darkfader> thx
[17:53] <darkfader> i'll go do some paperwork now so the hw gets paid for hehe
[17:53] <elyograg> https://ceph.com/docs/master/release-notes/#upgrading
[17:54] * The_Bishop_ (~bishop@g229096164.adsl.alicedsl.de) has joined #ceph
[17:54] <elyograg> There was a note on the ceph page about NFS re-exporting not working very well because the id numbers could change. Can't find it now, but if the problem has been fixed, I'm not surprised.
[17:57] <darkfader> generally i'm sure there's many things in cephfs that could still get in the way. but i'm as confident you'll see them disappear over the course of the year
[17:57] <darkfader> and, well, i haven't used gluster since long
[17:57] <darkfader> but ceph has the ability to get more stable
[17:58] * hasues (~hazuez@12.216.44.38) has joined #ceph
[17:58] <darkfader> the other thing is never able to overcome its architecture
[18:04] * flaxy (~afx@78.130.174.164) has joined #ceph
[18:04] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) Quit (Quit: Computer has gone to sleep.)
[18:09] <darkfader> libcephfs: API changes to better support NFS reexport via Ganesha (Matt Benjamin, Adam Emerson, Andrey Kuznetsov, Casey Bodley, David Zafman)
[18:09] <darkfader> maybe this
[18:10] <darkfader> i had made a mental note to look at ganesha once before
[18:10] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[18:10] <darkfader> haha awesome, 9fs support
[18:10] <elyograg> ok, so what I'm understanding is that for clients where I cannot drop a new NIC, or the Solaris clients, I continue with NFS. Just like the other filesystem that we don't like to talk about, native access will require the additional NIC.
[18:10] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[18:11] * rotbeard (~redbeard@2a02:908:df10:6f00:76f0:6dff:fe3b:994d) has joined #ceph
[18:11] <elyograg> right now everything is via NFS - both with our older solution and our mangled attempt to migrate.
[18:12] <darkfader> elyograg: you don't need to drop a new nic, you can route!
[18:12] <darkfader> the rest: ack
[18:12] * piezo (~piezo@108-88-37-13.lightspeed.iplsin.sbcglobal.net) has joined #ceph
[18:12] <elyograg> if it were simple routing, yes. all the routing here is via Cisco firewalls, though -- the amount of traffic involved would *kill* them.
[18:13] <darkfader> oookay hehe
[18:15] <elyograg> afk. need to go kick one of my testbed servers that I can't reach remotely, buy some milk, and visit http://www.bombayhouse.com/
[18:16] <darkfader> yummy, see you
[18:18] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) has joined #ceph
[18:19] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:20] * cerealkillr (~neil@c-24-5-71-242.hsd1.ca.comcast.net) has joined #ceph
[18:22] * analbeard (~shw@host86-155-197-65.range86-155.btcentralplus.com) has joined #ceph
[18:29] * cerealkillr (~neil@c-24-5-71-242.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[18:38] * ChrisNBlum (~Adium@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Ping timeout: 480 seconds)
[18:51] * elyograg (~oftc-webi@client175.mainstreamdata.com) Quit (Remote host closed the connection)
[18:55] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[19:00] * analbeard (~shw@host86-155-197-65.range86-155.btcentralplus.com) Quit (Quit: Leaving.)
[19:05] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[19:12] * JCL (~JCL@c-24-23-166-139.hsd1.ca.comcast.net) has joined #ceph
[19:14] * sarob (~sarob@2601:9:7080:13a:6c6d:c9e0:5690:61c) has joined #ceph
[19:14] * Meistarin (sid19523@0001c3c8.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:19] * andrein (~andrein@46.108.33.138) has joined #ceph
[19:22] * sarob (~sarob@2601:9:7080:13a:6c6d:c9e0:5690:61c) Quit (Ping timeout: 480 seconds)
[19:26] * nuved (~novid@81.31.238.20) Quit (Read error: Operation timed out)
[19:34] * Meistarin (sid19523@0001c3c8.user.oftc.net) has joined #ceph
[19:39] * pvsa (~pvsa@pd95c6a80.dip0.t-ipconnect.de) has joined #ceph
[19:40] * pvsa (~pvsa@pd95c6a80.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[19:42] * oblu (~o@62.109.134.112) Quit (Quit: ~)
[19:49] * ChrisNBlum (~Adium@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[20:02] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) Quit (Quit: Computer has gone to sleep.)
[20:06] * andrein_ (~andrein@188.27.121.224) has joined #ceph
[20:09] * andrein (~andrein@46.108.33.138) Quit (Read error: Operation timed out)
[20:15] * sarob (~sarob@2601:9:7080:13a:e9a6:8923:5375:eb98) has joined #ceph
[20:23] * sarob (~sarob@2601:9:7080:13a:e9a6:8923:5375:eb98) Quit (Ping timeout: 480 seconds)
[20:33] * Sysadmin88 (~IceChat77@176.254.32.31) has joined #ceph
[20:34] * mtanski (~mtanski@cpe-72-229-51-156.nyc.res.rr.com) has joined #ceph
[20:35] * oblu (~o@62.109.134.112) has joined #ceph
[20:43] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[20:45] * mtanski (~mtanski@cpe-72-229-51-156.nyc.res.rr.com) Quit (Quit: mtanski)
[20:57] * arye (~arye@207-38-181-177.c3-0.wsd-ubr1.qens-wsd.ny.cable.rcn.com) has joined #ceph
[20:57] * arye (~arye@207-38-181-177.c3-0.wsd-ubr1.qens-wsd.ny.cable.rcn.com) has left #ceph
[20:57] * elyograg (~oftc-webi@albus.elyograg.org) has joined #ceph
[21:05] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) has joined #ceph
[21:09] * davidzlap1 (~Adium@cpe-23-242-31-175.socal.res.rr.com) Quit (Quit: Leaving.)
[21:16] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[21:17] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[21:18] * elyograg (~oftc-webi@albus.elyograg.org) Quit (Remote host closed the connection)
[21:47] * ChrisNBlum1 (~Adium@dhcp-ip-152.dorf.rwth-aachen.de) has joined #ceph
[21:48] * ChrisNBlum (~Adium@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Ping timeout: 480 seconds)
[21:49] * loicd reading http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/
[21:51] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) has joined #ceph
[21:52] * andrein_ (~andrein@188.27.121.224) Quit (Ping timeout: 480 seconds)
[21:53] * gregsfortytwo (~Adium@2607:f298:a:607:f85f:560e:6e7c:86b5) Quit (Quit: Leaving.)
[21:57] * sroy (~sroy@96.127.230.203) has joined #ceph
[22:00] * gregsfortytwo (~Adium@2607:f298:a:607:f813:fe85:f9ed:a404) has joined #ceph
[22:02] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) has joined #ceph
[22:02] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[22:03] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[22:10] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[22:10] * sarob_ (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[22:11] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[22:18] * sarob_ (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[22:24] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) has joined #ceph
[22:33] * sarob (~sarob@2601:9:7080:13a:408d:9111:e92b:f2da) has joined #ceph
[22:34] * sroy (~sroy@96.127.230.203) Quit (Ping timeout: 480 seconds)
[22:34] * diegows (~diegows@190.190.5.238) has joined #ceph
[22:36] * sroy (~sroy@96.127.230.203) has joined #ceph
[22:42] * sarob (~sarob@2601:9:7080:13a:408d:9111:e92b:f2da) Quit (Remote host closed the connection)
[22:42] * sarob (~sarob@2601:9:7080:13a:408d:9111:e92b:f2da) has joined #ceph
[22:43] * mtanski (~mtanski@cpe-72-229-51-156.nyc.res.rr.com) has joined #ceph
[22:43] * mtanski (~mtanski@cpe-72-229-51-156.nyc.res.rr.com) Quit ()
[22:44] * sarob (~sarob@2601:9:7080:13a:408d:9111:e92b:f2da) Quit (Remote host closed the connection)
[22:44] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[22:51] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[22:52] * The_Bishop__ (~bishop@g229071168.adsl.alicedsl.de) has joined #ceph
[22:52] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[22:59] * The_Bishop_ (~bishop@g229096164.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[23:02] * sroy (~sroy@96.127.230.203) Quit (Quit: Quitte)
[23:15] * sarob (~sarob@2601:9:7080:13a:f123:cb3c:af19:5889) has joined #ceph
[23:20] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[23:28] * flaxy (~afx@78.130.174.164) Quit (Quit: WeeChat 0.4.3)
[23:29] * guerby (~guerby@ip165-ipv6.tetaneutral.net) Quit (Ping timeout: 480 seconds)
[23:31] * sarob (~sarob@2601:9:7080:13a:f123:cb3c:af19:5889) Quit (Ping timeout: 480 seconds)
[23:31] * sputnik1_ (~sputnik13@client64-254.sdsc.edu) has joined #ceph
[23:33] * flaxy (~afx@78.130.174.164) has joined #ceph
[23:34] * guerby (~guerby@ip165-ipv6.tetaneutral.net) has joined #ceph
[23:36] * sputnik1_ (~sputnik13@client64-254.sdsc.edu) Quit ()
[23:38] * sputnik1_ (~sputnik13@client64-254.sdsc.edu) has joined #ceph
[23:40] * sputnik1_ (~sputnik13@client64-254.sdsc.edu) Quit ()
[23:41] * sputnik1_ (~sputnik13@client64-254.sdsc.edu) has joined #ceph
[23:55] * sputnik1_ (~sputnik13@client64-254.sdsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[23:55] * sputnik1_ (~sputnik13@client64-254.sdsc.edu) has joined #ceph
[23:57] * elyograg (~oftc-webi@albus.elyograg.org) has joined #ceph
[23:59] <elyograg> wondering about the nice big red warning on the ceph fs docs that says "Important : Ceph FS is currently not recommended for production data." When is that likely to change?

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.