#ceph IRC Log

IRC Log for 2016-05-17

Timestamps are in GMT/BST.

[14:54] -kinetic.oftc.net- *** Looking up your hostname...
[14:54] -kinetic.oftc.net- *** Checking Ident
[14:54] -kinetic.oftc.net- *** Found your hostname
[14:55] -kinetic.oftc.net- *** No Ident response
[14:55] * CephLogBot (~PircBot@rockbox.widodh.nl) has joined #ceph
[14:55] * Topic is 'http://ceph.com/get || dev channel #ceph-devel || test lab channel #sepia'
[14:55] * Set by ChanServ!services@services.oftc.net on Sun Apr 17 08:41:16 CEST 2016
[14:56] * mattbenjamin1 (~mbenjamin@121.244.87.118) Quit (Ping timeout: 480 seconds)
[14:56] * kefu|afk (~kefu@114.92.122.74) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[14:57] * wes_dillingham (~wes_dilli@140.247.242.44) has joined #ceph
[14:58] * Kayla (~SquallSee@4MJAAE6CJ.tor-irc.dnsbl.oftc.net) Quit ()
[14:58] * TomyLobo (~JohnO@h2343030.stratoserver.net) has joined #ceph
[14:59] * rraja_ (~rraja@121.244.87.118) has joined #ceph
[14:59] * rraja__ (~rraja@121.244.87.118) has joined #ceph
[15:00] * shaunm (~shaunm@74.83.215.100) has joined #ceph
[15:02] * mhackett (~mhack@nat-pool-bos-u.redhat.com) has joined #ceph
[15:03] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[15:04] * rraja (~rraja@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:07] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:07] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[15:15] * bara (~bara@213.175.37.12) has joined #ceph
[15:15] <Be-El> TMM: does putting the PG data on the other OSDs solve the situation?
[15:17] * atheism (~atheism@124.126.235.14) has joined #ceph
[15:17] <TMM> Be-El, I've just imported the same pg data on all osds that are either being probed or supposed to be part of the pg
[15:17] <TMM> still incomplete
[15:17] <TMM> and now all osds that are even tangentially related to this pg have the same data
[15:18] * jordanP (~jordan@92.103.184.178) has joined #ceph
[15:19] * kefu (~kefu@114.92.122.74) has joined #ceph
[15:19] <TMM> I have no idea why it still thinks it is incomplete
[15:19] <TMM> I think it is because it still lists those probing osds
[15:20] <Be-El> what's the current pg query output?
[15:20] <TMM> I don't think anything changed, but I'll pastebin it
[15:21] <TMM> Be-El, http://paste.debian.net/686855/
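
For anyone reading along later (the paste has since expired), the query in question is run against the PG id that comes up later in the log, and the parts worth reading for an incomplete PG are roughly these; the field names are from memory of hammer-era output, so treat this as a sketch rather than exact output:

    # dump the full peering/recovery state of the problem PG
    ceph pg 54.3e9 query

    # sections of the JSON output that matter here:
    #   "state"           - e.g. incomplete / peered
    #   "up" / "acting"   - the OSDs CRUSH currently maps the PG to
    #   "peer_info"       - what each peer OSD knows about the PG
    #   "recovery_state"  - includes "probing_osds" and
    #                       "down_osds_we_would_probe"
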
[15:21] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[15:24] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[15:24] * valeech (~valeech@pool-108-44-162-111.clppva.fios.verizon.net) Quit (Quit: valeech)
[15:25] <Be-El> TMM: the interesting part is the 'peered' state flag
[15:26] <Be-El> TMM: which ceph version do you use
[15:26] <Be-El> ?
[15:27] * johnavp1989 (~jpetrini@8.39.115.8) Quit ()
[15:27] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[15:27] <TMM> 0.94.6
[15:27] * aj__ (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[15:28] * johnavp1989 (~jpetrini@8.39.115.8) has left #ceph
[15:28] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[15:28] * TomyLobo (~JohnO@4MJAAE6EA.tor-irc.dnsbl.oftc.net) Quit ()
[15:28] * Jourei (~mollstam@193.90.12.86) has joined #ceph
[15:28] * johnavp1989 (~jpetrini@8.39.115.8) Quit ()
[15:28] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[15:28] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[15:29] * swami2 (~swami@49.32.0.252) Quit (Quit: Leaving.)
[15:31] <Be-El> TMM: my last idea: restart the mons one by one, and restart the primary osd daemon afterwards
[15:31] <Be-El> TMM: does the log on the primary OSD contain any hints why the PG does not leave the peered state and starts backfilling?
[15:32] <TMM> no
[15:33] <TMM> no errors :-/
[15:33] <TMM> I'll restart the mons
[15:33] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[15:34] <kiranos> I'm using ceph hammer and centos 7
[15:34] <kiranos> I'm wondering where the logic is for automount osd disk at startup
[15:34] <kiranos> its not in fstab
[15:34] * vbellur (~vijay@122.178.206.131) has joined #ceph
[15:34] <Be-El> kiranos: it's via udev rules
[15:35] <kiranos> Be-El: thanks do you know where the files are?
[15:35] <kiranos> can't find it in rules.d
[15:35] * bara_ (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[15:35] <TMM> Be-El, that didn't do anything
[15:35] <Be-El> kiranos: /lib/udev/rules.d/
[15:35] <TMM> just went from stale+incomplete to incomplete again
[15:36] <TheSov> how is everyone today!
[15:36] <kiranos> Be-El: thanks!
[15:36] <liiwi> better check /etc/udev/rules.d also
[15:36] <kiranos> liiwi: thanks but nothing in /etc/udev/rules.d
[15:37] <Be-El> TMM: does a manual scrub on that PG work?
[15:37] <TMM> it doesn't seem to start
[15:37] <TMM> ceph pg deep-scrub 54.3e9 you mean, right?
[15:38] <Be-El> TMM: and still no message in the osd log file?
[15:38] * Hemanth (~hkumar_@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:38] <TMM> Be-El, only about slow requests
[15:38] * bara (~bara@213.175.37.12) Quit (Ping timeout: 480 seconds)
[15:38] * mhackett (~mhack@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[15:40] <kiranos> Be-El: so basically if partition guid code is "4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D" run /usr/sbin/ceph-disk-activate /dev/sdX
[15:40] <kiranos> I'm guessing 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D is ceph osd unique
[15:40] * markl (~mark@knm.org) Quit (Ping timeout: 480 seconds)
[15:41] <Be-El> kiranos: it's the partition type guid used for ceph osd partitions
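
The rule Be-El points at ships as something like /lib/udev/rules.d/95-ceph-osd.rules; reconstructed from memory rather than quoted, it boils down to matching that partition type GUID and handing the device to ceph-disk:

    # match any newly added partition whose GPT type GUID is the ceph OSD type,
    # then mount/activate it via ceph-disk
    ACTION=="add", SUBSYSTEM=="block", ENV{DEVTYPE}=="partition", \
      ENV{ID_PART_ENTRY_TYPE}=="4fbd7e29-9d25-41b8-afd0-062c0ceff05d", \
      RUN+="/usr/sbin/ceph-disk-activate /dev/$name"
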
[15:42] <TMM> Be-El, I'm also noticing that osd 32 is not mentioned in the peer list, but it is mentioned as the primary in cursh
[15:42] <TMM> crush*
[15:42] <TMM> but according to the pg query it's a different osd
[15:42] <TheSov> does anyone know if bluestores are stable at this point?
[15:42] <TMM> the pg query seems to think it's supposed to be osd.166
[15:43] <TMM> but health detail thinks it's 32
[15:43] <Be-El> TMM: the primary osd is not listed in the peers list. the first info section refers to the primary one
[15:43] * fsimonce (~simon@87.13.130.124) Quit (Ping timeout: 480 seconds)
[15:43] <TMM> but at the bottom in the recovery section osd.166 is listed as primary
[15:43] * rraja_ (~rraja@121.244.87.118) Quit (Ping timeout: 480 seconds)
[15:43] * rraja__ (~rraja@121.244.87.118) Quit (Ping timeout: 480 seconds)
[15:43] <Be-El> TMM: you can validate this with another active+clean PG. it should show two entries in the peers list for the secondary osds
[15:44] <Be-El> TMM: the list at the bottom contains the former state. and you've probably restarted osd 32 in the past
[15:44] <TMM> ah, yes, because of your suggestion, I get it
[15:44] * fsimonce (~simon@87.13.130.124) has joined #ceph
[15:44] * guampa (~g@0001bfc4.user.oftc.net) Quit (Ping timeout: 480 seconds)
[15:46] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[15:48] * guampa (~g@216.17.110.252) has joined #ceph
[15:49] * aj__ (~aj@fw.gkh-setu.de) has joined #ceph
[15:49] * rendar (~I@host49-87-dynamic.22-79-r.retail.telecomitalia.it) Quit (Read error: Connection reset by peer)
[15:54] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[15:55] * rraja_ (~rraja@121.244.87.117) has joined #ceph
[15:55] * rraja__ (~rraja@121.244.87.117) has joined #ceph
[15:55] * mykola (~Mikolaj@91.225.201.82) Quit (Quit: away)
[15:58] * Jourei (~mollstam@4MJAAE6GL.tor-irc.dnsbl.oftc.net) Quit ()
[15:58] * RaidSoft (~LRWerewol@tor1e1.privacyfoundation.ch) has joined #ceph
[16:01] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[16:01] * fsimonce` (~simon@host243-34-dynamic.250-95-r.retail.telecomitalia.it) has joined #ceph
[16:01] * fsimonce (~simon@87.13.130.124) Quit (Ping timeout: 480 seconds)
[16:01] * mtb` (~mtb`@157.130.171.46) has joined #ceph
[16:02] * dneary (~dneary@nat-pool-bos-t.redhat.com) has joined #ceph
[16:03] * jordanP (~jordan@92.103.184.178) Quit (Quit: Leaving)
[16:04] * med (~medberry@71.74.177.250) has joined #ceph
[16:05] * vbellur (~vijay@122.178.206.131) Quit (Ping timeout: 480 seconds)
[16:10] * jquinn (~jquinn@nat-pool-bos-t.redhat.com) has joined #ceph
[16:19] * debian112 (~bcolbert@24.126.201.64) Quit (Remote host closed the connection)
[16:19] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[16:20] * Tetard_ (~regnauld@x1.x0.dk) has joined #ceph
[16:20] * jowilkin (~jowilkin@nat-pool-rdu-u.redhat.com) has joined #ceph
[16:20] * Tetard (~regnauld@x1.x0.dk) Quit (Read error: Connection reset by peer)
[16:21] <Kdecherf> can i use types in crushmap for rulesets instead of roots? (e.g. add types ssd, spin under host, and use root default in all rulesets but chooseleaf type ssd/spin in ssd and spin rulesets?)
[16:22] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[16:24] * jordanP (~jordan@92.103.184.178) has joined #ceph
[16:24] <Be-El> Kdecherf: the types are a hierarchy and are not associated with values selectable in the crush ruleset
[16:25] <Be-El> Kdecherf: so something like tech=ssd and tech=hdd under a host entry does not work since the labels have to be unique
[16:25] * mtb` (~mtb`@157.130.171.46) Quit (Quit: Textual IRC Client: www.textualapp.com)
[16:26] <Kdecherf> Be-El: i use bucket-types, they don't have to be unique
[16:26] <Kdecherf> (and i use the needed bucket-type in step chooseleaf)
[16:27] <Be-El> Kdecherf: but the bucket types are a hierarchy. i don't know whether you can skip one level in the hierarchy for one type of disk, and another level for another type
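
What usually gets built instead of per-host ssd/spin types is a separate CRUSH root (or separate per-media host buckets) for each disk class, with each ruleset doing "step take" on its own root. A rough excerpt in decompiled crushmap syntax, with all bucket and rule names invented for the example (the node*-ssd/spin host buckets would be defined elsewhere in the map):

    root ssd {
            id -10
            item node1-ssd weight 1.000
            item node2-ssd weight 1.000
    }
    root spin {
            id -20
            item node1-spin weight 4.000
            item node2-spin weight 4.000
    }
    rule ssd_rule {
            ruleset 1
            type replicated
            min_size 1
            max_size 10
            step take ssd
            step chooseleaf firstn 0 type host
            step emit
    }
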
[16:28] * RaidSoft (~LRWerewol@06SAACMLY.tor-irc.dnsbl.oftc.net) Quit ()
[16:28] * Jaska (~Grum@hessel3.torservers.net) has joined #ceph
[16:29] <Kdecherf> hm
[16:31] * yanzheng1 (~zhyan@125.70.22.41) Quit (Quit: This computer has gone to sleep)
[16:33] * vata1 (~vata@207.96.182.162) has joined #ceph
[16:36] <Kdecherf> interesting, it appears to work
[16:39] <Kdecherf> or not, hm
[16:40] * Lokta (~Lokta@carbon.coe.int) has joined #ceph
[16:40] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[16:41] <Lokta> Hi everyone ! trying a few things for a possible migration in the near future
[16:41] <Lokta> is it possible atm to mount different FS via cephFS ?
[16:41] <Be-El> Kdecherf: it works for the bucket type at the lower level, but probably fails for the other type
[16:42] <Lokta> i have set enable_multiple to true but how can i specify which FS i want on mount ? Thank you !
[16:42] * overclk (~quassel@121.244.87.117) Quit (Ping timeout: 480 seconds)
[16:43] * atheism (~atheism@124.126.235.14) Quit (Ping timeout: 480 seconds)
[16:43] <Kdecherf> Be-El: how can i troubleshoot the crush to see if there is any issue? (it seems that the degraded cluster does not recover)
[16:44] <Be-El> Kdecherf: the crushtool binaries has a simulation mode
[16:44] <Be-El> Kdecherf: and afaik it will not work that way
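
The simulation mode mentioned here works on a compiled map and lets you check what a rule actually maps to before trusting it with data, roughly:

    # grab and test the live map: simulate rule 1 with 3 replicas
    ceph osd getcrushmap -o crushmap.bin
    crushtool -i crushmap.bin --test --rule 1 --num-rep 3 --show-mappings
    # --show-bad-mappings instead lists inputs the rule failed to map fully
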
[16:45] * fastlife2042 (~fastlife2@mta.comparegroup.eu) Quit (Remote host closed the connection)
[16:46] * fastlife2042 (~fastlife2@mta.comparegroup.eu) has joined #ceph
[16:47] * ade (~abradshaw@tmo-109-152.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[16:49] * itamarl (~itamarl@194.90.7.244) Quit (Quit: itamarl)
[16:50] <TMM> Be-El, I've sent a message to the ceph-users list
[16:53] * fsimonce` (~simon@host243-34-dynamic.250-95-r.retail.telecomitalia.it) Quit (Ping timeout: 482 seconds)
[16:54] * fastlife2042 (~fastlife2@mta.comparegroup.eu) Quit (Ping timeout: 480 seconds)
[16:58] * ade (~abradshaw@GK-84-46-90-18.routing.wtnet.de) has joined #ceph
[16:58] * Wizeon (~Moriarty@hessel0.torservers.net) has joined #ceph
[16:59] * Jaska (~Grum@4MJAAE6KG.tor-irc.dnsbl.oftc.net) Quit ()
[16:59] * wes_dillingham (~wes_dilli@140.247.242.44) Quit (Quit: wes_dillingham)
[16:59] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[17:03] * wes_dillingham (~wes_dilli@140.247.242.44) has joined #ceph
[17:04] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:04] * linjan (~linjan@86.62.112.22) Quit (Ping timeout: 480 seconds)
[17:06] * fastlife2042 (~fastlife2@84.241.212.26) has joined #ceph
[17:06] * arcimboldo (~antonio@dhcp-y11-zi-s3it-130-60-34-019.uzh.ch) Quit (Quit: Ex-Chat)
[17:06] * fastlife2042 (~fastlife2@84.241.212.26) Quit ()
[17:06] * wushudoin (~wushudoin@2601:646:8202:5ed0:2ab2:bdff:fe0b:a6ee) has joined #ceph
[17:08] * fsimonce (~simon@host128-29-dynamic.250-95-r.retail.telecomitalia.it) has joined #ceph
[17:10] * sudocat (~dibarra@2602:306:8bc7:4c50:7868:47ee:c196:f281) Quit (Ping timeout: 480 seconds)
[17:10] * wushudoin_ (~wushudoin@2601:646:8202:5ed0:2ab2:bdff:fe0b:a6ee) has joined #ceph
[17:11] * neurodrone_ (~neurodron@162.243.191.67) has joined #ceph
[17:15] * wushudoin (~wushudoin@2601:646:8202:5ed0:2ab2:bdff:fe0b:a6ee) Quit (Ping timeout: 480 seconds)
[17:16] * rendar (~I@host61-179-dynamic.27-79-r.retail.telecomitalia.it) has joined #ceph
[17:17] * squizzi (~squizzi@107.13.31.195) has joined #ceph
[17:17] * kefu (~kefu@114.92.122.74) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[17:18] * bara (~bara@213.175.37.12) has joined #ceph
[17:21] * wjw-freebsd (~wjw@176.74.240.1) Quit (Ping timeout: 480 seconds)
[17:21] * Lokta (~Lokta@carbon.coe.int) Quit (Quit: Leaving)
[17:21] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[17:23] * bara_ (~bara@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[17:24] * kefu (~kefu@114.92.122.74) has joined #ceph
[17:24] * overclk (~quassel@117.202.96.167) has joined #ceph
[17:28] * Wizeon (~Moriarty@7V7AAEUO2.tor-irc.dnsbl.oftc.net) Quit ()
[17:28] * zc00gii (~clarjon1@tor-amici-exit.tritn.com) has joined #ceph
[17:29] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) Quit (Quit: Leaving.)
[17:29] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Ping timeout: 480 seconds)
[17:30] * TMM (~hp@185.5.122.2) Quit (Quit: Ex-Chat)
[17:32] * grauzikas (grauzikas@78-56-222-78.static.zebra.lt) has joined #ceph
[17:33] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[17:33] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[17:34] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) has joined #ceph
[17:34] * bvi (~Bastiaan@185.56.32.1) has joined #ceph
[17:35] * aj__ (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[17:35] * bvi (~Bastiaan@185.56.32.1) Quit ()
[17:35] * bvi (~Bastiaan@185.56.32.1) has joined #ceph
[17:38] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[17:38] * bvi (~Bastiaan@185.56.32.1) Quit ()
[17:45] * rendar (~I@host61-179-dynamic.27-79-r.retail.telecomitalia.it) Quit (Read error: Connection reset by peer)
[17:46] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[17:52] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[17:52] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[17:54] * vanham (~vanham@208.76.55.202) has joined #ceph
[17:55] * aj__ (~aj@fw.gkh-setu.de) has joined #ceph
[17:55] * erwan__ (~erwan@46.231.131.178) Quit (Ping timeout: 480 seconds)
[17:58] * jowilkin (~jowilkin@nat-pool-rdu-u.redhat.com) Quit (Ping timeout: 480 seconds)
[17:58] * zc00gii (~clarjon1@4MJAAE6NF.tor-irc.dnsbl.oftc.net) Quit ()
[17:58] * Azerothian______ (~airsoftgl@5.56.133.19) has joined #ceph
[17:59] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[18:10] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[18:10] * jowilkin (~jowilkin@nat-pool-rdu-u.redhat.com) has joined #ceph
[18:11] * rraja__ (~rraja@121.244.87.117) Quit (Quit: Leaving)
[18:11] * rraja_ (~rraja@121.244.87.117) Quit (Quit: Leaving)
[18:13] * linuxkidd (~linuxkidd@38.sub-70-210-245.myvzw.com) has joined #ceph
[18:14] <flaf> Hi. In an Infernalis cluster (little testing cluster which was off for a few days), impossible to restart the OSDs correctly. According to "ceph osd tree" all my 3 OSDs are down. I don't see why. After a restart of the daemon, the daemon is running. In the log, I see no clue. In fact, I have absolutely no idea why my cluster doesn't start => http://paste.alacon.org/41199
[18:14] * guerby (~guerby@ip165.tetaneutral.net) Quit (Ping timeout: 480 seconds)
[18:16] <flaf> It's probably something stupid I have missed.
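
flaf's paste is gone now; for anyone hitting the same thing, the usual first checks when every OSD reports down after a cold start are along these lines (generic commands, nothing specific to flaf's cluster; infernalis on a systemd distro names the units ceph-osd@<id>):

    ceph -s                              # overall health and mon quorum
    ceph osd tree                        # which OSDs the mons consider down
    df -h | grep /var/lib/ceph/osd       # are the OSD data dirs actually mounted?
    systemctl status ceph-osd@0          # on the OSD host, per-daemon status
    tail -n 100 /var/log/ceph/ceph-osd.0.log
    # clock skew between mons and OSDs is another common cause: check ntp
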
[18:16] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[18:19] * rendar (~I@host61-179-dynamic.27-79-r.retail.telecomitalia.it) has joined #ceph
[18:20] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Read error: No route to host)
[18:20] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[18:22] * freakybanana (~freakyban@c-73-158-201-226.hsd1.ca.comcast.net) has joined #ceph
[18:22] * nhm (~nhm@c-50-171-139-246.hsd1.mn.comcast.net) has joined #ceph
[18:22] * ChanServ sets mode +o nhm
[18:23] * pabluk_ is now known as pabluk__
[18:24] * jordanP (~jordan@92.103.184.178) Quit (Quit: Leaving)
[18:25] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[18:28] * Azerothian______ (~airsoftgl@7V7AAEUSB.tor-irc.dnsbl.oftc.net) Quit ()
[18:29] * kawa2014 (~kawa@94.56.39.231) has joined #ceph
[18:29] * guerby (~guerby@ip165.tetaneutral.net) has joined #ceph
[18:30] * aj__ (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[18:30] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:31] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[18:33] * ircolle (~Adium@2601:285:201:633a:69f5:9fe6:e942:e5bf) has joined #ceph
[18:34] * freakybanana (~freakyban@c-73-158-201-226.hsd1.ca.comcast.net) Quit (Quit: freakybanana)
[18:37] * rraja (~rraja@121.244.87.117) has joined #ceph
[18:38] <kiranos> I have an issue with ceph-deploy prepare, everything seems fine but I get this in the osd log:
[18:38] <kiranos> http://pastebin.com/uBXd2vj1
[18:38] <kiranos> it's hammer 0.94.7 and centos 7
[18:39] <kiranos> I run it with ceph-deploy osd prepare ceph01-osd02:sdj:/journals/osd.38
[18:39] <kiranos> the disk is not mounted after this command which it should be
[18:39] <kiranos> I have to manually
[18:40] * bara (~bara@213.175.37.12) Quit (Ping timeout: 480 seconds)
[18:40] <kiranos> mount /dev/sdj1 /var/lib/ceph/osd/ceph-38
[18:40] <kiranos> and start the osd after that
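
The manual mount-plus-start that kiranos describes is roughly what a single ceph-disk call (or the udev rule discussed earlier in the day) would normally do; a sketch for this particular disk and OSD id:

    # mounts the data partition under /var/lib/ceph/osd/ceph-<id> and starts the daemon
    ceph-disk activate /dev/sdj1

    # or the manual equivalent on hammer/CentOS 7 (still sysvinit-style scripts)
    mount /dev/sdj1 /var/lib/ceph/osd/ceph-38
    service ceph start osd.38
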
[18:41] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[18:43] * jermudgeon_ (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[18:43] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Read error: Connection reset by peer)
[18:43] * jermudgeon_ is now known as jermudgeon
[18:44] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) Quit (Quit: Leaving.)
[18:45] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Read error: No route to host)
[18:45] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[18:48] * rraja (~rraja@121.244.87.117) Quit (Quit: Leaving)
[18:54] * kawa2014 (~kawa@94.56.39.231) Quit (Quit: Leaving)
[18:54] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[18:56] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[19:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[19:01] * daviddcc (~dcasier@LAubervilliers-656-1-16-160.w217-128.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[19:01] * haomaiwang (~haomaiwan@106.187.51.170) has joined #ceph
[19:08] * kefu (~kefu@114.92.122.74) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[19:09] * aj__ (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[19:11] * folivora_ (~out@devnull.drwxr-xr-x.eu) Quit (Read error: Connection reset by peer)
[19:12] * mykola (~Mikolaj@91.225.201.82) has joined #ceph
[19:13] * branto (~branto@nat-pool-brq-t.redhat.com) has joined #ceph
[19:14] * branto (~branto@nat-pool-brq-t.redhat.com) Quit ()
[19:14] * mattt (~mattt@lnx1.defunct.ca) Quit (Remote host closed the connection)
[19:14] * funnel (~funnel@81.4.123.134) Quit (Remote host closed the connection)
[19:15] * mattt (~mattt@lnx1.defunct.ca) has joined #ceph
[19:15] * funnel (~funnel@0001c7d4.user.oftc.net) has joined #ceph
[19:17] * overclk (~quassel@117.202.96.167) Quit (Remote host closed the connection)
[19:17] * folivora (~out@devnull.drwxr-xr-x.eu) has joined #ceph
[19:20] * PoRNo-MoRoZ (~hp1ng@mail.ap-team.ru) has joined #ceph
[19:20] <PoRNo-MoRoZ> yo
[19:20] <PoRNo-MoRoZ> okay i have some kinda problem
[19:20] <PoRNo-MoRoZ> all ceph services suddenly stopped
[19:20] <PoRNo-MoRoZ> keyring was the problem
[19:20] <PoRNo-MoRoZ> so i disabled cephx completely
[19:20] <PoRNo-MoRoZ> (for optimization purposes also)
[19:20] <PoRNo-MoRoZ> now my proxmox can't connect to rbd
[19:23] <PoRNo-MoRoZ> cephx_require_signatures = false
[19:23] <PoRNo-MoRoZ> cephx_sign_messages = false
[19:23] <PoRNo-MoRoZ> that's probably my problem
[19:25] * DanFoster (~Daniel@2a00:1ee0:3:1337:d986:3717:f394:7905) Quit (Quit: Leaving)
[19:27] * RMar04 (~RMar04@support.memset.com) Quit (Ping timeout: 480 seconds)
[19:27] * post-factum (~post-fact@vulcan.natalenko.name) Quit (Killed (NickServ (Too many failed password attempts.)))
[19:27] * rmart04 (~rmart04@support.memset.com) Quit (Ping timeout: 480 seconds)
[19:27] * post-factum (~post-fact@vulcan.natalenko.name) has joined #ceph
[19:28] * BillyBobJohn (~Thononain@marylou.nos-oignons.net) has joined #ceph
[19:33] <PoRNo-MoRoZ> looks like nope
[19:35] <PoRNo-MoRoZ> okay i got it
[19:35] <PoRNo-MoRoZ> removed keyrings from /etc/pve/priv/ceph
[19:38] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[19:45] * karnan (~karnan@106.51.137.46) has joined #ceph
[19:45] <TMM> I posted this question on ceph-users earlier: http://article.gmane.org/gmane.comp.file-systems.ceph.user/29648 could someone please see if they can help me with this?
[19:47] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[19:48] * folivora (~out@devnull.drwxr-xr-x.eu) Quit (Read error: Connection reset by peer)
[19:48] * folivora (~out@devnull.drwxr-xr-x.eu) has joined #ceph
[19:48] * linuxkidd (~linuxkidd@38.sub-70-210-245.myvzw.com) Quit (Quit: Leaving)
[19:50] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[19:50] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[19:52] * linuxkidd (~linuxkidd@38.sub-70-210-245.myvzw.com) has joined #ceph
[19:53] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) Quit (Remote host closed the connection)
[19:56] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:d139:ef47:92aa:73d8) Quit (Ping timeout: 480 seconds)
[19:56] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[19:58] * BillyBobJohn (~Thononain@7V7AAEUXU.tor-irc.dnsbl.oftc.net) Quit ()
[19:59] * Brochacho (~alberto@c-73-45-127-198.hsd1.il.comcast.net) has joined #ceph
[20:01] * haomaiwang (~haomaiwan@106.187.51.170) Quit (Remote host closed the connection)
[20:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[20:03] * luigiman (~DougalJac@185.100.85.236) has joined #ceph
[20:03] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[20:05] * Hemanth (~hkumar_@103.228.221.176) has joined #ceph
[20:06] * Hemanth (~hkumar_@103.228.221.176) Quit ()
[20:13] * rdias (~rdias@2001:8a0:749a:d01:fc09:7dc:c90c:6d06) Quit (Remote host closed the connection)
[20:14] * rdias (~rdias@2001:8a0:749a:d01:796d:44ac:1cad:318c) has joined #ceph
[20:15] <TMM> if I rbd export an image, and import it again, are all the snaps preserved?
[20:17] * ivancich (~ivancich@12.118.3.106) Quit (Quit: ivancich)
[20:17] * ivancich (~ivancich@12.118.3.106) has joined #ceph
[20:30] <rkeene> I'm almost 100% sure the answer is no, it's just an individual image (snapshots are separate images as far as that goes)
[20:30] <rkeene> It's easy to test, of course
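
rkeene is right that a plain export captures only one point in time. When the snapshots matter, the usual pattern is to export the base once and replay each snapshot with export-diff/import-diff; a sketch with made-up pool, image, and snapshot names:

    # copy the image as of its first snapshot, then recreate that snapshot on the target
    rbd export images/img1@snap1 img1.base
    rbd import img1.base backup/img1
    rbd snap create backup/img1@snap1

    # replay the delta between snap1 and snap2 (import-diff recreates snap2 on the target)
    rbd export-diff --from-snap snap1 images/img1@snap2 - | rbd import-diff - backup/img1
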
[20:30] * mhackett (~mhack@nat-pool-bos-u.redhat.com) has joined #ceph
[20:31] <jmlowe> has anybody done an upgrade from hammer to jewel?
[20:33] * luigiman (~DougalJac@4MJAAE6U3.tor-irc.dnsbl.oftc.net) Quit ()
[20:33] * datagutt (~Sigma@static-ip-85-25-103-119.inaddr.ip-pool.com) has joined #ceph
[20:34] <rkeene> jmlowe, Not yet !
[20:34] <rkeene> jmlowe, I'm on hammer, and going to upgrade to Jewel
[20:35] <TMM> rkeene, my images only seem to have a single snapshot, apparently something that glance does. I have an incomplete pg in my images pool and I'm trying to rescue all the images that are still readable
[20:35] * Skaag (~lunix@65.200.54.234) has joined #ceph
[20:36] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[20:36] <jmlowe> rkeene: What's your plan? My read of the release notes is that unlike previous releases you don't have to do all the mons first then all the osds
[20:41] <TMM> rkeene, I'm sorry to bother you, but if you have a moment could you please have a look at this? http://article.gmane.org/gmane.comp.file-systems.ceph.user/29648 I'm totally stumped as to what to do at this point.
[20:42] * vata1 (~vata@207.96.182.162) Quit (Quit: Leaving.)
[20:44] * ledgr (~ledgr@88-222-11-185.meganet.lt) has joined #ceph
[20:44] <TMM> rkeene, or if you have any suggestions on what I could add to my message, I don't know if I've added all the relevant information
[20:46] * squizzi (~squizzi@107.13.31.195) Quit (Remote host closed the connection)
[20:46] <rkeene> TMM, So what's up with OSDs 32,166,96 ?
[20:46] * penguinRaider (~KiKo@146.185.31.226) Quit (Ping timeout: 480 seconds)
[20:47] * ledgr (~ledgr@88-222-11-185.meganet.lt) Quit (Remote host closed the connection)
[20:47] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) has joined #ceph
[20:48] <TMM> rkeene, I have no idea.
[20:48] <TMM> rkeene, they seem fine, the logs don't really say anything either.
[20:48] <TMM> rkeene, and every other pg on them seems fine too
[20:48] <rkeene> What happens if you kick one of them own/down ?
[20:48] <rkeene> own -> out
[20:49] <TMM> nothing happens to the incomplete pg :-/
[20:49] <TMM> I tried kicking both the 32 and 166 our
[20:49] <TMM> out*
[20:49] <TMM> (after a suitable interval of course)
[20:51] <TMM> if I take down the 32 (which is currently the primary) the pg goes stale,incomplete for a little bit
[20:51] <TMM> then just goes back to incomplete
[20:51] <rkeene> What if you completely delete OSD 32 and recreate it ?
[20:52] <TMM> I have not tried that
[20:52] <rkeene> Let's see what happens !
[20:52] <TMM> ok
[20:53] <TMM> just stop it, out it, then when it's done syncing rm it, right?
[20:53] <mischief> hi, is anyone having trouble mounting ceph with 4.6 kernel and jewel userspace release?
[20:54] <mischief> when i try to mount, i see this after a long timeout: $ sudo mount -t ceph -o rw,relatime,name=admin,secret=$SECRET,fsc 10.0.2.15:/ /mnt
[20:54] <mischief> mount: 10.0.2.15:/: can't read superblock
[20:54] <rkeene> TMM, I'd out it first
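
For reference, the standard out/remove/recreate sequence from the hammer-era docs, which is roughly what is being suggested here:

    ceph osd out 32                   # let data backfill away from it first
    # wait for recovery to finish, then on the OSD's host:
    service ceph stop osd.32
    ceph osd crush remove osd.32
    ceph auth del osd.32
    ceph osd rm 32
    # recreate afterwards, e.g. ceph-deploy osd create <host>:<disk>
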
[20:55] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[20:55] <TMM> ok
[20:56] * penguinRaider (~KiKo@146.185.31.226) has joined #ceph
[20:58] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[20:58] * ade (~abradshaw@GK-84-46-90-18.routing.wtnet.de) Quit (Quit: Too sexy for his shirt)
[20:59] <TMM> rkeene, do you have any idea what could cause something like this?
[20:59] <rkeene> TMM, No
[21:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[21:01] <TMM> I guess I should up the replication of those volumes to 4
[21:01] <TMM> most of my pools are configured through crush to only exist in one of the racks
[21:01] <TMM> except for this one
[21:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[21:01] <TMM> and it had 2 out of 3 copies down
[21:03] * datagutt (~Sigma@06SAACM2H.tor-irc.dnsbl.oftc.net) Quit ()
[21:03] <rkeene> I've never seen the problem you had though -- and I run a similar configuration... but I don't have many unplanned outages
[21:03] * lobstar (~KapiteinK@politkovskaja.torservers.net) has joined #ceph
[21:03] * mhackett (~mhack@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[21:03] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[21:03] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[21:04] <TMM> a good number of the images are still readable it seems
[21:05] <TMM> it's also strange I apparently have no unfound objects at all
[21:06] <TMM> I figured I could just mark_lost revert
[21:06] <rkeene> mark_lost revert never worked out well for me
[21:06] <TheSov> weird, i just tried to install jewel. and while it installed fine i cant add my osd's via ceph-deploy osd create --bluestore system1:sdd
[21:06] <TheSov> all the osd's i added are down
[21:07] <TMM> rkeene, ok, so I deleted it now, but there's no change, the pg is still incomplete
[21:07] <TheSov> KEENE
[21:07] <vanham> mischief, I'm running Jewel fine on 4.4
[21:07] * fontana (~fontana@141.101.134.214) has joined #ceph
[21:07] <fontana> you guys heard about demonsaw ? i got it a few weeks ago but not many populate it, great open source project, we get to share files between us anonymously and with added encryption for private grouping on top of standard one.... maybe check it out join the community ? Im on debian, it works great....
[21:08] <vanham> But I'm using Hammer tunables
[21:08] <TheSov> anyone know wtf is going on with my setup?
[21:08] * fontana (~fontana@141.101.134.214) Quit (Quit: Leaving)
[21:09] <mischief> vanham: do you happen to have selinux on
[21:09] <vanham> mischief, nope
[21:10] <vanham> I know that with Jewel tunables you will need at least kernel 4.5 to mount
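
Checking and, if needed, pinning the tunables vanham mentions looks like this; note that switching profiles causes data movement, so it is not something to toggle casually:

    ceph osd crush show-tunables      # current profile and individual values
    ceph osd crush tunables hammer    # pin to the hammer profile so pre-4.5 kernels can map
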
[21:10] <vanham> mischief, what does your dmesg say?
[21:10] <TMM> rkeene, I'm recreating it now
[21:10] <TMM> still incomplete :-/
[21:10] <mischief> vanham: [ 948.807584] libceph: mon0 10.0.2.15:6789 session established
[21:10] <mischief> [ 948.809226] libceph: client4113 fsid eecc68d9-c517-4878-aac1-3d7597d14b08
[21:11] <mischief> and that's it. then after some time (~30s) mount will print 'can't read superblock'
[21:11] <rkeene> TMM, Which OSDs did it live on when there was no 32 ?
[21:11] <vanham> oh
[21:11] <vanham> dman
[21:11] <vanham> dman
[21:11] <vanham> damn
[21:11] <mischief> :P
[21:11] <TMM> rkeene, 161
[21:11] <vanham> mischief, anything on /var/log/ceph/ceph-mds.a.log (or whatever your active MDS is)?
[21:12] <mischief> vanham: i can try to see... i'm running the ceph 'demo' docker container, rebuilt for jewel
[21:12] <vanham> mischief, Cool!!! I tried doing that but gave up!
[21:12] <vanham> Everything else here is Docker!
[21:13] <mischief> vanham: https://github.com/coreos/bugs/issues/1092 is what i am investigating
[21:13] <mischief> you can see for the bug we had i found a simple reproducer using the ceph demo container
[21:13] <mischief> apparently the bug is fixed in kernel 4.6, but i can't actually mount.
[21:14] <TMM> I wonder if I should just remove all osds one by one that are listed in the pg
[21:14] <mischief> vanham: 2016-05-17 19:13:15.880051 7f885b17f180 -1 mds.0 *** no OSDs are up as of epoch 8, waiting
[21:14] <mischief> this looks bad :')
[21:14] <TMM> the probing_osds looks completely wrong
[21:14] <vanham> mischief, LOL
[21:14] * lcurtis_ (~lcurtis@47.19.105.250) has joined #ceph
[21:14] <rkeene> TMM, Remove all three :-D
[21:14] <mischief> https://github.com/ceph/ceph-docker/blob/master/demo/entrypoint.sh
[21:14] <mischief> this is what runs in the container
[21:14] <rkeene> TMM, But yes, one-by-one
[21:15] <mischief> maybe the script is not correct for jewel release
[21:15] * RMar04 (~RMar04@host109-146-247-100.range109-146.btcentralplus.com) has joined #ceph
[21:15] * rmart04 (~rmart04@host109-146-247-100.range109-146.btcentralplus.com) has joined #ceph
[21:16] * rmart04 (~rmart04@host109-146-247-100.range109-146.btcentralplus.com) Quit (Read error: Connection reset by peer)
[21:16] <TMM> rkeene, well, it's 6 at this point, there are 6 osds listed as probing_osds
[21:16] <TMM> some of them never even had the data as far as I could tell
[21:16] <vanham> mischief, Cool! I saw that project before. In the end we went with the thinking that Ceph is too much base infrastructure, since that was how most people were blogging/messaging about it when we read about it.
[21:16] * rmart04 (~rmart04@host109-146-247-100.range109-146.btcentralplus.com) has joined #ceph
[21:16] <vanham> But, if you want to do it on CoreOS then Docker is the only way to go!
[21:17] <vanham> CoreOS is awesome
[21:17] <mischief> yep
[21:18] <vanham> Just to be sure, I would try the Ceph FUSE client first, make sure CephFS is working
[21:18] * mykola (~Mikolaj@91.225.201.82) Quit (Quit: away)
[21:18] <vanham> Then go to the kernel problem
[21:19] <mischief> well, apparently there is some problem in the mds/osd
[21:20] <mischief> i honestly have zero clue about running ceph, i'm just testing it to close this bug on coreos
[21:20] <vanham> I would think so, then you have the bug they reported just after that
[21:20] <vanham> How's your ceph status?
[21:20] <mischief> i have no idea
[21:21] <vanham> do a exec ceph status on the docker mds docker
[21:21] <vanham> MDS, MON and OSDs should be on the same network
[21:21] <vanham> So that they can talk to each other
[21:21] <mischief> http://sprunge.us/BbRi
[21:21] <mischief> looks bad
[21:21] <vanham> kernel has to be able to access all of them, so no overlay network / ipip / any other encapsulation
[21:22] <vanham> So, that docker can access the monitor, that's good
[21:22] <vanham> You didn't add an OSD to the cluster
[21:22] <vanham> http://docs.ceph.com/docs/hammer/rados/operations/add-or-rm-osds/
[21:23] <vanham> Not sure how to do it with the ceph-docker project but I remember they have good docs there
[21:23] <vanham> Actually you will probably have to add 2 or 3 OSDs
[21:23] <mischief> vanham: in theory the script should do it :')
[21:24] <vanham> Yeah, but with a different container for every component, as far as I remember
[21:24] * rmart04 (~rmart04@host109-146-247-100.range109-146.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[21:24] <vanham> Btw, on the same network, so host network is a great choice here
[21:25] <mischief> this is docker logs: http://sprunge.us/bNLf
[21:25] <mischief> this might be the problem.. 2016-05-17 18:43:52.071353 7f8c96590800 -1 filestore(/var/lib/ceph/osd/ceph-0) could not find #-1:7b3f43c4:::osd_superblock:0# in index: (2) No such file or directory
[21:25] * RMar04 (~RMar04@host109-146-247-100.range109-146.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[21:26] <vanham> Nice script then!
[21:26] <vanham> Backing volume for OSD should be XFS
[21:26] <vanham> Those ENAMETOOLONG are because of that
[21:26] * bvi (~Bastiaan@152-64-132-5.ftth.glasoperator.nl) has joined #ceph
[21:27] <vanham> Also Ceph will need permission to write there
[21:27] <vanham> You should probably have an XFS volume for /var/lib/ceph/osd/ceph-0 then
[21:28] * gregmark (~Adium@68.87.42.115) has joined #ceph
[21:29] <vanham> After that, ceph status might still complain, unless the replica size for the cephfs pools is also 1
[21:29] <vanham> I saw the entrypoint scripts will change the replica size for the rbd pool to 1, but didn't see it do it for the cephfs_data and cephfs_metadata pools.
[21:30] <vanham> Might have missed it bc I only did a quick scan on the script
[21:30] <vanham> Sorry, it does (line 40)
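
For a single-OSD demo like this one, dropping the replica size on the CephFS pools is the piece that script handles; done by hand it would be:

    ceph osd pool set cephfs_data size 1
    ceph osd pool set cephfs_metadata size 1
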
[21:32] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[21:33] * lobstar (~KapiteinK@06SAACM3N.tor-irc.dnsbl.oftc.net) Quit ()
[21:34] <mischief> vanham: well, this is running on overlayfs because docker container. :/
[21:34] <mischief> and default root in coreos is ext4
[21:35] <vanham> How do you do volumes with CoreOS?
[21:35] <vanham> (I stopped my study of CoreOS a few pages before that)
[21:36] <vanham> Take a look at http://docs.ceph.com/docs/jewel/rados/configuration/filesystem-recommendations/
[21:37] <vanham> You can change your ceph.conf and make it work with ext4 as well
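
The ext4 accommodation that document describes is a pair of ceph.conf settings that shorten the allowed object names (fine for an RBD/CephFS demo, explicitly not recommended with RGW):

    [osd]
    osd max object name len = 256
    osd max object namespace len = 64
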
[21:40] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[21:41] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[21:41] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit ()
[21:44] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[21:46] * karnan (~karnan@106.51.137.46) Quit (Quit: Leaving)
[21:48] <m0zes> make sure to pay attention to the note about ext4 "This may result in difficult-to-diagnose errors if you try to use RGW or other librados clients that do not properly handle or politely surface any resulting ENAMETOOLONG errors."
[21:52] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[21:55] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Ping timeout: 480 seconds)
[21:57] <TMM> bleh, I also appear to have a host with dud ram
[21:57] <TMM> the host has 8 osds, can I just mark them 'out' all at the same time? or should I do one at a time?
[21:58] <TMM> or can I just out an entire server in a safe way
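
Both approaches work: drain the host in one go, or set noout for a short maintenance window so nothing rebalances while it is down. A sketch, with the OSD ids standing in for the eight on that host:

    # option 1: one big backfill instead of eight
    for id in 10 11 12 13 14 15 16 17; do ceph osd out $id; done

    # option 2: short downtime, no rebalancing while the host is off
    ceph osd set noout
    # ...power down, swap the RAM, bring it back...
    ceph osd unset noout
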
[22:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[22:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[22:07] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[22:14] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[22:18] <MentalRay> anyone played with the "recovery_delay_start" value in ceph ?
[22:20] <TheSov> for some reason it seems that ceph jewel for debian is not mounting the osd's
[22:20] * georgem (~Adium@206.108.127.16) Quit (Read error: Connection reset by peer)
[22:21] <TheSov> well for ubuntu i mean
[22:22] * bvi (~Bastiaan@152-64-132-5.ftth.glasoperator.nl) Quit (Remote host closed the connection)
[22:23] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[22:23] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[22:25] * allaok (~allaok@machine107.orange-labs.com) Quit (Quit: Leaving.)
[22:25] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[22:33] * loft (~murmur@185.133.32.19) has joined #ceph
[22:36] * ledgr_ (~ledgr@88-222-11-185.meganet.lt) has joined #ceph
[22:38] * wes_dillingham (~wes_dilli@140.247.242.44) Quit (Ping timeout: 480 seconds)
[22:43] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[22:47] * ledgr_ (~ledgr@88-222-11-185.meganet.lt) Quit (Remote host closed the connection)
[22:49] <TheSov> how would one create a ceph cluster without ceph-deploy?
[22:49] * ledgr (~ledgr@88-222-11-185.meganet.lt) has joined #ceph
[22:50] * valeech (~valeech@pool-108-44-162-111.clppva.fios.verizon.net) has joined #ceph
[22:50] * jquinn (~jquinn@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[22:50] <TMM> TheSov, you make a monitor map, scp it to your monitors, initialize the monitors with mon fs create, then you just add your osds like normal, after you've distributed the deployment keys using ssh
[22:51] <TMM> TheSov, it's documented on the ceph documentation page
[22:51] <TheSov> the ceph documentation pages are horrible.
[22:52] <TMM> TheSov, http://docs.ceph.com/docs/master/install/manual-deployment/
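
That page boils down to roughly the following for bootstrapping the first monitor (hostnames, IPs, and the fsid are placeholders):

    uuidgen                                           # becomes the fsid in ceph.conf
    ceph-authtool --create-keyring /tmp/ceph.mon.keyring \
        --gen-key -n mon. --cap mon 'allow *'
    monmaptool --create --add mon1 192.168.0.10 --fsid <fsid> /tmp/monmap
    ceph-mon --mkfs -i mon1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
    # then start ceph-mon and add OSDs with ceph-disk prepare/activate as usual
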
[22:52] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) has left #ceph
[22:52] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) has joined #ceph
[22:52] <TheSov> for some odd reason
[22:52] <TheSov> when i deploy this to ubuntu 16.04, it doesnt actually mount osd
[22:52] <TheSov> and im trying to figure out why
[22:53] * rwheeler (~rwheeler@pool-173-48-195-215.bstnma.fios.verizon.net) has joined #ceph
[22:53] <TheSov> something very strange going on
[22:54] <TheSov> the monitor create seems to go ok, but ceph doesnt realize python isnt installed by default
[22:54] <TheSov> so after installing python it seems to go ok until u add osd's
[22:55] <TheSov> then it says it creates the osd. but theres no mount for the disk
[22:55] <TheSov> does the user ceph need super special rights?
[23:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[23:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[23:03] * loft (~murmur@06SAACM7L.tor-irc.dnsbl.oftc.net) Quit ()
[23:05] * jowilkin (~jowilkin@nat-pool-rdu-u.redhat.com) Quit (Ping timeout: 480 seconds)
[23:07] * georgem (~Adium@24.114.76.111) has joined #ceph
[23:08] * georgem1 (~Adium@24.114.66.36) has joined #ceph
[23:09] * sickology (~mio@vpn.bcs.hr) Quit (Read error: Connection reset by peer)
[23:10] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[23:11] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) Quit (Ping timeout: 480 seconds)
[23:12] * georgem (~Adium@24.114.76.111) Quit (Read error: Connection reset by peer)
[23:14] * sickology (~mio@vpn.bcs.hr) has joined #ceph
[23:15] * gregmark (~Adium@68.87.42.115) has joined #ceph
[23:19] <vanham> Guys, I have an LSI Controller here on a server I'm formatting to use Ceph. How do you export all the drives to the OS separately, without any RAID?
[23:19] <vanham> I'm not seeing how on their WebGUI BIOS
[23:23] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:26] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[23:30] <TheSov> u have to flash its bios
[23:30] <TheSov> it's called an IT firmware
[23:30] <TheSov> so go google your LSI controller number and add "it firmware" behind it
[23:31] <TheSov> in case you want to go back to raid you flash its IR firmware
[23:31] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Read error: No route to host)
[23:32] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[23:33] * cheese^ (~CoMa@192.42.116.16) has joined #ceph
[23:35] * georgem1 (~Adium@24.114.66.36) Quit (Quit: Leaving.)
[23:37] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[23:39] <TMM> rkeene, someone on the mailing list told me what I needed to do, and it worked!
[23:41] * ledgr (~ledgr@88-222-11-185.meganet.lt) Quit (Remote host closed the connection)
[23:41] <rkeene> TMM, What was it ?
[23:43] <m0zes> vanham: sometimes the only option is to make single-disk raid0 arrays.
[23:43] * _28_ria (~kvirc@opfr028.ru) Quit (Read error: Connection reset by peer)
[23:43] <m0zes> other times, there is a way to enable jbod, but only from the command line interface.
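
Depending on the controller generation, that command-line JBOD switch is usually exposed through storcli or MegaCli rather than the WebBIOS; exact syntax varies by card and tool version, so treat these as pointers rather than a recipe:

    # newer tooling
    storcli /c0 set jbod=on
    storcli /c0/eall/sall set jbod

    # older tooling
    MegaCli -AdpSetProp -EnableJBOD -1 -aALL
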
[23:44] <TMM> rkeene, http://article.gmane.org/gmane.comp.file-systems.ceph.user/29652
[23:44] <TMM> rkeene, looks like a 'don't go crazy with this' flag though
[23:44] <TMM> mine seems to be the best possible case for needing this flag, my data is almost entirely static and someone just happened to be uploading an image while the recovery got interrupted
[23:46] <TMM> if this had happened to a live data pool this would've been very bad indeed
[23:48] <TMM> rkeene, it appears I have not actually lost anything
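
The gmane link above no longer resolves. For completeness, the usual last-resort setting for an incomplete PG whose surviving copy you trust (presumably what the list suggested, though that is an assumption, not a quote from the thread) is applied temporarily to the acting primary and removed once the PG goes active+clean:

    # ceph.conf on the primary OSD, then restart it; remove again afterwards
    [osd]
    osd find best info ignore history les = true
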
[23:49] * _28_ria (~kvirc@opfr028.ru) has joined #ceph
[23:58] * hybrid512 (~walid@161.240.10.109.rev.sfr.net) has joined #ceph
[23:58] * ieth0 (~ieth0@user232.77-105-223.netatonce.net) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.