#ceph IRC Log

Index

IRC Log for 2013-11-26

Timestamps are in GMT/BST.

[0:01] * Siva_ (~sivat@vpnnat.eglbp.corp.yahoo.com) has joined #ceph
[0:03] * minchen (~minchen@202.197.9.8) Quit ()
[0:04] * dmsimard (~Adium@108.163.152.2) Quit (Quit: Leaving.)
[0:05] * xinxinsh (~xinxinsh@jfdmzpr03-ext.jf.intel.com) has joined #ceph
[0:05] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:06] * Siva (~sivat@117.192.37.147) Quit (Ping timeout: 480 seconds)
[0:06] * Siva_ is now known as Siva
[0:07] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[0:09] * wwformat (~chatzilla@61.187.54.9) Quit (Ping timeout: 480 seconds)
[0:09] <bloodice> gkoch The fix is just to make sure the gateway has read and WRITE, because in the newer version of ceph, they automatically create the pools.
[0:10] <bloodice> the documentation only states read in the example.... and it needs write because the newer version is adding the pools automatically
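A minimal sketch of the cap change bloodice describes, assuming a gateway user named client.radosgw.gateway (the name is a placeholder; use whatever key your radosgw actually runs as):

    ceph auth caps client.radosgw.gateway osd 'allow rwx' mon 'allow rw'
    ceph auth get client.radosgw.gateway    # confirm the updated caps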
[0:12] * xinxinsh (~xinxinsh@jfdmzpr03-ext.jf.intel.com) Quit (Quit: Leaving)
[0:13] * xinxinsh (~xinxinsh@jfdmzpr05-ext.jf.intel.com) has joined #ceph
[0:14] * al-maisan (~al-maisan@86.188.131.84) Quit (Ping timeout: 480 seconds)
[0:18] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[0:19] * sarob (~sarob@2601:9:7080:13a:a5b0:1b8:149c:6eb2) has joined #ceph
[0:20] * al-maisan (~al-maisan@86.188.131.84) has joined #ceph
[0:21] * ScOut3R (~scout3r@dsl51B69BF7.pool.t-online.hu) Quit ()
[0:25] * danieagle (~Daniel@179.176.54.5.dynamic.adsl.gvt.net.br) Quit (Quit: inte+ e Obrigado Por tudo mesmo! :-D)
[0:26] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[0:27] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[0:27] * sarob (~sarob@2601:9:7080:13a:a5b0:1b8:149c:6eb2) Quit (Ping timeout: 480 seconds)
[0:29] * avijaira (~avijaira@c-24-6-37-207.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[0:30] * jskinner (~jskinner@69.170.148.179) Quit (Remote host closed the connection)
[0:31] * jskinner (~jskinner@69.170.148.179) has joined #ceph
[0:33] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[0:34] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[0:34] <via> is there any way to cut down on the number of threads an osd starts up?
[0:34] <via> i have a huge (128 gig ram) machine with 45 drives and i keep hitting cannot allocate memory errors on centos6
[0:35] <via> and i'm thinking it might be related to kernel space allocation of threads
[0:35] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:39] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[0:39] * jskinner (~jskinner@69.170.148.179) Quit (Ping timeout: 480 seconds)
[0:41] <pmatulis> via: how many OSDs (and sizes) on that machine?
[0:42] <via> 45
[0:42] <via> 1.8T
[0:42] <via> it ran for a week with 0 problems
[0:42] <via> now i can't get all the osd's to stay up for more than a few minutes before claiming out of memory
[0:42] <via> all the while according to free/top its only using about 30 gigs
[0:44] <pmatulis> via: how about cpu usage?
[0:44] <via> during the small period of time everything runs its pretty high as everything starts to peer
[0:45] <via> but each osd has like, 100 threads
[0:45] <via> according to htop
[0:45] * angdraug (~angdraug@64-79-127-122.static.wiline.com) Quit (Quit: Leaving)
[0:45] * mozg (~andrei@host81-151-251-29.range81-151.btcentralplus.com) has joined #ceph
[0:45] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:45] <mozg> hello guys
[0:46] <mozg> could someone please help me with proxy settings for the radosgw s3 api?
[0:46] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) Quit (Ping timeout: 480 seconds)
[0:46] <mozg> i am using apache and I am having access denied message when trying to acccess buckets
[0:46] * Kioob (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) Quit (Ping timeout: 480 seconds)
[0:46] <mozg> am i missing some settings?
[0:47] <mozg> i guess the proxy is not passing on some of the headers
[0:47] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[0:47] * ChanServ sets mode +o elder
[0:47] <pmatulis> via: maybe get a better idea of what those threads are doing and then see if this helps:
[0:47] <pmatulis> http://ceph.com/docs/master/rados/configuration/osd-config-ref/
[0:48] <via> in that document there are about 3 different options regarding threads
[0:48] <via> and the total number of threads the defaults say should be 4
[0:48] <via> so i'm not clear on how so many get spawned up
[0:48] <via> or what they are doing
[0:49] <joshd> via: they're mostly network-related threads - there are 2 per connection
[0:49] <via> i see
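A quick way to see those per-OSD thread counts (a sketch; it assumes a single ceph-osd process on the box, hence pidof -s):

    ps -o nlwp= -p $(pidof -s ceph-osd)           # number of threads in one OSD process
    ls /proc/$(pidof -s ceph-osd)/task | wc -l    # same count via /proc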
[0:50] * nigwil (~chatzilla@2001:44b8:5144:7b00:6870:5c5b:c52e:e0cd) Quit (Read error: Connection reset by peer)
[0:50] <joshd> you might try increasing maximum tcp buffer memory, you could be hitting a limit there
[0:50] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Read error: Operation timed out)
[0:50] * nigwil (~chatzilla@2001:44b8:5144:7b00:6870:5c5b:c52e:e0cd) has joined #ceph
[0:50] <via> /proc/sys/net/ipv4/tcp_mem ?
[0:51] * rongze (~rongze@117.79.232.236) has joined #ceph
[0:51] <joshd> that's the main one
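A sketch of raising that limit; the numbers are purely illustrative, and tcp_mem is measured in pages, not bytes:

    cat /proc/sys/net/ipv4/tcp_mem                           # current min / pressure / max thresholds
    sysctl -w net.ipv4.tcp_mem='1638400 2184533 3276800'     # raise all three thresholds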
[0:52] * Kioob (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) has joined #ceph
[0:52] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[0:53] * erice__ (~erice@c-98-245-48-79.hsd1.co.comcast.net) has joined #ceph
[0:54] * yanzheng (~zhyan@jfdmzpr04-ext.jf.intel.com) Quit (Remote host closed the connection)
[0:54] * erice__ is now known as ericenb
[0:55] * sjustwork (~sam@2607:f298:a:607:38aa:d318:6f02:da9b) has joined #ceph
[0:57] * sjustlaptop1 (~sam@38.122.20.226) has joined #ceph
[0:59] * rongze (~rongze@117.79.232.236) Quit (Ping timeout: 480 seconds)
[1:00] * saaby_ (~as@mail.saaby.com) has joined #ceph
[1:01] * dxd828 (~dxd828@host-92-24-127-29.ppp.as43234.net) Quit (Quit: Computer has gone to sleep.)
[1:02] * saaby (~as@mail.saaby.com) Quit (Ping timeout: 480 seconds)
[1:09] * mschiff_ (~mschiff@85.182.236.82) Quit (Remote host closed the connection)
[1:13] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[1:13] <via> joshd: had no effect
[1:13] <via> setting osd op threads to 0 seems to make it not crash
[1:13] <via> but then it doesn't seem to do anything at all
[1:17] <joshd> yeah, that'd make it unable to process anything
[1:18] <via> the docs just say it makes it not multithreaded <_<
[1:18] <via> i'll try one
[1:18] <via> are you aware of any other weird limits i could hit on centos6?
[1:19] <via> catting the limits file under /proc for an osd process shows nothing obvious like nofiles or nprocs being below a million or so
[1:19] <via> of note i have two nodes like this, and if i start all on just one node, they will go forever
[1:20] <via> the moment i start osd's on the other nodes the osd's on the first start to crash
[1:20] <joshd> hmm, are your osds using tcmalloc? strings `which ceph-osd` | grep tcmalloc
[1:20] <via> occasionally on the first one when this is happening i can't even do an ls with out getting a fork out of memory error
[1:20] <via> libtcmalloc.so.4
[1:21] <via> there were a few results
[1:21] <via> these are the official rpms from the ceph repo
[1:21] <joshd> that sounds like a general oom problem then. and using tcmalloc is good
[1:21] <via> even when this happens all tools show i'm nowhere near out of memory, no swap being used, etc
[1:21] <joshd> it's probably not happening with only one node because there's not much peering
[1:22] <joshd> I'm not sure what other limits you'd be hitting off hand
[1:22] <via> so it must be some kernel memory limit
[1:23] <via> setting op threads to 1 did not solve it
[1:25] <via> the processes are getting sigabort
[1:25] <via> https://pastee.org/uyntf
[1:26] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[1:27] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[1:28] <via> i will note that this happened shortly after restructuring the crush map to allow for more replicas, so they probably all start to backfill
[1:31] * Pedras1 (~Adium@216.207.42.132) Quit (Ping timeout: 480 seconds)
[1:35] <joshd> via: how about increasing /proc/sys/vm/max_map_count
[1:35] <via> its set to 65530, what is reasonable?
[1:35] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:38] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[1:38] <joshd> I'm not sure how high is reasonable, but try doubling it
[1:38] * KevinPerks (~Adium@97.68.216.74) has joined #ceph
[1:40] <via> of note, /proc/pid/maps only has about 500 rows in it for various osd's
[1:41] <via> it had no effect
[1:43] * liiwi (~liiwi@idle.fi) Quit (Ping timeout: 480 seconds)
[1:45] * sagelap1 (~sage@2607:f298:a:607:6c38:5203:39d7:100f) has joined #ceph
[1:46] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) Quit (Ping timeout: 480 seconds)
[1:47] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) has joined #ceph
[1:48] <mozg> could someone point me in the right direction of setting up a separate pool made of a dedicated set of hard disks? I am planning to allocate a separate pool made only of ssds
[1:50] <lurbs> mozg: http://www.sebastien-han.fr/blog/2012/12/07/ceph-2-speed-storage-with-crush/
[1:50] * glzhao (~glzhao@118.195.65.67) has joined #ceph
[1:51] <mozg> lurbs, thanks!
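In outline, the approach in that post is to give the SSD OSDs their own root and rule in the crush map, then point a pool at that rule. A rough CLI sketch (ssd-pool, the pg counts and the rule id are placeholders):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt: add an 'ssd' root holding the SSD hosts/OSDs and a rule that takes from it
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new
    ceph osd pool create ssd-pool 128 128
    ceph osd pool set ssd-pool crush_ruleset <ssd-rule-id>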
[1:51] <via> joshd: so.... /proc/sys/kernel/pid_max
[1:51] <via> was at 32k, i upped to 4mil and so far i haven't hit the issue
[1:52] * sagelap (~sage@172.56.39.54) Quit (Ping timeout: 480 seconds)
[1:52] <fred_> any problem running btrfs on some osd and xfs/ext4 on others?
[1:54] <joshd> via: aha, that makes sense. I wasn't expecting enomem for that case
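For reference, the knob via raised; each thread consumes a pid, so 45 OSDs with ~100 threads apiece can exhaust the 32k default. 4194304 is the usual 64-bit maximum (persist the setting in /etc/sysctl.conf to survive reboots):

    cat /proc/sys/kernel/pid_max        # was 32768
    sysctl -w kernel.pid_max=4194304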
[1:55] * sjustlaptop1 (~sam@38.122.20.226) Quit (Read error: Operation timed out)
[1:56] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[1:56] * dmsimard (~Adium@108.163.152.66) has joined #ceph
[1:57] <via> joshd: yeah, i've spent the last 6 hours fucking with this
[1:57] <via> i can't believe that was it
[1:57] <via> the kernel log could have said something at least, jesus
[1:57] <via> anyway, thanks for helping out, i'm done for the day
[1:58] <joshd> you're welcome, glad you finally figured it out
[2:01] <mozg> is anyone here using radosgw?
[2:01] <mozg> I was wondering if it is possible to check the disk usage of a user or a bucket?
[2:03] * KevinPerks1 (~Adium@97.68.216.74) has joined #ceph
[2:03] * KevinPerks (~Adium@97.68.216.74) Quit (Read error: Connection reset by peer)
[2:04] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[2:08] * clayb (~kvirc@69.191.241.59) Quit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
[2:14] * dmsimard (~Adium@108.163.152.66) Quit (Ping timeout: 480 seconds)
[2:16] * mtanski (~mtanski@cpe-66-68-155-199.austin.res.rr.com) has joined #ceph
[2:19] * xarses (~andreww@64-79-127-122.static.wiline.com) Quit (Ping timeout: 480 seconds)
[2:19] * alram (~alram@38.122.20.226) Quit (Quit: leaving)
[2:21] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Read error: Operation timed out)
[2:28] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[2:32] * sagelap1 (~sage@2607:f298:a:607:6c38:5203:39d7:100f) Quit (Ping timeout: 480 seconds)
[2:35] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[2:35] * mtanski (~mtanski@cpe-66-68-155-199.austin.res.rr.com) Quit (Quit: mtanski)
[2:36] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[2:38] * xinxinsh (~xinxinsh@jfdmzpr05-ext.jf.intel.com) Quit (Ping timeout: 480 seconds)
[2:39] * rongze (~rongze@117.79.232.204) has joined #ceph
[2:40] * KevinPerks1 (~Adium@97.68.216.74) Quit (Quit: Leaving.)
[2:41] * sagelap (~sage@2600:1012:b00f:cd05:6c38:5203:39d7:100f) has joined #ceph
[2:43] * BillK (~BillK-OFT@58-7-79-238.dyn.iinet.net.au) has joined #ceph
[2:44] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[2:44] * liiwi (liiwi@idle.fi) has joined #ceph
[2:44] * Pedras (~Adium@216.207.42.140) has joined #ceph
[2:44] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[2:44] * nigwil_ (~chatzilla@2001:44b8:5144:7b00:6870:5c5b:c52e:e0cd) has joined #ceph
[2:46] * rongze (~rongze@117.79.232.204) Quit (Remote host closed the connection)
[2:46] * shang (~ShangWu@175.41.48.77) has joined #ceph
[2:49] * nigwil (~chatzilla@2001:44b8:5144:7b00:6870:5c5b:c52e:e0cd) Quit (Ping timeout: 480 seconds)
[2:49] * nigwil_ is now known as nigwil
[2:50] * ircolle (~Adium@2601:1:8380:2d9:85cf:ceb8:ba7:36a2) Quit (Quit: Leaving.)
[2:52] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Read error: Operation timed out)
[2:52] <JoeGruher> will ceph reorder IOs from the journal to the OSD to try to make them more efficient (less disk seeking)?
[2:56] * sarob (~sarob@2601:9:7080:13a:3db4:66eb:244:862b) has joined #ceph
[2:57] * mozg (~andrei@host81-151-251-29.range81-151.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:57] * ericenb (~erice@c-98-245-48-79.hsd1.co.comcast.net) Quit (Remote host closed the connection)
[2:58] * mtanski (~mtanski@cpe-66-68-155-199.austin.res.rr.com) has joined #ceph
[3:01] * rongze (~rongze@211.155.113.217) has joined #ceph
[3:02] <aarontc> JoeGruher: I'm not 100% sure, but I believe no - Ceph relies on the kernel elevator to do any reordering that might be needed
[3:09] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[3:09] * sagelap (~sage@2600:1012:b00f:cd05:6c38:5203:39d7:100f) Quit (Read error: Connection reset by peer)
[3:10] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Remote host closed the connection)
[3:10] * unis (~unis@58.213.102.114) has joined #ceph
[3:11] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[3:11] * ChanServ sets mode +o elder
[3:11] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[3:14] * rturk is now known as rturk-away
[3:16] <JoeGruher> fair enough... but the kernel elevator could then do some IO reordering between the journal and the OSD?
[3:16] * LeaChim (~LeaChim@86.162.2.255) Quit (Ping timeout: 480 seconds)
[3:16] <JoeGruher> overall i'm curious if random IO to an RBD will become more ordered by the time it hits the journal and then the OSD... running some performance testing
[3:18] <aarontc> JoeGruher: I think it depends greatly on whether you are writing or reading, and how much of your OSD system's RAM is available for disk caching
[3:18] <dmick> I/O coalescing definitely happens, at least
[3:21] <JoeGruher> thx
[3:24] * alfredodeza (~alfredode@c-98-194-83-79.hsd1.tx.comcast.net) Quit (Remote host closed the connection)
[3:26] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[3:26] * JoeGruher (~JoeGruher@jfdmzpr04-ext.jf.intel.com) Quit (Remote host closed the connection)
[3:27] * aliguori (~anthony@74.202.210.82) Quit (Remote host closed the connection)
[3:28] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[3:28] * davidzlap1 (~Adium@cpe-23-242-31-175.socal.res.rr.com) has joined #ceph
[3:29] * sagelap (~sage@cpe-23-242-158-79.socal.res.rr.com) has joined #ceph
[3:34] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[3:35] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[3:36] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[3:40] * sarob (~sarob@2601:9:7080:13a:3db4:66eb:244:862b) Quit (Remote host closed the connection)
[3:40] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[3:45] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[3:45] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[3:54] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[3:56] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[4:01] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[4:02] * sarob (~sarob@2601:9:7080:13a:1c67:72b4:94fe:546f) has joined #ceph
[4:02] * mrjack_ (mrjack@office.smart-weblications.net) has joined #ceph
[4:02] <mrjack_> hm
[4:02] <mrjack_> hi
[4:04] <mrjack_> how can i add a host bucket?
[4:04] * sjm (~sjm@pool-96-234-124-66.nwrknj.fios.verizon.net) has joined #ceph
[4:08] <mrjack_> ceph osd crush set osd.0 1 pool=default rack=unknownrack host=itsn01
[4:08] <mrjack_> Error ENOENT: unable to set item id 0 name 'osd.0' weight 1 at location {host=itsn01,pool=default,rack=unknownrack}: does not exist
[4:08] <dmick> does osd.0 exist?
[4:09] <mrjack_> yeah
[4:09] <mrjack_> look
[4:09] <mrjack_> http://pastebin.com/yDV0inE6
[4:10] <dmick> oh, sorry, you said "add a host bucket"
[4:10] * sarob (~sarob@2601:9:7080:13a:1c67:72b4:94fe:546f) Quit (Ping timeout: 480 seconds)
[4:10] <dmick> osd crush add-bucket
[4:10] <mrjack_> well i think i get the error because of the host bucket missing?
[4:11] <dmick> yeah, do add-bucket¸then move, then set, I think
[4:11] <dmick> I don't know that there's a "add it into the right place" for a bucket
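A sketch of that add-bucket / move / set sequence using mrjack_'s names; it assumes the default root and the unknownrack rack already exist in the map (create them with add-bucket too if not):

    ceph osd crush add-bucket itsn01 host
    ceph osd crush move itsn01 rack=unknownrack root=default
    ceph osd crush set osd.0 1.0 root=default rack=unknownrack host=itsn01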
[4:13] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[4:13] <mrjack_> where can i find the documentation for add-bucket?
[4:14] <dmick> with the other osd crush commands
[4:15] <dmick> or...not
[4:15] <dmick> ceph osd crush -h is how I found it
[4:16] <dmick> http://dachary.org/?p=2536 might help?
[4:17] <aarontc> I think it'd be awesome if there was a single page with a reference and explanation for essentially all the ceph CLI commands :)
[4:18] <mrjack_> hm
[4:18] <mrjack_> i still get errors
[4:18] <dmick> "ceph -h" :)
[4:18] <mrjack_> http://pastebin.com/2X0xykCs
[4:18] <aarontc> dmick: that doesn't have enough explanation. I'm thinking like a small paragraph for each one, reminding what the parameters mean, and so on, for people like me that forget things they don't use daily
[4:18] <mrjack_> node01:~# ceph osd crush set osd.0 1 root=default rack=unknownrack host=itsn01
[4:18] <mrjack_> Error ENOENT: unable to set item id 0 name 'osd.0' weight 1 at location {host=itsn01,rack=unknownrack,root=default}: does not exist
[4:18] <dmick> aarontc: I know. We can make those very-terse strings longer pretty easily.
[4:19] <mrjack_> node01:~# ceph osd crush move osd.0 root=default rack=unknownrack host=itsn01
[4:19] <mrjack_> Error ENOENT: item osd.0 does not exist
[4:19] * unis_ (~unis@58.213.102.114) has joined #ceph
[4:19] <aarontc> dmick: But I like HTML formatting and ctrl-F support, too..
[4:19] <dmick> there's no substitute for good docs. ceph was *always* hopelessly out of date, so we kinda gave up
[4:19] <dmick> but it doesn't have to stay that way
[4:20] <dmick> mrjack_: hm
[4:20] <aarontc> dmick: yeah, I am thinking a good start for a "reference" page would be to combine the operations pages for each section of the manual
[4:21] <dmick> mrjack_: the test script (at ....qa/workunits/mon/crush_ops.sh) uses osd link and osd unlink
[4:22] <mrjack_> http://pastebin.com/rZ90JMJU
[4:22] <dmick> but I also see a add-bucket/move pair
[4:23] <dmick> hm
[4:23] <dmick> device 0 device0
[4:23] * unis (~unis@58.213.102.114) Quit (Ping timeout: 480 seconds)
[4:24] <mrjack_> can i safely manually edit the crushmap to the current topology and reinject it
[4:24] <mrjack_> ?
[4:25] * linuxkidd (~linuxkidd@cpe-066-057-061-231.nc.res.rr.com) Quit (Quit: Konversation terminated!)
[4:26] <aarontc> mrjack_: yes
[4:27] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[4:28] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[4:35] * alfredodeza (~alfredode@c-98-194-83-79.hsd1.tx.comcast.net) has joined #ceph
[4:36] * sagelap (~sage@cpe-23-242-158-79.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[4:36] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[4:37] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[4:38] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[4:42] * sagelap (~sage@161.sub-70-197-82.myvzw.com) has joined #ceph
[4:43] * alfredodeza (~alfredode@c-98-194-83-79.hsd1.tx.comcast.net) Quit (Ping timeout: 480 seconds)
[4:44] <mrjack_> ok i manually corrected the crushmap and now everything seems to work well
[4:50] <aarontc> glad to hear it, mrjack_
[4:50] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[4:50] <mrjack_> \o/
[4:56] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[4:59] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) Quit (Quit: Leaving.)
[5:00] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[5:01] * fireD (~fireD@93-139-165-73.adsl.net.t-com.hr) Quit (Read error: Connection reset by peer)
[5:01] * mwarwick (~mwarwick@110-174-133-236.static.tpgi.com.au) has joined #ceph
[5:02] * mwarwick (~mwarwick@110-174-133-236.static.tpgi.com.au) Quit ()
[5:04] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[5:05] * fireD (~fireD@93-142-240-250.adsl.net.t-com.hr) has joined #ceph
[5:13] * Hakisho_ (~Hakisho@p4FC2660F.dip0.t-ipconnect.de) has joined #ceph
[5:13] * Hakisho (~Hakisho@0001be3c.user.oftc.net) Quit (Ping timeout: 480 seconds)
[5:13] * Hakisho_ is now known as Hakisho
[5:14] * L2SHO_ (~L2SHO@office-nat.choopa.net) has joined #ceph
[5:15] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[5:15] * perfectsine (~perfectsi@if01-gn01.dal05.softlayer.com) Quit (Read error: Connection reset by peer)
[5:16] * perfectsine (~perfectsi@if01-gn01.dal05.softlayer.com) has joined #ceph
[5:16] * glzhao (~glzhao@118.195.65.67) Quit (Quit: leaving)
[5:19] * mtanski (~mtanski@cpe-66-68-155-199.austin.res.rr.com) Quit (Quit: mtanski)
[5:20] * L2SHO (~L2SHO@office-nat.choopa.net) Quit (Ping timeout: 480 seconds)
[5:23] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[5:26] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[5:29] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[5:36] * mwarwick (~mwarwick@110-174-133-236.static.tpgi.com.au) has joined #ceph
[5:36] * mwarwick (~mwarwick@110-174-133-236.static.tpgi.com.au) has left #ceph
[5:37] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[5:43] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[5:46] * sagelap (~sage@161.sub-70-197-82.myvzw.com) Quit (Ping timeout: 480 seconds)
[5:48] * rongze (~rongze@211.155.113.217) Quit (Remote host closed the connection)
[5:50] * sagelap (~sage@cpe-23-242-158-79.socal.res.rr.com) has joined #ceph
[5:50] * bandrus (~Adium@107.216.174.246) Quit (Quit: Leaving.)
[5:55] * Sodo (~Sodo@a88-113-108-239.elisa-laajakaista.fi) has joined #ceph
[5:56] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[6:00] * Cube (~Cube@66-87-65-213.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[6:00] * Cube (~Cube@66-87-65-213.pools.spcsdns.net) has joined #ceph
[6:02] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[6:03] * sarob (~sarob@2601:9:7080:13a:4115:78db:b727:1d8a) has joined #ceph
[6:04] * ntranger_ (~ntranger@proxy2.wolfram.com) has joined #ceph
[6:06] * ntranger (~ntranger@proxy2.wolfram.com) Quit (Ping timeout: 480 seconds)
[6:08] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[6:10] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) has joined #ceph
[6:11] * sarob (~sarob@2601:9:7080:13a:4115:78db:b727:1d8a) Quit (Ping timeout: 480 seconds)
[6:11] * tarfik (~tarfik@aghy169.neoplus.adsl.tpnet.pl) Quit (Quit: Leaving)
[6:17] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) has joined #ceph
[6:19] * rongze (~rongze@118.186.151.57) has joined #ceph
[6:26] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[6:28] * sarob (~sarob@2601:9:7080:13a:9865:d547:89cd:4409) has joined #ceph
[6:29] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[6:30] * Cube (~Cube@66-87-65-213.pools.spcsdns.net) Quit (Quit: Leaving.)
[6:31] * rongze (~rongze@118.186.151.57) Quit (Ping timeout: 480 seconds)
[6:34] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[6:37] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[6:39] * sarob (~sarob@2601:9:7080:13a:9865:d547:89cd:4409) Quit (Remote host closed the connection)
[6:40] * sarob (~sarob@2601:9:7080:13a:9865:d547:89cd:4409) has joined #ceph
[6:43] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[6:48] * mattt_ (~textual@cpc25-rdng20-2-0-cust162.15-3.cable.virginm.net) has joined #ceph
[6:48] * sarob (~sarob@2601:9:7080:13a:9865:d547:89cd:4409) Quit (Ping timeout: 480 seconds)
[6:48] * Meths (~meths@2.25.214.231) Quit (Remote host closed the connection)
[6:48] * Meths (~meths@2.25.214.231) has joined #ceph
[6:55] * sagelap (~sage@cpe-23-242-158-79.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[6:57] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[6:57] * mattt_ (~textual@cpc25-rdng20-2-0-cust162.15-3.cable.virginm.net) Quit (Quit: Computer has gone to sleep.)
[7:00] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[7:00] * sagelap (~sage@cpe-23-242-158-79.socal.res.rr.com) has joined #ceph
[7:07] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[7:08] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[7:10] * gregmark (~Adium@68.87.42.115) has joined #ceph
[7:10] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:11] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[7:12] * sleinen1 (~Adium@2001:620:0:25:55d5:daa2:dd3e:7e9d) has joined #ceph
[7:13] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[7:19] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[7:25] * tarfik (~tarfik@aghy169.neoplus.adsl.tpnet.pl) has joined #ceph
[7:25] * tarfik (~tarfik@aghy169.neoplus.adsl.tpnet.pl) Quit ()
[7:27] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[7:28] * alram (~alram@cpe-76-167-50-51.socal.res.rr.com) has joined #ceph
[7:33] * avijaira (~avijaira@c-24-6-37-207.hsd1.ca.comcast.net) has joined #ceph
[7:33] * avijaira (~avijaira@c-24-6-37-207.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[7:38] * rongze (~rongze@117.79.232.197) has joined #ceph
[7:38] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[7:52] * haomaiwa_ (~haomaiwan@117.79.232.229) Quit (Remote host closed the connection)
[7:52] * haomaiwang (~haomaiwan@211.155.113.217) has joined #ceph
[7:53] * haomaiwang (~haomaiwan@211.155.113.217) Quit (Remote host closed the connection)
[7:53] * haomaiwang (~haomaiwan@117.79.232.229) has joined #ceph
[7:54] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[7:54] * Kioob (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) Quit (Quit: Leaving.)
[7:54] * sarob (~sarob@2601:9:7080:13a:913:1f4a:95a8:e226) has joined #ceph
[7:57] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[8:02] * sarob (~sarob@2601:9:7080:13a:913:1f4a:95a8:e226) Quit (Ping timeout: 480 seconds)
[8:03] * mattt_ (~textual@94.236.7.190) has joined #ceph
[8:04] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[8:06] * sjm (~sjm@pool-96-234-124-66.nwrknj.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[8:07] * foosinn (~stefan@office.unitedcolo.de) has joined #ceph
[8:07] * sagelap (~sage@cpe-23-242-158-79.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[8:09] * al-maisan (~al-maisan@86.188.131.84) Quit (Ping timeout: 480 seconds)
[8:10] * sagelap (~sage@cpe-23-242-158-79.socal.res.rr.com) has joined #ceph
[8:11] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[8:15] * sleinen1 (~Adium@2001:620:0:25:55d5:daa2:dd3e:7e9d) Quit (Quit: Leaving.)
[8:15] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[8:15] * sleinen1 (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[8:15] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Read error: Connection reset by peer)
[8:17] * sleinen (~Adium@2001:620:0:26:9cfe:3986:7321:c1e6) has joined #ceph
[8:21] * Pedras (~Adium@216.207.42.140) Quit (Read error: Connection reset by peer)
[8:23] * sleinen (~Adium@2001:620:0:26:9cfe:3986:7321:c1e6) Quit (Quit: Leaving.)
[8:23] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[8:23] * sleinen1 (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[8:26] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[8:29] * micha_ (~micha@hyper1.noris.net) has joined #ceph
[8:31] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[8:32] * tobru (~quassel@2a02:41a:3999::94) Quit (Remote host closed the connection)
[8:32] * tobru (~quassel@2a02:41a:3999::94) has joined #ceph
[8:36] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[8:41] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[8:42] * xinxinsh (~xinxinsh@134.134.139.72) has joined #ceph
[8:44] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) Quit (Ping timeout: 480 seconds)
[8:45] <xinxinsh> hi, everyone, i want to check rbd cache status through the rbd admin socket. i added 'admin daemon=/var/run/ceph/rbd-$pid.asok' in the client part of my ceph.conf, but i can not find rbd-$pid.asok in /var/run/ceph. is there any guide to enable the rbd admin socket?
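The option is spelled 'admin socket' rather than 'admin daemon'. A minimal client-side sketch (the path and metavariables are just one common choice, and the client process needs write access to the directory):

    [client]
        rbd cache = true
        admin socket = /var/run/ceph/$name.$pid.asok

    # then, against the running client process:
    ceph --admin-daemon /var/run/ceph/<name>.<pid>.asok config show | grep rbd_cache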
[8:56] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[8:57] * rendar (~s@host28-176-dynamic.22-79-r.retail.telecomitalia.it) has joined #ceph
[9:03] * sleinen (~Adium@2001:620:0:2d:c4c4:b25e:49e1:2c8) has joined #ceph
[9:04] * xinxinsh_ (~xinxinsh@134.134.139.72) has joined #ceph
[9:04] * thomnico (~thomnico@2a01:e35:8b41:120:c997:4318:36e4:a3c8) has joined #ceph
[9:06] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[9:07] * sleinen1 (~Adium@130.59.94.71) has joined #ceph
[9:10] * xinxinsh (~xinxinsh@134.134.139.72) Quit (Quit: Leaving)
[9:11] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[9:11] * ChanServ sets mode +v andreask
[9:11] * sleinen (~Adium@2001:620:0:2d:c4c4:b25e:49e1:2c8) Quit (Ping timeout: 480 seconds)
[9:15] * fouxm (~fouxm@185.23.92.11) has joined #ceph
[9:15] * sleinen1 (~Adium@130.59.94.71) Quit (Ping timeout: 480 seconds)
[9:15] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:17] * sagelap (~sage@cpe-23-242-158-79.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[9:18] * yuan (~yuanz@jfdmzpr02-ext.jf.intel.com) has joined #ceph
[9:20] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[9:20] * sagelap (~sage@cpe-23-242-158-79.socal.res.rr.com) has joined #ceph
[9:26] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[9:26] * dxd828 (~dxd828@212.183.128.228) has joined #ceph
[9:28] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[9:37] * dxd828 (~dxd828@212.183.128.228) Quit (Quit: Computer has gone to sleep.)
[9:41] * sleinen (~Adium@2001:620:0:25:910f:5d83:45e2:a9) has joined #ceph
[9:44] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[9:45] <Kioob`Taff> Hi
[9:46] <Kioob`Taff> one note: is it possible that when a cluster is full (an OSD is full, so the whole cluster stops), some OSDs behave badly and corrupt data?
[9:47] <Kioob`Taff> I'm not talking about a FS error like with a "power outage"
[9:48] <Kioob`Taff> some FS are not mountable anymore, because their headers are completely zeroed
[9:48] <Kioob`Taff> and RBD snapshots report the same corruption
[9:50] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[9:52] * sarob_ (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[9:52] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[9:56] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[9:56] * dmsimard (~Adium@108.163.152.66) has joined #ceph
[9:56] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) has joined #ceph
[9:59] * mschiff (~mschiff@tmo-100-167.customers.d1-online.com) has joined #ceph
[10:00] * sarob_ (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[10:03] * sleinen (~Adium@2001:620:0:25:910f:5d83:45e2:a9) Quit (Quit: Leaving.)
[10:03] * sleinen (~Adium@130.59.94.71) has joined #ceph
[10:05] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[10:10] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[10:10] * dlan (~dennis@116.228.88.131) Quit (Quit: Lost terminal)
[10:11] * sleinen (~Adium@130.59.94.71) Quit (Ping timeout: 480 seconds)
[10:13] * sagelap (~sage@cpe-23-242-158-79.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[10:17] * dmsimard (~Adium@108.163.152.66) Quit (Ping timeout: 480 seconds)
[10:18] * allsystemsarego (~allsystem@5-12-240-115.residential.rdsnet.ro) has joined #ceph
[10:24] * dlan (~dennis@116.228.88.131) has joined #ceph
[10:26] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[10:32] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:35] * LeaChim (~LeaChim@host86-162-2-255.range86-162.btcentralplus.com) has joined #ceph
[10:35] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[10:38] * sleinen (~Adium@2001:620:0:25:1cdf:b2f:3207:437c) has joined #ceph
[10:40] * foosinn (~stefan@office.unitedcolo.de) Quit (Read error: Connection reset by peer)
[10:41] * foosinn (~stefan@office.unitedcolo.de) has joined #ceph
[10:50] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[10:53] * trond (~trond@trh.betradar.com) has joined #ceph
[10:54] * trond (~trond@trh.betradar.com) Quit ()
[10:56] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[10:57] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[11:01] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[11:04] * thomnico (~thomnico@2a01:e35:8b41:120:c997:4318:36e4:a3c8) Quit (Quit: Ex-Chat)
[11:05] * BillK (~BillK-OFT@58-7-79-238.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[11:06] * BillK (~BillK-OFT@106-69-70-240.dyn.iinet.net.au) has joined #ceph
[11:15] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[11:17] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[11:27] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[11:29] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[11:29] * rongze (~rongze@117.79.232.197) Quit (Remote host closed the connection)
[11:41] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[11:44] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Read error: Connection reset by peer)
[11:44] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[11:45] * mschiff_ (~mschiff@tmo-100-167.customers.d1-online.com) has joined #ceph
[11:45] * mschiff (~mschiff@tmo-100-167.customers.d1-online.com) Quit (Read error: Connection reset by peer)
[11:46] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) has joined #ceph
[11:50] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[11:57] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[11:58] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[11:58] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[11:58] * ChanServ sets mode +v andreask
[12:00] * tarfik (~tarfik@aghy169.neoplus.adsl.tpnet.pl) has joined #ceph
[12:09] * tarfik (~tarfik@aghy169.neoplus.adsl.tpnet.pl) Quit (Quit: Leaving)
[12:10] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[12:11] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[12:12] * jamespage (~jamespage@culvain.gromper.net) Quit (Quit: Coyote finally caught me)
[12:12] * jamespage (~jamespage@culvain.gromper.net) has joined #ceph
[12:17] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[12:18] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) Quit (Quit: jlogan)
[12:19] * ScOut3R (~ScOut3R@212.96.46.212) has joined #ceph
[12:24] * hijacker (~hijacker@bgva.sonic.taxback.ess.ie) has joined #ceph
[12:25] * alaind (~dechorgna@161.105.182.35) Quit (Ping timeout: 480 seconds)
[12:27] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[12:36] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[12:41] * fireD_ (~fireD@93-142-247-39.adsl.net.t-com.hr) has joined #ceph
[12:43] * fireD (~fireD@93-142-240-250.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[12:46] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[12:46] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[12:46] * ChanServ sets mode +v andreask
[12:47] * alram (~alram@cpe-76-167-50-51.socal.res.rr.com) Quit (Quit: leaving)
[12:50] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[12:57] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[12:58] * glzhao (~glzhao@118.195.65.67) has joined #ceph
[13:01] * mschiff_ (~mschiff@tmo-100-167.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[13:02] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[13:07] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[13:14] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[13:15] * rongze (~rongze@117.79.232.236) has joined #ceph
[13:16] * mozg (~andrei@host217-46-236-49.in-addr.btopenworld.com) has joined #ceph
[13:16] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[13:22] <baffle> What is the correct way of adding an OSD to the crush map after it has been created with --mkfs ?
[13:24] <baffle> When I do a "ceph --keyring KEYRING auth add osd.X -i $OSD_DATA/keyring osd "allow *" mon "allow profile osd" --- Are the keyring entries mirrored to other hosts? Or is it a local operation?
[13:25] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) has joined #ceph
[13:27] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[13:27] * jskinner (~jskinner@50-80-52-210.client.mchsi.com) has joined #ceph
[13:31] <mikedawson> baffle: the KEYRING you have in caps should be the cluster's ceph.client.admin.keyring that the monitors already recognize. the $OSD_DATA/keyring is the new OSD's keyring that will be added to the monitors
[13:31] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[13:32] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[13:33] <baffle> mikedawson: Right now I'm pointing KEYRING to /etc/ceph/keyring which contains "client.admin" and "mon.".. Is that wrong? Also, I see that the magic tools seem to use an osd-bootstrap entry as well.. But is this a local operation?
[13:34] <baffle> And where are auth entries saved? If I do "ceph auth list" I see all the keys I've added on all the hosts, but it's not saved into /etc/ceph/keyring; I guess it shouldn't be? But where is it saved? :)
[13:35] <baffle> client.bootstrap-osd I mean.
[13:39] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[13:40] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[13:42] <mikedawson> baffle: if you can do ceph cli commands without specifying a keyring (like ceph -s), you don't need the --keyring flag at all
[13:42] <baffle> Should I even use /etc/ceph/keyring anymore? Or should there be separate keyrings for client.admin, mon. ?
[13:46] <baffle> If I do: ceph auth add osd.0 -i <path_to_osd>/keyring osd 'allow *' mon 'allow profile osd'
[13:47] <baffle> I see that these now exists on other nodes when I do "ceph auth list". But I don't understand where they are saved. :)
[13:50] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[13:51] <mikedawson> baffle: the monitors
[13:51] <mikedawson> in their leveldb key value store, I believe
[13:55] <baffle> So, the keyrings in /etc/ceph/ are just for initial configuration or to auth to the cluster from a client I guess.
[13:57] <mikedawson> baffle: correct
[13:57] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[13:57] <baffle> So, after I've created an OSD with --mkfs --mkkey and added the OSDs key to the cluster; How does the OSD actually get added to the cluster? Are there other prerequisites?
[13:58] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[14:00] <baffle> If I try to start the OSD right now, I just get: Error ENOENT: osd.2 does not exist. create it before updating the crush map
[14:01] <baffle> failed: 'timeout 10 /usr/bin/ceph --name=osd.2 --keyring=/etc/ceph/keyring osd crush create-or-move -- 2 0.02 root=default host=node-5254001668ff
[14:01] <micha_> baffle: try "ceph osd create"
[14:02] <baffle> (This is from /etc/init.d/ceph start osd.2)
[14:02] <baffle> Oooh.. I thought it read that from ceph.conf?
[14:04] <micha_> i don't think so, the osds have to be created in the monmap, which is separate from the ceph.conf
[14:05] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[14:06] <micha_> i think ceph.conf is mainly used by daemons to get their own info (their section); they don't look into the other sections of ceph.conf
[14:06] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[14:06] * LPG (~LPG@c-76-104-197-224.hsd1.wa.comcast.net) Quit (Read error: Operation timed out)
[14:06] <baffle> When I created the OSD I did a "mon getmap -o monmap" and used that map when creating the OSD with "--monmap monmap".. Does this mean that it doesn't "do" anything with that map?
[14:08] <baffle> I kind of assumed it would somehow use the map to alert the monitors of its presence.
[14:09] * CephLogBot (~PircBot@92.63.168.213) has joined #ceph
[14:09] * Topic is 'For CDS join #ceph-summit || CDS Firefly Schedule available! http://goo.gl/LOhq3O || Latest stable (v0.72.0 "Emperor") -- http://ceph.com/get || dev channel #ceph-devel '
[14:09] * Set by scuttlemonkey!~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net on Mon Nov 25 21:30:50 CET 2013
[14:13] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[14:13] * LPG (~LPG@c-76-104-197-224.hsd1.wa.comcast.net) has joined #ceph
[14:14] <micha_> how did you create the osd?
[14:15] * Remy (~Remy@ks3292820.kimsufi.com) Quit (Quit: ZNC - http://znc.in)
[14:16] * mattt (~mattt@lnx1.defunct.ca) Quit (Remote host closed the connection)
[14:16] * mattt_ is now known as mattt
[14:17] * rongze (~rongze@117.79.232.236) Quit (Remote host closed the connection)
[14:17] * mattt_ (~mattt@lnx1.defunct.ca) has joined #ceph
[14:20] <baffle> micha_: "ceph-osd -i <osdnum> --mkfs --mkkey" initially. Then I tried one where I specified a --monmap as well, but that didn't do much. :)
[14:21] <baffle> (I've ignored journal for now, this is just for testing until I set it up with "proper" hardware)
[14:22] <baffle> I'm trying to automate it all with SaltStack.
[14:27] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[14:29] * rongze (~rongze@211.155.113.166) has joined #ceph
[14:31] <baffle> So, whenever I want to add an OSD; I should use "ceph osd create" if it doesn't exist already? I.e. I should check by doing "ceph osd ls" and see if it already is created, and if it isn't, do a create? :)
[14:32] * dmsimard1 (~Adium@70.38.0.248) has joined #ceph
[14:32] * dmsimard (~Adium@108.163.152.2) Quit (Read error: Connection reset by peer)
[14:34] <micha_> i think so
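For reference, the manual add-an-OSD sequence of that era is roughly the following sketch; the data path and crush location are placeholders and the weight is arbitrary:

    OSD_ID=$(ceph osd create)                       # reserve the id in the osd map first
    ceph-osd -i $OSD_ID --mkfs --mkkey
    ceph auth add osd.$OSD_ID osd 'allow *' mon 'allow profile osd' \
        -i /var/lib/ceph/osd/ceph-$OSD_ID/keyring
    ceph osd crush add osd.$OSD_ID 1.0 root=default host=$(hostname -s)
    /etc/init.d/ceph start osd.$OSD_ID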
[14:41] * dmsimard1 (~Adium@70.38.0.248) Quit (Ping timeout: 480 seconds)
[14:41] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[14:42] * markbby (~Adium@168.94.245.3) has joined #ceph
[14:42] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[14:50] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[14:51] * mschiff (~mschiff@tmo-100-167.customers.d1-online.com) has joined #ceph
[14:56] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[14:57] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[14:57] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[14:57] * markbby (~Adium@168.94.245.3) has joined #ceph
[14:58] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[14:58] <mozg> hello guys
[14:58] * LeaChim (~LeaChim@host86-162-2-255.range86-162.btcentralplus.com) Quit (Read error: Operation timed out)
[14:58] <mozg> does anyone know if there is a way to check how much data is used by each user in radosgw/s3 service?
[15:00] * madkiss (~madkiss@2001:6f8:12c3:f00f:9461:c7d4:cf12:bdb5) Quit (Ping timeout: 480 seconds)
[15:03] * madkiss (~madkiss@2001:6f8:12c3:f00f:5da7:ce67:7b59:2027) has joined #ceph
[15:07] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) Quit (Remote host closed the connection)
[15:08] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[15:08] <micha_> mozg: "radosgw-admin bucket stats" shows you bucket usage, and bucket owner
[15:08] * gucki (~smuxi@77-56-39-154.dclient.hispeed.ch) has joined #ceph
[15:09] * linuxkidd (~linuxkidd@cpe-066-057-061-231.nc.res.rr.com) has joined #ceph
[15:11] * thomnico (~thomnico@37.163.101.243) has joined #ceph
[15:14] <mozg> micha_, nice one, thanks
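For example (the bucket name and uid are placeholders):

    radosgw-admin bucket stats --bucket=mybucket    # size and object counts for one bucket
    radosgw-admin bucket stats --uid=someuser       # stats for every bucket owned by a user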
[15:15] * gucki_ (~gucki@77-56-39-154.dclient.hispeed.ch) has joined #ceph
[15:16] <xevwork> I've heard rumors that btrfs is now production-ready (aside from RAID5/6) as of 3.12. Is it a reliable backend for Ceph yet?
[15:17] <mozg> xevwork, as far as i know ceph still recommends using xfs for production
[15:17] <mozg> one of the reasons, as discovered through testing by ceph community members, is that btrfs performance degrades over time
[15:17] <xevwork> Oh that's no good.
[15:18] <mozg> however, there are people here with better knowledge
[15:18] <xevwork> What about Ceph on ZFS?
[15:18] <mozg> i suggest asking this question to Sage
[15:18] <mozg> i've done that a few months back when he was in London at the ceph day
[15:19] <linuxkidd> Emperor brings initial support for ZFS, however none of the special capabilities of ZFS are implemented yet.
[15:19] <mozg> and he said that they are still recommending xfs
[15:20] <mozg> xevwork, I would love to see ceph + zfs working
[15:20] <mozg> especially the caching capabilities of zfs
[15:20] <linuxkidd> Re: BTRFS - I have the same understanding.. perf degrades to ~ 15% (iirc) of its original speed over time (measured in days)
[15:20] <xevwork> Wow, that's bad.
[15:21] <mozg> xevwork, well, if the performance is much better in the first place, losing 15% is not too bad
[15:21] <linuxkidd> That may be fixed in newest BTRFS releases... but I've not dug into that part
[15:21] <mozg> i've seen benchmarks and it is far better for small blocks sizes
[15:21] <linuxkidd> no no... not losing 15%, degrading TO 15%
[15:21] * ScOut3R (~ScOut3R@212.96.46.212) Quit (Ping timeout: 480 seconds)
[15:21] <xevwork> Yeah, that's pretty shocking. Got any links I can read about that?
[15:21] <kraken> http://i.imgur.com/wkY1FUI.gif
[15:22] <xevwork> kraken: haha
[15:22] <xevwork> Perfect :)
[15:25] <linuxkidd> re: links... these details were relayed to me verbally... I'm looking for something online, but coming up short.
[15:25] <linuxkidd> Only thing I can find is on the filesystem-recommendations page at ceph...
[15:25] <linuxkidd> "We recommend btrfs for testing, development, and any non-critical deployments. We believe that btrfs has the correct feature set and roadmap to serve Ceph in the long-term, but XFS and ext4 provide the necessary stability for today’s deployments."
[15:25] <linuxkidd> http://ceph.com/docs/master/rados/configuration/filesystem-recommendations/
[15:25] * BillK (~BillK-OFT@106-69-70-240.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[15:26] <xevwork> Yeah, I've heard about this several times, but haven't been able to find any benchmarks. I'm not doubting, I'm just curious.
[15:26] <linuxkidd> my understanding is that the issue is cleared by blowing away the OSD, laying down new btrfs, and then back-filling..
[15:26] <linuxkidd> and you'll get the normal (fast) performance for another few days before it degrades again
[15:26] <xevwork> Another *few days*?! It's that fast/
[15:27] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[15:27] <linuxkidd> Ya, understood... I'd like to see it in writing myself.. esp if there's any data on the reason for the degradation
[15:27] <scuttlemonkey> The stream for the Ceph Developer Summit will be up in a few minutes (for a 7a PST start in ~33mins)
[15:29] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[15:29] <linuxkidd> xevwork: http://ceph.com/community/ceph-performance-part-2-write-throughput-without-ssd-journals/
[15:29] * sarob_ (~sarob@2601:9:7080:13a:bc54:74c5:e305:b452) has joined #ceph
[15:29] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[15:31] <linuxkidd> xevwork: in the Conclusion, there is a statement of: "In the future we’ll want to compare the filesystems in more depth and look at how performance changes over time (Hint: BTRFS small write performance tends to degrade rather quickly)."
[15:32] * jskinner (~jskinner@50-80-52-210.client.mchsi.com) Quit (Remote host closed the connection)
[15:33] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[15:34] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[15:39] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[15:44] * rturk-away is now known as rturk
[15:44] * xinxinsh (~xinxinsh@134.134.139.76) has joined #ceph
[15:45] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[15:48] * jskinner (~jskinner@69.170.148.179) has joined #ceph
[15:49] * alfredodeza (~alfredode@c-98-194-83-79.hsd1.tx.comcast.net) has joined #ceph
[15:50] * allsystemsarego (~allsystem@5-12-240-115.residential.rdsnet.ro) Quit (Quit: Leaving)
[15:50] * L2SHO_ is now known as L2SHO
[15:51] <L2SHO> I have 33 pg's stuck in active+remapped+backfill_toofull but as far as I can tell I still have at least 200GB free on each OSD. How should I diagnose this?
[15:53] * KevinPerks (~Adium@rrcs-67-78-170-22.se.biz.rr.com) has joined #ceph
[15:54] * mtanski (~mtanski@cpe-66-68-155-199.austin.res.rr.com) has joined #ceph
[15:55] <linuxkidd> L2SHO: http://ceph.com/docs/master/dev/osd_internals/recovery_reservation/ <- states that "This reservation CAN be rejected, for instance if the OSD is too full (osd_backfill_full_ratio config option)."
[15:55] <linuxkidd> I'd check the full ratio config option..
[15:56] <linuxkidd> maybe 200g is a small amount compared to the size of the drive and that ratio is preventing backfill
[15:56] <linuxkidd> otherwise, I'd simply replace the OSD and allow backfilling to it
[15:56] * vata (~vata@2607:fad8:4:6:bc7a:b74e:6288:1ba6) has joined #ceph
[15:57] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:57] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[16:00] * ircolle (~Adium@2601:1:8380:2d9:9028:4e68:f884:294f) has joined #ceph
[16:00] * alaind (~dechorgna@161.105.182.35) has joined #ceph
[16:01] <L2SHO> linuxkidd, Is there a way to query my running cluster to see what those backfill settings are set to? ceph config-key get doesn't seem to work
[16:02] <mikedawson> L2SHO: ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show
[16:02] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[16:04] <mikedawson> L2SHO: look for osd_backfill_full_ratio
[16:05] <L2SHO> aha, osd_backfill_full_ratio is set to 0.85, while the mon_osd_full_ratio is set to 0.95
[16:05] <L2SHO> that's why it's not finishing
[16:06] * EWDurbin (~ernestd@ewd3do.ernest.ly) has joined #ceph
[16:06] <EWDurbin> good morning
[16:07] <EWDurbin> i'm currently trialling CephFS and GlusterFS to act as the backend shared storage for new and improved PyPI infrastructure
[16:07] <L2SHO> is there a way I can change osd_backfill_full_ratio to 0.90 on a live cluster?
[16:07] <EWDurbin> it appears CephFS is still warned as being 'not prod ready'
[16:07] <EWDurbin> trying to understand if it's best back off for now, or if our use case might be within the realm of what CephFS can handle
[16:08] <ircolle> EWDurbin - there are some using it in production, but I'd say give it 9-12 months. What's your use case?
[16:08] <EWDurbin> ircolle, the webservers for the Python Package Index need shared access to the tars/wheels/exes/docs etc
[16:09] <EWDurbin> we currently have some lame ass DRBD setup
[16:09] * fouxm_ (~fouxm@185.23.92.11) has joined #ceph
[16:09] * fouxm_ (~fouxm@185.23.92.11) Quit (Remote host closed the connection)
[16:09] * fouxm (~fouxm@185.23.92.11) Quit (Read error: Connection reset by peer)
[16:09] <saturnine> win 2
[16:09] <ircolle> heh
[16:09] * fouxm (~fouxm@185.23.92.11) has joined #ceph
[16:10] <EWDurbin> basically, infrequent writes (someone uploads a package), medium reads (package pulled from FS when not in CDN cache)
[16:10] * thomnico (~thomnico@37.163.101.243) Quit (Read error: Operation timed out)
[16:10] <EWDurbin> in the long term, we'd like to go to an object store, but PyPI codebase isn't exactly ready for that
[16:11] <alfredodeza> hi EWDurbin :)
[16:11] <EWDurbin> and warehouse is on its way through development, and will be wired up to use the S3 interface most likely
[16:11] <EWDurbin> good morning alfredodeza
[16:11] <alfredodeza> good to see you here
[16:11] * fouxm_ (~fouxm@185.23.92.11) has joined #ceph
[16:12] <EWDurbin> need to get the inside scoop on whether i'm an idiot to use ceph-fuse/CephFS as PyPI's shared storage :-/
[16:12] * fouxm_ (~fouxm@185.23.92.11) Quit (Remote host closed the connection)
[16:12] * mtanski (~mtanski@cpe-66-68-155-199.austin.res.rr.com) Quit (Quit: mtanski)
[16:12] <ircolle> EWDurbin have you looked at RGW's interface to RADOS for warehouse?
[16:12] <mikedawson> L2SHO: ceph tell osd.* injectargs '--osd_backfill_full_ratio 0.95'
[16:12] * fouxm (~fouxm@185.23.92.11) Quit (Read error: Connection reset by peer)
[16:13] * fouxm (~fouxm@185.23.92.11) has joined #ceph
[16:13] <mozg> I am having an issue with the radosgw S3 api. It does not accept file names with the character + in the name
[16:13] <mozg> is there a way around it apart from renaming all files where + is present?
[16:13] <EWDurbin> ircolle: i'm just the infra guy, donald stufft and I will be working on the warehouse infrastructure once i get the bleeding stopped on the existing PyPI
[16:13] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[16:13] <EWDurbin> which assumes a file system... not anything fancy
[16:14] <L2SHO> mikedawson, is that the same as "ceph-admin-daemon /var/run/ceph/ceph-osd.0.asok config set osd_backfill_full_ratio 0.90" ? I already ran that
[16:14] <ircolle> EWDurbin, we can't yet fsck
[16:15] <mikedawson> L2SHO: that command will only set it on osd.0, but conceptually they do the same thing
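To read the two thresholds L2SHO compared against one running OSD (osd.0 here is just an example):

    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config get osd_backfill_full_ratio
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config get mon_osd_full_ratio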
[16:15] <EWDurbin> ircolle, that's a bummer. is that the only blocker?
[16:15] <EWDurbin> we keep good checksums of everything that's fiddled out to disk and can always recover an individual package from backup or a mirror
[16:16] <EWDurbin> or are we talking massive crushing loss?
[16:17] * john_barbee (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[16:17] <ircolle> EWDurbin - we have users with it in production, but I'd give it another 9-12 months. It's primarily stability issues.
[16:17] <EWDurbin> sad to hear that
[16:17] <dmsimard> EWDurbin: I'm in the same boat :(
[16:18] <alfredodeza> EWDurbin: why not just use the rados gateway and start treating the PyPI packages as objects that can be saved/retrieved with the S3 API ?
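A rough sketch of what that could look like against the gateway; the user id, bucket and file names below are made up, and the S3 client still has to be pointed at the gateway host in its own configuration:

    # create an S3-style user on the rados gateway
    radosgw-admin user create --uid=pypi --display-name="PyPI package store"
    # then any S3-compatible client can store and fetch packages, e.g. with s3cmd
    # (host_base/host_bucket in ~/.s3cfg set to the gateway):
    #   s3cmd mb s3://packages
    #   s3cmd put somepackage-1.0.tar.gz s3://packages/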
[16:18] <ircolle> EWDurbin - http://tracker.ceph.com/projects/cephfs/issues?set_filter=1&tracker_id=1
[16:18] <dmsimard> EWDurbin: Funny enough, my use case is also about mirrors
[16:18] <ircolle> Those are the currently open tickets
[16:18] <EWDurbin> alfredodeza https://bitbucket.org/pypa/pypi
[16:18] <EWDurbin> ;)
[16:18] * alfredodeza looks
[16:18] * alaind (~dechorgna@161.105.182.35) Quit (Ping timeout: 480 seconds)
[16:18] * xcrracer (~xcrracer@fw-ext-v-1.kvcc.edu) has joined #ceph
[16:18] <EWDurbin> if you want a development environment https://github.com/python/pypi-salt
[16:19] * sarob_ (~sarob@2601:9:7080:13a:bc54:74c5:e305:b452) Quit (Remote host closed the connection)
[16:20] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[16:21] * xcrracer (~xcrracer@fw-ext-v-1.kvcc.edu) Quit (Remote host closed the connection)
[16:21] * xcrracer (~xcrracer@fw-ext-v-1.kvcc.edu) has joined #ceph
[16:24] * micha_ (~micha@hyper1.noris.net) Quit (Quit: leaving)
[16:25] * gregphone (~gregphone@66-87-65-72.pools.spcsdns.net) has joined #ceph
[16:26] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[16:28] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:30] * thomasth (~thomasth@74.125.121.65) has joined #ceph
[16:31] * gregphone (~gregphone@66-87-65-72.pools.spcsdns.net) Quit (Quit: Rooms • iPhone IRC Client • http://www.roomsapp.mobi)
[16:31] * gregphone (~gregphone@66-87-65-72.pools.spcsdns.net) has joined #ceph
[16:32] * xinxinsh (~xinxinsh@134.134.139.76) Quit (Quit: Leaving)
[16:34] * kraken (~kraken@gw.sepia.ceph.com) Quit (Remote host closed the connection)
[16:35] * kraken (~kraken@gw.sepia.ceph.com) has joined #ceph
[16:36] * gregphone (~gregphone@66-87-65-72.pools.spcsdns.net) Quit ()
[16:36] * gregphone (~gregphone@66-87-65-72.pools.spcsdns.net) has joined #ceph
[16:37] * KevinPerks1 (~Adium@97.68.216.74) has joined #ceph
[16:38] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[16:39] * kraken (~kraken@gw.sepia.ceph.com) Quit (Remote host closed the connection)
[16:39] * kraken (~kraken@gw.sepia.ceph.com) has joined #ceph
[16:39] * KevinPerks2 (~Adium@97.68.216.74) has joined #ceph
[16:39] * KevinPerks1 (~Adium@97.68.216.74) Quit (Read error: Connection reset by peer)
[16:41] * kraken (~kraken@gw.sepia.ceph.com) Quit (Remote host closed the connection)
[16:41] <EWDurbin> alfredodeza: is it done yet?
[16:41] <EWDurbin> :-p
[16:41] * kraken (~kraken@gw.sepia.ceph.com) has joined #ceph
[16:41] <alfredodeza> EWDurbin: I am not trying it right now :(
[16:41] <alfredodeza> in the Ceph Deployment Session at the moment
[16:42] <EWDurbin> alfredodeza: was joking. but current PyPI is pretty tied to a FS
[16:42] <alfredodeza> I see
[16:42] * gkoch (~gkoch@38.86.161.178) Quit (Ping timeout: 480 seconds)
[16:42] <EWDurbin> so i think we're going to either go with CephFS and cross our fingers, or GlusterFS for now
[16:42] * kraken (~kraken@gw.sepia.ceph.com) Quit (Remote host closed the connection)
[16:42] * kraken (~kraken@gw.sepia.ceph.com) has joined #ceph
[16:44] * kraken (~kraken@gw.sepia.ceph.com) Quit (Remote host closed the connection)
[16:44] * kraken (~kraken@gw.sepia.ceph.com) has joined #ceph
[16:44] * KevinPerks (~Adium@rrcs-67-78-170-22.se.biz.rr.com) Quit (Ping timeout: 480 seconds)
[16:44] * tsnider (~tsnider@nat-216-240-30-23.netapp.com) has joined #ceph
[16:46] <tsnider> Is there any capability in ceph to "bring up the cluster" (start monitor and osds on all nodes) from a single monitor node -- instead of starting services on each node individually?
[16:46] * gregphone (~gregphone@66-87-65-72.pools.spcsdns.net) Quit (Quit: Rooms • iPhone IRC Client • http://www.roomsapp.mobi)
[16:47] * gregphone (~gregphone@66-87-65-72.pools.spcsdns.net) has joined #ceph
[16:48] * jcfischer (~fischer@macjcf.switch.ch) has joined #ceph
[16:48] <ccourtaut> tsnider: I don't see the point here: if your monitors and osds are not running, I don't see how they would communicate
[16:48] <Kioob`Taff> tsnider: the init script has the option "--allhosts", if ssh is properly set up
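A minimal sketch of that, assuming the sysvinit script and working passwordless ssh from the node it is run on:

    # start every daemon listed in ceph.conf on all hosts
    sudo service ceph -a start
    # long form of the same option
    sudo /etc/init.d/ceph --allhosts start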
[16:52] <stj> hi all, is it OK/recommended to increase the number of pg's on the default pools? (data, metadata, rbd)
[16:52] * gregphone (~gregphone@66-87-65-72.pools.spcsdns.net) Quit ()
[16:52] <stj> ceph is in a warning state which goes away if I increase them, but wasn't sure if it's OK to do so
[16:52] <stj> think there's 64 pgs by default for those pools on my 12-osd cluster
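For reference, the PG count is raised per pool, and pgp_num has to be raised to match pg_num afterwards; 256 below is only an illustrative value for a cluster of roughly that size:

    ceph osd pool set data pg_num 256
    ceph osd pool set data pgp_num 256
    ceph osd pool set metadata pg_num 256
    ceph osd pool set metadata pgp_num 256
    ceph osd pool set rbd pg_num 256
    ceph osd pool set rbd pgp_num 256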
[16:53] <mozg> is anyone using radosgw with reverse proxy?
[16:53] * sagewk1 is now known as sagewk
[16:53] <mozg> i was hoping to pick your brains on the reverse proxy configuration
[16:53] <mozg> i am having some issues with using nginx as a reverse proxy - in particular for uploading large files
[16:54] <mozg> i can't seem to upload files larger than 1.3GB in size
[16:54] <mozg> smaller files are working perfectly well
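A hedged sketch of the nginx directives that most often cap large uploads through a reverse proxy; the conf.d path is an assumption about the nginx layout and the values are examples to tune:

    sudo tee /etc/nginx/conf.d/radosgw-upload.conf >/dev/null <<'EOF'
    client_max_body_size 0;      # 0 removes the request body size limit (default is 1m)
    proxy_buffering      off;    # do not spool radosgw responses to temp files
    proxy_read_timeout   300s;   # give large transfers time to finish
    EOF
    sudo nginx -t && sudo service nginx reload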
[16:55] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) has joined #ceph
[16:56] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[16:56] * mattt (~textual@94.236.7.190) Quit (Quit: Textual IRC Client: www.textualapp.com)
[17:01] * bandrus (~Adium@107.216.174.246) has joined #ceph
[17:04] <ccourtaut> unable to get the stream on youtube, did we moved to another video track for next session?
[17:04] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[17:04] * glzhao (~glzhao@118.195.65.67) Quit (Quit: leaving)
[17:04] <tsnider> ccourtaut: This would be after all nodes were rebooted.
[17:04] <ccourtaut> ok. just needed to ask to get it back...
[17:05] <ccourtaut> do not type while talking on hangout also
[17:06] <tsnider> What's the solution for "permission denied" errors during activate? I usually completely tear down and re-set up the cluster but that seems like overkill.
[17:06] <tsnider> 2013-11-26 08:03:49.859301 7fc4f20a9700 0 librados: client.bootstrap-osd authentication error (1) Operation not permitted
[17:06] <tsnider> [swift15][ERROR ] Error connecting to cluster: PermissionError
[17:06] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) Quit (Quit: Leaving)
[17:07] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) has joined #ceph
[17:12] * clayb (~kvirc@proxy-nj1.bloomberg.com) has joined #ceph
[17:12] * KevinPerks (~Adium@97.68.216.74) has joined #ceph
[17:12] * KevinPerks2 (~Adium@97.68.216.74) Quit (Read error: Connection reset by peer)
[17:12] * foosinn (~stefan@office.unitedcolo.de) Quit (Quit: Leaving)
[17:14] * sjustlaptop (~sam@38.122.20.226) has joined #ceph
[17:14] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has left #ceph
[17:14] * mtanski (~mtanski@cpe-66-68-155-199.austin.res.rr.com) has joined #ceph
[17:16] <tsnider> ccourtaut: ?
[17:17] * thomnico (~thomnico@37.162.175.246) has joined #ceph
[17:17] <ccourtaut> tsnider: good question, have you got the client.bootstrap-osd keyring created?
[17:17] <ccourtaut> should be in /var/lib/ceph/bootstrap-osd
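A hedged sketch of putting that keyring back in place; the first command needs admin credentials on the node it runs on, and mon1 is a hypothetical monitor host name:

    # pull the bootstrap-osd key straight from the cluster onto the OSD node
    ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring
    # or, on the ceph-deploy admin node, fetch the keys from a monitor again
    ceph-deploy gatherkeys mon1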
[17:17] * jcfischer (~fischer@macjcf.switch.ch) Quit (Quit: jcfischer)
[17:19] * dvanders (~dvanders@137.138.33.84) Quit (Ping timeout: 480 seconds)
[17:19] * dxd828 (~dxd828@195.191.107.205) has joined #ceph
[17:20] <via> should it be possible to have a negative number of degraded objects?
[17:20] <via> "health HEALTH_WARN 9000 pgs stuck unclean; recovery -1/13134279 objects degraded (-0.000%)"
[17:20] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:21] <tsnider> ccourtaut: yeah all that seemed to go fine. 2 of the 10 nodes have different keyrings.
[17:22] * dxd828 (~dxd828@195.191.107.205) Quit ()
[17:22] <ccourtaut> tsnider: seems odd to have different keyrings
[17:23] <ccourtaut> at least as far as my understanding goes, this keyring is created to be able to communicate over the cluster through ceph-deploy
[17:23] <via> furthermore, i have a whole pools worth of PGs that are stuck unclean and look like: https://pastee.org/9refh
[17:23] <ccourtaut> to solve the chicken-and-egg problem
[17:23] <via> where backfill target is -1
[17:23] <baffle> How is client.bootstrap-osd created?
[17:25] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[17:25] <alfredodeza> ccourtaut: ceph-deploy does not need any keyrings :) Your cluster does
[17:26] <ccourtaut> ccourtaut: yes indeed :)
[17:26] <baffle> I guess it is /etc/init/ceph-create-keys.conf -> /usr/sbin/ceph-create-keys .. But how does it sync? :)
[17:26] * ccourtaut need more energy
[17:26] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[17:27] * mschiff (~mschiff@tmo-100-167.customers.d1-online.com) Quit (Quit: http://quassel-irc.org - Chat comfortably. Anywhere.)
[17:30] * thomasth (~thomasth@74.125.121.65) has left #ceph
[17:31] <baffle> How do single OSD servers get the osd bootstrap key?
[17:32] * swinchen (~swinchen@samuel-winchenbach.ums.maine.edu) Quit (Remote host closed the connection)
[17:32] * swinchen (~swinchen@samuel-winchenbach.ums.maine.edu) has joined #ceph
[17:34] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:37] * KevinPerks (~Adium@97.68.216.74) Quit (Quit: Leaving.)
[17:38] * alphe (~alphe@0001ac6f.user.oftc.net) has joined #ceph
[17:39] <alphe> hello !
[17:40] <alphe> can I map an rbd RA client to the default data pool ? or is that a completely different way to treat data ?
[17:40] <baffle> In a configuration management world, should /var/lib/ceph/bootstrap-osd/ceph.keyring be defined by the cm system?
[17:40] * al-maisan (~al-maisan@94.236.7.185) has joined #ceph
[17:40] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:41] <alphe> I have the default pools (data, metadata and rbd). I use cephfs to access them, but I want to try rbd to access the same data. Is that possible ?
[17:41] <L2SHO> I have 1 pg that's stuck in the active+remapped+backfilling state and doesn't appear to be making any progress. http://apaste.info/7y0f
[17:45] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[17:45] <alphe> L2SHO have you seen this ?
[17:45] <alphe> http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/
[17:48] <alphe> hum weird ...
[17:49] <L2SHO> alphe, Thanks, reading it now, but nothing seems to match the exact issue I'm seeing
[17:49] <alphe> seems like the "rebuilding" process ended but those 3 pgs are still in trouble ...
[17:49] <alphe> hum you do have degraded objects, that much is for sure
[17:50] * alram (~alram@cpe-76-167-50-51.socal.res.rr.com) has joined #ceph
[17:51] * gdavis33 (~gdavis@38.122.12.254) has joined #ceph
[17:51] * gdavis33 (~gdavis@38.122.12.254) has left #ceph
[17:52] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[17:52] * ChanServ sets mode +v andreask
[17:53] <alphe> hum those messages are not really a problem, in fact it is more internal sauce ...
[17:54] <alphe> all is ok but ceph is rearranging pgs
[17:54] <alphe> as it rearranges one, then it gets to rearrange another, etc
[17:54] <alphe> L2SHO my guess is that in some hours that will disappear
[17:55] <alphe> well if you have already been seeing this for some days, that can be a problem
[17:56] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[17:57] <L2SHO> alphe, it doesn't appear to be making any progress. It's been sitting in the same state for a while now stuck on the same pg. http://apaste.info/ksez
[17:57] <alphe> L2SHO you can look at the end of the log files on the osds
[17:57] * al-maisan (~al-maisan@94.236.7.185) Quit (Ping timeout: 480 seconds)
[17:57] <alphe> but normally you can locate which osd the pg that is being remapped is on, and then check the log of that osd to see what the trouble is
[17:58] <alphe> can you show me what basic stuff like ceph -s says
[17:58] <alphe> and ceph -w ...
[18:00] * xarses (~andreww@64-79-127-122.static.wiline.com) has joined #ceph
[18:00] <L2SHO> alphe, http://apaste.info/6vn7
[18:02] <L2SHO> alphe, and I don't see anything out of the ordinary in the osd logs
[18:05] * ScOut3R (~scout3r@dsl51B69BF7.pool.t-online.hu) has joined #ceph
[18:07] * al-maisan (~al-maisan@94.236.7.190) has joined #ceph
[18:10] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[18:13] * aliguori (~anthony@74.202.210.82) has joined #ceph
[18:13] <alphe> 18 near full osd(s)
[18:13] <alphe> nor enought disk space ...
[18:14] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph
[18:14] <alphe> not enough disk space ...
[18:15] <L2SHO> alphe, the 3 osd's related to that pg are not ones that are near full
[18:16] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) Quit (Quit: Leaving.)
[18:16] <alphe> hum strange !
[18:17] <alphe> L2SHO are you registered to the ceph-users mailing list ?
[18:17] <alphe> I think you should raise your issue there
[18:20] <L2SHO> alphe, I'm not currently on the mailing list. I much prefer irc
[18:20] <alphe> you can force the scrub of that pg if you know its id
[18:21] <alphe> ceph pg dump_stuck inactive|unclean|stale
[18:21] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[18:21] * sarob (~sarob@2601:9:7080:13a:ec70:3234:e9ff:c537) has joined #ceph
[18:21] <alphe> then once you know the name of the stuck pg you force it to scrub with
[18:21] <alphe> ceph pg scrub {pg-id}
[18:22] <alphe> that is the only way out I see ... probably there is more stuff possible to do
[18:23] * sarob_ (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[18:23] <alphe> you can ceph pg repair <pgid> but not sure that will help
[18:23] <alphe> you can ceph pg deep-scrub <pgid>
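Putting those commands together, a minimal sketch where 3.45 stands in for whatever pg id dump_stuck reports:

    ceph pg dump_stuck unclean     # list the pg ids that are stuck
    ceph pg scrub 3.45
    ceph pg deep-scrub 3.45
    ceph pg repair 3.45            # last resort if scrubbing alone does not clear it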
[18:24] * JoeGruher (~JoeGruher@134.134.139.72) has joined #ceph
[18:24] <alphe> L2SHO ?
[18:25] * thomnico (~thomnico@37.162.175.246) Quit (Quit: Ex-Chat)
[18:25] <L2SHO> alphe, I ran scrub, doing deep-scrub now
[18:25] <alphe> ok hope that will help
[18:25] * sleinen (~Adium@2001:620:0:25:1cdf:b2f:3207:437c) Quit (Quit: Leaving.)
[18:25] * sleinen (~Adium@130.59.94.71) has joined #ceph
[18:25] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[18:26] <L2SHO> is there a way to disable this random background scrubbing?
[18:29] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has left #ceph
[18:29] * sarob (~sarob@2601:9:7080:13a:ec70:3234:e9ff:c537) Quit (Ping timeout: 480 seconds)
[18:31] * sarob_ (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:33] * sleinen (~Adium@130.59.94.71) Quit (Ping timeout: 480 seconds)
[18:35] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[18:38] <xdeller> L2SHO: just increase min scrub value and do it manually
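A hedged sketch of that approach; one week (in seconds) is just an example interval, and like any injectargs change it is lost on restart unless it also goes into ceph.conf:

    ceph tell 'osd.*' injectargs '--osd_scrub_min_interval 604800 --osd_scrub_max_interval 604800'
    # then trigger scrubs by hand when convenient, e.g. for osd 0
    ceph osd scrub 0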
[18:38] * dxd828 (~dxd828@31.55.55.195) has joined #ceph
[18:39] <L2SHO> alphe, I just ended up bouncing the OSD that it was trying to backfill to, now it's making progress again
[18:39] * dxd828 (~dxd828@31.55.55.195) Quit ()
[18:41] <ccourtaut> might be interesting for kraken to report when teuthology tests fail, with a link to the report
[18:42] <ccourtaut> sry wrong channel, moved to #ceph-summit
[18:44] * rongze (~rongze@211.155.113.166) Quit (Remote host closed the connection)
[18:51] * mozg (~andrei@host217-46-236-49.in-addr.btopenworld.com) Quit (Ping timeout: 480 seconds)
[18:52] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[18:53] * al-maisan (~al-maisan@94.236.7.190) Quit (Ping timeout: 480 seconds)
[18:54] <alphe> L2SHO so is it okay after the deep-scrub ?
[18:54] <L2SHO> alphe, I don't know. It doesn't seem like deep-scrub did anything
[18:55] <L2SHO> alphe, I just restarted the osd process and it looks like it restarted backfilling the pg from the beginning. This time it finished
[18:55] <alphe> ok by bouncing you mean restart
[18:55] <alphe> ho ok !
[18:55] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[18:56] <alphe> L2SHO great that your problem was solved so easily ...
[18:56] <L2SHO> thanks for the help
[18:56] <alphe> hehehe
[18:56] <alphe> I wasn't of much help since I didn't even provide the basic ... "well, restart the osd" advice ...
[18:56] <alphe> hehehe
[18:56] <L2SHO> ya, I hate having to restart services though. There's obviously a bug somewhere, but who knows how to reproduce it
[18:57] * alexxy[home] (~alexxy@2001:470:1f14:106::2) Quit (Remote host closed the connection)
[18:59] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) has joined #ceph
[19:00] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[19:00] * andreask1 (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[19:00] * ChanServ sets mode +v andreask1
[19:00] * andreask1 is now known as andreask
[19:01] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has left #ceph
[19:03] * Remy- (~Remy@ks3292820.kimsufi.com) has joined #ceph
[19:04] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[19:07] * wiebalck_ (~wiebalck@AAnnecy-652-1-411-29.w90-36.abo.wanadoo.fr) has joined #ceph
[19:10] * Remy- (~Remy@ks3292820.kimsufi.com) has left #ceph
[19:14] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[19:15] * Pedras (~Adium@64.191.206.83) has joined #ceph
[19:15] * dvanders (~dvanders@46.227.20.178) has joined #ceph
[19:24] <via> where backfill target is -1
[19:25] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[19:26] * xmltok (~xmltok@216.103.134.250) has joined #ceph
[19:32] * linuxkidd_ (~linuxkidd@rrcs-70-62-120-189.midsouth.biz.rr.com) has joined #ceph
[19:33] * alphe (~alphe@0001ac6f.user.oftc.net) Quit (Quit: Leaving)
[19:37] * linuxkidd (~linuxkidd@cpe-066-057-061-231.nc.res.rr.com) Quit (Ping timeout: 480 seconds)
[19:39] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[19:40] * ScOut3R (~scout3r@dsl51B69BF7.pool.t-online.hu) Quit ()
[19:41] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[19:44] * rongze (~rongze@117.79.232.204) has joined #ceph
[19:45] * fouxm (~fouxm@185.23.92.11) Quit (Remote host closed the connection)
[19:46] * eternaleye (~eternaley@c-24-17-202-252.hsd1.wa.comcast.net) Quit (Read error: Connection reset by peer)
[19:48] * KevinPerks (~Adium@97.68.216.74) has joined #ceph
[19:51] * eternaleye (~eternaley@c-24-17-202-252.hsd1.wa.comcast.net) has joined #ceph
[19:54] * themgt (~themgt@pc-188-95-160-190.cm.vtr.net) Quit (Quit: themgt)
[19:54] * sarob (~sarob@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) has joined #ceph
[19:56] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[19:56] * rongze (~rongze@117.79.232.204) Quit (Ping timeout: 480 seconds)
[20:01] * mtanski (~mtanski@cpe-66-68-155-199.austin.res.rr.com) Quit (Quit: mtanski)
[20:02] * sarob (~sarob@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[20:03] * sarob (~sarob@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) has joined #ceph
[20:05] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[20:05] * unis_ (~unis@58.213.102.114) Quit (Ping timeout: 480 seconds)
[20:08] * unis (~unis@58.213.102.114) has joined #ceph
[20:08] * madkiss1 (~madkiss@2001:6f8:12c3:f00f:e8fb:18a3:60e4:9cba) has joined #ceph
[20:12] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) Quit (Quit: Leaving.)
[20:12] * madkiss (~madkiss@2001:6f8:12c3:f00f:5da7:ce67:7b59:2027) Quit (Ping timeout: 480 seconds)
[20:13] * alram_ (~alram@cpe-76-167-50-51.socal.res.rr.com) has joined #ceph
[20:14] * alram (~alram@cpe-76-167-50-51.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[20:23] * dmsimard1 (~Adium@70.38.0.248) has joined #ceph
[20:25] * aardvark1 (~Warren@2607:f298:a:607:3d58:9a3b:c9f8:8961) has joined #ceph
[20:25] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) has joined #ceph
[20:26] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[20:27] * dmsimard2 (~Adium@108.163.152.66) has joined #ceph
[20:29] * dmsimard (~Adium@108.163.152.2) Quit (Ping timeout: 480 seconds)
[20:32] * WarrenUsui1 (~Warren@2607:f298:a:607:a8d9:5660:cbb1:e25d) Quit (Ping timeout: 480 seconds)
[20:32] * wusui (~Warren@2607:f298:a:607:3d58:9a3b:c9f8:8961) has joined #ceph
[20:32] * WarrenUsui (~Warren@2607:f298:a:607:a8d9:5660:cbb1:e25d) Quit (Ping timeout: 480 seconds)
[20:32] * dmsimard1 (~Adium@70.38.0.248) Quit (Ping timeout: 480 seconds)
[20:33] * al-maisan (~al-maisan@86.188.131.84) has joined #ceph
[20:35] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) has joined #ceph
[20:36] * dmsimard (~Adium@70.38.0.248) has joined #ceph
[20:37] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) Quit (Quit: Leaving.)
[20:39] * dmsimard2 (~Adium@108.163.152.66) Quit (Ping timeout: 480 seconds)
[20:41] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[20:41] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) has joined #ceph
[20:41] * sjm (~sjm@38.98.115.250) has joined #ceph
[20:43] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[20:43] * Gamekiller77 (~oftc-webi@128-107-239-235.cisco.com) has joined #ceph
[20:46] * sarob (~sarob@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[20:47] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[20:47] * sarob (~sarob@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) has joined #ceph
[20:49] * rongze (~rongze@117.79.232.236) has joined #ceph
[20:50] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[20:51] * sarob_ (~sarob@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) has joined #ceph
[20:51] * sleinen1 (~Adium@2001:620:0:25:6146:13c0:bdf6:6e18) has joined #ceph
[20:52] * michaelkk (~michaekk@ool-4353c729.dyn.optonline.net) has joined #ceph
[20:53] <michaelkk> hi all … im just getting into the wonderful world of ceph, and was curious to know / hear about whether anyone has used ceph (primarily CephFS) as a backing store for databases (such as MySQL, Postgres, etc.)?
[20:55] * sarob (~sarob@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[20:55] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[20:57] * asmaps (znc@2a01:4f8:100:5325::b01) has joined #ceph
[20:57] * rongze (~rongze@117.79.232.236) Quit (Ping timeout: 480 seconds)
[20:58] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[20:59] * hijacker (~hijacker@bgva.sonic.taxback.ess.ie) Quit (Read error: Operation timed out)
[20:59] * sarob_ (~sarob@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[20:59] * hijacker (~hijacker@bgva.sonic.taxback.ess.ie) has joined #ceph
[21:03] * wiebalck_ (~wiebalck@AAnnecy-652-1-411-29.w90-36.abo.wanadoo.fr) Quit (Quit: wiebalck_)
[21:06] * john_barbee (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[21:07] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[21:09] * nwat (~textual@eduroam-240-40.ucsc.edu) has joined #ceph
[21:10] * john_barbee (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[21:11] * wiebalck_ (~wiebalck@AAnnecy-652-1-411-29.w90-36.abo.wanadoo.fr) has joined #ceph
[21:13] * wiebalck_ (~wiebalck@AAnnecy-652-1-411-29.w90-36.abo.wanadoo.fr) Quit ()
[21:14] * smiley1 (~smiley@205.153.36.170) has joined #ceph
[21:17] * sarob (~sarob@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) has joined #ceph
[21:19] * john_barbee (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 25.0.1/20131112160018])
[21:20] * wiebalck_ (~wiebalck@AAnnecy-652-1-411-29.w90-36.abo.wanadoo.fr) has joined #ceph
[21:22] * Gamekiller77 (~oftc-webi@128-107-239-235.cisco.com) Quit (Remote host closed the connection)
[21:23] <smiley1> I need a way (I think) to recursively set (update) acl on radosgw on a nightly basis....I am thinking about a cron job using boto as it looks like the current version of s3cmd is broken....is anyone doing anything similar?
[21:24] * symmcom (~symmcom@184.70.203.22) Quit (Read error: Connection reset by peer)
[21:24] * dmsimard1 (~Adium@108.163.152.66) has joined #ceph
[21:26] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[21:27] * wiebalck_ (~wiebalck@AAnnecy-652-1-411-29.w90-36.abo.wanadoo.fr) Quit (Quit: wiebalck_)
[21:30] * dmsimard (~Adium@70.38.0.248) Quit (Ping timeout: 480 seconds)
[21:33] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[21:38] * Sodo (~Sodo@a88-113-108-239.elisa-laajakaista.fi) Quit (Ping timeout: 480 seconds)
[21:40] * dmsimard (~Adium@70.38.0.245) has joined #ceph
[21:43] <cjh973> michaelkk: I use it as the backing store for mercurial repos, docker containers. so far so good
[21:44] <michaelkk> cjh973: do u use it via CephFS?
[21:44] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[21:45] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[21:45] <michaelkk> cjh973: s/via/with
[21:45] * john_barbee (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[21:46] * dmsimard1 (~Adium@108.163.152.66) Quit (Ping timeout: 480 seconds)
[21:47] * sjustlaptop (~sam@38.122.20.226) Quit (Ping timeout: 480 seconds)
[21:48] <cjh973> michaelkk: yup cephfs
[21:48] <cjh973> the throughput is good. i'm pretty happy with it
[21:48] <michaelkk> cjh973: can u define throughput a bit better? :)
[21:48] <michaelkk> as in what kinda throughput do u see / expect?
[21:49] <cjh973> i have a bunch of crappy desktops with 4 1T sata drives in them. I can write to the cephfs mount at about 105MB-110MB/s over 1Gb links
[21:49] <cjh973> that's about as much as i can expect from that network links
[21:50] * rongze (~rongze@211.155.113.166) has joined #ceph
[21:51] <michaelkk> cjh973: nice, thats what i was curious about :)
[21:51] <michaelkk> no crashes, weird issues, or otherwise things that would keep me awake at night?
[21:51] <alfredodeza> hi all, there is a new ceph-deploy release out
[21:51] <alfredodeza> announcement just went out
[21:51] <alfredodeza> make sure you update :)
[21:51] * dmsimard (~Adium@70.38.0.245) Quit (Ping timeout: 480 seconds)
[21:53] * sjustlaptop (~sam@2607:f298:a:697:5179:c475:423:5dc6) has joined #ceph
[21:53] * KevinPerks (~Adium@97.68.216.74) Quit (Quit: Leaving.)
[21:54] <cjh973> michaelkk: use the native cephfs mount and you'll sleep better. I've been having issues with the fuse mounts just spinning into a loop forever and hanging
[21:54] <cjh973> the native mount doesn't seem to have that problem
[21:54] * sarob (~sarob@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[21:55] <michaelkk> yeah, i wouldn't use FUSE if i could avoid it!
[21:56] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[21:57] * alphe (~alphe@0001ac6f.user.oftc.net) has joined #ceph
[21:58] * sjustlaptop1 (~sam@38.122.20.226) has joined #ceph
[21:58] <L2SHO> I'm looking at rbd-fuse, does it not support only mounting a single rbd image?
[21:58] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[21:59] <cjh973> L2SHO: as far as i know it doesn't
[21:59] <cjh973> i was confused about that also
[21:59] * rongze (~rongze@211.155.113.166) Quit (Ping timeout: 480 seconds)
[22:00] <cjh973> michaelkk: fuse is great if you can't get the native mount to work. when i only had tiny amounts of ram it wouldn't work but now that i have more ram it's fine
[22:00] <L2SHO> cjh973, how does it work exactly? I don't have time to play with it at the moment. Does it expose every image as a file, and then you loop mount one of those files?
[22:01] <saturnine> So what's the best way to mitigate the single point of failure for radosgw? Set up multiple instances and use a load balancer or RRDNS?
[22:02] * Sysadmin88 (~IceChat77@94.1.37.151) has joined #ceph
[22:03] * Cube (~Cube@c-98-208-30-2.hsd1.ca.comcast.net) has joined #ceph
[22:04] * sjustlaptop (~sam@2607:f298:a:697:5179:c475:423:5dc6) Quit (Ping timeout: 480 seconds)
[22:05] <alphe> I want to know if I can access data that was uploaded to the ceph cluster with cephfs using rbd ?
[22:05] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) has joined #ceph
[22:05] <mikedawson> alphe: i do not believe you can
[22:06] <alphe> that is what I guessed since you have to create a pool in rbd stuff
[22:06] <alphe> then format it ...
[22:06] * Cube (~Cube@c-98-208-30-2.hsd1.ca.comcast.net) Quit ()
[22:07] <alphe> cephfs with kernel 3.12 still has the issue where parts of the filesystem disappear under heavy load
[22:07] <alphe> like a recursive chmod on a large number of files
[22:11] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[22:11] * astark (~astark@6cb32e01.cst.lightpath.net) Quit (Remote host closed the connection)
[22:12] * clayb (~kvirc@proxy-nj1.bloomberg.com) Quit (Read error: Connection reset by peer)
[22:13] * al-maisan (~al-maisan@86.188.131.84) Quit (Ping timeout: 480 seconds)
[22:15] * AfC (~andrew@101.119.15.150) has joined #ceph
[22:16] <alphe> hum thinking of that problem, it could be related to running an nfs server on the same machine as the cephfs ...
[22:16] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Quit: Leaving)
[22:17] <L2SHO> alphe, You can probably access the cephfs data using rados at a low level, but I'm guessing that's not what you are looking for
[22:17] <alphe> yeah ...
[22:18] <alphe> I'm looking first for an explanation of why the heck my directory tree partly disappears from /mnt/ceph/ where /mnt/ceph is my mount point
[22:19] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[22:19] * ChanServ sets mode +o elder
[22:19] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) Quit (Remote host closed the connection)
[22:20] <L2SHO> alphe, this link was posted yesterday in response to a question about the filesystem disappearing: http://www.mail-archive.com/ceph-users@lists.ceph.com/msg05750.html
[22:20] * LeaChim (~LeaChim@host86-162-2-255.range86-162.btcentralplus.com) has joined #ceph
[22:21] * sleinen1 (~Adium@2001:620:0:25:6146:13c0:bdf6:6e18) Quit (Quit: Leaving.)
[22:21] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[22:21] <alphe> L2SHO heheh yeah that was my reply
[22:21] <alphe> at the time I ran heavy duty tasks and got no problems
[22:22] <L2SHO> alphe, oh, right, I should have some more caffeine :)
[22:22] <alphe> then today those problems reappeared
[22:22] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[22:22] <alphe> L2SHO I am the crazy guy who wants to do a ceph driver for windows ...
[22:23] <alphe> but while I have those problems I can't focus on the task ..
[22:23] <alphe> my idea was to access the ceph cluster by porting cephfs
[22:23] <L2SHO> alphe, good luck. I believe at the ceph day NYC I heard something about there being librbd support compiled into some branch of samba
[22:23] <alphe> and not going the rbd way ..
[22:24] * AfC (~andrew@101.119.15.150) Quit (Ping timeout: 480 seconds)
[22:24] <alphe> samba is pretty slow so I don't want to use it to proxy the ceph cluster basically
[22:25] <alphe> and I don't want a server dedicated to a task the windows clients should do
[22:25] <L2SHO> ya, makes sense. I avoid windows like the plague, so don't really have any more useful input
[22:25] <alphe> samba is slow basically because it was designed for a secretary to share 2 or 3 files with her boss, and for that it is perfect
[22:26] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) has joined #ceph
[22:26] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[22:26] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[22:26] <alphe> when you have to upload a clone of a drive to a server, then samba shows its true colors and its true limits
[22:27] <alphe> if you want to upload 1 large file to ceph using samba it is good; if you want to upload 200,000 files under 5 MB then samba is a true mess
[22:28] <alphe> and all the "optimisation settings" recommended for such a task don't show any real improvement
[22:28] <L2SHO> <sigh>, wtf is a multiply-claimed block?
[22:29] <alphe> when uploading a hard drive clone is estimated at 5 weeks, an improvement of 5% is nothing noticeable
[22:29] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[22:29] <alphe> L2SHO a block that many osd wants to get ?
[22:30] <L2SHO> alphe, I think it's ext4 related, not specifically ceph
[22:30] <L2SHO> everytime one of my osd's becomes full I end up with filesystem corruption on my rbd images
[22:31] <L2SHO> and e2fsck spits out tons of crazy shit I've never heard of before
[22:34] * erice_ (~erice@c-98-245-48-79.hsd1.co.comcast.net) Quit (Remote host closed the connection)
[22:34] <alphe> I use XFS
[22:34] <alphe> as the underlying layer
[22:35] <alphe> but then there is something I don't understand ....
[22:35] <alphe> with ceph-deploy you prepare your disks and OSDs, and they get formatted
[22:36] <L2SHO> right, my osd's run xfs filesystems. I'm saying when I export an rbd and format that rbd as ext4
[22:36] <alphe> then you use rbd and you create a special pool to isolate a bunch of disks that you will have to format again ?
[22:36] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[22:36] <alphe> L2SHO yeah, that format-twice stuff when it comes to rbd, I don't understand it
[22:36] * Shmouel (~Sam@ns1.anotherservice.com) has joined #ceph
[22:37] <alphe> while cephfs directly uses the ceph cluster as it is, no need to add another layer
[22:37] * Shmouel (~Sam@ns1.anotherservice.com) Quit ()
[22:37] * alfredodeza (~alfredode@c-98-194-83-79.hsd1.tx.comcast.net) has left #ceph
[22:38] * linuxkidd__ (~linuxkidd@cpe-066-057-061-231.nc.res.rr.com) has joined #ceph
[22:45] * linuxkidd_ (~linuxkidd@rrcs-70-62-120-189.midsouth.biz.rr.com) Quit (Ping timeout: 480 seconds)
[22:48] * BillK (~BillK-OFT@106-68-8-107.dyn.iinet.net.au) has joined #ceph
[22:52] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[22:52] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) has joined #ceph
[22:52] * rongze (~rongze@117.79.232.204) has joined #ceph
[22:55] * rturk is now known as rturk-away
[22:56] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[22:57] * rturk-away is now known as rturk
[22:59] * cfreak200 (~cfreak200@p4FF3F232.dip0.t-ipconnect.de) has joined #ceph
[23:01] * cfreak201 (~cfreak200@p4FF3E351.dip0.t-ipconnect.de) Quit (Read error: Operation timed out)
[23:01] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[23:01] * ChanServ sets mode +v andreask
[23:02] * rongze (~rongze@117.79.232.204) Quit (Ping timeout: 480 seconds)
[23:04] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[23:06] * rendar (~s@host28-176-dynamic.22-79-r.retail.telecomitalia.it) Quit ()
[23:10] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) Quit (Quit: Leaving)
[23:14] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[23:16] <tsnider> Can anyone tell me why, after the fourth kernel-mapped rbd device, iostat displays them as emcpower* devices? The 1st four are rbdX devices, which I'd expect. http://pastebin.com/QQemsbv3
[23:20] <alphe> isn't emcpowerig the name of your new rbd ?
[23:20] <alphe> just trying to guess ..
[23:21] <tsnider> alphe -- I didn't do any explicit naming --- wondering why it's not rbd4
[23:22] * rturk is now known as rturk-away
[23:22] <alphe> don t know ... and emcpowerig is a weird name ...
[23:22] <L2SHO> my wild guess would be some kind of device node number conflict
[23:22] <alphe> did you use an automated tool ?
[23:24] <sjm> emcpower devices are created by emc powerpath multipathing tool
[23:24] <sjm> i.e. remove/stop emc powerpath service
[23:25] <sjm> might require a reboot
[23:25] <Nats_> major:minor device id collision i would think
[23:25] <tsnider> alphe: wysiwid (what you see is what I did). create the image, rbd create --size 102400 --pool pool4 image1 and then map it as shown in the paste. nothing special.
[23:25] * gucki_ (~gucki@77-56-39-154.dclient.hispeed.ch) Quit (Quit: Konversation terminated!)
[23:26] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[23:26] <alphe> tsnider you have nothing on that rbd right ?
[23:27] <tsnider> sjm: I don't have any emc powerpath binaries loaded
[23:27] <alphe> if you delete it and recreate it, what happens ?
[23:27] <tsnider> alphe: same thing this is not the 1st time I've seen it
[23:27] <Nats_> compare it to whats in /proc/diskstats
[23:27] <dmick> L2SHO: Nats_: +1
[23:28] <sjm> rm /dev/emcpower* and remap it
[23:28] <sjm> I would think
[23:29] <alphe> dmick is there a way to make rbd see the files on cephfs ?
[23:29] <alphe> well I imagine I should create an rbd, then map/mount it, then copy files from my cephfs mount point to my rbd mount point
[23:29] <alphe> hehehe ...
[23:30] <dmick> alphe, you've asked me this at least four times over the last few months. The answer is still no.
[23:30] <alphe> dmick ...
[23:30] <alphe> and for the mount and copy is it possible ?
[23:30] <tsnider> nats: nothing in there like that: http://pastebin.com/5WBKZpKm
[23:31] <Nats_> tsnider, see the first two digits next to rbd4 - 247 and 0 . thats the 'major:minor' device id
[23:31] <alphe> but as far as I understand rbd, the idea is not to put all the storage on one rbd but more to split the virtual drive into smaller, convenient stores, no ?
[23:32] * dxd828 (~dxd828@host-92-24-127-29.ppp.as43234.net) has joined #ceph
[23:32] <Nats_> tsnider, when iostat prints its data, it takes that number and converts it to a name (i dont know how exactly)
[23:32] <Nats_> tsnider, so whats happening is that both emcpowerig and RBD are registered as 247:0
[23:32] <tsnider> Nats: no /dev/em* devices http://pastebin.com/EkyYs77k
[23:32] * gucki (~smuxi@77-56-39-154.dclient.hispeed.ch) Quit (Remote host closed the connection)
[23:32] <alphe> tsnider lsmod and see if there is any emcpower stuff there
[23:33] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Remote host closed the connection)
[23:33] <tsnider> Nats: nuthin like dat in der either. :)
[23:34] <alphe> can it be an iostat bug ?
[23:34] <Nats_> no, its undefined behaviour
[23:34] <tsnider> nats: how does one determine the registry map
[23:34] <tsnider> Nats: for major/minor device numbers.
[23:34] <Nats_> because there are two things claiming to be 247:0
[23:35] <Nats_> tsnider, /proc/devices perhaps
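A quick way to check that theory on the affected host; the output naturally depends on what is mapped locally:

    cat /proc/devices        # which driver registered which block major (rbd vs emcpower)
    ls -l /dev/rbd*          # major, minor actually assigned to the mapped rbd devices
    grep rbd /proc/partitions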
[23:35] <tsnider> Nats & alphe: seems strange to me
[23:35] <Nats_> i've encountered a similar issue with LVM and just stopped using iostat
[23:35] <sjm> tsnider: anything from lsmod |grep -i emc
[23:36] <Nats_> be helpful if RBD incremented its minor device ID rather than chewing up major id's
[23:37] <alphe> hum I understand nothing of rbd ... a box in a box that has boxes and copies of the boxes in other boxes ...
[23:37] <cjh973> LSHO: i think it exposes all the images as files but i dont remember
[23:37] <cjh973> L2SHO: ^^
[23:37] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[23:37] <alphe> so ceph-deploy prepares the device storage that is then used to hold more pools that can store rbds or files directly
[23:37] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[23:37] <alphe> rbd images are files though
[23:38] <tsnider> yeah that's what rbd kernel mounting does AFAIK -- enables images in pools to be mounted as kernel devices
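For the record, a minimal sketch of that kernel-client workflow; the image name, device number and mount point are placeholders:

    rbd create --size 10240 --pool rbd test-img
    rbd map rbd/test-img
    rbd showmapped               # note the /dev/rbdN the image was given
    mkfs.xfs /dev/rbd1           # whatever device showmapped reported
    mount /dev/rbd1 /mnt/test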
[23:38] <alphe> then on the rbd you can drop your files and directories ...
[23:38] * jks (~jks@3e6b5724.rev.stofanet.dk) Quit (Ping timeout: 480 seconds)
[23:39] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[23:39] <alphe> rbds are flexible so you can change their size; you can use them to isolate, or better said segment, your colossal virtual ceph drive
[23:39] <alphe> you can snapshot them too
[23:40] <alphe> you can do tons of recurrent stuff that will pump up your big ceph storage ...
[23:41] <alphe> on top of it you can store virtual QEMU drives that can be used in virtual machines or in virtualised cloud storage systems
[23:41] <alphe> and you can snapshot that
[23:41] * sjm (~sjm@38.98.115.250) has left #ceph
[23:41] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[23:41] <alphe> seems like the best selling point of rbd is the snapshots
[23:42] * dxd828 (~dxd828@host-92-24-127-29.ppp.as43234.net) Quit (Quit: Textual IRC Client: www.textualapp.com)
[23:45] <Nats_> tsnider, whats in /proc/devices ?
[23:52] * sagewk (~sage@2607:f298:a:607:219:b9ff:fe40:55fe) Quit (Quit: Leaving.)
[23:54] * sagelap (~sage@2600:1012:b025:db7:7cdc:682:a7d2:5029) has joined #ceph
[23:55] * joshd (~joshd@2607:f298:a:607:354c:fce3:e8b:463) Quit (Quit: Leaving.)
[23:55] * alfredodeza (~alfredode@c-98-194-83-79.hsd1.tx.comcast.net) has joined #ceph
[23:55] * rongze (~rongze@117.79.232.204) has joined #ceph
[23:56] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[23:57] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[23:57] * sagelap (~sage@2600:1012:b025:db7:7cdc:682:a7d2:5029) Quit ()
[23:57] * sagelap (~sage@2600:1012:b025:db7:7cdc:682:a7d2:5029) has joined #ceph
[23:58] * Tamil (~tamil@38.122.20.226) Quit (Ping timeout: 480 seconds)
[23:58] * sagelap1 (~sage@172.56.31.114) has joined #ceph
[23:59] * sagelap (~sage@2600:1012:b025:db7:7cdc:682:a7d2:5029) Quit (Read error: Connection reset by peer)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.