#ceph IRC Log


IRC Log for 2016-09-08

Timestamps are in GMT/BST.

[0:04] * northrup (~northrup@75-146-11-137-Nashville.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[0:05] * vbellur (~vijay@71.234.224.255) has joined #ceph
[0:09] * srk (~Siva@32.97.110.53) Quit (Ping timeout: 480 seconds)
[0:21] * vegas3 (~Deiz@46.166.188.236) has joined #ceph
[0:22] * BennyRene2016 (~BennyRene@host-89-241-119-65.as13285.net) Quit (Ping timeout: 480 seconds)
[0:25] * rendar (~I@host5-183-dynamic.46-79-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[0:27] * BennyRene2016 (~BennyRene@host-89-241-115-229.as13285.net) has joined #ceph
[0:28] * KindOne (kindone@h148.130.30.71.dynamic.ip.windstream.net) has joined #ceph
[0:41] * brians (~brian@80.111.114.175) has joined #ceph
[0:51] * vegas3 (~Deiz@46.166.188.236) Quit ()
[0:53] * yuelongguang (~chatzilla@114.134.84.144) has joined #ceph
[0:56] * allenmelon1 (~dontron@exit0.radia.tor-relays.net) has joined #ceph
[0:57] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[1:03] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:4d30:1cb6:3a49:7398) Quit (Ping timeout: 480 seconds)
[1:10] * danieagle (~Daniel@200-171-230-208.customer.telesp.net.br) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[1:12] * bene2 (~bene@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[1:12] * _mrp (~mrp@93-86-57-229.dynamic.isp.telekom.rs) has joined #ceph
[1:15] * _mrp (~mrp@93-86-57-229.dynamic.isp.telekom.rs) Quit ()
[1:17] * _mrp (~mrp@93-86-57-229.dynamic.isp.telekom.rs) has joined #ceph
[1:19] * _mrp (~mrp@93-86-57-229.dynamic.isp.telekom.rs) Quit ()
[1:19] <iggy> if you are expecting immediate responses to every question that you can't bother to google, it's probably best to pay for support... then someone _has_ to answer you
[1:22] * sudocat (~dibarra@192.185.1.20) Quit (Quit: Leaving.)
[1:22] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[1:26] * allenmelon1 (~dontron@exit0.radia.tor-relays.net) Quit ()
[1:40] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[1:43] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[1:50] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[1:55] * oms101_ (~oms101@p20030057EA035E00C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:57] * kuku (~kuku@119.93.91.136) has joined #ceph
[2:03] * oms101_ (~oms101@p20030057EA038500C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[2:04] * xarses (~xarses@64.124.158.3) Quit (Ping timeout: 480 seconds)
[2:10] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[2:23] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[2:39] * kristen (~kristen@134.134.139.83) Quit (Remote host closed the connection)
[2:43] * BennyRene2016 (~BennyRene@host-89-241-115-229.as13285.net) Quit (Ping timeout: 480 seconds)
[2:46] * northrup (~northrup@c-107-3-245-199.hsd1.tn.comcast.net) has joined #ceph
[2:47] * scuttle is now known as scuttle|afk
[2:47] * scuttle|afk is now known as scuttle
[2:48] * scuttle is now known as scuttlemonkey
[2:55] * joshd1 (~jdurgin@2602:30a:c089:2b0:b137:188e:d216:76bc) has joined #ceph
[2:56] * Misacorp (~xul@91.ip-164-132-51.eu) has joined #ceph
[2:59] * linuxkidd (~linuxkidd@ip70-189-202-62.lv.lv.cox.net) Quit (Quit: Leaving)
[3:01] * BrianA (~BrianA@fw-rw.shutterfly.com) Quit (Read error: Connection reset by peer)
[3:04] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[3:05] * mattbenjamin1 (~mbenjamin@76-206-42-50.lightspeed.livnmi.sbcglobal.net) has joined #ceph
[3:07] <SamYaple> iggy: should we tell him about the secret ceph channel?
[3:08] <SamYaple> northrup: you can do that with active/active cephfs, but that is still an experimental feature
[3:08] <northrup> apart from that is there a way to do it?
[3:09] <SamYaple> no
[3:09] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[3:09] <SamYaple> the only stable way is active/passive
[1:09] <SamYaple> though the downtime is minimal in my experience
[1:09] <SamYaple> unnoticeable even
[1:09] <SamYaple> but it varies based on usage of course
[3:09] <northrup> so stop the service on one?
[3:12] <SamYaple> northrup: yea you can do that
[3:12] <SamYaple> i guess
[3:12] <SamYaple> if this is for testing you can also `ceph mds fail` i believe
[3:15] <northrup> I have three mds nodes and I need to move them all, one at a time, to a new availability zone... which requires retiring the node, building a new one, and promoting it
[3:15] <northrup> so at some point, I can move the other two.. but I'll have to move the active one which will require a failure...
[3:15] <northrup> I was hoping for something more.... seamless
[3:17] <SamYaple> northrup: cephfs is stable as of Jewel only. it's the first 'stable' release. if you want to play with the experimental features there is active/active which is more.... seamless
[3:18] * georgem (~Adium@157.52.3.206) has joined #ceph
[3:21] <northrup> I'm running Jewel
[3:21] <SamYaple> and in jewel it is only active/passive. surely this came up when you were researching this before putting it into production?
[3:22] <SamYaple> this is how cephfs currently works. hopefully by the next LTS active/active will be tested enough to be a stable feature
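
For reference, a minimal sketch of the failover approach discussed above, assuming a single active MDS at rank 0 and at least one standby; the rank is a placeholder.

    ceph mds fail 0      # mark the active MDS failed so a standby takes over the rank
    ceph mds stat        # confirm which daemon is now active
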
[3:23] * jfaj_ (~jan@p4FC5B354.dip0.t-ipconnect.de) has joined #ceph
[3:26] * Misacorp (~xul@91.ip-164-132-51.eu) Quit ()
[3:26] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[3:27] * flisky (~Thunderbi@106.38.61.185) has joined #ceph
[3:29] * jfaj (~jan@p4FC5BE73.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[3:34] * flisky (~Thunderbi@106.38.61.185) Quit (Quit: flisky)
[3:35] * oliveiradan2 (~doliveira@67.214.238.80) has joined #ceph
[3:38] * bauruine (~bauruine@2a01:4f8:130:8285:fefe::36) Quit (Ping timeout: 480 seconds)
[3:41] * northrup (~northrup@c-107-3-245-199.hsd1.tn.comcast.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[3:42] * elfurbe (~Adam@saint.uits.arizona.edu) Quit (Quit: Textual IRC Client: www.textualapp.com)
[3:48] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[3:51] * chengpeng__ (~chengpeng@180.168.126.179) has joined #ceph
[3:52] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[3:55] * toastydeath (~toast@pool-71-255-253-39.washdc.fios.verizon.net) has joined #ceph
[3:57] * Bosse (~bosse@erebus.klykken.com) Quit (Quit: WeeChat 1.4-dev)
[3:59] * kefu (~kefu@114.92.125.128) has joined #ceph
[4:00] * chengpeng__ (~chengpeng@180.168.126.179) Quit (Quit: Leaving)
[4:00] * chengpeng (~chengpeng@180.168.170.2) has joined #ceph
[4:05] * topro (~prousa@p578af414.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[4:06] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[4:07] * davidzlap (~Adium@2605:e000:1313:8003:4b4:77e8:ada5:66d3) has joined #ceph
[4:07] * davidzlap1 (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[4:15] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Quit: Leaving)
[4:15] * yanzheng (~zhyan@125.70.22.186) has joined #ceph
[4:19] * wenduo (~wenduo@218.30.116.9) has joined #ceph
[4:24] * aleksag (~Inverness@192.73.244.121) has joined #ceph
[4:26] * mattbenjamin1 (~mbenjamin@76-206-42-50.lightspeed.livnmi.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[4:29] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[4:31] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[4:32] * joshd1 (~jdurgin@2602:30a:c089:2b0:b137:188e:d216:76bc) Quit (Quit: Leaving.)
[4:43] * yanzheng1 (~zhyan@118.116.112.3) has joined #ceph
[4:45] * flisky (~Thunderbi@106.38.61.185) has joined #ceph
[4:46] * yanzheng (~zhyan@125.70.22.186) Quit (Ping timeout: 480 seconds)
[4:48] * kuku (~kuku@119.93.91.136) has joined #ceph
[4:54] * aleksag (~Inverness@192.73.244.121) Quit ()
[5:03] * Salamander_ (~Gecko1986@108.61.122.88) has joined #ceph
[5:03] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[5:06] * georgem (~Adium@157.52.3.206) Quit (Quit: Leaving.)
[5:27] * Vacuum__ (~Vacuum@i59F79DCC.versanet.de) has joined #ceph
[5:33] * Salamander_ (~Gecko1986@108.61.122.88) Quit ()
[5:34] * Vacuum_ (~Vacuum@i59F79635.versanet.de) Quit (Ping timeout: 480 seconds)
[5:36] * vimal (~vikumar@114.143.162.59) has joined #ceph
[5:42] * davidzlap (~Adium@2605:e000:1313:8003:4b4:77e8:ada5:66d3) Quit (Quit: Leaving.)
[5:45] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[5:47] * Inuyasha1 (~cryptk@185.65.134.78) has joined #ceph
[5:47] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) has joined #ceph
[5:55] * schlitzer (~schlitzer@barriere.frankfurter-softwarefabrik.de) Quit (Ping timeout: 480 seconds)
[5:57] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[5:57] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) has joined #ceph
[5:59] * schlitzer (~schlitzer@barriere.frankfurter-softwarefabrik.de) has joined #ceph
[6:01] * pvh_sa_ (~pvh@169-0-182-84.ip.afrihost.co.za) Quit (Ping timeout: 480 seconds)
[6:02] * _28_ria (~kvirc@opfr028.ru) Quit (Read error: Connection reset by peer)
[6:03] * kefu (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:03] * _28_ria (~kvirc@opfr028.ru) has joined #ceph
[6:04] * _28_ria (~kvirc@opfr028.ru) Quit (Read error: Connection reset by peer)
[6:05] * oliveiradan2_ (~doliveira@67.214.238.80) has joined #ceph
[6:05] * _28_ria (~kvirc@opfr028.ru) has joined #ceph
[6:05] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[6:07] * _28_ria (~kvirc@opfr028.ru) Quit (Read error: Connection reset by peer)
[6:07] * oliveiradan2_ (~doliveira@67.214.238.80) Quit ()
[6:09] * _28_ria (~kvirc@opfr028.ru) has joined #ceph
[6:09] * vimal (~vikumar@114.143.162.59) Quit (Quit: Leaving)
[6:10] * karnan (~karnan@121.244.87.117) has joined #ceph
[6:13] * pvh_sa_ (~pvh@169-0-182-84.ip.afrihost.co.za) has joined #ceph
[6:16] * BlaXpirit (~irc@blaxpirit.com) Quit (Quit: Bye)
[6:16] * BlaXpirit (~irc@blaxpirit.com) has joined #ceph
[6:17] * Inuyasha1 (~cryptk@185.65.134.78) Quit ()
[6:20] * kefu (~kefu@114.92.125.128) has joined #ceph
[6:24] * hgichon (~hgichon@112.220.91.130) has joined #ceph
[6:32] * vimal (~vikumar@121.244.87.116) has joined #ceph
[6:38] * sleinen (~Adium@2001:620:1000:3:a65e:60ff:fedb:f305) has joined #ceph
[6:43] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) has joined #ceph
[6:43] * _28_ria (~kvirc@opfr028.ru) Quit (Read error: Connection reset by peer)
[6:45] * _28_ria (~kvirc@opfr028.ru) has joined #ceph
[6:46] * kefu (~kefu@114.92.125.128) Quit (Max SendQ exceeded)
[6:47] * kefu (~kefu@114.92.125.128) has joined #ceph
[6:48] * _28_ria (~kvirc@opfr028.ru) Quit (Read error: Connection reset by peer)
[6:49] * _28_ria (~kvirc@opfr028.ru) has joined #ceph
[6:58] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Quit: Leaving)
[7:00] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[7:01] * Jeffrey4l_ (~Jeffrey@120.11.189.28) has joined #ceph
[7:03] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) Quit (Quit: doppelgrau)
[7:04] <masber> hi, can I ask generic storage questions not related to ceph in this channel?
[7:05] * _28_ria (~kvirc@opfr028.ru) Quit (Read error: Connection reset by peer)
[7:07] * Jeffrey4l__ (~Jeffrey@110.252.60.139) Quit (Ping timeout: 480 seconds)
[7:08] * pvh_sa_ (~pvh@169-0-182-84.ip.afrihost.co.za) Quit (Ping timeout: 480 seconds)
[7:09] * BennyRene2016 (~BennyRene@host-89-241-115-229.as13285.net) has joined #ceph
[7:20] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[7:22] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[7:25] * i_m1 (~ivan.miro@94.25.168.142) Quit (Ping timeout: 480 seconds)
[7:26] * BennyRene2016 (~BennyRene@host-89-241-115-229.as13285.net) Quit (Ping timeout: 480 seconds)
[7:36] * wenduo (~wenduo@218.30.116.9) Quit (Ping timeout: 480 seconds)
[7:36] <ivve> you can ask anything but maybe not expect an answer for it :)
[7:36] * cnf (~cnf@d5152daf0.static.telenet.be) Quit (Ping timeout: 480 seconds)
[7:40] * rendar (~I@host147-177-dynamic.22-79-r.retail.telecomitalia.it) has joined #ceph
[7:42] * doppelgrau (~doppelgra@132.252.235.172) has joined #ceph
[7:44] * sleinen (~Adium@2001:620:1000:3:a65e:60ff:fedb:f305) Quit (Quit: Leaving.)
[7:50] * swami1 (~swami@49.38.1.168) has joined #ceph
[8:03] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[8:03] * kefu is now known as kefu|afk
[8:04] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[8:07] * ledgr (~ledgr@78.57.252.56) has joined #ceph
[8:11] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[8:12] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[8:12] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit ()
[8:13] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[8:13] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit ()
[8:14] * sleinen (~Adium@130.59.94.136) has joined #ceph
[8:14] * Gecko1986 (~yuastnav@46.166.137.232) has joined #ceph
[8:15] * sleinen1 (~Adium@2001:620:0:82::105) has joined #ceph
[8:22] * sleinen (~Adium@130.59.94.136) Quit (Ping timeout: 480 seconds)
[8:25] * hoonetorg (~hoonetorg@77.119.226.254.static.drei.at) Quit (Quit: Leaving)
[8:27] * i_m (~ivan.miro@109.188.125.40) has joined #ceph
[8:37] <sep> masber, just try, it's not that much traffic here :)
[8:37] * ledgr (~ledgr@78.57.252.56) Quit (Read error: Connection reset by peer)
[8:37] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) has joined #ceph
[8:37] * briner (~briner@129.194.16.54) Quit (Ping timeout: 480 seconds)
[8:38] <Be-El> ceph + openstack question: is it possible to launch an instance from a glance image / snapshot without copying the image to the nova pool first (e.g. using the same pool for glance and nova)?
[8:38] <masber> jejeje
[8:38] <masber> ok
[8:39] * i_m (~ivan.miro@109.188.125.40) Quit (Ping timeout: 480 seconds)
[8:41] <masber> basically I need a centralized storage system
[8:42] * pvh_sa_ (~pvh@41.114.179.53) has joined #ceph
[8:42] <masber> my use case is special because I only write and read each file once
[8:43] <masber> so this is like scratch data, I mean I store the data, then I process it and then the data is deleted
[8:43] <vikhyat> Be-El: yes you can
[8:44] <vikhyat> Be-El: if you will use different pool also it will work
[8:44] <masber> I was looking for any advice about how to collect the metrics so i can give it to different vendors so they can give me proposals
[8:44] <vikhyat> as with the rbd clone functionality it works perfectly fine
[8:44] * Gecko1986 (~yuastnav@46.166.137.232) Quit ()
[8:44] <vikhyat> it is just that you have to use RAW format images as ceph supports only this format in glance_store
[8:44] <Be-El> vikhyat: across pool borders without copying the actual data?
[8:45] <vikhyat> Be-El: yes
[8:45] <vikhyat> enable_v2_api=True
[8:45] <vikhyat> show_image_direct_url=True
[8:45] <vikhyat> you should have these two options enabled in /etc/glance/glance-api.conf
[8:45] <vikhyat> Be-El: then when you will upload RAW format image
[8:45] <vikhyat> in rbd glance will create a clone of it
[8:47] <vikhyat> Be-El: then same glance rbd image clone will be used with the help of show_image_direct_url=True to create instance (without copy)
[8:47] <Be-El> vikhyat: ok, i do not have v2 apis enabled. do i need v2 for glance and glance registry?
[8:47] <vikhyat> Be-El: I think yes
[8:47] <vikhyat> but main option is show_image_direct_url=True
[8:48] <vikhyat> this is very much needed
[8:48] <Be-El> that option is already set to true
[8:48] <Be-El> thx a lot, i'll give it a try
[8:48] * wenduo (~wenduo@218.30.116.10) has joined #ceph
[8:48] <vikhyat> # rbd -p images ls -l
[8:48] <vikhyat> NAME SIZE PARENT FMT PROT LOCK
[8:48] <vikhyat> 76086b0d-7fce-46e6-9b63-e376add04d2f 40162k 2
[8:48] <vikhyat> 76086b0d-7fce-46e6-9b63-e376add04d2f@snap 40162k 2 yes
[8:48] <vikhyat> you should have @snap clone
[8:49] <vikhyat> Be-El: \o/
[8:49] <vikhyat> sure
[8:49] <Be-El> the @snap clones are already present in the image pool
[8:49] <vikhyat> Be-El: but it wont work for QCOW2
[8:50] <Be-El> i prefer raw images nonetheless
[8:50] <vikhyat> good then if you have raw
[8:50] <vikhyat> it should work
[8:50] <vikhyat> you can check
[8:50] <vikhyat> do you have any running instance
[8:50] <vikhyat> ?
[8:50] * ashah (~ashah@121.244.87.117) has joined #ceph
[8:50] <vikhyat> from any of the raw image
[8:50] <vikhyat> which has @snap
[8:50] <Be-El> give me 5 minutes to setup the v2 api
[8:51] <vikhyat> sure
[8:51] * pvh_sa_ (~pvh@41.114.179.53) Quit (Ping timeout: 480 seconds)
[8:51] <Be-El> the puppet module does not have a simple flag for api v2, so it's a little bit more effort
[8:52] * wkennington (~wkenningt@c-71-204-170-241.hsd1.ca.comcast.net) has joined #ceph
[8:53] <vikhyat> Be-El: may be you can try without v2 first
[8:53] <vikhyat> if you have any instance running let me know
[8:53] <vikhyat> from same image
[8:54] <vikhyat> I will let you know how to verify
[8:54] <vikhyat> if copy has happened or not
[8:56] * bara (~bara@ip4-83-240-10-82.cust.nbox.cz) has joined #ceph
[8:57] * swami1 (~swami@49.38.1.168) Quit (Read error: Connection reset by peer)
[8:57] <Be-El> vikhyat: ok, running cirros vm -> snapshot as raw -> spawned new instance from snapshot
[8:58] * karnan (~karnan@121.244.87.117) Quit (Ping timeout: 480 seconds)
[8:58] <Be-El> images pool has snapshot object + @snap
[8:58] <vikhyat> good now can you give me instance id
[8:58] <Be-El> instance id is f17c5a67-5eea-4d73-9b8f-e2cf27e18bd3
[8:59] <Be-El> and nova pool listing shows : 17c5a67-5eea-4d73-9b8f-e2cf27e18bd3_disk 40960M os-images/957522fe-a345-456b-af83-bacf112e3684@snap 2
[8:59] <vikhyat> perfect then it is all working as expected :D
[8:59] <vikhyat> see
[8:59] <Be-El> \o/
[8:59] <vikhyat> os-images/957522fe-a345-456b-af83-bacf112e3684@snap
[8:59] <vikhyat> this this
[8:59] <vikhyat> --this
[8:59] <vikhyat> this one is parent for this instance
[9:00] <Be-El> i had the impression that rbd snapshots are restricted to the same pool
[9:00] <vikhyat> nope it goes to images pool
[9:00] <vikhyat> yes snapshot is I think restricted to same pool
[9:00] <vikhyat> but clones are not
[9:01] <Be-El> ok, i think i understand. modifications in the running instance are COW operations on the nova pool then?
[9:01] * Jeffrey4l__ (~Jeffrey@119.251.222.18) has joined #ceph
[9:01] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[9:02] <vikhyat> Be-El: right
[9:02] * i_m (~ivan.miro@109.188.125.40) has joined #ceph
[9:03] <vikhyat> rbd info <nova pool>/17c5a67-5eea-4d73-9b8f-e2cf27e18bd3_disk
[9:03] <vikhyat> Be-El: run this command ^^
[9:03] <vikhyat> it will show you more information
[9:03] <Be-El> well, thx again, this has helped a lot. we have a cloud course next week with over 40 participants, and i need to ensure that there's no storage meltdown during that course ;-)
[9:03] <vikhyat> \o/
[9:03] <vikhyat> good luck :D
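
A rough sketch of the glance settings and the verification steps from the exchange above; the pool and image names are examples, not necessarily the ones used in this cluster.

    # /etc/glance/glance-api.conf
    show_image_direct_url = True
    enable_v2_api = True

    # after uploading a RAW image, the images pool should carry a protected @snap,
    # and an instance disk in the nova pool should list it as its parent:
    rbd -p images ls -l
    rbd info nova/<instance-uuid>_disk
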
[9:03] * Jeffrey4l_ (~Jeffrey@120.11.189.28) Quit (Ping timeout: 480 seconds)
[9:07] * saintpablo (~saintpabl@gw01.mhitp.dk) has joined #ceph
[9:10] * karnan (~karnan@125.16.34.66) has joined #ceph
[9:11] * moegyver (~moe@212.85.78.250) has joined #ceph
[9:15] * LiamMon (~liam.monc@2.123.203.107) Quit (Quit: leaving)
[9:16] * LiamMon (~liam.monc@2.123.203.107) has joined #ceph
[9:19] * LiamMon (~liam.monc@2.123.203.107) Quit ()
[9:19] * LiamMon (~liam.monc@2.123.203.107) has joined #ceph
[9:21] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[9:22] * i_m (~ivan.miro@109.188.125.40) Quit (Ping timeout: 480 seconds)
[9:22] * KindOne_ (kindone@198.14.199.50) has joined #ceph
[9:22] * LiamMon (~liam.monc@2.123.203.107) Quit ()
[9:23] * LiamMon (~liam.monc@2.123.203.107) has joined #ceph
[9:28] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) Quit (Remote host closed the connection)
[9:28] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[9:28] * KindOne_ is now known as KindOne
[9:31] * bauruine (~bauruine@2a01:4f8:130:8285:fefe::36) has joined #ceph
[9:32] * LiamMon (~liam.monc@2.123.203.107) Quit (Quit: leaving)
[9:34] * post-factum (~post-fact@vulcan.natalenko.name) has joined #ceph
[9:34] * pfactum (~post-fact@2001:19f0:6c00:8846:5400:ff:fe0c:dfa0) Quit (Read error: Connection reset by peer)
[9:34] * LiamMon (~liam.monc@2.123.203.107) has joined #ceph
[9:34] * dugravot6 (~dugravot6@l-p-dn-in-4a.lionnois.site.univ-lorraine.fr) has joined #ceph
[9:37] * ledgr (~ledgr@78.57.252.56) has joined #ceph
[9:37] * ledgr (~ledgr@78.57.252.56) Quit (Remote host closed the connection)
[9:37] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) has joined #ceph
[9:41] * karnan (~karnan@125.16.34.66) Quit (Ping timeout: 480 seconds)
[9:41] * derjohn_mob (~aj@46.189.28.50) has joined #ceph
[9:48] * ffilzwin (~ffilz@c-76-115-190-27.hsd1.or.comcast.net) Quit (Read error: Connection reset by peer)
[9:48] * ffilz (~ffilz@c-76-115-190-27.hsd1.or.comcast.net) Quit (Read error: Connection reset by peer)
[9:50] * karnan (~karnan@121.244.87.117) has joined #ceph
[9:50] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[9:51] * ffilz (~ffilz@c-76-115-190-27.hsd1.or.comcast.net) has joined #ceph
[9:51] * ffilzwin (~ffilz@c-76-115-190-27.hsd1.or.comcast.net) has joined #ceph
[9:51] * fsimonce (~simon@host145-64-dynamic.52-79-r.retail.telecomitalia.it) has joined #ceph
[9:54] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[9:55] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Ping timeout: 480 seconds)
[10:01] * ivve (~zed@m90-144-211-123.cust.tele2.se) has joined #ceph
[10:04] * saintpablo (~saintpabl@gw01.mhitp.dk) Quit (Read error: Connection reset by peer)
[10:05] * F|1nt (~F|1nt@195.68.37.211) has joined #ceph
[10:06] * saintpablo (~saintpabl@gw01.mhitp.dk) has joined #ceph
[10:09] * wjw-freebsd2 (~wjw@smtp.digiware.nl) has joined #ceph
[10:09] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Read error: Connection reset by peer)
[10:10] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[10:17] * wjw-freebsd2 (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[10:18] * sleinen1 (~Adium@2001:620:0:82::105) Quit (Read error: Connection reset by peer)
[10:19] * sleinen (~Adium@2001:620:0:2d:a65e:60ff:fedb:f305) has joined #ceph
[10:19] * karnan (~karnan@121.244.87.117) Quit (Ping timeout: 480 seconds)
[10:19] * F|1nt (~F|1nt@195.68.37.211) Quit (Quit: Oups, just gone away...)
[10:24] * Jeffrey4l__ (~Jeffrey@119.251.222.18) Quit (Ping timeout: 480 seconds)
[10:24] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[10:24] * fouxm (~foucault@ks01.commit.ninja) Quit (Server closed connection)
[10:24] * fouxm (~foucault@ks01.commit.ninja) has joined #ceph
[10:29] * derjohn_mob (~aj@46.189.28.50) Quit (Ping timeout: 480 seconds)
[10:29] * Hunger (~hunger@prodevops.net) Quit (Server closed connection)
[10:29] * Hungerhu (~hunger@prodevops.net) has joined #ceph
[10:30] * The_Ball (~pi@20.92-221-43.customer.lyse.net) Quit (Server closed connection)
[10:30] * The_Ball (~pi@20.92-221-43.customer.lyse.net) has joined #ceph
[10:30] * derjohn_mob (~aj@46.189.28.50) has joined #ceph
[10:31] * karnan (~karnan@121.244.87.117) has joined #ceph
[10:32] * bara (~bara@ip4-83-240-10-82.cust.nbox.cz) Quit (Quit: Bye guys! (??????????????????? ?????????)
[10:32] * Hungerhu is now known as Hunger
[10:33] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[10:50] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[10:50] * ^Spike^ (~Spike@188.cimarosa.openttdcoop.org) Quit (Server closed connection)
[10:51] * ^Spike^ (~Spike@188.cimarosa.openttdcoop.org) has joined #ceph
[11:04] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[11:07] * peetaur2_ (~peter@i4DF67CD2.pool.tripleplugandplay.com) has joined #ceph
[11:07] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:7472:ce93:221:48ca) has joined #ceph
[11:08] <peetaur2_> Howdy, I had some ceph vms I was testing with, then the host locked up, sysrq didn't work, and did hard reset... one of the mons won't start now. What does this output mean? https://bpaste.net/show/907f1fb69ca5
[11:10] <peetaur2_> I copied ceph.conf from another node and reinstalled the software in case they were corrupt, but no change.
[11:10] * wkennington (~wkenningt@c-71-204-170-241.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[11:11] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[11:17] * Jeffrey4l__ (~Jeffrey@119.251.148.242) has joined #ceph
[11:17] * Hunger (~hunger@prodevops.net) Quit (Ping timeout: 480 seconds)
[11:21] * ivve (~zed@m90-144-211-123.cust.tele2.se) Quit (Ping timeout: 480 seconds)
[11:22] * Hungerhu (~hunger@prodevops.net) has joined #ceph
[11:22] * Hungerhu is now known as Hunger
[11:28] * xophe (~xophe@62-210-69-147.rev.poneytelecom.eu) Quit (Server closed connection)
[11:28] * DanFoster (~Daniel@office.34sp.com) has joined #ceph
[11:28] * xophe (~xophe@62-210-69-147.rev.poneytelecom.eu) has joined #ceph
[11:29] * DanFoster (~Daniel@office.34sp.com) Quit (Remote host closed the connection)
[11:31] * appleq (~appleq@86.57.157.66) has joined #ceph
[11:31] * DanFoster (~Daniel@2a00:1ee0:3:1337:6443:99f3:7f63:d8af) has joined #ceph
[11:32] * appleq (~appleq@86.57.157.66) Quit ()
[11:35] * sankarshan (~sankarsha@121.244.87.117) has joined #ceph
[11:37] * peetaur2_ (~peter@i4DF67CD2.pool.tripleplugandplay.com) Quit (Ping timeout: 480 seconds)
[11:40] * peetaur2 (~peter@i4DF67CD2.pool.tripleplugandplay.com) has joined #ceph
[11:47] * pvh_sa_ (~pvh@41.164.8.114) has joined #ceph
[11:48] * flisky (~Thunderbi@106.38.61.185) Quit (Quit: flisky)
[12:06] * wenduo (~wenduo@218.30.116.10) Quit (Ping timeout: 480 seconds)
[12:08] * derjohn_mob (~aj@46.189.28.50) Quit (Ping timeout: 480 seconds)
[12:09] * danieagle (~Daniel@187.74.73.163) has joined #ceph
[12:16] * Geph (~Geoffrey@41.77.153.99) has joined #ceph
[12:28] * rwheeler (~rwheeler@pool-173-48-195-215.bstnma.fios.verizon.net) has joined #ceph
[12:28] * sankarshan (~sankarsha@121.244.87.117) Quit (Quit: Are you sure you want to quit this channel (Cancel/Ok) ?)
[12:37] * ledgr_ (~ledgr@78.57.252.56) has joined #ceph
[12:39] * walcubi (~walcubi@p5797A570.dip0.t-ipconnect.de) has joined #ceph
[12:40] <walcubi> Is the only way to make a more uniform pg distribution by using osd crush reweight?
[12:44] <Be-El> or using rados objects with a better size distribution or using more placement groups
[12:44] <Be-El> crush reweight is the preferred method
[12:44] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[12:48] <walcubi> Well, all PGs are growing in a uniform way, so there's nothing wrong with size distribution as far as I can see
[12:48] <walcubi> Otherwise you'd have PGs overtaking others, etc.
[12:49] <walcubi> All disks are the same size, it's just that you have one OSD with 122 PGs and another with 154.
[12:50] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[12:51] * kefu|afk (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[12:52] <Be-El> you have to take a certain mismatch in the distribution into account. that's why most people recommend not to fill up a ceph cluster over 60-70% of its capacity
[12:53] * ledgr_ (~ledgr@78.57.252.56) Quit (Remote host closed the connection)
[12:53] * kefu_ (~kefu@114.92.125.128) has joined #ceph
[12:53] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) has joined #ceph
[12:53] <Be-El> depending on the ceph release and the client capabilities you may also want to try certain crush tunables, e.g. the straw2 distribution
[12:55] * ashah (~ashah@121.244.87.117) Quit (Remote host closed the connection)
[12:56] * ashah (~ashah@121.244.87.117) has joined #ceph
[12:57] <walcubi> In my small test env, I'm not sure setting the distribution helps. You get pseudo-random pg distribution regardless.
[12:57] <s3an2> Anyone using inkscope instead of calamari?
[12:57] * bara (~bara@213.175.37.12) has joined #ceph
[12:57] <Be-El> in that case the only way is smoothing the distribution by increasing the number of pgs
[12:58] <Be-El> which has other disadvantages like higher memory consumption etc.
[12:58] <walcubi> I think I'm already on the limit of that. :-)
[12:58] * nwe (~nwe@sigwait.se) has joined #ceph
[12:58] <walcubi> 30 osds, 4096 pgs.
[12:59] * kefu__ (~kefu@114.92.125.128) has joined #ceph
[12:59] * sleinen (~Adium@2001:620:0:2d:a65e:60ff:fedb:f305) Quit (Quit: Leaving.)
[12:59] <walcubi> Using a pg_num that is divisible by 30 seems not to have an effect either.
[13:00] <nwe> hello, I have set up a ceph cluster and everything is working very well, but I want to fetch graphs via telegraf, influxdb and grafana.. I have been following this guide http://www.datacentred.co.uk/blog/ceph-monitoring-telegraf-grafana/ but in telegraf I get: field corresponding to `ceph_user' is not defined in `*ceph.Ceph'
[13:00] <nwe> any idea?
[13:01] <nwe> must I add something in ceph.conf?
[13:02] * bniver (~bniver@pool-98-110-180-234.bstnma.fios.verizon.net) Quit (Remote host closed the connection)
[13:04] * kefu_ (~kefu@114.92.125.128) Quit (Ping timeout: 480 seconds)
[13:05] * _mrp (~mrp@82.117.199.26) has joined #ceph
[13:06] * rraja (~rraja@121.244.87.117) has joined #ceph
[13:07] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[13:08] * derjohn_mob (~aj@46.189.28.50) has joined #ceph
[13:10] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) Quit (Quit: Ex-Chat)
[13:10] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Read error: Connection reset by peer)
[13:13] * hgichon (~hgichon@112.220.91.130) Quit (Quit: Leaving)
[13:13] * Nicho1as (~nicho1as@00022427.user.oftc.net) has joined #ceph
[13:17] <Geph> Hi all,
[13:17] <Geph> I have 240GB enterprise SSDs for my planned Ceph OSD nodes.
[13:17] <Geph> They will be used for the OS on each node and the 5 OSD journals for 5 spinners.
[13:17] <Geph> Anyone have any advice on partitioning and if it might affect wear leveling?
[13:17] <Geph> Thanks
[13:25] <walcubi> Be-El, so reweight-by-pg is runtime, then I just move that reweight value into crush to make it permanent, right?
[13:25] * kevcampb (~kev@2001:41c8:1:60d0::225) Quit (Server closed connection)
[13:25] * kevcampb (~kev@orchid.vm.bytemark.co.uk) has joined #ceph
[13:25] <Be-El> walcubi: does reweight by pg change the osd weight or the osd crush weight?
[13:30] <walcubi> Be-El, osd reweight. Otherwise it would come under 'osd crush reweight' in the command-line?
[13:31] <Be-El> no clue, haven't worked with it yet
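
A short sketch of the two knobs being discussed, with illustrative values; reweight-by-pg adjusts the temporary 0..1 override weight of the OSDs, while crush reweight changes the permanent CRUSH weight.

    ceph osd df                          # compare PG counts and utilisation per OSD
    ceph osd reweight-by-pg 110          # runtime rebalancing via the override weight
    ceph osd crush reweight osd.7 1.64   # permanent change to one OSD's CRUSH weight
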
[13:31] * MrBy__ (~MrBy@85.115.23.42) Quit (Read error: Connection reset by peer)
[13:31] * MrBy__ (~MrBy@85.115.23.42) has joined #ceph
[13:42] * georgem (~Adium@24.114.54.226) has joined #ceph
[13:43] * arbrandes (~arbrandes@ec2-54-172-54-135.compute-1.amazonaws.com) has joined #ceph
[13:45] * georgem (~Adium@24.114.54.226) Quit ()
[13:46] * georgem (~Adium@206.108.127.16) has joined #ceph
[13:47] * foxxx0 (~fox@2a01:4f8:200:216b::2) Quit (Quit: WeeChat 1.0.1)
[13:49] * arbrandes1 (~arbrandes@ec2-54-172-54-135.compute-1.amazonaws.com) Quit (Ping timeout: 480 seconds)
[13:52] * foxxx0 (~fox@valhalla.nano-srv.net) has joined #ceph
[13:54] * ledgr_ (~ledgr@78.57.252.56) has joined #ceph
[13:57] * kefu__ (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[13:59] * ashah (~ashah@121.244.87.117) Quit (Quit: Leaving)
[14:00] * sleinen (~Adium@nat-dok-15-050.nat.fhnw.ch) has joined #ceph
[14:01] * jimbo_insa (~oftc-webi@129.162.189.102) Quit (Quit: Page closed)
[14:01] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[14:01] * nass5 (~fred@l-p-dn-in-12a.lionnois.site.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[14:03] * sleinen1 (~Adium@2001:620:0:82::105) has joined #ceph
[14:04] * foxxx0 (~fox@valhalla.nano-srv.net) Quit (Quit: WeeChat 1.5)
[14:05] * foxxx0 (~fox@valhalla.nano-srv.net) has joined #ceph
[14:05] * bara (~bara@213.175.37.12) Quit (Ping timeout: 480 seconds)
[14:05] * BlaXpirit (~irc@blaxpirit.com) Quit (Quit: Bye)
[14:06] * BlaXpirit (~irc@blaxpirit.com) has joined #ceph
[14:08] * sleinen (~Adium@nat-dok-15-050.nat.fhnw.ch) Quit (Ping timeout: 480 seconds)
[14:09] * foxxx0 (~fox@valhalla.nano-srv.net) Quit ()
[14:09] * georgem (~Adium@206.108.127.16) Quit (Read error: Connection reset by peer)
[14:09] * foxxx0 (~fox@valhalla.nano-srv.net) has joined #ceph
[14:12] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[14:14] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[14:16] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Ping timeout: 480 seconds)
[14:16] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) has joined #ceph
[14:19] * karnan (~karnan@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:20] * BlaXpirit (~irc@blaxpirit.com) Quit (Quit: Bye)
[14:21] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[14:21] * BlaXpirit (~irc@blaxpirit.com) has joined #ceph
[14:21] * BlaXpirit (~irc@blaxpirit.com) Quit (Remote host closed the connection)
[14:22] * BlaXpirit (~irc@blaxpirit.com) has joined #ceph
[14:22] * georgem (~Adium@24.114.54.226) has joined #ceph
[14:24] * BlaXpirit (~irc@blaxpirit.com) Quit ()
[14:25] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[14:26] * BlaXpirit (~irc@blaxpirit.com) has joined #ceph
[14:31] * EinstCrazy (~EinstCraz@58.39.76.182) has joined #ceph
[14:32] * georgem (~Adium@24.114.54.226) Quit (Quit: Leaving.)
[14:36] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[14:38] * mattbenjamin1 (~mbenjamin@76-206-42-50.lightspeed.livnmi.sbcglobal.net) has joined #ceph
[14:44] * ledgr_ (~ledgr@78.57.252.56) Quit (Remote host closed the connection)
[14:44] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) has joined #ceph
[14:47] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[14:47] * bene2 (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[14:52] * georgem (~Adium@206.108.127.16) has joined #ceph
[14:53] * zigo_ is now known as zigo
[14:54] <ledgr> Hi, I want to migrate metadata to nvme storage
[14:54] <ledgr> is it possible to migrate without downtime or to migrate at all?
[14:55] <Be-El> which metadata?
[14:58] <ledgr> cephfs
[15:00] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[15:00] <Be-El> you need to create a crush ruleset that puts pgs on nvme storage only, _test_ it with a test pool, and change the crush_ruleset for the metadata pool to the new one
[15:01] * nikbor (~n.borisov@admins.1h.com) has joined #ceph
[15:02] <ledgr> and everything *should* migrate to nvme storage seamlessly?
[15:02] <nikbor> hello, what does "failing to respond to cache pressure" mean? i just got this message and have no idea what it means, apparently there is no simple answer to this on the various ceph mailing lists as well
[15:05] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[15:06] * mhack (~mhack@24-151-36-149.dhcp.nwtn.ct.charter.com) has joined #ceph
[15:07] * mattbenjamin1 (~mbenjamin@76-206-42-50.lightspeed.livnmi.sbcglobal.net) Quit (Quit: Leaving.)
[15:08] * theanalyst (theanalyst@open.source.rocks.my.socks.firrre.com) Quit (Server closed connection)
[15:08] * theanalyst (theanalyst@open.source.rocks.my.socks.firrre.com) has joined #ceph
[15:14] * infernix (nix@000120cb.user.oftc.net) Quit (Server closed connection)
[15:15] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[15:18] * infernix (nix@000120cb.user.oftc.net) has joined #ceph
[15:18] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:19] <peetaur2> why won't my ceph-mds daemon start? says "ERROR: failed to authenticate: (22) Invalid argument" https://bpaste.net/show/9097b25a0fb6
[15:19] <Be-El> ledgr: yes, ceph will recognize that the pgs are on the wrong osds and move the data by backfilling
[15:20] <Be-El> ledgr: keep in mind that the metadata pool contains only a very small amount of data, and a high number of empty objects with associated metadata (e.g. directories and their content)
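
A very rough sketch of what Be-El describes, using Jewel-era syntax; the 'nvme' CRUSH root, the rule and pool names are hypothetical, the metadata pool name is the usual default, and the NVMe OSDs are assumed to already sit under that root.

    ceph osd crush rule create-simple nvme-only nvme host        # rule selecting OSDs under the nvme root
    ceph osd pool create nvme-test 64 64 replicated nvme-only    # test pool to confirm PG placement
    ceph osd pool set cephfs_metadata crush_ruleset <rule-id>    # metadata PGs then backfill onto NVMe
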
[15:20] <Be-El> nikbor: it means that a cephfs client is not releasing capabilities on request from the mds
[15:21] <Be-El> peetaur2: does the mds daemon has a valid ceph keyring?
[15:21] <nikbor> Be-El: and what does *that* mean :) and how to fix it?
[15:21] <nikbor> will remounting cephfs fix this?
[15:22] <peetaur2> Be-El: I believe so... I deployed it using ceph-deploy, and I can see the name and key lines in /var/lib/ceph/mds/ceph-ceph1/keyring file and `ceph auth get mds.ceph1` match
[15:22] <Be-El> nikbor: the mds has an internal cache for files, capabilities (locks on files for clients etc). if that cache is filled up, the mds asks clients to release capabilities
[15:22] <Be-El> nikbor: the message indicates that either a cephfs client is busy (that one mentioned in the message), or the client is buggy
[15:23] <Be-El> nikbor: or you have too many open files for the mds cache size setting
[15:23] <Be-El> peetaur2: i haven't used ceph-deploy yet, but you can try to start the mds manually
[15:24] <peetaur2> I did try starting it manually... it's in my pastebin
[15:24] <peetaur2> I also added -f and --debug_ms 9 and --debug_mds 10 which added no extra output
[15:26] * squisher (~dasquishe@seeker.mcbf.net) Quit (Server closed connection)
[15:26] <peetaur2> maybe I'll try disabling cephx and see what happens. :)
[15:26] * squisher (~dasquishe@seeker.mcbf.net) has joined #ceph
[15:27] * KindOne (kindone@198.14.199.50) has joined #ceph
[15:27] * rakeshgm (~rakesh@121.244.87.117) Quit (Remote host closed the connection)
[15:27] * wes_dillingham (~wes_dilli@140.247.242.44) has joined #ceph
[15:28] <Be-El> peetaur2: maybe a permission problem?
[15:28] * kefu (~kefu@114.92.125.128) has joined #ceph
[15:29] * vimal (~vikumar@121.244.87.116) Quit (Quit: Leaving)
[15:31] * ivve (~zed@m90-144-211-123.cust.tele2.se) has joined #ceph
[15:31] <peetaur2> it seems to run now with auth supported/required stuff set to none
[15:31] <peetaur2> so it seems auth related, but don't know how to fix it properly
[15:33] <Be-El> peetaur2: which capabilities does the mds auth keyring provide? maybe some important one is missing
[15:34] * nass5 (~fred@l-p-dn-in-12a.lionnois.site.univ-lorraine.fr) has joined #ceph
[15:35] <peetaur2> so maybe try "allow *" on it?
[15:36] <Be-El> caps: [mds] allow
[15:36] <Be-El> caps: [mon] allow profile mds
[15:36] <Be-El> caps: [osd] allow rwx
[15:36] <peetaur2> I guess you might be right because removing all and giving only mon allow r, it still works with auth disabled.
[15:36] <Be-El> these are the capabilities of our mds server
[15:37] <peetaur2> I tried that first, but because of the bug http://tracker.ceph.com/issues/16443 I changed it to mds allow *
[15:37] <peetaur2> but I can change it back with normal ceph commands...just ceph-deploy has that bug
[15:37] <peetaur2> will test that now
[15:37] <Be-El> well the capabilities are from our initial firefly setup, but they still work under jewel
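
A sketch of applying the capabilities Be-El listed to the MDS key mentioned earlier (mds.ceph1 is the entity name from peetaur2's setup):

    ceph auth caps mds.ceph1 mds 'allow' mon 'allow profile mds' osd 'allow rwx'
    ceph auth get mds.ceph1     # verify the keyring now carries all three caps
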
[15:37] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[15:37] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[15:38] <nikbor> Be-El: the client is the kernel-mode cephfs driver, any ideas how to debug this further to try and pinpoint the root cause
[15:39] <Be-El> nikbor: which kernel version do you use?
[15:39] <nikbor> 4.4.14
[15:39] * sleinen1 (~Adium@2001:620:0:82::105) Quit (Ping timeout: 480 seconds)
[15:40] <Be-El> nikbor: you can check for processes which keep files open, have a look at the ceph debug file (/sys/kernel/debug/ceph/... ) etc
[15:40] <Be-El> nikbor: if you are able to umount/remount cephfs, it might also help (but not solve the problem itself)
[15:41] <peetaur2> Be-El: nope, doesn't work when changing caps
[15:42] <nikbor> Be-El: i'm told we can't unmount but only remount so will try this
[15:42] <Be-El> nikbor: so there's probably a process keeping the file open
[15:42] <Be-El> nikbor: remount won't help in that case
[15:43] <Be-El> peetaur2: in that case someone else has to step up and make further suggestions
[15:44] <Be-El> nikbor: use tools like lsof to see which files are opened on the host from the cephfs mountpoint
[15:44] <peetaur2> Be-El: ok, thanks for trying
[15:45] * vata1 (~vata@207.96.182.162) has joined #ceph
[15:45] <nikbor> Be-El: but what if the cephfs is actually in use ATM, i mean if there are legit users and unmounting is not an option?
[15:46] <nikbor> i mean should the said cache be sized according to the workload ?
[15:46] <Be-El> nikbor: you can check the cache and its size via the daemon socket on the mds host
[15:46] * shaunm (~shaunm@ms-208-102-105-216.gsm.cbwireless.com) Quit (Quit: Ex-Chat)
[15:47] * mason1 (~mollstam@213.61.149.100) has joined #ceph
[15:48] <peetaur2> Be-El: what kind of conf do you need in /etc/ceph/ceph.conf to run a ceph-mds daemon? ceph-deploy *insisted* on overwriting that file, but when it did, all it did was remove whitespace...I expected some addition; could that be the problem?
[15:48] <Be-El> nikbor: the 'session ls' command on the socket will list all active cephfs clients and the number of capabilities they currently have
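
A sketch of the checks mentioned above; the daemon name and the mountpoint are placeholders.

    ceph daemon mds.<name> session ls       # on the MDS host: clients and their num_caps
    ceph daemon mds.<name> perf dump        # cache and capability counters
    cat /sys/kernel/debug/ceph/*/caps       # on the client: caps held by the kernel driver
    lsof +D /mnt/cephfs                     # processes holding files open under the mount
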
[15:48] * Geph (~Geoffrey@41.77.153.99) Quit (Quit: Leaving)
[15:49] * shaunm (~shaunm@ms-208-102-105-216.gsm.cbwireless.com) has joined #ceph
[15:49] <Be-El> peetaur2: i do not have any settings for the mds server except cache size (necessary for larger setup) and failover
[15:50] * squizzi (~squizzi@107.13.237.240) has joined #ceph
[15:50] <m0zes> for larger setups, I'd also tune the mds max purge parameters
[15:51] <Be-El> m0zes: good point
[15:51] * vbellur (~vijay@71.234.224.255) Quit (Ping timeout: 480 seconds)
[15:51] <Be-El> (especially if you have students creating millions of temporary files on cephfs.....)
[15:51] <m0zes> that bit me. hard.
[15:51] * Svedrin (svedrin@elwing.funzt-halt.net) Quit (Server closed connection)
[15:52] <peetaur2> when I make temp files, I make sure to put them on highly redundant shared storage :)
[15:52] * Svedrin (svedrin@elwing.funzt-halt.net) has joined #ceph
[15:53] <Be-El> m0zes: a well known problem, especially if users are deleting large files with many many chunks
[15:53] <Be-El> but it got way better with jewel
[15:54] * kefu (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[15:54] <Be-El> speaking of it......1 million strays during the last hour, with 30.000 strays active on the mds
[15:57] * rendar (~I@host147-177-dynamic.22-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[16:03] * kefu (~kefu@114.92.125.128) has joined #ceph
[16:04] * vbellur (~vijay@nat-pool-bos-u.redhat.com) has joined #ceph
[16:04] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:07] * derjohn_mob (~aj@46.189.28.50) Quit (Ping timeout: 480 seconds)
[16:08] <nikbor> Be-El: mind if i msg you in private?
[16:08] <Be-El> nikbor: feel free
[16:10] * derjohn_mob (~aj@46.189.28.92) has joined #ceph
[16:13] * mattbenjamin1 (~mbenjamin@12.118.3.106) has joined #ceph
[16:15] * MrBy__ (~MrBy@85.115.23.42) Quit (Read error: Connection reset by peer)
[16:15] * EinstCrazy (~EinstCraz@58.39.76.182) Quit (Remote host closed the connection)
[16:15] * MrBy_ (~MrBy@85.115.23.42) has joined #ceph
[16:17] * doppelgrau (~doppelgra@132.252.235.172) Quit (Quit: Leaving.)
[16:17] * mason1 (~mollstam@213.61.149.100) Quit ()
[16:17] * kefu (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[16:17] * doppelgrau (~doppelgra@132.252.235.172) has joined #ceph
[16:18] * verdurin (~verdurin@2001:8b0:281:78ec:e2cb:4eff:fe01:f767) Quit (Server closed connection)
[16:19] * verdurin (~verdurin@2001:8b0:281:78ec:e2cb:4eff:fe01:f767) has joined #ceph
[16:19] * EinstCrazy (~EinstCraz@58.39.76.182) has joined #ceph
[16:19] * derjohn_mob (~aj@46.189.28.92) Quit (Ping timeout: 480 seconds)
[16:20] * srk (~Siva@32.97.110.51) has joined #ceph
[16:23] * harbie (~notroot@2a01:4f8:211:2344:0:dead:beef:1) Quit (Server closed connection)
[16:23] * harbie (~notroot@2a01:4f8:211:2344:0:dead:beef:1) has joined #ceph
[16:23] * derjohn_mob (~aj@46.189.28.43) has joined #ceph
[16:24] * ivve (~zed@m90-144-211-123.cust.tele2.se) Quit (Ping timeout: 480 seconds)
[16:26] * doppelgrau (~doppelgra@132.252.235.172) Quit (Quit: Leaving.)
[16:27] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[16:27] * kefu (~kefu@114.92.125.128) has joined #ceph
[16:32] * ggarg (~ggarg@host-82-135-29-34.customer.m-online.net) has left #ceph
[16:32] * ggarg (~ggarg@host-82-135-29-34.customer.m-online.net) has joined #ceph
[16:34] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[16:34] * saintpablo (~saintpabl@gw01.mhitp.dk) Quit (Ping timeout: 480 seconds)
[16:35] * pvh_sa_ (~pvh@41.164.8.114) Quit (Ping timeout: 480 seconds)
[16:40] * xarses (~xarses@64.124.158.3) has joined #ceph
[16:40] * sleinen (~Adium@2001:620:0:2d:a65e:60ff:fedb:f305) has joined #ceph
[16:41] * EinstCrazy (~EinstCraz@58.39.76.182) Quit (Remote host closed the connection)
[16:42] * ircolle (~Adium@2601:285:201:633a:90b2:2b58:76f9:cb69) has joined #ceph
[16:42] * derjohn_mob (~aj@46.189.28.43) Quit (Ping timeout: 480 seconds)
[16:43] * linuxkidd (~linuxkidd@ip70-189-202-62.lv.lv.cox.net) has joined #ceph
[16:45] * bara (~bara@213.175.37.12) has joined #ceph
[16:48] * yanzheng1 (~zhyan@118.116.112.3) Quit (Quit: This computer has gone to sleep)
[16:54] * bauruine (~bauruine@2a01:4f8:130:8285:fefe::36) Quit (Ping timeout: 480 seconds)
[17:05] * kefu is now known as kefu|afk
[17:08] * vimal (~vikumar@114.143.162.59) has joined #ceph
[17:08] * ledgr_ (~ledgr@78.57.252.56) has joined #ceph
[17:12] * mike_s (~oftc-webi@37.17.49.140) has joined #ceph
[17:12] * sebastian-w (~quassel@212.218.8.139) Quit (Remote host closed the connection)
[17:12] * sebastian-w (~quassel@212.218.8.139) has joined #ceph
[17:14] * derjohn_mob (~aj@88.128.80.60) has joined #ceph
[17:14] * TMM (~hp@dhcp-077-248-009-229.chello.nl) Quit (Ping timeout: 480 seconds)
[17:14] * kristen (~kristen@jfdmzpr03-ext.jf.intel.com) has joined #ceph
[17:15] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[17:16] * kefu|afk (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[17:16] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) has joined #ceph
[17:17] <mike_s> Hey
[17:17] <mike_s> please help, the error pool .rgw.buckets has many more objects per pg than average
[17:17] <mike_s> 3 OSDs
[17:18] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[17:18] <mike_s> pool 9 '.rgw.buckets' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 192 pgp_num 192 last_change 511 flags hashpspool min_write_recency_for_promote 1 stripe_width 0
[17:18] * ledgr_ (~ledgr@78.57.252.56) Quit (Remote host closed the connection)
[17:18] * kefu (~kefu@114.92.125.128) has joined #ceph
[17:19] <mike_s> too many PGs per OSD (1680 > max 800) pool .rgw.buckets objects per pg (17655) is more than 11.4272 times cluster average (1545)
[17:19] * MrBy__ (~MrBy@85.115.23.38) has joined #ceph
[17:20] <mike_s> how to fix it?
[17:20] * MrBy_ (~MrBy@85.115.23.42) Quit (Read error: Connection reset by peer)
[17:20] * TMM (~hp@dhcp-077-248-009-229.chello.nl) has joined #ceph
[17:21] * MrBy (~MrBy@85.115.23.42) has joined #ceph
[17:23] * ledgr (~ledgr@78.57.252.56) has joined #ceph
[17:25] * ledgr (~ledgr@78.57.252.56) Quit (Remote host closed the connection)
[17:25] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) has joined #ceph
[17:26] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[17:26] * sudocat1 (~dibarra@192.185.1.20) has joined #ceph
[17:28] * MrBy__ (~MrBy@85.115.23.38) Quit (Ping timeout: 480 seconds)
[17:29] * kefu is now known as kefu|afk
[17:29] * BennyRene2016 (~BennyRene@80-44-55-115.dynamic.dsl.as9105.com) has joined #ceph
[17:29] * MrBy (~MrBy@85.115.23.42) Quit (Read error: Connection reset by peer)
[17:29] * MrBy (~MrBy@85.115.23.42) has joined #ceph
[17:32] * ju5t (~getup@gw.office.cyso.net) has joined #ceph
[17:33] <ju5t> Hello, we're seeing an error when starting up radosgw that seems to originate from common/ceph_crypto.cc: 77 FAILED assert(crypto_context != __null). I'm a little lost as to what could cause crypto_context to be empty. What am I missing?
[17:34] <BennyRene2016> Hey, Hi ?
[17:34] <BennyRene2016> Don't Worry ? You Not Late ?
[17:35] * bauruine (~bauruine@213.239.205.247) has joined #ceph
[17:36] * Racpatel (~Racpatel@2601:87:3:31e3::2433) Quit (Quit: Leaving)
[17:36] * kefu|afk (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[17:37] * Racpatel (~Racpatel@2601:87:3:31e3::2433) has joined #ceph
[17:38] * Unforgiven (~Aramande_@tsn109-201-154-144.dyn.nltelcom.net) has joined #ceph
[17:39] * derjohn_mob (~aj@88.128.80.60) Quit (Ping timeout: 480 seconds)
[17:43] * bara (~bara@213.175.37.12) Quit (Ping timeout: 480 seconds)
[17:43] * mike_s (~oftc-webi@37.17.49.140) Quit (Quit: Page closed)
[17:43] * MrBy (~MrBy@85.115.23.42) Quit (Ping timeout: 480 seconds)
[17:45] * kefu (~kefu@114.92.125.128) has joined #ceph
[17:47] * squizzi (~squizzi@107.13.237.240) Quit (Ping timeout: 480 seconds)
[17:48] * derjohn_mob (~aj@88.128.80.65) has joined #ceph
[17:52] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[17:53] <infernix> leseb: any reason why the roles in ceph-ansible aren't submodules to the various ceph-ansible-* repos on github.com/ceph?
[17:53] <infernix> or ansible-ceph-* repos actually
[17:56] <vimal> infernix, i think leseb is out.. expect a delay
[17:58] * mattch (~mattch@w5430.see.ed.ac.uk) Quit (Quit: Leaving.)
[17:58] <infernix> np
[17:58] <infernix> thanks!
[17:58] * vbellur (~vijay@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[18:01] * derjohn_mob (~aj@88.128.80.65) Quit (Ping timeout: 480 seconds)
[18:08] * Unforgiven (~Aramande_@tsn109-201-154-144.dyn.nltelcom.net) Quit ()
[18:09] * sleinen (~Adium@2001:620:0:2d:a65e:60ff:fedb:f305) Quit (Quit: Leaving.)
[18:09] * vbellur (~vijay@nat-pool-bos-t.redhat.com) has joined #ceph
[18:12] * bildramer (~thundercl@5.157.38.34) has joined #ceph
[18:13] * MrBy (~MrBy@85.115.23.42) has joined #ceph
[18:20] * masteroman (~ivan@93-142-246-193.adsl.net.t-com.hr) has joined #ceph
[18:21] * squizzi (~squizzi@2001:420:2240:1268:ad85:b28:ee1c:890) has joined #ceph
[18:21] * oliveiradan (~doliveira@137.65.133.10) Quit (Quit: Leaving)
[18:21] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) Quit (Quit: Leaving.)
[18:25] * oliveiradan2 (~doliveira@67.214.238.80) Quit (Remote host closed the connection)
[18:30] * mykola (~Mikolaj@91.245.78.210) has joined #ceph
[18:31] * oliveiradan (~doliveira@137.65.133.10) has joined #ceph
[18:38] * _kelv (uid73234@id-73234.highgate.irccloud.com) has joined #ceph
[18:41] * xarses (~xarses@64.124.158.3) Quit (Read error: Connection reset by peer)
[18:41] <_kelv> is `ceph daemon` expected to utilize the permissions defined in `ceph auth caps`?
[18:41] * xarses (~xarses@64.124.158.3) has joined #ceph
[18:42] * bauruine (~bauruine@213.239.205.247) Quit (Quit: ZNC - http://znc.in)
[18:42] * bildramer (~thundercl@5.157.38.34) Quit ()
[18:43] * bauruine (~bauruine@2a01:4f8:130:8285:fefe::36) has joined #ceph
[18:44] * kefu (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[18:45] * rraja (~rraja@121.244.87.117) Quit (Ping timeout: 480 seconds)
[18:50] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:53] * ircolle (~Adium@2601:285:201:633a:90b2:2b58:76f9:cb69) Quit (Quit: Leaving.)
[18:53] * DanFoster (~Daniel@2a00:1ee0:3:1337:6443:99f3:7f63:d8af) Quit (Quit: Leaving)
[18:54] * dmick (~dmick@206.169.83.146) has left #ceph
[18:59] * srk (~Siva@32.97.110.51) Quit (Ping timeout: 480 seconds)
[19:02] * srk (~Siva@32.97.110.51) has joined #ceph
[19:05] * vbellur (~vijay@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[19:05] * Nicho1as (~nicho1as@00022427.user.oftc.net) Quit (Quit: A man from the Far East; using WeeChat 1.5)
[19:06] * derjohn_mob (~aj@2001:6f8:1337:0:847:ccfa:c792:6896) has joined #ceph
[19:08] * skarn (skarn@0001f985.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:13] * sleinen (~Adium@2001:620:0:69::100) has joined #ceph
[19:18] <kingcu> so i am pretty excited about bluestore and have been holding off a cluster upgrade until it becomes production ready. anyone know if there's been an official announcement of the upgrade path for existing clusters? I haven't found anything official, but did see slides from a talk from a western digital engineer mentioning it. thought i'd ask y'all
[19:18] * vbellur (~vijay@nat-pool-bos-u.redhat.com) has joined #ceph
[19:18] <kingcu> https://s3.amazonaws.com/rwgps/screenshots/rv1k2Xdw.png
[19:18] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Quit: Bye guys! (??????????????????? ?????????)
[19:18] * ju5t (~getup@gw.office.cyso.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[19:23] * Xerati (~anadrom@185.133.32.19) has joined #ceph
[19:24] * vimal (~vikumar@114.143.162.59) Quit (Quit: Leaving)
[19:25] * vimal (~vikumar@114.143.162.59) has joined #ceph
[19:26] * derjohn_mob (~aj@2001:6f8:1337:0:847:ccfa:c792:6896) Quit (Ping timeout: 480 seconds)
[19:28] <georgem> kingcu: the upgrade path will be rebuilding each OSD node, one at a time, and letting recovery put the data back
[19:29] <kingcu> georgem: cool so that slidedeck was accurate - kraken will be mixed store compatible and the upgrade path should be "easy"
[19:29] <kingcu> wasn't sure since that wasn't an official slidedeck. thanks georgem
[19:31] <georgem> kingcu: http://events.linuxfoundation.org/sites/events/files/slides/LinuxCon%20NA%20BlueStore.pdf
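
For context, a rough sketch of the per-node rebuild georgem describes, once a BlueStore-capable release is running; the OSD id is a placeholder and this is not an official upgrade procedure.

    ceph osd out 3                 # drain one OSD; recovery re-replicates its PGs
    systemctl stop ceph-osd@3
    # wipe and re-create osd.3 with the bluestore objectstore, bring it back in,
    # and let backfill repopulate it before moving on to the next OSD/node
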
[19:34] * pvh_sa_ (~pvh@169-0-182-84.ip.afrihost.co.za) has joined #ceph
[19:36] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) Quit (Remote host closed the connection)
[19:37] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) has joined #ceph
[19:39] * skarn (skarn@0001f985.user.oftc.net) has joined #ceph
[19:45] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:7472:ce93:221:48ca) Quit (Ping timeout: 480 seconds)
[19:50] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[19:52] <koollman> is there any documentation to run radosgw without ceph-deploy ?
[19:53] * Xerati (~anadrom@185.133.32.19) Quit ()
[19:53] * owasserm_ (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) has joined #ceph
[19:53] * owasserm_ (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) Quit (Remote host closed the connection)
[19:53] * vimal (~vikumar@114.143.162.59) Quit (Quit: Leaving)
[19:56] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[19:57] * _mrp (~mrp@82.117.199.26) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[20:03] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[20:10] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) Quit (Quit: treenerd_)
[20:25] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) Quit (Remote host closed the connection)
[20:28] <evilrob> koollman: I only see docs for installing it with apache+fastcgi http://docs.ceph.com/docs/infernalis/install/install-ceph-gateway/
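
For a Jewel-era setup, civetweb avoids the apache+fastcgi path; a minimal sketch without ceph-deploy, where the gateway name, port and paths are examples:

    # ceph.conf on the gateway host
    [client.rgw.gw1]
    rgw_frontends = "civetweb port=7480"
    keyring = /var/lib/ceph/radosgw/ceph-rgw.gw1/keyring

    ceph auth get-or-create client.rgw.gw1 mon 'allow rw' osd 'allow rwx' \
        -o /var/lib/ceph/radosgw/ceph-rgw.gw1/keyring
    systemctl start ceph-radosgw@rgw.gw1
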
[20:29] * i_m (~ivan.miro@109.188.125.40) has joined #ceph
[20:46] * i_m (~ivan.miro@109.188.125.40) Quit (Ping timeout: 480 seconds)
[20:54] <btaylor> i just noticed in systemctl that i have a few things like 'ceph-disk@dev-sde2.service' are those for automounting devs?
[20:54] <btaylor> i only see sde2 and sdf2, which are journal partitions
[20:57] * i_m (~ivan.miro@109.188.125.40) has joined #ceph
[20:59] * northrup (~northrup@75-146-11-137-Nashville.hfc.comcastbusiness.net) has joined #ceph
[21:01] * georgem1 (~Adium@206.108.127.16) has joined #ceph
[21:01] * georgem (~Adium@206.108.127.16) Quit (Read error: Connection reset by peer)
[21:02] * wes_dillingham (~wes_dilli@140.247.242.44) Quit (Quit: wes_dillingham)
[21:06] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[21:07] * sickology (~mio@vpn.bcs.hr) has joined #ceph
[21:14] * sickolog1 (~mio@vpn.bcs.hr) Quit (Ping timeout: 480 seconds)
[21:16] * aiicore_ (~aiicore@s30.linuxpl.com) Quit (Quit: leaving)
[21:17] * aiicore (~aiicore@s30.linuxpl.com) has joined #ceph
[21:24] * smiley (~oftc-webi@pool-108-45-41-147.washdc.fios.verizon.net) has joined #ceph
[21:24] <smiley> Hello..is there anyone here who has a few minutes to help me debug a ceph-deploy issue I'm having?
[21:25] * BennyRene2016 (~BennyRene@80-44-55-115.dynamic.dsl.as9105.com) Quit (Quit: Quitting)
[21:28] <blizzow> smiley: just ask your question.
[21:28] * wes_dillingham (~wes_dilli@140.247.242.44) has joined #ceph
[21:28] <blizzow> someone may or may not answer.
[21:38] <smiley> Well...I sent an email to the mailing list a bit ago...so rather than typing it all out again...here is a link to the issue that I am having: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-September/012886.html
[21:39] <smiley> I am trying to use ceph-deploy as I have many other times...although this time I am using nvme for the journals...ceph-deploy does not seem to error out...but the OSDs are not getting added to the cluster
[21:39] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) has joined #ceph
[21:43] * lmb_ (~Lars@ip5b404bab.dynamic.kabel-deutschland.de) Quit (Quit: Leaving)
[21:45] * i_m (~ivan.miro@109.188.125.40) Quit (Ping timeout: 480 seconds)
[21:53] * Jeffrey4l__ (~Jeffrey@119.251.148.242) Quit (Ping timeout: 480 seconds)
[22:00] * vbellur (~vijay@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[22:05] * newbie|2 (~kvirc@72.2.237.33) has joined #ceph
[22:06] <newbie|2> Good day everyone. has anyone seen this issue before? (google didn't help) Failed to execute command: systemctl enable ceph.target
[22:06] <newbie|2> I'm trying to install ceph on a raspberry pi cluster
[22:06] <newbie|2> using jessie
[22:07] <newbie|2> I've been "battling" all day with this
[22:07] <newbie|2> I'm about to give up :)
[22:09] <btaylor> using ceph-deploy?
[22:10] <newbie|2> yes
[22:10] <btaylor> i'd guess your rpi cluster doesn't have enough ram to do anything
[22:10] <btaylor> but if it works i'd be amazed
[22:10] <newbie|2> There are tutorials on the internet with people getting it working
[22:10] <newbie|2> It's not going to be very speedy but for learning ceph and "proof of concept" it would be great if I could do it like they did
[22:10] <btaylor> run journalctl -x to see what is in there and why it may have failed
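    A couple of journalctl invocations along those lines (the unit name is an assumption):

        journalctl -xe                         # recent log entries with explanatory context
        journalctl -u ceph.target --no-pager   # entries for the ceph target, if the unit exists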
[22:11] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[22:11] <newbie|2> OK, let me see. In the meantime, does this help you understand? (I'm sure you have a better understanding of linux than me)
[22:11] <newbie|2> [ceph001][DEBUG ] create the init path if it does not exist
[22:11] <newbie|2> [ceph001][INFO ] Running command: sudo systemctl enable ceph.target
[22:11] <newbie|2> [ceph001][WARNIN] Failed to execute operation: No such file or directory
[22:11] <newbie|2> [ceph001][ERROR ] RuntimeError: command returned non-zero exit status: 1
[22:11] <newbie|2> [ceph_deploy.mon][ERROR ] Failed to execute command: systemctl enable ceph.target
[22:11] <btaylor> newbie|2: i'm doing all my POC in virtualbox
[22:11] <newbie|2> [ceph_deploy][ERROR ] GenericError: Failed to create 1 monitors
[22:12] <btaylor> pastebin should help with those logs, or github gist
[22:12] * _mrp (~mrp@178.254.148.42) has joined #ceph
[22:12] * wak-work (~wak-work@2620:15c:2c5:3:64:234a:77af:bd28) Quit (Remote host closed the connection)
[22:13] * wak-work (~wak-work@2620:15c:2c5:3:1db3:d0d3:9625:8e36) has joined #ceph
[22:14] <btaylor> newbie|2: what do you get from "systemctl | grep ceph"
[22:14] * georgem1 (~Adium@206.108.127.16) Quit (Read error: Connection reset by peer)
[22:15] * georgem (~Adium@206.108.127.16) has joined #ceph
[22:15] <newbie|2> Hey btaylor. thanks for the suggestion. Here is where it fails: http://pastebin.com/print/8HenmwAx
[22:15] * _mrp (~mrp@178.254.148.42) Quit ()
[22:15] <newbie|2> one sec
[22:16] <btaylor> what are the series of commands you've run to get to this point?
[22:16] <newbie|2> if I filter with grep, I don't get any output
[22:17] <btaylor> k so seems like it's failed to install some things
[22:17] <btaylor> i think you may have missed a ceph-deploy command earlier.
[22:18] <newbie|2> No. here is what I ran
[22:18] <newbie|2> First, I installed ceph-deploy ceph and ceph-common
[22:18] <newbie|2> After that I did:
[22:18] <newbie|2> ceph-deploy new ceph001
[22:18] <newbie|2> which went fine
[22:19] <newbie|2> and next I did ceph-deploy mon create-initial
[22:19] <newbie|2> and this command fails with that error.
[22:19] <newbie|2> [ceph_deploy.mon][ERROR ] Failed to execute command: systemctl enable ceph.target
[22:19] <newbie|2> Failed to execute operation: No such file or directory
[22:20] <newbie|2> I have the ceph file in /etc/init.d/ folder
[22:20] <newbie|2> I think that's what systemctl is using to enable it, no?
[22:20] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[22:20] <newbie|2> If I go to that server and I run it manually (sudo systemctl enable ceph.target)
[22:21] * squizzi (~squizzi@2001:420:2240:1268:ad85:b28:ee1c:890) Quit (Ping timeout: 480 seconds)
[22:21] <newbie|2> ceph@ceph001:~ $ sudo systemctl enable ceph.target
[22:21] <newbie|2> Failed to execute operation: No such file or directory
[22:21] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[22:21] <newbie|2> :(
[22:21] <btaylor> i wonder if you are grabbing older packages than ceph-deploy thinks you are. dpkg -l | grep ceph ?
[22:22] * thansen (~thansen@17.253.sfcn.org) has joined #ceph
[22:22] <newbie|2> ceph@ceph001:~ $ dpkg -l | grep ceph
[22:22] <newbie|2> ii ceph 0.94.3-1 armhf distributed storage and file system
[22:22] <newbie|2> ii ceph-common 0.94.3-1 armhf common utilities to mount and interact with a ceph storage cluster
[22:22] <newbie|2> ii ceph-deploy 1.5.35 all Ceph-deploy is an easy to use configuration tool
[22:22] <newbie|2> ii libcephfs1 0.94.3-1 armhf Ceph distributed file system client library
[22:22] <newbie|2> ii python-cephfs 0.94.3-1 armhf Python libraries for the Ceph libcephfs library
[22:22] <newbie|2> ceph@ceph001:~ $
[22:23] <newbie|2> I think the issue might not be with ceph, because if I run that systemctl enable command myself, I get the same error
[22:24] <newbie|2> I was hoping someone with more linux experience could see the culprit there
[22:24] <newbie|2> or hopefully someone has seen the issue before
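    Worth noting: the hammer (0.94.x) packages shown above ship sysvinit scripts on
    Raspbian jessie rather than systemd units, so "systemctl enable ceph.target" can fail
    simply because no ceph.target unit file is installed. A hedged way to check, assuming
    standard Debian paths:

        ls /lib/systemd/system/ceph* /etc/systemd/system/ceph* 2>/dev/null   # any ceph unit files at all?
        systemctl list-unit-files | grep -i ceph
        # if nothing turns up, the packages are sysvinit-only; either drive them via the
        # /etc/init.d/ceph script (e.g. sudo update-rc.d ceph defaults) or install a
        # newer release that ships systemd units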
[22:27] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) has joined #ceph
[22:30] <newbie|2> This is the 2nd time I've tried to "play" with ceph and learn it. Never got it to work :(.
[22:36] * bene2 (~bene@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[22:39] <btaylor> newbie|2: https://gist.github.com/ssplatt/151eff8961a58223ffa9d094bd89b6b2 that's the script i came up with to deploy things quickly. basically taking the quick-deploy page and removing the comments
[22:39] <btaylor> https://gist.github.com/ssplatt/151eff8961a58223ffa9d094bd89b6b2#file-provision-sh-L20-L22 should be the steps to get the mons functional on 3 nodes
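    For comparison, the core of the quick-start mon bring-up that the linked gist
    automates (node1, node2 and node3 are placeholder hostnames):

        ceph-deploy new node1 node2 node3
        ceph-deploy install node1 node2 node3
        ceph-deploy mon create-initial
        ceph-deploy admin node1 node2 node3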
[22:40] * darthbacon (~oftc-webi@64.234.158.96) has joined #ceph
[22:40] <newbie|2> Thanks, I'll check it out
[22:40] <newbie|2> I've been following this guy's tutorial for pi: http://bryanapperson.com/blog/the-definitive-guide-ceph-cluster-on-raspberry-pi/
[22:40] <newbie|2> and I tried this as well: https://www.linkedin.com/pulse/ceph-raspberry-pi-rahul-vijayan
[22:41] <newbie|2> the theory sounds simple and solid; when put in practice... things don't go at all as expected
[22:42] <newbie|2> btaylor: I'm going to try your script (by hand) now
[22:43] <btaylor> some of the lines in there are to ensure removal of a previous install attempt. they may or may not do all of the cleanup. most likely not.
[22:44] * derjohn_mob (~aj@x590c6179.dyn.telefonica.de) has joined #ceph
[22:44] <newbie|2> Yeah, I tried to uninstall and failed with this: Package 'ceph-mds' is not installed, so not removed
[22:44] <blizzow> smiley: I had a weird issue when I used osd create, I had to add partitions to /etc/fstab to get OSDs to start on boot. Now I use ceph-deploy osd prepare myosdnode:mydrive:myjournalpartition . Maybe try zapping the disk/partitions and use the prepare command instead of create.
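    A minimal sketch of the zap-then-prepare sequence described above, with placeholder
    host, data disk and NVMe journal partition names:

        ceph-deploy disk zap myosdnode:sdb
        ceph-deploy osd prepare myosdnode:sdb:nvme0n1p1   # data disk : journal partition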
[22:45] <darthbacon> Hi, is there a tutorial that covers adding multiple new OSDs at once with only a single rebalance? It seems like I should be able to set osd norebalance, ceph-deploy all the new OSDs and then turn off osd norebalance. Thanks!
[22:45] * xul (~Pieman@108.61.122.216) has joined #ceph
[22:49] * vbellur (~vijay@71.234.224.255) has joined #ceph
[22:52] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[22:55] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) has joined #ceph
[23:00] * newbie|2 (~kvirc@72.2.237.33) Quit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
[23:03] <rkeene> darthbacon, There's also a rebalance when you change pg_num/count/size versus pgp_num/count/size (can't remember which)
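    For reference, the pool settings being reached for here are pg_num and pgp_num:
    raising pg_num creates the new placement groups, and raising pgp_num is what then
    triggers the data movement onto them. A minimal sketch against a hypothetical pool
    named "rbd":

        ceph osd pool set rbd pg_num 256
        ceph osd pool set rbd pgp_num 256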
[23:07] * wes_dillingham (~wes_dilli@140.247.242.44) Quit (Ping timeout: 480 seconds)
[23:10] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) Quit (Remote host closed the connection)
[23:11] <darthbacon> rkeene, good point, I came across this discussion http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-June/002478.html. Looks like nobackfill is what I'm looking for.
[23:14] <rkeene> Yeah, it also mentions norecover ... but it might be a bad idea if an OSD were to go offline while you were adding new ones ( http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-April/000365.html )
[23:15] * xul (~Pieman@26XAABRM6.tor-irc.dnsbl.oftc.net) Quit ()
[23:16] <rkeene> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-April/000401.html contains a better idea of why it would be a bad idea
[23:25] * scg (~zscg@181.122.4.166) has joined #ceph
[23:25] * scg (~zscg@181.122.4.166) Quit ()
[23:35] * sleinen1 (~Adium@2001:620:0:82::100) has joined #ceph
[23:36] * sleinen (~Adium@2001:620:0:69::100) Quit (Read error: Connection reset by peer)
[23:41] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * vbellur (~vijay@71.234.224.255) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * wak-work (~wak-work@2620:15c:2c5:3:1db3:d0d3:9625:8e36) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * northrup (~northrup@75-146-11-137-Nashville.hfc.comcastbusiness.net) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * xarses (~xarses@64.124.158.3) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * _kelv (uid73234@id-73234.highgate.irccloud.com) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * Racpatel (~Racpatel@2601:87:3:31e3::2433) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * sudocat1 (~dibarra@192.185.1.20) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * haplo37 (~haplo37@199.91.185.156) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * kristen (~kristen@jfdmzpr03-ext.jf.intel.com) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * linuxkidd (~linuxkidd@ip70-189-202-62.lv.lv.cox.net) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * shaunm (~shaunm@ms-208-102-105-216.gsm.cbwireless.com) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * johnavp1989 (~jpetrini@8.39.115.8) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * mhack (~mhack@24-151-36-149.dhcp.nwtn.ct.charter.com) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * BlaXpirit (~irc@blaxpirit.com) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * arbrandes (~arbrandes@ec2-54-172-54-135.compute-1.amazonaws.com) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * rwheeler (~rwheeler@pool-173-48-195-215.bstnma.fios.verizon.net) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * danieagle (~Daniel@187.74.73.163) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * cyphase (~cyphase@000134f2.user.oftc.net) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * valeech (~valeech@pool-96-247-203-33.clppva.fios.verizon.net) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * blizzow (~jburns@50-243-148-102-static.hfc.comcastbusiness.net) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * wushudoin (~wushudoin@38.140.108.2) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * qman (~rohroh@2600:3c00::f03c:91ff:fe69:92af) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * Tene (~tene@173.13.139.236) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * nilez (~nilez@96.44.144.194) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * LegalResale (~LegalResa@66.165.126.130) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * ronrib (~boswortr@45.32.242.135) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * Kingrat (~shiny@2605:6000:1526:4063:d44d:3add:dc13:51ec) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * jowilkin (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * SamYaple (~SamYaple@162.209.126.134) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * ron-slc (~Ron@173-165-129-118-utah.hfc.comcastbusiness.net) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * ben1 (ben@pearl.meh.net.nz) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * dec (~dec@104.198.96.45) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * jproulx (~jon@kvas.csail.mit.edu) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * nathani (~nathani@2607:f2f8:ac88::) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * snelly (~cjs@sable.island.nu) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * chutz (~chutz@rygel.linuxfreak.ca) Quit (synthon.oftc.net charm.oftc.net)
[23:41] * lincolnb (~lincoln@c-71-57-68-189.hsd1.il.comcast.net) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * skullone (~skullone@shell.skull-tech.com) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * mjevans (~mjevans@li984-246.members.linode.com) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * folivora (~out@devnull.drwxr-xr-x.eu) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * jlayton (~jlayton@cpe-2606-A000-1125-405B-14D9-DFF4-8FF1-7DD8.dyn6.twc.com) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * mkfort (~mkfort@mkfort.com) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * DeMiNe0_ (~DeMiNe0@104.131.119.74) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * mnaser (~mnaser@162.253.53.193) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * owasserm (~owasserm@a212-238-239-152.adsl.xs4all.nl) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * rektide_ (~rektide@eldergods.com) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * jidar_ (~jidar@104.207.140.225) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * LiftedKilt (~LiftedKil@is.in.the.madhacker.biz) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * kaalia (~remote_us@45.55.206.107) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * gtrott (sid78444@id-78444.tooting.irccloud.com) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * Nixx (~quassel@bulbasaur.sjorsgielen.nl) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * grw (~grw@tsar.su) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * via (~via@smtp2.matthewvia.info) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * md_ (~john@205.233.53.42) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * rkeene (1011@oc9.org) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * herrsergio (~herrsergi@00021432.user.oftc.net) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * SpamapS (~SpamapS@xencbyrum2.srihosting.com) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * m0zes (~mozes@ns1.beocat.ksu.edu) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * jiffe (~jiffe@nsab.us) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * Psi-Jack (~psi-jack@mx.linux-help.org) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * jnq (sid150909@0001b7cc.user.oftc.net) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * devicenull (sid4013@id-4013.ealing.irccloud.com) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * beardo (~beardo__@beardo.cc.lehigh.edu) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * iggy (~iggy@mail.vten.us) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * scalability-junk (sid6422@id-6422.ealing.irccloud.com) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * frickler (~jens@v1.jayr.de) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * shaon (~shaon@shaon.me) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * ngoswami (~ngoswami@121.244.87.116) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * mlovell (~mlovell@69-195-66-94.unifiedlayer.com) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * med (~medberry@00012b50.user.oftc.net) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * elder_ (sid70526@id-70526.charlton.irccloud.com) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * JoeJulian (~JoeJulian@108.166.123.190) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * gmmaha (~gmmaha@00021e7e.user.oftc.net) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * ktdreyer (~kdreyer@polyp.adiemus.org) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * dmanchad (~dmanchad@nat-pool-bos-t.redhat.com) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * scubacuda (sid109325@0001fbab.user.oftc.net) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * braderhart (sid124863@braderhart.user.oftc.net) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * diegows (~diegows@main.woitasen.com.ar) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * ndevos (~ndevos@nat-pool-ams2-5.redhat.com) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * MaZ- (~maz@00016955.user.oftc.net) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * react (~react@retard.io) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] * tomaw (tom@tomaw.netop.oftc.net) Quit (synthon.oftc.net beauty.oftc.net)
[23:41] <darthbacon> ok, I think I will just start with nobackfill, add OSDs and then clear the flag. Worst case I trigger a little unnecessary I/O over the weekend.
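    The flag dance darthbacon lands on, spelled out as a sketch (add the OSDs by
    whatever method is in use between the two flag commands):

        ceph osd set nobackfill     # hold off backfill while the new OSDs come in
        # ... add/prepare/activate the new OSDs ...
        ceph osd unset nobackfill   # single backfill pass once everything is in
        ceph -s                     # watch recovery progress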
[23:49] * lincolnb (~lincoln@c-71-57-68-189.hsd1.il.comcast.net) has joined #ceph
[23:49] * skullone (~skullone@shell.skull-tech.com) has joined #ceph
[23:49] * mjevans (~mjevans@li984-246.members.linode.com) has joined #ceph
[23:49] * folivora (~out@devnull.drwxr-xr-x.eu) has joined #ceph
[23:49] * jlayton (~jlayton@cpe-2606-A000-1125-405B-14D9-DFF4-8FF1-7DD8.dyn6.twc.com) has joined #ceph
[23:49] * mkfort (~mkfort@mkfort.com) has joined #ceph
[23:49] * DeMiNe0_ (~DeMiNe0@104.131.119.74) has joined #ceph
[23:49] * mnaser (~mnaser@162.253.53.193) has joined #ceph
[23:49] * owasserm (~owasserm@a212-238-239-152.adsl.xs4all.nl) has joined #ceph
[23:49] * rektide_ (~rektide@eldergods.com) has joined #ceph
[23:49] * jidar_ (~jidar@104.207.140.225) has joined #ceph
[23:49] * LiftedKilt (~LiftedKil@is.in.the.madhacker.biz) has joined #ceph
[23:49] * kaalia (~remote_us@45.55.206.107) has joined #ceph
[23:49] * gtrott (sid78444@id-78444.tooting.irccloud.com) has joined #ceph
[23:49] * Nixx (~quassel@bulbasaur.sjorsgielen.nl) has joined #ceph
[23:49] * frickler (~jens@v1.jayr.de) has joined #ceph
[23:49] * MaZ- (~maz@00016955.user.oftc.net) has joined #ceph
[23:49] * grw (~grw@tsar.su) has joined #ceph
[23:49] * tomaw (tom@tomaw.netop.oftc.net) has joined #ceph
[23:49] * via (~via@smtp2.matthewvia.info) has joined #ceph
[23:49] * ndevos (~ndevos@nat-pool-ams2-5.redhat.com) has joined #ceph
[23:49] * md_ (~john@205.233.53.42) has joined #ceph
[23:49] * react (~react@retard.io) has joined #ceph
[23:49] * braderhart (sid124863@braderhart.user.oftc.net) has joined #ceph
[23:49] * scalability-junk (sid6422@id-6422.ealing.irccloud.com) has joined #ceph
[23:49] * rkeene (1011@oc9.org) has joined #ceph
[23:49] * shaon (~shaon@shaon.me) has joined #ceph
[23:49] * herrsergio (~herrsergi@00021432.user.oftc.net) has joined #ceph
[23:49] * SpamapS (~SpamapS@xencbyrum2.srihosting.com) has joined #ceph
[23:49] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[23:49] * mlovell (~mlovell@69-195-66-94.unifiedlayer.com) has joined #ceph
[23:49] * m0zes (~mozes@ns1.beocat.ksu.edu) has joined #ceph
[23:49] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) has joined #ceph
[23:49] * jiffe (~jiffe@nsab.us) has joined #ceph
[23:49] * Psi-Jack (~psi-jack@mx.linux-help.org) has joined #ceph
[23:49] * med (~medberry@00012b50.user.oftc.net) has joined #ceph
[23:49] * dmanchad (~dmanchad@nat-pool-bos-t.redhat.com) has joined #ceph
[23:49] * diegows (~diegows@main.woitasen.com.ar) has joined #ceph
[23:49] * ktdreyer (~kdreyer@polyp.adiemus.org) has joined #ceph
[23:49] * iggy (~iggy@mail.vten.us) has joined #ceph
[23:49] * gmmaha (~gmmaha@00021e7e.user.oftc.net) has joined #ceph
[23:49] * JoeJulian (~JoeJulian@108.166.123.190) has joined #ceph
[23:49] * beardo (~beardo__@beardo.cc.lehigh.edu) has joined #ceph
[23:49] * elder_ (sid70526@id-70526.charlton.irccloud.com) has joined #ceph
[23:49] * scubacuda (sid109325@0001fbab.user.oftc.net) has joined #ceph
[23:49] * devicenull (sid4013@id-4013.ealing.irccloud.com) has joined #ceph
[23:49] * jnq (sid150909@0001b7cc.user.oftc.net) has joined #ceph
[23:50] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) has joined #ceph
[23:50] * vbellur (~vijay@71.234.224.255) has joined #ceph
[23:50] * wak-work (~wak-work@2620:15c:2c5:3:1db3:d0d3:9625:8e36) has joined #ceph
[23:50] * northrup (~northrup@75-146-11-137-Nashville.hfc.comcastbusiness.net) has joined #ceph
[23:50] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[23:50] * xarses (~xarses@64.124.158.3) has joined #ceph
[23:50] * _kelv (uid73234@id-73234.highgate.irccloud.com) has joined #ceph
[23:50] * Racpatel (~Racpatel@2601:87:3:31e3::2433) has joined #ceph
[23:50] * sudocat1 (~dibarra@192.185.1.20) has joined #ceph
[23:50] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[23:50] * kristen (~kristen@jfdmzpr03-ext.jf.intel.com) has joined #ceph
[23:50] * linuxkidd (~linuxkidd@ip70-189-202-62.lv.lv.cox.net) has joined #ceph
[23:50] * shaunm (~shaunm@ms-208-102-105-216.gsm.cbwireless.com) has joined #ceph
[23:50] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[23:50] * mhack (~mhack@24-151-36-149.dhcp.nwtn.ct.charter.com) has joined #ceph
[23:50] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[23:50] * BlaXpirit (~irc@blaxpirit.com) has joined #ceph
[23:50] * arbrandes (~arbrandes@ec2-54-172-54-135.compute-1.amazonaws.com) has joined #ceph
[23:50] * rwheeler (~rwheeler@pool-173-48-195-215.bstnma.fios.verizon.net) has joined #ceph
[23:50] * danieagle (~Daniel@187.74.73.163) has joined #ceph
[23:50] * cyphase (~cyphase@000134f2.user.oftc.net) has joined #ceph
[23:50] * valeech (~valeech@pool-96-247-203-33.clppva.fios.verizon.net) has joined #ceph
[23:50] * blizzow (~jburns@50-243-148-102-static.hfc.comcastbusiness.net) has joined #ceph
[23:50] * wushudoin (~wushudoin@38.140.108.2) has joined #ceph
[23:50] * qman (~rohroh@2600:3c00::f03c:91ff:fe69:92af) has joined #ceph
[23:50] * Tene (~tene@173.13.139.236) has joined #ceph
[23:50] * nilez (~nilez@96.44.144.194) has joined #ceph
[23:50] * LegalResale (~LegalResa@66.165.126.130) has joined #ceph
[23:50] * ronrib (~boswortr@45.32.242.135) has joined #ceph
[23:50] * Kingrat (~shiny@2605:6000:1526:4063:d44d:3add:dc13:51ec) has joined #ceph
[23:50] * jowilkin (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) has joined #ceph
[23:50] * SamYaple (~SamYaple@162.209.126.134) has joined #ceph
[23:50] * ron-slc (~Ron@173-165-129-118-utah.hfc.comcastbusiness.net) has joined #ceph
[23:50] * ben1 (ben@pearl.meh.net.nz) has joined #ceph
[23:50] * dec (~dec@104.198.96.45) has joined #ceph
[23:50] * nathani (~nathani@2607:f2f8:ac88::) has joined #ceph
[23:50] * snelly (~cjs@sable.island.nu) has joined #ceph
[23:50] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[23:50] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[23:52] * jproulx (~jon@kvas.csail.mit.edu) has joined #ceph
[23:55] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[23:57] * sleinen1 (~Adium@2001:620:0:82::100) Quit (Ping timeout: 480 seconds)
[23:59] * sleinen (~Adium@2001:620:0:69::101) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.