#ceph IRC Log


IRC Log for 2016-08-16

Timestamps are in GMT/BST.

[0:01] * xarses (~xarses@4.35.170.198) has joined #ceph
[0:09] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[0:18] * bene2 (~bene@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[0:18] * haplo37 (~haplo37@199.91.185.156) Quit (Ping timeout: 480 seconds)
[0:19] * narthollis (~dicko@178.162.205.1) Quit ()
[0:22] * cathode (~cathode@50.232.215.114) Quit (Quit: Leaving)
[0:26] * rendar (~I@host63-44-dynamic.51-82-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[0:26] * Nacer (~Nacer@vir78-1-82-232-38-190.fbx.proxad.net) Quit (Remote host closed the connection)
[0:32] * dnunez (~dnunez@209-6-91-147.c3-0.smr-ubr1.sbo-smr.ma.cable.rcn.com) Quit (Remote host closed the connection)
[0:35] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[0:41] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[0:41] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[0:43] * KindOne (sillyfool@0001a7db.user.oftc.net) Quit (Remote host closed the connection)
[0:55] * kuku (~kuku@119.93.91.136) has joined #ceph
[0:57] * Nacer (~Nacer@vir78-1-82-232-38-190.fbx.proxad.net) has joined #ceph
[1:05] * stiopa (~stiopa@81.110.229.198) Quit (Ping timeout: 480 seconds)
[1:05] * KindOne (sillyfool@h125.161.186.173.dynamic.ip.windstream.net) has joined #ceph
[1:25] * xarses (~xarses@4.35.170.198) Quit (Ping timeout: 480 seconds)
[1:34] * xarses (~xarses@66-219-216-151.static.ip.veracitynetworks.com) has joined #ceph
[1:40] * oms101 (~oms101@p20030057EA021A00C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:46] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:49] * oms101 (~oms101@p20030057EA5F8100C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:51] * rdias (~rdias@bl7-92-98.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[1:56] * AXJ (~oftc-webi@static-108-47-170-18.lsanca.fios.frontiernet.net) Quit (Quit: Page closed)
[2:16] * georgem (~Adium@107-179-157-134.cpe.teksavvy.com) has joined #ceph
[2:17] * georgem (~Adium@107-179-157-134.cpe.teksavvy.com) Quit ()
[2:18] * georgem (~Adium@206.108.127.16) has joined #ceph
[2:26] * yanzheng (~zhyan@118.116.114.80) has joined #ceph
[2:27] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[2:30] * dbbyleo (~dbbyleo@50-198-202-93-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[2:30] * clusterfudge (~Helleshin@185.65.134.75) has joined #ceph
[2:33] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[2:33] * blizzow (~jburns@50.243.148.102) Quit (Ping timeout: 480 seconds)
[2:33] * yanzheng (~zhyan@118.116.114.80) Quit (Quit: This computer has gone to sleep)
[2:37] * xarses (~xarses@66-219-216-151.static.ip.veracitynetworks.com) Quit (Ping timeout: 480 seconds)
[2:37] * chunmei (~chunmei@134.134.139.70) Quit (Remote host closed the connection)
[2:40] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[2:41] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[2:42] * huangjun (~kvirc@113.57.168.154) has joined #ceph
[2:46] * clusterfudge (~Helleshin@9YSAABDQG.tor-irc.dnsbl.oftc.net) Quit (Read error: Connection reset by peer)
[2:48] * yanzheng (~zhyan@118.116.114.80) has joined #ceph
[2:49] * efirs (~firs@98.207.153.155) has joined #ceph
[2:49] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) Quit (Ping timeout: 480 seconds)
[2:50] * rdias (~rdias@bl7-92-98.dsl.telepac.pt) has joined #ceph
[2:52] * mhack (~mhack@24-151-36-149.dhcp.nwtn.ct.charter.com) Quit (Remote host closed the connection)
[3:00] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[3:00] * `Jin (~AG_Clinto@178-175-128-50.static.host) has joined #ceph
[3:16] * Nacer (~Nacer@vir78-1-82-232-38-190.fbx.proxad.net) Quit (Remote host closed the connection)
[3:24] * haplo37 (~haplo37@107.190.44.23) has joined #ceph
[3:30] * `Jin (~AG_Clinto@26XAAA3P7.tor-irc.dnsbl.oftc.net) Quit ()
[3:30] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[3:34] * cyphase1 (~Vidi@se7x.mullvad.net) has joined #ceph
[3:34] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:38] * hellertime (~Adium@pool-71-162-119-41.bstnma.fios.verizon.net) has joined #ceph
[3:50] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[3:56] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[3:57] * haplo37 (~haplo37@107.190.44.23) Quit (Ping timeout: 480 seconds)
[3:59] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[4:01] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[4:02] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[4:03] * haplo37 (~haplo37@107-190-44-23.cpe.teksavvy.com) has joined #ceph
[4:04] * cyphase1 (~Vidi@se7x.mullvad.net) Quit ()
[4:06] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) Quit (Ping timeout: 480 seconds)
[4:07] * xarses (~xarses@66-219-216-151.static.ip.veracitynetworks.com) has joined #ceph
[4:11] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[4:11] * haplo37 (~haplo37@107-190-44-23.cpe.teksavvy.com) Quit (Ping timeout: 480 seconds)
[4:17] * vbellur (~vijay@71.234.224.255) has joined #ceph
[4:18] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[4:21] * Racpatel (~Racpatel@2601:87:0:24af::53d5) Quit (Ping timeout: 480 seconds)
[4:31] * penguinRaider (~KiKo@104.250.141.44) Quit (Ping timeout: 480 seconds)
[4:34] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[4:37] * kefu (~kefu@45.32.49.168) has joined #ceph
[4:37] * jfaj_ (~jan@p20030084AF3738005EC5D4FFFEBB68A4.dip0.t-ipconnect.de) has joined #ceph
[4:39] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) has joined #ceph
[4:40] * hellertime (~Adium@pool-71-162-119-41.bstnma.fios.verizon.net) Quit (Quit: Leaving.)
[4:44] * penguinRaider (~KiKo@104.250.141.44) has joined #ceph
[4:44] * jfaj (~jan@p578E773F.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[4:47] * kuku (~kuku@119.93.91.136) has joined #ceph
[4:54] * wkennington (~wkenningt@c-71-204-170-241.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[4:56] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[4:59] * kefu (~kefu@45.32.49.168) Quit (Read error: Connection reset by peer)
[5:00] * kefu (~kefu@114.92.101.38) has joined #ceph
[5:12] * i_m (~ivan.miro@31.173.120.48) has joined #ceph
[5:16] * wushudoin (~wushudoin@2601:646:8281:cfd:2ab2:bdff:fe0b:a6ee) Quit (Ping timeout: 480 seconds)
[5:16] * kefu (~kefu@114.92.101.38) Quit (Quit: Textual IRC Client: www.textualapp.com)
[5:21] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[5:21] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[5:22] * haplo37 (~haplo37@107.190.44.23) has joined #ceph
[5:23] * kefu (~kefu@114.92.101.38) has joined #ceph
[5:25] * ghostnote (~mrapple@ip95.ip-94-23-150.eu) has joined #ceph
[5:30] * Vacuum_ (~Vacuum@88.130.212.33) has joined #ceph
[5:35] * kefu (~kefu@114.92.101.38) Quit (Quit: Textual IRC Client: www.textualapp.com)
[5:37] * Vacuum__ (~Vacuum@i59F79BC4.versanet.de) Quit (Ping timeout: 480 seconds)
[5:40] * vimal (~vikumar@114.143.167.9) has joined #ceph
[5:55] * ghostnote (~mrapple@5AEAAA0RL.tor-irc.dnsbl.oftc.net) Quit ()
[5:56] * davidzlap (~Adium@2605:e000:1313:8003:f01b:8940:89d5:6266) Quit (Quit: Leaving.)
[6:02] * walcubi_ (~walcubi@p5795AE54.dip0.t-ipconnect.de) has joined #ceph
[6:04] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[6:07] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[6:08] * [0x4A6F]_ (~ident@p4FC274D4.dip0.t-ipconnect.de) has joined #ceph
[6:08] * [0x4A6F] (~ident@0x4a6f.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:08] * [0x4A6F]_ is now known as [0x4A6F]
[6:09] * walcubi (~walcubi@p5795B235.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[6:10] * vimal (~vikumar@114.143.167.9) Quit (Quit: Leaving)
[6:17] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[6:22] * rdas (~rdas@121.244.87.116) has joined #ceph
[6:25] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[6:29] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[6:33] * jowilkin (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) Quit (Quit: Leaving)
[6:33] * vimal (~vikumar@121.244.87.116) has joined #ceph
[6:34] * kuku (~kuku@119.93.91.136) has joined #ceph
[6:35] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[6:36] * Xylios (~Jebula@46.166.188.197) has joined #ceph
[6:46] * jermudgeon (~jhaustin@southend.mdu.whitestone.link) has joined #ceph
[6:48] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[6:55] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[7:06] * Xylios (~Jebula@46.166.188.197) Quit ()
[7:09] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[7:10] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[7:19] * Nicho1as (~nicho1as@00022427.user.oftc.net) has joined #ceph
[7:20] * bvi (~Bastiaan@185.56.32.1) has joined #ceph
[7:30] * wkennington (~wkenningt@c-71-204-170-241.hsd1.ca.comcast.net) has joined #ceph
[7:37] * phyphor (~Coe|work@93.115.83.253) has joined #ceph
[7:44] * haplo37 (~haplo37@107.190.44.23) Quit (Ping timeout: 480 seconds)
[7:53] * Nicho1as (~nicho1as@00022427.user.oftc.net) Quit (Quit: A man from the Far East; using WeeChat 1.5)
[8:00] <ivve> Hey, I'm having an issue where xfs apparently can't create threads. Is it correct to increase kernel.threads-max?
[8:00] <ivve> INFO: task xfsaild/sdae1:1822 blocked for more than 120 seconds. [<ffffffff810a5a20>] ? kthread_create_on_node+0x140/0x140
[8:01] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[8:01] <ivve> this results in OSDs not responding to heartbeats and the entire OSD server suiciding
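(ivve's question concerns the kernel's global thread limit. A minimal sketch of inspecting and raising it with sysctl; the value below is purely illustrative, not a recommendation from the channel, and a hung xfsaild task can also point at slow storage rather than the limit itself.)

    # inspect the current limit and the number of threads in use
    sysctl kernel.threads-max
    ps -eLf | wc -l
    # raise it at runtime (illustrative value), then persist across reboots
    sudo sysctl -w kernel.threads-max=4194304
    echo 'kernel.threads-max = 4194304' | sudo tee /etc/sysctl.d/99-threads.conf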
[8:02] * bvi (~Bastiaan@185.56.32.1) Quit (Remote host closed the connection)
[8:02] * kefu_ (~kefu@114.92.101.38) has joined #ceph
[8:03] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) has joined #ceph
[8:07] * phyphor (~Coe|work@93.115.83.253) Quit ()
[8:21] * Nicho1as (~nicho1as@00022427.user.oftc.net) has joined #ceph
[8:30] * Jeffrey4l (~Jeffrey@110.252.42.172) has joined #ceph
[8:39] * dugravot6 (~dugravot6@l-p-dn-in-4a.lionnois.site.univ-lorraine.fr) has joined #ceph
[8:40] * flisky (~Thunderbi@210.12.157.93) has joined #ceph
[8:42] * ade (~abradshaw@p4FF7AAFC.dip0.t-ipconnect.de) has joined #ceph
[8:42] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[8:42] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[8:49] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Quit: Leaving)
[8:55] * Miouge (~Miouge@109.128.94.173) has joined #ceph
[8:58] * getup (~getup@095-097-074-074.static.chello.nl) has joined #ceph
[8:59] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[9:01] * penguinRaider (~KiKo@104.250.141.44) Quit (Ping timeout: 480 seconds)
[9:01] <Be-El> upgrading to jewel on a centos host installs the ceph-selinux packages, which updates the selinux labels for the OSD data files; do you still need to change the file ownership afterwards, as recommended in the jewel release notes?
[9:04] <IcePic> a well-tested upgrade should have fixed that, I presume, but I think I recall a lot of people running into issues when ceph stopped running as root and ran as the ceph user instead, so checking ownership seems prudent
[9:04] <IcePic> "trust, but verify"
[9:05] * rakeshgm (~rakesh@121.244.87.118) has joined #ceph
[9:06] <doppelgrau> Be-El: I still let ceph run as root on the existing nodes; chown'ing can take up to hours according to the mailing list, and that was too much of a downtime for me. So I change it over time (new installations get the ceph user, and every time too many disks fail the server is removed and set up again, so I expect that over time every server will have ceph running under its own user)
[9:06] * Tarazed (~Rehevkor@46.166.190.190) has joined #ceph
[9:06] <Be-El> it's not about running as root or chown'ing files. the installation of the package triggers the selinux updates, which invoke restorecon to write the label
[9:07] <Be-El> ownership is another issue that i wanted to solve by taking the osd down one by one (and use the 'setuser match path' until then)
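(For reference, the two approaches being weighed here, as described in the jewel release notes; the paths are the defaults and the config stanza is a sketch, adjust per cluster.)

    # option 1: change ownership so daemons can run as the ceph user
    # (can take hours on large OSDs, hence the downtime concern above)
    sudo chown -R ceph:ceph /var/lib/ceph

    # option 2: keep running with the existing (root) ownership for now,
    # via this ceph.conf setting from the release notes:
    #   [osd]
    #   setuser match path = /var/lib/ceph/$type/$cluster-$id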
[9:10] * flisky (~Thunderbi@210.12.157.93) Quit (Quit: flisky)
[9:11] * penguinRaider (~KiKo@104.250.141.44) has joined #ceph
[9:14] * analbeard (~shw@support.memset.com) has joined #ceph
[9:20] * flisky (~Thunderbi@210.12.157.94) has joined #ceph
[9:24] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[9:25] * rakeshgm (~rakesh@121.244.87.118) Quit (Ping timeout: 480 seconds)
[9:27] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[9:28] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[9:34] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[9:35] * flisky (~Thunderbi@210.12.157.94) Quit (Quit: flisky)
[9:36] * Tarazed (~Rehevkor@26XAAA3XH.tor-irc.dnsbl.oftc.net) Quit ()
[9:36] * andihit (uid118959@id-118959.richmond.irccloud.com) has joined #ceph
[9:39] * swami1 (~swami@223.227.25.21) has joined #ceph
[9:44] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) Quit (Quit: doppelgrau)
[9:52] * fsimonce (~simon@host203-44-dynamic.183-80-r.retail.telecomitalia.it) has joined #ceph
[9:52] * Hemanth (~hkumar_@103.228.221.179) has joined #ceph
[9:53] * DanFoster (~Daniel@office.34sp.com) has joined #ceph
[9:55] <IcePic> sure, selinux labels are somewhat a dimension of their own, but if you ALSO get the userid wrong, it won't work either
[9:56] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[9:59] * liumxnl (~liumxnl@45.32.74.135) has joined #ceph
[10:00] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[10:03] * theghost99 (~Kalado@46.166.190.190) has joined #ceph
[10:03] * b0e (~aledermue@213.95.25.82) has joined #ceph
[10:05] * rendar (~I@host77-34-dynamic.25-79-r.retail.telecomitalia.it) has joined #ceph
[10:10] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[10:10] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[10:15] * thomnico (~thomnico@2a01:e35:8b41:120:cc5b:7fd4:fe21:6eac) has joined #ceph
[10:24] * karnan (~karnan@121.244.87.117) has joined #ceph
[10:25] * doppelgrau (~doppelgra@132.252.235.172) has joined #ceph
[10:26] * joao (~joao@8.184.114.89.rev.vodafone.pt) has joined #ceph
[10:26] * ChanServ sets mode +o joao
[10:33] * theghost99 (~Kalado@26XAAA3YH.tor-irc.dnsbl.oftc.net) Quit ()
[10:33] * mattch (~mattch@w5430.see.ed.ac.uk) has joined #ceph
[10:39] * Nicho1as (~nicho1as@00022427.user.oftc.net) Quit (Quit: A man from the Far East; using WeeChat 1.5)
[10:41] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Ping timeout: 480 seconds)
[10:47] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:f01f:657e:17e3:bfd5) has joined #ceph
[10:53] * swami2 (~swami@49.44.57.238) has joined #ceph
[10:55] * thomnico (~thomnico@2a01:e35:8b41:120:cc5b:7fd4:fe21:6eac) Quit (Ping timeout: 480 seconds)
[10:59] * _mrp (~mrp@178-222-114-173.dynamic.isp.telekom.rs) has joined #ceph
[10:59] * swami1 (~swami@223.227.25.21) Quit (Ping timeout: 480 seconds)
[11:00] * _mrp (~mrp@178-222-114-173.dynamic.isp.telekom.rs) Quit ()
[11:01] * nardial (~ls@p5DC06373.dip0.t-ipconnect.de) has joined #ceph
[11:06] * kefu_ is now known as kefu|afk
[11:07] * TMM (~hp@185.5.121.201) has joined #ceph
[11:10] * swami2 (~swami@49.44.57.238) Quit (Ping timeout: 480 seconds)
[11:12] * swami1 (~swami@223.227.160.115) has joined #ceph
[11:14] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[11:15] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Read error: Connection reset by peer)
[11:21] * i_m (~ivan.miro@31.173.120.48) Quit (Quit: Leaving.)
[11:21] * i_m (~ivan.miro@31.173.120.48) has joined #ceph
[11:22] * liumxnl (~liumxnl@45.32.74.135) Quit (Remote host closed the connection)
[11:23] * liumxnl (~liumxnl@45.32.74.135) has joined #ceph
[11:25] * Hemanth (~hkumar_@103.228.221.179) Quit (Quit: Leaving)
[11:26] * Hemanth (~hkumar_@103.228.221.179) has joined #ceph
[11:32] * liumxnl (~liumxnl@45.32.74.135) Quit (Remote host closed the connection)
[11:32] * liumxnl (~liumxnl@45.32.74.135) has joined #ceph
[11:40] * kefu|afk is now known as kefu_
[11:40] * makz (~makz@2a00:d880:6:2d7::e463) has joined #ceph
[11:41] * walcubi_ is now known as walbuci
[11:47] * swami2 (~swami@223.227.160.115) has joined #ceph
[11:48] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:f01f:657e:17e3:bfd5) Quit (Ping timeout: 480 seconds)
[11:52] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:f01f:657e:17e3:bfd5) has joined #ceph
[11:54] * swami1 (~swami@223.227.160.115) Quit (Ping timeout: 480 seconds)
[12:00] * IvanJobs_ (~ivanjobs@103.50.11.146) has joined #ceph
[12:01] * rraja (~rraja@121.244.87.117) has joined #ceph
[12:03] * isaxi (~Joppe4899@177.154.139.196) has joined #ceph
[12:04] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[12:05] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[12:07] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Ping timeout: 480 seconds)
[12:11] * Drumplayr (~thomas@r74-192-135-250.gtwncmta01.grtntx.tl.dh.suddenlink.net) has joined #ceph
[12:11] <walbuci> Hmm, maybe I should bring this up to the ML, I'm just trying to find a way around this.
[12:11] <Drumplayr> hello
[12:13] <Drumplayr> I experienced a power outage recently. After everything came back online, I noticed the MDS wasn't letting my other server mount the filesystem.
[12:14] <Drumplayr> when I run 'sudo service ceph mds start' I get a reply: '/etc/init.d/ceph: sta.rt not found (/etc/ceph/ceph.conf defines mds.server05, /var/lib/ceph defines )
[12:15] * liumxnl (~liumxnl@45.32.74.135) Quit (Remote host closed the connection)
[12:15] * liumxnl (~liumxnl@45.32.74.135) has joined #ceph
[12:15] <Drumplayr> I have looked everywhere for 'sta.rt' but can't find it.
[12:15] <Drumplayr> Any ideas?
[12:16] * liumxnl (~liumxnl@45.32.74.135) Quit (Remote host closed the connection)
[12:16] * liumxnl (~liumxnl@45.32.74.135) has joined #ceph
[12:21] * karnan (~karnan@121.244.87.117) Quit (Ping timeout: 480 seconds)
[12:23] <BlaXpirit> Drumplayr, I'm quite sure that `service ceph mds` is invalid
[12:23] <Drumplayr> Really? I'm sure it worked. I'll try /etc/init.d/ceph mds start.
[12:23] <Drumplayr> same message
[12:24] <BlaXpirit> invalid for the same reason
[12:24] <Drumplayr> yes
[12:25] <Drumplayr> I just set everything up a couple days ago. Everything was running fine.
[12:25] <Drumplayr> I did a couple reboots on each server to make sure everything is working as it should.
[12:25] <BlaXpirit> well I'm sure you started it with a correct command previously
[12:25] <BlaXpirit> i don't know which one exactly, search is not so easy
[12:26] <Drumplayr> After the power outage, that's when I got this problem. Nothing has changed since the last successful reboot.
[12:27] <BlaXpirit> Drumplayr, please find some variant of a correct command here http://dachary.org/loic/ceph-doc/rados/operations/operating/
[12:27] * jdillaman (~jdillaman@pool-108-18-97-95.washdc.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[12:28] <Drumplayr> ok.
[12:28] <Drumplayr> I just ran a stop command. I started getting a bunch of these: 2016-08-16 05:27:26.875446 ac3ffb40 0 -- :/2963777081 >> 192.168.15.7:6789/0 pipe(0xac501270 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0xac501c90).fault
[12:28] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Quit: Leaving)
[12:30] <Drumplayr> You're right. I got it backwards.
[12:30] <Drumplayr> It was sudo service start mds
[12:30] <Drumplayr> It's still not running though.
[12:30] * huangjun (~kvirc@113.57.168.154) Quit (Ping timeout: 480 seconds)
[12:31] * karnan (~karnan@121.244.87.117) has joined #ceph
[12:31] <Drumplayr> I had this problem when I first built the cluster. It's an issue with the newest kernel.
[12:32] <Drumplayr> I have to make an adjustment to the tunables.
[12:32] <Drumplayr> I'm going to give that a try and if I have any more problems, I'll log back in.
[12:32] <Drumplayr> Thanks for the help.
[12:33] * isaxi (~Joppe4899@5AEAAA0W2.tor-irc.dnsbl.oftc.net) Quit ()
[12:33] <Drumplayr> I feel stupid for getting the command backwards. That's what I get for working on this for days with very little sleep
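(For the record, the sysvinit wrapper used here expects the action before the daemon name; a sketch, with mds.server05 taken from the error message quoted above:)

    # correct order: action first, then the daemon id
    sudo service ceph start mds.server05
    # equivalently, calling the init script directly
    sudo /etc/init.d/ceph start mds.server05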
[12:34] * georgem (~Adium@107-179-157-134.cpe.teksavvy.com) has joined #ceph
[12:37] * jdillaman (~jdillaman@pool-108-18-97-95.washdc.fios.verizon.net) has joined #ceph
[12:37] * Racpatel (~Racpatel@2601:87:0:24af::53d5) has joined #ceph
[12:37] * Racpatel (~Racpatel@2601:87:0:24af::53d5) Quit ()
[12:39] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[12:39] * Racpatel (~Racpatel@2601:87:0:24af::53d5) has joined #ceph
[12:44] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[12:45] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[12:48] * Racpatel (~Racpatel@2601:87:0:24af::53d5) Quit (Ping timeout: 480 seconds)
[12:53] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[12:53] * georgem (~Adium@107-179-157-134.cpe.teksavvy.com) Quit (Quit: Leaving.)
[12:55] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Quit: Leaving)
[12:59] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[12:59] * kefu_ (~kefu@114.92.101.38) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[13:08] * hellertime (~Adium@72.246.3.14) has joined #ceph
[13:12] * Hemanth (~hkumar_@103.228.221.179) Quit (Quit: Leaving)
[13:13] * _mrp (~mrp@82.117.199.26) has joined #ceph
[13:16] * bla_ (~b.laessig@chimeria.ext.pengutronix.de) has joined #ceph
[13:20] * haplo37 (~haplo37@107-190-44-23.cpe.teksavvy.com) has joined #ceph
[13:22] * liumxnl (~liumxnl@45.32.74.135) Quit (Quit: Leaving...)
[13:22] * bla (~b.laessig@chimeria.ext.pengutronix.de) Quit (Ping timeout: 480 seconds)
[13:27] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[13:28] <ivve> anyone got any bright ideas on how to minimize the impact of scrubbing and deep-scrubbing (similar to max backfill 1)?
[13:30] <ivve> scheduling is one way to go
[13:30] <ivve> however it will take up resources as well
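(A sketch of the standard OSD scrub-throttling knobs this is alluding to; the values are illustrative, and the same options can be persisted in ceph.conf instead of injected live.)

    # at most one scrub per OSD at a time, with a pause between chunks
    ceph tell osd.* injectargs '--osd_max_scrubs 1 --osd_scrub_sleep 0.1'
    # confine (deep-)scrubbing to a nightly window
    ceph tell osd.* injectargs '--osd_scrub_begin_hour 22 --osd_scrub_end_hour 6'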
[13:31] * bniver (~bniver@pool-98-110-180-234.bstnma.fios.verizon.net) Quit (Remote host closed the connection)
[13:35] * haplo37 (~haplo37@107-190-44-23.cpe.teksavvy.com) Quit (Ping timeout: 480 seconds)
[13:41] * huangjun (~kvirc@117.152.73.81) has joined #ceph
[13:43] * rwheeler (~rwheeler@pool-173-48-195-215.bstnma.fios.verizon.net) Quit (Remote host closed the connection)
[13:45] * zokko (zok@neurosis.pl) has left #ceph
[13:54] * thomnico (~thomnico@2a01:e35:8b41:120:cc5b:7fd4:fe21:6eac) has joined #ceph
[13:58] * Nicho1as (~nicho1as@00022427.user.oftc.net) has joined #ceph
[14:01] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[14:02] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) has joined #ceph
[14:03] * art_yo (~art_yo@149.126.169.197) Quit (Ping timeout: 480 seconds)
[14:04] * Racpatel (~Racpatel@2601:87:0:24af::53d5) has joined #ceph
[14:07] * huangjun|2 (~kvirc@117.151.51.30) has joined #ceph
[14:08] * Hemanth (~hkumar_@103.228.221.179) has joined #ceph
[14:08] * haplo37 (~haplo37@107.190.44.23) has joined #ceph
[14:12] * huangjun (~kvirc@117.152.73.81) Quit (Ping timeout: 480 seconds)
[14:18] * sebastian-w_ (~quassel@212.218.8.138) Quit (Remote host closed the connection)
[14:18] * haplo37 (~haplo37@107.190.44.23) Quit (Ping timeout: 480 seconds)
[14:18] * sebastian-w (~quassel@212.218.8.138) has joined #ceph
[14:21] * nardial (~ls@p5DC06373.dip0.t-ipconnect.de) Quit (Quit: Leaving)
[14:25] <jiffe> so I pulled two 3TB disks and ceph status kicked into 11% misplaced last night and started transferring PGs around at 100+ MB/s; this morning it's at 6.6% and barely moving
[14:28] <jiffe> according to ceph -w, most of the time it's sitting idle and every now and then it will transfer something, generally at a few KB/s to a few MB/s
[14:28] * kuku (~kuku@112.203.56.253) has joined #ceph
[14:28] * andihit (uid118959@id-118959.richmond.irccloud.com) Quit (Quit: Connection closed for inactivity)
[14:29] <Be-El> jiffe: do you have large PGs?
[14:29] * kuku (~kuku@112.203.56.253) Quit (Read error: Connection reset by peer)
[14:30] * kuku (~kuku@112.203.56.253) has joined #ceph
[14:30] * kuku (~kuku@112.203.56.253) Quit ()
[14:31] <jiffe> Be-El: looks like 12TB of data, 2000 PGs
[14:32] <Be-El> jiffe: backfill processes each object in turn, so if you have a large number of objects in the PG, it might take some time for the OSDs to find out which objects need to be synchronized
[14:32] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[14:32] <jiffe> this is going to be horrible if this is what's going to happen every time we need to replace a disk
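(For reference, the usual throttles that govern how aggressively backfill like this proceeds; a sketch with illustrative values, not advice given in the channel.)

    # raise the per-OSD backfill/recovery concurrency, then watch progress
    ceph tell osd.* injectargs '--osd_max_backfills 2 --osd_recovery_max_active 5'
    ceph -w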
[14:33] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[14:38] * kuku (~kuku@112.203.56.253) has joined #ceph
[14:39] * babilen (~babilen@babilen.user.oftc.net) has joined #ceph
[14:39] * Drumplayr (~thomas@r74-192-135-250.gtwncmta01.grtntx.tl.dh.suddenlink.net) Quit (Ping timeout: 480 seconds)
[14:40] <Be-El> jiffe: there's probably a reason the OSDs are that slow... maybe memory limitations / slow disks?
[14:40] <babilen> Hi all. My understanding is that ceph doesn't automatically include striping, but that this is configured for RBD during creation .. how can I get the current configuration?
[14:41] <babilen> But then I'm not sure about ceph's exact architecture yet
[14:41] <Be-El> babilen: ceph has different layers; the lowest layer being RADOS. and RADOS does not use striping
[14:42] <Be-El> RBD operates on top of RADOS and has striping support
[14:42] <babilen> But I would have to configure that explicitly, wouldn't I?
[14:43] <Be-El> babilen: there's a default configuration in /etc/ceph/ceph.conf for rbd, or you can define the settings when creating an RBD instance
[14:43] <Be-El> babilen: the current settings for an existing RBD instance can be queried with 'rbd info'
[14:43] * Hemanth (~hkumar_@103.228.221.179) Quit (Quit: Leaving)
[14:43] <babilen> "rbd info ..." doesn't show information about striping (cf. http://paste.debian.net/789855/ ) -- And it is not listed in "features" -- Does that mean that this volume is not striped?
[14:44] <Be-El> the order line defines the striping.
[14:44] <Be-El> in your case you have chunks of 4 MB size (the default stripe size)
[14:44] <babilen> We are trying to compare Ceph to a proprietary offering called Quobyte and we see significant performance differences between them when we run bonnie++ tests.
[14:44] <babilen> Ah, so it does striping?
[14:45] <Be-El> it always does striping
[14:45] * vimal (~vikumar@121.244.87.116) Quit (Quit: Leaving)
[14:45] <Be-El> there's also a 'striping' feature, but that's a different story
[14:45] <babilen> That threw me off .. what is the "striping" feature about then?
[14:46] <Be-El> good question....i've seen some mails floating by on the mailing list, but haven't had a closer look at it yet
[14:46] * Hemanth (~hkumar_@103.228.221.179) has joined #ceph
[14:47] <babilen> Fair enough .. http://docs.ceph.com/docs/master/man/8/rbd/#striping mentions "Specifying a different [stripe_unit] requires that the STRIPINGV2 feature be supported (added in Ceph v0.53) and format 2 images be used." and I wasn't sure if that means that there'll be only one stripe otherwise
[14:48] * Drumplayr (~thomas@r74-192-135-250.gtwncmta01.grtntx.tl.dh.suddenlink.net) has joined #ceph
[14:48] <babilen> Just to be clear .. we have a stripe width of 4M here and different stripes will be stored in different objects (and therefore on different OSDs in some cases)
[14:48] <Be-El> exactly
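(To make the "order" point above concrete: rbd info reports order 22, i.e. 2^22-byte = 4 MiB objects. A sketch with hypothetical pool/image names; the explicit layout requires the STRIPINGV2 feature and format 2 images, per the rbd man page cited above.)

    # default layout: 4 MiB objects, stripe_unit = object size, stripe_count = 1
    rbd create mypool/myimage --size 102400
    rbd info mypool/myimage
    # explicit fine-grained striping across 16 objects at a time
    rbd create mypool/striped --size 102400 --image-format 2 \
        --stripe-unit 65536 --stripe-count 16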
[14:48] * wes_dillingham (~wes_dilli@140.247.242.44) has joined #ceph
[14:49] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[14:50] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[14:51] <Be-El> there's also thin provisioning; stripe objects are created on first write access
[14:51] <babilen> Well, right now I've created an ext4 filesystem on that block device and mounted it.
[14:51] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[14:52] <babilen> As I said, we are comparing it to another solution and are seeing significant differences (orders of magnitude) between them (cf. http://paste.debian.net/789862/ )
[14:52] <babilen> Just unsure what explains it and the quobyte colleague mentioned that he didn't configure striping at all
[14:52] <babilen> (we are not seeing any parallel writes or reads there)
[14:53] <jiffe> Be-El: these machines have 32GB of RAM each and IO wait time is < 1% on all machines
[14:53] <koollman> babilen: what are the parameters for this test ?
[14:54] <babilen> koollman: bonnie++ -d /mnt/$FOO -s 220G -n 1024 -u root
[14:54] <babilen> (boxes have around 98G RAM hence the large size)
[14:54] <jiffe> load averages on them are around 0.1, these machines just look like they're not doing anything
[14:54] * georgem (~Adium@206.108.127.16) has joined #ceph
[14:54] <Be-El> does bonnie use multiple threads?
[14:55] * thomnico (~thomnico@2a01:e35:8b41:120:cc5b:7fd4:fe21:6eac) Quit (Quit: Ex-Chat)
[14:56] <babilen> Be-El: It can run multiple processes, but this one is a single process and single-threaded I think
[14:56] * thomnico (~thomnico@2a01:e35:8b41:120:cc5b:7fd4:fe21:6eac) has joined #ceph
[14:56] * cronburg__ (~cronburg@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[14:57] <Be-El> babilen: do you use any kind of rbd cache?
[14:57] <babilen> These are all preliminary measurements while we learn more about the system and our testing procedure, so it might very well be that we need to use different tools or invocations for actual tests
[14:57] <babilen> Be-El: No
[14:58] * mhack (~mhack@nat-pool-bos-u.redhat.com) has joined #ceph
[14:58] <babilen> Sorry, we have a meeting now. I shall be back in ~20-30 minutes. If you can think of anything do not hesitate to let me know, but I just wanted to let you know that I won't reply immediately
[14:58] <Be-El> well, slower writes are to be expected (ceph uses replication and writes synchronously), but faster reads are unexpected
[14:59] * rraja_ (~rraja@121.244.87.118) has joined #ceph
[14:59] <babilen> What really baffles me though are the differences in all the "Create" tests (file creation, deletion, ...)
[14:59] <babilen> Those are magnitudes apart
[15:00] <babilen> I've run those tests multiple times and the numbers stay roughly the same
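(Regarding Be-El's rbd cache question: a sketch of enabling the client-side RBD cache in ceph.conf, with illustrative sizes; these are standard rbd options, though whether caching explains the gap here was not established in the channel.)

    [client]
        rbd cache = true
        rbd cache size = 33554432          # 32 MiB, illustrative
        rbd cache max dirty = 25165824     # must stay below the cache size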
[15:01] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[15:01] * rraja (~rraja@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:01] <jiffe> I'm curious why this started going out >100MB/s last night and is going at a trickle this morning
[15:02] <Be-El> jiffe: you can try to restart one of the involved OSD
[15:02] * swami1 (~swami@223.227.239.78) has joined #ceph
[15:03] * jprins (~jprins@bbnat.betterbe.com) has joined #ceph
[15:03] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:04] * kefu (~kefu@114.92.101.38) has joined #ceph
[15:05] * kuku (~kuku@112.203.56.253) Quit (Remote host closed the connection)
[15:05] <jprins> Hi everyone. I have a few questions related to Ceph and RGW with a fresh Jewel install. Just posted something on the mailing list; was wondering if someone could help me.
[15:06] <jprins> It looks to me like a fresh install of Ceph with RGW (Jewel) is missing the default realm. Which works fine when you keep it all simple, but when you start playing around with different placement targets etc., you run into problems. At least I do.
[15:07] * swami2 (~swami@223.227.160.115) Quit (Read error: Connection reset by peer)
[15:08] <jiffe> hmm, the two osds that I took out and shut down didn't actually shut down; I killed the ceph processes for those osds and the misplaced transfer picked back up, although now I have a degraded transfer as well
[15:08] * kuku (~kuku@112.203.56.253) has joined #ceph
[15:09] * cronburg__ (~cronburg@nat-pool-bos-t.redhat.com) has joined #ceph
[15:11] * spgriffinjr (~spgriffin@66.46.246.206) Quit ()
[15:11] <wes_dillingham> In terms of recovery, does ceph prioritize recovery IO for those objects that have blocked IO requests because their replica count is below min_size?
[15:13] * spgriffinjr (~spgriffin@66.46.246.206) has joined #ceph
[15:16] * jarrpa (~jarrpa@adsl-72-50-85-48.prtc.net) has joined #ceph
[15:18] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[15:19] * kuku (~kuku@112.203.56.253) Quit (Remote host closed the connection)
[15:20] * vbellur (~vijay@71.234.224.255) Quit (Ping timeout: 480 seconds)
[15:22] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[15:22] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[15:23] * kefu (~kefu@114.92.101.38) Quit (Max SendQ exceeded)
[15:24] * kefu (~kefu@114.92.101.38) has joined #ceph
[15:26] * kuku (~kuku@112.203.56.253) has joined #ceph
[15:27] * rraja_ (~rraja@121.244.87.118) Quit (Ping timeout: 480 seconds)
[15:31] * squizzi (~squizzi@71-34-69-94.ptld.qwest.net) has joined #ceph
[15:31] * mikeyv (~mikeyv@50.58.123.252) has joined #ceph
[15:35] * bene2 (~bene@2601:193:4101:f410:ea2a:eaff:fe08:3c7a) has joined #ceph
[15:36] * rraja_ (~rraja@121.244.87.117) has joined #ceph
[15:36] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[15:37] * mattbenjamin (~mbenjamin@12.118.3.106) has joined #ceph
[15:38] * yanzheng (~zhyan@118.116.114.80) Quit (Quit: This computer has gone to sleep)
[15:39] * mhackett (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[15:39] * kuku (~kuku@112.203.56.253) Quit (Read error: Connection reset by peer)
[15:39] * kuku (~kuku@112.203.56.253) has joined #ceph
[15:41] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[15:42] * yanzheng (~zhyan@118.116.114.80) has joined #ceph
[15:43] * Drumplayr (~thomas@r74-192-135-250.gtwncmta01.grtntx.tl.dh.suddenlink.net) Quit (Ping timeout: 480 seconds)
[15:44] * dnunez (~dnunez@nat-pool-bos-u.redhat.com) has joined #ceph
[15:45] * cronburg__ (~cronburg@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[15:46] * mhack (~mhack@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[15:47] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[15:47] <mikeyv> I have installed jewel on CentOS 7, and after rebooting a storage node for the first time all of my ceph-osd services failed to start; ceph-disk shows all the osds in the prepared state. 'ceph-disk activate <disk>' will bring up the services, but why are they not starting automatically?
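(A hedged first check for this symptom; these are standard ceph-disk/systemd commands on jewel, though whether they address mikeyv's particular case depends on how the disks were partitioned, since auto-activation relies on udev matching the GPT partition type.)

    # activate everything ceph-disk still lists as 'prepared'
    sudo ceph-disk activate-all
    # ensure the systemd targets are enabled so OSDs come up on boot
    sudo systemctl enable ceph.target ceph-osd.target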
[15:50] * getup (~getup@095-097-074-074.static.chello.nl) Quit (Ping timeout: 480 seconds)
[15:51] <IcePic> jprins: I may actually be in exactly your situation with my rgw
[15:51] * mhackett is now known as mhack
[15:52] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[15:54] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[15:55] <jprins> IcePic: I think I have fixed it. See my second mail.
[15:56] <IcePic> jprins: yeah, saw the followup. google actually was quite fast in indexing the list
[15:57] <IcePic> I shall try it at once, I have a def zone and zonegroup but no realm. This after an upgrade to jewel. I have data in the old renamed pools, but not critical data.
[15:57] <jprins> All my work is testing and learning.
[15:57] <jprins> Nothing critical
[15:57] <jprins> Don't kill me if you kill your cluster with my command ;-)
[15:57] <IcePic> nah, I'm fully aware of my own fault in case it goes south
[15:58] <IcePic> as it is now, radosgw wont start, so its not getting better unless I do something
[15:58] * vbellur (~vijay@nat-pool-bos-u.redhat.com) has joined #ceph
[15:58] * rakeshgm (~rakesh@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:59] <IcePic> and it's not helpful with dummy error messages in jewel. 8-(
[16:00] <IcePic> RGWZoneParams::create(): error creating default zone params: (17) File exists
[16:00] * thomnico (~thomnico@2a01:e35:8b41:120:cc5b:7fd4:fe21:6eac) Quit (Quit: Ex-Chat)
[16:00] * thomnico (~thomnico@2a01:e35:8b41:120:cc5b:7fd4:fe21:6eac) has joined #ceph
[16:01] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[16:04] * kefu is now known as kefu|afk
[16:06] * swami1 (~swami@223.227.239.78) Quit (Ping timeout: 480 seconds)
[16:08] * rakeshgm (~rakesh@121.244.87.118) has joined #ceph
[16:09] * huangjun|2 (~kvirc@117.151.51.30) Quit (Ping timeout: 480 seconds)
[16:10] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[16:11] * onyb (~ani07nov@119.82.105.66) has joined #ceph
[16:13] * KapiteinKoffie (~Sun7zu@tor2r.ins.tor.net.eu.org) has joined #ceph
[16:15] * yanzheng (~zhyan@118.116.114.80) Quit (Quit: This computer has gone to sleep)
[16:16] * yanzheng (~zhyan@118.116.114.80) has joined #ceph
[16:17] * yanzheng (~zhyan@118.116.114.80) Quit ()
[16:19] * jfaj_ (~jan@p20030084AF3738005EC5D4FFFEBB68A4.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[16:19] * xarses (~xarses@66-219-216-151.static.ip.veracitynetworks.com) Quit (Ping timeout: 480 seconds)
[16:19] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[16:20] * TMM (~hp@185.5.121.201) Quit (Quit: Ex-Chat)
[16:20] * dbbyleo (~dbbyleo@50-198-202-93-static.hfc.comcastbusiness.net) has joined #ceph
[16:21] * kuku (~kuku@112.203.56.253) Quit (Remote host closed the connection)
[16:22] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[16:23] * kefu|afk is now known as kefu
[16:25] * cronburg__ (~cronburg@nat-pool-bos-t.redhat.com) has joined #ceph
[16:25] * mikeyv (~mikeyv@50.58.123.252) Quit (Quit: Leaving)
[16:29] * jtw (~john@2601:644:4000:b0bf:a455:c53f:4a2b:be52) Quit (Quit: Leaving)
[16:30] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[16:38] * lpabon (~quassel@nat-pool-bos-t.redhat.com) has joined #ceph
[16:38] * rakeshgm (~rakesh@121.244.87.118) Quit (Remote host closed the connection)
[16:42] * xarses (~xarses@4.35.170.198) has joined #ceph
[16:43] * rakeshgm (~rakesh@121.244.87.118) has joined #ceph
[16:43] * saintpablo (~saintpabl@gw01.mhitp.dk) has joined #ceph
[16:43] * KapiteinKoffie (~Sun7zu@61TAABCYK.tor-irc.dnsbl.oftc.net) Quit ()
[16:46] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[16:46] <jprins> IcePic: That error is now gone in my configuration since I created a default realm and put everything correctly underneath.
[16:46] <jprins> At least, I think that this is my current status.
[16:49] <IcePic> I did all your steps, but still have ERROR: failed to initialize watch: (1) Operation not permitted left to fix.
[16:49] <IcePic> I lacked the realm stuff exactly where you did though, so I think it was correct to run that anyhow
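(The "realm stuff" referred to is presumably along these lines: creating a default realm and attaching the existing default zonegroup to it. A sketch only; the exact sequence jprins posted to the mailing list is not quoted in this log, and <realm-id> is a placeholder.)

    radosgw-admin realm create --rgw-realm=default --default
    radosgw-admin zonegroup modify --rgw-zonegroup=default --realm-id=<realm-id>
    radosgw-admin period update --commit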
[16:51] * cyphase (~cyphase@000134f2.user.oftc.net) Quit (Ping timeout: 480 seconds)
[16:52] * cyphase (~cyphase@c-50-148-131-137.hsd1.ca.comcast.net) has joined #ceph
[16:55] * thomnico (~thomnico@2a01:e35:8b41:120:cc5b:7fd4:fe21:6eac) Quit (Ping timeout: 480 seconds)
[16:56] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[16:58] * jarrpa (~jarrpa@adsl-72-50-85-48.prtc.net) Quit (Ping timeout: 480 seconds)
[17:01] * saintpablo (~saintpabl@gw01.mhitp.dk) Quit (Ping timeout: 480 seconds)
[17:02] * squizzi (~squizzi@71-34-69-94.ptld.qwest.net) Quit (Quit: bye)
[17:04] * srk (~Siva@32.97.110.53) has joined #ceph
[17:05] * vbellur (~vijay@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[17:06] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) Quit (Quit: Leaving.)
[17:07] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:09] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Quit: Leaving)
[17:10] * wushudoin (~wushudoin@2601:646:8281:cfd:2ab2:bdff:fe0b:a6ee) has joined #ceph
[17:12] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Ping timeout: 480 seconds)
[17:16] * rakeshgm (~rakesh@121.244.87.118) Quit (Ping timeout: 480 seconds)
[17:18] * vbellur (~vijay@nat-pool-bos-t.redhat.com) has joined #ceph
[17:18] * onyb (~ani07nov@119.82.105.66) Quit (Quit: raise SystemExit())
[17:25] * danieagle (~Daniel@179.97.148.125) has joined #ceph
[17:26] * doppelgrau (~doppelgra@132.252.235.172) Quit (Quit: Leaving.)
[17:26] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[17:28] * i_m (~ivan.miro@31.173.120.48) Quit (Ping timeout: 480 seconds)
[17:30] * darthbacon (~darthbaco@67-61-63-35.cpe.cableone.net) has joined #ceph
[17:32] * darthbacon (~darthbaco@67-61-63-35.cpe.cableone.net) Quit ()
[17:34] * blizzow (~jburns@50.243.148.102) has joined #ceph
[17:34] * rakeshgm (~rakesh@121.244.87.117) Quit (Quit: Leaving)
[17:36] * topro_ (~prousa@p578af414.dip0.t-ipconnect.de) Quit (Quit: Konversation terminated!)
[17:36] * topro_ (~prousa@p578af414.dip0.t-ipconnect.de) has joined #ceph
[17:42] * garphy`aw is now known as garphy
[17:43] * mykola (~Mikolaj@91.245.78.8) has joined #ceph
[17:44] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[17:49] * haplo37 (~haplo37@199.91.185.156) Quit (Ping timeout: 480 seconds)
[17:50] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:51] * squizzi (~squizzi@71-34-69-94.ptld.qwest.net) has joined #ceph
[17:52] * jarrpa (~jarrpa@67.224.250.2) has joined #ceph
[17:54] * wjw-freebsd (~wjw@smtp.medusa.nl) has joined #ceph
[17:54] * thomnico (~thomnico@2a01:e35:8b41:120:cc5b:7fd4:fe21:6eac) has joined #ceph
[17:54] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[17:57] * jowilkin (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) has joined #ceph
[17:57] * vimal (~vikumar@114.143.167.9) has joined #ceph
[17:57] * oliveiradan (~doliveira@137.65.133.10) has joined #ceph
[18:01] * xarses (~xarses@4.35.170.198) Quit (Remote host closed the connection)
[18:01] * xarses (~xarses@4.35.170.198) has joined #ceph
[18:03] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[18:05] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) has joined #ceph
[18:08] * dnunez (~dnunez@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[18:10] * thomnico (~thomnico@2a01:e35:8b41:120:cc5b:7fd4:fe21:6eac) Quit (Ping timeout: 480 seconds)
[18:17] * i_m (~ivan.miro@31.173.120.48) has joined #ceph
[18:20] <jprins> Hi, I try to give a second user access to a bucket using s3cmd but this fails. I created 2 users, one with full access and a second with read-write access. Then I create a bucket using the user with full access, and I add the second user to the ACL of this bucket with Write access. But I always get access denied / 403 when I try to access the bucket using the second user.
[18:21] <jprins> Anyone an idea what could be causing this? (Setup: Jewel 10.2.2 with RGW)
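(A sketch of the s3cmd side of this, with hypothetical bucket and user ids. As jprins finds later in the log, granting write alone was not enough; read was needed as well.)

    # as the bucket owner: grant the second user both read and write
    s3cmd setacl s3://mybucket --acl-grant=read:userB
    s3cmd setacl s3://mybucket --acl-grant=write:userB
    # inspect the resulting ACL
    s3cmd info s3://mybucket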
[18:21] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Ping timeout: 480 seconds)
[18:23] * wjw-freebsd (~wjw@smtp.medusa.nl) Quit (Ping timeout: 480 seconds)
[18:27] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[18:27] * toMeloos (~toMeloos@2a03:fc02:2:1:9eeb:e8ff:fe06:cfbb) has joined #ceph
[18:32] * ade (~abradshaw@p4FF7AAFC.dip0.t-ipconnect.de) Quit (Quit: Too sexy for his shirt)
[18:33] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[18:36] * ron-slc (~Ron@173-165-129-118-utah.hfc.comcastbusiness.net) Quit (Remote host closed the connection)
[18:42] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:43] * oarra (~rorr@45.73.146.238) has joined #ceph
[18:44] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[18:59] * vimal (~vikumar@114.143.167.9) Quit (Quit: Leaving)
[18:59] * cathode (~cathode@50.232.215.114) has joined #ceph
[19:02] * DanFoster (~Daniel@office.34sp.com) Quit (Quit: Leaving)
[19:03] * Nicho1as (~nicho1as@00022427.user.oftc.net) Quit (Quit: A man from the Far East; using WeeChat 1.5)
[19:07] * xarses (~xarses@4.35.170.198) Quit (Ping timeout: 480 seconds)
[19:08] * w0lfeh (~totalworm@178-175-128-50.static.host) has joined #ceph
[19:09] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[19:09] * rraja_ (~rraja@121.244.87.117) Quit (Quit: Leaving)
[19:10] * derjohn_mob (~aj@tmo-112-104.customers.d1-online.com) has joined #ceph
[19:14] * jermudgeon (~jhaustin@southend.mdu.whitestone.link) Quit (Quit: jermudgeon)
[19:15] * dnunez (~dnunez@nat-pool-bos-u.redhat.com) has joined #ceph
[19:20] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[19:24] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[19:26] * i_m (~ivan.miro@31.173.120.48) Quit (Quit: Leaving.)
[19:26] * i_m (~ivan.miro@31.173.120.48) has joined #ceph
[19:34] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has left #ceph
[19:37] * w0lfeh (~totalworm@5AEAAA06T.tor-irc.dnsbl.oftc.net) Quit ()
[19:40] * northrup (~northrup@189-211-129-205.static.axtel.net) has joined #ceph
[19:44] * cyphase (~cyphase@000134f2.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:44] <northrup> I know that ceph-deploy / ceph-disk commands do not support LVM, and that there are a good many reasons why that should be - however I find myself in a predicament where I have no choice
[19:44] <northrup> is there ANY way that I can make a deployment work with LVM managed objects?
[19:45] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[19:45] * cyphase (~cyphase@c-50-148-131-137.hsd1.ca.comcast.net) has joined #ceph
[19:47] <IcePic> I think you can run osds in directories, so those could be on top of lvm, if you must have it
[19:47] <IcePic> as opposed to handing devices to osd initialization. Which perhaps also would accept lvm'ed devices, but I haven't tested that.
[19:48] * chunmei (~chunmei@134.134.139.72) has joined #ceph
[19:48] * dmick (~dmick@206.169.83.146) has left #ceph
[19:48] * srk (~Siva@32.97.110.53) Quit (Ping timeout: 480 seconds)
[19:50] <northrup> Hmm.. you're right, I could manually initialize the OSD and storage without using the ceph-deploy command structure... I might give that a try
[19:52] * derjohn_mob (~aj@tmo-112-104.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[19:52] <northrup> I'm backed into a clusterf*#k scenario of having to implement this in MS Azure, which only allows disks of 1TB in size... so if you want an OSD node with 4 6TB OSD targets, you have to use LVM to stripe the 1TB devices together...
[19:53] <northrup> ... or I suppose I could have 24 1TB OSD targets on a node, but I think journal performance would be the bottleneck on that
[19:53] <northrup> thoughts?
[19:54] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[19:55] * srk (~Siva@32.97.110.53) has joined #ceph
[19:57] * wes_dillingham (~wes_dilli@140.247.242.44) Quit (Ping timeout: 480 seconds)
[19:59] * wes_dillingham (~wes_dilli@65.112.8.195) has joined #ceph
[20:01] * georgem1 (~Adium@206.108.127.16) has joined #ceph
[20:01] * georgem (~Adium@206.108.127.16) Quit (Read error: Connection reset by peer)
[20:03] <blizzow> anyone here have recommendations of riofs vs. s3fs vs. goofyfs to access a ceph based s3 bucket?
[20:05] * toMeloos (~toMeloos@2a03:fc02:2:1:9eeb:e8ff:fe06:cfbb) Quit (Ping timeout: 480 seconds)
[20:05] * cyphase (~cyphase@000134f2.user.oftc.net) Quit (Ping timeout: 480 seconds)
[20:05] * vbellur (~vijay@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[20:06] * cyphase (~cyphase@c-50-148-131-137.hsd1.ca.comcast.net) has joined #ceph
[20:08] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Quit: jermudgeon)
[20:08] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:f01f:657e:17e3:bfd5) Quit (Ping timeout: 480 seconds)
[20:13] * xarses (~xarses@4.35.170.198) has joined #ceph
[20:18] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[20:20] * vbellur (~vijay@nat-pool-bos-u.redhat.com) has joined #ceph
[20:20] <blizzow> northrup: don't use lvm, use mdadm.
[20:21] <northrup> blizzow and that won't make ceph-disk complain?
[20:22] <blizzow> northrup: it didn't complain with me. Using /dev/mdXX didn't present any problems with ceph-deploy
[20:22] * davidzlap (~Adium@2605:e000:1313:8003:218b:daed:f39e:f090) has joined #ceph
[20:23] <northrup> blizzow EXCELLENT! thank you!
[20:23] <blizzow> It does however mean that if one of your disks fails (in either LVM or mdadm), you will lose the whole OSD.
[20:23] <blizzow> I ended up doing a bunch of separate OSDs in my machine instead of that.
[20:24] * kefu (~kefu@114.92.101.38) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[20:24] <blizzow> I asked nearly the same question in here a couple weeks ago and was told I might be in for a world of hurt.
[20:24] * wer (~wer@216.197.66.124) has joined #ceph
[20:24] <blizzow> From what I can tell, performance is similar between the setups and I prefer to manage fewer OSD nodes.
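(A sketch of the mdadm approach blizzow describes, with illustrative device names and the era's ceph-deploy syntax; as noted above, RAID-0 means losing any member disk loses the whole OSD.)

    # stripe six 1TB Azure data disks into one ~6TB device
    sudo mdadm --create /dev/md0 --level=0 --raid-devices=6 /dev/sd[c-h]
    # hand the md device to ceph-deploy like any other OSD disk
    ceph-deploy osd create <node>:/dev/md0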
[20:24] * rwheeler (~rwheeler@pool-173-48-195-215.bstnma.fios.verizon.net) has joined #ceph
[20:28] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[20:32] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[20:34] * Jeffrey4l (~Jeffrey@110.252.42.172) Quit (Ping timeout: 480 seconds)
[20:44] * cathode (~cathode@50.232.215.114) Quit (Quit: Leaving)
[20:44] * penguinRaider (~KiKo@104.250.141.44) Quit (Ping timeout: 480 seconds)
[20:44] <blizzow> *OSDs not nodes.
[20:45] * joshd1 (~jdurgin@2602:30a:c089:2b0:15dd:dcf1:f5e0:b5d0) Quit (Quit: Leaving.)
[20:52] * scg (~zscg@181.122.4.166) Quit (Quit: Ex-Chat)
[20:53] * penguinRaider (~KiKo@104.250.141.44) has joined #ceph
[20:54] * penguinRaider (~KiKo@104.250.141.44) Quit ()
[20:54] * penguinRaider (~KiKo@104.250.141.44) has joined #ceph
[20:58] * _mrp (~mrp@82.117.199.26) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[21:06] * Drumplayr (~thomas@r74-192-135-250.gtwncmta01.grtntx.tl.dh.suddenlink.net) has joined #ceph
[21:07] * georgem (~Adium@206.108.127.16) has joined #ceph
[21:07] * georgem1 (~Adium@206.108.127.16) Quit (Read error: Connection reset by peer)
[21:10] <srk> Hi, anyone converted a single-backend cluster (sata) into a multi-backend one (sata, ssd) in combination with OpenStack?
[21:11] * penguinRaider_ (~KiKo@14.139.82.6) has joined #ceph
[21:12] * penguinRaider (~KiKo@104.250.141.44) Quit (Ping timeout: 480 seconds)
[21:18] * lcurtis_ (~lcurtis@47.19.105.250) has joined #ceph
[21:22] <jprins> Hi everyone. Is it possible to create a setup where a bucket has an ACL with 2 users, one with full control on the bucket and the other with read/write access?
[21:22] <jprins> Or maybe even with groups ?
[21:26] * wkennington (~wkenningt@c-71-204-170-241.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[21:28] <jprins> I tried adding a second user with Write access to the ACL, but that didn't work.
[21:29] * dougf (~dougf@75-131-32-223.static.kgpt.tn.charter.com) Quit (Ping timeout: 480 seconds)
[21:31] * Miouge (~Miouge@109.128.94.173) Quit (Quit: Miouge)
[21:40] * Miouge (~Miouge@109.128.94.173) has joined #ceph
[21:41] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) Quit (Quit: Ex-Chat)
[21:42] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[21:45] <Drumplayr> Thanks for the help.
[21:45] <Drumplayr> Hello
[21:46] * _mrp (~mrp@178-222-114-173.dynamic.isp.telekom.rs) has joined #ceph
[21:46] * oarra (~rorr@45.73.146.238) Quit (Quit: oarra)
[21:47] * _mrp (~mrp@178-222-114-173.dynamic.isp.telekom.rs) Quit ()
[21:47] <Drumplayr> I'm new to ceph, and I don't know what happened. The power went out overnight and ever since, nothing works. All the log files are empty. The zipped log files look normal.
[21:48] <Drumplayr> No matter what command I run, I get a whole bunch of 2016-08-16 14:45:35.197564 ac89fb40 0 -- :/3363546911 >> 192.168.15.7:6789/0 pipe(0xac505ad8 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0xac500ce8).fault
[21:48] <Drumplayr> I used ceph-deploy to get everything up and running. I checked all the config files and they look normal.
[21:49] <Drumplayr> I don't know where to go from here.
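(The repeating ".fault" lines Drumplayr quotes mean the client cannot reach the monitor at 192.168.15.7:6789. A hedged triage sketch, using the sysvinit tooling seen earlier in this log:)

    # on the monitor host: is the mon running and listening?
    sudo /etc/init.d/ceph status
    ss -tlnp | grep 6789
    # from the client: is the monitor port reachable at all?
    nc -zv 192.168.15.7 6789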
[21:49] * Hemanth (~hkumar_@103.228.221.179) Quit (Ping timeout: 480 seconds)
[21:53] * dougf (~dougf@75-131-32-223.static.kgpt.tn.charter.com) has joined #ceph
[21:54] * jfaj_ (~jan@p20030084AF3738005EC5D4FFFEBB68A4.dip0.t-ipconnect.de) has joined #ceph
[22:01] <Drumplayr> I could just wipe everything out and start over, but I'd prefer to fix it first
[22:01] * dvahlin (~saint@battlecruiser.thesaint.se) Quit (Ping timeout: 480 seconds)
[22:03] * oarra (~rorr@45.73.146.238) has joined #ceph
[22:03] * derjohn_mob (~aj@x4db0fe13.dyn.telefonica.de) has joined #ceph
[22:05] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[22:06] * northrup (~northrup@189-211-129-205.static.axtel.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[22:09] * hellertime (~Adium@72.246.3.14) Quit (Quit: Leaving.)
[22:09] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[22:11] * rendar (~I@host77-34-dynamic.25-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[22:11] <jprins> Found it. Giving both read and write access did it for me.
[22:15] * vbellur (~vijay@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[22:17] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[22:17] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[22:19] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[22:20] * mykola (~Mikolaj@91.245.78.8) Quit (Quit: away)
[22:23] <jprins> Ok, not really. I created a bucket using user A, who has full control. Gave user B read/write access to the bucket and put some files into the bucket using user B.
[22:23] <jprins> Now those files have an ACL with user B having full control and user A no access.
[22:24] <jprins> That is not really what I want. Is this normal with Ceph and RGW?
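(What jprins describes is standard S3 semantics: the uploader owns the object, and the bucket owner gets no automatic access to it. A sketch of granting it back per object, with hypothetical names:)

    # as user B, after upload: give the bucket owner full control of the object
    s3cmd setacl s3://mybucket/somefile --acl-grant=full_control:userA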
[22:24] * jfaj_ (~jan@p20030084AF3738005EC5D4FFFEBB68A4.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[22:31] * Miouge (~Miouge@109.128.94.173) Quit (Quit: Miouge)
[22:33] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[22:35] * erice (~eric@c-76-120-53-165.hsd1.co.comcast.net) has joined #ceph
[22:36] * Miouge (~Miouge@109.128.94.173) has joined #ceph
[22:37] * wes_dillingham (~wes_dilli@65.112.8.195) Quit (Quit: wes_dillingham)
[22:37] * rendar (~I@host77-34-dynamic.25-79-r.retail.telecomitalia.it) has joined #ceph
[22:44] * jdillaman is now known as jdillaman_afk
[22:52] * lpabon (~quassel@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[23:04] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Quit: jermudgeon)
[23:05] * Miouge (~Miouge@109.128.94.173) Quit (Quit: Miouge)
[23:07] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[23:22] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Ping timeout: 480 seconds)
[23:22] <Drumplayr> anyone on?
[23:23] <jprins> Yes, I am, but I'm still learning.
[23:23] <Drumplayr> Same here.
[23:23] <Drumplayr> I'm thinking of starting over.
[23:24] <Drumplayr> I have empty log files and nothing is working.
[23:24] <jprins> My Ceph is working fine at the moment. Testing with S3 at the moment.
[23:29] <Drumplayr> Mine was running fine for a few days.
[23:31] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[23:33] <Drumplayr> Then I had a power outage. Hasn't been online since
[23:34] * wak-work_ (~wak-work@2620:15c:202:0:38e1:44bf:b383:7120) Quit (Remote host closed the connection)
[23:34] * wak-work (~wak-work@2620:15c:202:0:c82:2e9:6b8d:6875) has joined #ceph
[23:36] * northrup (~northrup@201.103.87.199) has joined #ceph
[23:38] <Drumplayr> hello
[23:39] <jprins> Hi
[23:39] <Drumplayr> I was hoping the guys that just joined were able to help
[23:39] <jprins> Could we have a look at your cluster together? Maybe we can both learn something from it.
[23:39] <Drumplayr> ok
[23:40] <Drumplayr> Where would you like to start?
[23:40] <jprins> See private message.
[23:41] <Drumplayr> I have no private message
[23:42] <jprins> I opened a query window with you.
[23:42] <Drumplayr> I didn't get it
[23:42] <jprins> Could you try /query jprins
[23:44] * i_m (~ivan.miro@31.173.120.48) Quit (Quit: Leaving.)
[23:44] * cyphase (~cyphase@000134f2.user.oftc.net) Quit (Ping timeout: 480 seconds)
[23:44] * i_m (~ivan.miro@31.173.120.48) has joined #ceph
[23:45] * cyphase (~cyphase@c-50-148-131-137.hsd1.ca.comcast.net) has joined #ceph
[23:46] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) Quit (Remote host closed the connection)
[23:48] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[23:55] * toMeloos (~toMeloos@53568B3D.cm-6-7c.dynamic.ziggo.nl) has joined #ceph
[23:55] * toMeloos (~toMeloos@53568B3D.cm-6-7c.dynamic.ziggo.nl) Quit ()
[23:56] * srk (~Siva@32.97.110.53) Quit (Ping timeout: 480 seconds)
[23:59] * rendar (~I@host77-34-dynamic.25-79-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.