#ceph IRC Log

Index

IRC Log for 2016-04-12

Timestamps are in GMT/BST.

[0:02] * wwdillingham (~LobsterRo@mobile-166-186-169-140.mycingular.net) has joined #ceph
[0:03] * maku (~Unforgive@4MJAAD5X1.tor-irc.dnsbl.oftc.net) Quit ()
[0:08] * wwdillingham (~LobsterRo@mobile-166-186-169-140.mycingular.net) Quit (Read error: Connection reset by peer)
[0:11] * dgurtner (~dgurtner@217.149.140.193) Quit (Ping timeout: 480 seconds)
[0:12] * rendar (~I@host159-27-dynamic.44-79-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[0:13] * ehall (~oftc-webi@129.59.122.16) has joined #ceph
[0:13] <ehall> Hello? anyone?
[0:14] <TMM> hello ehall, if you have a question it is best to just ask it.
[0:14] <TMM> ehall, if someone can help you someone will answer
[0:15] * branto (~branto@178-253-130-11.3pp.slovanet.sk) Quit (Quit: Leaving.)
[0:17] <ehall> Ok: power failure in data center has left 3 mons unable to start with mon/OSDMonitor.cc: 125: FAILED assert(version >= osdmap.epoch)
[0:17] <ehall> Firefly 0.80.11-1trusty pkgs
[0:18] <ehall> Have found a similar problem discussed at http://irclogs.ceph.widodh.nl/index.php?date=2015-05-29, but am unsure how to proceed.
[0:19] <TMM> ehall, I'm sorry, I don't know how to help with that, maybe someone else here can. m0zes and T1 seem very knowledgeable. Maybe one of you guys can help?
[0:19] * vicente (~vicente@111-241-31-95.dynamic.hinet.net) has joined #ceph
[0:20] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[0:22] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[0:24] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[0:25] * dgurtner (~dgurtner@217.149.140.193) has joined #ceph
[0:27] <ehall> Ok. Will check back in later. Thanks.
[0:27] <TMM> ehall, that irc log talks about injecting versions of the monmap, perhaps you can try to see how that works
[0:27] * vicente (~vicente@111-241-31-95.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[0:28] <TMM> ehall, you should stay connected to IRC, someone may answer in a couple of hours but you can't see it if you're not connected
[0:28] * Kvisle (~tv@tv.users.bitbit.net) Quit (magnet.oftc.net kinetic.oftc.net)
[0:28] * MaZ- (~maz@00016955.user.oftc.net) Quit (magnet.oftc.net kinetic.oftc.net)
[0:28] * wonko_be (bernard@november.openminds.be) Quit (magnet.oftc.net kinetic.oftc.net)
[0:28] <TMM> ehall, you can also try sending a message to the ceph-users mailing list
[0:28] * Kvisle (~tv@tv.users.bitbit.net) has joined #ceph
[0:28] * wonko_be (bernard@november.openminds.be) has joined #ceph
[0:29] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[0:31] <ehall> Have to travel, but will post shortly.
[0:34] * MaZ- (~maz@00016955.user.oftc.net) has joined #ceph
[0:38] * Kyso1 (~Kurimus@hessel0.torservers.net) has joined #ceph
[0:42] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[0:43] * thomnico (~thomnico@cable-46.253.163.149.coditel.net) has joined #ceph
[0:43] * shaunm (~shaunm@74.83.215.100) has joined #ceph
[0:45] * ehall (~oftc-webi@129.59.122.16) Quit (Quit: Page closed)
[0:54] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[0:58] * thomnico (~thomnico@cable-46.253.163.149.coditel.net) Quit (Ping timeout: 480 seconds)
[0:59] * bjornar__ (~bjornar@ti0099a430-1561.bb.online.no) Quit (Ping timeout: 480 seconds)
[0:59] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[1:03] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[1:04] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[1:06] * bjornar__ (~bjornar@ti0099a430-1561.bb.online.no) has joined #ceph
[1:06] * vata1 (~vata@207.96.182.162) Quit (Quit: Leaving.)
[1:08] * Kyso1 (~Kurimus@06SAAA6BO.tor-irc.dnsbl.oftc.net) Quit ()
[1:08] * adept256 (~Pulec@ns316491.ip-37-187-129.eu) has joined #ceph
[1:13] * dgurtner (~dgurtner@217.149.140.193) Quit (Ping timeout: 480 seconds)
[1:16] * bjornar__ (~bjornar@ti0099a430-1561.bb.online.no) Quit (Ping timeout: 480 seconds)
[1:20] * vicente (~vicente@111-241-31-95.dynamic.hinet.net) has joined #ceph
[1:21] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[1:22] * ircolle1 (~Adium@c-71-229-136-109.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[1:22] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[1:22] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[1:22] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[1:28] * vicente (~vicente@111-241-31-95.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[1:35] * Lea (~LeaChim@host86-159-239-193.range86-159.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[1:38] * adept256 (~Pulec@06SAAA6B8.tor-irc.dnsbl.oftc.net) Quit ()
[1:38] * kalmisto1 (~n0x1d@exit1.ipredator.se) has joined #ceph
[1:39] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[1:42] * xarses (~xarses@64.124.158.100) Quit (Ping timeout: 480 seconds)
[1:43] * fsimonce (~simon@host201-70-dynamic.26-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[1:47] * oms101 (~oms101@p20030057EA029900C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:51] * mattbenjamin (~mbenjamin@aa2.linuxbox.com) Quit (Ping timeout: 480 seconds)
[1:54] * angdraug (~angdraug@64.124.158.100) Quit (Quit: Leaving)
[1:56] * oms101 (~oms101@p20030057EA078000C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:58] * cathode (~cathode@50.232.215.114) Quit (Quit: Leaving)
[2:08] * kalmisto1 (~n0x1d@6AGAAARQJ.tor-irc.dnsbl.oftc.net) Quit ()
[2:16] * vicente (~vicente@111-241-31-95.dynamic.hinet.net) has joined #ceph
[2:20] * huangjun (~kvirc@113.57.168.154) has joined #ceph
[2:22] * Skaag (~lunix@65.200.54.234) Quit (Quit: Leaving.)
[2:25] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[2:32] * compass (~compass@ctlt01-fwsm.net.ubc.ca) has joined #ceph
[2:35] <compass> hi everyone, I have a ceph cluster running on CoreOS and Ceph components are running in containers using the ceph-docker repo. Cluster is running fine except removing a large file on an RBD mount is really slow. I tried to remove a 500MB file, it takes a few hours. I don't think this is normal. Copying the file only takes 2-3 seconds. Any suggestions?
[2:36] <compass> I also notice some message in the log: "slow request 480.575783 seconds old, received at 2016-04-11 22:53:57.379342: osd_op(client.4151.0:237386 notify.7 [watch ping cookie 94533872064 gen 6] 4.a204812d ondisk+write+known_if_redirected e48) currently waiting for peered". Not sure if it is related
[2:38] * Kalado1 (~Shesh@tor-exit-node.seas.upenn.edu) has joined #ceph
[2:42] * swami1 (~swami@27.7.161.87) has joined #ceph
[2:43] * swami1 (~swami@27.7.161.87) Quit ()
[2:44] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[2:50] * rmerritt_ (~rmerritt@lin-nyc.accolari.net) Quit (Remote host closed the connection)
[2:51] * ehall (~oftc-webi@c-68-52-107-224.hsd1.tn.comcast.net) has joined #ceph
[2:53] <ehall> Power failure in data center has left 3 mons unable to start with mon/OSDMonitor.cc: 125: FAILED assert(version >= osdmap.epoch)
[2:54] <ehall> Have found a similar problem discussed at http://irclogs.ceph.widodh.nl/index.php?date=2015-05-29, but am unsure how to proceed.
[2:54] <ehall> If I read ceph-kvstore-tool /var/lib/ceph/mon/ceph-cephsecurestore1/store.db list correctly, they believe osdmap is 1, but they also have osdmap:full_38456 and osdmap:38630 in the store.
[2:54] <ehall> Working from http://irclogs info, something like ceph-kvstore-tool /var/lib/ceph/mon/ceph-foo/store.db set osdmap NNNNN in /tmp/osdmap might help, but I am unsure of value for NNNN. Seems like too delicate an operation for experimentation.
[2:55] <ehall> ceph: Firefly 0.80.11-1trusty OS: Ubuntu 14.04.4 kernel: 3.13.0-83-generic
[2:55] <ehall> any guidance appreciated
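A sketch of the store inspection ehall describes, with the subcommands mirroring those quoted in the log. The mon path is the one he posted, and epoch 38456 is the value he reported; substitute your own, and back up store.db before attempting any `set`:

```shell
# List keys under the osdmap prefix to see which epochs the store holds
# (path is the one quoted in the log; substitute your mon's data dir).
ceph-kvstore-tool /var/lib/ceph/mon/ceph-cephsecurestore1/store.db list osdmap

# Dump the newest full map found above for offline inspection
# (full_38456 is the key ehall reported seeing; yours will differ).
ceph-kvstore-tool /var/lib/ceph/mon/ceph-cephsecurestore1/store.db \
  get osdmap full_38456 out /tmp/osdmap.bin

# Sanity-check the dumped map before considering any write back to the store.
osdmaptool --print /tmp/osdmap.bin
```

This only inspects; deciding what value to `set` back requires knowing the cluster's true latest epoch, which is exactly the delicate part ehall flags.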
[2:55] * vicente (~vicente@111-241-31-95.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[2:59] * RameshN (~rnachimu@101.222.247.208) has joined #ceph
[3:04] * georgem (~Adium@45.72.132.68) has joined #ceph
[3:08] * Kalado1 (~Shesh@4MJAAD53E.tor-irc.dnsbl.oftc.net) Quit ()
[3:08] * Grimhound (~Hidendra@216.218.134.12) has joined #ceph
[3:31] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:32] * IvanJobs (~hardes@103.50.11.146) has joined #ceph
[3:34] * linjan (~linjan@58PAAC0AA.tor-irc.dnsbl.oftc.net) Quit (Ping timeout: 480 seconds)
[3:38] * Grimhound (~Hidendra@4MJAAD533.tor-irc.dnsbl.oftc.net) Quit ()
[3:38] * Sliker (~darkid@exit1.ipredator.se) has joined #ceph
[3:48] * vicente (~vicente@111-241-31-95.dynamic.hinet.net) has joined #ceph
[3:48] * zhaochao (~zhaochao@125.39.112.5) has joined #ceph
[3:53] * ehall (~oftc-webi@c-68-52-107-224.hsd1.tn.comcast.net) Quit (Quit: Page closed)
[3:53] * ehall (~oftc-webi@c-68-52-107-224.hsd1.tn.comcast.net) has joined #ceph
[3:53] * linjan (~linjan@dsl-olubrasgw1-54fb5b-165.dhcp.inet.fi) has joined #ceph
[3:55] * mhack (~mhack@66-168-117-78.dhcp.oxfr.ma.charter.com) Quit (Remote host closed the connection)
[3:56] * vicente (~vicente@111-241-31-95.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[3:57] * freakybanana (~freakyban@c-73-158-201-226.hsd1.ca.comcast.net) Quit (Quit: freakybanana)
[3:57] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) has joined #ceph
[4:03] * shyu (~Shanzhi@119.254.120.66) has joined #ceph
[4:04] * kefu (~kefu@183.193.127.235) has joined #ceph
[4:05] * Eduardo_ (~Eduardo@85.193.28.37.rev.vodafone.pt) has joined #ceph
[4:06] <Eduardo_> Hi everyone. Bought more RAM, which cleared that issue, but now I'm getting another error while trying to perform "ceph-deploy mon create-initial", it says
[4:07] <Eduardo_> [ceph_deploy.gatherkeys][WARNING] Unable to find /etc/ceph/ceph.client.admin.keyring on ceph1
[4:08] * Sliker (~darkid@4MJAAD54T.tor-irc.dnsbl.oftc.net) Quit ()
[4:08] * darks (~AluAlu@freedom.ip-eend.nl) has joined #ceph
[4:08] <Eduardo_> [ceph_deploy][ERROR ] KeyNotFoundError: Could not find keyring file: /etc/ceph/ceph.client.admin.keyring on host ceph1
[4:09] <Eduardo_> shouldn't create-initial set all this up?
[4:10] * yanzheng (~zhyan@125.70.21.212) has joined #ceph
[4:10] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Quit: Leaving)
[4:11] <IvanJobs> Hi cephers, anyone know how to revert "ceph osd create"? I just mistyped this on my cluster node.
[4:14] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) has joined #ceph
[4:15] * lae (~lae@177.118.197.104.bc.googleusercontent.com) Quit (Remote host closed the connection)
[4:18] * dec (~dec@71.29.197.104.bc.googleusercontent.com) Quit (Quit: bye)
[4:19] <Eduardo_> IcanJobs, not sure you have to do any migration but check the Removing OSD part: http://docs.ceph.com/docs/hammer/rados/operations/add-or-rm-osds/
[4:19] <Eduardo_> *IvanJobs
[4:21] <IvanJobs> thx Eduardo_, I found the solution "ceph osd rm {id}"
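The full cleanup sequence from the add-or-rm-osds page Eduardo_ linked, for reference (osd id 12 is a made-up example; for an id that was only ever created and never deployed, `ceph osd rm` alone may suffice, as IvanJobs found):

```shell
ceph osd out 12              # mark out; a no-op if the OSD was never up
ceph osd crush remove osd.12 # drop it from the CRUSH map
ceph auth del osd.12         # remove its cephx key
ceph osd rm 12               # finally delete the osd id itself
```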
[4:26] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[4:27] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[4:29] * lae (~lae@177.118.197.104.bc.googleusercontent.com) has joined #ceph
[4:29] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[4:30] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[4:33] * shyu (~Shanzhi@119.254.120.66) Quit (Ping timeout: 480 seconds)
[4:33] * shyu (~Shanzhi@119.254.120.67) has joined #ceph
[4:35] * RameshN (~rnachimu@101.222.247.208) Quit (Ping timeout: 480 seconds)
[4:35] * haomaiwang (~haomaiwan@s16.ezlink.hk) has joined #ceph
[4:36] * haomaiwang (~haomaiwan@s16.ezlink.hk) Quit (Remote host closed the connection)
[4:36] * haomaiwang (~haomaiwan@s16.ezlink.hk) has joined #ceph
[4:37] * haomaiwang (~haomaiwan@s16.ezlink.hk) Quit (Remote host closed the connection)
[4:38] * darks (~AluAlu@6AGAAARXQ.tor-irc.dnsbl.oftc.net) Quit ()
[4:38] * Dysgalt (~Spessu@tor1e1.privacyfoundation.ch) has joined #ceph
[4:38] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[4:39] * haomaiwang (~haomaiwan@s16.ezlink.hk) has joined #ceph
[4:40] * haomaiwang (~haomaiwan@s16.ezlink.hk) Quit (Remote host closed the connection)
[4:40] * haomaiwang (~haomaiwan@s16.ezlink.hk) has joined #ceph
[4:41] * haomaiwang (~haomaiwan@s16.ezlink.hk) Quit (Remote host closed the connection)
[4:42] * kefu_ (~kefu@114.92.120.83) has joined #ceph
[4:44] * kefu (~kefu@183.193.127.235) Quit (Ping timeout: 480 seconds)
[4:45] * Mika_c (~quassel@122.146.93.152) has joined #ceph
[4:47] * neurodrone_ (~neurodron@pool-100-35-226-97.nwrknj.fios.verizon.net) has joined #ceph
[4:48] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[4:48] * georgem (~Adium@45.72.132.68) Quit (Quit: Leaving.)
[4:51] * haomaiwang (~haomaiwan@s16.ezlink.hk) has joined #ceph
[4:52] * haomaiwang (~haomaiwan@s16.ezlink.hk) Quit (Remote host closed the connection)
[5:08] * Dysgalt (~Spessu@6AGAAARY7.tor-irc.dnsbl.oftc.net) Quit ()
[5:09] * bugzc-EC (~bugzc_EC@ec2-52-3-149-142.compute-1.amazonaws.com) has joined #ceph
[5:12] * xENO_ (~Kottizen@atlantic480.us.unmetered.com) has joined #ceph
[5:12] <Eduardo_> IvanJobs no problem
[5:14] * bugzc_EC (~bugzc_EC@ec2-52-3-149-142.compute-1.amazonaws.com) Quit (Ping timeout: 480 seconds)
[5:24] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[5:24] * neurodrone_ (~neurodron@pool-100-35-226-97.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[5:25] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) has joined #ceph
[5:25] * neurodrone_ (~neurodron@pool-100-35-226-97.nwrknj.fios.verizon.net) has joined #ceph
[5:25] * overclk (~quassel@121.244.87.117) has joined #ceph
[5:27] * shyu (~Shanzhi@119.254.120.67) Quit (Ping timeout: 480 seconds)
[5:31] <Eduardo_> So, I've been searching around, someone on Stack Overflow marked this as the answer that solved a similar problem
[5:31] <Eduardo_> "It has been happened because in the ceph.conf you must set mon ip in the public network not in the private. And I had mon ip : 192.168.57.101 (which is the private) but public network was: 10.0.2.0/24."
[5:33] <Eduardo_> Anyway, I defined two networks: a public one, 192.168.0.0/24, and a private one, 172.16.0.0/24
[5:33] <Eduardo_> in the hosts file and ssh configuration I gave the private IPs
[5:34] <Eduardo_> I already changed that IP in ceph.conf to the public one
[5:34] <Eduardo_> and it didn't work
[5:37] <Eduardo_> should I change also to public the ones in /etc/hosts and ~/.ssh/config ?
[5:37] <Eduardo_> or there are supposed to be private?
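For reference, the split Eduardo_ describes is normally expressed in ceph.conf roughly like this; the subnets are the ones he names, and the mon address is a hypothetical host on his public network (the mon must be reachable on the public network, which is the point of the Stack Overflow answer he quoted):

```ini
[global]
# Client and monitor traffic.
public network = 192.168.0.0/24
# OSD replication and heartbeat traffic.
cluster network = 172.16.0.0/24
mon initial members = ceph1
# Hypothetical public-network address of ceph1.
mon host = 192.168.0.11
```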
[5:42] * shyu (~Shanzhi@119.254.120.66) has joined #ceph
[5:42] * xENO_ (~Kottizen@6AGAAAR0D.tor-irc.dnsbl.oftc.net) Quit ()
[5:42] * mog_ (~TheDoudou@torsrvu.snydernet.net) has joined #ceph
[5:45] * neurodrone_ (~neurodron@pool-100-35-226-97.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[5:51] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[5:52] * laevar (~jschulz1@134.76.80.11) Quit (Quit: WeeChat 1.4)
[5:53] * ehall (~oftc-webi@c-68-52-107-224.hsd1.tn.comcast.net) Quit (Ping timeout: 480 seconds)
[5:55] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[5:55] * Vacuum__ (~Vacuum@88.130.205.88) has joined #ceph
[5:56] * dec (~dec@223.119.197.104.bc.googleusercontent.com) has joined #ceph
[5:58] <Eduardo_> Changed them to the public IPs
[5:58] <Eduardo_> no difference
[6:00] * jamespage (~jamespage@culvain.gromper.net) Quit (Quit: Coyote finally caught me)
[6:00] * jamespage (~jamespage@culvain.gromper.net) has joined #ceph
[6:02] * Vacuum_ (~Vacuum@88.130.212.12) Quit (Ping timeout: 480 seconds)
[6:11] * RameshN (~rnachimu@121.244.87.117) has joined #ceph
[6:12] * mog_ (~TheDoudou@6AGAAAR1H.tor-irc.dnsbl.oftc.net) Quit ()
[6:12] * Hazmat (~Swompie`@176.10.99.205) has joined #ceph
[6:37] * swami1 (~swami@49.32.0.90) has joined #ceph
[6:42] * Hazmat (~Swompie`@6AGAAAR2J.tor-irc.dnsbl.oftc.net) Quit ()
[6:42] * AotC (~zviratko@hessel2.torservers.net) has joined #ceph
[6:47] <Eduardo_> Newbie question: there is a keyring file on /var/lib/ceph/mon/ceph-ceph1/ dir in the monitor filesystem
[6:48] <Eduardo_> is this file the same one that should be in /etc/ceph/ceph.client.admin.keyring?
[6:53] <vikhyat> Eduardo_: no
[6:53] <vikhyat> ceph.client.admin.keyring file would be present in your admin node
[6:53] <vikhyat> from where you ran ceph-deploy mon create-initial
[6:55] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[6:58] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Quit: Leaving)
[6:59] <Eduardo_> vikhyat: ok. Every time I try to run something like 'ceph-deploy mon create-initial' it complains that ceph.client.admin.keyring is not present
[7:00] <Eduardo_> how should it be generated?
[7:01] <vikhyat> http://docs.ceph.com/docs/master/start/quick-ceph-deploy/#create-a-cluster
[7:01] <vikhyat> ceph-deploy purgedata {ceph-node} [{ceph-node}]
[7:01] <vikhyat> ceph-deploy forgetkeys
[7:01] <vikhyat> run these two commands
[7:01] <vikhyat> ceph-deploy new {initial-monitor-node(s)}
[7:01] <vikhyat> ^^ this is important command
[7:01] <vikhyat> before ceph-deploy mon create-initial
[7:02] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[7:02] * kefu_ (~kefu@114.92.120.83) Quit (Max SendQ exceeded)
[7:02] <vikhyat> http://docs.ceph.com/docs/master/start/quick-ceph-deploy/#create-a-cluster
[7:02] <vikhyat> this link has all the details
[7:02] <vikhyat> HTH
[7:02] * kefu (~kefu@114.92.120.83) has joined #ceph
[7:04] <Eduardo_> vikhyat: Thank you. However, if I try to run ceph-deploy purgedata it returns 'Ceph is still installed on: ['ceph1'] / RuntimeError:refusing to purge data while Ceph is still installed
[7:06] * rraja (~rraja@121.244.87.117) has joined #ceph
[7:06] <vikhyat> Eduardo_: I am not sure, maybe check again
[7:06] <vikhyat> then you can do
[7:06] <vikhyat> ceph-deploy purge {ceph-node} [{ceph-node}]
[7:06] * kawa2014 (~kawa@194.170.156.187) has joined #ceph
[7:06] <vikhyat> remember it will remove packages
[7:07] <vikhyat> then
[7:07] <vikhyat> ceph-deploy purgedata {ceph-node} [{ceph-node}]
[7:07] <vikhyat> ceph-deploy forgetkeys
[7:07] <vikhyat> and then you can start from here
[7:07] <vikhyat> ceph-deploy new {initial-monitor-node(s)}
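vikhyat's steps, collected in order as one sequence (the node name ceph1 stands in for `{ceph-node}`; note this removes the ceph packages and all cluster data on that node):

```shell
ceph-deploy purge ceph1        # uninstall ceph packages (unblocks purgedata)
ceph-deploy purgedata ceph1    # wipe /var/lib/ceph and /etc/ceph on the node
ceph-deploy forgetkeys         # discard stale keyrings held by the admin node
ceph-deploy new ceph1          # write a fresh ceph.conf and monitor keyring
ceph-deploy mon create-initial # deploy mons and gather ceph.client.admin.keyring
```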
[7:11] * karnan (~karnan@121.244.87.117) has joined #ceph
[7:12] * AotC (~zviratko@76GAAED8W.tor-irc.dnsbl.oftc.net) Quit ()
[7:12] * hoopy (~curtis864@6AGAAAR4T.tor-irc.dnsbl.oftc.net) has joined #ceph
[7:17] * EinstCra_ (~EinstCraz@58.247.117.134) has joined #ceph
[7:17] * rdas (~rdas@121.244.87.116) has joined #ceph
[7:19] * zaitcev (~zaitcev@c-50-130-189-82.hsd1.nm.comcast.net) Quit (Quit: Bye)
[7:22] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[7:24] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) has joined #ceph
[7:24] * ade (~abradshaw@dslb-188-106-099-129.188.106.pools.vodafone-ip.de) has joined #ceph
[7:30] * vata (~vata@cable-21.246.173-197.electronicbox.net) Quit (Quit: Leaving.)
[7:33] * itamarl_ (~itamarl@194.90.7.244) has joined #ceph
[7:33] <Eduardo_> Yay, it worked. Thanks vikhyat.
[7:34] <vikhyat> Eduardo_: great
[7:34] <vikhyat> np!
[7:36] * evelu (~erwan@37.160.157.11) has joined #ceph
[7:42] * hoopy (~curtis864@6AGAAAR4T.tor-irc.dnsbl.oftc.net) Quit ()
[7:42] * rushworld (~kiasyn@tor2e1.privacyfoundation.ch) has joined #ceph
[7:54] * natarej_ (~natarej@CPE-101-181-53-14.lnse4.cha.bigpond.net.au) has joined #ceph
[7:58] * haomaiwang (~haomaiwan@s16.ezlink.hk) has joined #ceph
[7:59] * haomaiwang (~haomaiwan@s16.ezlink.hk) Quit (Remote host closed the connection)
[7:59] * haomaiwang (~haomaiwan@45.32.28.138) has joined #ceph
[8:00] * haomaiwang (~haomaiwan@45.32.28.138) Quit (Remote host closed the connection)
[8:00] * haomaiwang (~haomaiwan@45.32.28.138) has joined #ceph
[8:00] * natarej (~natarej@CPE-101-181-134-103.lnse5.cha.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[8:01] * haomaiwang (~haomaiwan@45.32.28.138) Quit (Remote host closed the connection)
[8:01] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[8:01] * haomaiwang (~haomaiwan@45.32.28.138) has joined #ceph
[8:02] * haomaiwang (~haomaiwan@45.32.28.138) Quit (Remote host closed the connection)
[8:12] * rushworld (~kiasyn@4MJAAD59R.tor-irc.dnsbl.oftc.net) Quit ()
[8:12] * hoopy (~BillyBobJ@tor.krosnov.net) has joined #ceph
[8:14] * dgurtner (~dgurtner@217.149.140.193) has joined #ceph
[8:16] * c_soukup (~csoukup@2605:a601:9c8:6b00:56f:e6a7:f22f:499f) has joined #ceph
[8:18] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[8:22] * csoukup (~csoukup@2605:a601:9c8:6b00:56f:e6a7:f22f:499f) Quit (Ping timeout: 480 seconds)
[8:26] * dgurtner (~dgurtner@217.149.140.193) Quit (Ping timeout: 480 seconds)
[8:29] * Hemanth (~hkumar_@121.244.87.117) has joined #ceph
[8:37] * bjornar__ (~bjornar@ti0099a430-1561.bb.online.no) has joined #ceph
[8:40] * shyu (~Shanzhi@119.254.120.66) Quit (Remote host closed the connection)
[8:40] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[8:41] * shyu (~Shanzhi@119.254.120.66) has joined #ceph
[8:42] * hoopy (~BillyBobJ@4MJAAD6AE.tor-irc.dnsbl.oftc.net) Quit ()
[8:42] * rapedex (~smf68@6AGAAAR8M.tor-irc.dnsbl.oftc.net) has joined #ceph
[8:47] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[8:47] * dkrdkr (48a3dc12@107.161.19.53) has joined #ceph
[8:47] * dkrdkr (48a3dc12@107.161.19.53) Quit ()
[8:50] * dkrdkr (uid110802@id-110802.tooting.irccloud.com) has joined #ceph
[8:52] * dkrdkr (uid110802@id-110802.tooting.irccloud.com) Quit ()
[8:53] * dkrdkr (uid110802@id-110802.tooting.irccloud.com) has joined #ceph
[8:54] * bvi (~bastiaan@185.56.32.1) has joined #ceph
[8:54] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[8:57] * olid19811117 (~olid1982@ip503c3d0c.speed.planet.nl) has joined #ceph
[8:58] * ronrib (~boswortr@45.32.242.135) Quit (Remote host closed the connection)
[8:59] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[9:00] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[9:02] * analbeard (~shw@support.memset.com) has joined #ceph
[9:06] * dgurtner (~dgurtner@82.199.64.68) has joined #ceph
[9:07] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[9:08] <T1w> mornings
[9:12] * rapedex (~smf68@6AGAAAR8M.tor-irc.dnsbl.oftc.net) Quit ()
[9:12] * nih (~PeterRabb@4MJAAD6BO.tor-irc.dnsbl.oftc.net) has joined #ceph
[9:13] * olid19811117 (~olid1982@ip503c3d0c.speed.planet.nl) Quit (Ping timeout: 480 seconds)
[9:18] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) has joined #ceph
[9:26] * wushudoin (~wushudoin@2601:646:8201:7769:2ab2:bdff:fe0b:a6ee) Quit (Ping timeout: 480 seconds)
[9:27] * borun (~oftc-webi@178.237.98.13) Quit (Quit: Page closed)
[9:28] * derjohn_mobi (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[9:32] * fsimonce (~simon@host201-70-dynamic.26-79-r.retail.telecomitalia.it) has joined #ceph
[9:33] * bjornar__ (~bjornar@ti0099a430-1561.bb.online.no) Quit (Ping timeout: 480 seconds)
[9:42] * nih (~PeterRabb@4MJAAD6BO.tor-irc.dnsbl.oftc.net) Quit ()
[9:42] * Grum (~superdug@146.0.74.160) has joined #ceph
[9:44] <boolman> anyone running ceph on docker and have any tips on graphing
[9:49] * rendar (~I@host1-179-dynamic.47-79-r.retail.telecomitalia.it) has joined #ceph
[9:51] * derjohn_mobi (~aj@fw.gkh-setu.de) has joined #ceph
[9:56] * Miouge_ (~Miouge@94.136.92.20) has joined #ceph
[10:01] <Jeeves_> ceph on docker?!
[10:01] <Jeeves_> boolman: Librenms with my Ceph plugin
[10:01] * Miouge (~Miouge@94.136.92.20) Quit (Ping timeout: 480 seconds)
[10:01] * Miouge_ is now known as Miouge
[10:01] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[10:03] * bara (~bara@ip4-83-240-10-82.cust.nbox.cz) has joined #ceph
[10:09] * The1w (~jens@node3.survey-it.dk) has joined #ceph
[10:11] * DanFoster (~Daniel@2a00:1ee0:3:1337:a4a0:2452:d982:7799) has joined #ceph
[10:12] * Grum (~superdug@06SAAA6KP.tor-irc.dnsbl.oftc.net) Quit ()
[10:14] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[10:20] <masterpe> Is it possible to NFS-export a cephfs share? To create a ceph NFS gateway?
[10:20] * Lea (~LeaChim@host86-159-239-193.range86-159.btcentralplus.com) has joined #ceph
[10:23] <Jeeves_> masterpe: There's this thing CephFS, which is not ready for production yet, iirc.
[10:23] * itamarl_ (~itamarl@194.90.7.244) Quit (Quit: itamarl_)
[10:23] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[10:23] <Jeeves_> masterpe: Otherwise, create a vm with a ceph-disk and export it..
[10:26] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) has joined #ceph
[10:26] * bjornar__ (~bjornar@109.247.131.38) has joined #ceph
[10:27] <masterpe> Jeeves_: I thought that with the infernalis release CephFS got production ready
[10:28] <Jeeves_> http://docs.ceph.com/docs/master/cephfs/early-adopters/
[10:28] <Jeeves_> unless ceph.com isn't updated properly.
[10:29] <masterpe> Jeeves_: I was just reading that page
[10:30] <IvanJobs> boolman, ceph on docker is not recommended. Some of ceph's code is optimized for bare-metal hosts.
[10:31] <IvanJobs> just FYI.
[10:31] * shylesh__ (~shylesh@121.244.87.124) has joined #ceph
[10:31] <Jeeves_> Why would you even run ceph on docker?
[10:31] <boolman> hyperconverged solution
[10:32] <Jeeves_> https://www.youtube.com/watch?v=PivpCKEiQOQ&feature=youtu.be
[10:32] <IvanJobs> Jeeves_, version J of ceph has removed this "not suit for production" warning. right?
[10:32] <Jeeves_> IvanJobs: No clue. I just look at what docs.ceph.com tells me.
[10:32] <TMM> boolman, just use systemd cgroups in that case
[10:33] <Be-El> masterpe: we are running a NFS gateway (NFS 4 + krb5) with a cephfs mountpoint. works fine. you just need to stick to the kernel implementation and avoid ceph-fuse
[10:33] <TMM> boolman, that's how I do it, create some slices for guaranteed memory/cpu capacity. Don't go full docker.
[10:33] <TMM> boolman, systemd has all the features you need to do the resource isolation necessary for the hyperconverged scenario
[10:33] <IvanJobs> Jeeves_, ceph's official docs have fallen far behind the development progress.
[10:34] <masterpe> Be-El: Is your NFS gw HA?
[10:34] <Be-El> masterpe: nope, single server
[10:34] <Jeeves_> IvanJobs: That's a terrible situation
[10:34] <IcePic> I thought nfs didn't lend itself to balancing or HA?
[10:35] <Be-El> masterpe: there's also direct support for cephfs over nfs in the ganesha userspace NFS server
[10:35] <Jeeves_> It doesn't
[10:35] <IcePic> at least not in any stateful way that your clients will appreciate.
[10:35] <TMM> nfs4 has features to make HA much more feasible
[10:35] <Be-El> IcePic: plain nfs does not support it, but pNFS at least allows you to balance servers
[10:35] <TMM> nfs3 is not really very reasonable to implement in a ha situation
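A minimal sketch of the kernel-client gateway Be-El describes (cephfs mounted with the kernel driver, then re-exported with nfsd); the monitor address, secret file, and export subnet are invented for illustration:

```shell
# Mount cephfs with the kernel client on the gateway host, per Be-El's
# advice to avoid ceph-fuse for this setup.
mount -t ceph 192.168.0.11:6789:/ /mnt/cephfs \
  -o name=admin,secretfile=/etc/ceph/admin.secret

# Re-export it over NFS. An explicit fsid is needed because cephfs has no
# stable device number for nfsd to derive one from.
echo '/mnt/cephfs 192.168.0.0/24(rw,no_subtree_check,fsid=10)' >> /etc/exports
exportfs -ra
```

The ganesha userspace server Be-El mentions is the alternative: it talks to cephfs directly through libcephfs instead of re-exporting a kernel mount.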
[10:39] * Miouge (~Miouge@94.136.92.20) Quit (Quit: Miouge)
[10:39] * atheism (~atheism@182.48.117.114) has joined #ceph
[10:39] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[10:41] <boolman> IvanJobs: why is it not recommended to run ceph on docker, is it unstable? bad performance?
[10:42] * Jebula (~theghost9@6AGAAASDW.tor-irc.dnsbl.oftc.net) has joined #ceph
[10:43] <TMM> I would argue that docker gets you no benefits in the OSD case whatsoever, adding it increases complexity and takes you further away from all other ceph implementations
[10:43] <TMM> making getting help harder
[10:44] <IvanJobs> boolman, truth be told, both ceph and docker are developing rapidly, and many changes will be introduced. In my opinion, combining ceph with docker adds complexity.
[10:44] <IvanJobs> just FYI. control risks, and choose your way.
[10:45] <Jeeves_> What does hyperconverged even mean?
[10:46] <Jeeves_> Sounds like something from a startrek movie, to sound interesting and futuristic
[10:46] <TMM> Jeeves_, running vms, osds, network overlay all on the same hosts. You only have one type of physical host
[10:47] <Jeeves_> Sounds like exactly the opposite of what ceph clusters try to do..
[10:49] <TMM> it has some benefits if you have chronic overcapacity
[10:49] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[10:49] <Jeeves_> You mean bad capacity planning ? ;)
[10:49] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[10:49] <TMM> in my case we run a lot of vm instances but none use very much cpu. We need tons of ram, so we need tons of cpus so we have a lot of cpus idling all the time
[10:49] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[10:50] <TMM> putting the osds on the same machines and using systemd slices to guarantee capacity for the osd processes made sense for us
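The slice arrangement TMM mentions might look roughly like this; the unit name and limits are invented for illustration (CPUShares/MemoryLimit are the directives of 2016-era systemd):

```ini
# /etc/systemd/system/ceph-osd.slice  (hypothetical unit name)
[Slice]
# Weight OSD processes above default-priority (1024) services.
CPUShares=4096
# Cap OSD memory so guest reservations are honoured.
MemoryLimit=16G
```

The OSD services would then be placed in it with `Slice=ceph-osd.slice` in their `[Service]` section, which is the "guarantee capacity for the osd processes" part without involving docker.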
[10:50] <Jeeves_> Why do you need tons of cpus if you need tons of ram?
[10:51] <TMM> the memory controllers of modern systems are on the cpus
[10:51] <TMM> you have a limit of number of dimm slots per physical cpu
[10:52] <TMM> if you want more than 256gb of ram you have to go dual socket, and if you want to be able to get all memory bandwidth out of them you need at least 10 cores per socket
[10:52] <TMM> so we have a minimum of 20 physical cores per hypervisor node anyway
[10:52] <TMM> which then sit there with a load of 5.1 :P
[10:53] <TMM> buying more UP nodes with 256GB of ram is more expensive than the DP systems
[10:53] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[10:53] <TMM> and they have 8 2.5" slots
[10:53] <TMM> so, we stuffed them full of ssds and run osds on them
[10:53] <TMM> now during backfills we sometimes see a load of 10!
[10:54] <TMM> still about 70 short of saturation :p
[10:54] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[10:54] <Jeeves_> And where would docker introduce new features in this picture?
[10:54] <TMM> if you don't know how to work systemd's slice system I suppose you can use it for some resource allocations
[10:54] <TMM> I'm not advocating this at all btw
[10:55] <TMM> I think that's a terrible solution to a simple problem
[10:55] * branto (~branto@178-253-128-175.3pp.slovanet.sk) has joined #ceph
[10:55] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[10:55] <Jeeves_> Ok, lets agree on that :)
[10:56] * Miouge (~Miouge@94.136.92.20) Quit (Quit: Miouge)
[10:56] * thomnico (~thomnico@cable-46.253.163.149.coditel.net) has joined #ceph
[10:58] <TMM> unless you actually implement some seccomp filters for the containers I don't really see the usecase of docker at all. Just run it on the system if you're not going to bother with some kind of additional security mechanism anyway
[10:58] <TMM> different useraccounts should serve you just as well imho
[10:58] <Jeeves_> docker is for people that are version-number-nazis
[10:58] <Jeeves_> 'My application only runs on php 7.1.a.3.f.v.5.1.1-2'
[10:59] <Jeeves_> (IMHO!)
[10:59] <TMM> even if you're a 'version number nazi' docker doesn't really do much for you. You can easily run multiple copies of software under different user accounts if you have to
[10:59] * itamarl_ (~itamarl@194.90.7.244) has joined #ceph
[10:59] <TMM> what it does do is take control away from your systems' init system, and implement another one that's less good
[11:02] * itamarl (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[11:02] * itamarl_ is now known as itamarl
[11:02] * dan__ (~Daniel@office.34sp.com) has joined #ceph
[11:03] <boolman> TMM: thanks for the input
[11:04] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[11:05] <TMM> boolman, docker rant notwithstanding, it is not a standard way of deploying ceph and I would recommend against it just on those grounds. If you need resource constraints there are ways that make it easier to get help later.
[11:06] <TMM> boolman, getting commercial support later on should your company require it may also be more expensive to get, or just harder to find, or both
[11:07] * thomnico (~thomnico@cable-46.253.163.149.coditel.net) Quit (Ping timeout: 480 seconds)
[11:09] * DanFoster (~Daniel@2a00:1ee0:3:1337:a4a0:2452:d982:7799) Quit (Ping timeout: 480 seconds)
[11:12] * dan__ is now known as DanFoster
[11:12] * Jebula (~theghost9@6AGAAASDW.tor-irc.dnsbl.oftc.net) Quit ()
[11:12] * Thayli (~xolotl@freedom.ip-eend.nl) has joined #ceph
[11:18] * b0e (~aledermue@213.95.25.82) has joined #ceph
[11:21] * thomnico (~thomnico@cable-46.253.163.149.coditel.net) has joined #ceph
[11:21] * pabluk__ is now known as pabluk_
[11:28] * yatin (~oftc-webi@walmart.com) has joined #ceph
[11:30] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[11:31] * thomnico (~thomnico@cable-46.253.163.149.coditel.net) Quit (Quit: Ex-Chat)
[11:31] * thomnico (~thomnico@cable-46.253.163.149.coditel.net) has joined #ceph
[11:36] * mortn (~mortn@217-215-219-69-no229.tbcn.telia.com) has joined #ceph
[11:39] * yatin_ (~yatin@161.163.44.8) has joined #ceph
[11:40] * evelu (~erwan@37.160.157.11) Quit (Ping timeout: 480 seconds)
[11:41] * zhaochao_ (~zhaochao@124.202.191.137) has joined #ceph
[11:41] * haomaiwang (~haomaiwan@45.32.28.138) has joined #ceph
[11:42] * haomaiwang (~haomaiwan@45.32.28.138) Quit (Remote host closed the connection)
[11:42] * haomaiwang (~haomaiwan@45.32.28.138) has joined #ceph
[11:42] * haomaiwang (~haomaiwan@45.32.28.138) Quit ()
[11:42] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[11:42] * Thayli (~xolotl@06SAAA6M0.tor-irc.dnsbl.oftc.net) Quit ()
[11:42] * measter (~Redshift@06SAAA6NO.tor-irc.dnsbl.oftc.net) has joined #ceph
[11:44] * haomaiwang (~textual@45.32.28.138) has joined #ceph
[11:44] * yatin_ (~yatin@161.163.44.8) Quit (Remote host closed the connection)
[11:45] * haomaiwang (~textual@45.32.28.138) Quit (Remote host closed the connection)
[11:45] * thomnico (~thomnico@cable-46.253.163.149.coditel.net) Quit (Read error: Connection reset by peer)
[11:46] * zhaochao (~zhaochao@125.39.112.5) Quit (Ping timeout: 480 seconds)
[11:46] * zhaochao_ is now known as zhaochao
[11:48] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[11:50] * evelu (~erwan@37.161.190.25) has joined #ceph
[11:52] * yatin (~oftc-webi@walmart.com) Quit (Ping timeout: 480 seconds)
[12:04] * yatin (~yatin@161.163.44.8) has joined #ceph
[12:07] * rraja (~rraja@121.244.87.117) Quit (Quit: Leaving)
[12:07] * yatin (~yatin@161.163.44.8) Quit (Quit: Leaving...)
[12:08] * yatin (~yatin@161.163.44.8) has joined #ceph
[12:12] * measter (~Redshift@06SAAA6NO.tor-irc.dnsbl.oftc.net) Quit ()
[12:12] * Kottizen (~Zyn@politkovskaja.torservers.net) has joined #ceph
[12:19] * yatin (~yatin@161.163.44.8) Quit (Remote host closed the connection)
[12:22] * yatin (~yatin@161.163.44.8) has joined #ceph
[12:26] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:9c0a:2a4f:6369:c6) Quit (Ping timeout: 480 seconds)
[12:34] * TMM (~hp@213.197.255.171) has joined #ceph
[12:36] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:bd89:46d9:72e3:18ce) has joined #ceph
[12:40] * garphy is now known as garphy`aw
[12:42] * Kottizen (~Zyn@06SAAA6OA.tor-irc.dnsbl.oftc.net) Quit ()
[12:47] * Swompie` (~chrisinaj@193.90.12.86) has joined #ceph
[12:52] * EinstCra_ (~EinstCraz@58.247.117.134) Quit (Remote host closed the connection)
[12:56] * Mika_c (~quassel@122.146.93.152) Quit (Remote host closed the connection)
[12:58] * IvanJobs (~hardes@103.50.11.146) Quit (Ping timeout: 480 seconds)
[13:00] * shyu_ (~shyu@111.201.76.105) has joined #ceph
[13:04] * yatin (~yatin@161.163.44.8) Quit (Remote host closed the connection)
[13:05] * smerz (~ircircirc@37.74.194.90) Quit (Remote host closed the connection)
[13:07] * yatin (~yatin@161.163.44.8) has joined #ceph
[13:08] * smerz (~ircircirc@37.74.194.90) has joined #ceph
[13:08] * bara (~bara@ip4-83-240-10-82.cust.nbox.cz) Quit (Quit: Bye guys!)
[13:09] * j3roen (~j3roen@77.60.46.13) Quit (Ping timeout: 480 seconds)
[13:12] * cruegge (~cruegge@134.76.80.20) has joined #ceph
[13:13] * ehall (~oftc-webi@c-68-52-107-224.hsd1.tn.comcast.net) has joined #ceph
[13:14] * atheism (~atheism@182.48.117.114) Quit (Ping timeout: 480 seconds)
[13:17] * Swompie` (~chrisinaj@76GAAEEOU.tor-irc.dnsbl.oftc.net) Quit ()
[13:17] * arsenaali (~Bwana@relay1.tor.openinternet.io) has joined #ceph
[13:22] * huangjun (~kvirc@113.57.168.154) Quit (Ping timeout: 480 seconds)
[13:28] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) Quit (Quit: Leaving.)
[13:29] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Quit: Leaving)
[13:29] * madkiss (~madkiss@2001:6f8:12c3:f00f:c4e5:108c:436e:37ad) Quit (Quit: Leaving.)
[13:31] * wjw-freebsd (~wjw@176.74.240.1) has joined #ceph
[13:32] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[13:32] * rwheeler (~rwheeler@pool-173-48-195-215.bstnma.fios.verizon.net) Quit (Quit: Leaving)
[13:34] * _28_ria (~kvirc@opfr028.ru) Quit (Read error: Connection reset by peer)
[13:35] * _28_ria (~kvirc@opfr028.ru) has joined #ceph
[13:36] * zhaochao (~zhaochao@124.202.191.137) Quit (Quit: ChatZilla 0.9.92 [Firefox 45.0.1/20160318172635])
[13:42] * georgem (~Adium@24.114.59.246) has joined #ceph
[13:44] * yatin (~yatin@161.163.44.8) Quit (Remote host closed the connection)
[13:46] * ehall (~oftc-webi@c-68-52-107-224.hsd1.tn.comcast.net) Quit (Ping timeout: 480 seconds)
[13:47] * arsenaali (~Bwana@76GAAEEQB.tor-irc.dnsbl.oftc.net) Quit ()
[13:47] * Hidendra (~Shadow386@207.244.70.35) has joined #ceph
[13:53] * rraja (~rraja@121.244.87.117) has joined #ceph
[13:58] * shylesh__ (~shylesh@121.244.87.124) Quit (Remote host closed the connection)
[13:58] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[13:59] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Remote host closed the connection)
[14:01] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[14:13] * Racpatel (~Racpatel@2601:87:3:3601::baf1) has joined #ceph
[14:16] * _28_ria (~kvirc@opfr028.ru) Quit (Read error: Connection reset by peer)
[14:17] * Hidendra (~Shadow386@6AGAAASLT.tor-irc.dnsbl.oftc.net) Quit ()
[14:17] * _28_ria (~kvirc@opfr028.ru) has joined #ceph
[14:17] * wyang (~wyang@59.45.74.71) has joined #ceph
[14:20] * askb (~askb@117.208.163.201) has joined #ceph
[14:23] * jschulz1 (~jschulz1@134.76.80.11) has joined #ceph
[14:23] * jschulz1 is now known as laevar
[14:24] * neurodrone_ (~neurodron@pool-100-35-226-97.nwrknj.fios.verizon.net) has joined #ceph
[14:25] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[14:26] * dkrdkr (uid110802@id-110802.tooting.irccloud.com) Quit (Quit: Connection closed for inactivity)
[14:27] * i_m (~ivan.miro@deibp9eh1--blueice2n3.emea.ibm.com) has joined #ceph
[14:29] * elimat (~elimat@user170.217-10-117.netatonce.net) has joined #ceph
[14:29] * sugoruyo (~georgev@paarthurnax.esc.rl.ac.uk) has joined #ceph
[14:29] * elimat (~elimat@user170.217-10-117.netatonce.net) Quit ()
[14:29] * matel (~elimat@user170.217-10-117.netatonce.net) has joined #ceph
[14:30] <matel> Hi, I have questions regarding backup in Ceph: we are using just RadosGW (object store) and want to take some sort of backup of the data
[14:31] * wjw-freebsd2 (~wjw@176.74.240.1) has joined #ceph
[14:31] <matel> is there a way to take a snapshot of the pool in rados, or enable versioning in the buckets, or is that not good enough for backup usage?
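For reference on matel's question: bucket versioning in RGW is enabled through the S3 API rather than at the rados level. A hedged sketch (the endpoint and bucket name are placeholders, and versioning support depends on the RGW release in use):

```sh
# Enable versioning on a bucket via the S3 API (endpoint and bucket
# are placeholders; requires an RGW release that supports versioning)
aws s3api put-bucket-versioning \
    --endpoint-url http://rgw.example.com \
    --bucket mybucket \
    --versioning-configuration Status=Enabled

# A rados pool snapshot is also possible, but it captures raw rados
# objects, not S3 buckets, so restoring individual objects is awkward
rados -p .rgw.buckets mksnap backup-2016-04-12
```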
[14:32] * c_soukup (~csoukup@2605:a601:9c8:6b00:56f:e6a7:f22f:499f) Quit (Ping timeout: 480 seconds)
[14:36] * wjw-freebsd (~wjw@176.74.240.1) Quit (Ping timeout: 480 seconds)
[14:37] * The1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[14:38] * overclk (~quassel@121.244.87.117) Quit (Remote host closed the connection)
[14:38] * garphy`aw is now known as garphy
[14:39] * georgem (~Adium@24.114.59.246) Quit (Read error: Connection reset by peer)
[14:43] <sugoruyo> hey folks, I'm having a problem and wondering if anyone else has come across this: all my OSD hosts suddenly had a huge load spike and looking at top output the cause seems to be a bunch of [migration/X] threads eating all the CPU. Stopping all Ceph daemons seems to stop this behaviour, but the question is: what might be causing this and how do I fix it?
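Background on the [migration/X] threads in sugoruyo's question: these are the kernel's per-CPU scheduler threads that move tasks between cores, so when they dominate top it usually points at heavy rescheduling rather than at Ceph directly. A few generic commands for narrowing this down (nothing here is Ceph-specific):

```sh
# List the kernel migration threads and the CPU each one is pinned to
ps -eLo pid,psr,pcpu,comm | grep '\[migration'

# Watch context switches (cs) and run-queue length (r); very high
# values suggest the scheduler is thrashing tasks between cores
vmstat 1 5

# Per-CPU utilization breakdown (from the sysstat package)
mpstat -P ALL 1 5
```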
[14:46] * itamarl_ (~itamarl@194.90.7.244) has joined #ceph
[14:47] * briun (~oftc-webi@178.237.98.13) has joined #ceph
[14:47] <briun> hi all,
[14:47] <briun> I can't start my rados gw, it keeps sending me "2 RGWDataChangesLog::ChangesRenewThread: start" messages
[14:48] <briun> and there are many ping requests or so between each message
[14:48] <briun> what is the right command to launch radosgw?
[14:49] <briun> I actually tried: /usr/bin/radosgw -d -c /etc/ceph/clusterTest.conf --debug-rgw 20 --rgw-socket-path=/tmp/radosgw.sock
[14:50] * itamarl (~itamarl@194.90.7.244) Quit (Ping timeout: 480 seconds)
[14:50] * itamarl_ is now known as itamarl
[14:52] * neurodrone_ (~neurodron@pool-100-35-226-97.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[14:53] * neurodrone_ (~neurodron@pool-100-35-226-97.nwrknj.fios.verizon.net) has joined #ceph
[14:54] <briun> anyone ?
[14:55] * georgem (~Adium@206.108.127.16) has joined #ceph
[15:00] * rraja_ (~rraja@121.244.87.117) has joined #ceph
[15:02] * Hemanth (~hkumar_@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:03] * rraja (~rraja@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:04] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:05] * yang_ (~wyang@116.216.30.4) has joined #ceph
[15:05] <laevar> hi, we have an unfound object in the object "rbd_directory", which results in rbd list (and other) commands not working. Is it safe to mark it lost? Will it be regenerated, or can we regenerate it?
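For context on laevar's unfound-object question, the usual inspection path is via the PG commands below (the PG id is a placeholder). `mark_unfound_lost revert` rolls back to a prior version if one exists, while `delete` forgets the object entirely; for a metadata object like `rbd_directory`, reverting is the safer first attempt, and whether it gets regenerated depends on the client that wrote it:

```sh
# Find which PGs report unfound objects
ceph health detail | grep unfound

# List the missing/unfound objects in a given PG (pgid is a placeholder)
ceph pg 2.5 list_missing

# Last resort: revert unfound objects to their last known prior version
# (use "delete" instead of "revert" to forget them entirely)
ceph pg 2.5 mark_unfound_lost revert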
[15:06] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[15:06] <laevar> also: is rbd map functioning with optimal jewel tunables on a debian 8 4.4 kernel?
[15:06] * itamarl (~itamarl@194.90.7.244) Quit (Quit: itamarl)
[15:07] * thomnico (~thomnico@cable-46.253.163.149.coditel.net) has joined #ceph
[15:08] * wyang (~wyang@59.45.74.71) Quit (Ping timeout: 480 seconds)
[15:08] * EinstCrazy (~EinstCraz@116.225.239.138) has joined #ceph
[15:08] * mattbenjamin (~mbenjamin@aa2.linuxbox.com) has joined #ceph
[15:09] <laevar> we had some strange problems when OSDs go out of the cluster and come back: we got unfound objects. It *was* during a backfill due to changing of tunables. Is there some known problem there, or a best practice to avoid such things?
[15:10] * atheism (~atheism@106.38.140.252) has joined #ceph
[15:12] <briun> Does anyone have a working example of a radosgw?
[15:13] <Anticimex> with infernalis and jerasure pool, 10+4 pieces, i get placement on 14 different hosts in our case
[15:13] <Anticimex> though as soon as one of the k's disappear, ceph reports misplaced objects and starts to migrate data
[15:14] <Anticimex> is there a way to defer this so it doesn't start replacing data until k=1 or so?
[15:14] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[15:17] * geegeegee (~blip2@6AGAAASQS.tor-irc.dnsbl.oftc.net) has joined #ceph
[15:17] * bara (~bara@213.175.37.12) has joined #ceph
[15:19] <sugoruyo> Anticimex: AFAIK k is the number of data chunks and m is the number of EC chunks
[15:19] <sugoruyo> if you go down to k=1 you'd have lost the data
[15:19] <Anticimex> ok these are always messed up :)
[15:19] <Anticimex> i'm talking about EC chunks
[15:20] <sugoruyo> I guess what you want is to defer it until you have k+1 chunks available
[15:20] <Anticimex> I don't want the cluster to start repairs until a number > 0 of EC chunks is lost
[15:20] <Anticimex> yeah, something like that
[15:21] <Anticimex> or k+2, not sure, but definitely not at k+3 or k+4
[15:21] <sugoruyo> I'm not sure you can do that
[15:21] * EinstCrazy (~EinstCraz@116.225.239.138) Quit (Remote host closed the connection)
[15:21] <Anticimex> nod
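One knob that may approximate what Anticimex asks for, though it defers by time rather than by lost-chunk count: the monitors wait `mon osd down out interval` seconds before marking a down OSD out, and data migration only begins once the OSD is out. A hedged sketch (the value is illustrative):

```sh
# Delay marking down OSDs "out" (and thus starting data migration)
# for 30 minutes instead of the default 5
ceph tell mon.* injectargs '--mon-osd-down-out-interval 1800'

# Or prevent automatic out-marking entirely during planned maintenance
ceph osd set noout
ceph osd unset noout   # re-enable when done
```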
[15:25] <sugoruyo> anyone have any idea about my [migration/X] and load question?
[15:25] * yatin (~yatin@203.212.245.200) has joined #ceph
[15:25] <sugoruyo> some machines are hitting low thousands for load and take half a minute to open ssh connections
[15:26] * bara_ (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[15:27] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[15:28] <Anticimex> aren't migration threads related to moving processes etc between NUMA nodes?
[15:28] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[15:29] <Anticimex> google seems to suggest simply between cores
[15:30] * askb_ (~askb@117.208.164.87) has joined #ceph
[15:30] * askb (~askb@117.208.163.201) Quit (Read error: Connection reset by peer)
[15:30] * RameshN (~rnachimu@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:30] <TMM> Jeeves_, https://www.youtube.com/watch?v=PivpCKEiQOQ
[15:30] <sugoruyo> yep, they're eating my CPU and I can't figure out why. One forum thread suggested the Ceph journal sync was happening too often but I've seen no diff
[15:31] <Jeeves_> TMM: I posted that here this morning :)
[15:31] * wjw-freebsd (~wjw@176.74.240.1) has joined #ceph
[15:31] <Jeeves_> It's very cool
[15:31] <TMM> Jeeves_, haha, ok! I hadn't seen you post it too
[15:31] <TMM> Jeeves_, seemed appropriate given boolman's thing :P
[15:32] <Jeeves_> TMM: Indeed :D
[15:33] * neurodrone_ (~neurodron@pool-100-35-226-97.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[15:33] * bara (~bara@213.175.37.12) Quit (Ping timeout: 480 seconds)
[15:33] <briun> anyone for radosgw help ?
[15:33] <T1> TMM: it's noooot so good when you understand german..
[15:33] <T1> they shuld have dubbed it
[15:33] <T1> should even
[15:34] * neurodrone_ (~neurodron@pool-100-35-226-97.nwrknj.fios.verizon.net) has joined #ceph
[15:34] <TMM> T1, I suppose not, I don't speak german though :)
[15:35] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[15:36] * wjw-freebsd2 (~wjw@176.74.240.1) Quit (Ping timeout: 480 seconds)
[15:37] <briun> is there a way to use S3 or swift if radosgw doesn't work?
[15:37] <T1> okay, the last comment "use openstack for all I care" is pretty good on what he actually says (Do what you want..)
[15:37] <TMM> well, we do use openstack :P I don't see how it fits in that docker rant, but sure
[15:38] <T1> oss is baaad, mkay?
[15:38] <TMM> ah, I see
[15:38] <TMM> I don't see how that follows from 'docker is a pile' :P But, sure, whatever
[15:39] * doliveira (~doliveira@137.65.133.10) Quit (Ping timeout: 480 seconds)
[15:40] * thomnico (~thomnico@cable-46.253.163.149.coditel.net) Quit (Ping timeout: 480 seconds)
[15:42] * EinstCrazy (~EinstCraz@116.225.239.138) has joined #ceph
[15:43] <sugoruyo> briun: I think most people who use RGW (us included) install it from the packages and start it via their normal service management system
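To make sugoruyo's suggestion concrete for a Firefly-era Debian/Ubuntu install: the gateway is normally described in ceph.conf and started through the packaged init script rather than by hand. The section name and paths below are the conventional ones from the documentation, not taken from briun's actual setup:

```sh
# /etc/ceph/ceph.conf is expected to carry a gateway section, e.g.:
#   [client.radosgw.gateway]
#   host = gateway-host
#   keyring = /etc/ceph/ceph.client.radosgw.keyring
#   rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
#   log file = /var/log/ceph/client.radosgw.gateway.log

# Then start it through the init script instead of invoking the binary:
sudo /etc/init.d/radosgw start
```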
[15:45] <sugoruyo> T1 they should find someone else and do these for German people so they're not left out... Maybe we can try the Supreme Leader or something, it's not like North Koreans have YouTube...
[15:47] * geegeegee (~blip2@6AGAAASQS.tor-irc.dnsbl.oftc.net) Quit ()
[15:47] * matel (~elimat@user170.217-10-117.netatonce.net) Quit (Quit: leaving)
[15:48] * doliveira (~doliveira@137.65.133.10) has joined #ceph
[15:48] * ehall (~oftc-webi@DHCP-129-59-122-56.n1.vanderbilt.edu) has joined #ceph
[15:51] <ehall> Looking for guidance in using ceph-kvstore-tool and ceph-monstore-tool to address mon/OSDMonitor.cc: 125: FAILED assert(version >= osdmap.epoch). Any assistance appreciated.
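For readers hitting the same assert as ehall: the recovery discussed later in this log revolves around comparing the `osdmap` keys in each monitor's key/value store. A rough sketch of the inspection side only (exact ceph-kvstore-tool syntax varies between releases; the store path is the conventional one, and the monitor must be stopped first):

```sh
# Stop the monitor, then list keys in its store (path is illustrative)
ceph-kvstore-tool /var/lib/ceph/mon/ceph-mon1/store.db list | head

# Read the latest full osdmap version the store believes it has
ceph-kvstore-tool /var/lib/ceph/mon/ceph-mon1/store.db get osdmap full_latest
```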
[15:54] * yk (~yatin@216.207.42.140) has joined #ceph
[15:55] * rraja_ (~rraja@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:57] * bvi (~bastiaan@185.56.32.1) Quit (Quit: Ex-Chat)
[15:57] * bvi (~bastiaan@185.56.32.1) has joined #ceph
[15:57] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Ping timeout: 480 seconds)
[15:58] <Anticimex> sugoruyo: what's the core <-> OSD ratio
[15:59] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[16:00] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[16:00] * yatin (~yatin@203.212.245.200) Quit (Ping timeout: 480 seconds)
[16:00] * yk (~yatin@216.207.42.140) Quit (Quit: Leaving...)
[16:03] * jclm (~jclm@ip68-96-198-45.lv.lv.cox.net) Quit (Ping timeout: 480 seconds)
[16:03] * kefu (~kefu@114.92.120.83) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[16:05] * kefu (~kefu@183.193.127.235) has joined #ceph
[16:06] * Racpatel (~Racpatel@2601:87:3:3601::baf1) Quit (Ping timeout: 480 seconds)
[16:07] * c_soukup (~csoukup@159.140.254.106) has joined #ceph
[16:08] <sugoruyo> Anticimex: one logical (HT) core per OSD
[16:09] <briun> someone over radosgw for some help ?
[16:09] <sugoruyo> briun: just ask your question
[16:10] * rakeshgm (~rakesh@121.244.87.117) Quit (Remote host closed the connection)
[16:11] <briun> I can't get radosgw to start
[16:12] <tafelpoot> do you have more info? (just asking, not sure if I will be able to help you)
[16:13] <briun> so I launch the rados gw with /usr/bin/radosgw -d -c /etc/ceph/clusterTest.conf --debug-rgw 20 --rgw-socket-path=/tmp/radosgw.sock
[16:13] * yang_ (~wyang@116.216.30.4) Quit (Quit: This computer has gone to sleep)
[16:13] <briun> it keeps sending me "2 RGWDataChangesLog::ChangesRenewThread: start" messages
[16:13] <briun> and many kinds of stuff that have "ping" in them
[16:14] <briun> sugoruyo: I tried almost everything I could find on ceph,
[16:14] * yang_ (~wyang@122.225.69.4) has joined #ceph
[16:15] <briun> on the official documentation I mean
[16:15] <briun> plus stuff on forums, etc etc
[16:19] <briun> has anyone ever got the rados gateway to work?
[16:19] <briun> seems that nobody knows this function
[16:19] <briun> :)
[16:19] * ZyTer (~ZyTer@ghostbusters.apinnet.fr) has joined #ceph
[16:21] * yanzheng (~zhyan@125.70.21.212) Quit (Quit: This computer has gone to sleep)
[16:28] * kefu_ (~kefu@114.92.120.83) has joined #ceph
[16:28] * andrei (~andrei@109.247.131.38) Quit (Ping timeout: 480 seconds)
[16:28] * gregmark (~Adium@68.87.42.115) has joined #ceph
[16:30] * vata (~vata@207.96.182.162) has joined #ceph
[16:30] <briun> :(
[16:31] * wjw-freebsd2 (~wjw@176.74.240.1) has joined #ceph
[16:31] * wushudoin (~wushudoin@38.140.108.2) has joined #ceph
[16:31] * jclm (~jclm@ip68-96-198-45.lv.lv.cox.net) has joined #ceph
[16:32] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[16:33] * kefu (~kefu@183.193.127.235) Quit (Ping timeout: 480 seconds)
[16:33] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[16:34] * wjw-freebsd (~wjw@176.74.240.1) Quit (Ping timeout: 480 seconds)
[16:35] <sugoruyo> briun: I've got 4 GB/sec going through RGW
[16:35] * cruegge (~cruegge@134.76.80.20) Quit (Quit: bye)
[16:36] <sugoruyo> I've never started it manually though...
[16:38] * linuxkidd (~linuxkidd@174.sub-70-195-201.myvzw.com) has joined #ceph
[16:38] * Racpatel (~Racpatel@2601:87:3:3601:4e34:88ff:fe87:9abf) has joined #ceph
[16:39] * MannerMan (~oscar@user170.217-10-117.netatonce.net) Quit (Ping timeout: 480 seconds)
[16:40] <hroussea> ehall: I'm the one that had the problem you referred to on your mailing-list post
[16:41] <hroussea> I didn't suffer from an outage but from a bug at the time, so just reverting to a previous osdmap that was safe did the trick
[16:41] <hroussea> not sure you are in the same situation, especially if the db is broken
[16:42] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[16:43] * Geph (~Geoffrey@41.77.153.99) has joined #ceph
[16:47] * cooey (~Teddybare@7V7AAD31A.tor-irc.dnsbl.oftc.net) has joined #ceph
[16:49] <ehall> I am looking to revert as you did, but I don't think I understand how to use ceph-kvstore-tool (and ceph-monstore-tool?) from your mailing-list post to accomplish the set.
[16:51] <hroussea> hardwire: we did it under directions from joao to be sure that we wouldn't trash everything, but your osdmap:1 is strange and something i haven't seen
[16:51] <hroussea> oups
[16:51] <hroussea> I meant ehall
[16:53] * bjornar__ (~bjornar@109.247.131.38) Quit (Ping timeout: 480 seconds)
[16:54] <joao> ah, just replied to ehall's thread on ceph-devel
[16:54] <joao> ehall, please check the email; I'll be happy to help once you manage to gather the info I mentioned :)
[16:55] * huangjun (~kvirc@117.151.52.32) has joined #ceph
[17:02] * angdraug (~angdraug@c-69-181-140-42.hsd1.ca.comcast.net) has joined #ceph
[17:02] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:02] <ehall> Working on info. Will move to ceph-users. Thanks.
[17:03] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) Quit (Quit: Leaving.)
[17:04] * atheism (~atheism@106.38.140.252) Quit (Ping timeout: 480 seconds)
[17:07] <joao> ehall, feel free to ping me here once you got it
[17:07] * huangjun (~kvirc@117.151.52.32) Quit (Ping timeout: 480 seconds)
[17:08] * dgurtner_ (~dgurtner@82.199.64.68) has joined #ceph
[17:09] * tomy2374 (~tomy@106.51.232.198) has joined #ceph
[17:10] * dgurtner (~dgurtner@82.199.64.68) Quit (Ping timeout: 480 seconds)
[17:11] * tomy2374 (~tomy@106.51.232.198) has left #ceph
[17:11] * TMM (~hp@213.197.255.171) Quit (Quit: Ex-Chat)
[17:12] * overclk (~quassel@117.202.103.17) has joined #ceph
[17:12] * tomy (~tomy@106.51.232.198) has joined #ceph
[17:17] * cooey (~Teddybare@7V7AAD31A.tor-irc.dnsbl.oftc.net) Quit ()
[17:17] * Coe|work (~Azerothia@93.171.205.34) has joined #ceph
[17:18] * kefu_ is now known as kefu
[17:24] * tomy (~tomy@106.51.232.198) has left #ceph
[17:25] * kawa2014 (~kawa@194.170.156.187) Quit (Quit: Leaving)
[17:25] * tomy (~tomy@106.51.232.198) has joined #ceph
[17:27] * tomy (~tomy@106.51.232.198) Quit ()
[17:30] * dynamicudpate (~overonthe@199.68.193.54) has joined #ceph
[17:30] <ehall> @joao joao, info posted.
[17:31] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:31] * wjw-freebsd (~wjw@176.74.240.1) has joined #ceph
[17:33] * tomy (~tomy@106.51.232.198) has joined #ceph
[17:34] <joao> ehall, got it
[17:35] <sugoruyo> anyone have any insights on [migration/N] threads stealing CPU while ceph-osd daemons are running on a machine?
[17:37] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[17:38] * logan- (~logan@2607:ff68:100:36::29d) Quit (Ping timeout: 480 seconds)
[17:38] * wjw-freebsd2 (~wjw@176.74.240.1) Quit (Ping timeout: 480 seconds)
[17:39] * logan- (~logan@63.143.60.136) has joined #ceph
[17:40] * Skaag (~lunix@65.200.54.234) has joined #ceph
[17:41] * tomy (~tomy@106.51.232.198) Quit (Read error: Connection reset by peer)
[17:42] * zaitcev (~zaitcev@c-50-130-189-82.hsd1.nm.comcast.net) has joined #ceph
[17:43] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[17:44] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[17:46] * flisky (~Thunderbi@223.104.255.93) has joined #ceph
[17:47] * Coe|work (~Azerothia@4MJAAD6OR.tor-irc.dnsbl.oftc.net) Quit ()
[17:47] * JohnO (~hyst@37.48.109.23) has joined #ceph
[17:47] * compass (~compass@ctlt01-fwsm.net.ubc.ca) Quit (Ping timeout: 480 seconds)
[17:47] * tomy (~tomy@106.51.232.198) has joined #ceph
[17:47] * flisky (~Thunderbi@223.104.255.93) Quit ()
[17:51] * swami1 (~swami@49.32.0.90) Quit (Quit: Leaving.)
[17:51] * wjw-freebsd (~wjw@176.74.240.1) Quit (Ping timeout: 480 seconds)
[17:51] * tomy (~tomy@106.51.232.198) Quit (Read error: Connection reset by peer)
[17:52] * tomy (~Tomy@106.51.232.198) has joined #ceph
[17:53] * EinstCrazy (~EinstCraz@116.225.239.138) Quit (Remote host closed the connection)
[17:54] * tomy (~Tomy@106.51.232.198) Quit (Read error: Connection reset by peer)
[17:54] * shyu_ (~shyu@111.201.76.105) Quit (Remote host closed the connection)
[17:55] * wjw-freebsd (~wjw@176.74.240.1) has joined #ceph
[17:55] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[17:56] * ircolle (~Adium@2601:285:201:2bf9:8483:3587:6860:ca5) has joined #ceph
[17:57] <Axion_Joey> Question guys. I know Ceph is designed/optimized for large deployments, but what is the bottleneck for small deployments? I have Ceph in our lab with only 2 OSD servers, but they're filled with SSDs on a 10G network, and I can't get more than 200MB/s write speed to the cluster.
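A way to tell whether Axion_Joey's 200MB/s ceiling is the cluster or the client path is to benchmark rados directly (pool name and thread count below are illustrative). Note that with 2 hosts and size=2 replication every write lands on both hosts, and with co-located journals each write hits the SSD twice, so raw disk speed is divided down before the network even matters:

```sh
# Write benchmark: 30 seconds, 16 concurrent 4MB ops, pool "testpool"
rados bench -p testpool 30 write -t 16 --no-cleanup

# Follow with a sequential read of the same objects
rados bench -p testpool 30 seq -t 16

# Remove the benchmark objects afterwards
rados -p testpool cleanup
```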
[17:58] * compass (~compass@ctlt01-fwsm.net.ubc.ca) has joined #ceph
[18:00] * rraja (~rraja@121.244.87.117) has joined #ceph
[18:00] * LDA (~lda@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[18:00] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[18:01] * davidzlap (~Adium@2605:e000:1313:8003:f4ed:59c6:2a5f:f77b) Quit (Quit: Leaving.)
[18:02] * bara_ (~bara@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[18:03] * wjw-freebsd (~wjw@176.74.240.1) Quit (Ping timeout: 480 seconds)
[18:04] * Geph (~Geoffrey@41.77.153.99) Quit (Ping timeout: 480 seconds)
[18:07] <joao> ehall, replied; let me know if it doesn't work and we can try to do it interactively via irc
[18:08] <joao> also, my apologies to the list for the emails not being hard wrapped at 80 cols, but thunderbird has been mocking me all week and this is just the latest one -_-
[18:12] * davidz (~davidz@2605:e000:1313:8003:c0f3:53d5:831f:be4d) has joined #ceph
[18:14] * bara_ (~bara@213.175.37.12) has joined #ceph
[18:15] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[18:16] * yang_ (~wyang@122.225.69.4) Quit (Quit: This computer has gone to sleep)
[18:17] * JohnO (~hyst@06SAAA6W7.tor-irc.dnsbl.oftc.net) Quit ()
[18:17] * skrblr (~pepzi@207.244.70.35) has joined #ceph
[18:21] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[18:25] * davidz1 (~davidz@2605:e000:1313:8003:c0f3:53d5:831f:be4d) has joined #ceph
[18:25] * davidz (~davidz@2605:e000:1313:8003:c0f3:53d5:831f:be4d) Quit (Read error: Connection reset by peer)
[18:26] <ehall> joao, Thanks. Working through it now.
[18:28] * nhm_ (~nhm@c-50-171-139-246.hsd1.mn.comcast.net) Quit (Quit: Lost terminal)
[18:29] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[18:30] * davidz (~davidz@2605:e000:1313:8003:c0f3:53d5:831f:be4d) has joined #ceph
[18:30] * davidz1 (~davidz@2605:e000:1313:8003:c0f3:53d5:831f:be4d) Quit (Read error: Connection reset by peer)
[18:30] * pabluk_ is now known as pabluk__
[18:33] * branto (~branto@178-253-128-175.3pp.slovanet.sk) Quit (Quit: Leaving.)
[18:35] * kefu is now known as kefu|afk
[18:36] <ehall> joao, this worked on mon3 isolated. Isolate mon2 and do sets there, then mon1 (with diff values) isolated. Assuming they are all happy with themselves, inject a map with all 3 and cross fingers?
[18:43] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[18:47] * skrblr (~pepzi@6AGAAAS09.tor-irc.dnsbl.oftc.net) Quit ()
[18:47] * starcoder (~AluAlu@85.93.218.204) has joined #ceph
[18:47] * bene2 (~bene@2601:18c:8501:25e4:ea2a:eaff:fe08:3c7a) has joined #ceph
[18:49] * kefu|afk is now known as kefu
[18:51] * DanFoster (~Daniel@office.34sp.com) Quit (Quit: Leaving)
[18:51] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:51] * bene2 (~bene@2601:18c:8501:25e4:ea2a:eaff:fe08:3c7a) Quit ()
[18:53] * mykola (~Mikolaj@91.245.72.200) has joined #ceph
[18:57] * sjackson (~sjackson@207.111.246.196) has joined #ceph
[19:02] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) Quit (Quit: Leaving.)
[19:03] <joao> ehall, have you checked mon1's osdmap:full_latest?
[19:03] * bniver (~bniver@nat-pool-bos-u.redhat.com) has joined #ceph
[19:04] <ehall> checked and set to latest_full on mon1, yes.
[19:04] * bvi (~bastiaan@185.56.32.1) Quit (Quit: Ex-Chat)
[19:04] <ehall> joao, checked and set to latest_full on mon1, yes.
[19:04] <ehall> joao, all mons appear happy. starting osds now.
[19:04] <joao> alright
[19:05] * rakeshgm (~rakesh@106.51.225.4) has joined #ceph
[19:05] <joao> I'm going to step away for a few hours - be back after dinner
[19:07] * todin (tuxadero@kudu.in-berlin.de) Quit (Remote host closed the connection)
[19:07] <cholcombe> is it possible to view the history of a particular placement group?
[19:08] <gregsfortytwo1> cholcombe: what history?
[19:08] <gregsfortytwo1> there's nothing super explicit, but you can do stuff like dump pgmaps and look at which OSDs it's mapped to
[19:08] <ehall> joao, hmmm... mon1 and mon3 died, leaving mon2
[19:08] <cholcombe> gregsfortytwo1, that's a good question. i'm curious if you can see a list of events that happened to that PG
[19:08] <cholcombe> gregsfortytwo1, ok that's what i thought
[19:09] <gregsfortytwo1> yeah, PGs don't really have events except for OSDs going away
[19:09] <gregsfortytwo1> you can also get some of that with pg query, but I'm not sure how much
[19:09] <cholcombe> yeah it has some basic high level things
[19:10] * chasmo77 (~chas77@158.183-62-69.ftth.swbr.surewest.net) has joined #ceph
[19:11] * rraja (~rraja@121.244.87.117) Quit (Quit: Leaving)
[19:12] * absbang (~absbang@182.178.197.40) has joined #ceph
[19:12] * dgurtner (~dgurtner@87.215.61.26) has joined #ceph
[19:14] * absbang (~absbang@182.178.197.40) Quit ()
[19:14] * dgurtner_ (~dgurtner@82.199.64.68) Quit (Ping timeout: 480 seconds)
[19:15] * thomnico (~thomnico@cable-46.253.163.149.coditel.net) has joined #ceph
[19:15] * absbang (absbang@182.178.197.40) has joined #ceph
[19:17] * starcoder (~AluAlu@6AGAAAS2M.tor-irc.dnsbl.oftc.net) Quit ()
[19:17] * KeeperOfTheSoul (~OODavo@torsrva.snydernet.net) has joined #ceph
[19:17] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) Quit (Quit: Leaving)
[19:18] <sjackson> I have a ceph cluster that changed its FSID (https://www.mail-archive.com/ceph-users@lists.ceph.com/msg26329.html). When I try to add a monitor now, it doesn't enter the cluster. In the ceph-mon.xyz.log on the host I tried to add, I get the following error:
[19:18] <sjackson> cephx: verify_reply couldn't decrypt with error: error decoding block for decryption 0 -- 10.5.68.217:6789/0 >> 10.5.68.65:6789/0 pipe(0x3e71000 sd=13 :34392 s=1 pgs=0 cs=0 l=0 c=0x3cbc3c0).failed verifying authorize reply
[19:19] <sjackson> this cluster is running our production openstack vm storage
[19:22] * dgurtner (~dgurtner@87.215.61.26) Quit (Ping timeout: 480 seconds)
[19:24] * kefu (~kefu@114.92.120.83) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[19:25] * kefu (~kefu@211.22.145.245) has joined #ceph
[19:25] <Heebie> Is there a way to tell a ceph cluster to limit the amount of traffic to use for rebuilding data? (so as not to "choke out" live traffic.. such as limiting it to 75% of available network bandwidth)
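On Heebie's question: Ceph has no bandwidth-percentage cap, but recovery and backfill can be throttled by concurrency and priority, which achieves a similar effect. Commonly used settings, injectable at runtime (values are illustrative):

```sh
# Throttle recovery: one backfill per OSD, one active recovery op,
# and lowest recovery priority relative to client I/O
ceph tell osd.* injectargs \
    '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'
```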
[19:26] <sjackson> I also just noticed that the monitor I just added is set to a new (3rd) FSID
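For the fsid mismatch sjackson describes, the first step is usually to confirm what each daemon actually believes the fsid to be; a sketch (the mon id "xyz" and paths are placeholders):

```sh
# fsid according to the cluster quorum
ceph fsid

# fsid a specific running monitor was started with
ceph --admin-daemon /var/run/ceph/ceph-mon.xyz.asok config get fsid

# fsid embedded in the monmap of the new monitor's data dir
ceph-mon -i xyz --extract-monmap /tmp/monmap
monmaptool --print /tmp/monmap | grep fsid
```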
[19:27] <cholcombe> gregsfortytwo1, so where is the authoritative history stored? It doesn't seem to be in the pglog's
[19:27] <gregsfortytwo1> PGs don't have authoritative history independently
[19:27] <gregsfortytwo1> it's reconstructed from the OSD maps
[19:28] <cholcombe> interesting
[19:28] <cholcombe> hmm ok
[19:31] <gregsfortytwo1> there's an OSD function
[19:31] * gregsfortytwo1 moves channels
[19:31] * s3an2 (~root@korn.s3an.me.uk) Quit (Quit: leaving)
[19:32] * s3an2 (~root@korn.s3an.me.uk) has joined #ceph
[19:33] * overclk (~quassel@117.202.103.17) Quit (Remote host closed the connection)
[19:33] * kefu (~kefu@211.22.145.245) Quit (Ping timeout: 480 seconds)
[19:40] * angdraug (~angdraug@c-69-181-140-42.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[19:40] * swami1 (~swami@27.7.172.119) has joined #ceph
[19:46] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) has joined #ceph
[19:47] * vbellur (~vijay@71.234.224.255) Quit (Ping timeout: 480 seconds)
[19:47] * KeeperOfTheSoul (~OODavo@4MJAAD6SC.tor-irc.dnsbl.oftc.net) Quit ()
[19:47] * uhtr5r (~Sophie@76GAAEFBZ.tor-irc.dnsbl.oftc.net) has joined #ceph
[19:49] * bara_ (~bara@213.175.37.12) Quit (Quit: Bye guys! (??????????????????? ?????????)
[19:52] * derjohn_mobi (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[20:01] * thomnico (~thomnico@cable-46.253.163.149.coditel.net) Quit (Ping timeout: 480 seconds)
[20:02] * vbellur (~vijay@nat-pool-bos-t.redhat.com) has joined #ceph
[20:04] * wwdillingham (~LobsterRo@140.247.242.44) has joined #ceph
[20:06] <wwdillingham> quick question about permissions: I want a particular user to be able to read and make snapshots of rbd devices in a pool but not write to objects. Is it possible to separate the two? (perhaps this is what x or class-write is for?)
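The cephx cap grammar does distinguish raw object writes from object-class method calls, which sounds like what wwdillingham is after. A hedged sketch (the client name and pool are placeholders, and whether rbd snapshot creation really needs only class-write should be tested, since rbd updates the image header through class methods):

```sh
# Allow reads and object-class calls (read and write variants),
# but grant no raw "w" capability on the pool
ceph auth caps client.snapper \
    mon 'allow r' \
    osd 'allow r class-read class-write pool=rbd'
```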
[20:08] * askb_ (~askb@117.208.164.87) Quit (Quit: Leaving)
[20:09] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[20:11] * olid19811117 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) has joined #ceph
[20:13] * thomnico (~thomnico@cable-46.253.163.149.coditel.net) has joined #ceph
[20:14] * thomnico (~thomnico@cable-46.253.163.149.coditel.net) Quit (Read error: Connection reset by peer)
[20:17] * uhtr5r (~Sophie@76GAAEFBZ.tor-irc.dnsbl.oftc.net) Quit ()
[20:17] * tunaaja (~vend3r@tor-exit1-readme.dfri.se) has joined #ceph
[20:20] * absbang (absbang@182.178.197.40) Quit (Ping timeout: 480 seconds)
[20:20] * angdraug (~angdraug@64.124.158.100) has joined #ceph
[20:21] * absbang (~absbang@119.63.142.1) has joined #ceph
[20:22] * olid19811118 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) has joined #ceph
[20:22] * olid19811117 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) Quit (Read error: Connection reset by peer)
[20:23] * bdx (~jbeedy@70.96.128.243) has joined #ceph
[20:23] <bdx> hello everyone
[20:23] <bdx> how can I increase the account quotas referenced here -> http://docs.openstack.org/liberty/config-reference/content/object-storage-account-quotas.html
[20:23] <bdx> ?
[20:23] <bdx> when using radosgw for object storage
[20:24] <theanalyst> bdx, you should prolly try radosgw-admin quota commands http://docs.ceph.com/docs/master/radosgw/admin/#quota-management
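The radosgw-admin quota commands referenced look roughly like the following sketch (the uid is hypothetical; -1 means unlimited):

```shell
# Inspect a user's current quota state:
radosgw-admin user info --uid=backupuser

# Set the user-scope quota to unlimited, then make sure it is not enforced:
radosgw-admin quota set --quota-scope=user --uid=backupuser --max-size=-1 --max-objects=-1
radosgw-admin quota disable --quota-scope=user --uid=backupuser
```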
[20:25] <theanalyst> bdx, not sure whether the swift account quota stuff is supported in rgw swift apis..
[20:26] <bdx> theanalyst: yea
[20:26] <bdx> theanalyst: the default account quota is 30GB, and is not set at the ceph level
[20:27] <laevar> hi, we have an unfound object, the object "rbd_directory", which means that rbd list (and other) commands don't work. Is it safe to mark it lost? will it be regenerated, or can we regenerate it?
[20:27] <bdx> theanalyst: this quota -> http://docs.openstack.org/liberty/config-reference/content/object-storage-account-quotas.html
[20:28] <bdx> theanalyst: not the ceph level quota
[20:29] <bdx> object storage account quotas are set at 30GB by default .... if the radosgw api doesn't support modifying this attribute, then we have a hard capacity limit of 30GB
[20:29] * mhackett (~mhack@nat-pool-bos-u.redhat.com) has joined #ceph
[20:29] * mhackett is now known as mhack|mtg
[20:30] <bdx> this is currently a huge issue, and is blocking me, as I am trying to make basebackups of a database > 30GB .....
[20:30] * olid19811118 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) Quit (Read error: Connection reset by peer)
[20:30] * olid19811118 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) has joined #ceph
[20:31] <theanalyst> bdx, see what the rgw-quota commands for the particular swift tenant says.. if it is not set to anything.. then issue may be somewhere else..
[20:32] <theanalyst> bdx, also i guess you may need to do multipart uploads for files of that size
[20:33] <sjackson> my cluster is now down. Looks like no monitors are active. I'm getting this in the ceph-mon log: http://pastebin.com/Ncy8W3AZ. Can anyone help?
[20:33] <bdx> theanalyst: yea, I have set rgw-quotas to unlimited
[20:33] <bdx> for all users and buckets
[20:33] <theanalyst> bdx, I guess you need multipart uploads, swift -S or something, for the upload?
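The swift client flag alluded to is presumably -S (--segment-size), which splits a large upload into segments. A sketch with hypothetical container/file names and a 1 GiB segment size:

```shell
# Upload a large file in 1 GiB segments (names are examples only):
swift upload backups basebackup.tar -S 1073741824
```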
[20:34] <bdx> theanalyst: I'm using wal-e to make my basebackups .... it automatically uses multipart
[20:35] * shohn (~shohn@dslb-178-005-158-247.178.005.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[20:35] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[20:36] <bdx> theanalyst: ok, wal-e isn't using .... multipart
[20:36] <bdx> theanalyst: I can upload (basebackup) until I reach ~30GB .....
[20:37] <s3an2> sjackson, not seen your error before 'e9 handle_probe ignoring fsid 6870cff2-6cbc-4e99-8615-c159ba3a0546 != e238f5b3-7d67-4b55-8563-52008828db51' do you just have 1 mon?
[20:37] * absbang (~absbang@119.63.142.1) Quit (Ping timeout: 480 seconds)
[20:38] <bdx> theanalyst: I am honestly so stumped by this
[20:38] * joshd1 (~jdurgin@71-92-201-212.dhcp.gldl.ca.charter.com) has joined #ceph
[20:38] <bdx> I've been trying to figure it out for a while now ... where should I file a bug on this?
[20:39] <theanalyst> bdx, sorry replied in other channel, wrong key combo :)
[20:39] * yguang11 (~yguang11@2001:4998:effd:600:3d81:b37:eb28:f72d) has joined #ceph
[20:40] * swami1 (~swami@27.7.172.119) Quit (Ping timeout: 480 seconds)
[20:40] * olid19811118 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) Quit (Read error: Connection reset by peer)
[20:40] * olid19811118 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) has joined #ceph
[20:41] * bjornar__ (~bjornar@ti0099a430-1561.bb.online.no) has joined #ceph
[20:47] * tunaaja (~vend3r@76GAAEFDW.tor-irc.dnsbl.oftc.net) Quit ()
[20:47] * qable (~Silentkil@hessel2.torservers.net) has joined #ceph
[20:49] * olid19811119 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) has joined #ceph
[20:49] * olid19811118 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) Quit (Read error: Connection reset by peer)
[20:51] * c0dice (~toodles@75-128-34-237.static.mtpk.ca.charter.com) Quit (Quit: leaving)
[20:52] * codice (~toodles@75-128-34-237.static.mtpk.ca.charter.com) has joined #ceph
[20:53] * codice (~toodles@75-128-34-237.static.mtpk.ca.charter.com) Quit ()
[20:53] * codice (~toodles@75-128-34-237.static.mtpk.ca.charter.com) has joined #ceph
[20:57] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[20:57] * dyasny (~dyasny@46-117-8-108.bb.netvision.net.il) Quit (Ping timeout: 480 seconds)
[20:58] * olid198111110 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) has joined #ceph
[20:58] * olid19811119 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) Quit (Read error: Connection reset by peer)
[21:02] * bniver (~bniver@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[21:02] * mhack|mtg (~mhack@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[21:06] * bene2 (~bene@2601:18c:8501:25e4:ea2a:eaff:fe08:3c7a) has joined #ceph
[21:07] * olid198111111 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) has joined #ceph
[21:07] * olid198111110 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) Quit (Read error: Connection reset by peer)
[21:14] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[21:17] * qable (~Silentkil@06SAAA61S.tor-irc.dnsbl.oftc.net) Quit ()
[21:17] * pakman__ (~neobenedi@109.163.234.7) has joined #ceph
[21:23] <med> maybe better here...
[21:24] <med> Proxy Question: Do you have any information on what kind of latency an S3 or Swift proxy in front of Ceph causes? Know anyone who would know?
[21:25] * olid198111112 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) has joined #ceph
[21:25] * olid198111111 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) Quit (Read error: Connection reset by peer)
[21:31] * ade (~abradshaw@dslb-188-106-099-129.188.106.pools.vodafone-ip.de) Quit (Quit: Too sexy for his shirt)
[21:34] * olid198111113 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) has joined #ceph
[21:34] * olid198111112 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) Quit (Read error: Connection reset by peer)
[21:40] <neurodrone_> Anyone running Jewel on dmcrypt-ed disks?
[21:40] <neurodrone_> I am seeing permission issues when the new ceph user is given the task of setting up mounts.
[21:43] * olid198111114 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) has joined #ceph
[21:43] * olid198111113 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) Quit (Read error: Connection reset by peer)
[21:46] * i_m (~ivan.miro@deibp9eh1--blueice2n3.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[21:47] * pakman__ (~neobenedi@6AGAAAS83.tor-irc.dnsbl.oftc.net) Quit ()
[21:51] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:bd89:46d9:72e3:18ce) Quit (Ping timeout: 480 seconds)
[21:52] * olid198111115 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) has joined #ceph
[21:52] * olid198111114 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) Quit (Read error: Connection reset by peer)
[21:53] * davidz (~davidz@2605:e000:1313:8003:c0f3:53d5:831f:be4d) Quit (Ping timeout: 480 seconds)
[21:53] * davidz (~davidz@2605:e000:1313:8003:1d34:bd8d:a1bf:be6a) has joined #ceph
[21:55] * rendar (~I@host1-179-dynamic.47-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[21:58] * rendar (~I@host1-179-dynamic.47-79-r.retail.telecomitalia.it) has joined #ceph
[22:01] * olid198111115 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) Quit (Read error: Connection reset by peer)
[22:01] * olid198111115 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) has joined #ceph
[22:02] * vbellur (~vijay@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[22:08] * mykola (~Mikolaj@91.245.72.200) Quit (Quit: away)
[22:10] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[22:11] * olid198111116 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) has joined #ceph
[22:11] * olid198111115 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) Quit (Read error: Connection reset by peer)
[22:11] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[22:14] * laevar (~jschulz1@134.76.80.11) Quit (Ping timeout: 480 seconds)
[22:19] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[22:19] * olid198111116 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) Quit (Read error: Connection reset by peer)
[22:20] * olid198111116 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) has joined #ceph
[22:21] * capitalthree (~Jamana@politkovskaja.torservers.net) has joined #ceph
[22:24] * rwheeler (~rwheeler@pool-173-48-195-215.bstnma.fios.verizon.net) has joined #ceph
[22:25] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) Quit (Quit: WeeChat 1.4)
[22:28] * olid198111117 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) has joined #ceph
[22:28] * olid198111116 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) Quit (Read error: Connection reset by peer)
[22:29] * mortn (~mortn@217-215-219-69-no229.tbcn.telia.com) Quit (Quit: Leaving)
[22:33] * bvi (~bastiaan@152-64-132-5.ftth.glasoperator.nl) has joined #ceph
[22:33] * hoonetorg (~hoonetorg@77.119.226.254.static.drei.at) Quit (Ping timeout: 480 seconds)
[22:33] * bvi (~bastiaan@152-64-132-5.ftth.glasoperator.nl) Quit (Remote host closed the connection)
[22:35] * ehall (~oftc-webi@DHCP-129-59-122-56.n1.vanderbilt.edu) has left #ceph
[22:36] <ircolle> https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf
[22:38] * olid198111117 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) Quit (Read error: Connection reset by peer)
[22:38] * olid198111117 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) has joined #ceph
[22:43] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[22:43] * hoonetorg (~hoonetorg@77.119.226.254.static.drei.at) has joined #ceph
[22:47] * olid198111118 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) has joined #ceph
[22:47] * olid198111117 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) Quit (Read error: Connection reset by peer)
[22:48] * Bartek (~Bartek@dynamic-78-9-153-220.ssp.dialog.net.pl) has joined #ceph
[22:50] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[22:51] * capitalthree (~Jamana@6AGAAATBU.tor-irc.dnsbl.oftc.net) Quit ()
[22:51] * Dragonshadow (~offender@6AGAAATDA.tor-irc.dnsbl.oftc.net) has joined #ceph
[22:52] * bene2 (~bene@2601:18c:8501:25e4:ea2a:eaff:fe08:3c7a) Quit (Quit: Konversation terminated!)
[22:53] * evelu (~erwan@37.161.190.25) Quit (Ping timeout: 480 seconds)
[22:55] * wwdillingham (~LobsterRo@140.247.242.44) Quit (Quit: wwdillingham)
[22:56] * olid198111119 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) has joined #ceph
[22:56] * olid198111118 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) Quit (Read error: Connection reset by peer)
[23:01] * LDA (~lda@host217-114-156-249.pppoe.mark-itt.net) Quit (Quit: LDA)
[23:05] <post-factum> is there some way to reparent an image onto a new parent? let's say I have image1 and image2 created by copying, and now I want image1 to be the parent of image2 to save some space
[23:05] * olid198111119 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) Quit (Read error: Connection reset by peer)
[23:05] * olid198111119 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) has joined #ceph
[23:14] * olid198111119 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) Quit (Read error: Connection reset by peer)
[23:14] * derjohn_mobi (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[23:15] * olid198111119 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) has joined #ceph
[23:21] * Dragonshadow (~offender@6AGAAATDA.tor-irc.dnsbl.oftc.net) Quit ()
[23:21] * Aal (~Inverness@chomsky.torservers.net) has joined #ceph
[23:23] * olid1981111110 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) has joined #ceph
[23:23] * olid198111119 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) Quit (Read error: Connection reset by peer)
[23:26] <sjackson> does anyone know if it's possible to recover from a situation where all monitors are down and broken? is it possible to create a new monitor?
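If at least one monitor's store is intact, the usual recovery path is to shrink the monmap down to the surviving monitor rather than create a brand-new one. A sketch with hypothetical mon IDs — back up the mon stores first, and note this does not help if every store is corrupt:

```shell
# With all monitors stopped, extract the monmap from a surviving store:
ceph-mon -i mon-a --extract-monmap /tmp/monmap

# Drop the broken monitors from the map:
monmaptool /tmp/monmap --rm mon-b --rm mon-c

# Inject the trimmed map back and start only the survivor:
ceph-mon -i mon-a --inject-monmap /tmp/monmap
```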
[23:29] * AntonE (~AntonE@196.34.18.253) has joined #ceph
[23:32] * c_soukup (~csoukup@159.140.254.106) Quit (Ping timeout: 480 seconds)
[23:32] * olid1981111111 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) has joined #ceph
[23:32] <AntonE> Hi all, I have OSDs with corrupt journals in Infernalis. I want to export the relevant PGs with ceph-objectstore-tool but keep getting errors. I have 6 OSDs down and tried to create a new journal as per http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-April/000238.html and/or http://www.sebastien-han.fr/blog/2014/11/27/ceph-recover-osds-after-ssd-journal-failure/
[23:32] * olid1981111110 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) Quit (Read error: Connection reset by peer)
[23:33] <qman> from my understanding you need at least one good monitor to recover
[23:34] * rendar (~I@host1-179-dynamic.47-79-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[23:35] <AntonE> I tried ceph-objectstore-tool with --skip-journal-replay --skip-mount-omap but still get errors, and get similar errors if I do "ceph-osd -i 4 --flush-journal" on the old or new journal. (I had to sort out the partuuids and chown for Ceph to recognize it, but that seems fine now)
[23:36] <AntonE> Is there a method to get a corrupted OSD with journal (on SSD) to at least export its objects with ceph-objectstore-tool in Infernalis? Or some way to get data off OSDs that do not want to come 'up' without rebuilding RBD images from raw objects?
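For the export question: ceph-objectstore-tool works per PG, so the usual pattern is to export each PG from the down OSD and import it into a healthy one. A sketch — the paths, OSD ids and pgid are hypothetical, and whether it succeeds against a corrupt journal depends on --skip-journal-replay behaving:

```shell
# Export one PG from the down OSD's filestore:
ceph-objectstore-tool \
    --data-path /var/lib/ceph/osd/ceph-4 \
    --journal-path /var/lib/ceph/osd/ceph-4/journal \
    --op export --pgid 1.2f --file /root/1.2f.export

# Later, import it into a healthy (stopped) OSD:
ceph-objectstore-tool \
    --data-path /var/lib/ceph/osd/ceph-7 \
    --journal-path /var/lib/ceph/osd/ceph-7/journal \
    --op import --file /root/1.2f.export
```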
[23:39] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:41] * olid1981111112 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) has joined #ceph
[23:41] * olid1981111111 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) Quit (Read error: Connection reset by peer)
[23:42] * haplo37 (~haplo37@199.91.185.156) Quit (Remote host closed the connection)
[23:47] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[23:48] * lookcrabs (~lookcrabs@tail.seeee.us) Quit (Remote host closed the connection)
[23:48] * Bartek (~Bartek@dynamic-78-9-153-220.ssp.dialog.net.pl) Quit (Ping timeout: 480 seconds)
[23:50] * olid1981111113 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) has joined #ceph
[23:50] * olid1981111112 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) Quit (Read error: Connection reset by peer)
[23:51] * Aal (~Inverness@4MJAAD6YS.tor-irc.dnsbl.oftc.net) Quit ()
[23:51] * dkrdkr (uid110802@id-110802.tooting.irccloud.com) has joined #ceph
[23:53] * rotbeard (~redbeard@2a02:908:df18:b980:6267:20ff:feb7:c20) has joined #ceph
[23:58] * yguang11 (~yguang11@2001:4998:effd:600:3d81:b37:eb28:f72d) Quit (Remote host closed the connection)
[23:59] * olid1981111114 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) has joined #ceph
[23:59] * olid1981111113 (~olid1982@aftr-185-17-206-203.dynamic.mnet-online.de) Quit (Read error: Connection reset by peer)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.