#ceph IRC Log

IRC Log for 2016-05-30

Timestamps are in GMT/BST.

[0:00] * rendar (~I@95.239.176.239) has joined #ceph
[0:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[0:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[0:03] * jermudgeon (~jhaustin@southend.mdu.whitestone.link) Quit (Remote host closed the connection)
[0:04] * jermudgeon (~jhaustin@southend.mdu.whitestone.link) has joined #ceph
[0:11] * ieth0 (~ieth0@m83-185-84-51.cust.tele2.se) has joined #ceph
[0:24] * djidis__ (~Frostshif@7V7AAFFC4.tor-irc.dnsbl.oftc.net) Quit ()
[0:24] * Silentspy (~Kyso@static-ip-85-25-103-119.inaddr.ip-pool.com) has joined #ceph
[0:27] * raeven (~raeven@h89n10-oes-a31.ias.bredband.telia.com) has joined #ceph
[0:29] * m0zes (~mozes@ns1.beocat.ksu.edu) Quit (Ping timeout: 480 seconds)
[0:30] * jordanP (~jordan@pas38-2-82-67-72-49.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[0:30] * MentalRay (~MentalRay@107.171.161.165) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[0:31] <raeven> Is it a good idea to automate adding osds to the cluster when i drive is added to a ceph node?
[0:32] * ieth0 (~ieth0@m83-185-84-51.cust.tele2.se) Quit (Read error: Connection reset by peer)
[0:32] * ieth0 (~ieth0@user232.77-105-223.netatonce.net) has joined #ceph
[0:33] * m0zes (~mozes@ns1.beocat.ksu.edu) has joined #ceph
[0:38] * jordanP (~jordan@pas38-2-82-67-72-49.fbx.proxad.net) has joined #ceph
[0:40] * smokedmeets (~smokedmee@c-73-158-201-226.hsd1.ca.comcast.net) has joined #ceph
[0:43] * jermudgeon (~jhaustin@southend.mdu.whitestone.link) Quit (Quit: jermudgeon)
[0:49] * billwebb (~billwebb@66.56.15.14) has joined #ceph
[0:54] * Silentspy (~Kyso@4MJAAFPQQ.tor-irc.dnsbl.oftc.net) Quit ()
[0:54] * matx (~blip2@5.135.85.23) has joined #ceph
[0:59] * billwebb (~billwebb@66.56.15.14) Quit (Quit: billwebb)
[1:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[1:01] * jordanP (~jordan@pas38-2-82-67-72-49.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[1:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[1:05] * dgurtner (~dgurtner@178.197.239.57) Quit (Ping timeout: 480 seconds)
[1:24] * matx (~blip2@7V7AAFFEZ.tor-irc.dnsbl.oftc.net) Quit ()
[1:24] * Harryhy (~Helleshin@0.tor.exit.babylon.network) has joined #ceph
[1:27] * oms101 (~oms101@p20030057EA639D00C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:29] * MentalRay (~MentalRay@107.171.161.165) has joined #ceph
[1:35] * oms101 (~oms101@p20030057EA4E9700C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:40] * MentalRay (~MentalRay@107.171.161.165) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[1:43] * m0zes (~mozes@ns1.beocat.ksu.edu) Quit (Ping timeout: 480 seconds)
[1:43] * MentalRay (~MentalRay@107.171.161.165) has joined #ceph
[1:46] * m0zes (~mozes@ns1.beocat.ksu.edu) has joined #ceph
[1:50] * rendar (~I@95.239.176.239) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[1:51] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[1:54] * Harryhy (~Helleshin@7V7AAFFFT.tor-irc.dnsbl.oftc.net) Quit ()
[1:54] * Hazmat (~fauxhawk@7V7AAFFG2.tor-irc.dnsbl.oftc.net) has joined #ceph
[1:54] * MentalRay (~MentalRay@107.171.161.165) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[1:58] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[2:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[2:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[2:19] * ieth0 (~ieth0@user232.77-105-223.netatonce.net) Quit (Quit: ieth0)
[2:24] * Hazmat (~fauxhawk@7V7AAFFG2.tor-irc.dnsbl.oftc.net) Quit ()
[2:24] * roaet (~Behedwin@192.42.115.101) has joined #ceph
[2:24] * kmroz (~kilo@00020103.user.oftc.net) has joined #ceph
[2:27] * huangjun (~kvirc@113.57.168.154) has joined #ceph
[2:50] * mhuang (~mhuang@117.114.129.4) has joined #ceph
[2:52] * thansen (~thansen@162.219.43.108) Quit (Ping timeout: 480 seconds)
[2:53] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[2:54] * roaet (~Behedwin@7V7AAFFHW.tor-irc.dnsbl.oftc.net) Quit ()
[2:54] * Xeon06 (~Miho@marylou.nos-oignons.net) has joined #ceph
[2:57] * khyron (~khyron@fixed-190-159-187-190-159-75.iusacell.net) has joined #ceph
[3:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[3:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[3:02] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[3:04] * thansen (~thansen@162.219.43.108) has joined #ceph
[3:05] * khyron (~khyron@fixed-190-159-187-190-159-75.iusacell.net) Quit (Ping timeout: 480 seconds)
[3:06] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[3:15] * vata (~vata@cable-192.222.249.207.electronicbox.net) Quit (Quit: Leaving.)
[3:21] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[3:24] * Xeon06 (~Miho@06SAAC7T5.tor-irc.dnsbl.oftc.net) Quit ()
[3:24] * LRWerewolf (~Kyso_@atlantic480.us.unmetered.com) has joined #ceph
[3:29] * kmroz (~kilo@00020103.user.oftc.net) Quit (Ping timeout: 480 seconds)
[3:31] * kmroz (~kilo@00020103.user.oftc.net) has joined #ceph
[3:44] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[3:44] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[3:48] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[3:48] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[3:54] * LRWerewolf (~Kyso_@7V7AAFFJZ.tor-irc.dnsbl.oftc.net) Quit ()
[3:54] * dicko (~CobraKhan@109.236.90.209) has joined #ceph
[3:59] * m0zes (~mozes@ns1.beocat.ksu.edu) Quit (Ping timeout: 480 seconds)
[3:59] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[4:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[4:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[4:03] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) Quit (Ping timeout: 480 seconds)
[4:06] * m8x (~user@182.150.27.112) has joined #ceph
[4:06] * m8x (~user@182.150.27.112) has left #ceph
[4:07] * flisky (~Thunderbi@36.110.40.22) has joined #ceph
[4:09] * m0zes (~mozes@ns1.beocat.ksu.edu) has joined #ceph
[4:12] * povian (~povian@211.189.163.250) has joined #ceph
[4:17] * zhaochao (~zhaochao@125.39.112.4) has joined #ceph
[4:18] * mhuang (~mhuang@117.114.129.4) Quit (Quit: This computer has gone to sleep)
[4:23] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[4:24] * dicko (~CobraKhan@7V7AAFFK0.tor-irc.dnsbl.oftc.net) Quit ()
[4:24] * valeech (~valeech@pool-108-44-162-111.clppva.fios.verizon.net) has joined #ceph
[4:26] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[4:31] * WhiteBxEng (~whitebxen@ip72-208-208-76.ph.ph.cox.net) Quit (Remote host closed the connection)
[4:32] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) has joined #ceph
[4:33] * flisky (~Thunderbi@246e2816.test.dnsbl.oftc.net) Quit (Quit: flisky)
[4:45] * shyu (~shyu@218.241.172.114) has joined #ceph
[4:50] <geli> What would be the most straight forward way to get cephfs mounted on RHEL6.8 besides upgrading to RHEL7?
[4:51] <via> elrepo kernels
[4:53] * povian_ (~povian@211.189.163.250) has joined #ceph
[4:54] <geli> via: Thanks I'll have a look. Have you used to the kernels from elrepo to do what I want to do?
[4:54] * Tralin|Sleep (~FierceFor@atlantic850.dedicatedpanel.com) has joined #ceph
[4:56] <via> i use kernel-lt on most of my centos6 boxes
[4:56] * valeech (~valeech@pool-108-44-162-111.clppva.fios.verizon.net) Quit (Quit: valeech)
[4:57] <geli> via:I'll have a read, cheers.
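
A minimal sketch of the elrepo route via is describing, assuming a stock CentOS/RHEL 6 box; the release-RPM URL/version and the monitor address and secret path are placeholders, not taken from the conversation:

    # install the elrepo repository and a long-term kernel, then reboot into it
    rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
    rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm
    yum --enablerepo=elrepo-kernel install -y kernel-lt
    reboot

    # with a recent kernel, CephFS can be mounted with the in-kernel client
    mount -t ceph mon1.example.com:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret

ceph-fuse is the usual fallback if replacing the distro kernel is not an option.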
[4:58] * povian__ (~povian@211.189.163.250) has joined #ceph
[4:58] * povian (~povian@211.189.163.250) Quit (Ping timeout: 480 seconds)
[5:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[5:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[5:01] * povian_ (~povian@211.189.163.250) Quit (Ping timeout: 480 seconds)
[5:10] * Vacuum__ (~Vacuum@i59F7938F.versanet.de) has joined #ceph
[5:17] * Vacuum_ (~Vacuum@88.130.198.33) Quit (Ping timeout: 480 seconds)
[5:18] * mhuang (~mhuang@119.254.120.71) has joined #ceph
[5:24] * Tralin|Sleep (~FierceFor@4MJAAFPY8.tor-irc.dnsbl.oftc.net) Quit ()
[5:24] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[5:29] * kefu (~kefu@211.22.145.245) has joined #ceph
[5:37] * Skaag2 (~lunix@cpe-172-91-77-84.socal.res.rr.com) has joined #ceph
[5:42] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[5:45] * Skaag2 (~lunix@cpe-172-91-77-84.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[5:47] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[5:54] * ggg (~Teddybare@Relay-J.tor-exit.network) has joined #ceph
[6:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[6:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[6:03] * gad0lin_ (~oftc-webi@c-98-207-168-70.hsd1.ca.comcast.net) has joined #ceph
[6:05] * prallab (~prallab@106.51.138.197) has joined #ceph
[6:05] <gad0lin_> hey, I have run 3 containers of ceph-docker on the same host in EC2 (each using one EBS volume, /dev/xvdf,g,h); after a month or two one of the OSD containers died and i am not sure how to bring it back. Eventually i restarted all containers, and now none of the 3 wants to come up
[6:06] <gad0lin_> I see following error in the logs: http://pastebin.com/tEFew9in
[6:06] <gad0lin_> HEALTH_WARN 400 pgs degraded; 400 pgs stale; 400 pgs stuck degraded; 400 pgs stuck stale; 400 pgs stuck unclean; 400 pgs stuck undersized; 400 pgs undersized; recovery ...
[6:07] <gad0lin_> is there a way to bring them back?
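
A rough first-pass diagnosis for the situation gad0lin_ describes, assuming shell access to the host; stale PGs mean every OSD that held them is currently down, so the priority is getting the OSD daemons running again rather than touching the PGs. The container name below is a placeholder:

    ceph -s                      # overall cluster state
    ceph health detail | head    # which PGs are stale/undersized and why
    ceph osd tree                # which OSDs are marked down/out
    docker logs ceph-osd-0       # why a given OSD container refuses to start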
[6:07] * prallab_ (~prallab@216.207.42.140) has joined #ceph
[6:08] * gauravbafna (~gauravbaf@122.178.205.57) has joined #ceph
[6:08] * povian (~povian@211.189.163.250) has joined #ceph
[6:13] * povian__ (~povian@211.189.163.250) Quit (Ping timeout: 480 seconds)
[6:14] * prallab (~prallab@106.51.138.197) Quit (Ping timeout: 480 seconds)
[6:16] * gauravbafna (~gauravbaf@122.178.205.57) Quit (Ping timeout: 480 seconds)
[6:18] * prallab_ (~prallab@216.207.42.140) Quit (Ping timeout: 480 seconds)
[6:24] * ggg (~Teddybare@4MJAAFP00.tor-irc.dnsbl.oftc.net) Quit ()
[6:31] * MentalRay (~MentalRay@107.171.161.165) has joined #ceph
[6:32] * zhaochao_ (~zhaochao@125.39.9.148) has joined #ceph
[6:35] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Quit: Leaving)
[6:35] * thansen (~thansen@162.219.43.108) Quit (Ping timeout: 480 seconds)
[6:35] * zhaochao (~zhaochao@125.39.112.4) Quit (Read error: Connection timed out)
[6:36] * zhaochao_ is now known as zhaochao
[6:36] * rdas (~rdas@121.244.87.116) has joined #ceph
[6:36] * prallab (~prallab@216.207.42.140) has joined #ceph
[6:39] * deepthi (~deepthi@122.172.67.166) has joined #ceph
[6:39] * mhuang_ (~mhuang@119.254.120.72) has joined #ceph
[6:39] * shyu (~shyu@218.241.172.114) Quit (Ping timeout: 480 seconds)
[6:41] * overclk (~quassel@117.202.96.183) has joined #ceph
[6:44] * mhuang (~mhuang@119.254.120.71) Quit (Ping timeout: 480 seconds)
[6:46] * thansen (~thansen@162.219.43.108) has joined #ceph
[6:46] * prallab (~prallab@216.207.42.140) Quit (Ping timeout: 480 seconds)
[6:50] * mhuang (~mhuang@119.254.120.72) has joined #ceph
[6:51] * MentalRay (~MentalRay@107.171.161.165) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:51] * MentalRay (~MentalRay@107.171.161.165) has joined #ceph
[6:52] * MentalRay (~MentalRay@107.171.161.165) Quit ()
[6:52] * gad0lin_ (~oftc-webi@c-98-207-168-70.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[6:53] * kefu (~kefu@211.22.145.245) Quit (Max SendQ exceeded)
[6:53] * mhuang_ (~mhuang@119.254.120.72) Quit (Ping timeout: 480 seconds)
[6:54] * Pieman (~mason@62-210-37-82.rev.poneytelecom.eu) has joined #ceph
[6:54] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[6:58] * kefu (~kefu@114.92.122.74) has joined #ceph
[6:58] * khyron (~khyron@fixed-190-159-187-190-159-75.iusacell.net) has joined #ceph
[7:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[7:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[7:19] * kefu is now known as kefu|afk
[7:20] * shylesh (~shylesh@45.124.227.43) has joined #ceph
[7:22] * thansen (~thansen@162.219.43.108) Quit (Ping timeout: 480 seconds)
[7:24] * Pieman (~mason@4MJAAFP2T.tor-irc.dnsbl.oftc.net) Quit ()
[7:27] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[7:36] * gauravbafna (~gauravbaf@122.172.225.118) has joined #ceph
[7:40] * gauravbafna (~gauravbaf@122.172.225.118) Quit (Remote host closed the connection)
[7:45] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[7:54] * AG_Clinton (~K3NT1S_aw@daskapital.tor-exit.network) has joined #ceph
[7:56] * gauravbafna (~gauravbaf@122.172.225.118) has joined #ceph
[8:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[8:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[8:01] * dgurtner (~dgurtner@178.197.225.26) has joined #ceph
[8:10] * epicguy (~epicguy@41.164.8.42) has joined #ceph
[8:16] * gauravbafna (~gauravbaf@122.172.225.118) Quit (Ping timeout: 480 seconds)
[8:16] * Lokta (~Lokta@carbon.coe.int) has joined #ceph
[8:24] * AG_Clinton (~K3NT1S_aw@4MJAAFP48.tor-irc.dnsbl.oftc.net) Quit ()
[8:24] * Cue (~Ralth@ns316491.ip-37-187-129.eu) has joined #ceph
[8:28] * karnan (~karnan@121.244.87.117) has joined #ceph
[8:29] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[8:30] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[8:32] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) has joined #ceph
[8:38] * Kurt (~Adium@2001:628:1:5:185a:b838:e0c2:6177) has joined #ceph
[8:43] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[8:49] * fsimonce (~simon@host128-29-dynamic.250-95-r.retail.telecomitalia.it) has joined #ceph
[8:53] * rraja (~rraja@121.244.87.117) has joined #ceph
[8:54] * Cue (~Ralth@7V7AAFFU5.tor-irc.dnsbl.oftc.net) Quit ()
[8:54] * Bwana (~Miho@4MJAAFP7I.tor-irc.dnsbl.oftc.net) has joined #ceph
[9:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[9:01] * T1w (~jens@217.195.184.71) has joined #ceph
[9:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[9:04] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[9:09] * dvanders (~dvanders@dvanders-pro.cern.ch) has joined #ceph
[9:10] * linjan_ (~linjan@86.62.112.22) has joined #ceph
[9:11] * derjohn_mob (~aj@185.65.67.249) Quit (Ping timeout: 480 seconds)
[9:13] * gauravbafna (~gauravbaf@122.172.239.49) has joined #ceph
[9:15] * ade (~abradshaw@dslb-092-078-141-047.092.078.pools.vodafone-ip.de) has joined #ceph
[9:16] * nagyz (~textual@84-75-164-108.dclient.hispeed.ch) has joined #ceph
[9:20] * derjohn_mob (~aj@x4db29e19.dyn.telefonica.de) has joined #ceph
[9:21] * dgurtner (~dgurtner@178.197.225.26) Quit (Read error: Connection reset by peer)
[9:22] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[9:22] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) has joined #ceph
[9:23] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) Quit (Remote host closed the connection)
[9:24] * Bwana (~Miho@4MJAAFP7I.tor-irc.dnsbl.oftc.net) Quit ()
[9:24] * jakekosberg (~thundercl@ded31663.iceservers.net) has joined #ceph
[9:24] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) has joined #ceph
[9:25] * hommie (~hommie@hosd.leaseweb.net) has joined #ceph
[9:30] * jordanP (~jordan@pas38-2-82-67-72-49.fbx.proxad.net) has joined #ceph
[9:30] <hommie> (to all rgw experts): what are the consequences of removing a key (reference for an object) from the .rgw.buckets.index omap? will the object be purged at some point or will it be "forever lost" in time-space?
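
For context on hommie's question: removing an entry from the bucket index omap only removes the listing; to my knowledge the data objects in the .rgw.buckets pool are not garbage-collected by that alone, so they would remain as orphans unless the index is rebuilt or the objects are removed directly. A hedged sketch of the usual inspection and repair tools (bucket name and index object name are placeholders):

    # compare the index against the actual objects and optionally rebuild it
    radosgw-admin bucket check --bucket=mybucket --check-objects --fix

    # inspect the raw omap of a bucket index object directly
    rados -p .rgw.buckets.index listomapkeys .dir.<bucket_id>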
[9:33] * povian_ (~povian@211.189.163.250) has joined #ceph
[9:36] * vikhyat is now known as vikhyat|food
[9:38] * povian (~povian@211.189.163.250) Quit (Ping timeout: 480 seconds)
[9:49] * b0e (~aledermue@213.95.25.82) has joined #ceph
[9:53] * rendar (~I@host183-178-dynamic.18-79-r.retail.telecomitalia.it) has joined #ceph
[9:53] * thansen (~thansen@162.219.43.108) has joined #ceph
[9:54] * jakekosberg (~thundercl@4MJAAFP8O.tor-irc.dnsbl.oftc.net) Quit ()
[9:54] * danielsj (~w0lfeh@4MJAAFP9U.tor-irc.dnsbl.oftc.net) has joined #ceph
[9:54] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[9:56] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[9:58] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[10:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[10:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[10:01] * T1 (~the_one@87.104.212.66) Quit (Read error: Connection reset by peer)
[10:02] * T1 (~the_one@87.104.212.66) has joined #ceph
[10:02] * monsted (~monsted@rootweiler.dk) Quit (Read error: Connection reset by peer)
[10:02] * monsted (~monsted@rootweiler.dk) has joined #ceph
[10:04] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[10:08] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[10:10] * kawa2014 (~kawa@212.110.41.244) has joined #ceph
[10:18] * sickology (~mio@vpn.bcs.hr) Quit (Read error: Connection reset by peer)
[10:19] * sickology (~mio@vpn.bcs.hr) has joined #ceph
[10:24] * danielsj (~w0lfeh@4MJAAFP9U.tor-irc.dnsbl.oftc.net) Quit ()
[10:24] * yanzheng (~zhyan@118.116.112.223) has joined #ceph
[10:28] * vikhyat|food is now known as vikhyat
[10:31] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[10:31] * evelu (~erwan@46.231.131.178) has joined #ceph
[10:31] * epicguy (~epicguy@41.164.8.42) Quit (Read error: Connection reset by peer)
[10:34] * epicguy (~epicguy@41.164.8.42) has joined #ceph
[10:36] * raeven_ (~raeven@h89n10-oes-a31.ias.bredband.telia.com) has joined #ceph
[10:38] * kawa2014 (~kawa@212.110.41.244) Quit (Ping timeout: 480 seconds)
[10:40] * Xroot (~root@103.63.159.114) has joined #ceph
[10:42] * Xroot (~root@103.63.159.114) Quit (Remote host closed the connection)
[10:42] * raeven (~raeven@h89n10-oes-a31.ias.bredband.telia.com) Quit (Ping timeout: 480 seconds)
[10:44] * TMM (~hp@185.5.121.201) has joined #ceph
[10:44] * epicguy (~epicguy@41.164.8.42) Quit (Quit: Leaving)
[10:51] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[10:51] * pabluk_ is now known as pabluk
[10:58] * Zyn (~offender@tor1.mysec-arch.net) has joined #ceph
[11:00] * kefu|afk is now known as kefu
[11:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[11:01] * zhaochao_ (~zhaochao@124.202.191.132) has joined #ceph
[11:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[11:04] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:8490:50ec:39e9:deca) has joined #ceph
[11:06] * zhaochao (~zhaochao@125.39.9.148) Quit (Ping timeout: 480 seconds)
[11:06] * zhaochao_ is now known as zhaochao
[11:11] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[11:14] * MannerMan (~oscar@user170.217-10-117.netatonce.net) has joined #ceph
[11:14] * m0zes (~mozes@ns1.beocat.ksu.edu) Quit (Ping timeout: 480 seconds)
[11:14] * shyu (~shyu@218.241.172.114) has joined #ceph
[11:19] * ieth0 (~ieth0@user232.77-105-223.netatonce.net) has joined #ceph
[11:20] * dgurtner (~dgurtner@178.197.232.251) has joined #ceph
[11:21] * bvi (~Bastiaan@185.56.32.1) has joined #ceph
[11:23] * bvi (~Bastiaan@185.56.32.1) Quit (Remote host closed the connection)
[11:28] * Zyn (~offender@06SAAC8C1.tor-irc.dnsbl.oftc.net) Quit ()
[11:30] * allaok (~allaok@machine107.orange-labs.com) has left #ceph
[11:32] * hommie (~hommie@hosd.leaseweb.net) Quit (Read error: Connection reset by peer)
[11:33] * m0zes (~mozes@ns1.beocat.ksu.edu) has joined #ceph
[11:38] * Hemanth (~hkumar_@121.244.87.117) has joined #ceph
[11:38] * gauravbafna (~gauravbaf@122.172.239.49) Quit (Remote host closed the connection)
[11:39] * shyu (~shyu@218.241.172.114) Quit (Quit: Leaving)
[11:40] * gauravbafna (~gauravbaf@122.172.239.49) has joined #ceph
[11:41] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[11:42] * shyu (~shyu@218.241.172.114) has joined #ceph
[11:44] * gauravbafna (~gauravbaf@122.172.239.49) Quit (Remote host closed the connection)
[11:45] * allaok1 (~allaok@machine107.orange-labs.com) has joined #ceph
[11:52] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[11:54] * allaok1 (~allaok@machine107.orange-labs.com) Quit (Remote host closed the connection)
[11:58] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[12:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[12:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[12:03] * ylmson (~Ralth@7V7AAFF38.tor-irc.dnsbl.oftc.net) has joined #ceph
[12:05] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) has joined #ceph
[12:05] * gauravbafna (~gauravbaf@122.172.239.49) has joined #ceph
[12:08] * mhuang (~mhuang@119.254.120.72) Quit (Quit: This computer has gone to sleep)
[12:12] * tobiash (~quassel@212.118.206.70) Quit (Ping timeout: 480 seconds)
[12:13] * gauravba_ (~gauravbaf@122.172.231.128) has joined #ceph
[12:13] * gauravbafna (~gauravbaf@122.172.239.49) Quit (Read error: Connection reset by peer)
[12:14] * tobiash (~quassel@212.118.206.70) has joined #ceph
[12:15] * yanzheng (~zhyan@118.116.112.223) Quit (Quit: This computer has gone to sleep)
[12:18] * gauravbafna (~gauravbaf@122.167.101.127) has joined #ceph
[12:19] * T1 (~the_one@87.104.212.66) Quit (Read error: Connection reset by peer)
[12:19] * T1 (~the_one@87.104.212.66) has joined #ceph
[12:20] * monsted (~monsted@rootweiler.dk) Quit (Read error: Connection reset by peer)
[12:20] * monsted (~monsted@rootweiler.dk) has joined #ceph
[12:22] * gauravba_ (~gauravbaf@122.172.231.128) Quit (Ping timeout: 480 seconds)
[12:24] * m0zes (~mozes@ns1.beocat.ksu.edu) Quit (Ping timeout: 480 seconds)
[12:25] * m0zes (~mozes@ns1.beocat.ksu.edu) has joined #ceph
[12:28] * jweismueller (~MrBy@85.115.23.2) Quit (Quit: Ex-Chat)
[12:29] * MrBy (~MrBy@85.115.23.2) has joined #ceph
[12:33] * ylmson (~Ralth@7V7AAFF38.tor-irc.dnsbl.oftc.net) Quit ()
[12:33] * Scymex (~Frostshif@exit1.ipredator.se) has joined #ceph
[12:33] * MannerMan (~oscar@user170.217-10-117.netatonce.net) Quit (Ping timeout: 480 seconds)
[12:35] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[12:35] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[12:37] * yanzheng (~zhyan@118.116.112.223) has joined #ceph
[12:39] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[12:42] * MannerMan (~oscar@user170.217-10-117.netatonce.net) has joined #ceph
[12:53] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) Quit (Remote host closed the connection)
[12:55] * ade (~abradshaw@dslb-092-078-141-047.092.078.pools.vodafone-ip.de) Quit (Quit: Too sexy for his shirt)
[12:59] * ade (~abradshaw@dslb-092-078-141-047.092.078.pools.vodafone-ip.de) has joined #ceph
[12:59] * rakeshgm (~rakesh@121.244.87.117) Quit (Ping timeout: 480 seconds)
[13:00] * dgurtner (~dgurtner@178.197.232.251) Quit (Read error: Connection reset by peer)
[13:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[13:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[13:03] * Scymex (~Frostshif@4MJAAFQFK.tor-irc.dnsbl.oftc.net) Quit ()
[13:03] * Revo84 (~Vidi@37.48.81.27) has joined #ceph
[13:04] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[13:05] * shylesh (~shylesh@45.124.227.43) Quit (Ping timeout: 480 seconds)
[13:07] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Quit: Leaving)
[13:09] * bvi (~Bastiaan@185.56.32.1) has joined #ceph
[13:09] * bvi (~Bastiaan@185.56.32.1) Quit ()
[13:09] * rakeshgm (~rakesh@121.244.87.118) has joined #ceph
[13:10] * gauravbafna (~gauravbaf@122.167.101.127) Quit (Remote host closed the connection)
[13:10] * huangjun (~kvirc@113.57.168.154) Quit (Ping timeout: 480 seconds)
[13:11] * gauravbafna (~gauravbaf@122.167.101.127) has joined #ceph
[13:13] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[13:16] * shyu (~shyu@218.241.172.114) Quit (Ping timeout: 480 seconds)
[13:22] * mhuang (~mhuang@119.254.120.71) has joined #ceph
[13:22] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[13:27] * wjw-freebsd (~wjw@176.74.240.1) has joined #ceph
[13:27] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[13:29] * zhaochao (~zhaochao@124.202.191.132) Quit (Quit: ChatZilla 0.9.92 [Firefox 45.1.1/20160507231935])
[13:33] * Revo84 (~Vidi@4MJAAFQGL.tor-irc.dnsbl.oftc.net) Quit ()
[13:33] * kutija (~kutija@89.216.27.139) has joined #ceph
[13:33] * Helleshin1 (~Bobby@tor-exit7-readme.dfri.se) has joined #ceph
[13:36] * shyu (~shyu@218.241.172.114) has joined #ceph
[13:42] * rakeshgm (~rakesh@121.244.87.118) Quit (Ping timeout: 480 seconds)
[13:46] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[13:51] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[13:51] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[13:53] * jordanP (~jordan@pas38-2-82-67-72-49.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[13:58] * jordanP (~jordan@pas38-2-82-67-72-49.fbx.proxad.net) has joined #ceph
[14:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[14:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[14:03] * Helleshin1 (~Bobby@7V7AAFF7J.tor-irc.dnsbl.oftc.net) Quit ()
[14:06] * gauravbafna (~gauravbaf@122.167.101.127) Quit (Remote host closed the connection)
[14:07] * basicxman (~Esge@06SAAC8JZ.tor-irc.dnsbl.oftc.net) has joined #ceph
[14:09] * T1w (~jens@217.195.184.71) Quit (Ping timeout: 480 seconds)
[14:12] * geli12 (~geli@1.136.97.19) Quit (Ping timeout: 480 seconds)
[14:13] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[14:14] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) has joined #ceph
[14:14] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) Quit (Remote host closed the connection)
[14:16] * povian_ (~povian@211.189.163.250) Quit (Remote host closed the connection)
[14:22] * dgurtner (~dgurtner@178.197.232.251) has joined #ceph
[14:26] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Quit: Bye guys!)
[14:28] * ronrib (~boswortr@45.32.242.135) Quit (Remote host closed the connection)
[14:29] * sage__ (~quassel@pool-173-76-103-210.bstnma.fios.verizon.net) Quit (Read error: Connection reset by peer)
[14:30] * sage_ (~quassel@pool-173-76-103-210.bstnma.fios.verizon.net) has joined #ceph
[14:31] * mhuang (~mhuang@119.254.120.71) Quit (Quit: This computer has gone to sleep)
[14:36] * yanzheng (~zhyan@118.116.112.223) Quit (Quit: This computer has gone to sleep)
[14:37] * basicxman (~Esge@06SAAC8JZ.tor-irc.dnsbl.oftc.net) Quit ()
[14:37] * pepzi (~Aethis@tollana.enn.lu) has joined #ceph
[14:40] * ade (~abradshaw@dslb-092-078-141-047.092.078.pools.vodafone-ip.de) Quit (Remote host closed the connection)
[14:41] * allaok (~allaok@machine107.orange-labs.com) has left #ceph
[14:44] * rakeshgm (~rakesh@121.244.87.117) Quit (Quit: Leaving)
[14:44] * smokedmeets (~smokedmee@c-73-158-201-226.hsd1.ca.comcast.net) Quit (Quit: smokedmeets)
[14:48] * Lokta (~Lokta@carbon.coe.int) Quit (Ping timeout: 480 seconds)
[14:48] * Lokta (~Lokta@carbon.coe.int) has joined #ceph
[14:55] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[14:57] * karnan (~karnan@103.227.97.69) has joined #ceph
[14:58] * shyu (~shyu@218.241.172.114) Quit (Ping timeout: 480 seconds)
[15:01] * georgem (~Adium@206.108.127.16) has joined #ceph
[15:03] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[15:04] * wes_dillingham (~wes_dilli@cpe-74-70-28-196.nycap.res.rr.com) has joined #ceph
[15:06] * nagyz (~textual@84-75-164-108.dclient.hispeed.ch) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[15:07] * pepzi (~Aethis@7V7AAFF9U.tor-irc.dnsbl.oftc.net) Quit ()
[15:07] * rhonabwy (~Bobby@67.ip-92-222-38.eu) has joined #ceph
[15:08] * dgurtner (~dgurtner@178.197.232.251) Quit (Read error: No route to host)
[15:09] * ade (~abradshaw@dslb-092-078-141-047.092.078.pools.vodafone-ip.de) has joined #ceph
[15:09] * wes_dillingham (~wes_dilli@cpe-74-70-28-196.nycap.res.rr.com) Quit (Quit: wes_dillingham)
[15:09] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[15:13] * yanzheng (~zhyan@118.116.112.223) has joined #ceph
[15:14] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[15:14] * Georgyo (~georgyo@2600:3c03:e000:71::cafe:3) Quit (Remote host closed the connection)
[15:16] * Georgyo (~georgyo@2600:3c03:e000:71::cafe:3) has joined #ceph
[15:17] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[15:25] * shylesh__ (~shylesh@45.124.227.25) has joined #ceph
[15:26] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Ping timeout: 480 seconds)
[15:28] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Quit: Bye guys!)
[15:35] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Quit: Hiring PHP developers does not contribute to the quota of employees with disabilities.)
[15:36] * dgurtner (~dgurtner@178.197.232.251) has joined #ceph
[15:37] * rhonabwy (~Bobby@4MJAAFQLC.tor-irc.dnsbl.oftc.net) Quit ()
[15:39] * dgurtner (~dgurtner@178.197.232.251) Quit ()
[15:40] * dgurtner (~dgurtner@178.197.232.251) has joined #ceph
[15:42] * KindOne (kindone@h118.147.186.173.dynamic.ip.windstream.net) has joined #ceph
[15:45] * rraja (~rraja@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:47] * jordanP (~jordan@pas38-2-82-67-72-49.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[15:52] * mhuang (~mhuang@101.36.77.152) has joined #ceph
[15:52] * karnan (~karnan@103.227.97.69) Quit (Quit: Leaving)
[15:53] * mhuang (~mhuang@101.36.77.152) Quit ()
[15:56] * rraja (~rraja@121.244.87.118) has joined #ceph
[15:56] * ade (~abradshaw@dslb-092-078-141-047.092.078.pools.vodafone-ip.de) Quit (Remote host closed the connection)
[15:57] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) has joined #ceph
[15:58] * huangjun (~kvirc@117.152.73.127) has joined #ceph
[15:59] * ade (~abradshaw@dslb-092-078-141-047.092.078.pools.vodafone-ip.de) has joined #ceph
[15:59] * jordanP (~jordan@pas38-2-82-67-72-49.fbx.proxad.net) has joined #ceph
[16:02] * mhuang (~mhuang@59.109.104.164) has joined #ceph
[16:05] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[16:10] * kefu (~kefu@114.92.122.74) Quit (Max SendQ exceeded)
[16:10] * kefu (~kefu@li1072-18.members.linode.com) has joined #ceph
[16:16] * raeven_ is now known as raeven
[16:21] * Hideous (~Pulec@exit1.ipredator.se) has joined #ceph
[16:23] * derjohn_mob (~aj@x4db29e19.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[16:26] * vata (~vata@207.96.182.162) has joined #ceph
[16:27] * The_Ball (~pi@20.92-221-43.customer.lyse.net) Quit (Ping timeout: 480 seconds)
[16:28] * rraja (~rraja@121.244.87.118) Quit (Ping timeout: 480 seconds)
[16:28] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[16:35] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[16:38] * bvi (~Bastiaan@185.56.32.1) has joined #ceph
[16:38] * bvi (~Bastiaan@185.56.32.1) Quit ()
[16:40] * mhuang_ (~mhuang@119.90.24.2) has joined #ceph
[16:42] * gauravbafna (~gauravbaf@122.178.192.230) has joined #ceph
[16:42] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[16:44] * mhuang (~mhuang@59.109.104.164) Quit (Ping timeout: 480 seconds)
[16:44] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[16:47] * kefu_ (~kefu@116.251.213.236) has joined #ceph
[16:50] * gauravbafna (~gauravbaf@122.178.192.230) Quit (Ping timeout: 480 seconds)
[16:51] * Hideous (~Pulec@4MJAAFQOV.tor-irc.dnsbl.oftc.net) Quit ()
[16:51] * Drezil1 (~TehZomB@7V7AAFGFQ.tor-irc.dnsbl.oftc.net) has joined #ceph
[16:51] * kefu (~kefu@li1072-18.members.linode.com) Quit (Remote host closed the connection)
[16:55] * HappyLoaf (~HappyLoaf@cpc93928-bolt16-2-0-cust133.10-3.cable.virginm.net) Quit (Remote host closed the connection)
[16:55] * kefu_ (~kefu@116.251.213.236) Quit (Max SendQ exceeded)
[16:56] * kefu (~kefu@116.251.213.236) has joined #ceph
[16:57] * derjohn_mob (~aj@185.65.67.249) has joined #ceph
[16:58] * kefu (~kefu@116.251.213.236) Quit (Max SendQ exceeded)
[16:59] * georgem1 (~Adium@206.108.127.16) has joined #ceph
[17:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[17:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[17:04] * kefu (~kefu@116.251.213.236) has joined #ceph
[17:05] * georgem (~Adium@206.108.127.16) Quit (Ping timeout: 480 seconds)
[17:09] * wjw-freebsd (~wjw@176.74.240.1) Quit (Ping timeout: 480 seconds)
[17:10] * danieagle (~Daniel@179.110.89.229) has joined #ceph
[17:12] * deepthi (~deepthi@122.172.67.166) Quit (Ping timeout: 480 seconds)
[17:13] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:14] * Hemanth (~hkumar_@121.244.87.117) Quit (Ping timeout: 480 seconds)
[17:19] * linjan_ (~linjan@86.62.112.22) Quit (Ping timeout: 480 seconds)
[17:21] * Drezil1 (~TehZomB@7V7AAFGFQ.tor-irc.dnsbl.oftc.net) Quit ()
[17:21] * deepthi (~deepthi@122.172.109.149) has joined #ceph
[17:21] * ade (~abradshaw@dslb-092-078-141-047.092.078.pools.vodafone-ip.de) Quit (Quit: Too sexy for his shirt)
[17:27] * bvi (~Bastiaan@185.56.32.1) has joined #ceph
[17:30] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Remote host closed the connection)
[17:31] * georgem (~Adium@206.108.127.16) has joined #ceph
[17:31] * georgem1 (~Adium@206.108.127.16) Quit (Read error: Connection reset by peer)
[17:35] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[17:35] * bvi (~Bastiaan@185.56.32.1) Quit (Remote host closed the connection)
[17:37] * tobiash (~quassel@212.118.206.70) Quit (Ping timeout: 480 seconds)
[17:40] * tobiash (~quassel@212.118.206.70) has joined #ceph
[17:40] * thansen (~thansen@162.219.43.108) Quit (Quit: Ex-Chat)
[17:42] * plr777 (~PLR@182.156.164.53) has joined #ceph
[17:43] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[17:46] * wjw-freebsd (~wjw@176.74.240.1) has joined #ceph
[17:49] * huangjun (~kvirc@117.152.73.127) Quit (Ping timeout: 480 seconds)
[17:50] * kefu (~kefu@116.251.213.236) Quit (Remote host closed the connection)
[17:51] * _s1gma (~Xylios@torsrva.snydernet.net) has joined #ceph
[17:53] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[17:54] * TMM (~hp@185.5.121.201) Quit (Quit: Ex-Chat)
[17:55] * kefu (~kefu@114.92.122.74) has joined #ceph
[17:55] * dgurtner (~dgurtner@178.197.232.251) Quit (Ping timeout: 480 seconds)
[17:58] * tobiash (~quassel@212.118.206.70) Quit (Ping timeout: 480 seconds)
[18:00] * tobiash (~quassel@212.118.206.70) has joined #ceph
[18:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[18:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[18:08] * plr777 (~PLR@182.156.164.53) has left #ceph
[18:08] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) Quit (Quit: Leaving.)
[18:17] * povian (~povian@211.189.163.250) has joined #ceph
[18:18] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[18:20] * dneary (~dneary@pool-96-233-46-27.bstnma.fios.verizon.net) has joined #ceph
[18:21] * _s1gma (~Xylios@4MJAAFQSU.tor-irc.dnsbl.oftc.net) Quit ()
[18:21] * Borf (~Skyrider@94.102.49.64) has joined #ceph
[18:21] * smokedmeets (~smokedmee@c-73-158-201-226.hsd1.ca.comcast.net) has joined #ceph
[18:21] * jordanP (~jordan@pas38-2-82-67-72-49.fbx.proxad.net) Quit (Quit: Leaving)
[18:23] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[18:24] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[18:25] * povian (~povian@211.189.163.250) Quit (Ping timeout: 480 seconds)
[18:26] * timmy (~oftc-webi@rz16.vpn.hetzner.de) has joined #ceph
[18:26] <timmy> Hi, could anybody help me with upgrading radosgw from hammer to jewel?
[18:26] <timmy> i created a zone
[18:28] * wjw-freebsd (~wjw@176.74.240.1) Quit (Ping timeout: 480 seconds)
[18:29] <timmy> https://gist.github.com/timmyArch/e1636ac66a6042a095591adc805f16cf
[18:30] <timmy> it's unable to find the zone
[18:30] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[18:30] * ircuser-1 (~Johnny@158.183-62-69.ftth.swbr.surewest.net) Quit (Quit: because)
[18:31] <timmy> anybody an idea
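
The "unable to find zone" symptom after a hammer-to-jewel radosgw upgrade is commonly attributed to jewel's new realm/zonegroup/zone model; a frequently suggested fix, sketched here without claiming it matches timmy's exact setup, is to mark the migrated defaults and commit a period:

    radosgw-admin zonegroup default --rgw-zonegroup=default
    radosgw-admin zone default --rgw-zone=default
    radosgw-admin period update --commit
    # then restart the radosgw service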
[18:32] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[18:33] * deepthi (~deepthi@122.172.109.149) Quit (Ping timeout: 480 seconds)
[18:34] * shylesh__ (~shylesh@45.124.227.25) Quit (Ping timeout: 480 seconds)
[18:36] * shylesh__ (~shylesh@45.124.227.25) has joined #ceph
[18:38] * dneary (~dneary@pool-96-233-46-27.bstnma.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[18:44] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[18:45] * kutija (~kutija@89.216.27.139) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[18:46] * mykola (~Mikolaj@91.245.76.80) has joined #ceph
[18:46] * thansen (~thansen@17.253.sfcn.org) has joined #ceph
[18:49] * gauravbafna (~gauravbaf@122.172.247.196) has joined #ceph
[18:51] * Xeon061 (~drupal@178.32.251.105) has joined #ceph
[18:52] * Borf (~Skyrider@4MJAAFQT9.tor-irc.dnsbl.oftc.net) Quit ()
[18:53] * The_Ball (~pi@20.92-221-43.customer.lyse.net) has joined #ceph
[18:53] * evelu (~erwan@46.231.131.178) Quit (Ping timeout: 480 seconds)
[19:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[19:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[19:01] * al (d@niel.cx) Quit (Ping timeout: 480 seconds)
[19:02] * gauravbafna (~gauravbaf@122.172.247.196) Quit (Read error: Connection reset by peer)
[19:02] * natarej (~natarej@2001:8003:4885:6500:d804:46ec:50fd:29e3) Quit (Read error: Connection reset by peer)
[19:03] * natarej (~natarej@2001:8003:4885:6500:d804:46ec:50fd:29e3) has joined #ceph
[19:04] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[19:05] * thansen (~thansen@17.253.sfcn.org) Quit (Ping timeout: 480 seconds)
[19:12] * overclk (~quassel@117.202.96.183) Quit (Remote host closed the connection)
[19:14] * gauravbafna (~gauravbaf@122.167.76.94) has joined #ceph
[19:14] * thansen (~thansen@162.219.43.108) has joined #ceph
[19:19] * onyb (~ani07nov@112.133.232.12) has joined #ceph
[19:20] * ieth0 (~ieth0@user232.77-105-223.netatonce.net) Quit (Quit: ieth0)
[19:21] * Xeon061 (~drupal@4MJAAFQVU.tor-irc.dnsbl.oftc.net) Quit ()
[19:21] * Sophie1 (~Borf@edwardsnowden0.torservers.net) has joined #ceph
[19:22] * gauravbafna (~gauravbaf@122.167.76.94) Quit (Ping timeout: 480 seconds)
[19:23] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[19:28] * gauravbafna (~gauravbaf@122.167.76.94) has joined #ceph
[19:28] * mhuang_ (~mhuang@119.90.24.2) Quit (Quit: This computer has gone to sleep)
[19:31] * kefu (~kefu@114.92.122.74) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[19:31] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:8490:50ec:39e9:deca) Quit (Ping timeout: 480 seconds)
[19:36] * gauravbafna (~gauravbaf@122.167.76.94) Quit (Remote host closed the connection)
[19:36] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[19:45] * pabluk is now known as pabluk_
[19:48] * linjan_ (~linjan@176.195.187.182) has joined #ceph
[19:48] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:8490:50ec:39e9:deca) has joined #ceph
[19:51] * Sophie1 (~Borf@7V7AAFGMZ.tor-irc.dnsbl.oftc.net) Quit ()
[19:51] * Bored (~Diablodoc@politkovskaja.torservers.net) has joined #ceph
[19:51] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Quit: jermudgeon)
[19:53] * shylesh__ (~shylesh@45.124.227.25) Quit (Ping timeout: 480 seconds)
[19:54] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[19:57] * rotbeard (~redbeard@2a02:908:df13:bb00:898c:c612:7e2c:7f00) has joined #ceph
[20:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[20:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[20:03] * madkiss (~madkiss@2001:6f8:12c3:f00f:15d9:726:2ef9:9382) has joined #ceph
[20:04] * yanzheng (~zhyan@118.116.112.223) Quit (Quit: This computer has gone to sleep)
[20:06] * rraja (~rraja@121.244.87.117) has joined #ceph
[20:06] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[20:07] * sudocat (~dibarra@2602:306:8bc7:4c50:98fb:8575:4e86:814a) has joined #ceph
[20:10] * johnavp19891 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[20:10] <- *johnavp19891* To prove that you are human, please enter the result of 8+3
[20:12] * vbellur (~vijay@71.234.224.255) Quit (Remote host closed the connection)
[20:15] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[20:16] <s3an2> In ceph jewel is it no longer possible to use ceph tell osd.* injectargs '--osd-recovery-max-active 2' as it returns 'osd_recovery_max_active = '2' (unchangeable)'
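
For options that an OSD reports as '(unchangeable)' at runtime, the usual fallback is to set them in ceph.conf and restart the OSDs (a tracker issue about whether that flag is even accurate for these recovery options is linked further down in the log). Values here are examples only:

    [osd]
    osd recovery max active = 2
    osd max backfills = 2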
[20:19] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:8490:50ec:39e9:deca) Quit (Ping timeout: 480 seconds)
[20:21] * Bored (~Diablodoc@4MJAAFQYG.tor-irc.dnsbl.oftc.net) Quit ()
[20:21] * Jamana (~blank@atlantic850.dedicatedpanel.com) has joined #ceph
[20:24] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[20:25] * madkiss1 (~madkiss@ip5b414c62.dynamic.kabel-deutschland.de) has joined #ceph
[20:29] * rotbeard (~redbeard@2a02:908:df13:bb00:898c:c612:7e2c:7f00) Quit (Quit: Leaving)
[20:30] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[20:31] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[20:32] * madkiss (~madkiss@2001:6f8:12c3:f00f:15d9:726:2ef9:9382) Quit (Ping timeout: 480 seconds)
[20:32] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Quit: jermudgeon)
[20:37] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has left #ceph
[20:41] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[20:48] * dgurtner (~dgurtner@178.197.239.57) has joined #ceph
[20:49] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[20:51] * Jamana (~blank@7V7AAFGP2.tor-irc.dnsbl.oftc.net) Quit ()
[20:51] * Dinnerbone (~rogst@tor1.mysec-arch.net) has joined #ceph
[20:52] <s3an2> http://tracker.ceph.com/issues/16054 << looks related
[20:55] * vanham (~vanham@12.199.84.146) has joined #ceph
[20:55] * nagyz (~textual@109.74.56.122) has joined #ceph
[20:56] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[20:56] * neurodrone_ (~neurodron@162.243.191.67) has joined #ceph
[21:00] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Quit: jermudgeon)
[21:01] <vanham> Guys, is there anything on RadosGW optimization? Or performance visualization?
[21:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[21:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[21:02] <vanham> My buckets are not that big, CPU is low, iostats is low, and I still do only about 70 writes per second there
[21:02] * deepthi (~deepthi@122.171.82.204) has joined #ceph
[21:03] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[21:03] <vanham> 12 SSD OSDs, 4 server cluster, all small files (average 50k)
[21:03] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit ()
[21:03] <vanham> Running Jewel 10.2.1 btw
[21:04] <vanham> 64/s last test
[21:04] <vanham> Writing 5172 objects in a new bucket took 80 seconds
[21:07] * nagyz (~textual@109.74.56.122) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[21:07] <vanham> (with 30 threads)
[21:09] * khyron (~khyron@fixed-190-159-187-190-159-75.iusacell.net) Quit (Quit: The computer fell asleep)
[21:09] * khyron (~khyron@fixed-190-159-187-190-159-75.iusacell.net) has joined #ceph
[21:09] * matejz (~matejz@element.planetq.org) has joined #ceph
[21:10] * matejz is now known as matejz11123
[21:10] <matejz11123> hey
[21:10] <vanham> hey matejz11123
[21:11] <matejz11123> I was tasked to come up with some figures for our Ceph cluster… so far we have 0 experience with Ceph, so all I could do in a day was read some docs and get up to speed
[21:11] <matejz11123> I wanted to swing by here to double check what I came up with
[21:14] <matejz11123> currently, my biggest problem is coming up with specs for OSD nodes…
[21:14] <vanham> matejz11123, shoot
[21:14] <vanham> Those OSDs are they HD or SSD?
[21:14] <matejz11123> HD
[21:15] <matejz11123> I should have 1 HD per OSD
[21:15] <vanham> How many HDs per node?
[21:15] <matejz11123> not sure yet
[21:15] <vanham> Range
[21:15] <matejz11123> 12-40
[21:15] <matejz11123> :)
[21:15] <vanham> 40 is quite a lot :)
[21:15] <matejz11123> true
[21:15] <vanham> OK, normal is to have one OSD process per HD
[21:16] <vanham> Don't use RAID, as it is less efficient than Ceph
[21:16] <vanham> Usually 1GB of RAM per TB
[21:16] <matejz11123> ok
[21:16] <matejz11123> 1GHz Xeon per drive?
[21:16] <vanham> You'll only need that RAM when it's on recovery
[21:16] <vanham> But you will need it anyway
[21:16] <vanham> 1GHz Xeon per drive seems to be the norm
[21:17] <matejz11123> ok
[21:17] <matejz11123> this were my conclusions so far as well
[21:17] <vanham> Add one SSD every few drives to work as your journal
[21:17] <vanham> Remember that it is a failure domain.
[21:17] <matejz11123> hum
[21:17] <vanham> So, you could have RAID1 for the journal SSDs
[21:17] <matejz11123> how much space would I need on a SSD?
[21:18] <vanham> I usually put between 10GBs and 20GBs per OSD
[21:18] <matejz11123> ok
[21:18] * khyron (~khyron@fixed-190-159-187-190-159-75.iusacell.net) Quit (Ping timeout: 480 seconds)
[21:18] <vanham> Remember that those SSDs will wear out real fast
[21:18] <matejz11123> Intel S3700 or something ?
[21:18] <vanham> Yeah, that's better than what I use
[21:18] <matejz11123> Yea I know… Currently using SSDs for ZFS ZILs, so I know how fast they can go
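
As a rough worked example using only the rules of thumb given above (1GB of RAM per TB, ~1GHz of Xeon per OSD, 10-20GB of journal per OSD): a 12-bay node with 12x 4TB drives would want on the order of 48GB of RAM, roughly 12GHz of aggregate CPU (a single 8-core 2GHz part already covers that), and 120-240GB of journal space, i.e. one or two ~200GB SSDs shared across the OSDs. These are ballpark figures derived from the conversation's own rules of thumb, not vendor sizing guidance.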
[21:19] * vanham (~vanham@12.199.84.146) has left #ceph
[21:19] * vanham (~vanham@12.199.84.146) has joined #ceph
[21:19] <vanham> Wrong key here :)
[21:19] <matejz11123> hehe
[21:19] <vanham> What's your use case? CephFS? RBD? Straight to Rados? RadosGW?
[21:20] <georgem> matejz11123: it really depends on your use case… you should tell us more about what you plan to use it for
[21:20] * deepthi (~deepthi@122.171.82.204) Quit (Read error: Connection timed out)
[21:20] <matejz11123> ok
[21:20] <matejz11123> so we have 3 different scenarios
[21:20] <matejz11123> one will be for remote storage location, one will, hopefully, be for VM storage
[21:20] <matejz11123> and one for HPC data storage
[21:21] <vanham> Wow, ok
[21:21] * Dinnerbone (~rogst@7V7AAFGRC.tor-irc.dnsbl.oftc.net) Quit ()
[21:21] * fauxhawk (~CydeWeys@static-ip-85-25-103-119.inaddr.ip-pool.com) has joined #ceph
[21:21] <matejz11123> BUT:)
[21:21] <matejz11123> this is the long term plan
[21:21] <vanham> Soooo... Try to understand what your hot-active data is. It'll probably make sense to add ssd cache-tiering for the VMs
[21:21] <matejz11123> we first need to do some testing, but we would like to test on the gear we can later use:)
[21:22] <vanham> HPC and Remote depends on the size of your hot data
[21:22] * neurodrone_ (~neurodron@162.243.191.67) Quit (Ping timeout: 480 seconds)
[21:23] <vanham> For example, here I use RadosGW, RBD and CephFS. With RBD 98% of my IO is SSD only, even though I only have 5% of the total RBD space on SSDs
[21:23] <matejz11123> ok
[21:23] <vanham> I wasn't so lucky with CephFS and RadosGW because the use case is totally different
[21:23] <matejz11123> I need to do some checking on what IO usage we have
[21:24] <matejz11123> as far as HPC goes, I think there is a lot of sequential read/write, but we need throughput
[21:24] <matejz11123> but that is not my domain, so I would have to ask my coworker
[21:24] <matejz11123> I will be covering the remote backup part
[21:25] <matejz11123> my load is mostly sequential write of big chunks, so I'm not sure I really need SSDs
[21:26] <vanham> The remote part will not then
[21:26] <matejz11123> VMs would probably benefit nicely from a SSD
[21:26] <matejz11123> does Ceph support auto tiering?
[21:26] <vanham> It's not recommended to run HD OSDs without SSDs for journaling.
[21:26] <matejz11123> so, hot data on ssds and not-used data moved to HDD?
[21:27] <vanham> Yes, Ceph does have SSD Cache Tiering
[21:27] <vanham> It works pretty well when it makes sense
[21:27] <matejz11123> ok… SSD on every OSD for journaling
[21:27] <matejz11123> for VMs and possibly HPC, I could also use SSD for cache tiering
[21:28] <vanham> Yeah
[21:28] <matejz11123> great
[21:28] <matejz11123> one more thing that came up todat
[21:28] <matejz11123> today
[21:28] <matejz11123> ammm
[21:28] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[21:29] <matejz11123> lets say I have 2 nodes with 20 OSD each
[21:29] <matejz11123> is there a way to tell OSDs on which node they are, so when Ceph is doing replication, it doesn't replicate on the same node?
[21:29] <matejz11123> so in case we loose a node, we still have a working copy on the other node
[21:29] * ade (~abradshaw@dslb-092-078-141-047.092.078.pools.vodafone-ip.de) has joined #ceph
[21:30] <s3an2> matejz11123, ceph crush map is your google term
[21:31] <MentalRay> it does that natively but yeah its control via the crush map
[21:31] <vanham> You can do multiple levels of failure domain replication: datacenter, rack, case, node, HD, etc.
[21:31] <vanham> case = chassis
[21:31] <matejz11123> wuuuuhuuu)
[21:31] <matejz11123> :)
[21:31] <matejz11123> great news!!
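
A minimal sketch of the CRUSH mechanism s3an2 and MentalRay are pointing at: OSDs are normally placed under a host bucket automatically, and a replicated rule whose chooseleaf step uses type "host" forces each replica onto a different node. The rule name below is illustrative; the shape matches a decompiled CRUSH map of that era:

    rule replicated_per_host {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host   # one OSD from N distinct hosts
        step emit
    }

ceph osd tree shows the host/OSD hierarchy the rule operates on.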
[21:31] <MentalRay> Also dont go with a full SSD cluster since your journal at the moment are the bottlenecks
[21:31] <MentalRay> CPU+Journals
[21:32] <matejz11123> yea I know
[21:32] <MentalRay> some new features might make this possible soon
[21:32] <TMM> I think this received wisdom isn't entirely correct
[21:32] <MentalRay> my ssd thing?
[21:32] <TMM> I run a full ssd cluster and particularly in damaged cluster scenarios it performs significantly better than the mixed cluster I have
[21:33] <MentalRay> how many OSD?
[21:33] <s3an2> TMM, +1
[21:33] <TMM> 240
[21:33] <MentalRay> and how many nodes?
[21:33] <TMM> 30
[21:33] <MentalRay> for recover I would agree with you
[21:34] <MentalRay> yes
[21:34] <MentalRay> but in term of raw performances
[21:34] <TMM> I can now service nodes without telling anyone, on my mixed cluster I have to be way more careful
[21:34] <MentalRay> what size are your OSD?
[21:34] <TMM> 1tb
[21:35] <MentalRay> same here but 100 OSD
[21:35] <matejz11123> as far as CPUs go, how do I choose the right CPU… I read somewhere that I should have 1GHz Xeon for every HDD
[21:35] <matejz11123> is that true?
[21:36] <TMM> not really
[21:36] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[21:36] <TMM> maybe if you have a ton of spinners, I don't know, but I only really see super heavy load on my mons when things go to shit
[21:36] <TMM> my osds remain fairly calm
[21:37] <MentalRay> during recovery
[21:37] <MentalRay> CPU usage goes up like crazy on my ssd cluster
[21:37] <MentalRay> I had to limit the recovery to prevent issues
[21:38] <matejz11123> ok, but one has to plan for recovery scenarios, so more cpu is always better:)
[21:38] <matejz11123> http://www.supermicro.com/solutions/datasheet_Ceph.pdf
[21:38] <matejz11123> how are those nodes for ceph
[21:38] <matejz11123> specially the 36 drives ones
[21:40] <MentalRay> I never run a cluster of hdd other than in POC
[21:40] <MentalRay> but I dont know, I would go with smaller Nodes
[21:41] <MentalRay> always depending on how many nodes you would run
[21:41] <MentalRay> how many of those 36 drives chasis would you run?
[21:43] <matejz11123> for start, 3
[21:43] <matejz11123> and then scale
[21:43] <MentalRay> personnaly
[21:43] <vanham> Supermicro is my main brand for anything I do here. Currently with 42 nodes
[21:43] <MentalRay> I think its a bad idea
[21:43] <MentalRay> 3 node of 36 drives
[21:44] <matejz11123> MentalRay: why
[21:44] <MentalRay> recovery will be problematic
[21:44] * nagyz (~textual@109.74.56.122) has joined #ceph
[21:44] <matejz11123> MentalRay: problematic because it will take long time?
[21:44] <matejz11123> MentalRay: and performance will be bad
[21:46] <MentalRay> yeah
[21:46] <MentalRay> and if you run a 3 replica
[21:46] <MentalRay> you will be undersized
[21:46] <matejz11123> what would then be the sweet spot?
[21:46] <matejz11123> 12 drives per node?
[21:47] <MentalRay> well it depend on you
[21:47] * rraja (~rraja@121.244.87.117) Quit (Quit: Leaving)
[21:47] <MentalRay> lets say you have 10 nodes
[21:47] <MentalRay> 1 goes down
[21:47] <MentalRay> it doesn't come back fast
[21:47] <MentalRay> it needs to backfill (rebalance the data)
[21:47] * sudocat (~dibarra@2602:306:8bc7:4c50:98fb:8575:4e86:814a) Quit (Ping timeout: 480 seconds)
[21:48] <MentalRay> you have +- 10% of the data to be backfill
[21:48] <MentalRay> if you have 20 nodes
[21:48] <MentalRay> then its 5%
[21:48] <MentalRay> etc etc
[21:48] <MentalRay> so its your risk tolerance
[21:48] <MentalRay> On my cluster
[21:48] <MentalRay> if I put multi thread for recover and backfill
[21:48] <MentalRay> than it affect the VMs running
[21:48] <MentalRay> if I limit it
[21:48] <MentalRay> then it doesnt but recover is slower
[21:49] <MentalRay> which increase your risk if another node goes down etc etc
[21:49] <MentalRay> so this part is more regarding your tolerance to fault or service degradation
[21:49] <matejz11123> ok
[21:49] <matejz11123> I guess I will need to do some testing around this
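
The throttling MentalRay describes is usually done with the backfill/recovery options; a sketch of turning them down cluster-wide at runtime (subject to the '(unchangeable)' caveat discussed earlier in the log for some jewel builds), with illustrative values:

    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'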
[21:49] <T1> and the 1GB of RAM per 1TB of storage per OSD is a bit higher if you use erasure coding - in some instances with backfill and recovery it can go up to 2GB per 1TB of storage
[21:50] <MentalRay> yes
[21:50] <MentalRay> also as you add more replicate
[21:50] * deepthi (~deepthi@122.171.82.204) has joined #ceph
[21:50] <MentalRay> you affect overall performance (at least on full ssd cluster)
[21:50] <matejz11123> ok
[21:51] <T1> also keep in mind that the total IO capacity of your OSDs in a single nodes should be high enough that network IO for the node doesn't become a bottleneck
[21:51] * fauxhawk (~CydeWeys@06SAAC842.tor-irc.dnsbl.oftc.net) Quit ()
[21:51] * Kalado (~Defaultti@7V7AAFGTX.tor-irc.dnsbl.oftc.net) has joined #ceph
[21:51] <MentalRay> also true
[21:51] <matejz11123> ok
[21:52] <MentalRay> Has anyone tested the bluestore tech preview on jewel?
[21:52] <matejz11123> great info guys
[21:52] <MentalRay> matejz11123 also
[21:52] <MentalRay> hba card limitation
[21:52] <MentalRay> a lot to be checked
[21:52] <matejz11123> as far as HBAs go
[21:52] <T1> also keep in mind that you should not go above 8 or 10 OSDs on a single journal device
[21:52] <matejz11123> we usually or always go with LSI
[21:53] <matejz11123> either 9300 for SAS3 or 9207 for SAS2
[21:53] <T1> .. and even that might be a bit on the high side - if the journal device dies all OSDs die too
[21:53] <vanham> Guys, about my performance issue with RadosGW, if anyone could help, what I have found here is that, although I have a SSD cache in front of it, 80% of my latency is RadosGW trying to make sure an object doesn't exist. Since it does not exist, it's not on cache and it will generate a read op on the HDs, through a getxattrs,stat OSD op.
[21:53] <matejz11123> T1: would going RAID1 with journal SSDs protect me from this?
[21:53] <TMM> I run my journals on softraid 10 with 4 copies
[21:53] * madkiss1 (~madkiss@ip5b414c62.dynamic.kabel-deutschland.de) Quit (Quit: Leaving.)
[21:53] <T1> matejz11123: yes
[21:54] <T1> I use software raid1 for that
[21:54] <TMM> spreads the write load across the ssds better too
[21:54] <vanham> This is why my performance is suffering so much. Reading from the HDDs to make sure that the object doesn't exist.
[21:54] <MentalRay> how many OSDs do you put per journal, T1 and TMM?
[21:54] <vanham> The writing itself is quite fast. So, no index problems here.
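For vanham's latency question, one way to confirm where those getxattrs,stat ops spend their time is the OSD admin socket; a minimal sketch (osd.0 is a placeholder for whichever OSD hosts the affected PGs):

    # Slowest recently completed ops on one OSD, with per-phase timestamps
    ceph daemon osd.0 dump_historic_ops
    # Ops currently in flight
    ceph daemon osd.0 dump_ops_in_flight

If the time is dominated by disk reads for objects that turn out not to exist, that matches the cache-miss-to-HDD pattern described above.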
[21:54] <T1> raid10 is probably better like TMM says, but it's dependent on your node configuration
[21:55] <matejz11123> ok
[21:55] <matejz11123> do you use SAS/SATA or NVMe SSDs?
[21:55] <TMM> MentalRay, I currently have 33 osds per box, with 15 1tb ssds for journals
[21:55] <T1> MentalRay: I'm running a small cluster, so only 2 OSDs per journal device
[21:56] <TMM> this is not super efficient use of space, but with 1tb ssds and 2gb journals I can get away with really cheap-ass ssds and not write them to hell in a month
[21:56] <TMM> these journal drives are only about 200 bucks
[21:56] <TMM> and I am well below their 0.3 DWPD now
[21:56] <T1> I've got 1U nodes with 4x 3.5" bays - 2x Intel S3710 for OS and journals in software raid1, 2x 4TB data drives for 2 OSDs per node
[21:56] <MentalRay> ok
[21:56] <MentalRay> we do
[21:57] <MentalRay> 1x Intel DC S3610 200GB journal per 3x 1TB SSD OSDs
[21:57] <TMM> on linux with softraid 10 you can specify the number of copies you want btw, this is part of the 'layout' option when you create the array
[21:57] <matejz11123> T1: is that a supermicro 1u box?
[21:57] <TMM> not many people seem aware of this
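A minimal sketch of both journal layouts mentioned here, using Linux md (device names, array names and the 4-copy layout are placeholders to adapt to your hardware):

    # T1-style: two SSDs mirrored (RAID1) as the journal device
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    # TMM-style: md RAID10 where the digit in the layout sets the number
    # of copies, so n4 keeps four near copies of every block
    mdadm --create /dev/md1 --level=10 --raid-devices=4 --layout=n4 /dev/sd[d-g]

The layout trick is the one TMM refers to: the copy count is part of the --layout option rather than a separate flag.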
[21:57] <T1> the next nodes will probably house 6 to 10 OSDs - any larger and my failure domain becomes a problem
[21:58] <T1> matejz11123: no, Dell R320
[21:58] <MentalRay> matejz11123 I use supermicro
[21:58] <matejz11123> T1: not the cheapest boxes then
[21:58] <matejz11123> MentalRay: 1U boxes?
[21:58] <MentalRay> all size
[21:58] <matejz11123> MentalRay: which models?
[21:58] <MentalRay> we probably have 2000 supermicro in production overall
[21:58] * deepthi (~deepthi@122.171.82.204) Quit (Remote host closed the connection)
[21:58] <T1> matejz11123: no, they were pretty cheap - I'm getting a hefty discount on everything from Dell
[21:59] * rendar (~I@host183-178-dynamic.18-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[21:59] <T1> at least 50%
[21:59] <T1> and often more
[21:59] <matejz11123> T1: nice:) thats another story all together
[21:59] <MentalRay> you are sleeping with Michael Dell ? :p
[21:59] <T1> indeed
[21:59] <T1> haha
[21:59] <T1> no
[21:59] <matejz11123> MentalRay: why do you use it? For hosting VMs?
[22:00] <T1> I just work for a place that puts me in the "large enterprise" segment and gives me direct access to some people who service me
[22:00] <MentalRay> yes
[22:01] <MentalRay> for the Ceph used by our OpenStack
[22:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Read error: Connection reset by peer)
[22:01] <matejz11123> nice
[22:01] <vanham> Guys, is it possible to move the XFS metadata (dir lists, xattrs, etc.) to SSD drives? Not talking about the journal here, metadata...
[22:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[22:01] <T1> I never purchase anything via an online shop - everything is configured according to my specs and then I get a custom price quotation for it.. ;)
[22:01] <matejz11123> T1: same on my end... I usually send out specs and they configure it for me and send me the prices
[22:01] <MentalRay> we built everything our self
[22:02] <T1> vanham: the XFS metadata for the data drives themselves?
[22:02] <vanham> T1, yeah!
[22:02] <matejz11123> T1: we usually get around 50% discount, compared to the online prices
[22:02] <matejz11123> at least for ibm
[22:02] <vanham> I'm trying to make that getxattrs,stat OSD operation go faster, since it won't stay on my cache tier
[22:02] <matejz11123> what do you people use for the FS... XFS?
[22:03] <vanham> XFS is the recommended filesystem here matejz11123
[22:03] <T1> vanham: you could probably do stuff when you create the data drive XFS filesystem, but I'd like to think that bluestore renders that small optimization obsolete
[22:03] <T1> use XFS
[22:03] <MentalRay> yes but it will change with the next release
[22:03] * sudocat (~dibarra@45-17-188-191.lightspeed.hstntx.sbcglobal.net) has joined #ceph
[22:03] <MentalRay> no?
[22:03] <Kruge> Does anyone happen to know if civetweb will return a 403 on a HEAD request, by any chance?
[22:03] <T1> ext4 is almost deprecated
[22:03] <Kruge> Or radosgw, for that matter
[22:04] <vanham> T1, but bluestore is not really there yet, right?
[22:04] <MentalRay> in Jewel it's a tech preview
[22:04] <vanham> Yeah
[22:04] <T1> vanham: alas, no - but it looks really interesting
[22:04] <MentalRay> supposed to be production ready in the next release
[22:04] <MentalRay> https://www.youtube.com/watch?v=-Aa2lKR68gA
[22:04] <MentalRay> around 22 minutes
[22:04] <MentalRay> :p
[22:04] <T1> afk..
[22:04] <MentalRay> I'm actually trying to find someone who tested this on Jewel
[22:04] <vanham> T1, it really does! I have some servers that I couldn't add an SSD to for journaling, can't wait for it!
[22:04] <matejz11123> vanham & T1: thanks
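For anyone who wants to try what MentalRay is asking about: in Jewel, BlueStore sits behind an experimental flag; a rough sketch, with the device name as a placeholder and the exact option string to be checked against your release notes:

    # ceph.conf, [global] or [osd]:
    #   enable experimental unrecoverable data corrupting features = bluestore rocksdb
    # Then prepare a test OSD with the BlueStore backend instead of FileStore:
    ceph-disk prepare --bluestore /dev/sdX

As the option name suggests, this belongs on a throwaway test cluster only.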
[22:05] * ade (~abradshaw@dslb-092-078-141-047.092.078.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[22:06] <matejz11123> MentalRay: what supermicro models do you use for smaller OSD nodes?
[22:07] <vanham> T1, with mkfs.xfs you can move the log section of the filesystem to an external device, but directories stay in the data section :(
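That mkfs.xfs option looks roughly like this (device names and the log size are placeholders); note it moves only the XFS log, not directory or xattr metadata, which is exactly vanham's complaint:

    # Create the filesystem with its log on a separate SSD partition
    mkfs.xfs -l logdev=/dev/sdc1,size=256m /dev/sdb1
    # The external log device must also be named at mount time
    mount -o logdev=/dev/sdc1 /dev/sdb1 /var/lib/ceph/osd/ceph-0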
[22:07] <MentalRay> https://www.supermicro.com/products/chassis/2U/216/SC216BE26-R920U.cfm
[22:07] <MentalRay> I think
[22:08] * gauravbafna (~gauravbaf@122.172.225.199) has joined #ceph
[22:09] <MentalRay> but the next model needs to have hot-swap NVMe
[22:10] <matejz11123> MentalRay: ufff... those are some high-density servers
[22:11] * sudocat (~dibarra@45-17-188-191.lightspeed.hstntx.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[22:12] <matejz11123> MentalRay: so this is 24 OSDs in a single node... what CPU do you use and how much RAM?
[22:13] <MentalRay> yeah but only using 16 slots
[22:13] <MentalRay> 12 + OS
[22:13] <MentalRay> sorry
[22:14] <matejz11123> ok
[22:14] <MentalRay> nah 16
[22:14] <MentalRay> sorry heheh
[22:14] <MentalRay> 4 journal
[22:14] <MentalRay> 12 OSDs
[22:14] <MentalRay> + 2 OS drives
[22:15] <matejz11123> we need to go with 3.5" drives, since we get more space for the buck and we need space over performance
[22:15] <matejz11123> at least for my project
[22:15] <matejz11123> aka remote backup location
[22:16] <vanham> CephFS will allow you to put directory data on SSDs and file data on HDDs. I think it is awesome. Now I expect that from all my filesystems!
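What vanham is referring to is that CephFS keeps directory and inode metadata in a separate metadata pool from file data, so the two can live on different device classes; a minimal sketch, where pool names, PG counts and the SSD CRUSH ruleset id are assumptions for your own CRUSH map:

    # Metadata pool mapped to an SSD-backed CRUSH ruleset, data pool on HDDs
    ceph osd pool create cephfs_metadata 64 64
    ceph osd pool set cephfs_metadata crush_ruleset 1
    ceph osd pool create cephfs_data 512 512
    ceph fs new cephfs cephfs_metadata cephfs_data
    # Optionally, individual directories can also target a faster data pool
    ceph osd pool create cephfs_ssd_data 64 64
    ceph fs add_data_pool cephfs cephfs_ssd_data
    setfattr -n ceph.dir.layout.pool -v cephfs_ssd_data /mnt/cephfs/fastdir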
[22:16] * gauravbafna (~gauravbaf@122.172.225.199) Quit (Ping timeout: 480 seconds)
[22:17] * mykola (~Mikolaj@91.245.76.80) Quit (Quit: away)
[22:20] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[22:21] * abeck (~textual@213.152.161.30) has joined #ceph
[22:21] * Kalado (~Defaultti@7V7AAFGTX.tor-irc.dnsbl.oftc.net) Quit ()
[22:21] <matejz11123> MentalRay: what cpu do you have in those boxes?
[22:22] <MentalRay> e5-2670
[22:24] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[22:25] * rendar (~I@host183-178-dynamic.18-79-r.retail.telecomitalia.it) has joined #ceph
[22:32] <matejz11123> what number of nodes would you say is the recommended minimum
[22:32] <matejz11123> I know one can get away with 3
[22:33] <MentalRay> will you have people working around the clock, or will you be the only one managing this?
[22:33] <matejz11123> only me
[22:34] <MentalRay> if you don't spread across enough nodes
[22:34] <MentalRay> each time a node goes down
[22:34] <MentalRay> you will sleep less ;p
[22:34] <matejz11123> hum
[22:35] <matejz11123> why does Ceph need interaction in case a node goes down
[22:35] <matejz11123> won't it heal itself?
[22:35] <MentalRay> it doesn't, but I still keep an eye
[22:35] <MentalRay> on a recovery procedure
[22:36] <MentalRay> or my team does
[22:38] <TheSov> indeed
[22:38] <MentalRay> for 2 weeks
[22:38] <MentalRay> we had a piece of hardware in each node
[22:38] <TheSov> i have gone entire quarters not even looking at individual systems, the only thing we have on the monitors is busy% and free space
[22:38] <MentalRay> that made them unstable
[22:38] <TheSov> that you need to keep an eye on
[22:38] <TheSov> all the time
[22:39] <matejz11123> ok
[22:39] <matejz11123> oh yea
[22:39] <matejz11123> what happens if the cluster reaches 100%? :)
[22:40] <TheSov> it dies in a fire
[22:40] <TheSov> don't let that happen
[22:40] <MentalRay> you have settings for that
[22:40] <MentalRay> it cannot
[22:40] <MentalRay> same with ZFS
[22:40] <MentalRay> if you reach 80% it will die-ish
[22:40] <TheSov> basically the way Ceph recovers from a disk failure is to populate other disks with that data; if you run out of disk space it cannot do that anymore and you get weirdness
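The settings MentalRay mentions are the monitor nearfull/full thresholds; a minimal sketch of where they live, with the values shown being the commonly cited defaults rather than a recommendation:

    # ceph.conf, [global]:
    #   mon osd nearfull ratio = 0.85   # cluster goes HEALTH_WARN
    #   mon osd full ratio = 0.95       # writes are blocked
    # Keep an eye on usage with:
    ceph df
    ceph osd df

The point TheSov makes still stands: recovery needs free space to re-replicate into, so plan to act long before the nearfull warning.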
[22:40] <MentalRay> TheSov, do you have a dashboard set up for your Ceph cluster?
[22:41] <TheSov> devops built a little report screen that shows iops and freespace
[22:41] <TheSov> pulls it right from the monitors
[22:42] <MentalRay> how did you set up the cluster to respond to recovery and backfill
[22:42] <MentalRay> do you let it backfill and recover as soon as a node goes down?
[22:45] <matejz11123> do you virtualize monitor nodes or use physical?
[22:45] <MentalRay> physical
[22:46] <MentalRay> personally, but you could probably mix this with Docker
[22:48] <MentalRay> matejz11123
[22:48] <MentalRay> https://www.youtube.com/watch?v=q7WOlte7hco
[22:51] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[22:52] <matejz11123> will take a look
[22:52] <matejz11123> ammm
[22:52] <matejz11123> what happens if I only have 1 monitor node and it dies
[22:53] <matejz11123> does the whole cluster go down
[22:57] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[22:57] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[22:58] <matejz11123> how beefy should the monitors be?
[22:58] * thansen (~thansen@162.219.43.108) Quit (Ping timeout: 480 seconds)
[22:59] <matejz11123> the Ceph web page says it needs 1GB per daemon... does that mean per monitor daemon? So if I run only one monitor daemon on a server, do I only need 1GB of RAM?
[23:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[23:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[23:01] <lurbs> matejz11123: You need a quorum (> half and synced) of monitors. So yes, your cluster would be down.
[23:01] <lurbs> s/half/half up/
[23:02] <matejz11123> ok
[23:02] <lurbs> Recommended number of monitors is 3, with some very large clusters more. But always an odd number.
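A quick way to check quorum health on a running cluster (the monitor names in the output will be your own):

    # Summary, including which monitors are in quorum
    ceph -s
    # Full quorum membership and the current leader
    ceph quorum_status --format json-pretty

With three monitors the cluster keeps running with one of them down; with a single monitor, as asked above, any failure stops the whole cluster.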
[23:02] <matejz11123> ok
[23:03] <MentalRay> 1GB
[23:03] <MentalRay> is per OSD
[23:03] <MentalRay> OSD = Object Storage Daemon
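Putting the rules of thumb from this conversation together for a hypothetical 12-bay node with 4TB drives (the drive size and the erasure-coding worst case are assumptions, purely for planning):

    # ~1 GB RAM per 1 TB of OSD storage, up to ~2 GB/TB during EC recovery,
    # plus headroom for the OS and any co-located monitor:
    #   12 OSDs x 4 TB = 48 TB  ->  ~48 GB baseline, ~96 GB worst case
    # so something in the 64-128 GB range per node is a sane starting point.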
[23:06] <MentalRay> I also think TheSov has some installation videos
[23:06] <MentalRay> but not sure if he did them for Jewel yet
[23:06] <MentalRay> ;p
[23:07] <matejz11123> ok
[23:08] <matejz11123> so what servers would I need for a monitor node
[23:08] <matejz11123> what is a "standard"
[23:08] <TMM> Is anyone else using an LSI SAS3008 controller with some SAS expanders?
[23:09] <TMM> I'm using the Supermicro firmware and I have to disable NCQ to make it work at all
[23:09] <TMM> I'm wondering if it's a generic Avago firmware issue or if it's SM-specific
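For what it's worth, one generic way to disable NCQ for a SATA drive behind an HBA is to drop its queue depth to 1 (the device name is a placeholder, and whether this is enough to work around the SAS3008 issue TMM describes is an open question):

    # Queue depth 1 effectively turns off command queueing for that disk
    echo 1 > /sys/block/sdb/device/queue_depth
    # For drives on the libata/AHCI path there is also a kernel parameter:
    #   libata.force=noncq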
[23:09] <lurbs> matejz11123: A lot of this is very workload and cluster dependent.
[23:11] * thansen (~thansen@17.253.sfcn.org) has joined #ceph
[23:11] <matejz11123> lurbs: damn... same problems I had when designing an Elasticsearch cluster :)
[23:11] * natarej (~natarej@2001:8003:4885:6500:d804:46ec:50fd:29e3) Quit (Read error: Connection reset by peer)
[23:12] <matejz11123> how hard are monitor nodes actually working?
[23:12] * natarej (~natarej@2001:8003:4885:6500:d804:46ec:50fd:29e3) has joined #ceph
[23:12] <matejz11123> do they only really work when the cluster is foobar?
[23:13] <lurbs> In general they're under far less load than the OSDs themselves.
[23:13] <lurbs> For example: https://paste.nothing.net.nz/54e71c#1nJGPtvAF/VvN/ufIP2Y4g== <-- Point in time from one of our clusters.
[23:14] <lurbs> But that's a reasonably small cluster - only 4 nodes and 32 OSDs currently.
[23:20] <matejz11123> ok
[23:21] * Dragonshadow (~capitalth@strasbourg-tornode.eddai.su) has joined #ceph
[23:22] <matejz11123> how does an erasure pool perform?
[23:22] <matejz11123> erasure-coded pool, that is
[23:23] <MentalRay> next on my todo
[23:23] <TMM> matejz11123, 'it depends' :P
[23:24] <matejz11123> :)
[23:24] <matejz11123> writing the specs for this is killing me:)
[23:24] <TMM> matejz11123, in my experience you only really start to pay for ec when you're in a degraded mode
[23:25] <lurbs> matejz11123: It depends. Slower than replicated, more CPU overhead, and you can't run RBD directly on an erasure-coded pool (you need a replicated cache pool on top).
[23:26] <lurbs> In general EC+cache is a bad combination for RBD anyway.
[23:27] <TMM> I find for reads the actual overhead is pretty minimal, but writes and recovery are expensive
[23:27] <matejz11123> ok
[23:27] <lurbs> http://docs.ceph.com/docs/master/rados/operations/cache-tiering/#a-word-of-caution
[23:27] <MentalRay> since Jewel, the cache tier supports both write caching and read caching, right?
[23:28] <TMM> the word of caution is correct, but in practice the most important thing is to just set your cache tier target full ratio pretty low
[23:28] * vata (~vata@207.96.182.162) Quit (Quit: Leaving.)
[23:29] <matejz11123> ok
[23:29] <matejz11123> thanks
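For context, the EC-plus-cache arrangement lurbs and TMM describe looks roughly like this; pool names, PG counts, the k/m profile and the ratios are all assumptions to size for your own cluster:

    # Erasure-coded base pool with a replicated cache pool in front of it
    ceph osd erasure-code-profile set ec42 k=4 m=2
    ceph osd pool create ecpool 256 256 erasure ec42
    ceph osd pool create cachepool 128 128
    ceph osd tier add ecpool cachepool
    ceph osd tier cache-mode cachepool writeback
    ceph osd tier set-overlay ecpool cachepool
    # TMM's advice: keep the cache tier well away from full
    ceph osd pool set cachepool target_max_bytes 500000000000
    ceph osd pool set cachepool cache_target_full_ratio 0.6
    ceph osd pool set cachepool cache_target_dirty_ratio 0.4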
[23:30] <MentalRay> you know what
[23:30] <MentalRay> You need to play with ceph :p
[23:30] <matejz11123> I think I have enough info to come up with a decent config
[23:30] <matejz11123> :)
[23:30] <matejz11123> I think I will order a 12-bay OSD node, probably from Supermicro
[23:30] <matejz11123> and some 1U boxes for monitors, play a little and see where it takes me
[23:30] <TMM> SM is pretty good; if you get an LSI 3008 controller from them, disable NCQ
[23:31] <matejz11123> this could be nice
[23:31] <matejz11123> https://www.supermicro.nl/products/system/2U/6028/SSG-6028R-E1CR12L.cfm
[23:32] <lurbs> You can probably get away with co-locating your monitors and OSDs by the way, especially if you're just testing it out.
[23:32] <lurbs> Moving monitors across machines isn't particularly fun, but it's possible.
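If you do start with co-located monitors and later want to move one, the usual pattern is to grow the quorum onto the new host before retiring the old one (the hostnames are placeholders, and ceph-deploy is only one of several ways to do it):

    # Add a monitor on the new machine first
    ceph-deploy mon add newhost
    # Confirm it has joined the quorum
    ceph quorum_status --format json-pretty
    # Then remove the old monitor by name
    ceph mon remove oldmon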
[23:32] <matejz11123> hum
[23:33] <matejz11123> I might look around and find some servers for initial test
[23:33] <matejz11123> I think I have a few servers with 24 cores and 256gb memory, I just need to find some drives to put in
[23:34] <TMM> if you're an existing customer you can usually get sm to build you something in a lab
[23:34] <matejz11123> hum
[23:34] <matejz11123> I might ping my rep if he can get something for me
[23:34] <TMM> don't let them sell you one of their pre-ordained ceph solutions though
[23:34] <TMM> they aren't very good
[23:35] <TMM> well...
[23:35] * beck_ (~textual@ip-54-229-238-178.static.contabo.net) has joined #ceph
[23:35] <TMM> if your usecase is similar to what they had in mind it's probably very good
[23:35] <TMM> the sm guys aren't idiots :)
[23:36] * beck_ (~textual@ip-54-229-238-178.static.contabo.net) Quit ()
[23:39] <lurbs> Figure out your expected workload (RBD and/or RADOS Gateway and/or CephFS, random or sequential, read or write heavy etc) and work from there.
[23:39] <lurbs> And make sure you don't have any obvious bottlenecks. Disk IOPS, network, CPU, etc.
[23:40] * beck_ (~textual@213.152.161.40) has joined #ceph
[23:40] * allaok (~allaok@ARennes-658-1-51-67.w2-13.abo.wanadoo.fr) has joined #ceph
[23:40] <lurbs> Also consider the impact of failures, and how much data would need to migrate around.
[23:40] <matejz11123> TMM: yea, I was looking at those SM prebuilt Ceph boxes :)
[23:40] * rendar (~I@host183-178-dynamic.18-79-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[23:40] <matejz11123> lurbs: ok, will do some calculations tomorrow
[23:41] <TMM> matejz11123, I found them to be unsuitable for rbd workloads, and overpriced for that purpose
[23:41] * abeck (~textual@213.152.161.30) Quit (Ping timeout: 480 seconds)
[23:41] <matejz11123> not powerful enough?
[23:42] <TMM> well, rbd is all tiny writes
[23:42] <TMM> you have a different equation there
[23:43] <matejz11123> what are SM boxes good for?
[23:44] <TMM> as far as I can tell if you use the preordained boxes for the more s3-like object stores they will perform very well
[23:44] <TMM> smaller objects with larger writes
[23:45] * danieagle (~Daniel@179.110.89.229) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[23:47] <matejz11123> ok
[23:47] <matejz11123> if I wanted to go with more of an RBD load
[23:47] <matejz11123> what would need to be changed on those boxes?
[23:47] <TMM> more ssds, mostly
[23:47] <matejz11123> more memory, more/less CPU, SSDs for journaling
[23:47] <matejz11123> oooo, ok
[23:48] <TMM> you don't need as much memory as you'll have fewer objects
[23:48] * beck_ is now known as abeck
[23:48] <TMM> but more memory will of course still help for reads
[23:48] <matejz11123> well, I guess I won't get away without some testing :)
[23:49] <matejz11123> thank you all for help guys
[23:49] <matejz11123> you have been wonderful:)
[23:49] <TMM> btw: fewer objects means you'll have less pressure on your metadata caches
[23:50] <TMM> I don't think ceph inherently uses more memory when you have more objects
[23:50] <TMM> but I can be wrong about this
[23:51] * Dragonshadow (~capitalth@4MJAAFQ7I.tor-irc.dnsbl.oftc.net) Quit ()
[23:51] * kalleeen (~AG_Clinto@hessel0.torservers.net) has joined #ceph
[23:52] <TMM> in the end though, you do need to just test your workload and your budget against each other
[23:52] * allaok (~allaok@ARennes-658-1-51-67.w2-13.abo.wanadoo.fr) has left #ceph
[23:52] <TMM> there are osd nodes you can design that will work perfectly for any workload
[23:52] <TMM> but it's going to be the most expensive thing you can possibly by
[23:52] <TMM> buy*
[23:53] <TMM> I can imagine that buying a 1u box with 8 nvme drives, 2x 40gbit ethernet and 2 18 core xeons is probably going to outperform anything you can get
[23:53] <TMM> but engineering is more about getting the most out of your budget than it is about getting the most out of the hardware :)
[23:53] * neurodrone_ (~neurodron@162.243.191.67) has joined #ceph
[23:59] * matejz11123 (~matejz@element.planetq.org) Quit (Quit: matejz11123)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.